\section{Introduction}
One of the common issues that plagues real life data is missing values. The problem of missing data is present in many applications including signal processing, finance, meteorology, medicine, and others. Errors in collection and recording can lead to incomplete data which can lead to false results. Analysis and conclusions derived based on incomplete data can be misleading. Therefore, the development of effective solutions for dealing with incomplete data is an important area of research. In this paper, we focus on handling incomplete data in the context of time series. The process of replacing the missing values is commonly referred to as imputation. Dealing with time series data poses its unique challenges. Our goal is to compare various approaches to fill (impute) missing time series values and identify the most effective solution.
In certain cases, simply ignoring the missing values may be acceptable. It is the easiest and most convenient approach and is frequently used by data scientists. However, a more nuanced approach is required when dealing with time series data.
Temporal relationships are distorted when data is dropped from a time series. For instance, seasonal patterns can be obscured when missing time series values are ignored. Stochastic models such as autoregressive and moving average models assume a continuous sequence of ordered values and can produce misleading results when values are dropped. As a result, missing time series data requires replacement.
A number of approaches exist for dealing with incomplete data \cite{Pratama}.
In the present study, we consider three of the most commonly employed methods for estimating missing values in time series: forward fill, backward fill, and mean fill. In the \textit{forward} fill method, a missing value is replaced with the preceding observation; that is, if $x_t$ is missing, it is replaced with $x_{t-1}$. In the \textit{backward} fill method, a missing value is replaced with the following observation; that is, if $x_t$ is missing, it is replaced with $x_{t+1}$. In the \textit{mean} fill method, missing values are replaced with the average value of the sample series. The forward and backward fill methods have an advantage in scenarios where there is a strong positive correlation between the time series values, whereas mean fill yields better performance on more volatile data. The three imputation methods are simple and easy to implement, making them very popular among practitioners. Unlike more exotic imputation techniques that require significant time and effort and do not necessarily produce optimal results, the classic filling methods considered in our study provide a fast and reliable avenue for replacing missing time series values. As a result, they remain popular and highly relevant. Our paper presents an exhaustive study of the methods' performance under a range of scenarios.
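The three filling methods can be illustrated directly with pandas; the short series and missing positions below are made-up values for this sketch.

```python
# Illustration of the three filling methods using pandas.
# The series values and missing positions are made up for this sketch.
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

forward = s.ffill()            # if x_t is missing, use x_{t-1}
backward = s.bfill()           # if x_t is missing, use x_{t+1}
mean = s.fillna(s.mean())      # use the mean of the observed values

print(forward.tolist())   # [1.0, 1.0, 3.0, 3.0, 5.0]
print(backward.tolist())  # [1.0, 3.0, 3.0, 5.0, 5.0]
print(mean.tolist())      # [1.0, 3.0, 3.0, 3.0, 5.0]
```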
We carry out a large scale numerical analysis based on 3,600 simulated time series to determine the most effective replacement technique.
In this paper, we focus on the autoregressive (AR) process of order 1. It is the most commonly used time series model. The results of the AR(1) process can be suitably extended to the general family of AR($p$) processes. Recall that the AR(1) process is given by the equation
\begin{equation}
\label{ar}
x_t = \phi x_{t-1} +w_t,
\end{equation}
where $x_t$ is the value of the time series at time $t$ and $w_t$ is white noise.
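The process above can be simulated in a few lines of numpy; the coefficient $\phi=0.4$, the series length, and the standard normal white noise are illustrative choices for this sketch.

```python
# Simulate the AR(1) process x_t = phi * x_{t-1} + w_t.
# phi = 0.4, n = 500, and N(0, 1) white noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.4, 500

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()  # w_t is white noise
```

For a stationary AR(1) process, the theoretical PACF at lag 1 equals $\phi$, which is what the experiments below compare against.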
In our experiments, we simulate the AR(1) process and artificially remove a portion of the values. Then we reconstruct the missing values using the three filling techniques mentioned above. Our goal is to identify the filling method that produces sample partial autocorrelation function (PACF) values that are closest to the theoretical PACF values. Our study is based on a total of 3,600 simulated time series using different model coefficients. The results of the experiments show that mean fill outperforms the other methods.
Time series forecasting plays an important role in a number of applications including finance, medicine, engineering and others \cite{Hamilton, Kamalov1, Kamalov2}. The issue of missing time series values can lead to inaccurate forecasts. Therefore, a good understanding of imputation methods is crucial. We believe that our study would serve as a useful reference to interested parties both inside and outside of academia.
\section{Literature}
A number of methods exist for dealing with incomplete time series data. An overview of the classic imputation methods along with their advantages and disadvantages can be found in \cite{Pratama}. Imputation methods and algorithms are extensively implemented in statistical packages such as R \cite{Moritz}. A recent trend in imputation research involves the use of neural networks to estimate the missing time series values. For instance, the authors in \cite{Luo} employ generative adversarial networks whereas in \cite{Susanti} the authors use Bayesian networks to impute missing values in multivariate time series. In \cite{Fortuin}, the authors propose a deep sequential latent variable model to address the issue of interpretability. The authors employ non-linear dimensionality reduction using a VAE approach together with structured variational approximation.
Domain-specific imputation methods have also been proposed. In \cite{Li}, the authors employ a multiview learning method to replace missing values in traffic-related time series data. The model combines LSTM and SVR algorithms together with collaborative filtering techniques. The proposed method is able to account for the local and global variation in temporal and spatial views to capture more information from the existing data. The issue of missing data in transportation time series was also addressed in \cite{Sun}. The authors utilized an improved kNN-based imputation method to improve the accuracy by 34\%. The authors in \cite{Bokde} modify the pattern sequence algorithm to simultaneously forecast and backcast missing values for imputation. Although test results were positive, more extensive experiments are required to confirm the efficacy of the algorithm. Imputation methods have also been proposed in other domains, such as estimating forest biomass \cite{Nguyen} and water level forecasting \cite{Yang}.
A bagging algorithm to improve the existing imputation methods was proposed in \cite{Andiojaya}. The test results show that bagging has a positive effect on the performance of the imputation algorithms.
\section{Numerical experiments}
In this section, we carry out a range of numerical experiments to determine the best technique to estimate missing values in time series. Our focus is on the AR(1) based time series. We compare three approaches: forward fill, backward fill, and mean fill. The results of the experiments reveal that mean fill produces the most accurate results overall among the tested methods.
\subsection{Methodology}
To analyze the performances of the filling models we measure their accuracy on AR(1) generated time series. To this end, we consider a range of AR(1) model coefficients $\phi$: $0.1, 0.2, 0.3, 0.4$, $0.5, 0.6, 0.7, 0.8, 0.9$.
For each value of $\phi$, we generate 100 different time series. Next, for each time series, we randomly drop a portion of the values. The resulting time series simulates a real life scenario of incomplete data. Then we estimate and replace the missing values using the above filling techniques. The sample PACF values are calculated for each restored time series. The filling methods are evaluated based on the difference between the theoretical PACF of the original process and the sample PACF based on the restored time series. For each value of $\phi$, we aggregate the results of the experiments on 100 simulated time series. We report the overall mean of the difference between theoretical and sample PACF values.
One of the key factors in the analysis of incomplete data is the quantity of missing values. In our experiments, we study the performance of the filling algorithms under different drop rates: 10\%, 15\%, 20\%, and 25\% of the series values. As a result, we obtain a comprehensive perspective of the model performance.
A summary of the methodology is provided below.
\\
\\
\textbf{Algorithm}
\\
\line(1,0){150}
\\
For a fixed value of $\phi$, dropout rate $p$, and filling method $\mathcal{M}$.
\begin{enumerate}[label=\arabic*., itemsep=1ex]
\item Randomly generate 100 different sample time series based on the AR(1) model (Eq. \ref{ar}).
\item For each time series $x_t$, calculate the accuracy score using the following steps
\begin{enumerate}[label=\roman*., itemsep=1ex]
\item Drop a fraction $p$ of the time series values.
\item Replace the missing values using the method $\mathcal{M}$.
\item Calculate the sample PACF at lag $h=1$ based on the restored time series.
\item Calculate the accuracy score as the difference between the sample PACF and the theoretical PACF.
\end{enumerate}
\item Calculate the mean and standard deviation of accuracy scores over the 100 sample time series.
\end{enumerate}
The numerical experiments are implemented in Python using the \texttt{statsmodels} package \cite{Seabold}. We used Google Colab to carry out the experiments.
\subsection{Results}
A range of numerical experiments with various parameter settings and dropout rates were performed to measure the performance of the filling methods. A total of 3,600 simulated sample time series were analyzed to obtain the final results.
To illustrate the experiments consider a single sample time series generated using the AR(1) process with parameter $\phi=0.4$ (Figure \ref{fig:a}). Note that in this case the theoretical PACF is equal to 0.329.
To mimic real-life incomplete data, we drop 20\% of the time series values as shown in Figure \ref{fig:a}. Then the filling methods are used to replace the missing values. As shown in Figures \ref{forward}-\ref{mean}, the restored series are close to the original sample time series. For each filling method, we compute the sample PACF (Figures \ref{forward}-\ref{mean}). To measure the performance of a filling method we take the absolute difference between the theoretical PACF (0.329) and the sample PACF obtained from the restored time series. Thus, the accuracy score of the forward fill method is $|0.329 - 0.464|=0.135$ (Figure \ref{forward}). The accuracy scores of the backward and mean fill methods are $0.163$ and $0.014$ respectively. This experiment is repeated 100 times for each combination of the model parameter $\phi$ and dropout rate values.
\begin{figure}[!htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[clip, trim=5cm 0cm 3.5cm 0cm, width=1.0\textwidth]{time_series_large}
\caption{The original and corrupted time series. The corrupted series is missing 20\% of the original values. The original sample PACF at lag 1 is 0.329.}
\label{fig:a}
\end{subfigure}
\newline
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[clip, trim=5cm 0cm 3.5cm 0cm, width=\textwidth]{forward_large}
\caption{The restored time series using the forward fill method and the corresponding sample PACF plot (0.464).}
\label{forward}
\end{subfigure}
\newline
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[clip, trim=5cm 0cm 3.5cm 0cm, width=1\textwidth]{backward_large}
\caption{The restored time series using the backward fill method and the corresponding sample PACF plot (0.492).}
\label{backward}
\end{subfigure}
\newline
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[clip, trim=5cm 0cm 3.5cm 0cm, width=1\textwidth]{mean_large}
\caption{The restored time series using the mean fill method and the corresponding sample PACF plot (0.315).}
\label{mean}
\end{subfigure}
\caption{The original sample time series is generated based on the AR(1) process with coefficient $\phi=0.4$. Then 20\% of the time series values were dropped to obtain an incomplete sample. The three filling methods are applied to restore the series.}
\label{example}
\end{figure}
The results of our numerical experiments are reported according to dropout rate. Concretely, for each dropout rate, we repeat the experiment described in Figure \ref{example} 100 times for each value of $\phi$ and calculate the average difference between the theoretical and restored PACF. The results of the experiments are presented in Figures \ref{drop10}-\ref{drop25}. First, we consider the case when 10\% of the time series values are missing. As shown in Figure \ref{drop10}, the forward and backward fill methods produce small errors (differences) at large positive values of $\phi$. On the other hand, mean fill produces relatively larger errors. Note that at large positive values of $\phi$ the time series has a strong trend. Therefore, the forward and backward methods, which follow the trend, produce better results.
However, the performance of mean fill improves as the value of $\phi$ decreases. Eventually, for values of $\phi$ below $0.5$, mean fill produces significantly better results than the forward and backward fill methods. At lower values of $\phi$, the time series moves more sporadically, with frequent changes in the direction of movement. As a result, the forward and backward fill methods overestimate the time series values. The mean fill method produces more conservative estimates that are less likely to miss the true time series value by a large margin.
When the performance of the filling methods is evaluated across the entire range of values of $\phi$, mean fill yields decidedly better results.
The performance of the filling methods on data with higher dropout rates is consistent with that of the 10\% dropout rate. As shown in Figures \ref{drop15}-\ref{drop25}, the forward and backward fill methods perform well at large positive values of $\phi$. On the other hand, mean fill produces better results as the value of $\phi$ decreases. Concretely, mean fill outperforms the other methods for all values $\phi \leq 0.5$. It is interesting to observe that while the forward and backward fill methods achieve a better performance with positive values of $\phi$, mean fill achieves better results with negative values of $\phi$. We also note that the performance of all three methods deteriorates as the dropout rate increases. For instance, the PACF error of mean fill increases from under 0.10 to over 0.20 at $\phi=0.9$ as the dropout rate increases from 10\% to 25\%. Similarly, the PACF error of forward fill increases from under 0.35 to almost 0.70 at $\phi=-0.9$ as the dropout rate increases from 10\% to 25\% (Figures \ref{drop10}-\ref{drop25}).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{drop_10}
\caption{The average difference between the true and estimated PACF for time series with 10\% missing values.}
\label{drop10}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{drop_15}
\caption{The average difference between the true and estimated PACF for time series with 15\% missing values.}
\label{drop15}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{drop_20}
\caption{The average difference between the true and estimated PACF for time series with 20\% missing values.}
\label{drop20}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{drop_25}
\caption{The average difference between the true and estimated PACF for time series with 25\% missing values.}
\label{drop25}
\end{figure}
\section{Conclusion}
In this paper, we analyzed the performance of three methods for filling missing values in time series data. Concretely, we studied the performance of the forward, backward, and mean fill methods in restoring missing values in AR(1)-generated time series samples. We carried out a total of 3,600 simulations with a range of dropout rates and model parameter values. The performance of the filling methods was measured by the difference between the true and estimated PACF values. The results of the numerical experiments show that mean fill significantly outperforms the other methods at values $\phi\leq 0.5$. The results are consistent across different dropout rates. Forward and backward fill achieve better results at large positive values of $\phi$. The performance of forward fill is due to the positive trend in the time series for positive values of $\phi$. As a result, forward fill achieves accurate results by following the trend. Conversely, mean fill achieves better results for values $\phi \leq 0.5$, which is explained by higher time series stochasticity. The mean fill method produces more conservative estimates that are better suited for frequently alternating series.
We conclude that the forward and backward fill methods are better suited for time series with strong positive correlations between its values. Mean fill is better suited for time series with low positive correlation or negative correlations between its values. The results hold across different dropout rates. Since the AR(1) process is one of the commonly encountered models, the outcomes of this study can be reasonably extended to other stochastic processes. Given the importance of time series forecasting \cite{Kamalov3, Kamalov4}, our study would be a valuable reference to both academics and industry practitioners.
\chapter{Introduction}
A {\sl cellular game} is a dynamical system; that is, the
variables it is composed of are regarded as changing
over time. These variables or cells,
arranged in a discrete structure such as a ring, are thought of as
repeatedly playing a game with their neighbors. Most of this
paper is concerned with one-dimensional cellular games, defined
more formally as follows:
\newtheorem {defn} {Definition} [chapter]
\begin{defn}
\label{cgame}
A {\bf one-dimensional cellular game} consists of:
\begin{enumerate}
\item
A one-dimensional discrete structure, uniform from the viewpoint of
each site; that is, a ring or doubly infinite path.
\item
A variable, or {\bf cell}, at each site. The components of
this variable may change at each discrete unit of time, or
{\bf round}. They consist of, at least:
\begin{enumerate}
\item A {\bf move} component, which can take on a finite number $k$
of values.
\item A {\bf strategy} component, which determines what
move a cell makes in a given round. The strategy of a cell
is based on past moves of it and its $r$ nearest neighbors
on each side. The number of past rounds considered is called the
{\bf depth} $d$
of the strategy. This $r$, as used above, is the {\bf radius}
of the game.
\end{enumerate}
\item
A fitness criterion, which does not change and is the same
for each cell. This fitness criterion is usually local; that
is, the fitness of a cell in each round is based on its move,
and those of nearest neighbors within the radius of the game.
\item
A mechanism for strategy selection, under which more fit
strategies survive and spread. Strategy selection is
usually nonlocal; that is, a more fit strategy may
spread arbitrarily far in a fixed number of time units.
An interval between strategy changes, which may be one or more rounds,
is called a {\bf generation}.
\end{enumerate}
\end{defn}
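A toy sketch of one round and one generation under the definition above, in Python; the move encoding, the fitness criterion (rewarding a match with the right neighbor), and the "copy the globally fittest cell" selection mechanism are illustrative assumptions, not a model specified in the text.

```python
# Toy one-dimensional cellular game on a ring: one round, one generation.
# The fitness rule and selection scheme below are illustrative assumptions.
import random

random.seed(0)
N, k, r = 10, 2, 1            # ring size, number of moves, radius

# A depth-1, radius-1 strategy maps each (left, self, right) move triple
# to a next move, stored as a table (cf. the remark on table storage).
def random_strategy():
    return {(a, b, c): random.randrange(k)
            for a in range(k) for b in range(k) for c in range(k)}

strategies = [random_strategy() for _ in range(N)]
moves = [random.randrange(k) for _ in range(N)]

# One round: every cell plays the move dictated by its strategy.
moves = [strategies[i][(moves[i - 1], moves[i], moves[(i + 1) % N])]
         for i in range(N)]

# Local fitness, then nonlocal selection: every cell adopts the strategy
# of the globally fittest cell, however far away it is on the ring.
fitness = [1 if moves[i] == moves[(i + 1) % N] else 0 for i in range(N)]
best = max(range(N), key=lambda i: fitness[i])
strategies = [dict(strategies[best]) for _ in range(N)]
```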
Thus, a cellular game can be considered a process in which cells
make moves each round, based on their strategies, and strategies
are updated in each generation, based on their fitness in
preceding rounds.
Note that cellular game strategies and fitness criteria
are usually stored in the form of a table. Also note
that $n$-dimensional cellular automata, with one cell for
each $n$-tuple of integers or integers mod $k$, can be
similarly defined.
One-dimensional cellular games are studied in
\cite {rogers}, \cite{cowan}, \cite{cowan2}, and \cite{miller}.
Similar systems are discussed in \cite{mat1}, \cite{mat2} and
\cite{mat3}; and games on a two-dimensional lattice in
\cite{nowak}.
Cellular games satisfy a criterion for ``artificial
life'' as discussed by Christopher Langton \cite{langton}. That
is, ``There are {\sl no} rules in the system that dictate global
behavior. Any behavior at levels higher than the (individual
cells) is, therefore, emergent.''
Cellular games are a generalization and extension of another,
better-known, discrete dynamical system; that is, of {\sl
cellular automata}. They were created largely because of
questions arising from the observation of cellular automata.
One-dimensional cellular automata are defined as follows:
\label{CA}
\begin{defn}
A {\bf one-dimensional cellular automaton} con\-sists of:
\begin{itemize}
\item
A one-dimensional discrete structure, uniform from the viewpoint of
each site; that is, a ring or doubly infinite path.
\item
A variable, or {\bf cell}, at each site, that can
take on finitely many values or {\bf states}. The initial
states of a cell may be specified as desired.
\item A function which decides how each cell changes state
from one {\bf generation}, or discrete unit of time, to the next.
This function, or {\bf cellular
automaton rule}, is always the same for each cell, and
depends entirely on the state of a cell and that of its $r$ neighbors
on each side in the past $m$ generations. This $r$ is referred to
as the {\bf radius} of the cellular automaton, and $m$ as its
{\bf order}. Cellular automaton rules are usually stored
and described in the form of a table.
\end{itemize}
\end{defn}
It can be shown that an $m$th-order cellular automaton
is equivalent to a first-order cellular automaton with more
states. This proof \cite{waterman}, however, is dependent on
the locality of cellular automata -- that is, on the
fact that cells are directly affected only by their
neighbors. For similar mathematical objects, such as cellular
games, that are {\it not} local, this proof cannot be used.
Thus, if a cellular automaton, of radius $r$, operates on cells
that can take $k$ possible states, there are $k^{2r+1}$ possible
circumstances that need to be considered. The rule table,
therefore, has $k^{2r+1}$ entries; and there are $k^{k^{2r+1}}$
possible $r$-radius, $k$-state cellular automaton rules.
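The counting argument above can be checked directly; for $k=2$ and $r=1$ it recovers the $256$ elementary rules of the classification mentioned below.

```python
# Rule-table size and rule count for a radius-r, k-state cellular automaton.
k, r = 2, 1
entries = k ** (2 * r + 1)   # k^(2r+1) neighborhood configurations
rules = k ** entries         # k^(k^(2r+1)) possible rules
print(entries, rules)        # 8 256
```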
An example of a cellular automaton rule is the two-state, radius
one rule whose evolution is illustrated below. In this rule, a
cell can be in either state $0$ or state $1$. Any cell that, in
generation $g$ is in state $1$, and has both of its neighbors in
state $1$, stays in state $1$ in generation $g+1$. Otherwise, a
cell is in state $0$ in generation $g+1$. This rule is Rule 128
according to Wolfram's \cite {wolfram} classification system of
the $256$ $2$-state, radius one rules.
\begin{tabbing}
Generation 1:xx\=
1 xx\=0 xx\= 1 xx\= 1 xx\= 1 xx\= 1 xx\= 0 xx\= 1 xx\= 0xx\= 1 xx\kill
Generation 1:\> 1\> 0\> 1\> 1\> 1\> 1\> 0\> 1\> 0\> 1\\
Generation 2:\> 0\> 0\> 0\> 1\> 1\> 0\> 0\> 0\> 0\> 0\\
Generation 3:\> 0\> 0\> 0\> 0\> 0\> 0\> 0\> 0\> 0\> 0\\
\end{tabbing}
\newtheorem{tabl} [defn] {Table}
\begin{tabl}
The action of rule 128 on a circular ring of ten cells, for
three generations.
\end{tabl}
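Rule 128, as described above, reduces to a one-line update; the sketch below reproduces the three generations in the table for the same ring of ten cells.

```python
# Rule 128: a cell stays in state 1 only if it and both of its neighbors
# are in state 1; otherwise it moves to state 0. The ring is circular.
def rule128_step(ring):
    n = len(ring)
    return [1 if ring[i - 1] == ring[i] == ring[(i + 1) % n] == 1 else 0
            for i in range(n)]

gen1 = [1, 0, 1, 1, 1, 1, 0, 1, 0, 1]
gen2 = rule128_step(gen1)
gen3 = rule128_step(gen2)
print(gen2)  # [0, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print(gen3)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```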
\begin{defn}
A {\bf stochastic cellular automaton} is as above, except that
neighborhood states do not determine the state taken in
the next generation, but rather the probability that a particular
state will be taken.
\end{defn}
Computer experiments on one-dimensional cellular automata are
usually conducted with cells arranged in a ring. Cell states are
indicated by colors; thus, $k$-state cellular automaton rules are
often referred to as $k$-color rules. Initial conditions are
displayed in a line on top of the screen, with each generation
being displayed below the previous generation. In such
experiments, initial conditions, and rule table entries, are
often chosen with the aid of a pseudorandom number generator.
As a matter of fact, descriptions of computer experiments with
cellular automata and other discrete dynamical systems often make
reference, informally, to ``random'' initial conditions. This
concept actually applies to mathematical models containing
infinitely many variables, such as a one-dimensional cellular
automaton with one cell for each integer. In such a case,
``random,'' ``almost all,'' or ``normal'' initial conditions
refer to conditions such that all $k^n$ of the $n$-tuples of $k$ cell
states are equally likely, for all $n$. Or, in other words, if
the states of the cells are construed as the base-$k$ digits of two
real numbers, both numbers are normal to base $k$.
Such conditions cannot be exactly duplicated in the finite
case, no matter how large the number of cells. However,
conditions can be created which appear disordered and satisfy
certain statistical tests of disorder. This is done with the aid
of a pseudorandom number generator. Such initial conditions are often
loosely referred to as ``random.'' Computer simulations of discrete
dynamical systems often use such initial conditions as the most
feasible indicator of likely behavior.
In such experiments, there are, roughly, three types of
asymptotic behavior. First of all, all cells
may become and remain one color, or change color periodically,
with a small and easily observable period. Second, cells may
display ``chaotic'' behavior; that is, cell color choice may
appear to be disordered, or to result from some other simple
stochastic algorithm. Third, cell color
choice may be neither periodic nor chaotic, but appear to display
organized complexity.
That is, the cell evolution diagrams may look like biological
structures, such as plants, or social structures, such as city
maps. As a matter of fact, such diagrams are often quite
esthetically pleasing. These rule types are discussed
in \cite {wolfram2}; for more on the concept of ``complexity,''
as it applies to cellular automaton rules, see \cite {waldrop}.
On a finite ring of cells, of course, all such evolution is
eventually periodic. But, if cells can be in $2$ states, and
there are $640$ cells, there are $2^{640}$ possible ring states.
Therefore the period of ring states could, conceivably, be quite
high; and ``chaotic'' or ``complex'' rules do indeed seem to have
very high periods.
Visual representations of cellular automata can exhibit a
sophistication reminiscent of living structures. However, the
number of $k$-state, $r$-radius cellular automaton rules is very
large ($k^{k^{2r+1}}$) for all but the smallest $k$ and $r$; and
``interesting'' rules are uncommon and difficult to find. This
leads to the question, therefore, of whether there is some way of
``evolving'' cellular automaton rules in a desired direction.
There are two possible avenues of approach to this question. One
is to select rules based on their global properties. That is,
some computable measure of the desired characteristics is
devised, and rules are chosen by
their ability to meet this measure. Such selections are discussed
in \cite{packard2} and \cite{mitchell}.
The other way is to select rules based on their local properties.
That is, each cell uses a different rule; and there is some
universal and unchanging criterion for rule success.
This approach is more like the way living systems evolve, for
the evolution of a planetary ecology is not due to
constraints placed directly on the ecology. It is an emergent
property of constraints placed on the individual organisms. For
this reason, such models may
potentially reveal not only the nature of ``complex'' rules,
but also how their global properties emerge from local
interactions.
An evolutionary model of this sort is equivalent to a cellular
game; the only difference is the terminology. That is,
the strategy of a cell can be regarded as the individual
rule used by each cell; the depth of the strategy
as the order of the rule; cell moves as states; and instead
of referring to the smallest unit of time as a round,
and a possibly larger unit as a generation, the smallest
unit can, as with cellular automata, be referred to
as a generation. The fitness criteria and evolutionary
process stay the same.
A cellular game differs from a cellular automaton not only in the
precise definition used, but also in the philosophy under which
this definition was constructed. That is:
\begin{itemize}
\item
Cellular automata are often regarded as physical
models; for example, each cell may be seen as
an individual atom. Thus, the rules by which each
cell operates are the same. Cellular games, on the
other hand, are seen as evolutionary models.
Each cell uses an individual rule, or strategy, which can be
thought of as the ``genetic code'' of the cell.
\item
Cellular automata are usually thought of as deterministic,
beyond the initial generation, though stochastic cellular automata
have also been studied. Cellular games operate stochastically;
that is, the evolutionary process under which strategies
are modified is stochastic, and, often, the strategies themselves
are sto\-chas\-tic.
\item
Cellular automata are local; that is, the state of a cell is
affected only by the states of its $r$ nearest neighbors
on each side in the previous generation. In other words,
cell information cannot travel more than $r$ units per
generation. This speed is often called ``the speed of light.''
Cellular games, on the other hand, typically use nonlocal
strategy selection criteria. That is, a more fit strategy may
propagate arbitrarily far in one generation. (There is
more discussion of the nonlocality of cellular games in
Section \ref{ZDM}.)
\item
In \cite{waterman}, it is shown that $m$th-order cellular automata
are behaviorally equivalent to first-order cellular automata
with larger radius and more states. However, this
proof does not work for cellular games with nonlocal
selection criteria. Moreover, cellular games are often
constructed with strategies looking more than one generation back.
\end{itemize}
Now, it can be shown that if a cellular game has a local
fitness criterion and local rule selection process,
it is actually equivalent to a cellular
automaton with a large number of states. This automaton, of
course, will be stochastic if the game is stochastic.
\newtheorem{thm} [defn] {Theorem}
\begin{thm}
Let $G$ be a cellular game with a local fitness criterion
and local rule selection process, which operates every $R$
rounds. Let all fitness measurements start over
again after this process. Then $G$ is equivalent to a cellular
automaton $G'$ with a much larger number of states.
\end{thm}
{\sl Proof.} Let $G'$ be constructed as follows:
let the state of a cell $c$ in $G'$ be a vector with the
following components:
\begin{enumerate}
\item
The state of $c$ in $G$.
\item
The individual rule used by $c$, in $G$.
\item
An $R$-valued counting variable, which starts out as $1$
in the first generation, and thereafter corresponds to
the current generation mod $R$.
\item
A fitness variable, which corresponds to the accumulated
fitness of a cell over $R$ rounds.
\end{enumerate}
Since these components enable $G'$ to simulate the action
of $G$, it suffices to show that $G'$ is a cellular automaton.
That is, each component must have only finitely many possible
values, and be locally determined. This is shown to be
true for each component, as follows:
\begin{enumerate}
\item
By definition of $G$, the first component has only finitely
many values. It is determined by the rule of a cell, and the
states of it and its neighbors in preceding rounds.
\item
By definition \ref{cgame}, even if stochastic rules are
used, only finitely many are considered.
Whether or not a cell keeps its rule, after $R$ rounds,
is based on its own fitness, and the process of selecting
new rules is assumed to be local.
\item
The counting component can be in any one of $R$ different
states. The rule for its change is simple: If it is
in state $s$ in round $d$, it is in state $s+1 \bmod R$
in round $d+1$. Note that to run $G'$ as a simulation of $G$,
this counting component must be initially set at the same
value for all cells.
\item
The fitness component is set to zero after every $R$
rounds; and can be incremented or decremented
in only finitely many different ways.
How it changes in each generation, for a given cell $c$, depends on the
first components of cells $c-r$ through $c+r$.
\end{enumerate}
\rule{2mm}{3mm}
Given this equivalence, why, then, is a cellular game so
different from a cellular automaton? For one thing, cellular games
often do use a nonlocal strategy selection process; it may
be considered an approximation to a selection process that
can operate over very large distances.
For another, cellular automaton rule spaces, especially those with high
radius, typically contain very large numbers of rules.
Therefore, even if only systems with a local selection process are
considered, the evolutionary paradigm of cellular games may still
be valuable. It may be a practical method of selecting members of
these spaces with interesting properties.
In this paper, two different models of cellular games are
defined. The original Arthur-Packard-Rogers model is discussed
first in Section \ref{APR}. This model is quite extensive and uses
many different parameters. The second, simplified, model is more
amenable to mathematical analysis. This model is discussed in
Section \ref{ZDM}.
Computer simulations of both models are presented. These
simulations are similar to those of cellular automata, both in
the way they are conducted and in the way they are displayed.
That is, cell moves are indicated by colors. Strategies are
usually not pictured, due to the large size of strategy spaces.
Thus, the move of a cell may also be referred to as its color. Initial
moves of a finite ring of cells are displayed in a line on top of
the screen, and each generation is displayed below the previous
generation. Initial moves and strategies, as well as other
stochastic choices during the course of the game, are implemented
with the aid of a pseudorandom number generator.
Computer simulations of the first model display sophisticated
behavior reminiscent of living systems, or ``complicated''
cellular automata. These behaviors, which include such phenomena
as zone growth and ``punctuated equilibria,'' are discussed and
extensively illustrated in Section \ref{comp1}.
The second model admits only deterministic strategies of depth
zero; that is, strategies of the form, ``Do move $m$, without
regard to previous rounds.'' Thus, in this model, moves and
strategies can be considered equivalent. Though this model is
simpler, there are still counterintuitive results associated with
it. Even if only two strategies are allowed under this model, it
is extremely difficult to predict which, if either, will be
stable under invasion by the other. There are no simple algorithms for
determining this.
For example, consider ring viability, discussed in
Section \ref{rvsec}. For finite rings this
concept, Definition \ref{ringviability}, refers
to the average success of all cells in the ring. There,
it is shown that under any local fitness criterion $G$,
rings in which the cells have made periodic move sequences have
the highest possible viability. It is also shown that a similar result
is false in the two-dimensional case.
Now, if cellular games did indeed always evolve towards highest
ring viability, this would make their course relatively easy to
predict. However, in Section \ref{stabsec}, a two-strategy
cellular game is presented, in
which the best strategy for the ring as a whole -- that is, the strategy
that, if every cell follows it, maximizes ring viability -- is not
stable under invasion. This instability is illustrated by
computer simulations, and is also proved. This is done by
showing that if a small number of cells using the invading
strategy are surrounded by large numbers that are not, the
invading strategy tends to spread in the next generation. The
reason for this is that the first strategy, though it does well
against itself, does poorly against the second one.
On the other hand, a winning strategy may not necessarily be
stable either. That is, strategy A may defeat strategy B, but
still be unable to resist invasion by it. The reason, in this
case, is that strategy B does so much better against itself. This
result can also be demonstrated by computer simulations and
proved, using the same method. These results are also in
Section \ref{stabsec}.
Finally, consider a situation in which, if its neighbors use strategy
A, a cell has greatest success if it uses strategy A too.
It seems logical that, in this case, strategy A would
indeed be stable. As a matter of fact, such a situation
is called, in game theory, a {\sl symmetric Nash equilibrium}.
However, it can be demonstrated by computer
simulations, and also proved, that some symmetric Nash equilibrium
strategies are {\sl not} stable under invasion. The reason, in
such cases, is that strategy B has somewhat less probability of
surviving in a strategy A environment, but is very good at
causing strategy A not to survive. Therefore strategy B is
somewhat less likely to persist, but is a lot more likely to spread.
This result is also considered in Section \ref{stabsec}.
Thus, the three theorems in Section \ref{stabsec} show how
difficult it is to predict the course of cellular games,
even under a very simple model. The counterintuitive
nature of the results obtained suggests the potential
mathematical interest of this paradigm.
The second part of this thesis presents results applicable
to particular examples of the zero-depth model, called
{\sl simple cellular games}. These games have two distinguishing
characteristics:
\begin{itemize}
\item{There are only two possible strategies; these two
strategies are referred to as white, and black.}
\item{Each cell has, at all times, positive probability of
either living or not living.}
\end{itemize}
The theorems discussed in the second part apply to simple
cellular games which are left/right symmetric. The Double Glider
Theorem, \ref {main}, applies to the evolution of such games
under initial conditions under which there are only
finitely many black cells. The {\sl zone of uncertainty}
is defined as the zone between the leftmost and rightmost
black cell. It is shown that the probability this zone
will expand arbitrarily far in one direction only is $0$.
That is, with probability $1$, it will either expand in
both directions or disappear.
Section \ref{stanrec}, which follows, discusses simple
game evolution in a slightly different context; that is, under conditions
such that there is a leftmost white cell and a rightmost black
cell, or {\sl standard restricted initial conditions}.
Simple cellular games with both left/right
and black/white symmetry are classified according to their
asymptotic behavior under these circumstances.
That is, they are divided into {\sl mixing processes}
and {\sl clumping processes}. The behavior of clumping processes
is further explored, and a conjecture is made that applies to
both kinds of processes.
In Section \ref{examples}, the last chapter, specific examples of simple
cellular games are presented. Computer simulations suggest
that one of these examples, the Join or Die Process, is a
clumping process; and the other, the Mixing Process, is,
as named, a mixing process.
\chapter{Cellular Game Models}
\label{CGM}
\section{Game Theory and Cellular Games}
Success criteria in tabular form, or score tables, are
extensively used in game theory. They describe the course of any
game which can be exactly modelled, for which strategy success
can be numerically described, and in which all strategies are
based on finite, exact information. For example, consider the
game of Scissors, Paper, Stone; that is, Scissors beats Paper,
Paper beats Stone, and Stone beats Scissors. Suppose this game is
played for one round, and the only possible strategies are
deterministic. Then the table for this game is (if a win scores
1, a tie .5, and a loss 0):
\begin{tabbing}
Opponent xxxx\= Sxxcissors\= xxPaper\= xxStone\kill
Opponent \> Scissors \> Paper \> Stone \\
\\
Player \\
Scissors \> .5 \> 1 \> 0 \\
Paper \> 0 \> .5 \> 1 \\
Stone \> 1 \> 0 \> .5 \\
\end{tabbing}
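This one-round score table can be encoded directly. The following Python sketch is purely illustrative; the dictionary and function names are not part of the text:

```python
# One-round Scissors, Paper, Stone: a win scores 1, a tie .5, a loss 0.
BEATS = {"Scissors": "Paper", "Paper": "Stone", "Stone": "Scissors"}

def score(player, opponent):
    """Return the player's score for one round of deterministic play."""
    if player == opponent:
        return 0.5
    return 1.0 if BEATS[player] == opponent else 0.0
```

Under this scoring the game is zero-sum: $score(p, o) + score(o, p) = 1$ for every pair of moves, which is visible in the symmetry of the table.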
The following definition is used in game theory:
\begin{defn}
A {\bf mixed strategy} is a stochastic strategy; that is, one under
which, in some specified circumstances, more than one move has
positive probability.
\end{defn}
A table can also be devised for mixed strategies, and for games
of more than one round. For mixed strategies the table entry
describes the expected success.
For example, suppose the game of
Scissors, Paper, Stone is played for two rounds, and there are
three possible strategies. Strategy A is to choose each move with
probability ${1 \over 3}$, Strategy B is to choose Stone for the
first move, and the move chosen by the other player for the
second, and Strategy C is always to choose Paper. Then the table
for this game is:
\begin{tabbing}
Strategy AAAC\= Strategy CA\= Strategy CA\= Strategy AC\kill
Opponent \> Strategy A \> Strategy B \> Strategy C \\
\\
Player \\
Strategy A \> 1\> 1\> 1\\
Strategy B\> 1\> 1\> .5\\
Strategy C\> 1\> 1.5\> 1\\
\end{tabbing}
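The entries of this two-round table can be checked mechanically by enumerating move sequences with their probabilities. The Python sketch below is an illustrative assumption (the strategy encodings and function names are not from the text):

```python
# Expected two-round scores for the three strategies described above.
# A strategy maps (own first move, opponent's first move) -- both None
# in round one -- to a probability distribution over moves.
MOVES = ("Scissors", "Paper", "Stone")
BEATS = {"Scissors": "Paper", "Paper": "Stone", "Stone": "Scissors"}

def round_score(p, o):
    if p == o:
        return 0.5
    return 1.0 if BEATS[p] == o else 0.0

def strat_A(my_first, opp_first):          # each move with probability 1/3
    return {m: 1.0 / 3.0 for m in MOVES}

def strat_B(my_first, opp_first):          # Stone, then copy the opponent
    return {"Stone": 1.0} if opp_first is None else {opp_first: 1.0}

def strat_C(my_first, opp_first):          # always Paper
    return {"Paper": 1.0}

def expected_score(player, opponent):
    """Expected total score of `player` over the two rounds."""
    total = 0.0
    for p1, pp1 in player(None, None).items():
        for o1, po1 in opponent(None, None).items():
            for p2, pp2 in player(p1, o1).items():
                for o2, po2 in opponent(o1, p1).items():
                    weight = pp1 * po1 * pp2 * po2
                    total += weight * (round_score(p1, o1) +
                                       round_score(p2, o2))
    return total
```

Running `expected_score` over all pairs reproduces the table: for example, Strategy C scores 1.5 against Strategy B, while Strategy B scores only .5 against Strategy C.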
\begin{defn}
A table depicting strategy success as described above is
called the {\bf normal form} of a game.
\end{defn}
Normal
form can be used, at least theoretically, to describe extremely
sophisticated games. For example, if only a fixed finite number
of moves are allowed, and strategies consider only the history of
the current game, then there are only finitely many deterministic
strategies for the game of chess. Hence normal form could,
at least theoretically, be used to describe this game.
Of course, there are so many possible chess strategies that this
form cannot be used for practical purposes. For more on
normal form, see \cite{luce}.
Note that this form is ambiguous if mixed strategies are allowed.
For example, consider the above table. Does it indicate the
actual success levels of deterministic strategies, or the
expected success levels of stochastic ones? It is not possible to
tell without further information.
Such a normal form can also be used to describe three-player
games. For example, the following table describes a
game in which there are two moves: a player scores .85 for making
the same move as both other players, and .15 otherwise. This game
is called the Join or Die game.
\begin{tabbing}
Your Move: xx\= B xx \= xx W exextra\= Your Move: xx\= xx B\= xx W\kill
Your Move:\> B\> \> Your Move:\> W\\
\\
Player 1:\> B\> W\> Player 1:\> B\> W\\
\\
Player 2:\> \> \> Player 2:\\
B \> .85\> .15\> B\> .15\> .15\\
W \> .15\> .15\> W\> .15\> .85\\
\end{tabbing}
Now, consider cellular games. If the success criterion, or score, is
local; that is, if it
is based entirely on the state of a cell and those of its neighbors, it
can also be encoded as a table. As a matter of fact, any game
table for $2r + 1$ players can be used as the score table for a
cellular game of radius $r$. For example, the Join or Die process
is a cellular game of radius $1$, in which each cell plays the
Join or Die game with its two nearest neighbors. The following
table is used for this process:
\begin{tabbing}
Right Neighbor: xx\= B xx \= xx W exextra\= Right Neighbor: xx\= xx B\= xx
W\kill
Cell's Move:\> B \> \> Cell's Move:\> W \\
\\
Right Neighbor: \> B \> W \> Right Neighbor:\> B\> W\\
\\
Left Neighbor: \> \> \> Left Neighbor: \\
B \> .85 \> .15 \> B \> .15 \> .15\\
W \> .15 \> .15 \> W \> .15 \> .85\\
\end{tabbing}
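Viewed as a local function of the three moves involved, the table above is just the following (a minimal sketch; the function name is illustrative):

```python
def join_or_die_score(left, cell, right):
    """Join or Die, radius 1: a cell scores .85 for matching the moves
    of both of its nearest neighbors, and .15 otherwise."""
    return 0.85 if left == cell == right else 0.15
```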
However, cellular games differ from the situations most analyzed
by game theorists, or the vernacular notion of a game, in the
following ways:
\begin{itemize}
\item Each cell interacts with different neighbors, as determined
by the discrete structure on which the cellular game is run. That is,
the score of cell $0$ is based on its move, and those of cells $1$ and $-1$.
The score of cell $1$ is based on the moves of cells $0$ and $2$, not
cells $0$ and $-1$.
\item The ``game'' is considered to be played repeatedly, for
many rounds. Thus, the main focus is on optimal
move behavior in the long run, not for one round only.
\item There is an explicit mechanism for determining how successful
strategies thrive and spread. The
cellular game is not completely described without this mechanism;
no assumptions about asymptotic behavior can be made just on the
basis of the score table.
\end{itemize}
\section{The Arthur-Packard-Rogers Model}
\label{APR}
The idea of cellular games was first developed by Norman Packard
and Brian Arthur at the Santa Fe Institute \cite{packard}; and
first written up by K. C. Rogers, in a Master's thesis at the
University of Illinois under the
direction of Dr. Packard \cite{rogers}. In this model, cells
arranged in a ring play a game, such as the well-known Prisoner's
Dilemma, with each of their nearest neighbors. They play for a
fixed number of rounds. At the end of these rounds, or of a
generation, strategies may change. Successful strategies are most
likely to spread and persist. The Prisoner's
Dilemma is discussed in \cite{poundstone}, \cite{axelrod} and Appendix
\ref{appris}.
For details of this model, see Appendix \ref{Rogers}. The
terms used are described in Definition \ref{cgame}.
The Arthur-Packard-Rogers model can be
summarized as follows:
Cells, arranged in a one-dimensional structure, play
a game, such as the Prisoner's Dilemma, with their neighbors,
for a predetermined number of rounds. The criteria for
success in each round do not change, and are the same
for each cell. Since the degree of success is based only
on the moves of a cell and those of its $r$ nearest neighbors
on each side, this criterion can be encoded in the form of
a table.
The strategies that govern cell move choices
may be different for each cell, may be deterministic or stochastic,
are based on past move history, and are
stored in the form of a table. Strategies may have depth zero,
one, or more.
At the end of these rounds -- that is, at
the end of a generation -- the probability that a cell
keeps its strategy in the next generation is proportional
to the size of its reward variable, which measures its success in the
game.
\begin{defn}
{\bf Cell death}: A cell is said to die if its strategy is deemed
replaceable; that is, it is thought of as unsuccessful. The replacing
strategy is usually derived from the strategies of other cells.
\end{defn}
Finally, if a cell dies at the end of a generation,
the strategy chosen is some combination of the strategies of
its nearest living neighbors. If it contains elements
of both neighbors, crossover is said to occur.
\begin{defn}
\label{crossover}
{\bf Crossover} is the existence, in a new strategy, of
behavior similar to more than one ``parent'' strategy.
\end{defn}
\begin{defn}
\label{parents}
Those cells whose strategies contribute to
the new strategy of a cell are called its {\bf parents}.
\end{defn}
There may also be a small probability of strategy table mutation.
\begin{defn}
\label{mutation}
A {\bf mutation} is said to occur when, after a strategy table
entry has been chosen from a parent cell, it is arbitrarily changed.
\end{defn}
In computer simulations, this is often done with the aid of a
pseudorandom number generator.
This model is not quite the same as the original one used in \cite{rogers}.
In that construction, strategy
replacement was not governed by locality; that is, parent cells were the
most successful in the ring. Thus, the progenitor of the strategy
of a cell was not particularly likely to be nearby.
In this model, however, parent cells are not necessarily the most
successful cells in the ring. Instead, they are the nearest
living neighbors of a cell. Such a model is more comparable with
living systems, because it bases system evolution more completely on
local properties. It is also more easily generalizable to the infinite
case, in which there is one cell for each integer. And it is only
under such a model that one can see the evolution of zones of different
strategies.
\section{Computer Experiments}
\label{comp1}
The Arthur-Packard-Rogers model has been simulated in computer experiments,
with the aid of a pseudorandom number generator. Cell moves are displayed
onscreen, in a form similar to the display of cellular automaton
states. That is, initial moves, for each generation, are shown in
a line on top of the screen; and moves for each round are shown
below the preceding round. In experiments simulating the
Prisoner's Dilemma, or variations, lighter areas indicate
cooperative moves; dark areas, defecting moves. In particular,
in the games illustrated in the accompanying figures,
all strategies are mixed, or
stochastic. That is, there is always at least a
small probability that a move is made
other than the one called for by the strategy.
The experiment illustrated in Figures 1 through
14 simulates a variation of the
Prisoner's Dilemma, the Stag Hunt. The Stag Hunt is modeled on
the dilemma of a member of a pack of hunting animals, such as
wolves or coyotes. If the whole pack hunts together, they can
bring down a stag, which is the highest reward. If a member defects, it
will be able to get a rabbit alone. If the other animals do not
defect, they will have a smaller chance of bringing down a stag,
but it may still be possible; but it is very unlikely that one
animal can bring down a stag all by itself. Thus, the highest
expected reward is for mutual cooperation; next highest, for
defecting while the other members of the pack cooperate; next,
for mutual defection, and fourth, for cooperating while the other
members of the pack defect. See \cite{poundstone} for more
information on the Stag Hunt; and Appendix \ref{appendix1} for a
more technical discussion of the experiments.
These computer experiments amply suggest the mathematical
interest of the subject. They reveal thought-provoking
behavior, such as:
\begin{itemize}
\item {\sl Zone growth.} Strategies may not evolve in the same
manner in all areas of the ring. Zones of cooperative, defecting
or other consistent behavior may arise and persist for
generations.
\item {\sl Periodic structures.} Cells may alternate between
cooperation and defection, or waves of cooperation may spread
through some or all zones of the ring.
\item {\sl ``Complexity.''} Move patterns may display a
sophistication reminiscent of living structures, or the patterns
found in ``complex'' cellular automata.
\item {\sl Long transients.} Strategies predominant for hundreds
of generations may ultimately disappear, and be replaced by
completely different behavior.
\item {\sl ``Punctuated equilibria.''} Move behavior that appears to
be stable for many generations may, suddenly, change very quickly
-- and then become stable again, for a long time.
\end{itemize}
Note that cellular games cannot be construed to represent any
particular living systems, social or biological. For one
thing, their behavior changes very easily as parameters are
modified; it is difficult to tell which features are
essential, or appropriate to any particular model. However,
the existence of the above characteristics suggests that cellular
games are evocative of biological evolution. It seems
possible that the two will turn out to have some features
in common.
\section{The Zero-Depth Model}
\label{ZDM}
These experiments suggest the richness of the behavior that
cellular games offer. The sophistication of the patterns displayed
provides ample justification for further study of this paradigm.
But the Arthur-Packard-Rogers model does not lend itself well
to mathematical analysis. Its computer implementation is
lengthy and contains many modifiable parameters. It is
difficult to decide if any behavior exhibited is general, or just
an artifact of the specific algorithms used.
To facilitate mathematical discussion of cellular game behavior,
it is hence appropriate to simplify the model. Extensive study
has been performed on such a model, exhibiting the following
simplifications:
\begin {itemize}
\item
{\sl Elimination of crossover.} The Arthur-Pack\-ard-Ro\-gers mo\-del
al\-lows cross\-over. (Definition \ref {crossover}.)
In the simplified model, crossover is eliminated, and
each new strategy is an exact copy of one that already exists.
A rationale for this simplification, in terms of
living systems, is that one is considering the evolution of a
specific gene, which spreads on an either-or basis. However,
a particular gene may be significant only in the context of
other factors. It may thus not be appropriate to consider this gene
on its own. Note that computer experiments using genetic algorithms
reinforce the significance of crossover (see \cite{levy}).
\item
{\sl Elimination of mutation.} Another simplification is the
elimination of mutation (Definition \ref {mutation}).
That is, after the initial round, any
strategy is new for a specific cell only, and is a copy of the
strategy used by an existing cell. Particularly without crossover,
this elimination is actually
likely to change the long-term behavior of the system. For
example, suppose strategy A is successful against all other
strategies, including itself. If a ring of cells is originally
free of strategy A, but mutation is allowed, strategy A will
eventually take over the ring. If there is no mutation, the ring
will stay free of it. However, the behavior of a cellular game
that allows mutation may best be understood in terms of, and in
comparison to, the behavior of the simpler system.
\item {\sl One round per generation.} That is, cell strategy
may change after each round of play.
\item {\sl Elimination of mixed strategies.} Strategies
are deterministic, not stochastic.
\item
{\sl Elimination of depth.} The final simplification is the
elimination of depth. That is, all strategies are executed
without regard to past moves. Since there are no mixed
strategies, the strategy, then, just becomes
``do move $m$,'' and the move variable can thus be eliminated
from the description of the game.
\end{itemize}
The question of how depth and round restrictions
affect cellular game behavior is a subject for future research; however,
these restrictions are not as severe as they seem. From game theory,
we learn that all information about games with extremely sophisticated
strategies can be conveyed in table form; that is, the
``normal'' form of a game. The only restriction is that strategies
must take into account only a finite amount of information; e.g.,
the course of the game, but not anything before or beyond.
As previously discussed, such tables can be used as the score
table for a cellular game; in particular, for a zero-depth, one
round per generation cellular game.
As a matter of fact, cellular games of many rounds
per generation, and with high-depth strategies, can be rewritten
as zero-depth one round games -- if all strategies take
into account the current generation only.
Note that the Arthur-Packard-Rogers
model, discussed above, does take into account moves in the
previous generation. However, it could easily be modified
not to do so, by providing table entries to use when
there is limited information about previous rounds. For example,
there could be an entry for the move used if nothing is known
about previous moves.
\begin{thm}
Let $G$ be a cellular game of radius $r$, with $R$ rounds per generation,
and strategies of depth $d$ -- except that all strategies take into
account only moves in the current generation. Then the action
of $G$ can be exactly simulated by a cellular game $G'$ of zero
depth and one round per generation.
\end{thm}
{\sl Proof.} It suffices to show that for every such game $G$
there is a zero-depth, one round cellular game $G'$, and a mapping
$f$ from strategies in $G$ to strategies in $G'$, such that life
probabilities correspond. Actions made after cell survival is
decided can be the same in each case.
That is, suppose there are
two rings of $k$ cells each, $1 \le k \le \infty$. Let the first ring
run $G$ in generation $g$, and let each cell $c$ use strategy $S_c$.
Let the second ring run $G'$ in that generation, and let each cell $c'$
use strategy $f(S_c)$. Then the probability, at the beginning of
$g$, that $c$ survives into the next generation should be the
same as the probability that $c'$ does.
To show that such an $f$ can be constructed, it suffices to show
that the probability, under $G$, at the beginning of a generation, that a
cell will live through to the next generation is entirely
dependent on its strategy, and those of its $(R-1) r$ nearest
neighbors on each side. For if this is true, a table can be
constructed, giving the life probability for cell $c$ if it
and its neighbors follow strategies $S_{c - (R-1) r}, \ldots,$ $S_c,
\ldots, S_{c + (R-1) r}$; and this table can be used to
create a zero-depth, one round cellular game with corresponding
life probabilities.
Now life probabilities in $G$, at the end of a generation, are
entirely dependent on the move histories of that generation. Therefore,
to show such strategy dependence, it is only necessary to show
that the probability, at the beginning of $g$, that cell $c$ will
make move $m$ in round $q$, is entirely dependent on the
strategies of $c$ and those of its $(q-1) r$ neighbors on each side.
This is trivially true in the first round of a generation.
Since a cell has no information about past moves, the probability
it makes move $m$ is entirely dependent on its own strategy.
Now, suppose this is true for the first $q-1$ rounds. In round
$q$, the probability a cell makes move $m$ is entirely
dependent on its strategy, and the moves made by it and its
$r$ neighbors on each side, in preceding rounds of this generation.
Therefore, by the induction hypothesis, this probability
at the beginning of a generation is entirely dependent
on the strategies of the $(q-2) r$ neighbors of {\sl these}
cells -- cells $c - (q-1) r$ through $c + (q-1) r$.
\rule{2mm}{3mm}
We are thus left with the following model, in which, associated
with each cell $c$, in each generation $g$, are:
\begin{itemize}
\item A move/strategy variable $m_{c,g}$ from some finite alphabet
$\Sigma$ of $k$ characters.
\item A binary-valued life variable $L_{c,g}$. This variable can
be set to either living, or not living.
\end{itemize}
In each generation, cell strategies change, as follows:
\begin{itemize}
\item
The probability that the life variable of a cell is set to 1, so that
it ``lives'' into the next generation, is determined
by a universal and unchanging game matrix $G$. That probability
is based on the move/strategies of a cell and those of its $r$ nearest
neighbors on each side, in that generation.
\item A live cell keeps its strategy in the next generation.
\item A cell that does not live is given a
new strategy in the next generation. This strategy is either that
of its living nearest neighbor to the left, or to the right, with
a 50\% probability of each. If there are no living neighbors
to either side, all possible strategies are equally likely.
\end{itemize}
Note that, in this model, exactly two decisions are
made in a generation: first, decisions about cell
life or death; and second, decisions, for dead
cells, of color in the next generation.
This model lends itself easily to computer simulation, with the
different strategies represented by different colors. Thus, in
descriptions of this model, ``move,'' ``strategy,'' and ``color''
are equivalent. Such a simulation is presented
at the end of this paper, in Figure \ref{cl}. In this simulation,
a cell has probability
$0.27$ of living if it is the same color as both of its neighbors
and $0.53$ otherwise. Due to the shapes of the space-time zones
produced, this process is called the Cloud Process. The Cloud
Process is an example of a join/mix cellular game, as discussed
in Section \ref {examples}.
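The two decisions made in each generation are straightforward to simulate. The sketch below implements one generation of the zero-depth model on a finite ring, using the Cloud Process life probabilities; all names are illustrative assumptions, and the parent search wraps around the ring:

```python
import random

def cloud_life_prob(left, cell, right):
    # Cloud Process: a cell lives with probability 0.27 if it is the
    # same color as both neighbors, and 0.53 otherwise.
    return 0.27 if left == cell == right else 0.53

def step(colors, life_prob, alphabet=("B", "W"), rng=random):
    """One generation of the zero-depth model on a finite ring."""
    n = len(colors)
    # First decision: set each cell's life variable.
    alive = [rng.random() < life_prob(colors[i - 1], colors[i],
                                      colors[(i + 1) % n])
             for i in range(n)]

    def nearest_living(i, direction):
        # Scan outward from cell i in the given direction (+1 or -1).
        for d in range(1, n):
            j = (i + direction * d) % n
            if alive[j]:
                return colors[j]
        return None  # no living cell anywhere else on the ring

    # Second decision: a dead cell copies its nearest living neighbor
    # to the left or right, with probability 1/2 of each side; if no
    # cell is living, every strategy is equally likely.
    new = []
    for i in range(n):
        if alive[i]:
            new.append(colors[i])
        else:
            parent = nearest_living(i, 1 if rng.random() < 0.5 else -1)
            new.append(parent if parent is not None
                       else rng.choice(alphabet))
    return new
```

Iterating `step` and printing each generation as a row of colors reproduces the kind of space-time display described above.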
We now discuss a theorem pertinent to this model; that is, a
simple characterization of identity games. An identity game is a
game in which, outside of certain pathological cases, no cell can
change color. To avoid complications arising from these
cases, the identity game is formally defined as follows:
\begin{defn}
The {\bf identity game} is a game in which, under at least some
circumstances, cells have positive probability of living; and in
which no cell can change strategy, unless there are no living
cells either to the left or right of it.
\end{defn}
The characterization is:
\begin{thm}
Under the zero-depth model, a cellular game is the identity game
if and only if the probability that a cell stays alive, if its
strategy is different from at least one of its neighbors, is 1.
\end {thm}
{\sl Proof.} Suppose $G$ is a zero-depth cellular game of radius $r$,
with life probabilities fitting the above description.
Suppose a cell has living neighbors on each side. Then either:
\begin {enumerate}
\item A cell is not the same color/strategy as both of its neighbors.
Then it will stay alive.
\item A cell $c$ is the same color as both of its neighbors, but
has neighbors on both sides of different colors, the nearest ones
being cells $c - r_1$ on the left and $c+r_2$ on the right.
Then cells $c-r_1+1$ and $c+r_2-1$ are alive.
Therefore, if $c$ dies, the left parent of $c$ will be cell $c-r_1+1$,
or a cell
closer to $c$; and the right parent of $c$ will be cell $c+r_2-1$, or a
cell closer to $c$. Thus if $c$ dies, both parents will be the same color
as $c$.
\end{enumerate}
On the other hand, suppose $G$ is such that there is positive
probability that a cell $c_1$ of color $a$, next to a cell $c_2$ of color
$b$, does not live. Let there be a configuration of
cells giving positive life probability to the center cell.
Thus, since life probabilities
are determined locally, it is possible that there may be living
cells on either side of $c_1$. Let $c_1$ die, and let it
have living neighbors on each side. If either of these neighbors
is not the same color as $c_1$, then $c_1$ may change color; if both are,
$c_2$ will change color.
\rule {2mm}{3mm}
Finally, if cellular games, as described above, are intended to
model living systems, two questions arise. First, why is a new
strategy a symmetric function of the strategies of both parents,
instead of, for example, being more influenced by the strategy of
the nearest parent?
One answer is that this process is intended to model sexual
reproduction, in which a gene has an equal possibility of coming
from each parent. Another is that if there is {\sl positive }
probability that each gene comes from each parent, the model may
actually not behave very differently. Future research may settle
this question.
The second question is, why nonlocality? That is, why not say
that if a cell has no living neighbors near enough, it just stays
dead in the succeeding generation? In this case, comparison with
living ecosystems does suggest that locality is more appropriate,
but with a very large radius. That is, suppose there is a large
die-off of organisms in one particular area. Then organisms from
surrounding areas will rush in very fast, to fill the vacant area
-- but they cannot rush in infinitely far in one generation. Once
again, future research may settle whether the simplified
assumption, that is, nonlocality, actually creates different
long-term behavior.
\section {Ring and Torus Viability}
\label{rvsec}
The following theorem describes move behavior which results in
optimal cell viability, for a whole ring of cells. It applies to
all cellular games with a local life probability matrix; that is,
all games in which the probability a cell ``lives'' into the next
generation is determined by its moves, and those of its neighbors
no more than a given number $r$ of units away. It thus applies to
the Arthur-Packard-Rogers model. However, it is here described in
terms of the one-round model given in the previous section.
\begin{defn}
\label{ringviability}
The {\bf ring viability} of a finite ring of cells $C$
running a one-round game $G$, in generation $g$, is the average
life probability of these cells in that generation after moves
are made, but before the life variables of the cells are actually set.
\end{defn}
Since $C$ has finitely many cells, whose moves are from a
specific finite alphabet, there is some combination of moves
which will maximize this viability. For example, in a one-round
version of the Stag Hunt game, ring viability will be maximized
if all cells cooperate; and, in some versions of the Prisoner's
Dilemma, ring viability will be maximized if cells alternate
between cooperation and defection.
The result obtained below is that such an optimal arrangement can be taken to be periodic.
The following lemma is used in proving this:
\newtheorem{lem} [defn] {Lemma}
\begin{lem}
\label{l1}
Let $G$ be a one-round cellular game of radius $r$, in which
there are $k$ possible moves from some finite alphabet $\Sigma$.
Let $t$ be any string in $\Sigma^*$. Let $L(t)$ be the
average life probability of all cells in a ring of $\vert t \vert$
cells, such that the move of the $i$th cell is the $i$th character of $t$.
Then, if $b$, $w_1$, $w_2$ are strings in $\Sigma^*$ with
$\vert b \vert \ge 2r$, we have
\begin{equation}
\label{ringvia}
L(bw_1bw_2) = { {L(bw_1) + L(bw_2)} \over 2}
\end{equation}
\end{lem}
{\sl Proof.} Consider a ring of cells making consecutively the moves in
$bw_1bw_2$.
Cells making moves from $w_1$ are more than $r$ units away from
cells making moves from $w_2$. Therefore, these cells cannot
influence each other's life probabilities. In the same way, $b$ is
large enough that the life probabilities of cells making moves in either
copy of $b$ can be influenced by cells making moves in $w_1$, or in
$w_2$, but not by both. Therefore the average life probability of all cells
is the same as if they were considered to be in two different rings.
\rule {2mm}{3mm}
The main result follows:
\begin{thm}
\label{rv}
Let $G$ be a one-round cellular game as above. Then
there is some $m > 0$ and some sequence $t$ of $m$ moves,
such that rings
of $nm$ cells, in which the moves of $t$ are repeated $n$ times,
have the maximum ring viability, under $G$,
for finite rings of any size.
\end{thm}
{\sl Proof.}
There are only a finite number of strings in $\Sigma^*$ that either
contain no more than $4r$ letters, or contain,
when circularly arranged, no duplicate nonoverlapping $2r$-tuples. Let such
strings be called ``good''; and let $t$ be any ``good'' string
that maximizes $L(t)$. We wish to show that
\begin{equation}
\label{v1}
L(t) = \max_{s \in \Sigma^*} L(s)
\end{equation}
\noindent because, then, rings repeating the moves of $t$ one or more times
would have maximal viability.
Now, this is trivially true for $s$ such that $\vert s \vert \le 2r$,
because all such $s$ are good. Suppose it is true for all $s$ such that
$\vert s \vert < n$. We wish to show that it is true for $s$, such that
$\vert s \vert = n$.
If $s$ is good, this is trivially true. Suppose $s$ is not good. Then
we have $s
= w_1 b w_2 b$, $\vert w_1 \vert$, $\vert w_2 \vert \ge 0$, $\vert b
\vert = 2r$. Lemma \ref{l1} shows that
\begin{equation}
\label{ringvia2}
L(w_1 b w_2 b) = { {L(w_1 b) + L(w_2 b)} \over 2}
\end{equation}
And, by our induction hypothesis, we know that $L(w_1 b) \le L(t)$
and $L(w_2 b) \le L(t)$; hence $L(s) \le L(t)$.
\rule {2mm}{3mm}
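Theorem \ref{rv} can be illustrated by brute force for small rings. The sketch below uses a hypothetical radius-one survival table of our own devising (not from the text), chosen so that, echoing the Prisoner's Dilemma remark above, ring viability is maximized by alternation; the periodic string $t = CD$ then achieves the maximum among all short strings, and repeating $t$ leaves the viability unchanged.

```python
from itertools import product

# A hypothetical radius-one survival table of our own devising, chosen so
# that alternating moves are favored: G[(right, self, left)].
G = {('C', 'D', 'C'): 1.0, ('D', 'C', 'D'): 0.9,
     ('C', 'C', 'C'): 0.5, ('D', 'D', 'D'): 0.2,
     ('C', 'C', 'D'): 0.3, ('D', 'C', 'C'): 0.3,
     ('C', 'D', 'D'): 0.4, ('D', 'D', 'C'): 0.4}

def L(t):
    """Ring viability: average life probability when cell i moves t[i]."""
    n = len(t)
    return sum(G[(t[(i + 1) % n], t[i], t[(i - 1) % n])] for i in range(n)) / n

# Repeating a move string leaves ring viability unchanged: L(t^k) = L(t).
assert abs(L('CD') - L('CDCDCD')) < 1e-12

# Brute force over short strings: the alternating pattern is optimal here.
best = max(L(''.join(t)) for m in range(1, 7) for t in product('CD', repeat=m))
assert abs(best - L('CD')) < 1e-12
```

For this table the alternating ring attains viability $0.95$, while an all-$C$ ring attains only $0.5$.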
A corollary to this theorem is concerned with asymptotic
viability of doubly infinite arrays of cells.
\begin{defn}
Let $l(c)$ be the life probability of a cell $c$, given its move
and those of its $r$ neighbors on each side.
\end{defn}
\begin{defn}
Let the {\bf asymptotic viability} $L(I)$, of a doubly infinite array of
cells $I$, be measured as follows:
\begin{equation}
\label{v2}
L(I) = \limsup_{n \rightarrow \infty} { {\sum_{i= -n}^n l(I_i)} \over
{2n+1} }
\end{equation}
\end{defn}
\newtheorem{cor} [defn] {Corollary}
\begin{cor}
Let $I$ be a doubly infinite array of cells.
Then if $t$ is a finite string that maximizes $L(t)$, $L(I)$
cannot be greater than $L(t)$.
\end{cor}
{\sl Proof.}
Consider what life probability cells $-n$ through $n$ would have
if they were arranged in a ring, instead of part of a doubly
infinite lattice. The only cells that might have different life
probability are cells $-n$ through $-n+r-1$ and $n$ through $n-
r+1$. And as $n$ becomes larger, the contribution of these $2r$
cells to ring viability goes to $0$. \rule {2mm}{3mm}
In the two-dimensional case, however, a result similar to Theorem
\ref{rv} is false. That is, there are two-dimensional cellular
games, for which no finite torus can achieve maximal torus
viability. This is not shown directly, but is a corollary of
results about Wang tiles.
A Wang tile is a square tile with a specific color on each side.
A set of Wang tiles is a finite number of such tiles, along with
rules for which colors can match. For example, a red edge may be
put next to a blue edge, but not a white edge. Such a set is
said to tile the plane, if the entire plane can be covered by
copies of tiles in the set, so that all edge matchings follow the
rules. Robert Berger \cite{berger} showed that there is a
set of Wang tiles that can tile the plane, but permit no
periodic tiling. Raphael Robinson \cite{robinson} subsequently
discovered another, smaller and simpler set of tiles that
does the same thing.
Note that the set of tiles described by Robinson admits an
``almost periodic'' tiling. That is, for any positive integer
$N$, the plane can be covered with these tiles periodically so
that, under the given rules, the proportion of tiles having
unmatching edges is less than ${1 \over N}$.
A two-dimensional cellular game can be made from a $k$-colored
set of Wang tiles as follows: Let a cell be considered a
tile; let there be $k^4$ possible moves, and let these moves be
considered direct products of the colors of the Wang tiles. Let
the life probability of a cell be increased by ${1 \over 4}$ for every
match of a component of its move with the corresponding
component of the move of its neighbor. For example, ${1 \over 4}$
would be added to the life probability of a cell if the left
component of its move were compatible with the right component of
the move of its left neighbor.
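A minimal sketch of this construction follows; the function and the exact-match rule are ours, not from the text. A move is a 4-tuple of edge colors, and each compatible edge contributes ${1 \over 4}$ to the life probability.

```python
# A move is a 4-tuple (up, right, down, left) of edge colors; `compatible`
# encodes the matching rules of the tile set. Each compatible edge adds 1/4
# to the life probability of the cell.
def life_probability(move, up_nbr, right_nbr, down_nbr, left_nbr, compatible):
    UP, RIGHT, DOWN, LEFT = 0, 1, 2, 3
    matches = 0
    if compatible(move[UP], up_nbr[DOWN]):
        matches += 1
    if compatible(move[RIGHT], right_nbr[LEFT]):
        matches += 1
    if compatible(move[DOWN], down_nbr[UP]):
        matches += 1
    if compatible(move[LEFT], left_nbr[RIGHT]):
        matches += 1
    return matches / 4
```

With exact color equality as the rule, a plane tiled by one tile whose opposite edges agree gives every cell life probability $1$, the viability-one situation discussed next.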
Suppose a cellular game were made, in this manner, from the set
of tiles described by Robinson. Then no torus could have
viability one, because otherwise there would be a periodic tiling
of the plane using these tiles. However, there are periodic
tilings of the plane for which only an arbitrarily small
proportion of the tiles have unmatching edges. Therefore, since a
periodic tiling of the plane can be considered a tiling of a
torus, there are torus tilings having viability $1 -\epsilon$,
for any $\epsilon > 0$.
The comparison of cellular games and Wang tilings suggests other
possibilities for future research on tilings. For example,
instead of a Wang tiling in which two colors either match or not,
one could consider a tiling in which two colors can partially
match. This would correspond to a cellular game in which more
than two different levels of success were possible.
\section {Strategy Stability}
\label{stabsec}
In the preceding section, the concept of ring viability was
discussed. That is, for each cellular game, there is some
periodic combination of moves which maximizes average cell
viability. One might assume that all cellular games would
stabilize with cells exhibiting, or mostly exhibiting, such a
combination of moves. If this assumption were true, questions
about the long-term evolution of cellular games could be
trivially resolved.
However, computer experiments suggest that this is not necessarily
the case. That is, a one-round cellular game is simulated in
which each cell plays the Prisoner's Dilemma with each of its
neighbors. Specifications are:
\begin {itemize}
\item {\sl Radius.} The game is of radius one.
\item {\sl Strategies.} There are two strategies, or colors:
``C,'' cooperate, or white, and ``D,'' defect, or black.
\item {\sl Game Table.} The game life probability table is: $G(CDC) = 1$,
$G(CDD) = G(DDC) = {7 \over 10}$, $G(CCC) = {6 \over 10}$,
$G(DDD) = {4 \over 10}$, $G(CCD) = G(DCC) = {3 \over 10}$, $G(DCD) = 0$.
($G(m_1m_2m_3)$ is the probability of a cell surviving, if the move
of its right neighbor is $m_1$, its own move is $m_2$, and
the move of its left neighbor is $m_3$.)
\end{itemize}
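This specification can be simulated directly. The sketch below (function names are ours) implements one generation on a ring, assuming the reproduction rule used throughout the text: a cell that dies adopts the strategy of its nearest living neighbor on the left or on the right, with probability ${1 \over 2}$ each, and, if no cell at all is alive, each strategy is equally likely.

```python
import random

# Survival table from the text: G[(m1, m2, m3)] is the probability that a
# cell survives when its right neighbor moves m1, it moves m2, and its
# left neighbor moves m3.
G = {('C', 'D', 'C'): 1.0,
     ('C', 'D', 'D'): 0.7, ('D', 'D', 'C'): 0.7,
     ('C', 'C', 'C'): 0.6,
     ('D', 'D', 'D'): 0.4,
     ('C', 'C', 'D'): 0.3, ('D', 'C', 'C'): 0.3,
     ('D', 'C', 'D'): 0.0}

def step(ring, rng=random):
    """One generation of the radius-one, one-round game on a circular lattice."""
    n = len(ring)
    alive = [rng.random() < G[(ring[(i + 1) % n], ring[i], ring[(i - 1) % n])]
             for i in range(n)]
    if not any(alive):
        # No living cells anywhere: all strategies equally likely.
        return [rng.choice('CD') for _ in range(n)]
    new = list(ring)
    for i in range(n):
        if alive[i]:
            continue
        # Dead cell: adopt the strategy of the nearest living neighbor on a
        # side chosen with probability 1/2 each.
        side = -1 if rng.random() < 0.5 else +1
        d = next(d for d in range(1, n) if alive[(i + side * d) % n])
        new[i] = ring[(i + side * d) % n]
    return new
```

Iterating \texttt{step} from a long cooperating ring containing a short defecting domain reproduces the kind of experiment depicted in Figure \ref{fpris}.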
Under these circumstances, maximal ring viability is achieved by
a ring of all-cooperating cells. And yet, computer experiments
simulating this game do {\sl not} show the mostly cooperative
state to be stable. In the simulation depicted in Figure
\ref{fpris}, a small number of defecting cells
are put in the middle of a large ring of cooperators. The
defecting strategy quickly takes over the ring.
The reason for this is that, although defectors do badly
against each other, they do extremely well against cooperators.
Thus, if a small zone of defecting cells is placed in a large
ring of cooperating cells, the area between the leftmost and
rightmost defecting cells tends to expand.
To address such questions more formally, we use the concept of a
domain:
\begin{defn}
\label{domain}
A {\bf domain} is a contiguous row of same-colored cells.
\end{defn}
We would like to examine what happens when a small defecting domain
is placed between two very large cooperating domains. Is the
number of defecting cells in the vicinity of that domain likely
to go up, or down? If it is more likely to go up, we can
reasonably say that cooperative behavior is not stable under
invasion.
Of course, conceivably, each strategy could be
unstable under invasion by the other; that is, there could be a
tendency for large domains of each color to break up into smaller
ones.
Let there be a doubly infinite lattice of cells, running the
Prisoner's Dilemma game described above. Let $B$ be a small, but
greater than one-cell, black domain in this lattice, bordered, in
generation $1$, by two large white domains $W_l$ and $W_r$. Let
$\vert B \vert $ be the number of black cells in $B$ in generation $1$. Let
$\delta B$ equal the number of cells that were white in
generation $1$, and, in generation $2$, have black strategies
descended from the strategies of cells in $B$ -- minus the
number of cells that were in $B$ in generation $1$, and are white
in generation $2$. Thus, $\delta B$ is, roughly, the change in the
number of black
cells in the vicinity of $B$ in the next generation. Finally, let
$c_1$ be the rightmost member of $W_l$, $c_2$ the leftmost member
of $B$, $c_3$ the rightmost member of $B$, and $c_4$ the leftmost
member of $W_r$, in generation $1$.
Now, two terms used in the theorems presented in this
section are defined.
\begin{defn}
Let a {\bf black incursion} be a situation in which a black
cell $c$, in $B$, becomes in the next generation the parent of
newly black cells in $W_l$ or $W_r$. If it becomes the parent of
cells in both, let it be regarded as two incursions.
\end{defn}
\begin{defn}
Let the cell $c$, the parent of the newly black cells in
the incursion, be called the {\bf parent} of the incursion.
\end{defn}
\begin{defn}
Let a {\bf white incursion}, and its {\bf parent}, be defined in a similar
manner; that is, a situation in which a white cell becomes the
parent of cells formerly in $B$.
\end{defn}
\begin{defn}
Let a {\bf black incursion possibility} be a situation in which
an incursion into $W_l$ is possible, because $c_1$ has died,
or a situation in which an incursion into $W_r$ is possible,
because $c_4$ has died. Similarly, let a {\bf white incursion
possibility} be a situation in which an incursion into $B$
with parent in $W_l$ is possible, because $c_2$ has died, or
an incursion into $B$
with parent in $W_r$ is possible, because $c_3$ has died.
\end{defn}
We now show that as the size of the bordering white domain
becomes arbitrarily large, the expected size of a
black incursion into that domain (given an incursion possibility,
as defined above) approaches $5 \over 6$.
\begin{lem} Let $E_n$ be the expected size of a black incursion
into a white domain $W$, given that there is a black incursion
possibility with parent in $B$, and that $\vert W \vert = n$. Then,
under $G$
\begin{equation}
\label{pris1}
\lim_{n \rightarrow \infty} E_n = {5 \over 6}
\end{equation}
\end{lem}
{\sl Proof.} Suppose the nearest cell $w$, in $W$, to $B$ to stay
alive is such that there are $k$ dead cells in $W$ between $w$
and $B$. Then cells in $W$ between $w$ and $B$ have parents of
both colors, and their probability of becoming black is thus ${1
\over 2}$. Now, the probability of there being $k$ such cells to
die, under $G$, given the incursion possibility, is $G(CCC) (1 -
G(CCC))^{k-1} = {3 \over 5} ({2 \over 5})^{k-1}$. That is, each
white cell with two white neighbors has probability $G(CCC) = {3
\over 5}$ of living. Thus
\begin{equation}
\label{pris2}
\lim_{n \rightarrow \infty} E_n =
\lim_{n \rightarrow \infty}
\sum_{k=1}^{n} ({k \over 2}) ({3 \over 5}) ({2 \over 5})^{k-1} =
\sum_{k=1}^{\infty} ({k \over 2}) ({3 \over 5}) ({2 \over 5})^{k-1} =
{5 \over 6}
\end{equation}
\rule {2mm}{3mm}
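The limit can be confirmed numerically in exact rational arithmetic; a quick sketch (the tail beyond $200$ terms is far below the tolerance used):

```python
from fractions import Fraction

# Partial sums of sum_{k>=1} (k/2)(3/5)(2/5)^(k-1); the omitted tail is
# on the order of (2/5)^200, so the limit 5/6 is confirmed to 20 digits.
total = Fraction(0)
for k in range(1, 201):
    total += Fraction(k, 2) * Fraction(3, 5) * Fraction(2, 5) ** (k - 1)
assert abs(total - Fraction(5, 6)) < Fraction(1, 10**20)
```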
We also bound the expected size of a white incursion.
\begin{lem} Let $E_m$ be the expected size, under $G$ of a
white incursion into $B$
from a white domain $W$, given that there is a white incursion
possibility with parent in $W$, and that $\vert B \vert = m$.
Then $E_m < {5 \over 4}$.
\end{lem}
{\sl Proof.} Suppose the nearest cell $b$, in $B$, to $W$ to stay
alive is located so that there are $k$ dead cells in $B$ between $b$ and
$W$. Then
cells in $B$ between $b$ and $W$ have parents of both colors, and
their probability of becoming white is thus ${1 \over 2}$.
Now, the probability of there being $k$ such cells to die,
under $G$, given the incursion possibility, is
$G(DDD) (1 - G(DDD))^{k-1} = {2 \over 5} ({3 \over 5})^{k-1}$.
(Since each black cell with two black
neighbors has probability ${2 \over 5}$ of living.) Thus
\begin{equation}
\label{pris22}
E_m = \sum_{k=1}^{m} ({k \over 2}) ({2 \over 5}) ({3 \over 5})^{k-1} <
\sum_{k=1}^{\infty} ({k \over 2}) ({2 \over 5}) ({3 \over 5})^{k-1} =
{5 \over 4}
\end{equation}
\rule {2mm}{3mm}
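A similar sketch confirms that every partial sum, and hence every $E_m$, stays strictly below the limit ${5 \over 4}$:

```python
from fractions import Fraction

# E_m is the m-th partial sum of sum_{k>=1} (k/2)(2/5)(3/5)^(k-1);
# each partial sum is strictly below the limit 5/4.
total = Fraction(0)
for k in range(1, 201):
    total += Fraction(k, 2) * Fraction(2, 5) * Fraction(3, 5) ** (k - 1)
    assert total < Fraction(5, 4)
assert abs(total - Fraction(5, 4)) < Fraction(1, 10**20)
```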
The main theorem follows:
\begin{thm} Let $B$ be a small black domain on a doubly infinite
lattice, on which the Prisoner's Dilemma game $G$ is run. Let all
variables be as described above. Then, if $\vert B \vert \ge 2$, and $W_l$
and $W_r$ are large enough, the expected value of $\delta B$,
which is roughly the expected change in the number of black cells in the
vicinity of $W$, is positive.
\end{thm}
{\sl Proof.} We examine eight cases, depending on the life
of $c_1$, $c_2$, $c_3$, and $c_4$. Note that $c_1$ and $c_4$ have
probability $G(CCD) = G(DCC) = {3 \over 10}$ of living;
and $c_2$ and $c_3$ have probability $G(CDD) = G(DDC) = {7 \over 10}$.
\begin{enumerate}
\item All four cells live. Then $\delta B = 0$.
\item $c_1$, $c_2$, $c_3$ live, $c_4$ does not (or the reflection
of this case). The probability of this is $2 ({3 \over 10}) ({7
\over 10})^3$. There is one black incursion possibility (with
$c_3$ as the parent), of expected size that approaches ${5 \over
6}$, as the neighboring domain becomes arbitrarily large.
\item $c_1$, $c_2$ live, $c_3$ dies, $c_4$ lives (or the
reflection). The probability of this is $2 ({3 \over 10}) ({7
\over 10}) ({3 \over 10})^2$. There is one white incursion
possibility (with $c_4$ as the parent), of expected size $< {5
\over 4}$.
\item $c_1$, $c_2$ live, $c_3$, $c_4$ die (or the reflection).
The probability of this is $2 ({3 \over 10}) ({7 \over 10}) ({3
\over 10}) ({7 \over 10})$. There is one black incursion
possibility (with $c_2$ or a cell between $c_2$ and $c_3$ as the
parent), of expected asymptotic size ${5 \over 6}$; and there may
be one white incursion possibility (with a cell to the right of
$c_4$ as the parent), of expected size $< {5 \over 4}$.
\item $c_1$ dies, $c_2$ lives, $c_3$ lives, $c_4$ dies. This
case has probability $({7 \over 10})^4$. There are two black
incursion possibilities (with $c_2$ and $c_3$ as the parents),
of expected asymptotic size ${5 \over 6}$ each.
\item $c_1$ dies, $c_2$ lives, $c_3$ dies, $c_4$ lives (or the
reflection). The probability of this is $2 ({7 \over 10})^2 ({3
\over 10})^2$. There is one black incursion possibility (with
parent $c_2$), of expected asymptotic size ${5 \over 6}$; and one white
incursion possibility (with parent $c_4$), of expected size $< {5
\over 4}$.
\item $c_1$ dies, $c_2$ lives, $c_3$ and $c_4$ die (or the
reflection). The probability of this is $2 ({7 \over 10})^2 ({3
\over 10}) ({7 \over 10})$. There is one black incursion
possibility (with parent $c_2$), of asymptotic size ${5 \over
6}$; and there may be one white incursion possibility (with
parent to the right of $c_4$), of expected size $< {5 \over 4}$.
\item $c_2$ and $c_3$ both die. The probability of this is $({3
\over 10})^2$. There may not be a black incursion, if every cell
in $B$ dies. There are at most two white incursion possibilities
of expected size $< {5 \over 4}$ each.
\end{enumerate}
Thus, if $\vert B \vert \ge 2$, and $W_l$ and $W_r$ are large
enough, under all cases the expected value of $\delta B$ must
exceed $2 ({3 \over 10}) ({7 \over 10})^3 ({5 \over 6}) -
2 ({7 \over 10}) ({3 \over 10})^3 ({5 \over 4}) +
2 ({7 \over 10})^2 ({3 \over 10})^2 ({5 \over 6} - {5 \over 4}) +
({7 \over 10})^4 2 ({5 \over 6}) +
2 ({7 \over 10})^2 ({3 \over 10})^2 ({5 \over 6} - {5 \over 4}) +
2 ({7 \over 10})^3 ({3 \over 10}) ({5 \over 6} - {5 \over 4}) -
({3 \over 10})^2 2 ({5 \over 4}) = {841 \over 6000}$.
\rule {2mm}{3mm}
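The closing arithmetic can be verified in exact rational arithmetic; in this sketch the terms follow cases $2$ through $8$ in order (case $1$ contributes nothing).

```python
from fractions import Fraction as F

black = F(5, 6)   # asymptotic expected size of a black incursion
white = F(5, 4)   # upper bound on the expected size of a white incursion
bound = (2*F(3, 10)*F(7, 10)**3 * black
         - 2*F(7, 10)*F(3, 10)**3 * white
         + 2*F(7, 10)**2*F(3, 10)**2 * (black - white)
         + F(7, 10)**4 * 2*black
         + 2*F(7, 10)**2*F(3, 10)**2 * (black - white)
         + 2*F(7, 10)**3*F(3, 10) * (black - white)
         - F(3, 10)**2 * 2*white)
assert bound == F(841, 6000)
```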
However, it is not always the case that, in a two-strategy
system, the ``dominant'' strategy will prevail. One strategy may
lose against another, but do so well against itself that its use
tends to expand. This happens in zero-depth versions of the
previously discussed Stag Hunt, a game similar to the Prisoner's
Dilemma, except that successful cooperation is more profitable
than exploitation. Computer experiments (Figure \ref {fstag})
simulating this game,
with a high enough premium for mutual cooperation, show that
cooperative behavior does tend to prevail. Specifically, the
game has the same radius and number of moves as the Prisoner's
Dilemma game described above. Its table is: $G(CDC) = {10 \over
16}$, $G(CDD) = G(DDC) = {7 \over 16}$, $G(CCC) = 1$, $G(DDD) =
{4 \over 16}$, $G(CCD) = G(DCC) = {8 \over 16}$, $G(DCD) = 0$.
It is possible, using the same techniques as above, to show that
black domains are unstable in this game.
\begin{thm} Let $W$ be a small white domain on a doubly infinite
lattice, on which the Stag Hunt game as described above is run.
Let $B_l$ and $B_r$ be its neighbors, and $\vert W \vert$ its size in
generation $1$. Let $\delta W$ equal the number of cells
that were black in generation $1$, and which,
in generation $2$, have
white strategies descended from the strategies of cells in $W$ --
minus the number of cells that were in $W$ in generation $1$,
and are black in generation $2$. Then, if $\vert W \vert \ge 2$, and $B_l$
and $B_r$ are large enough, the expected value of $\delta W$,
roughly the expected change in the number of white cells in the
vicinity of $W$, is positive.
\end{thm}
{\sl Proof.} The same calculations as described above are carried
out, except that white and black are exchanged, and the
probabilities of the Stag Hunt game are used. The asymptotic
expected size of a white incursion, given the possibility of
such, turns out to be $2$. The expected size of a black
incursion, given the possibility of such, turns out to be
less than or equal to ${1
\over 2}$ (since cells that are white and bordered on both sides
by white neighbors cannot die). The asymptotic expected change in
the number of white cells in the vicinity of $W$ turns out to exceed
${223 \over 256}$. \rule {2mm}{3mm}
Nash equilibria of cellular games have also been analyzed \cite{cowan}.
\begin{defn}
\label{SNE}
In a cellular game context, a {\bf symmetric Nash equilibrium}
(SNE) arises if, when the $r$ nearest neighbors of a cell on each side use
strategy $s$, its best response is also to use $s$.
\end{defn}
For example, in the Stag Hunt game described above, both
unilateral cooperation and defection give rise to such
equilibria. That is, if the neighbors of a cell always cooperate
(defect), a cell is best off cooperating (defecting) too.
As with ring viability, it is easy to assume that Nash equilibria
determine the course of a game; that is, that a strategy giving
rise to a symmetric Nash equilibrium is stable under
invasion by other strategies. However, while
the study of Nash equilibria is a promising avenue
to understanding cellular games, such an automatic assumption is
not necessarily correct. For example, in the Stag Hunt,
unilateral cooperation gives rise to a SNE. However, in some
versions of this game, cooperating domains are unstable.
This is because though isolated defecting cells
don't survive well, they are likely to kill off their
neighbors. Thus, they tend to have more descendants than their
neighbors.
The parameters used in this version of the Stag Hunt are
not exactly the same as above. They are: $G(CDC) = {16 \over
18}$, $G(CDD) = G(DDC) = {15 \over 18}$, $G(CCC) = 1$, $G(DDD) =
{14 \over 18}$, $G(CCD) = G(DCC) = {9 \over 18}$, $G(DCD) = 0$.
Computer experiments simulating this process (Figure \ref {fstag2})
do indeed suggest
that white domains are unstable. This result can also be
proved using the same techniques as above.
\begin{thm} Let $B$ be a small black domain on a doubly infinite
lattice, on which the second Stag Hunt game as described above is run.
Let $W_l$ and $W_r$ be its neighbors, and $\vert B \vert$ its size in
generation $1$. Let $\delta B$ equal the number of cells
that were white in generation $1$, and, in generation $2$, have
black strategies descended from the strategies of cells in $B$ --
minus the number of cells that were in $B$ in generation $1$,
and are white in generation $2$. Then, if $\vert B \vert \ge 2$, and $W_l$
and $W_r$ are large enough, the expected value of $\delta B$,
roughly the expected change in the number of black cells in the
vicinity of $B$, is positive.
\end{thm}
{\sl Proof.} The same calculations as described for the
Prisoner's Di\-lem\-ma case are carried out, except that the
probabilities of the second Stag Hunt game are used. The asymptotic
expected size of a black incursion, given the possibility of
such, turns out to be ${1 \over 2}$, since cells that are white and
bordered on both sides
by white neighbors cannot die. The expected size of a white
incursion, given the possibility of such, turns out to be
less than or equal to ${9
\over 14}$ . The asymptotic expected change in
the number of black cells in the vicinity of $B$ turns out to exceed
${311 \over 1008}$. \rule {2mm}{3mm}
Thus, we see that cellular game behavior is difficult to
anticipate. These systems reflect the richness of living
ecologies, in which a species' survival is
determined by how well the organisms of that species
compete with others, how well they cooperate among themselves,
and how many descendants
they have. No one factor automatically decides the issue.
\chapter {Two Symmetric Strategies}
\label{TSS}
\section{Introduction and Definitions}
Under the zero-depth model described previously, the simplest
case to examine is that of games with only two possible strategies.
Let these strategies be called {\sl black} and {\sl white}; and
let a cell using a black (white) strategy be called a black
(white) cell. We thus have the following model.
Associated with each cell, in each generation, are:
\begin{itemize}
\item A binary-valued move/strategy variable.
\item A binary-valued life variable. This variable can
be set to either living, or not living.
\end{itemize}
In each generation, cell strategies change, as follows:
\begin{itemize}
\item
The probability that the life variable of a cell is set to 1, so that
it ``lives'' into the next generation, is determined
by a universal and unchanging game matrix $G$. That probability
is based on the move/strategies of a cell, and those of its $r$ nearest
neighbors on each side, in that generation.
\item A live cell keeps its strategy in the next generation.
\item A cell that does not live is given a
new strategy for the next generation. This strategy is either that
of its living nearest neighbor to the left, or to the right, with
a 50\% probability of each. If there are no living neighbors
to either side, all possible strategies are equally likely.
\end{itemize}
We wish to understand the long-term behavior of such processes.
For simplicity, we first consider systems with infinitely many
cells. And, to understand their behavior in general, it is
illuminating to first consider their behavior in the following
case, in which the possible
future courses of evolution are countable.
\begin{defn}
Initial conditions in which there are finitely many black
cells are called {\bf finitely describable} initial conditions.
\end{defn}
Note that if there are initially only finitely many black
cells, there will always be only finitely many black cells.
Therefore, it is more appropriate to speak about a game evolving
{\sl under} such conditions, than {\sl from} such conditions.
The following definitions are also used:
A {\bf domain} (Definition \ref{domain}) is a contiguous row of
same-colored cells.
\begin{defn}
\label{zone}
Under finitely describable initial conditions, let
the {\bf zone of uncertainty} start with the leftmost black
cell and end with the rightmost one. If there are
no black cells, there is no such zone.
\end{defn}
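On a finite window of such a lattice, this zone is straightforward to compute; a minimal sketch, with a \texttt{'B'}/\texttt{'W'} cell encoding of our own choosing:

```python
def zone_of_uncertainty(cells):
    """Left and right endpoints of the zone of uncertainty, i.e. the
    indices of the leftmost and rightmost black ('B') cells in the window;
    None if there are no black cells, so no such zone exists."""
    blacks = [i for i, c in enumerate(cells) if c == 'B']
    if not blacks:
        return None
    return blacks[0], blacks[-1]
```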
Now, suppose each cell had
probability $1$ of staying alive, no matter what. Then all dynamics
would be trivial; the system could never change.
We would like to avoid such situations; that is, we
would like to assure that change is always possible.
We would also like to assure that, under initial conditions
as described above, the two domains on either side of the zone of
uncertainty will, almost always, contain infinitely many living cells.
Both ends are achieved by specifying that each cell always has
positive probability of either living or not living.
\begin{defn}
\label{simple}
Let a cellular game as described above; that is, zero depth,
with two strategies,
and the above restrictions on life probabilities, be called a
{\bf simple cellular game}.
\end{defn}
Now, the main problem associated with any stochastic process is
to figure out how it behaves in the long run; not only to
figure out how it may
behave, but how it must behave.
In this chapter, we settle this question, at least partially,
for certain classes of games. That is, we
consider simple cellular games with left/right symmetry,
evolving under finitely describable initial
conditions. We show that for such games, the probability
that the zone of uncertainty will grow arbitrarily
far in one direction only is zero. It must, with
probability $1$, either disappear, or grow forever
in both directions.
How is this proved? First, we use Theorem \ref{mix00}, presented
below, a result which applies both to cellular games and other
stochastic processes. This theorem implies that if a simple
cellular game evolves as above, and if, under any conditions, the
probability this zone will ``glide'' arbitrarily far to the left
is positive, there are initial conditions under which this
probability can be made as high as desired; that is, greater than
$1 - \epsilon$, for any $\epsilon > 0$.
Then, we show that under such initial conditions $I_{\epsilon}$,
with very high probability of the zone of uncertainty
``gliding'' off in one direction, there would have
to be probability
greater than some constant that another glider
will spin off and shoot out in the other direction. This
constant would not depend on the initial conditions,
but only on the game. This part of the proof is
accomplished in the following manner:
First, without loss of generality, we locate $I_{\epsilon}$
so that the rightmost black cell is cell $0$.
Then, we count cases in which the zone of uncertainty
``glides'' arbitrarily far in one direction only. We need
to count cases in such a way that no case is counted
twice. To do this elegantly, we restrict our attention to
particular cases in which this zone moves to the right in
a certain way; that is, those cases in which, just before
this zone moves past cell $0$ for the last time, there
is exactly one nonnegative black cell, at position $r$ or
greater.
In a lemma, it is shown that under any $I_{\epsilon}$, with
$\epsilon$ small enough,
the probability that the ``glider''
will operate in such a way is more than some fixed
proportion $\gamma$ of the probability that a glider will
operate at all. This $\gamma$ is dependent on the game
only, and not on the initial conditions. Thus, the
sum of all such cases must be greater than $\gamma (1 - \epsilon)$.
For each such case, we show there is another case
with probability only a fixed proportion less,
in which another glider goes off in the other direction.
To do this, we use the fact that what happens at the
end of the zone of uncertainty (that is, to some specific, fixed
number of cells) cannot change the probability of
a one-generation history very much.
Thus, we can put a lower bound $\beta$ on the probability that
in generation $g$, the game behaves exactly as in the
case counted above, except that a two, three or four-cell black
domain $D$ is spun off, at a distance from all other
black cells greater than the radius of the game.
We can show that if there is any positive probability of a
glider moving in one direction, there is positive probability
at least $\alpha$ that, if the zone of uncertainty contains
only a domain like $D$:
\begin{enumerate}
\item This zone will act like a glider, moving arbitrarily
far to the right.
\item This zone will, in every generation, contain more than
one black cell.
\end{enumerate}
Note that this $\alpha$ will also apply if the positive
cells are as above, and the negative black cells act as a
glider moving to the left; that is, $\alpha$ still bounds the
probability that $D$ itself acts as a glider, moving to the right
and staying from that point on in the positive area, and that
this glider from that point on continues to contain two or
more cells. Since the negative black cells are themselves acting
as a glider, it can be shown that they will not interfere with
the behavior of cells in the positive area. It is in this part of
the proof that the left/right symmetry comes in; it is used to
show that gliders can move in both directions.
Since this right-traveling glider continues to contain
two or more cells, we are able again to avoid counting cases twice.
That is, each case is assigned to the last generation in which
there is exactly one nonnegative black cell.
Thus, the probability that the domain between the two gliders
will grow arbitrarily large, and the zone of uncertainty will
continue to expand forever in both directions, can be given a
lower bound. It can be shown, for small enough $\epsilon$, to be
greater than $\gamma \beta \alpha (1 - \epsilon)$,
with these constants depending only on $G$. If $\epsilon$
is small enough, this forces a contradiction. In reference to
these two gliders, this main theorem, Theorem \ref {main}, is
called the Double Glider Theorem.
Another kind of initial condition is also
discussed; that is, initial conditions under which
there is a leftmost white cell and a rightmost
black cell. A conjecture is presented which applies
to such conditions.
Processes that are symmetric black/white, as well
as right/left, are discussed. They are separated
into two categories, mixing processes and clumping
processes. This separation is based on their behavior under
standard restricted initial conditions.
The properties of clumping processes are further examined.
In this context, a theorem is used which can be applied
to symmetric random walks in general.
Finally, computer experiments are presented. These
models simulate the evolution of simple cellular
games, with both kinds of symmetry, on a circular lattice.
It is shown how this evolution varies as parameters vary.
The following theorem applies to all discrete-time
Markov chains. It can be used to characterize
cellular game evolution under finitely describable
initial conditions.
\begin{thm}
\label{mix00}
Let $M = \{ X(t), t = 0, 1, 2, \ldots\}$ be a discrete-time

Markov chain. Let a finite history be a list of possible values for
$X(i)$, $0 \le i \le n$, for some $0 \le n < \infty$.
Let $H$ be any collection of infinite histories, which
can be expressed as a countable Boolean combination of
finite histories. Furthermore, let no finite part of
any history in $H$ determine membership in $H$.
Let the probability
of $H$, under any initial conditions $X(0) = x$, be positive. Then,
for any $\epsilon > 0$, there are initial conditions $I_\epsilon$
such that there is probability $1 - \epsilon$ that the infinite history
of this process (that is, the values of $X(0), X(1), \ldots, X(n),
\ldots$) will be in $H$.
\end{thm}
{\sl Proof.}
Let all possible finite histories of $M$, given $X(0) = x$, be
placed in correspondence with open intervals in $(0,1)$ as follows:
\begin{enumerate}
\item If $P_{xi} > 0$, let the event that $X(1) = i$ correspond
to the open interval
$(\sum_{j < i} P_{xj}, \sum_{j < i} P_{xj} + P_{xi})$.
\item Suppose $X(n) = s$ in generation $n$, $n \ge 1$.
Let the interval $(a,b)$ correspond to the
values of $X(0) \ldots X(n)$. Then, if $P_{si} > 0$, let the event
that $X(n+1) = i$ in this generation correspond to the open
interval $(a + (b-a)\sum_{j < i} P_{sj},\ a +
(b-a)(\sum_{j < i} P_{sj} + P_{si}))$.
\end{enumerate}
Similarly, let countable Boolean combinations of finite histories
correspond to countable Boolean combinations of history intervals.
Note that under this relationship, the probability of any finite
history equals the length of the interval; and
the probability of any countable Boolean combination of finite
histories $H$ equals the Lebesgue measure of the corresponding measurable
subset of $(0,1)$. Thus,
if $H$ has positive probability, it corresponds to a measurable subset $S$ of
$(0,1)$ of positive measure.
By a theorem of real analysis \cite {rudin}, if $S \cap (0,1)$
has positive measure, there is some point $p$ contained in
$(0,1)$ such that
\begin{equation}
\label{analysis}
\lim_{\epsilon \rightarrow 0} { {\mu(S \cap (p - \epsilon, p + \epsilon))
}\over {2 \epsilon} } = 1
\end{equation}
\noindent By the construction, there is a history interval contained
in every interval on the unit line. Hence, for every $\epsilon >
0$, there is a history interval $I$, corresponding to a finite
$n$-step history $h$ in which $X(n) = s$, such that ${ {\mu(I
\cap S)} \over {\mu(I)} } \ge 1 - \epsilon$. By the construction,
then, the probability that the future history of $M$ will be in
$H$, given $h$, exceeds $1 - \epsilon$. By the Markov property of
$M$, and the fact that the finite history $h$ does not determine
membership in $H$, the probability of this, given $X(0) = s$,
must also exceed $1 - \epsilon$. \rule {2mm}{3mm}
Note that for this theorem to apply, $H$ must be such that
no finite history determines membership in $H$. For example,
$H$ cannot be all histories such that $X(2) = 1$.
On the other hand, $H$ could be all histories
such that $X(n) = 1$ for infinitely many $n$.
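The interval correspondence in the proof is, in effect, arithmetic coding of histories. As a concrete illustration (the two-state transition matrix below is made up, not from the text), the sketch computes the subinterval of $(0,1)$ assigned to a finite history; by construction its length equals the history's probability.

```python
def history_interval(P, x, history):
    """Return the open interval (a, b) assigned to the finite history
    X(1), ..., X(n) = history, given X(0) = x, by nested subdivision
    of (0,1) in proportion to the transition probabilities.  The
    length b - a equals the probability of the history."""
    a, b = 0.0, 1.0
    s = x
    for i in history:
        width = b - a
        # Steps 1 and 2 of the proof: split the current interval
        # according to the transition probabilities out of state s.
        offset = sum(P[s][j] for j in range(i))
        a, b = a + offset * width, a + (offset + P[s][i]) * width
        s = i
    return a, b

# Hypothetical 2-state chain (rows are states, columns successors).
P = [[0.5, 0.5],
     [0.2, 0.8]]

a, b = history_interval(P, 0, [1, 1, 0])
```

Countable Boolean combinations of histories then correspond to countable Boolean combinations of such intervals, which is what lets the proof import the Lebesgue density argument.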
\begin{cor}
\label{mix0}
Let $G$ be any simple cellular game. Let it evolve under finitely
describable initial conditions. Let $H$ be any countable Boolean
combination of finite game histories. Let the probability
of $H$, under any initial conditions, be positive. Then,
for any $\epsilon > 0$, there are finite initial conditions such that
there is probability $1 - \epsilon$ that the infinite history of this
game will be in $H$.
\end{cor}
{\sl Proof.} Let the state $X(g)$ of $G$ in generation $g$
be a list of black cells at the beginning of that generation.
Thus, the states of $G$ can be matched with
the positive integers.
The evolution of $G$ can be considered a Markov chain,
since the probability of entering any state is dependent on
conditions in the previous generation only.
\rule {2mm}{3mm}
\section {The Double Glider Theorem}
\label{doubleglider}
The Double Glider Theorem applies to all simple cellular
games with left/\-right symmetry. It shows that if such a
game evolves under finitely describable initial conditions,
the probability that the zone of uncertainty will expand arbitrarily
far in one direction only is zero. That is, the zone of uncertainty cannot
``glide'' forever to the left, or right. It is
shown that if such a glider could evolve, as it
progressed it could throw off a reflected glider, moving in
the opposite direction; and that if both such actions had positive
probability, there would be a contradiction.
The following new definition is used in the proof.
\begin{defn}
Let the {\bf effective zone of uncertainty} consist, in
each generation, of cells in the following categories:
\begin{enumerate}
\item Cells in the zone of uncertainty.
\item Cells beyond the zone of uncertainty that have a black
cell as one of their nearest living neighbors.
\end{enumerate}
\end{defn}
That is, cells beyond the zone of uncertainty that can
become either black or white are also in this zone.
The extent of this zone in generation $g$ is dependent not
only on cell colors at the beginning of that generation, but
on life/death decisions made during that generation.
Thus, the evolution of a simple cellular game, under finite
initial conditions, can be considered to occur in each
generation as follows:
First, life/death decisions are made about cells within
the zone of uncertainty. Then, if the leftmost living
cell in the zone of uncertainty is black, life/death decisions are
made about cells to the left of this zone. These decisions start with
the cell on its border, and proceed left until
one lives. Then, if the rightmost living cell in the
zone of uncertainty is black, decisions are made in the
same way about cells to the right of this zone. Finally,
black/white decisions are made. There are no other decisions
that can affect the course of this game.
The concept of effective zone of uncertainty can be extended
to apply to cells on each side of a domain.
\begin{defn}
Let the {\bf left effective zone of uncertainty} $D_l$ of
a white domain $D$ consist of:
\begin{enumerate}
\item Those cells in the effective zone of uncertainty to
the left of $D$.
\item Those dead cells in $D$ whose nearest living neighbor
to the left is black (and thus to the left of $D$).
\end{enumerate}
Let the {\bf right effective zone of uncertainty} $D_r$ be
defined similarly.
\end{defn}
Thus, cells that are in $D$, and not in either $D_l$ or
$D_r$, must stay white. We now show that if these two
effective zones stay separated far enough, they
cannot affect each other.
\begin{thm}
\label{newest}
Let $G$ be a simple cellular game of radius $r$, operating
under finite initial conditions.
Let $D$ be a white domain under $G$. In generation $g$,
let $D$ include at least cells $0$ through $r$.
Furthermore, let all
cells in $D_l$ be to the left of cell $0$ and all cells
in $D_r$ be to the right of cell $r$. Then the life/death
probability of any cell in $D_l$ ($D_r$) will not have
been influenced by that of any cell in $D_r$ ($D_l$).
Also, black/white decisions for all cells in $D_l$ ($D_r$)
will be exactly the same as if $D_r$ ($D_l$) did not exist;
that is, if $D_l$ ($D_r$) comprised the entire effective zone
of uncertainty.
\end{thm}
{\sl Proof.} The first statement is true because $\vert D \vert
\ge r + 1$. The second statement is true because if the
effective zone is as thus stated, each cell in $D_l$ ($D_r$)
must have at least one parent in $D_l$ ($D_r$), and
no dead cell can have parents
from both $D_l$ and $D_r$ unless both parents are white.
\rule {2mm}{3mm}
The following lemmas characterize the expansion of
the zone of uncertainty.
\begin{lem}
\label{l11}
Let $G$ be a simple cellular game with left/right symmetry. Let $R(g)$
be the position
of the right border of the zone of uncertainty in generation $g$,
if it exists.
Let $\alpha_1$ be the smallest probability that any
cell stays alive, and $\alpha_2$ the largest.
Then, for any $n$, there is always probability at least
${1 \over 2} (\alpha_1)^2 (1 - \alpha_2)^{n+1}$
that $R(g+2) - R(g) > n$;
and probability at least $\left( {1 \over 2} \right)^{n+2} \alpha_1^4 (1 - \alpha_2)^{n+2}$
that $R(g) - R(g+2) > n$.
\end{lem}
{\sl Proof.}
Without loss of generality, assume $R(g) = 0$; that is, assume that
cell $0$ is black and there are no black cells to the right
of it. Thus, there is
probability at least $\alpha_1 (1 - \alpha_2)^{n+1}$ that,
in generation $g$, cell $0$ lives, and all cells between it and cell $n+2$
do not. Given these events, there is probability at least ${1 \over 2}$
that cell $n+1$ becomes black in that generation. Given these
events, in generation $g+1$ there is probability at least
$\alpha_1$ that
cell $n+1$ lives, thus staying black into the next generation.
Thus there is probability at least
${1 \over 2} (\alpha_1)^2 (1 - \alpha_2)^{n+1}$ that
$R(g+2) - R(g) > n$.
Now, suppose cell $-n-2$ is black. Then there is probability at least
$(\alpha_1)^2 (1 - \alpha_2)^{n+2}$ that, in generation $g$, cell $1$,
which is white, lives, cell $-n-2$ lives,
and all cells between those two do not. Given
these events, there is probability $\left( {1 \over 2} \right)^{n+2}$ that cells
$0$ through $-n$ become white, and cell $-n-1$ black, in
that generation. Given these events, in generation $g+1$ there
is probability at least $(\alpha_1)^2$ that cells $-n$ and
$-n-1$ both live. This will ensure that at the beginning
of generation $g+2$, the zone of uncertainty will still
exist and have the desired border.
On the other hand, suppose cell $-n-2$ is white. Then
there is probability at least $\alpha_1^2 (1 - \alpha_2)^{n+1}$ that,
in generation $g$, cell $0$, which is black, lives, cell $-n-2$
lives, and all cells between these two do not. Given
these events, there is probability $\left( {1 \over 2} \right)^{n+1}$ that
cell $-n-1$ becomes black, and cells $-n$ through $-1$
become white in that generation.
Given these events, in generation $g+1$ there is probability
at least $\alpha_1^2 (1-\alpha_2)$ that cells $-n-1$ and $-n$
live and cell $0$ dies. As before, this will ensure that
at the beginning of generation $g+2$, the zone of uncertainty
will still exist and have the desired border. Thus, there is
probability at least $\left( {1 \over 2} \right)^{n+2} \alpha_1^4 (1 - \alpha_2)^{n+2}$
that $R(g) - R(g+2) > n$.
\rule {2mm}{3mm}
Similar results, of course, apply to $L(g)$.
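As a purely numerical illustration (not part of the proof), the two lower bounds of Lemma \ref{l11} can be evaluated for sample parameter values; both are positive for any $0 < \alpha_1, \alpha_2 < 1$ and decay geometrically in $n$.

```python
def right_bound(a1, a2, n):
    # Lower bound on the probability that R(g+2) - R(g) > n,
    # from Lemma l11: (1/2) * a1^2 * (1 - a2)^(n+1).
    return 0.5 * a1 ** 2 * (1 - a2) ** (n + 1)

def left_bound(a1, a2, n):
    # Lower bound on the probability that R(g) - R(g+2) > n:
    # (1/2)^(n+2) * a1^4 * (1 - a2)^(n+2).
    return 0.5 ** (n + 2) * a1 ** 4 * (1 - a2) ** (n + 2)
```

For instance, with $\alpha_1 = \alpha_2 = 1/2$ and $n = 3$, the right bound is $2^{-7}$ and the left bound is $2^{-14}$.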
\begin{lem}
\label{l2}
Suppose the zone of uncertainty moves arbitrarily far to the left
only. Then the probability that its right border will not recede
arbitrarily far to the left (that is, that it will stay within
some bounded interval) is $0$. Furthermore, the probability that
the right effective border will not also recede arbitrarily
far to the left is $0$.
\end{lem}
{\sl Proof.}
Let $\alpha_1$ be the smallest probability that any
cell stays alive, and $\alpha_2$ the largest.
Let $R(g)$ be as above. By Lemma \ref{l11}, if
$-k < R(g) < k$ there is probability at least
${1 \over 2} (\alpha_1)^2 (1 - \alpha_2)^{k+n+1}$
that $R(g+2) > n$. Thus, if $-k < R(g) < k$ for infinitely many $g$,
then, with probability $1$, $R(g)$ will exceed any given $n$
infinitely many times.
Let $R'(g)$ be the position of the right border of the
effective zone of uncertainty in generation $g$. (Again, let $R'(g)$ be
defined only if this zone exists.) Each time $R'(g) > -k$, either
$R(g) > -k$, or cell $-k$ has $50\%$ probability of becoming black.
If this cell does become black, $R(g+1)$ will exceed $-k$.
Thus if $R'(g)$ exceeds $-k$ infinitely many times, $R(g)$ will,
with probability $1$, exceed $-k$ infinitely many times too.
\rule {2mm}{3mm}
As above, similar results apply to the left border of the
zone of uncertainty.
Some concepts are now presented for subsequent use.
Let a cell history for generations $g$ up to $h$ consist of:
\begin{enumerate}
\item The system state (that is, the positions of all black cells)
at the beginning of generation $g$.
\item All meaningful life decisions made in generations $g$ through
$h-1$; that is, all life decisions made within the zone of uncertainty,
and for those cells outside it whose nearest living neighbor on one side
is black.
\item All color decisions made where color is in doubt; that is,
for cells that die and have nearest living neighbors of different
colors on each side.
\end{enumerate}
Let $H(g,h)$ refer to a cell history as described above. Note
that this description only refers to life decisions made within
the effective zone of uncertainty. Thus, the probability of
any history is affected only by such decisions.
Let the following function be defined for any cell history $h =
H(1,g)$ that starts at generation $1$. Let $F_1(h) = 1$ if, under
$h$, in generation $g$ there is exactly one nonnegative black cell,
at position $r$ or greater. Let $F_1(h) = 0$ otherwise.
Similarly, let $F_2$ and $F_3$ be defined for one-generation
cell histories $h = H(g,g+1)$. Let $F_2(h) = 1$ if in generation
$g$ there is exactly one nonnegative black cell, in position
$r$ or greater (that is, if $F_1$ would be $1$ for the
previous history), and, under $h$, in generation $g+1$ there are none;
and let $F_2(h) = 0$ otherwise. Let $F_3(h) = 1$ if
in generation $g+1$ there are two, three, or four black nonnegative cells,
all adjacent to each other, and all in positions $r$ or greater.
Let $F_3(h) =0$ otherwise.
The following lemmas are used in constructing the main proof.
The next two lemmas, which compare the probabilities of different
$1$-genera\-tion cell histories, both use the same idea:
Changing what happens to only a specific number of
cells is likely to have only a limited effect on the
probability of the history.
\begin{lem}
\label{mix2} Let $G$ be a simple cellular game.
Then for each $1$-generation history $h$ such that $F_2(h) = 1$,
there is a different $1$-generation history $h'$ such that
all the following apply.
\begin{enumerate}
\item
$F_3(h') = 1$.
\item
$h$ and $h'$ both start with the same system states.
\item
At the end of generation $g$, given history $h'$, the negative
black cells are exactly the same as those at the end of $g$
given $h$.
\item
For any history (starting at generation $1$) $h_0$, we have
\begin{eqnarray}
P(H(g,g+1) = h' \vert H(1,g) = h_0) \ge \\
\beta P(H(g,g+1) = h \vert H(1,g) = h_0)
\end{eqnarray}
with $\beta$ depending only on $G$.
\end{enumerate}
\end{lem}
{\sl Proof.} Let $h$ be a cell history such that $F_2(h) = 1$. That
is, at the beginning of the generation $g$ in which $h$ occurs,
there is one nonnegative black cell $c$. Under $h$, $c$
must die, because in generation $g+1$ there will no longer
be any more nonnegative black cells. Let $b$ be the
nearest cell to $c$, on the left, that stays alive in
generation $g$.
Let $\alpha_1$ be the smallest probability that any
cell stays alive, and $\alpha_2$ the largest.
By the definition of a simple cellular game, both
these numbers must be greater than $0$.
Let $\alpha_3$ be the minimum of $\alpha_1$,
$\alpha_2$, $1 - \alpha_1$, $1 - \alpha_2$.
Case I: $b$ is black (and thus in a negative-numbered position).
Let cell $d$ be the closest living neighbor of cell $c$ on the right.
Let it die as before, and let cell $d+1$ die. As under $h$,
all dead cells between $b$ and $d$ have a $50\%$ chance of becoming black.
Let their colors be assigned the same; e.g., cells $0$ through $d-1$ will
become white. Let cells $d$ and $d+1$ become black. Let all other
life/death and black/white decisions be as under $h$.
Thus, this new history $h'$ satisfies $F_3(h') = 1$,
it produces the same negative black cells as $h$, and we have
\begin{eqnarray}
P(H(g,g+1) = h' \vert H(1,g) = h_0) \ge \\
{(\alpha_3)^2 \over 2} P(H(g,g+1) = h \vert H(1,g) = h_0)
\end{eqnarray}
Also, $h$ can be reconstructed if $h'$ is known; that is:
\begin{enumerate}
\item Initial conditions are the same for both histories.
\item Under $h'$, the location of cells $d$ and $d+1$ are
known; they are the only nonnegative black cells in generation $g+1$.
\item All life/death and color decisions in the effective zone
of uncertainty are the same, except for cells $d$ and $d+1$.
\item The history of cell $d$, under $h$, is exactly known. It
stays alive and stays white.
\item The life or death of cell $d+1$, under $h$, is not known.
However, under $h$, this cell is not in the effective zone
of uncertainty and decisions about it are not considered part
of the cell history.
\end{enumerate}
Thus, in this case, for each different $h$ there is a different
$h'$ satisfying the conditions of this lemma.
Case II: Cell $b$ is white, and one cell to the left of $c$.
Under $h'$, let cells $c$ and $c+3$ live. Let cells $c+1$
and $c+2$ die. Since $c$ is their right parent, they
can become black in the next generation; let them do so.
Let all other cells live or die, and change color,
as under $h$. Note that cell $c$ cannot become a parent
of cells to the left, since it is bordered on the left
by the living cell $b$.
Thus, $F_3(h')$ will be $1$, it will produce the same
negative black cells as $h$, and we have
\begin{eqnarray}
P(H(g,g+1) = h' \vert H(1,g) = h_0) \ge \\
{(\alpha_3)^4 \over 4} P(H(g,g+1) = h \vert H(1,g) = h_0)
\end{eqnarray}
A history $h'$ constructed in this manner cannot be
confused with one created using the first method, since
at the end there are three nonnegative black cells rather than
two. Its uniqueness can be shown by methods similar
to those used in the first case.
Case III: Cell $b$ is white and more than one cell to the
left of $c$. Let cells $c-1$ and $c$ live; let
cells $c+1$ through $c+3$ die, and let cell $c+4$ live.
Let all other cells live or die as under $h$.
Now, cell $c-1$ must be white, since cell $c$ is isolated.
Therefore, cells $b+1$ through $c-2$ must, as under
$h$, become white. Let cells $c+1$ through $c+3$
become black. Note that all other cells have the
same color options as under $h$.
Thus, $F_3(h')$ will be $1$, it will produce the same
negative black cells as $h$, and we have
\begin{eqnarray}
P(H(g,g+1) = h' \vert H(1,g) = h_0) \ge \\
{(\alpha_3)^6 \over 8} P(H(g,g+1) = h \vert H(1,g) = h_0)
\end{eqnarray}
This $h'$ cannot be confused with one created using the first
two methods, since at the end there are four nonnegative black cells rather than
three or two. Its further uniqueness can also be shown by methods similar
to those used in the first case. Therefore, the conditions
of the theorem are satisfied for all three cases, with
$\beta = { (\alpha_3)^6 \over 8}$.
\rule {2mm}{3mm}
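The uniform constant $\beta = (\alpha_3)^6 / 8$ works across all three cases because $\alpha_3 \le 1/2$ by definition, which forces the Case III factor to be the smallest of the three. A quick numerical check (illustrative only, not part of the proof):

```python
def case_factors(a3):
    # The probability-ratio factors obtained in Cases I, II, and III
    # of the preceding lemma.
    return (a3 ** 2 / 2, a3 ** 4 / 4, a3 ** 6 / 8)

def beta(a3):
    # The uniform constant beta: since a3 <= 1/2, each extra factor
    # of a3^2 / 2 only shrinks the bound, so the Case III factor
    # is the minimum of the three.
    return a3 ** 6 / 8
```

Since $\alpha_3$ depends only on the life probabilities of $G$, so does $\beta$, as the lemma requires.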
Now, if there is positive probability of a glider --
that is, of the effective zone of uncertainty moving
arbitrarily far in one direction only -- then there is
positive probability that in some generation $g$, this
zone will leave the nonnegative area for the last time.
The following lemma characterizes, for certain initial conditions,
how this can happen. For these conditions, we put a minimum
bound on the probability that, in the generation this zone
leaves the nonnegative area, there is exactly one black cell
-- and this cell is at position $r$ or greater. This bound
depends only on $G$.
The ways this zone can leave the nonnegative area are divided
into four cases. (Actually, three main cases; the last two are
quite similar.) For each of these cases, a different construction is used
to accomplish the proof. As in the preceding lemma, each of
these constructions uses histories that behave similarly
to the ones under consideration, and hence have similar
probabilities of occurrence.
\begin{lem}
\label{mix2a} Let $G$ be a simple cellular game of radius $r$,
operating under finite initial conditions $I$.
Let $\alpha_1$
be the lowest probability, under $G$, that any cell stays alive.
Let there be positive probability that under $I$ the effective zone
of uncertainty moves arbitrarily far to the left; that is,
that some generation $g$ is the last in which the effective zone
of uncertainty contains nonnegative cells. Let $Z_{g} = 1$ if
this is true for generation $g$, and $0$ otherwise. Let
$P(\exists g, Z_{g} = 1)$ exceed $1 - {\alpha_1 \over 2}$.
Let $X_{g} = 1$ if $Z_{g} = 1$, and at the beginning of $g$ there is only one
nonnegative black cell, at position $r$ or greater, and $0$ otherwise.
Then, for some $\gamma$ depending only on $G$
\begin{equation}
P(\exists g, X_{g} = 1) \ge \gamma P(\exists g, Z_{g} = 1)
\end{equation}
\end{lem}
{\sl Proof.}
Let $\alpha_2$ be the highest probability, under $G$, that any
cell stays alive. (By definition, $\alpha_1$, $\alpha_2 > 0$.)
Let $\alpha_3$, again, be the minimum of $\alpha_1$, $\alpha_2$,
$1 - \alpha_1$ and $1 - \alpha_2$.
Let $\alpha_4$ be the life probability of a black cell
whose $r$ neighbors on each side are also black.
Let $c_{g}$ be the rightmost living cell in the zone of uncertainty,
in generation $g$. Let $D_{g}$ be the rightmost black domain in that zone,
and let $e_{g}$ be the white cell at its left border.
First of all, we know that there is probability at least $\alpha_1$
that in generation $1$, the leftmost black cell lives. Therefore,
there is probability at least $\alpha_1$ that $Z_{1} = 0$. Thus,
if $P(\exists g, Z_{g} = 1) > 1 - {\alpha_1 \over 2}$, we know that
$P(\exists g, g \ge 2, Z_{g} = 1) \ge {\alpha_1 \over 2}$.
Now, suppose there exists a generation $g > 1$ such that $Z_{g} = 1$.
The conditions under which that occurs can be divided into
four cases, as follows:
\begin{enumerate}
\item $c_{g}$, as described above, is black.
\item $c_{g}$ is white, and $c_{g-1}$ is black.
\item $c_{g}$ is white, $c_{g-1}$ is white, and $e_{g-1}$ is alive.
\item $c_{g}$ is white, $c_{g-1}$ is white, and $e_{g-1}$ is not alive.
\end{enumerate}
Let $C_{g}$ be $k$, $1 \le k \le 4$, if case $k$ holds. Thus, there is a
$k$, $1 \le k \le 4$, such that
\begin{equation}
\label{opossum}
P(\exists g, Z_{g} = 1, C_{g} = k) \ge
{1 \over 4} {\alpha_1 \over 2} P(\exists g, Z_{g} = 1)
\end{equation}
Case I. (\ref {opossum}) is true with $k$ set to 1.
In this case, $c_{g}$ is black. Let $d_{g}$ be the living
cell just to the right of the effective zone of uncertainty. For
$Z_{g}$ to be $1$, $d_{g}$ must be at position $1$ or greater.
We wish to show that for each two consecutive $1$-generation histories
$h$, $i$ such that if $H(g,g+1) = h$, $C_{g} = 1$,
there exists a different collection of histories
$h'$, $i'$, such that, for $\kappa$ depending only on $G$, we have
\begin{eqnarray}
\label{a0}
P(H(g,g+1) = h', H(g+1,g+2) \in i') \ge \\
\kappa P(H(g,g+1) = h, H(g+1,g+2) = i)
\end{eqnarray}
and
\begin{eqnarray}
\label{a1}
P(X_{g+1} = 1 \vert H(g,g+1) = h',\\
H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert H(g,g+1) = h,\\
H(g+1,g+2) = i)
\end{eqnarray}
Let $h'$ be constructed as follows:
\begin{enumerate}
\item Initial colors are the same as under $h$.
\item Cells $d_{g}$ through $d_{g}+r$ die.
\item Cell $d_{g}+r+1$ lives.
\item All other cells live or die as under $h$. Thus, cells $d_{g}$ through
$d_{g}+r$ are the only ones with different color possibilities than
under $h$; that is, they have a $50\%$ chance of becoming black,
with $c_{g}$ as their parent.
\item Cells $d_{g}$ through $d_{g}+r-1$ become white.
\item Cell $d_{g}+r$ becomes black.
\item All other cells become black or white as under $h$.
\end{enumerate}
At the end of $h'$, we are left with exactly
the same black cells as at the end of $h$, except that cell $d_{g}+r$
is black. And, because of cells added to the zone of uncertainty
under $h'$:
\begin{equation}
\label{a2}
P(H(g,g+1) = h') \ge
{(\alpha_3)^{r+2} \over 2^{r+1}} P(H(g,g+1) = h)
\end{equation}
Also, $h$ can be reconstructed if $h'$ is known; that is:
\begin{enumerate}
\item Initial conditions are the same for both histories.
\item The location of cell $d_{g}+r$ can be recovered. After
the completion of $h'$, it is the rightmost black cell. Hence,
the location of cell $d_{g}$ can be recovered.
\item Under $h$, all life/death and color decisions in the effective zone
of uncertainty, through cell $d_{g}-1$, are the same.
\item Under $h$, cell $d_{g}$ lives, thus bounding the zone of uncertainty.
\end{enumerate}
For $Z_{g}$ to be $1$, in generation $g+1$ the effective
zone of uncertainty must not reach the nonnegative area.
Therefore, $d_{g+1}$ must not be positive.
Let $H(g+1,g+2) = i$ be such a history.
Let $i'$ be constructed as follows, given $i$ and its predecessor $h$:
\begin{enumerate}
\item Let initial colors be the same as under $i$, except
that cell $d_{g}+r$ is black. (The position of $d_{g}$ can
be determined, given $h$.)
\item Let the life of all cells in the effective zone of uncertainty of
$i$ be determined as under $i$.
\item Let cell $d_{g}+r-1$ live. Thus, since the effective
zone of uncertainty of $i$ stays in the negative area, all cells
in this zone will face the same black/white decisions. Also,
cells $d_{g+1}$ through $d_{g}+r-2$ must, if they die, become white.
\item Let cell $d_{g}+r$ die.
\item Let cell $d_{g}+r+1$ live. Thus, cell $d_{g}+r$ will become white.
\item Let all black/white decisions in the effective
zone of uncertainty of $i$ be determined just as under $i$.
\end{enumerate}
In this generation, cells $d_{g+1}$ through $d_{g}+r-2$ can live or
die without affecting the inclusion of a history in $i'$.
Note that the only additional specification for what happens in
$i'$, as opposed to $i$, is the life or death of three particular cells.
Thus, we have
\begin{eqnarray}
\label{a3}
P(H(g+1,g+2) \in i' \vert H(g,g+1) = h') \ge\\
(\alpha_3)^3 P(H(g+1,g+2) = i \vert H(g,g+1) = h)
\end{eqnarray}
Note that $i$ can be recovered, given $i'$, because all
decisions in the effective zone of uncertainty of $i$ are the same.
Also note that conditions after $i'$ are the same
as after $i$. Thus, we have
\begin{eqnarray}
\label{a4}
P(Z_{g+1} = 1 \vert H(g,g+1) = h', H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert H(g,g+1) = h, H(g+1,g+2) = i)
\end{eqnarray}
Since $i'$ starts with exactly one nonnegative black cell, at position
$r$ or greater, (\ref{a1}) holds.
Combining (\ref{a2}) and (\ref{a3}), we have (\ref{a0}) holding
with $\kappa = {(\alpha_3)^{r+5} \over 2^{r+1}}$.
Since there is a different $h'$, $i'$ for each different $h$, $i$,
we have
\begin{eqnarray}
P(\exists g, X_{g+1} = 1, C_{g} = 1) \ge\\
\sum_{g,h,i} P(X_{g+1} = 1, C_{g} = 1 \vert\\
H(g,g+1) = h', H(g+1,g+2) \in i')\\
P(H(g,g+1) = h', H(g+1,g+2) \in i') \ge\\
\sum_{g,h,i} P(Z_{g} = 1, C_{g} = 1 \vert\\
H(g,g+1) = h, H(g+1,g+2) = i)\\
\kappa P(H(g,g+1) = h, H(g+1,g+2) = i) =\\
\kappa P(\exists g, Z_{g} = 1, C_{g} = 1)
\end{eqnarray}
Thus, by our case hypothesis, we have
\begin{equation}
P(\exists g, X_{g+1} = 1,C_{g} = 1) \ge
{\kappa \over 4} {\alpha_1 \over 2} P(\exists g, Z_{g} = 1)
\end{equation}
Case II. (\ref {opossum}) is true with $k$ set to 2.
In this case, $c_{g}$ is white, and $c_{g-1}$ is black.
Let $d_{g-1}$ be the living
cell just to the right of the zone of uncertainty,
in generation $g-1$. Note that this cell is to the
right of any cells that are black in generation $g$.
Hence, for $Z_{g}$ to be $1$, $d_{g-1}$ must be at position $1$ or greater.
We wish to show that for each three consecutive $1$-generation histories
$k$, $h$, $i$ such that if $H(g,g+1) = h$, $C_{g} = 2$,
there exists a different collection of histories
$k'$, $h'$, $i'$ such that, for $\kappa$ depending only on $G$, we have
\begin{eqnarray}
\label{b0}
P(H(g-1,g) = k',H(g,g+1) \in h',H(g+1,g+2) \in i') \ge \\
\kappa
P(H(g-1,g) = k,H(g,g+1) = h, H(g+1,g+2) = i)
\end{eqnarray}
and
\begin{eqnarray}
\label{b1}
P(X_{g+1} = 1 \vert H(g-1,g) = k', H(g,g+1) \in h',\\
H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert H(g-1,g) = k, H(g,g+1) = h,\\
H(g+1,g+2) = i)
\end{eqnarray}
Let $k'$ be constructed as follows:
\begin{enumerate}
\item Initial colors are the same as under $k$.
\item Cells $d_{g-1}$ through $d_{g-1}+r$ die.
\item Cell $d_{g-1}+r+1$ lives.
\item All other cells live or die as under $k$. Thus, cells $d_{g-1}$
through $d_{g-1}+r$ are the only ones with different color possibilities
than under $k$; that is, they have a $50\%$ chance of becoming black,
with $c_{g-1}$ as their parent.
\item Cells $d_{g-1}$ through $d_{g-1}+r-1$ become white.
\item Cell $d_{g-1}+r$ becomes black.
\item All other cells become black or white as under $k$.
\end{enumerate}
At the end of $k'$, we are left with exactly
the same black cells as at the end of $k$, except that cell $d_{g-1}+r$
is black. And, because of cells added to the zone of uncertainty
under $k'$, we have
\begin{equation}
\label{b2}
P(H(g-1,g) = k') \ge
{(\alpha_3)^{r+2} \over 2^{r+1}} P(H(g-1,g) = k)
\end{equation}
Also, $k$ can be reconstructed if $k'$ is known; that is:
\begin{enumerate}
\item Initial conditions are the same for both histories.
\item The location of cell $d_{g-1}+r$ can be recovered. After
the completion of $k'$, it is the rightmost black cell. Hence,
the location of cell $d_{g-1}$ can be recovered.
\item Under $k$, all life/death and color decisions in the effective zone
of uncertainty, through cell $d_{g-1}-1$, are the same.
\item Under $k$, cell $d_{g-1}$ lives, thus bounding the zone of uncertainty.
\end{enumerate}
Let $H(g,g+1) = h$ be a history that, together
with its predecessor $k$, satisfies the conditions for $C_{g}$ to be $2$,
and for $Z_{g}$ to possibly be $1$: That is, under $h$, let
the leftmost living cell in the zone of uncertainty be white,
and let this zone leave the nonnegative area.
Let $h'$ be constructed as follows, given $h$ and its predecessor $k$:
\begin{enumerate}
\item Let initial colors be the same as under $h$, except
that cell $d_{g-1}+r$ is black. (The position of $d_{g-1}$ can
be determined, given $k$.)
\item Let the life of all cells in the effective zone of uncertainty of
$h$ be determined as under $h$.
\item Let cell $d_{g-1}+r-1$ live. Thus, since under $h$ the
effective zone of uncertainty does not reach this far to
the left, all cells in this zone will face the same black/white decisions.
Note that since under $h$ the left border of the zone of
uncertainty recedes, $e_{g}$ -- that is, the white cell
at the border of this zone -- must be to the right of cell $d_{g-1}+r$.
Also, cells $e_{g}$ through $d_{g-1}+r-2$ must, if they die,
become white.
\item Let cell $d_{g-1}+r$ live.
\item Let cell $d_{g-1}+r+1$ live.
\item Let all black/white decisions in the effective
zone of uncertainty of $h$ be determined just as under $h$.
\end{enumerate}
In this generation, cells $e_{g}$ through $d_{g-1}+r-2$ can live or
die without affecting the inclusion of a history in $h'$.
Note that the only additional specification for what happens in
$h'$, as opposed to $h$, is that three particular cells live.
Thus, we have
\begin{eqnarray}
\label{b3}
P(H(g,g+1) \in h' \vert H(g-1,g) = k') \ge\\
(\alpha_3)^3 P(H(g,g+1) = h \vert H(g-1,g) = k)
\end{eqnarray}
Note that $h$ can be recovered, given $h'$, because all
decisions in the effective zone of uncertainty of $h$ are the same.
For $Z_{g}$ to be $1$, in generation $g+1$ the effective
zone of uncertainty must not reach the nonnegative area.
Therefore, $d_{g+1}$ must not be positive.
Let $H(g+1,g+2) = i$ be such a history.
Let $i'$ be constructed as follows, given $i$ and its predecessors
$h$ and $k$:
\begin{enumerate}
\item Let initial colors be the same as under $i$, except
that cell $d_{g-1}+r$ is black. (The position of $d_{g-1}$ can
be determined, given $k$.)
\item Let the life of all cells in the effective zone of uncertainty of
$i$ be determined as under $i$.
\item Let cell $d_{g-1}+r-1$ live. Thus, since the effective
zone of uncertainty of $i$ stays in the negative area, all cells
in this zone will face the same black/white decisions. Also,
cells $d_{g+1}$ through $d_{g-1}+r-2$ must, if they die, become white.
\item Let cell $d_{g-1}+r$ die.
\item Let cell $d_{g-1}+r+1$ live. Thus, cell $d_{g-1}+r$ will become white.
\item Let all black/white decisions in the effective
zone of uncertainty of $i$ be determined just as under $i$.
\end{enumerate}
In this generation, cells $d_{g+1}$ through $d_{g-1}+r-2$ can live or
die without affecting the inclusion of a history in $i'$.
Note that the only additional specification for $i'$, as opposed
to $i$, is the life or death of three particular cells.
Thus, we have
\begin{eqnarray}
\label{bb3}
P(H(g+1,g+2) \in i' \vert\\
H(g,g+1) \in h', H(g-1,g) = k') \ge\\
(\alpha_3)^3 P(H(g+1,g+2) = i \vert\\
H(g,g+1) = h,H(g-1,g) = k)
\end{eqnarray}
Note that $i$ can be recovered, given $i'$, because all
decisions in the effective zone of uncertainty of $i$ are the same.
Also note that conditions after $i'$ are the same
as after $i$. Thus, we have
\begin{eqnarray}
\label{bb4}
P(Z_{g+1} = 1 \vert\\
H(g-1,g) = k', H(g,g+1) \in h', H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert \\
H(g-1,g) = k, H(g,g+1) = h, H(g+1,g+2) = i)
\end{eqnarray}
Since $i'$ starts with exactly one nonnegative black cell, at position
$r$ or greater, (\ref{b1}) holds.
Combining (\ref{b2}), (\ref{b3}),
and (\ref{bb3}), we have (\ref{b0}) holding
with $\kappa = {(\alpha_3)^{r+8} \over 2^{r+1}}$.
Since there is a different $k'$, $h'$, $i'$ for each different $k$,
$h$, $i$, we have
\begin{eqnarray}
P(\exists g, X_{g+1} = 1, C_{g} = 2) \ge\\
\sum_{g,k,h,i} P(X_{g+1} = 1, C_{g} = 2 \vert\\
H(g-1,g) = k',H(g,g+1) \in h', H(g+1,g+2) \in i')\\
P(H(g-1,g) = k', H(g,g+1) \in h', H(g+1,g+2) \in i') \ge\\
\sum_{g,k,h,i} P(Z_{g} = 1, C_{g} = 2 \vert\\
H(g-1,g) = k, H(g,g+1) = h, H(g+1,g+2) = i)\\
\kappa P(H(g-1,g) = k, H(g,g+1) = h, H(g+1,g+2) = i) =\\
P(\exists g, Z_{g} = 1, C_{g} = 2)
\end{eqnarray}
Thus, by our case hypothesis, we have
\begin{equation}
P(\exists g, X_{g+1} = 1,C_{g} = 2) \ge
{\kappa \over 4} {\alpha_1 \over 2} P(\exists g, Z_{g} = 1)
\end{equation}
Case III. (\ref {opossum}) is true with $k$ set to 3.
In this case, $c_{g}$ is white, and $c_{g-1}$ is white.
Let $b_{g-1}$ be the white cell just to the left of $D_{g-1}$. In
this case, $b_{g-1}$ equals $c_{g-1}$.
We wish to show that for each three consecutive $1$-generation histories
$k$, $h$, $i$ such that if $H(g,g+1) = h$, $C_{g} = 3$,
there exists a different collection of histories
$k'$, $h'$, $i'$ such that, for $\kappa$ depending only on $G$, we have
\begin{eqnarray}
\label{c0}
P(H(g-1,g) \in k', H(g,g+1) \in h',H(g+1,g+2) \in i') \ge\\
\kappa
P(H(g-1,g) = k)\\
P(H(g,g+1) = h \vert H(g-1,g) = k)\\
P(H(g+1,g+2) = i \vert\\
H(g-1,g) = k, H(g,g+1) = h)
\end{eqnarray}
and
\begin{eqnarray}
\label{c1}
P(X_{g+1} = 1 \vert H(g-1,g) \in k', H(g,g+1) \in h',\\
H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert H(g-1,g) = k, H(g,g+1) = h,\\
H(g+1,g+2) = i)
\end{eqnarray}
Let $k'$ be constructed as follows:
\begin{enumerate}
\item Initial colors are the same as under $k$.
\item Both the leftmost and rightmost cells in $D$ live
(cells $b_{g-1}+1$ and $e_{g-1}-1$). Thus, all cells in $D$ must become black.
\item Cells $e_{g-1}$ through $e_{g-1}+r$ die.
\item Cell $e_{g-1}+r+1$ lives. Thus, cells $e_{g-1}$ through $e_{g-1}+r$
may become either black or white.
\item All other cells, up to the left border of the zone of uncertainty
of $k$, live or die as under $k$. In particular, $b_{g-1}$ lives, as under
$k$. Thus, all cells to the left of $b_{g-1}$ are faced with the same
black/white decisions as under $k$.
\item Cells $e_{g-1}$ through $e_{g-1}+r-1$ become white, and cell
$e_{g-1}+r$ becomes black.
\item All other cells become black or white as under $k$.
\end{enumerate}
Note that all cells in $D_{g-1}$, except for those on each border, can
live or die without affecting the inclusion of a history in $k'$.
At the end of any history in $k'$, we are left with exactly
the same black cells as under $k$, except that all cells in
$D_{g-1}$ are black and cell $e_{g-1}+r$ is black.
Now, consider those cells in the interior of $D_{g-1}$.
Under $k$, they must all die; under
$k'$, their life or death does not matter. On the other
hand, the two cells at the border of $D_{g-1}$ die under $k$,
and live under $k'$. Also, cells $e_{g-1}$ through $e_{g-1}+r$ are
outside the zone of uncertainty under $k$. Under $k'$,
they die, and their colors are specified.
Thus, if $n$ is the maximum of $\vert D_{g-1} \vert - 2r$ and $0$, we have
\begin{equation}
\label{c2}
P(H(g-1,g) \in k') \ge
{(\alpha_3)^{r+3} \over 2^{r+1} (1 - \alpha_4)^{n}} P(H(g-1,g) = k)
\end{equation}
Also, $k$ can be reconstructed if $k'$ is known; that is:
\begin{enumerate}
\item Initial conditions are the same for both histories.
\item Under $k$, all life/death and color decisions in the effective zone
of uncertainty, up to cell $b_{g-1}$, are the same.
\item The location of cell $b_{g-1}$ can be recovered. After
the completion of $k'$, it is the rightmost white cell in the
next-to-rightmost finite white domain.
\item Cell $b_{g-1}$ lives, as under $k'$.
\item The location of cell $e_{g-1}$ can be recovered. After the
completion of $k'$, it is the leftmost cell in the rightmost
finite white domain.
\item Under $k$, cells $b_{g-1}+1$ through $e_{g-1}-1$ die, and become white.
\item Under $k$, cells $e_{g-1}$ and all cells to the right of
it are outside the zone of uncertainty.
\end{enumerate}
Let $H(g,g+1) = h$ be a history that, together
with its predecessor $k$, satisfies the conditions for $C_{g}$ to
be $3$, and for $Z_{g}$ to possibly be $1$. That is, under
both $k$ and $h$, let
the leftmost living cell in the zone of uncertainty be white.
Thus, since the right border of this zone will recede in
generation $g-1$, $D_{g}$ is completely to the left of $D_{g-1}$.
Also, in generation $g$, let this zone leave the nonnegative
area; and let $b_{g-1}$ be alive.
Let $h'$ be constructed as follows, given $h$ and its predecessor $k$:
\begin{enumerate}
\item Let initial colors be the same as under $h$, except
that cell $e_{g-1}+r$, and all cells in $D_{g-1}$, are black.
(The location of $D_{g-1}$, and hence of cells $b_{g-1}$ and
$e_{g-1}$, can be determined given $k$.)
\item Let the life of all cells in the zone of uncertainty of
$h$ be determined as under $h$.
(Note that $D_{g-1}$ is to the right of this zone.)
Furthermore, let cell $e_{g}$ live or die as under $h$.
\item Let the white cells at each border of $D_{g-1}$ -- that is,
cells $b_{g-1}$ and $e_{g-1}$ -- live.
\item Let all cells in $D_{g-1}$ die. Thus, they must all
become white.
\item Let cell $e_{g}+r-1$ live.
\item Let cell $e_{g-1}+r$ live.
\item Let cell $e_{g-1}+r+1$ live.
\end{enumerate}
Note that in generation $g$, cells $e_{g}$ through $e_{g-1}$ --
that is, the cells between the border of the zone of uncertainty
of $h$ and the left border of $D_{g-1}$ -- can
live or die without affecting the inclusion of a history in $h'$.
Also, cells $e_{g-1}+1$ through $e_{g-1}+r-2$, (if $r$ is
large enough for these cells to exist) can live or die
without affecting this inclusion.
At the end of any history in $h'$, we are left with exactly
the same black cells as under $h$, except that cell $e_{g-1}+r$
is black. Now, since $e_{g-1} > e_{g} \ge 0$, we have $e_{g-1}+r > r$.
And for $Z_{g}$ to be $1$, there must be no nonnegative black
cells at the end of generation $g$. Thus, at the end of $h'$ there
will be only one nonnegative black cell, cell $e_{g-1}+r$.
Now, consider those cells in the interior of $D_{g-1}$.
Under $h'$, they must all die; under
$h$, they are outside the zone of uncertainty. Also, those
cells $r$ or less to the left of $D_{g-1}$ (cells $b_{g-1} - 1$
through $b_{g-1} - r$) may have different life probabilities.
Finally, we have to consider the life probabilities of cells
$e_{g}$, and $e_{g}+r-1$ through $e_{g}+r+1$.
Thus, if $n$ is the maximum of $\vert D_{g-1} \vert - 2r$ and $0$, we have
\begin{equation}
\label{c3}
P(H(g,g+1) \in h') \ge (\alpha_3)^{3r+4} (1 - \alpha_4)^{n}
P(H(g,g+1) = h)
\end{equation}
Also, $h$ can be reconstructed if $h'$ is known. That is,
\begin{enumerate}
\item Under $h'$, $D_{g-1}$ is the second black domain on the left.
\item Under $h$, initial conditions to the right of $D_{g-1}$ are the
same as under $h'$.
\item Under $h$, the zone of uncertainty does not reach $D_{g-1}$.
\item Decisions in the zone of uncertainty of $h$, and at its border,
both life/death and black/white, are the same as under $h'$.
\end{enumerate}
Now, for $Z_{g}$ to be $1$, in generation $g+1$ the effective
zone of uncertainty must not reach the nonnegative area.
Therefore, $d_{g+1}$ must not be positive.
Let $H(g+1,g+2) = i$ be such a history.
Let $i'$ be constructed as follows, given $i$ and its predecessors
$h$ and $k$:
\begin{enumerate}
\item Let initial colors be the same as under $i$, except
that cell $d_{g-1}+r$ is black. (The position of $d_{g-1}$ can
be determined, given $k$.)
\item Let the life of all cells in the effective zone of uncertainty of
$i$ be determined as under $i$.
\item Let cell $d_{g-1}+r-1$ live. Thus, since the effective
zone of uncertainty of $i$ stays in the negative area, all cells
in this zone will face the same black/white decisions. Also, any
cells between $d_{g+1}$ and $d_{g-1}+r-2$ must, if they die, become white.
\item Let cell $d_{g-1}+r$ die.
\item Let cell $d_{g-1}+r+1$ live. Thus, cell $d_{g-1}+r$ will become white.
\item Let all black/white decisions in the effective
zone of uncertainty of $i$ be determined just as under $i$.
\end{enumerate}
In this generation, cells $d_{g+1}$ through $d_{g-1}+r-2$ can live or
die without affecting the inclusion of a history in $i'$.
Note that the only additional specification for
$i'$, as opposed to $i$, is the life or death of three particular cells.
Thus,
\begin{eqnarray}
\label{cb3}
P(H(g+1,g+2) \in i' \vert\\
H(g,g+1) \in h', H(g-1,g) = k') \ge\\
(\alpha_3)^3 P(H(g+1,g+2) = i \vert\\
H(g,g+1) = h,H(g-1,g) = k)
\end{eqnarray}
Note that $i$ can be recovered, given $i'$, because all
decisions in the effective zone of uncertainty of $i$ are the same.
Also note that conditions after $i'$ are the same
as after $i$. Thus, we have
\begin{eqnarray}
\label{cb4}
P(X_{g+1} = 1 \vert\\
H(g-1,g) \in k', H(g,g+1) \in h', H(g+1,g+2) \in i') =\\
P(Z_{g} = 1 \vert \\
H(g-1,g) = k, H(g,g+1) = h, H(g+1,g+2) = i)
\end{eqnarray}
Since $i'$ starts with exactly one nonnegative black cell, at position
$r$ or greater, (\ref{c1}) holds.
Combining (\ref{c2}), (\ref{c3}),
and (\ref{cb3}), we have (\ref{c0}) holding
with $\kappa = {(\alpha_3)^{4r+10} \over 2^{r+1}}$.
Since there is a different $k'$, $h'$, $i'$ for each different $k$,
$h$, $i$, we have
\begin{eqnarray}
P(\exists g, X_{g+1} = 1, C_{g} = 3) \ge\\
\sum_{g,k,h,i} P(X_{g+1} = 1, C_{g} = 3 \vert\\
H(g-1,g) \in k',H(g,g+1) \in h', H(g+1,g+2) \in i')\\
P(H(g-1,g) \in k',H(g,g+1) \in h',
H(g+1,g+2 ) \in i') \ge\\
\sum_{g,k,h,i} P(Z_{g} = 1, C_{g} = 3 \vert\\
H(g-1,g) = k, H(g,g+1) = h, H(g+1,g+2) = i)\\
\kappa P(H(g-1,g) = k, H(g,g+1) = h,
H(g+1,g+2 ) = i) =\\
P(\exists g, Z_{g} = 1, C_{g} = 3)
\end{eqnarray}
Thus, by our case hypothesis, we have
\begin{equation}
P(\exists g, X_{g+1} = 1,C_{g} = 3) \ge
{\kappa \over 4} {\alpha_1 \over 2} P(\exists g, Z_{g} = 1)
\end{equation}
Case IV. (\ref {opossum}) is true with $k$ set to 4.
This case can be handled almost exactly the same as Case III.
The only difference between the two is that, in Case IV,
cell $b_{g-1}$ is not alive in generation
$g-1$. Under $k'$, this cell lives; and in this case,
if $k$ is reconstructed from $k'$, it is assumed
that this cell is not alive. Since, under $k$, the nearest
living cell to the right of $b_{g-1}$ is white, all black/white
decisions to the left of $b_{g-1}$ are the same for $k'$ as for
$k$.
\rule {2mm}{3mm}
\begin{lem}
\label{mix4}
If there is positive probability that, given any finitely
describable initial conditions, the zone of uncertainty
will expand arbitrarily far to the left (right) only, there is positive
probability $\alpha$ that, given a zone of uncertainty
consisting of two, three, or four contiguous black cells:
\begin{enumerate}
\item The effective zone of uncertainty will expand arbitrarily
far to the left, never again going to the right
of the position of the original black cells.
\item The effective zone of uncertainty will never contain less
than two black cells.
\end{enumerate}
\end{lem}
{\sl Proof.} Let $\alpha_1$ be the smallest probability that
any cell will live; and $\alpha_2$ the largest.
By Lemma \ref{l2}, if under any initial conditions there is positive
probability the zone of uncertainty will expand arbitrarily far to the
left only, there is positive probability that under these conditions
the left border of the effective zone of uncertainty will expand
arbitrarily far to the left, and the right border recede arbitrarily
far to the left.
By Corollary \ref{mix0}, if there are initial conditions under
which there is positive probability of the effective
zone of uncertainty behaving as above, then, for any $\epsilon > 0$,
there are initial conditions $I_{\epsilon}$ under which there is
probability $1 - \epsilon$ of it behaving as above.
Now, suppose that under $I_{\epsilon}$ there is probability
$1$ that the zone of uncertainty will eventually contain
one cell. Then there is probability $1 - \alpha_2$ that this
zone will eventually disappear. If $\epsilon$ is small
enough, this is a contradiction.
Therefore, at least for small enough $\epsilon$, under conditions
$I_{\epsilon}$, there is positive probability that the zone will
not only behave as above, but always contain at least two cells.
Let $\gamma$ be one such probability, for any particular conditions
$I_{\epsilon}$. Let $c$ be the cell on the right border
of $I_{\epsilon}$.
Now, under $I_{\epsilon}$, there must be some
$n$ such that there is probability at least ${\gamma \over 2}$ that
the zone of uncertainty will never reach cell $c+n$.
Let $m$ be the length of the zone of uncertainty under $I_{\epsilon}$,
plus $n$.
The proof is completed by noting that if there are two black
cells in the zone of uncertainty, there is probability at least
$\alpha_1 (1-\alpha_2)^{m} \alpha_1$ that the right black cell lives,
its $m$ neighbors on the right die, and the next white cell lives.
Given this, there is probability $\left({1 \over 2}\right)^m$
that the cells that die form the pattern of $I_{\epsilon}$, with
$n$ white cells to the right. Finally, given this pattern, which is
just like that of $I_{\epsilon}$ except for one black cell $c$, $n$
units to the right, there is probability at least ${\gamma \over 2} \alpha_1
(1 - \alpha_2)^2$ that $c$ dies, its two neighbors live, and
the rest of the zone does not ever reach cell $c$.
Thus, there is probability at least $\alpha = {\gamma \over 2} (\alpha_1)^3
(1-\alpha_2)^{m+2} \left({1 \over 2}\right)^m$ of
events transpiring as desired.
\rule {2mm}{3mm}
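The closing bound can be sanity-checked numerically. The sketch below is illustrative only: the parameter values chosen for $\gamma$, $\alpha_1$, $\alpha_2$, and $m$ are hypothetical, not derived from any particular game; the point is merely that $\alpha$ is a fixed positive constant once these are fixed.

```python
# Hypothetical parameter values (not from any particular game), used
# only to confirm that the bound alpha is a positive constant.
def mix4_lower_bound(gamma, a1, a2, m):
    """alpha = (gamma/2) * a1^3 * (1 - a2)^(m + 2) * (1/2)^m."""
    return (gamma / 2) * a1**3 * (1 - a2)**(m + 2) * 0.5**m

alpha = mix4_lower_bound(gamma=0.1, a1=0.2, a2=0.8, m=5)
assert alpha > 0  # positive whenever gamma > 0, a1 > 0, and a2 < 1
```

Any choice with $\gamma > 0$, $0 < \alpha_1 \le \alpha_2 < 1$, and $m \ge 0$ yields a strictly positive $\alpha$.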
The main theorem now follows.
\begin{thm} [The Double Glider Theorem]
\label{main}
Let $G$ be a simple cellular game of radius $r$, with left/right
symmetry. Then, under $G$, with finite initial conditions,
the probability that the zone of uncertainty will extend
arbitrarily far in one direction only is zero.
\end{thm}
{\sl Proof.}
Suppose that under $G$, under any finite initial conditions,
there is positive probability of the zone of uncertainty
extending arbitrarily far to the left (or right) only. Without
loss of generality, since $G$ is symmetric, let us say the left.
Then, by Lemma \ref{l2}, there is positive probability that
both left and right effective borders of the zone of uncertainty
will move arbitrarily far to the left. Since this refers
to a countable Boolean combination of finite histories, in which
no finite history determines membership,
Corollary \ref{mix0} can be applied. That is, for any $\epsilon > 0$,
there are fixed initial conditions $I_{\epsilon}$ such that, given these
initial conditions, the probability of this happening
is greater than $1 - \epsilon$.
Since this is true for any $\epsilon > 0$, let us assume
that $\epsilon < {\alpha_1 \over 2}$, where $\alpha_1$ is
the smallest probability, under $G$, that any cell will
stay alive.
Also, without loss of generality, let the rightmost black cell, under
$I_{\epsilon}$, be regarded as cell $0$. Thus, there is
probability at least $1 - \epsilon$ that, in some generation $g$,
the rightmost cell in the effective zone of uncertainty will
be at a nonnegative position, and in all subsequent generations
at a negative one.
Thus, $I_{\epsilon}$ satisfies the conditions for Lemma \ref{mix2a}.
That is, there is a constant $\gamma$ such that if, under $I_{\epsilon}$,
this crossover does occur, the probability it does so in a generation
in which, at the beginning of the generation, there is only one nonnegative
black cell (and that cell is at position $r$ or greater)
is at least $\gamma$. This $\gamma$ is not dependent on any
other characteristics of $I_{\epsilon}$, but only on $G$.
Let $X_g$ be $1$ if:
\begin{enumerate}
\item At the beginning of generation $g$, there is only one
nonnegative black cell.
\item In generations $g+1$ and later, the effective zone
of uncertainty stays out of the nonnegative area. That is,
it no longer contains nonnegative cells.
\end{enumerate}
Let $X_g$ be $0$ otherwise.
Thus, given initial conditions $I_{\epsilon}$, we can say that
\begin{eqnarray}
\label{m1a}
\gamma (1 - \epsilon) <
\sum_g P(X_g = 1,X_k = 0 \forall k < g) = \\
\sum_{g,h} P(H(1,g) = h)
P(X_g = 1,
X_k = 0 \forall k < g \vert H(1,g) = h)
\end{eqnarray}
Note that if $X_k$, with $k < g$ is $1$, $X_g$ must be $0$; that
is, the effective zone of uncertainty can leave the nonnegative area
for the last time in only one generation. Thus, the right
side of (\ref{m1a}) becomes
\begin{equation}
\label{m1b}
\sum_{g,h} P(H(1,g) = h) P(X_g = 1 \vert H(1,g) = h)
\end{equation}
Separating out the effects of the next generation,
we get
\begin{eqnarray}
\label{m1c}
\sum_{g,h,h'} P(H(1,g) = h) P(H(g,g+1) = h' \vert H(1,g) =
h)\\
P(X_g = 1 \vert H(1,g) = h,H(g,g+1)=h')
\end{eqnarray}
Now, for it to be possible that $X_g$ be $1$, the cell history in
generations $1$ through $g-1$ must meet certain conditions. That
is, at the beginning of generation $g$ there must be only one black
nonnegative cell, at position $r$ or greater; in other words, $F_1(H(1,g))$
must be $1$. In addition, the history of generation $g$ must meet certain
requirements. That is, in generation $g+1$ the zone of
uncertainty must contain only negative cells; in other words,
$F_2(H(g,g+1))$ must be $1$. Thus, (\ref {m1c}) becomes
\begin{eqnarray}
\label{m1}
\sum_{g,h,h'} P(H(1,g) = h) F_1(H(1,g))\\
P(H(g,g+1) = h' \vert H(1,g) = h) \\
F_2(H(g,g+1))
P(X_g = 1 \vert H(1,g) = h,H(g,g+1)=h')
\end{eqnarray}
Now, given initial conditions $I_{\epsilon}$, the probability that the
zone of uncertainty does {\sl not} extend arbitrarily far to the left
only (that is, that it extends arbitrarily far to the right, or
eventually disappears) has to be less than $\epsilon$. Since
$\epsilon$ is arbitrary,
showing that this probability must be greater than some
constant dependent only on $G$ will force a contradiction.
To show this, let $r(g)$ be the position of the rightmost
cell in the effective zone of uncertainty in generation $g$. Furthermore,
let $p$ be the probability that the zone extends
arbitrarily far to the right, or eventually disappears.
Then $p$ is larger than the probability that one domain in the
middle of the zone of uncertainty grows arbitrarily large in
both directions. This, in turn, is larger than the probability
that, for some generation $g$ all the conditions below hold:
\begin{enumerate}
\item
In generation $g$, there is only one nonnegative black cell $c$,
at position $r$ or greater.
\item
In generation $g+1$, there are two, three or four nonnegative
black cells, all adjacent to one another, and all at
positions $r$ or greater.
\item
The white domain $D$ which in generation $g$ is
between cell $c$ and all other black cells, grows
arbitrarily large in both directions.
\item
In generations $g+2$ and later, either the leftmost living cell of $D$
is at position $0$ or less, or the leftmost cell in $D$ is
at position $0$ or less, and the leftmost living cell after that
is white. That is, the left effective border of $D$ is always
at position $0$ or less.
\item
In generations $g+2$ and later, either the rightmost living cell of $D$
is at position $r-1$ or less, or the rightmost cell in $D$ is
at position $r-1$ or less, and the rightmost living cell after that
is white. That is, the right effective border of $D$ is always
at position $r-1$ or less.
\item
In generation $g+2$ and after, there are always more than
two black cells to the right of $D$; that is, at positions
$r$ or greater.
\end{enumerate}
That is, a white domain $D$ develops in generation $g$, and the
two ``gliders'' on each side of $D$ in that generation
fly apart, and never touch. The right glider, after generation
$g$, always contains at least two black cells; and both
gliders continue to exist forever.
Now, we examine the probability of these events happening.
Let $Y_g$ be $1$ if the above events are satisfied for generation
$g$, and $0$ otherwise.
Thus, the probability that the zone of uncertainty grows
arbitrarily large in both directions is greater than
\begin{equation}
\label{m2}
\sum_{g,h} P(H(1,g) = h) P(Y_g = 1, Y_k = 0 \forall k < g \vert
H(1,g) = h)
\end{equation}
Now, $Y_g$ and $Y_k$, with $k < g$, cannot both be $1$. The
reason is that for $Y_k$ to be $1$, there must be exactly one
nonnegative black cell at the beginning of generation $k$, and
none in any later generation. Thus, (\ref{m2}) is equivalent to
\begin{equation}
\label{m3a}
\sum_{g,h} P(H(1,g) = h) P(Y_g = 1 \vert H(1,g) = h)
\end{equation}
or, separating out the effects of generation $g$, we have
\begin{eqnarray}
\label{m3b}
\sum_{g,h,h'} P(H(1,g) = h) P(H(g,g+1) = h' \vert H(1,g) = h)\\
P(Y_g = 1 \vert H(1,g) = h,H(g,g+1)=h')
\end{eqnarray}
For $Y_g$ to be $1$, the cell history in
generations $1$ through $g-1$ must meet the same conditions that
enable $X_g$ to be $1$; that is, in generation $g$ there must be
only one black nonnegative cell, at position $r$ or greater.
In addition, the history of generation $g$ must
meet certain requirements. That is, in generation $g+1$ there
must be two, three or four nonnegative black cells, all adjacent
to one another, and all in positions $r$ or greater; that is,
$F_3(H(g,g+1))$ must be $1$. Thus, (\ref {m3b}) becomes
\begin{eqnarray}
\label{m3c}
\sum_{g,h,h'} P(H(1,g) = h) F_1(H(1,g))\\
P(H(g,g+1) = h' \vert H(1,g) = h)\\
F_3(H(g,g+1))
P(Y_g = 1 \vert H(1,g) = h,H(g,g+1)=h')
\end{eqnarray}
By Lemma \ref{mix2}, for every $1$-generation history $h_2$
such that $F_2(h_2)$ $= 1$, there is a constant $\beta$
depending only on $G$, and a $1$-generation history
$h_3$ such that
\begin{enumerate}
\item
$F_3(h_3) = 1$.
\item
Initial conditions are the same as under $h_2$.
\item
For any previous
history (starting at generation $0$) $h$, we have
\begin{eqnarray}
\label{m4a}
P(H(g,g+1) = h_3 \vert H(1,g) = h) \ge\\
\beta P(H(g,g+1) = h_2 \vert H(1,g) = h)
\end{eqnarray}
\item
At the end of generation $g$, given history
$h_3$, the negative black cells are exactly
the same as those at the end of $g$ given
$h_2$.
Furthermore, for no two $h_2$ will this $h_3$ be the same.
\end {enumerate}
Thus, (\ref {m3c}) is greater than
\begin{eqnarray}
\label{m5}
\sum_{g,h,h'} P(H(1,g) = h) F_1(H(1,g))\\
\beta P(H(g,g+1) = h' \vert H(1,g) = h) \\
F_2(H(g,g+1))
P(Y_g = 1 \vert H(1,g) = h,H(g,g+1)=h')
\end{eqnarray}
For $Y_g$ to be true, the white domain $D$ must, from that
point on, include at least cells $0$ through $r$.
Therefore, by Theorem \ref{newest},
in all infinite histories for which $Y_g$ is $1$, and
all finite histories in which the possibility of $Y_g$ remaining
$1$ stays open, the actions of cells on the two sides
of $D$ remain independent of each other. Hence, these
actions can be considered separately, as if we were dealing
with two different games.
Thus, at the beginning of generation $g+2$,
the probability that behavior on both
sides of $D$ will be appropriate is the product of the probabilities
of appropriate behavior on each side.
The probability that the behavior on the left side is appropriate
is the same as the probability that behavior on the left side
would be appropriate if, at this point, the negative black
cells were exactly the same as they are now, but there were
no nonnegative black cells.
Similarly, the probability that behavior on the right side
is appropriate is just the probability that all behavior
is appropriate, if the zone of uncertainty consisted
only of two, three or four contiguous black cells. By Lemma \ref{mix4},
this probability is at least $\alpha$.
Note that this is where the left/right symmetry of $G$
comes in; that is, the probability that a symmetric zone
of uncertainty will glide arbitrarily far to the left
only must be the same as the probability that it will glide
arbitrarily far to the right only.
Thus, (\ref {m5}) becomes
\begin{eqnarray}
\label{m6}
\sum_{g,h,h'} P(H(1,g) = h) F_1(H(1,g))\\
\beta P(H(g,g+1) = h' \vert H(1,g) = h) \\
F_2(H(g,g+1))
P(X_g = 1 \vert H(1,g) = h,H(g,g+1)=h') \alpha
\end{eqnarray}
This sum is less than the probability, given initial conditions
$I_{\epsilon}$, that the zone of
uncertainty will expand arbitrarily far in both directions;
however, by comparison to (\ref{m1a}) through (\ref{m1}), it
is seen to be greater than $\beta \alpha \gamma (1 - \epsilon)$,
with $\alpha$, $\beta$, and $\gamma$ depending only on the
game, not on the initial conditions.
If $\epsilon$ is small enough, this contradicts
the assumption that, given these conditions, this probability
must be less than $\epsilon$.
\rule {2mm}{3mm}
\section{Standard Restricted Initial Conditions}
\label{stanrec}
It may be useful to consider another form of finitely
describable initial conditions, defined as follows:
\begin{defn}
{\bf Standard restricted initial conditions} are conditions such
that there is a rightmost black cell, and a leftmost white cell.
\end{defn}
In other words, under standard restricted initial conditions,
an infinite black domain is followed, left to right, by
zero, two, or any other even number of finite domains (of
alternate colors), followed by an infinite white domain.
The zone of uncertainty is defined similarly as for
finitely describable initial conditions.
\begin{defn}
\label{zone2}
Under standard restricted initial conditions,
the {\bf zone of uncertainty} consists of those
finite domains (if any) in between the two infinite domains.
\end{defn}
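Since such conditions are finitely describable, they admit a concrete representation. The sketch below is a minimal illustration (the function and field names are my own, not from the text): only the finite middle domains are stored, while the infinite black domain on the left and the infinite white domain on the right are left implicit.

```python
# A standard restricted initial condition: an infinite black domain on
# the left, an even number of alternating finite domains in the middle,
# and an infinite white domain on the right.  Each finite domain is a
# (color, length) pair; since adjacent domains must differ in color,
# the first finite domain (bordering the infinite black one) is white
# and the last (bordering the infinite white one) is black.
def zone_of_uncertainty(finite_domains):
    """Validate the finite domains and return them as the zone."""
    assert len(finite_domains) % 2 == 0, "even number of finite domains"
    for i, (color, length) in enumerate(finite_domains):
        assert length >= 1, "domains are nonempty"
        assert color == ('white' if i % 2 == 0 else 'black')
    return list(finite_domains)

# The zone may be empty (the two infinite domains border directly) ...
assert zone_of_uncertainty([]) == []
# ... or hold any even number of alternating finite domains.
zone = zone_of_uncertainty([('white', 3), ('black', 2)])
assert len(zone) == 2
```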
In some respects, the behavior of cellular games under
these conditions is easier to analyze. Under finite initial
conditions, since there are finitely many black cells, there is
always positive probability that all black cells die out.
This essentially ends the course of the game, and so makes it
awkward to discuss the long-term behavior of the system. Under standard
restricted initial conditions, however, the two infinite domains
cannot merge, and cells of each color will always be present.
Behavior under standard restricted initial conditions can
be delineated as follows:
\begin{thm}
\label{t}
Let $G$ be a simple cellular game.
Then, under standard restricted initial conditions, one,
but not both, of
the two statements below hold:
\begin{enumerate}
\item The zone of uncertainty will, almost always, become empty
infinitely many times.
\item It will, almost always, become empty only finitely many
times.
\end{enumerate}
\end{thm}
{\sl Proof.} Suppose $G$ is such that, whenever the zone of uncertainty
is empty, there is positive probability $p$ that it is empty for the
last time. Then the probability that it will become empty
infinitely many times is
\begin{equation}
\label{ts1}
\lim_{n \rightarrow \infty} (1-p)^n = 0
\end{equation}
\rule {2mm}{3mm}
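The geometric decay in (\ref{ts1}) is easy to illustrate numerically: if each emptying of the zone has a fixed probability $p > 0$ of being the last, the chance of at least $n$ further emptyings is at most $(1-p)^n$, which vanishes. The value of $p$ below is arbitrary.

```python
# If each emptying of the zone of uncertainty is the last with
# probability p > 0, then the chance of at least n further emptyings
# is at most (1 - p)**n, which tends to 0 as n grows.
p = 0.1  # arbitrary illustrative value
bounds = [(1 - p)**n for n in (10, 100, 500)]
assert bounds[0] > bounds[1] > bounds[2]  # strictly decreasing in n
assert bounds[2] < 1e-20                  # already vanishingly small
```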
\begin{defn} A {\bf clumping process} is
a simple cellular game in which, under standard restricted
initial conditions, the zone of uncertainty almost always becomes
empty infinitely many times.
\end{defn}
\begin{defn}
Let a simple cellular game in which this zone, almost always,
becomes empty only finitely many times be called a {\bf mixing process.}
\end{defn}
Now, there is another kind of symmetry which may be
applied to cellular games; namely, they may be black/white
symmetric, as well as left/right.
We examine clumping processes which have both symmetries.
We show that if $G$ is a clumping process with both
such symmetries, each cell will change color infinitely
many times. To do this, we use a theorem which can be applied to
all symmetric, one-dimensional random walks with the Markov property.
In this theorem, we
show that the walker will {\sl cross} any position
infinitely many times. (In the ``usual'' walk,
in which the walker can only move one unit at a
time, this means that the walker will {\sl visit} every position infinitely
many times.)
\begin{thm}
Let $M$ be a one-dimensional random walk with the Markov
property. Let $X(t)$ be the position of that walk at time $t$.
Let $P_{0,1}$ equal $p_0 > 0$, $P_{0,k}$ equal $P_{i,i+k} \forall i$, and
$P_{0,k}$ equal $P_{0,-k}$ for all $k$. Then, for any $n$,
any $g$, and any value of $X(g)$, the quantity
$P( \exists h, h > g, X(h) < n)$ equals $P(\exists h, h > g, X(h) > n)$,
and they both equal $1$. That is, this random walk will almost always
cross every position infinitely many times.
\end{thm}
{\sl Proof.}
First, the probability that $X(g)$ will stay bounded is $0$.
For suppose it were not. Then, there would be some $n$
such that
\begin{equation}
\label{ts2}
P(n = \limsup_{k \rightarrow \infty} \vert X(k) \vert) > 0
\end{equation}
However, we know $P_{-n,-n-1} = P_{n,n+1} = p_0 > 0 \forall n$.
Therefore, if the walk reaches position
$n$ ($-n$) infinitely often, it will almost always reach position
$n+1$ ($-n-1$) infinitely often.
We now show that the probability that there
are infinitely many $k$, such that $X(k)$ is not the same sign
as $X(k+1)$, is $1$.
Let a sequence $\{C_i\}$ with each $C_i \in \{-1,1\}$,
and integer sequences $\{k_i\}$ and
$\{n_i\}$, be constructed as follows: By the above discussion,
we know that, with probability $1$, there must eventually
be a $k$ for which $\vert X(k) \vert \ge 2$.
Let $k_1$ be the first $k$ for which this is true, and let $n_1 =
X(k_1)$. Let $C_1$ be $1$ if $X(k_1) \ge 2$, and $-1$ if $X(k_1)
\le -2$.
Given $C_{i-1}$, $k_{i-1}$, and $n_{i-1}$, such that $X(k_{i-1}) =
n_{i-1}$, let $C_{i}$, $k_{i}$, and $n_{i}$ be constructed as
follows. Let $k_i$ be the first $k$ such that $\vert X(k) -
n_{i-1} \vert \ge \vert n_{i-1} \vert$; note that there will
almost always be such a $k_i$. Let $n_i = X(k_i)$, and let $C_i =
1$ if $n_i \ge 2 n_{i-1}$, and $-1$ otherwise. Thus,
if $C_i$ is a different sign from $C_{i-1}$, then $X(k_i)$ will be a
different sign from $X(k_{i-1})$.
Now, since $P_{0,-k} = P_{0,k} = P_{n,n+k} \forall n,k$
the probability that each $C_i$ is the
same sign as the previous is ${1 \over 2}$. Since
each $C_i$ is independent of all the others, they will, therefore,
almost always change sign infinitely many times.
The same argument can be used to show that, for any $c$,
$X(k) - c$ will change sign infinitely often, and hence
that any point will be crossed infinitely many times.
\rule{2mm}{3mm}
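The simplest walk covered by the theorem is the nearest-neighbor walk with $P_{0,1} = P_{0,-1} = {1 \over 2}$. The simulation sketch below is illustrative only; the number of returns to the origin is printed rather than asserted, since it varies from run to run, and the assertions check just that the trajectory is a valid unit-step walk.

```python
import random

def simple_walk(steps, seed=0):
    """Symmetric nearest-neighbor random walk started at the origin."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

path = simple_walk(10_000)
assert len(path) == 10_001   # one position per step, plus the start
assert path[0] == 0          # starts at the origin
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))  # unit steps
# For this walk, crossing a position means visiting it; report visits to 0.
print("returns to the origin:", sum(1 for x in path[1:] if x == 0))
```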
\begin{cor}
Let $G$ be a clumping process with both left/right
and black/white symmetry. Let $G$ evolve under standard
restricted initial conditions. Then, under $G$, each cell will,
almost always, change color infinitely many times.
\end{cor}
{\sl Proof.} Let $X(i)$ be the position of the leftmost
cell in the white domain, the $i$th time the zone of
uncertainty is empty. Then we know there will, almost
always, be infinitely many $X(i)$. Since cellular
game evolution is independent of exact location, $P_{0,k} =
P_{n,n+k} \forall n,k$. Since $G$ is symmetric
in both senses, $P_{0,-k}$ will equal $P_{0,k}$ for all $k$.
Now, let $\alpha$ be the smallest probability that
any cell lives, and $\beta$ the largest. By definition,
they are both positive. Let the zone of uncertainty
be empty in generation $g$ for the $i$th time. There
is probability at least $\alpha (1 - \beta) \alpha$ that
cell $X(i) - 1$ lives, cell $X(i)$ dies, and cell
$X(i) + 1$ lives. Given these events, there is probability
${1 \over 2}$ that cell $X(i)$ becomes white in
the next generation, thus ensuring that $X(i+1) = X(i) + 1$.
Thus $P_{X(i),X(i)+1}$, and hence $P_{0,1}$ and $P_{0,-1}$
must be positive. Therefore the process $X(0), X(1), \ldots,
X(n), \ldots$ satisfies the requirements of the above theorem.
\rule{2mm}{3mm}
Similar results, however, have not yet been obtained for
mixing processes. That is, we cannot show that for
mixing processes with both left/right and black/white
symmetry, evolving under standard restricted initial
conditions, the zone of uncertainty will, almost always,
expand arbitrarily far in both directions.
As shown before, there cannot,
under these conditions, be a ``gli\-der'' with two
domains of the {\sl same} color on each side of it.
This does not automatically imply that there
cannot be a ``glider'' with two domains of {\sl different}
colors on each side of it. However, the one fact
does suggest the other, which is here presented as a conjecture.
\newtheorem{conj}[defn]{Conjecture}
\begin{conj}
Let $G$ be a simple cellular game with both left/\-right and
black/white symmetry. Then, under standard restricted
initial conditions, the probability that the zone of
uncertainty will expand arbitrarily far in one direction
only is $0$.
\end{conj}
Note that if this conjecture is true, it can be shown
that under both finite and standard restricted initial
conditions, no finite domain $D$ will, with probability $1$,
grow arbitrarily large. This would be done by considering
the two areas between $D$ and the infinite domains on the left and
right to be gliders. If $D$ were to grow arbitrarily large,
each of these gliders could be shown not to be affected by what happens
on the other side of $D$. They could thus be considered
to be ``gliding'' arbitrarily far in one direction, between
two infinite domains. By Theorem \ref{main} (The
Double Glider Theorem), this is
not possible if the two domains are the same color;
and, if the above conjecture is true, this would not be possible
if the two domains are different colors.
\section{Examples}
\label{examples}
At this point, one may ask if either mixing processes or
clumping processes exist. Computer simulations suggest
that both kinds of behavior are indeed possible.
The experiments described in this section simulate one-dimensional
simple games of radius $1$. In these games,
the life probability of a cell is one value, $p_1$, if it is the
same color as both of its neighbors, and a different
value, $p_2$, otherwise. These games are thus both left/right
and black/white symmetric. Let such games be called
``join/mix'' processes.
Using the definition of simple cellular game, these processes can be
specified more formally as follows:
\begin{itemize}
\item There is one cell for each integer, or each integer mod $k$.
\item In each generation, each cell is either white or black.
\item If a cell is the same color as both of its neighbors,
its probability of living in that generation is $p_1 > 0$. Otherwise,
its probability of living is $p_2 > 0$.
\item If a cell lives in a generation $g$, it keeps its color
in generation $g+1$.
\item If a cell dies in generation $g$, its color in
generation $g+1$ is either that of its nearest living neighbor
to the left, or to the right, with a $50\%$ probability of each.
\item If, in generation $g$, a cell has no living neighbors
on either side, it has a $50\%$ probability of assuming either
color in generation $g+1$.
\end{itemize}
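The rules above translate directly into a short simulation. The following Python sketch is one hedged reading of the specification; the ring size, initial configuration, and the Join-or-Die probabilities $p_1 = 0.85$, $p_2 = 0.15$ are illustrative choices:

```python
import random

def join_mix_step(colors, p1, p2, rng):
    """One generation of a join/mix process on a ring of cells.

    colors: list of 0/1 cell colors. A cell lives with probability p1
    if it matches both neighbors and p2 otherwise; a dead cell copies
    the color of its nearest living neighbor on a randomly chosen side,
    or takes a random color if no cell is alive.
    """
    n = len(colors)
    alive = []
    for i, c in enumerate(colors):
        same = (c == colors[i - 1]) and (c == colors[(i + 1) % n])
        alive.append(rng.random() < (p1 if same else p2))
    new = list(colors)
    for i in range(n):
        if alive[i]:
            continue                       # living cells keep their color
        step = rng.choice((-1, 1))         # pick left or right, 50/50
        j, found = i, False
        for _ in range(n - 1):             # scan that side for a living cell
            j = (j + step) % n
            if alive[j]:
                new[i] = colors[j]
                found = True
                break
        if not found:                      # no living cell anywhere
            new[i] = rng.choice((0, 1))
    return new

rng = random.Random(1)
ring = [0] * 40 + [1] * 10 + [0] * 40      # one black domain in a white ring
for _ in range(100):
    ring = join_mix_step(ring, p1=0.85, p2=0.15, rng=rng)
print(sum(ring), "black cells after 100 generations")
```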
In computer experiments, games of this type are run on
a circular lattice of cells. Initially, two black
domains are placed in a mostly white area. Figure \ref{square} shows
how results vary as $p_1$ and $p_2$ vary. In particular, if $p_1$
is high, there seems to be little noise at the borders
between domains. In such cases, $p_2$ determines the rate
of domain movement. If, on the other hand, $p_1$ is low
and $p_2$ high, the noise between domains seems to grow
so fast it quickly takes over the ring. If $p_1$ and
$p_2$ are both low, the asymptotic behavior of the process
is not readily apparent. However, the resemblance to
natural structures is noticeable.
\begin{defn}
The join/mix game such that $p_1 = 0.85$ and $p_2 = 0.15$
is called the {\bf Join or Die Process}.
\end{defn}
The process is given this name because a cell must join; that
is, be the same color as both of its neighbors, or else
it is very likely to die. Computer simulations suggest that
the Join or Die process is, in fact, a clumping process. That
is, the area of ``noise'' between two large domains appears to stay
quite small most of the time. We thus conjecture:
\begin{conj}
The Join or Die process is a clumping process.
That is, if it evolves under standard restricted initial conditions,
the zone of uncertainty will almost always become empty infinitely
many times.
\end{conj}
Now, consider what happens, under the Join or Die or other
clumping processes, to ``normal'' or ``almost all'' initial
conditions. Let us suppose that average domain size will, almost
always, grow arbitrarily large. Thus, after many generations,
most cells in any given section of the lattice would, most
likely, be in extremely large domains; and a visual depiction of
this section would show large domains, with a noisy boundary
between them (consisting of small domains, many containing no
living cells). The noisy boundary between two such large domains
would, therefore, move in some sort of symmetric random walk; and
it might be unlikely that the noise in the boundary would grow to
significant size, compared to the domains it bordered.
Thus, the evolution of such a process might be very similar to
that of a process in which the size of the ``noise'' between
domains stayed bounded. Let us suppose, without loss of
generality, that the size of the ``noise'' stayed at one cell.
Let us describe such a model (which is {\sl not} a cellular game)
as follows:
\begin{itemize}
\item
There is one cell for each integer.
\item
Each cell, at each time, is in either a black, white, or gray state.
\item
Gray domains, which may be no more than one cell wide, are called
``particles.'' Particles separate black and white domains,
which alternate.
\item
Particles move either to the right or left, in accordance with
some symmetric random walk.
\item
If two particles meet or cross, then two white domains have
absorbed a black domain (or two black domains a white one). Thus,
these two particles, which represent the boundaries between two
domains, disappear.
\end{itemize}
This is, exactly, a stochastic process discovered by Erd\H os and
Ney \cite{erdos} and called the {\sl annihilating particle
model}. And, computer simulations do, indeed, show apparent
similarities of behavior. These similarities suggest that
study of one subject may shed light on the other.
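The annihilating particle model admits an equally short sketch. The version below is a simplification (particles annihilate only when they land on the same site; simultaneous crossings are ignored), so it should be read as an illustration rather than a faithful implementation of the Erd\H os--Ney model:

```python
import random
from collections import Counter

def annihilate_step(positions, rng):
    """One step of a simplified annihilating particle model.

    Each particle (a domain wall) hops one site left or right; particles
    landing on the same site annihilate in pairs, as when two domains of
    the same color absorb the domain between them.
    """
    counts = Counter(p + rng.choice((-1, 1)) for p in positions)
    return sorted(p for p, c in counts.items() if c % 2 == 1)

rng = random.Random(7)
particles = list(range(0, 40, 2))          # 20 walls between domains
for _ in range(500):
    particles = annihilate_step(particles, rng)
print(len(particles), "particles remain")
```

Since particles disappear only in pairs, the particle count decays monotonically and keeps its initial parity, mirroring the coarsening of domains.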
Another join/mix game is the Mixing Process.
\begin{defn}
The join/mix game such that $p_1 = 0.15$ and $p_2 = 0.85$
is called the {\bf Mixing Process}.
\end{defn}
That is, the probabilities are exactly reversed from those
used for the Join or Die process. As this process evolves,
computer experiments suggest that the ``noise'' between
two large domains is likely to grow with time.
\begin{conj}
\label{cmix}
The Mixing Process is a mixing process.
That is, if it evolves
under standard restricted initial conditions, the zone
of uncertainty will almost always grow arbitrarily large.
\end{conj}
\end{thesis}
\section{Introduction}
By far the most abundant nuclei in the universe are hydrogen and helium, still
little modified from their primordial ratio of about 12 to 1 by number. Of all
elements, however, the most abundant absorber is helium, because it is far
less ionized than hydrogen; under the hard photoionizing conditions of
intergalactic space at redshifts greater than 2, HeII outnumbers HI by a
factor $\eta$ of the order of 100, and has a greater Lyman-$\alpha$ line
opacity by a factor of $\eta/4\gg10$. This transition is therefore the most
sensitive observational probe of diffuse matter. Helium opacity is measurable
almost everywhere---as opposed to HI absorption which over most of redshift
space (between the HI Lyman-$\alpha$ forest lines) has a very low optical depth.
HeII Lyman-$\alpha$ absorption occurs in the deep ultraviolet
(303.8~\AA\ in the rest frame, four times shorter than HI
Lyman-$\alpha$), and can only be observed from space, in a small number of
high redshift quasars which lie by chance on relatively clean lines of
sight with little foreground absorption. Three quasars are currently
known for which the HeII Lyman-$\alpha$ transition is
accessible to {\it Hubble Space Telescope} (hereafter {\it HST})
spectroscopy, along with one more quasar at slightly lower redshift
which has been observed by the Hopkins Ultraviolet Telescope (HUT).
Recent observational progress on HeII Lyman-$\alpha$ studies began with
Jakobsen et al.'s\markcite{Jak94} (1994) discovery of the ``HeII Gunn-Peterson effect" in
Q0302-003 ($z=3.285$), followed closely by
Davidsen et al.'s\markcite{David96} (1996)
higher-resolution HUT observation of HS~1700+64 (for which HeII at $z=2.72$
was inaccessible to {\it HST} spectrographs), and Tytler's and Jakobsen's
observation (Tytler \& Jakobsen\markcite{Tyt96} 1996;
Jakobsen\markcite{Jak96} 1996) of the
faintest of the HeII quasars, PKS~1935-692 ($z=3.18$). These papers
established the presence of absorption at
low resolution and suggested the increase of the mean opacity with redshift,
as expected in models of cosmic ionization. Later, Goddard High Resolution
Spectrograph (GHRS) observations of Q0302-003
(Hogan, Anderson \& Rugers\markcite{HAR97} 1997;
hereafter HAR97) and the brighter quasar HE~2347-4342 at $z=2.885$
(Reimers et al.\markcite{Rei97} 1997) added enough resolution to
begin to resolve some features of the helium Lyman-$\alpha$ forest and to
cross-correlate with the absorption observed at the same redshift in HI,
allowing studies of matter in the gaps between the HI forest lines.
These data have provided the best constraints on diffuse matter at these
redshifts, and estimates of gas density give broad agreement with
predictions based on numerical simulations of galaxy formation
and extrapolations from the observed HI Lyman-$\alpha$ forest
(Croft et al.\markcite{Croft97} 1997; Zhang et al.\markcite{Zhang97} 1997;
Zheng, Davidsen, \& Kriss\markcite{Zheng98} 1998).
The detection of the ``proximity zone" around Q0302-003
(HAR97\markcite{HAR97}) also allowed
us to deploy a new set of arguments constraining
ionizing conditions at those redshifts, giving broad agreement with model
expectations (Haardt \& Madau\markcite{Haardt96} 1996; Fardal, Giroux,
\& Shull\markcite{Far98} 1998).
The introduction of the Space Telescope Imaging Spectrograph (hereafter, STIS;
Kimble et al.\markcite{Kim98} 1998)
allows further improvements, including larger spectral range (allowing study
at lower redshift on the same lines of sight), better background subtraction
(allowing more precise estimates of high optical depths), and better
sensitivity (allowing more precise optical depth estimates and in some cases
higher spectral resolution). We have used new STIS data together with archival
{\it HST}/GHRS data to study the faint quasar PKS~1935-692 in enough detail
to detect the HI correlation and the proximity zone, with comparable quality to
the GHRS results on Q0302-003. The results here for PKS~1935-692 are remarkably
consistent with the earlier observations of HAR97\markcite{HAR97} and
Reimers et al.\markcite{Rei97} (1997),
supporting the idea that these few lines of sight can be used to draw general
conclusions about the cosmic evolution of baryon distribution and ionization.
\section{STIS Observations and Reductions for PKS~1935-692}
The new {\it HST} data discussed herein were taken on 2--3 July 1998 (UT)
using STIS. The G140L grating and the FUV-MAMA detector provide
good throughput at modest 1.2~\AA\ spectral resolution, and cover a wide
wavelength range of 1150--1720~\AA. The 0.2$''$ STIS slit was chosen as a
compromise to limit geocoronal contamination, while still providing good UV
throughput. The spectra were taken in ``TIME-TAG" mode, to eventually allow
for even subtle comparisons of data taken during
low vs. high background intervals, etc. However, in this initial study, we
report merely on the combined observations obtained by summing together the
TIME-TAG observations of the July 1998 visit (i.e., our initial analysis
here treats the STIS data as though they were simple ``ACCUM" observations).
The spectra discussed here were processed through the standard STScI/STIS
data reduction pipeline; absolute wavelengths should be calibrated to better
than 0.6~\AA, with the accuracy of absolute spectrophotometry better than
$\sim$10\% (and relative wavelengths and photometry somewhat more accurate;
e.g., see Baum et al.\markcite{Baum96} 1996).
The pipeline calibrated STIS spectrum is shown in Figure 1, overplotted with an
error spectrum which assumes $\sqrt{N}$ statistics
(errors plotted are per pixel, with 2 pixels per
1.2 \AA\ resolution element). These STIS data present a marked improvement
over earlier GHRS and Faint Object Spectrograph data on PKS~1935-692 for a
number of reasons: STIS has much
lower instrumental background which is monitored simultaneously with the
science data, excellent UV throughput, capability for efficient target
acquisition permitting observations in a small aperture, and as a long-slit
device much improved subtraction and monitoring of geocoronal
contamination (e.g., compare Figures 2a and 2b). These aspects are especially
important for PKS~1935-692, which has
the lowest flux near HeII of any of the four ``HeII Gunn-Peterson" quasars
discussed to date, and because geocoronal Lyman-$\alpha$ contaminates the
spectral region of interest shortward of HeII. Figure 1 displays the grand
average of 12,785~s of STIS exposure taken during the 5 {\it HST} orbits of the
July 1998 observations of PKS~1935-692. The data from each individual
{\it HST} orbit are consistent with this average spectrum, except that those
from the second orbit may be systematically somewhat low in flux
(perhaps suggestive of an overestimate of the background level for this one
orbit). Pending a better understanding of the origin of this possibly
discrepant 2nd {\it HST}-orbit pipeline reduction, we conservatively include
it in our subsequent analysis; however, in most instances our results would
be modified only slightly if, for example, just data from the other 4
{\it HST} orbits (10,085~s of exposure) were considered.
\section{Results for PKS~1935-692}
Shown in Figure 2a is a portion of our STIS spectrum, highlighting
a region of interest for HeII considerations. The absorption evident below
about 1270~\AA\ is almost surely due to HeII. The break is observed to occur
near 1271$\pm$2~\AA, consistent with HeII near the redshift of $z=3.185$
quoted for PKS~1935-692 by Jakobsen\markcite{Jak96} (1996).
\footnote{Whether or not $z=3.185$ is
a reliable value for the ``true" cosmic or systemic redshift of the quasar
remains to be determined as, for example, a redshift based on narrow emission
lines has not been published for PKS~1935-692 as far as we are aware;
moreover the redshift quoted in several QSO catalogs is $z=3.17$.}
A chance superposition of a strong HI Lyman limit system causing the
absorption break seems unlikely, as our STIS spectrum does not reveal any
higher order HI Lyman series absorption at $z=0.4$ with HI column density
in excess of
about 10$^{15}$--10$^{16}$cm$^{-2}$, as would likely be detectable if an
unrelated low redshift HI Lyman limit system were the cause of the break
near 1271~\AA. The most persuasively identified HeII absorption features are
those matching redshifts of HI features, including the break and a prominent
void (see below).
Blueward of the HeII break, a ``shelf" of flux is seen in PKS~1935-692 at a
mean level of $3.4\pm 0.2\times 10^{-17}$ erg/sec/cm$^2$/\AA\
and extends to at least about 1250~\AA, or 20~\AA\ from the edge.
A very similar shelf was also detected in Q0302-003 by HAR97\markcite{HAR97},
and as we argued there such a shelf may plausibly be
attributed to a ``proximity effect", due to the fact that more of the helium
is doubly ionized proximal to the hard radiation field of the quasar.
At 1246.5~\AA, a strong recovery void is seen in HeII that is correlated in
redshift with a marked void in HI absorption. The reality of this recovery
feature at 1246.5~\AA\ is hardly in doubt. First, the
statistical significance of the feature is quite high in the
grand average STIS spectrum. Second, the feature is strongly detected in
STIS data from each of the 5 {\it HST} orbits, examined separately.
Finally, this feature is also independently seen in a 41,000~s GHRS spectrum
of PKS~1935-692 available from the {\it HST} archive (see Figure 2b), and
therefore is surely not an instrumental artifact. (Note that the zero-level of
the GHRS spectrum in Figure 2b is unreliable, as the GHRS data were taken using
a $2''\times2''$ aperture and are evidently strongly contaminated by
geocoronal Lyman-$\alpha$ and OI 1302~\AA\ airglow).
Blueward of the proximity shelf and recovery void, the flux at $<$1242~\AA\ in
PKS~1935-692 is further depressed, which we again attribute (see also
Jakobsen et al.\markcite{Jak94} 1994, Davidsen et al.\markcite{David96} 1996,
Jakobsen\markcite{Jak96} 1996,
HAR97\markcite{HAR97}, Reimers et al.\markcite{Rei97} 1997) to absorption by
a more diffuse intergalactic gas. For
comparison to the results of various other studies, we estimate that the
total HeII optical depth toward PKS~1935-692 outside the quasar proximity
zone (and excluding the void at 1246.5~\AA), including both cloud and diffuse
contributions is $\tau_{total}\approx 3.1 (+1.4,-0.6)$ at 95\% confidence
(but note the 99\% confidence interval includes infinite optical depth). Here
we have compared the marginally detected mean flux of
$6.7 \pm 2.6 \times 10^{-18}$ erg/sec/cm$^2$/\AA\ averaged over the
1225--1240~\AA\ region shortward of the shelf/void, to a representative value
of $1.5 \times 10^{-16}$ erg/sec/cm$^2$/\AA\ longward of the break; the flux
longward of the break is estimated from the average flux in the
1305--1320~\AA\ region, although this value is also typical of the continuum
level over a much broader 1300--1550~\AA\ region, as may be seen in Figure 1.
[If we consider data from all {\it HST}
orbits except the second, the mean flux in the 1225--1240~\AA\ region is
$11 \times 10^{-18}$ erg/sec/cm$^2$/\AA, and
hence $\tau_{total}\approx 2.6 (+0.8,-0.4)$.]
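As a consistency check, the quoted optical depths follow from $\tau = \ln(f_{\rm cont}/f_{\rm obs})$ applied to the central flux values above (uncertainties and confidence intervals are ignored in this sketch):

```python
import math

# Mean fluxes from the text, in units of 1e-18 erg/s/cm^2/Angstrom.
f_cont     = 150.0  # continuum longward of the break (1305--1320 A)
f_trough   = 6.7    # 1225--1240 A region, all five HST orbits
f_trough_4 = 11.0   # same region, excluding the second orbit

tau_all = math.log(f_cont / f_trough)
tau_4   = math.log(f_cont / f_trough_4)
print(round(tau_all, 1), round(tau_4, 1))  # -> 3.1 2.6, as quoted
```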
The value for $\tau_{total}$ estimated here for PKS~1935-692 compares well with
that which we estimated shortward of the proximity shelf in Q0302-003
(HAR97\markcite{HAR97})
of $\tau_{total}\approx 2.0 (+1.0,-0.5)$. These values are also compatible
with the Reimers et al.\markcite{Rei97} (1997) spectrum for HE~2347-4342,
if $\tau_{total}$ for HE~2347-4342
is also estimated over a comparably broad redshift interval.
The region blueward of geocoronal Lyman-$\alpha$ in PKS~1935-692 may potentially
prove of interest with additional data and more sophisticated reductions, but
we do not consider it further in this initial study. The strong low-redshift HI
Lyman-$\alpha$ feature at 1583~\AA\ evident in Figure 1
(see Jakobsen\markcite{Jak96} 1996)
might be expected to cause strong HI Lyman limit absorption below $1190$~\AA.
Related $z=0.3$ Lyman-$\beta$ and Lyman-$\gamma$ absorption is likely present
near 1336~\AA\ and 1266~\AA, respectively, although the former
is probably blended with interstellar CII absorption, and the latter hard to
interpret in detail as it falls in the proximity shelf and hence is blended
with HeII absorption.
A more restricted portion of our new PKS~1935-692 STIS spectrum
is displayed in Figure 3, to facilitate comparison with simple models for
the HeII absorption. Following our procedure detailed in HAR97\markcite{HAR97},
we overlay the observed spectrum with a synthetic spectrum
generated from models which assume that the absorption arises
entirely from helium in the same clouds that cause the HI Lyman-$\alpha$
forest absorption. In making these models, we begin with an optical
echelle CTIO spectrum of PKS~1935-692 kindly made available by
Outram et al.\markcite{Out98} (1998), and derive HI line lists (from fitted
Voigt profiles via VPFIT; Webb\markcite{Webb87} 1987).
Then, using the HI line parameters down to a threshold
column density of $\sim 2\times 10^{13}$~cm$^{-2}$, the HI forest cloud Doppler
parameters, redshifts, and column densities are used to predict the HeII
absorption from the same clouds, and the predicted HeII absorption degraded to
the STIS resolution. We also include in our models the opacity contribution
from the strong HI system at $z=0.3$ (using an HI column density estimated from
applying VPFIT to the STIS spectrum).
The most prominent features of
the HeII absorption, the major HeII break itself near 1271~\AA\ and the
marked void at 1246.5~\AA,
are accurately predicted from the HI cloud
models. (Note however that some of the absorption near 1266~\AA\ is
likely related to the low-redshift strong HI Lyman-series system at $z=0.3$).
In addition we note the hint of a correlation
with several other observed HeII absorption
features, in particular a tendency of the HeII flux to
recover in the voids or gaps between HI lines.
With these new results, the correlation of some HeII features with HI
clouds has thus been demonstrated in all three of the HeII quasars
observed with {\it HST} resolution: Q0302-003 (HAR97\markcite{HAR97}),
HE~2347-4342 (Reimers et al.\markcite{Rei97} 1997), and
PKS~1935-692 (this work).
Correlating the HI and HeII absorption yields constraints
on the ionizing spectrum, cloud properties and diffuse
gas density which cannot be deduced from HI absorption alone. We derive
some specific constraints in \S 4.
The models displayed in Figure 3 assume two constant ratios of
HeII to HI for all the clouds, $\eta\equiv N(HeII)/N(HI) = 20$,
corresponding to a spectral slope $\alpha= 1.8$ (roughly appropriate
near the quasar), and $\eta = 100$ (roughly appropriate to our
limit derived below far from the quasar). A change in $\eta$ indeed
appears to be required by the data. There is, of course, some uncertainty
even in the appropriate value to assume for the near-quasar ionizing spectral
slope; Zheng et al.\markcite{Zheng97} (1997) find $\alpha =1.8$ for typical
radio-quiet quasars,
but a somewhat steeper spectral slope of $\alpha=2.2$ ($\eta \sim 35$) for
radio-loud objects, and one might roughly estimate $\alpha \approx 2$
($\eta \sim 30$) from the nearby continuum data displayed in Figure 1
for PKS~1935-692 itself. For such reasons, the models displayed in
Figure 3 are only illustrative. These models also assume
$\xi\equiv b_{He}/b_H= 1$ corresponding
to pure ``turbulent" broadening, giving the maximal HeII/HI optical depth.
The predicted absorption depends sensitively on $\xi$, as illustrated
in Figure 4, which shows the maximal and minimal predicted
absorption for $\eta = 100$ and $\eta = 500$.
For all the models, the HeII absorption
indicates the presence of some additional gas undetected in HI,
especially in most of the voids.
Of course the gas might be detected in HI with better optical data.
Because the HI optical depth is so much smaller than that of HeII, the model
predictions from HI magnify small observed optical depths which
are sensitive to
noise and to systematic uncertainties such as the continuum level.
In this work we do not attempt a detailed reconstruction of these
effects but note that the HeII data require diffuse HeII absorption from
material undetected in HI at the level of the current optical data
(corresponding to $\tau_{\rm HI}$ \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.1), although the amount
of this material required differs substantially in various portions of the
spectrum. For example, note in Figures 3 and 4 the substantial model flux near
1230~\AA\ that is not evident in the HeII observations. It is clear that HeII
data provide the best information about the low-density gas at most redshifts,
and this will be true even with much better HI optical data.
\section{Implications for Diffuse Gas and Ionizing Radiation}
The properties of the absorption we find are similar in
virtually all respects to those we found for Q0302-003, and we
interpret our results in parallel with our paper on that quasar.
We summarize the arguments here and the new quantitative results on
PKS~1935-692; for a more detailed discussion including the derivations,
assumptions, and further references, see the corresponding sections (noted
below) of HAR97\markcite{HAR97}.
\subsection{Conditions Near the Quasar (HAR97 \S 3.2)}
For several arguments concerning the absorption in the proximity
shelf we wish to understand at a rough quantitative level the influence
of the quasar on the ionization of the surrounding gas. PKS~1935-692 is
fainter than Q0302-003 by a factor of about 0.6, at close to the same
redshift, so estimates of its ionizing influence on the foreground gas scale
by this factor. We assume that the HeII ionizing continuum flux (228~\AA) is
comparable to that longward of 304~\AA, probably a fair
assumption since the HeII Lyman-$\alpha$ emission line, where much
of the internally absorbed ionizing continuum would be reradiated,
is not strong. However it is important to allow for the possibility of other
sources of absorption much farther along the line of sight, such as an
accumulation of HI Lyman continuum absorption at lower redshift than the gas
in the proximity zone. Suppose absorption reduces the observed 304~\AA\
flux by a total factor $R^{-1}$; conversely, given the observed flux, the
flux that would be observed at the continuum edge if unabsorbed increases to
$f_\lambda\approx 1.5 \times 10^{-16}R\ {\rm erg\ cm^{-2}\ s^{-1}\ \AA^{-1}}$
(or $f_\nu \approx 5 \times 10^{-30}R\ {\rm erg\ cm^{-2}\ s^{-1}\ Hz^{-1}}$).
We assume an intrinsic power law spectral energy distribution
$f_\nu\equiv f_\lambda c\nu^{-2}\propto \nu^{-\alpha},$
where typically $\alpha\approx 1.8$. Then, at a wavelength offset
$\delta\lambda$ from the HeII edge we are viewing absorption
by HeII which sees an ionizing spectral flux from the quasar,
$$
F_\nu \approx 0.7\times 10^{-23} (\delta\lambda/20\AA)^{-2}
R\ {\rm erg\ cm^{-2}\ s^{-1}\ Hz^{-1}},
$$
and the ionizing photon flux
$$
F_\gamma\equiv \int_{\nu_i}^\infty d\nu(F_\nu/h\nu)\approx 1.1\times 10^{4}
\alpha^{-1}
(\delta\lambda/20\AA)^{-2} R\ {\rm cm^{-2}\ s^{-1}}.
$$
These estimates are for $\Omega=1$, and differ by factors of order unity for
other cosmological models. We apply this estimate recognizing that
uses of the quasar flux for ionization arguments in the following two
subsections will be subject to time variability of the source, so it is not
possible to calibrate the errors in these estimates precisely.
\subsection{Quasar Lifetime and Ionizing Background from the
Proximity Effect (HAR97 \S 3.3)}
With this ionizing flux, the time it takes to ionize helium in a sphere
out to a distance from the quasar corresponding to spectral
offset $\delta\lambda$ is (in a flat cosmological model,
and ignoring recombinations):
$$
t_Q\ge 10^7 h^{-1} \alpha R^{-1} (\delta \lambda/ 20\AA)^3
(\Omega_gh^2/10^{-2})\ {\rm yr}.
$$
Since this is a plausible lifetime for the quasar, the shelf
can be explained as due to the second ionization of the helium
by the light of the quasar itself, even if $R$ is not much
larger than unity.
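For orientation, the lifetime bound above can be evaluated numerically; the parameter values below ($h$, $\alpha$, $R$, $\Omega_g h^2$, $\delta\lambda$) are fiducial choices for illustration, not measurements:

```python
def t_q_lower_bound(h=0.7, alpha=1.8, R=1.0, dlam=20.0, omega_g_h2=0.01):
    """Lower bound on the quasar lifetime (years), per the formula above.

    All defaults are assumed fiducial values, not measured quantities.
    """
    return 1e7 / h * alpha / R * (dlam / 20.0) ** 3 * (omega_g_h2 / 1e-2)

print(f"t_Q >~ {t_q_lower_bound():.1e} yr")  # a few times 10^7 yr
```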
Assume this is the case. Then it may be that the size of the region
influenced primarily by quasar ionization is limited by the quasar lifetime, or
that the edge is defined by the point at which the quasar light matches the
general intergalactic background. In the latter case, we can use the proximity
effect to estimate the background ionizing flux; in the former case, it
gives an upper limit to the background flux.
Taking $\delta\lambda\approx$20~\AA\ as the point where
the quasar ionizing flux is equal to the ionizing background,
the ionizing spectrum has a specific intensity
$J_{228}\approx 0.6\times 10^{-24}R\
{\rm erg\ cm^{-2}\ s^{-1}\ Hz^{-1}\ sr^{-1}}$, implying
a soft spectrum, with ratio of hydrogen to helium
intergalactic ionizing fluxes $S \equiv J_{912}/J_{228}
\approx 1.7\times 10^3 R^{-1} J_{912,-21}$. If $R$ is of
the order of unity, this large ratio implies a very soft
spectrum, consistent with the idea that breakthrough has
not yet been achieved at 228~\AA\ (e.g., Reimers et al.\markcite{Rei97} 1997);
for consistency with other estimates and models (e.g. Haardt
\& Madau\markcite{Haardt96} 1996) which give $\eta\approx 1.7 S\approx 10^2$,
we require $R\ge 10$, or an observed quasar flux smaller by a comparable
factor from its average over the last ionization-response time
($\approx 10^{6\pm1}$ years).
\subsection{Diffuse Gas Near the Quasar (HAR97 \S 3.4)}
In the region near the quasar where its flux dominates
the photoionization, we can use estimates of
the mean optical depth $\tau_{GP}$ to estimate
the density of diffuse gas.
We use the above estimate (\S 4.1) for the
intensity of ionizing radiation at a spectral
offset $\delta\lambda$, tied to the observed
quasar flux, to give the HeII/He ratio. We assume the
standard primordial helium abundance by number
$Y_P=0.24$ to derive a total mass density from $n(He)$.
Absorption is computed in the optically-thin limit (that is,
not counting atoms whose absorption is hidden in saturated
lines) and ignoring recombinations (a valid assumption
for low overdensity). Relaxing either of these assumptions
results in a larger density, so a lower limit to
the required overall gas density is
$$
\Omega_g\approx 0.01 \tau_{GP}^{1/2} R^{1/2}(\alpha/1.5)^{-1/2}
(\delta\lambda/20\AA)^{-1}(h/0.7)^{1/2}
$$
This limit may be applied to our data on the proximity shelf in the voids
between HI clouds, where the absorption is likely dominated by underdense
diffuse gas. We should adopt $R=1$ in foreground absorption, and an optical
depth $\tau_{GP}$\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1 in the proximity zone (crudely accounting for the
discrete cloud contribution to the opacity), to derive a lower limit on
$\Omega_g$. (The limit is again
subject to uncertainty from quasar variability over $10^{6\pm1}$ year
timescales, and so cannot be precisely calibrated; to lower the limit however,
the quasar in this case would have to have been dimmer in the past,
making the large proximity zone discussed in \S 4.2 even harder
to explain).
The allowed range for the total baryon density from nucleosynthesis and from
low redshift direct measurements is about
$\Omega_b\approx 0.01$ to $0.04$ (see Fukugita, Hogan, \& Peebles\markcite
{Fuk98} 1998). Thus a substantial fraction of all the baryons is
needed in diffuse, optically-thin gas to produce the observed absorption
in the near-quasar region. This also accords with
expectations from gravitational-instability models.
Note that the ``near-quasar region" extends over about 4,000 km/sec,
many correlation lengths,
so the distribution of the gas, clouds and dark matter over this
region is probably typical of the universe as a whole, even
if the ionization state is not.
The low-$\tau$ void at 1246.5~\AA\ in PKS~1935-692 can be explained as a
modest underdensity ($\sim 10^{-1}$) of gas,
extending over a fairly large interval of 3~\AA\ ($\approx$ 1000 km/sec).
This type of structure is expected on occasion.
\subsection{Diffuse Gas in the HI Lyman-$\alpha$ Voids (HAR97 \S 3.6, \S 3.9)}
We now turn to results which are pure foreground effects,
not dependent on the properties of the quasar.
First, a conservative lower limit on the
gas density can be derived from simply assuming that all of the
helium is in the form of HeII. This yields
$$
\Omega_g h^2 \ge 1.7\times 10^{-5}h
\tau_{GP}[n({\rm HeII)}/n({\rm He})][(1+z)/4.185]^{-3/2}.
$$
At any given redshift, this is the density implied
to achieve an optical depth $\tau_{GP}$. If one
considers the integrated contribution of
all matter then this applies regardless of saturated lines or
thermal broadening, since it simply counts the mean
number of atoms needed to achieve a given optical depth
and this increases with any saturation.
In the gaps far between identified lines, we know
there is still absorption (beyond the thermal wings of the lines)
and this gives the minimal amount of matter required to
produce absorption at those redshifts.
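As a rough numerical illustration of how this bound scales (our own sketch; the fiducial values $h=0.7$, $\tau_{GP}=1$, a pure-HeII helium fraction, and $z=3.185$ are illustrative choices, not measurements quoted in the text):

```python
# Lower limit on the gas density from the HeII Gunn-Peterson optical depth:
# Omega_g h^2 >= 1.7e-5 h tau_GP [n(HeII)/n(He)] [(1+z)/4.185]^(-3/2).
def omega_g_lower_limit(tau_gp, he2_fraction, z, h=0.7):
    """Return the lower limit on Omega_g (the h^2 factor divided out)."""
    omega_h2 = 1.7e-5 * h * tau_gp * he2_fraction * ((1.0 + z) / 4.185) ** -1.5
    return omega_h2 / h**2

# Fiducial example: tau_GP = 1, all helium as HeII, z chosen so that the
# redshift factor is unity.
print(f"Omega_g >= {omega_g_lower_limit(1.0, 1.0, 3.185):.2e}")
```

For these inputs the bound is about $2.4\times 10^{-5}$, far below the nucleosynthesis range, which is why it is described as conservative.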
We can reverse this argument if, as observed, some flux
is detected far from the quasar. It is implausible to evacuate
gas to a very low density (e.g. $\Omega \le 10^{-4}$) by
gravitational instability. If detectable flux gets through then
over some redshift intervals
the helium is mostly doubly ionized HeIII.
There may be unresolved gaps between HeIII regions of high opacity
but they can fill at most a fraction $1- \langle\tau\rangle^{-1}$ of the
volume, where $\langle\tau\rangle$ is the smoothed optical depth.
Note that Reimers et al.\markcite{Rei97} (1997) found evidence in HE~2347-4342
for large (6--10 \AA) regions of the spectrum consistent with zero flux
(see Figure 5c). If zero-flux regions are large and common then the epoch of
HeII ionization is being observed. The fact that no definitive evidence
appears for such regions in PKS~1935-692 (nor in GHRS spectra for Q0302-003)
is consistent with HeII ionization being mostly finished. The differing
results for HE~2347-4342 may be due to real variations in conditions over the
limited spectral (redshift) regions analyzed thus far, or chance differences
between the lines of sight, or to artifacts caused by the imperfect control
of backgrounds in GHRS (e.g., HAR97\markcite{HAR97} \S 2,
Heap\markcite{Heap97} 1997). For
example, if we re-reduce the Reimers et al. HE~2347-4342 GHRS spectrum using
only a subset of the data taken during intervals of low noise background
(defined arbitrarily as those 13,700~s of data during
which mean GHRS background count rates were $<0.008$ counts/sec/diode), we
find more consistency with the spectra of the other two quasars,
including a proximity shelf from the quasar itself (see Figure 5d).
On the other hand, such a re-reduction may also yield a biased estimate of the
zero-level, because for GHRS (unlike STIS) background noise events
(which occur in ``bursts") are not measured strictly simultaneously
with the on-object counts.
The true situation should be resolved soon with additional STIS data on all
three quasars.
\subsection{Ionizing Spectrum Far from the Quasar (HAR97 \S 3.7)}
As in Q0302-003 and HE~2347-4342, we find gaps in the HI forest lines where
there is no recovery in the helium spectrum (e.g., see Figures 3--4
near $\sim$1230\AA). The ratios of HI and HeII optical depths in such (HI)
gaps can be used to derive the spectral hardness, without knowing the gas
density but again assuming primordial abundances.
In a conspicuous Lyman-$\alpha$ void (e.g., near $\lambda_{HI}
\approx 4\times 1230$~\AA), the average optical depth of diffuse HI absorption
allowed by our current data is about 0.05.
The optical depth of HeII is at least of order
unity, requiring $\eta$ \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} $4\times(1/0.05) = 80$, in rough
agreement with, e.g., Haardt \& Madau's\markcite{Haardt96} (1996) prediction.
The ratio can be used to
constrain the spectrum in the usual way. This limit and the following can be
made more reliable with a better signal-to-noise spectrum of the HI forest.
\subsection{Upper Limit on $z$-Filling Gas (HAR97 \S 3.8)}
Using this limit on the spectrum, we can deduce a limit on the
IGM density, tied not to the ionizing flux from the quasar
(as we did above) but to the cosmic ionizing flux at the HI Lyman edge,
$J_{912,-21}$
(in units of $10^{-21}{\rm erg\ cm^{-2}\ s^{-1}\ sr^{-1}\ Hz^{-1}}$),
which has other observational constraints such as the HI clouds proximity
effect. Suppose that there is a smooth density of gas at all redshifts, even
in the voids; the fact that flux gets through on average means that
the density cannot be too high. Calculating the Gunn-Peterson optical depth
for smooth photoionized gas yields
$$
\Omega_g= 0.018 (\tau_{GP}/3)^{0.5}
( h/0.7)^{-1.5} (\eta/100)^{-0.5}
(J_{912,-21}/0.5)^{0.5}.
$$
As in Q0302-003 we thus get an upper limit $\Omega_g$\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.02 by using
representative values estimated above of $\eta$\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}$10^2$ and $\tau_{GP}$\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3,
and using typical estimates $J_{912,-21}\approx 0.5$
(Haardt \& Madau\markcite{Haardt96} 1996).
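The scaling of this limit with the input parameters can be checked directly (our own numerical sketch of the formula above):

```python
# Smooth-IGM density implied by the HeII Gunn-Peterson optical depth:
# Omega_g = 0.018 (tau/3)^0.5 (h/0.7)^-1.5 (eta/100)^-0.5 (J_912,-21/0.5)^0.5.
def omega_g_smooth(tau_gp, eta, j912_m21, h=0.7):
    return (0.018 * (tau_gp / 3.0) ** 0.5 * (h / 0.7) ** -1.5
            * (eta / 100.0) ** -0.5 * (j912_m21 / 0.5) ** 0.5)

# Representative values quoted in the text: tau_GP ~ 3, eta ~ 100, J ~ 0.5.
print(omega_g_smooth(tau_gp=3.0, eta=100.0, j912_m21=0.5))
```

With the representative values the formula returns 0.018, i.e. the quoted upper limit $\Omega_g\lesssim 0.02$; note that a harder ionizing spectrum (larger $\eta$) tightens the limit.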
The assumptions leading to this limit are conservative
in most respects. Modest density perturbations lead to
more recombinations, therefore more absorption for a given
mean density. It is true that the absorption per atom is
reduced in highly saturated lines,
but we know that the saturated material is dominated by
the clouds already detected in the
HI lines. Therefore, material between the identified HI forest
clouds cannot be a repository for
a large number of baryons far in excess of the nucleosynthesis
value of $\Omega\approx 0.02$. Since helium is an inert gas,
this argument applies even to models where the hydrogen is
locked up in molecular or solid form.
\section{Conclusions}
There is broad consistency with a picture in which some helium absorption
occurs in the same clouds giving rise to the HI Lyman-$\alpha$ forest, but
with additional helium absorption by material with $\Omega_g \sim 10^{-2}$
that is essentially similar to the HI forest clouds, albeit at a lower density
and filling a substantial fraction of the space between them. The total amount
of material in this latter form is comparable to, but possibly somewhat smaller
than, that in the clouds; it is certainly not much greater. These constraints
on the overall distribution of baryonic material cannot be derived on the basis
of the hydrogen data alone, although they agree with the expectations for the
baryon density derived from nucleosynthesis, and with the
gravitational-instability formation model of the clouds. This overall picture
is consistent for all three currently known quasars where HeII Lyman-$\alpha$
is observable with {\it HST}, and therefore may be representative of the
Universe generally at $z\approx3$.
\section{Acknowledgments}
We are grateful for the superb support to this program provided by S. Baum,
K.~Peterson, and others at STScI. We thank J. Baldwin, G. Williger, and
P. Outram for kindly providing access to the CTIO optical data on
PKS~1935-692. We also thank A.~Davidsen for useful comments on this
paper, and J.~Wadsley for interesting discussions of some theoretical aspects.
This work was
supported at the University of Washington by NASA/{\it HST} grant
GO-07272.01.96A, and is based on
observations with the NASA/ESA Hubble Space Telescope, obtained at the Space
Telescope Science Institute, which is operated by AURA, Inc. under NASA
contract.
\section*{Introduction}
Path integrals \cite{feynman2010quantum}, also known as Wiener integrals in stochastic calculus \cite{wiener253, wiener294, kac1966}, are a well-established mathematical formalism
which has been used for a long time in Physics to develop accurate approximations and efficient computational techniques \cite{kleinert2009path}.
Among these, so-called semi-classical methods \cite{kleinert2009path}
play a central role.
These approximations can be developed in several ways which, while sharing the same limiting behavior, lead to genuinely different results.
The renowned Wentzel-Kramers-Brillouin approximation \cite{Wentzel1926, Kramers1926, Brillouin1926}, which is equivalent to a saddle-point approximation of the path integral \cite{kleinert2009path, Rajaraman1975, Kakushadze2015}, and the Wigner-Kirkwood expansion \cite{Wigner1932,Kirkwood1933,FujiwaraOW1982,HilleryCSW1984},
are well-known theoretical devices in this context.
A prominent role among semi-classical approximations is played by so-called {\em effective potential} methods \cite{feynman2010quantum,feynman1998statistical} based, borrowing renormalization group ideas, on `integrating out' the fluctuations around a `classical' trajectory. Although exact in principle, the calculation can be performed only at some level of approximation, using a perturbation scheme in which the choice of the unperturbed system plays a crucial role in the quality of the approximation.
A particularly successful effective potential approximation is the one stemming from a simple and elegant idea originally due to Feynman \cite{feynman2010quantum}, independently developed by Giachetti and Tognetti \cite{GiachettiTognetti1985} and by Feynman and Kleinert \cite{FeynmanKleinert1986} (GTFK), which is based on a self-consistent (non-local) harmonic approximation of the effective potential, in a sense that will become clear in the following sections.
Basically, the GTFK effective potential is employed within the usual classical formalism, but accounts for the quantum nature of a system through suitable renormalization parameters it contains; hence, the approximation does not immediately lead to final results, but reduces a quantum-mechanical problem to a classical one, to be treated by any known method. Physicists know that this amounts to an enormous simplification.
The most appealing aspect is that the classical behavior is fully accounted for by the GTFK potential, so it opened the way to tackling challenging quantum systems whose classical analogues were known to be characterized by peculiar nonlinear excitations, {\em e.g.}, those dubbed solitons in 1D or vortices in 2D. The latter are the `engine' of a topological phase transition, for the study~\cite{KosterlitzT1973} of which Michael Kosterlitz and David Thouless (KT) earned the 2016 Nobel prize. By the GTFK method it has been possible to establish that some real magnetic compounds do show a KT transition.
Other quantum systems that were successfully treated by (suitable generalizations of) the same method are frustrated antiferromagnets, {\em e.g.}, the so-called two-dimensional (2D) $J_1$-$J_2$ model~\cite{CapriottiFRT2004}, and 2D Josephson-junction arrays, which can be artificially fabricated, also with the inclusion of resistors; in the latter case, the effective potential could be naturally extended to account for the related dissipative coupling with the environment~\cite{1997CRTVpre}.
The connection between the so-called euclidean path integrals \cite{feynman2010quantum, kleinert2009path}, namely those employed to describe the thermodynamics of quantum systems, and the formalism of derivatives pricing has also been known since the seminal papers of \cite{Linetsky1997} and \cite{BennatiRosaClot1999} (see also the recent review \cite{Kakushadze2015}). In particular, it is a known fact that a variable following a non-linear diffusion process can be described by the same formalism used to model the finite-temperature properties of a quantum particle in a potential which is linked to the drift of the diffusion, where the role of the mass is played by the inverse of the volatility squared, that of the temperature by the inverse of time and that of quantum fluctuations by the Brownian noise \cite{BennatiRosaClot1999}. The interest in financial engineering for the path-integral formalism mainly stems from the possibility of developing accurate approximation schemes, that are not otherwise available, or known, in traditional formulations of stochastic calculus \cite{BennatiRosaClot1999, Kakushadze2015, Capriotti2006}.
In this paper, we will consider the application of the GTFK method to generalized short-rate models of the form $r_t = r(Y_t)$
with $Y_t$ following the non-linear diffusion process specified by the following stochastic differential equation (SDE)
\begin{equation}\label{eq.diffusion}
dY_t =\mu_y(Y_t)\,dt + \sigma_y(Y_t) \,dW_t,
\end{equation}
for $t>0$, where $\mu_y(Y_t)$ and $\sigma_y(Y_t)$ are the drift and volatility functions, respectively, $Y_{0} = y_{0}$, and $W_t$ is a standard Brownian motion.
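For concreteness, paths of Eq.~(\ref{eq.diffusion}) can be generated with a standard Euler-Maruyama scheme; the mean-reverting drift and constant volatility in the example below are placeholder choices, not parameters of the models discussed later:

```python
import math
import random

def euler_maruyama(mu, sigma, y0, T, n_steps, rng):
    """Simulate one path of dY = mu(Y) dt + sigma(Y) dW on [0, T]."""
    dt = T / n_steps
    sqdt = math.sqrt(dt)
    y = y0
    path = [y0]
    for _ in range(n_steps):
        y = y + mu(y) * dt + sigma(y) * sqdt * rng.gauss(0.0, 1.0)
        path.append(y)
    return path

# Example: mean-reverting drift mu(y) = kappa*(theta - y), constant
# volatility (placeholder parameters).
rng = random.Random(42)
path = euler_maruyama(lambda y: 0.5 * (0.03 - y), lambda y: 0.01,
                      y0=0.02, T=1.0, n_steps=252, rng=rng)
print(len(path), path[-1])
```

More refined discretizations (e.g., Milstein) improve the strong convergence order, but the simple scheme above suffices to illustrate the object the path-integral representation sums over.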
Short-rate models are of paramount importance in financial modeling, providing the foundation of many approaches
used for pricing of both interest rate and credit derivatives \citep{andersen2010interest,o2010modelling}. In particular, celebrated
affine models \citep{duffie} like those of \citep{Vasicek1977}, \citep{hw} and \citep{cir}, play a prominent role. This is mainly due to their
analytical tractability allowing one to derive closed-form expressions for fundamental building blocks like zero-coupon bonds
or, in the context of default intensity models \citep{o2010modelling}, survival probabilities.
Unfortunately, the availability of closed-form solutions comes often at the price of less than realistic properties of the underlying rates. For instance, Gaussian models
such as those of \cite{Vasicek1977} and \cite{hw}, when calibrated to financial data, typically imply that rates can assume negative values with sizable probabilities. While this may not be a problem
for interest-rate models, especially in a low interest-rate environment, it is not consistent with absence of arbitrage in the context of
default intensity models \citep{o2010modelling}. On the other hand, square-root diffusions such as that of \cite{cir} - while guaranteed to be non-negative - may give rise to distributions of the par swap rate, see \cite{andersen2010interest,MercurioGarch}, that do not admit values below a finite threshold and may be considered therefore unrealistic.
Unfortunately, more realistic models lack the degree of analytical tractability enjoyed by affine models. As a result, although widely used in practice,
their implementations rely on computationally intensive partial differential equations (PDE) or Monte Carlo (MC) methods for the calculation of bond prices or survival probabilities.
This is particularly onerous in the context of multi factor problems, notably the ones involving the calculation of valuation adjustments (XVA), cf. \cite{gregory}, that are currently very prominent in financial engineering.
Indeed, these applications require Monte Carlo simulations and, {\em e.g.}, the valuation of conditional bond prices or survival probabilities at different points
of the simulated paths, which are expensive to compute for models that lack closed-form solutions for these quantities. In this context,
reliable analytical approximations are particularly important to reduce the numerical burden associated with these computations.
More specifically, in this paper we will focus on developing approximations of the so-called (generalized) Arrow-Debreu (AD) densities,
see \citep{andersen2010interest,karatzas1991brownian}, also known as Green's functions, which are the fundamental building
blocks for pricing contingent claims. These are defined, in this setting, as
\begin{equation} \label{eq.ad}
\psi^Y_\lambda(y_T, y_{0}, T) = \mathbb{E}\Big[\delta(Y_T-y_T)e^{- \lambda \int_{0}^T du \,r_u } \Big|\, Y_{0} = y_{0} \Big],
\end{equation}
where $\lambda$ is a real number, and $\delta(\cdot)$ is the standard Dirac's delta function.
This, for $\lambda =0$, gives the transition density, of paramount importance for maximum-likelihood estimations in econometrics \cite{sahalia1999}, such that
\begin{equation}\label{trans}
\int_A dy_T \,\, \psi^Y_{0}(y_T, y_{0}, T) \equiv \mathbb{P}\left[Y_{T} \in A \,|\, Y_{0} = y_{0}\right]~.
\end{equation}
The price at time $t=0$ of a European option with expiry $T$ and payout of the form $P(r_T)$,
\begin{equation}
V(0) = \mathbb{E}\Big[e^{-\int_{0}^T du \, r_u} P(r_T) \Big|\, Y_{0} = y_{0} \Big],
\end{equation}
can be obtained by integrating the product of the payout function and the ($\lambda = 1$) AD density over all the possible values of the short rate
at time $T$, namely
\begin{equation}\label{cc}
V(0) = \int dy_T\, \psi^Y_1(y_T, y_{0}, T) P(y_T) ,
\end{equation}
where the integration is performed over the range of the function $y_T=r^{-1}(r_T)$. In particular, the moment generating function for
the random process $\int_{0}^T du\,r_u$ can be obtained for $P\equiv 1$,
\begin{equation}\label{eq.zeroad}
Z_\lambda(r_{0}, T) = \int dy_T \, \psi^Y_\lambda(y_T, y_{0}, T)~,
\end{equation}
which, for $\lambda =1$, gives the value at time $t=0$ of a zero-coupon bond with maturity $T$ \cite{andersen2010interest}.
In the context of default intensity models, where the default of a firm is modeled by the first arrival of a Poisson process with time-dependent intensity $r_t$,
\cite{o2010modelling},
Eq.~(\ref{eq.zeroad}) for $\lambda = 1$ represents the survival probability up to time $T$, conditional on survival up to time $t=0$. This is the fundamental building block for the evaluation of cash flows that are contingent on survival or default, see \cite{o2010modelling}.
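For affine models the expectation (\ref{eq.zeroad}) is available in closed form. For instance, for the Vasicek model $dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t$ the $\lambda=1$ bond price is the textbook expression $P(0,T) = \exp[A(T) - B(T)\,r_0]$, with $B(T)=(1-e^{-\kappa T})/\kappa$ and $A(T)=(\theta - \sigma^2/2\kappa^2)(B(T)-T) - \sigma^2 B(T)^2/4\kappa$. A minimal implementation (the parameters in the example are placeholders):

```python
import math

def vasicek_bond(r0, kappa, theta, sigma, T):
    """Zero-coupon bond P(0,T) for dr = kappa*(theta - r) dt + sigma dW."""
    B = (1.0 - math.exp(-kappa * T)) / kappa
    A = ((theta - sigma**2 / (2.0 * kappa**2)) * (B - T)
         - sigma**2 * B**2 / (4.0 * kappa))
    return math.exp(A - B * r0)

print(vasicek_bond(r0=0.03, kappa=0.5, theta=0.03, sigma=0.01, T=1.0))
```

Such closed forms are exactly what non-affine models like BK lack, and what the GTFK approximation discussed below is designed to supply approximately.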
The structure of the paper is as follows. We start by reviewing the formalism of the GTFK effective potential method in the context of the path-integral formulation of quantum statistical mechanics. We then make the connection between the formalism used in quantum Physics and the one used in finance by reviewing the path-integral formulation of AD densities for non-linear diffusions, and we show how the GTFK approximation can be used in the mathematical setting of stochastic calculus to develop a semi-analytical approximation for the generalized AD densities (\ref{eq.ad}) and zero-coupon bonds (\ref{eq.zeroad}) associated with diffusions of the form (\ref{eq.diffusion}). Remarkably, the GTFK method, which like any semi-classical approximation is exact in the limit of vanishing volatility and time to maturity, is also exact whenever the drift potential is quadratic; as we will recall, this makes it exact for the Vasicek \cite{Vasicek1977} and quadratic \cite{Kakushadze2015} models. We finally illustrate the remarkable accuracy of the GTFK method for models lacking an analytical solution by applying it to the so-called Black-Karasinski (BK) model \cite{BK} and to the GARCH linear SDE \cite{MercurioGarch, EEGARCH}, both of particular relevance for the valuation of credit derivatives.
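As a reference point for the accuracy claims above, bond prices under, e.g., the BK dynamics $d\ln r_t = \kappa(\theta - \ln r_t)\,dt + \sigma\,dW_t$ can always be benchmarked by brute-force Monte Carlo. The following sketch (our own illustration, with placeholder parameters) estimates the $\lambda=1$ quantity in Eq.~(\ref{eq.zeroad}):

```python
import math
import random

def bk_bond_mc(r0, kappa, theta, sigma, T, n_steps=100, n_paths=5000, seed=7):
    """Monte Carlo estimate of E[exp(-int_0^T r_u du)] under Black-Karasinski
    dynamics d ln r = kappa*(theta - ln r) dt + sigma dW, with an Euler
    scheme in ln r and trapezoidal integration of the short rate."""
    rng = random.Random(seed)
    dt = T / n_steps
    sqdt = math.sqrt(dt)
    acc = 0.0
    for _ in range(n_paths):
        x = math.log(r0)
        integral = 0.0
        r_prev = r0
        for _ in range(n_steps):
            x += kappa * (theta - x) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
            r = math.exp(x)
            integral += 0.5 * (r_prev + r) * dt
            r_prev = r
        acc += math.exp(-integral)
    return acc / n_paths

# Placeholder parameters: 3% initial rate, mean level log(3%), 30% log-volatility.
price = bk_bond_mc(r0=0.03, kappa=0.5, theta=math.log(0.03), sigma=0.3, T=1.0)
print(f"P(0,1) = {price:.4f}")
```

This is exactly the kind of repeated computation whose cost the GTFK approximation is meant to avoid inside multi-factor XVA simulations.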
\section*{Effective Potential Approximation in Quantum Statistical Mechanics}
We start by recalling the path-integral formalism of quantum thermodynamics for a non-relativistic particle of mass $m$ described by the standard Hamiltonian
\begin{equation}
\hat{\cal H} = \frac{\hat p^2}{2m} + V(\hat x)
\end{equation}
where $\hat x$ and $\hat p$ are the canonical coordinate and momentum operators such that $[\hat x, \hat p] = i\hbar$, with $\hbar$ the reduced Planck's constant,
and where $V(\hat x)$ is the potential the particle is subject to.
The quantum thermodynamical properties of the particle at temperature ${\cal T}$ can be described by the {\em density matrix} \cite{feynman2010quantum},
\begin{equation}\label{eq.pi0}
\hat \rho = e^{-\beta \hat{\cal H} }
\end{equation}
where $\beta = 1/ k_B {\cal T}$, with $k_B$ the Boltzmann's constant. The elements of the density matrix, in the coordinate representation, can be expressed
in terms of Feynman's path integral \cite{feynman2010quantum} as
\begin{align}
\rho(x_T, x_0, T) &\equiv \langle x_T | \hat \rho | x_0 \rangle = \nonumber \\ & \int_{x(0) =x_0}^{x(T) = x_T} \hspace{-0.1cm}{\cal D} [x(t)] \, \,e^{S[x(t)]}~,
\label{eq.pi}
\end{align}
where the path integration is defined over all paths $x(t)$ such that $x(0) = x_0$ and $x(T) = x_T$, with $T = \beta \hbar$ the so-called {\em euclidean time} and
the functional
\begin{equation}\label{eq.action}
S[x(t)] = - \frac{1}{\hbar} \int_{0}^T dt \left[ \frac{m}{ 2} \dot x^2(t) + V(x(t)) \right]~,
\end{equation}
is the {\em euclidean action}. The functional integration in Eq.~(\ref{eq.pi}), is formally defined as the limit for $N\to \infty$ of the expression
\begin{align}
\left(\frac{m}{{2\pi \hbar \Delta t}}\right)^{N/2} \int \ldots \int \prod_{i=1}^{N-1}dx_i \exp{\big[S(x_i,x_{i-1})\big]}~,
\end{align}
with $\Delta t = T / N$, $x_N\equiv x_T$ and
\begin{align}
&S(x_i,x_{i-1}) = \nonumber \\ &-\frac{\Delta t}{\hbar }\left [ \frac{m}{2}\frac{(x_i - x_{i-1})^2}{\Delta t } + V((x_{i-1}+x_{i})/2) \right]~.
\end{align}
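The limiting procedure above can be checked numerically: discretizing space as well as time, the density matrix is obtained by composing the short-time kernel $N$ times, and for the harmonic oscillator $V(x)=x^2/2$ the resulting partition function can be compared with the exact value $[2\sinh(\beta/2)]^{-1}$ (our own sketch, in units $\hbar=m=\omega=1$; the grid and slice counts are arbitrary choices):

```python
import numpy as np

# Time-sliced euclidean path integral for the harmonic oscillator
# V(x) = x^2 / 2, in units hbar = m = omega = 1, at beta = 1.
beta, N, L, M = 1.0, 64, 6.0, 301
dt = beta / N
x = np.linspace(-L, L, M)
dx = x[1] - x[0]
xi, xj = np.meshgrid(x, x, indexing="ij")
# Short-time kernel exp[S(x_i, x_{i-1})] with the midpoint prescription.
K = np.sqrt(1.0 / (2.0 * np.pi * dt)) * np.exp(
    -(xi - xj) ** 2 / (2.0 * dt) - dt * (0.5 * (xi + xj)) ** 2 / 2.0
)
rho = K
for _ in range(N - 1):       # compose the N slices; intermediate integrals carry dx
    rho = rho @ K * dx
Z = np.trace(rho) * dx       # partition function: integral of rho(x0, x0) over x0
Z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0))
print(Z, Z_exact)
```

Convergence requires the spatial grid to resolve the kernel width $\sqrt{\Delta t}$ and the time step to be small compared with the scale of variation of the potential.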
Although the evaluation of the path integral in Eq.~(\ref{eq.pi}) is possible only in a few cases of simple potentials, the formalism allows for new kinds of approximations.
In particular, here we pursue an approximation stemming from an idea originally due to Feynman, which consists of classifying the paths according to an equivalence relation, and consequently decomposing the integral into a first sum over all paths belonging to the same class, and a second one over all the equivalence classes. Specifically, equivalent paths are those which share the average point, defined as the functional
\begin{equation}
\bar x[x(t)] = \frac{1}{T} \int_{0}^T dt \,\,x(t)~,
\end{equation}
so that each equivalence class is labelled by a real number $\bar x$ representing the common average point, and we can factor out in Eq.~(\ref{eq.pi}) an ordinary integral over $\bar x$, namely
\begin{equation}\label{eq.PIADprice}
\rho(x_T, x_0, T) = \int d\bar x \,\, \rho_{\bar x}(x_T, x_0, T)~,
\end{equation}
where the {\em reduced density matrix}
\begin{align}\label{eq.reddens}
& \rho_{\bar x}(x_T, x_0, T) = \nonumber \\ & \int_{x(0) =x_0}^{x(T) = x_T} \hspace{-0.1cm}{\cal D} [x(t)] \delta \left(\bar x - \frac{1}{T} \int_{0}^T dt \,\,x(t) \right) \, \,e^{S[x(t)]}~,
\end{align}
represents the contribution to the path integral in Eq.~(\ref{eq.pi}) that comes from those paths that have $\bar x$ as average point.
As the path integration has been reduced to paths belonging to the same class, we can develop a specialized approximation for each class. In particular, the GTFK method approximates the potential in the action Eq.~(\ref{eq.action}) with a quadratic potential in the
displacement from the average point $\bar x$
\begin{equation}\label{eq.trialpot}
V_{\bar x}(x) = w(\bar x) + \frac{m}{2} \omega^2(\bar x) (x-\bar x)^2~,
\end{equation}
where the parameters $w(\bar x)$ and $\omega^2(\bar x)$ are to be optimized so that the {\em trial} reduced density matrix
\begin{align}\label{eq.trialreddens}
& \bar \rho_{\bar x}(x_T, x_0, T) \nonumber \\ &= \int_{x(0) =x_0}^{x(T) = x_T} \hspace{-0.1cm}{\cal D} [x(t)] \delta \left(\bar x - \frac{1}{T} \int_{0}^T dt \,\,x(t) \right) \, \,e^{S_{\bar x}[x(t)]}~,
\end{align}
with the action given by
\begin{equation}\label{eq.trialaction}
S_{\bar x}[x(t)] = - \frac{1}{\hbar} \int_{0}^T dt \left[ \frac{m}{ 2} \dot x^2(t) + V_{\bar x}(x(t)) \right]~,
\end{equation}
best approximates the reduced density matrix in Eq.~(\ref{eq.reddens}). Note that one does not need to include a linear term in the trial potential~\eqref{eq.trialpot}, since it would give a vanishing contribution to the trial action~\eqref{eq.trialaction}, due to the very definition of $\bar{x}$.
The path integral in Eq.~(\ref{eq.trialreddens}), corresponding to the harmonic action (\ref{eq.trialaction}), can be worked out analytically \cite{PQSCHA}, giving
\begin{align}\label{eq.reddens2}
& \bar \rho_{\bar x}(x_T, x_0, T) = \sqrt{\frac{m}{2\pi\beta \hbar^2}} e^{- \beta w(\bar x)} \frac{f}{\sinh f} \times \nonumber \\
&\frac{1}{\sqrt{2\pi\alpha}} \exp\left[ -\frac{\xi^2}{2\alpha}
-\frac{m\omega\coth f}{4\hbar}\,(x_T-x_0)^2 \right]~,
\end{align}
where $\xi = (x_T+x_0)/2 - \bar x$, $f = \beta \hbar \omega(\bar x) /2$ and
\begin{equation}\label{eq.alpha}
\alpha(\bar x) = \frac{\hbar}{2m\omega(\bar x)}\left (\coth f(\bar x) -\frac{1}{f(\bar x)} \right )~.
\end{equation}
The diagonal elements of the reduced density matrix read in particular
\begin{align}\label{eq.reddens3}
& \bar \rho_{\bar x}(x_0, x_0, T) = \sqrt{\frac{m}{2\pi\beta \hbar^2}} e^{-\beta w(\bar x)} \frac{f}{\sinh f} \times \nonumber \\
&\frac{1}{\sqrt{2\pi\alpha}} \exp\left[ -\frac{\xi^2}{2\alpha} \right]~,
\end{align}
taking a suggestive form in terms of a Gaussian distribution with mean $\bar x$ and variance $\alpha(\bar x)$, describing the fluctuations around the average point.
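The renormalization parameter (\ref{eq.alpha}) interpolates between the classical and the deep quantum regime: for small $f$, $\coth f - 1/f \simeq f/3$ gives $\alpha \to \beta\hbar^2/12m$, while for $f \to \infty$ one recovers the ground-state variance $\hbar/2m\omega$. A minimal numerical check (our own sketch, in units $\hbar=m=1$):

```python
import math

def alpha(omega, beta, hbar=1.0, m=1.0):
    """Quantum variance alpha = hbar/(2 m omega) * (coth f - 1/f), f = beta hbar omega / 2."""
    f = beta * hbar * omega / 2.0
    return hbar / (2.0 * m * omega) * (1.0 / math.tanh(f) - 1.0 / f)

print(alpha(1.0, 1e-3), 1e-3 / 12.0)   # high-temperature limit: beta hbar^2 / 12 m
print(alpha(1.0, 1e4), 0.5)            # zero-temperature limit: hbar / 2 m omega
```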
In particular, the so-called {\em partition function}, ${\cal Z}$, \cite{feynman1998statistical} assumes the classical form
\begin{align}\label{eq.part}
{\cal Z} \equiv & \int d\bar x \int d x_0 \,\, \bar \rho_{\bar x}(x_0, x_0, T) = \nonumber \\
& \sqrt{\frac{m}{2\pi \beta \hbar^2}} \int d\bar x \,\,e^{-\beta \,V_{\rm eff}(\bar x)}~,
\end{align}
where the GTFK {\em effective potential} reads:
\begin{align}\label{eq.effpot}
V_{\rm eff}(\bar x) = w(\bar x)
+ \frac{1}{\beta} \ln \frac{\sinh f(\bar x)}{f(\bar x)}~.
\end{align}
In order to close the approximation we still need to devise an optimization scheme for the parameters $w(\bar x)$ and $\omega(\bar x)$ in Eq.~(\ref{eq.trialpot}). For example, we could simply identify the trial potential (\ref{eq.trialpot}) with the expansion of $V(x)$ around $\bar x$ up to second order,
by setting $w(\bar x) = V(\bar x)$ and $m\omega^2(\bar x) = V^{\prime\prime}(\bar x)$ for any $\bar x$. However, this approximation has limitations. For instance, it can happen that $V^{\prime\prime}(\bar x)$ is negative: in this case, writing $f = \beta \hbar \omega /2$ as $f= i\phi$, $\alpha$ can be analytically continued as $\alpha = (\beta \hbar^2 /4m)(1/\phi^2 - \cot \phi/\phi)$, which diverges to $+\infty$ for $\phi \to \pi^-$ (or $f^2 \to -\pi^2$) and is negative for $\phi > \pi$ ($f^2 < -\pi^2$). As a consequence, if $\omega^2(\bar x)$ is negative,
for sufficiently large time horizons $T$ we have $f^2 < -\pi^2$ and $\alpha(\bar x) < 0$. In this situation, the reduced density matrix (\ref{eq.reddens2}) is not well defined and the approximation breaks down.
A more robust approximation can be devised by observing that the Gaussian density $\bar\rho_{\bar x}(x_0,x_0,T)$ has to be close to $\rho_{\bar x}(x_0,x_0,T)$, so that $V_{\bar x}(x)$ must approximate $V(x)$ not only at $\bar x$: this is accomplished by requiring the equality of the Gaussian averages of the true and the trial potentials, and of their derivatives up to the second one
\begin{align}
\langle\!\langle V(\bar x +\xi) \rangle\!\rangle &= \langle\!\langle V_{\bar x}(\bar x + \xi) \rangle\!\rangle \nonumber \\ & = w(\bar x) + \frac{m}{2}\omega^2(\bar x)\alpha(\bar x)~, \label{GTFK1}\\
\langle\!\langle V^{\prime\prime}(\bar x + \xi) \rangle\!\rangle&=\langle\!\langle V_{\bar x}^{\prime\prime}(\bar x + \xi)\rangle\!\rangle = m\omega^2(\bar x)~,\label{GTFK2}
\end{align}
with the short-hand notation
\begin {align}
\langle\!\langle F(\bar x + \xi) \rangle\!\rangle &\equiv \frac{1}{\sqrt{2 \pi\alpha(\bar x)}} \int_{-\infty}^{+\infty} d\xi \,\,e^{-\xi^2/2\alpha(\bar x)} F(\bar x + \xi) \nonumber \\
&= e^{\frac{\alpha(\bar x)}{2}\partial_x^2} F(\bar x)~,
\end{align}
and $\alpha(\bar x)$ given by Eq.~(\ref{eq.alpha}).
The equations above impose that the expectation values, according to the Gaussian probability distribution in Eq.~(\ref{eq.reddens3}), of the potential and of its second-order expansion agree with each other for {\em every} value of $\bar x$. Under the GTFK approximation the quantum effects are embedded in the effective potential (\ref{eq.effpot}), which is a renormalized version of the potential $V(x)$ in which $\alpha(\bar x) \equiv \langle\!\langle \xi^2 \rangle\!\rangle$ -- representing the average quadratic fluctuations around $\bar x$ due to quantum effects -- is the renormalization parameter. Note that Eq.~\eqref{GTFK2} is self-consistent, meaning that its solution $\omega^2(\bar x)$ in turn determines the variance~\eqref{eq.alpha}.
It can be shown that the above determination of the parameters $w({\bar{x}})$ and $\omega({\bar{x}})$ satisfies a variational principle based on the so-called Jensen-Feynman inequality, ${\cal Z}\ge {\cal Z}_0 \,e^{\langle S-S_0\rangle_0}$, where the functional average is taken with whatever trial action $S_0$, ${\cal{Z}}_0$ being the corresponding partition function. Indeed, taking $S_0=S_{\bar{x}}$ and maximizing the right-hand side of the inequality one just finds Eqs.~\eqref{GTFK1} and~\eqref{GTFK2}.
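In practice, Eqs.~(\ref{GTFK1})-(\ref{GTFK2}) are solved by fixed-point iteration: given $\omega^2(\bar x)$, compute $\alpha(\bar x)$ from Eq.~(\ref{eq.alpha}), smear $V$ and $V''$ over a Gaussian of that variance, and repeat until convergence. The sketch below (our own illustration, in units $\hbar=m=1$; the quartic potential and the Gauss-Hermite quadrature are arbitrary choices) evaluates the effective potential (\ref{eq.effpot}) at a single $\bar x$:

```python
import numpy as np

def gauss_avg(F, xbar, a, n=40):
    """Gaussian smearing <<F(xbar + xi)>> with variance a, via Gauss-Hermite."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * F(xbar + np.sqrt(2.0 * a) * t)) / np.sqrt(np.pi)

def gtfk_effective_potential(V, Vpp, xbar, beta, n_iter=100):
    """Fixed-point solution of the GTFK self-consistency at a single xbar
    (units hbar = m = 1), returning the effective potential V_eff(xbar)."""
    w2 = max(float(Vpp(xbar)), 1e-8)              # initial guess: bare V''
    for _ in range(n_iter):
        f = beta * np.sqrt(w2) / 2.0
        a = (1.0 / np.tanh(f) - 1.0 / f) / (2.0 * np.sqrt(w2))
        w2 = max(float(gauss_avg(Vpp, xbar, a)), 1e-8)
    f = beta * np.sqrt(w2) / 2.0
    a = (1.0 / np.tanh(f) - 1.0 / f) / (2.0 * np.sqrt(w2))
    w0 = gauss_avg(V, xbar, a) - 0.5 * w2 * a
    return w0 + np.log(np.sinh(f) / f) / beta

# Example: quartic potential V = x^4/4, V'' = 3 x^2 (placeholder choice).
v_eff = gtfk_effective_potential(lambda x: x**4 / 4.0, lambda x: 3.0 * x**2,
                                 xbar=1.0, beta=0.1)
print(v_eff)
```

For a harmonic potential the iteration converges in one step and the construction is exact, in line with the exactness for harmonic actions noted later.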
The GTFK method becomes exact in both the limit of high temperature, $\beta \to 0$, and that of vanishing quantum effects, $\hbar/m \to 0$, in which the parameter $\alpha$ vanishes as $\beta\hbar^2/12m$ and the effective potential (\ref{eq.effpot}) reduces to the classical potential:
\begin{equation}\label{VeffhighT}
V_{\rm eff}(\bar x) = V(\bar x) + \frac{\beta\hbar^2}{24m} V^{\prime\prime}(\bar x) + O(\beta^2\hbar^4/m^2),
\end{equation}
so that the partition function in Eq.~(\ref{eq.part}) coincides with the well-known exact classical result \cite{feynman1998statistical}.
The effective potential can be compared to the semiclassical effective potential introduced by Wigner and Kirkwood~\cite{Wigner1932,Kirkwood1933,FujiwaraOW1982,HilleryCSW1984} (WK), which was essentially obtained as an expansion in $\beta$ and $\hbar$ of the exact classical effective potential $V_{\rm ex}$, defined such that the quantum density takes the classical form
\begin{equation}
\rho(x_0, x_0, T) \equiv \frac 1{\cal Z}~e^{-\beta V_{\rm ex}(x_0)} ~.
\end{equation}
The WK expansion is in principle exact, but only the first few terms are practically affordable and, as the temperature is lowered, all terms soon diverge. Indeed, one has~\cite{JizbaZ2014}
\begin{equation}
V_{\rm ex}(x_0) = V(x_0)+\frac{\beta\hbar^2}{12m}V^{\prime\prime}(x_0)
-\frac{\beta^2\hbar^2}{24m}{V^\prime}^2(x_0) +\dots
\end{equation}
This apparently disagrees with the expansion~\eqref{VeffhighT}, but the comparison is a little subtle: indeed, $V_{\rm eff}$ is not to be compared directly with $V_{\rm{ex}}$, because, in order to obtain $\rho(x_0, x_0, T)$, one does not integrate over $x_0$ as done in Eq.~\eqref{eq.part}, but rather over $\bar{x}$. Accounting for this~\cite{Kleinert1986}, the WK and the GTFK effective potentials do agree~\cite{1990VTijmpb,1992CTVVpra}. Similarly, GTFK is distinct from the exponential power-series expansion of \cite{makri}, previously applied successfully in the financial context \cite{Capriotti2006, EEBK, EEGARCH}, which we will use as one of the benchmarks when discussing our numerical results.
With respect to these approaches, the GTFK method has a strong advantage: it still gives a meaningful representation of the thermodynamics down to zero temperature, where it is equivalent to the so-called self-consistent harmonic approximation~\cite{Koehler1966a,Koehler1966b}, initially applied to quantum crystal lattices. As the temperature is raised from zero, the accuracy improves further, because the renormalization parameter $\alpha(\bar x)$ decreases. The price to pay is that one still has to solve the classical problem with the effective potential, but this is nevertheless a huge simplification, especially in view of the wealth of methods that have been developed to treat classical systems. In particular, since the nonlinear character of the potential is retained, the GTFK approach allows for studying quantum systems whose classical counterpart is characterized by nonlinear excitations (solitons, vortices), and constitutes a much simpler and more clearly interpretable alternative to heavy numerical approaches, such as Quantum Monte Carlo.
The GTFK approach is also distinct from other semi-classical path-integral approximations, like the Wentzel-Kramers-Brillouin (WKB) \cite{Wentzel1926, Kramers1926, Brillouin1926} or the equivalent saddle-point approximations \cite{kleinert2009path, Rajaraman1975, Kakushadze2015}, which are based on a power-series expansion of the action around the classical trajectory $x_{\rm{c}}(t)$ rather than around the average point, {\em i.e.}, the density matrix, Eqs.~\eqref{eq.pi} and~\eqref{eq.action}, is expressed as
\begin{align}
\rho(x_T, x_0, T) = e^{S[x_{\rm{c}}(t)]} \int_{\tilde x(0) = 0}^{\tilde x(T) = 0}\!\!{\cal D} [\tilde x(t)] \, \,e^{\tilde S[\tilde x(t)]}~,
\label{eq.pi.wkb}
\end{align}
where $x_{\rm{c}}(t)$ obeys the classical equation of motion $\delta{S}/\delta{x(t)}=0$ and satisfies the boundary conditions $x_{\rm{c}}(0)=x_0$ and $x_{\rm{c}}(T)=x_T$, while the path summation is over closed paths $\tilde{x}(t)=x(t){-}x_{\rm{c}}(t)$ with the expanded action
\begin{equation}\label{eq.action.wkb}
\tilde S[\tilde x(t)] = - \frac{1}{\hbar}
\int_{0}^T dt \left[ \frac{m}{ 2} \dot{\tilde x}^2(t) + \frac{V^{\prime\prime}(x_{\rm{c}}(t))}2\,{\tilde x}^2(t) +\dots\right]~.
\end{equation}
The WKB approximation is exact for a quadratic potential and, the first term being of order $\hbar^{-1}$, it can include the effect of tunneling (for instance, in a double-well potential), at variance with the GTFK. However, accounting for tunneling is often not crucial, as its effects are soon overwhelmed by quantum thermal fluctuations and are practically absent in many-body systems; moreover, beyond a few relatively simple cases, the evaluation of the path integral~\eqref{eq.action.wkb} is generally hard, mainly due to the dependence of $\tilde{S}$ upon the classical path.
On the other hand, the non-local nature of the GTFK approximation yields the possibility of tuning two families of parameters, $w(\bar x)$ and $\omega(\bar x)$, allowing one to look for the best approximation of the true action in a richer space, while preserving the property of being exact in the classical limit and for harmonic actions. By `richer space' we mean that the trial action, thanks to its dependence on the average-point functional, is much more general than the local actions corresponding to physical potentials. The GTFK can also be systematically improved, at least in principle, without suffering from the divergences that appear instead in most perturbative approaches \cite{kleinert2009path}.
The generalizations of the GTFK approach to many degrees of freedom, as well as to Hamiltonian systems~\cite{1992CTVVpra,PQSCHA}, have found numerous applications in Physics and Physical Chemistry. Besides the tests on simple models with one degree of freedom~\cite{FeynmanKleinert1986,JankeK1986,JankeK1987,1990VTijmpb}, it is noteworthy that the very first paper regarded the 1D sine-Gordon model~\cite{GiachettiTognetti1985,1988GTVpra1}, whose classical version is characterized by the existence of topological nonlinear excitations, the solitons, that determine an anomaly of thermodynamic quantities like the specific heat: the GTFK method allowed the same anomaly to be quantified for the first time in the quantum system, and was shown to agree with the outcomes of hard Quantum Monte Carlo calculations~\cite{1988GTVpra1} and to admit a renormalized continuum limit in agreement with exact `Bethe Ansatz' calculations~\cite{1988GTVpra3}.
Among many accomplishments, one should mention the quantitative explanation~\cite{1991CTVVprb} of experimental data regarding a quasi-1D magnet CsNiF$_3$, that behaves similarly to the sine-Gordon model, while a major one has been the study of 2D quantum anisotropic magnets~\cite{1995CTVVpra,1998CCTVVphd}, whose classical counterpart shows the topological phase transition studied~\cite{KosterlitzT1973} by Kosterlitz and Thouless (KT);
the GTFK approach also made it possible to quantitatively characterize~\cite{2006CGVVjap} earlier experiments, showing that magnetic and calorimetric measurements performed in 1983 were the first known experimental observation of KT behavior in a real magnet; a further success in the magnetic realm was providing a consistent picture of the elusive Ising phase transition in a frustrated model such as the 2D quantum $J_1$-$J_2$ Heisenberg antiferromagnet~\cite{CapriottiFRT2004}.
2D Josephson-junction arrays are also typical KT systems: the effective potential was extended to include the dissipative effect of resistive shunts among the junctions used in experiments, getting quantitative accuracy for the phase diagram~\cite{2000CFTVprb}. The versatility of the GTFK potential is witnessed also by recent applications in the theoretical interpretation of thermal expansion measurements obtained by $x$-ray absorption spectroscopy in alloys~\cite{YokoyamaE2013,YokoyamaKU2018}.
\section*{Path-Integral formulation of Stochastic Calculus}
In this section, we briefly review how the formalism of stochastic calculus can be recast in the language of path-integrals in Euclidean time, focussing for simplicity on the case of a single SDE as in Eq.~(\ref{eq.diffusion}). As a first step, in order to simplify the derivation,
it is convenient to transform the original process into an auxiliary one, $X_t$, with constant volatility $\sigma$. Following \cite{sahalia1999}, this
can be achieved in general through the so-called Lamperti transform
\begin{equation}
X_t = \gamma(Y_t) \equiv \sigma \int^{Y_t}_{0} \frac{dz}{\sigma_y(z)}~.
\label{inttransf}
\end{equation}
A straightforward application of Ito's Lemma gives the stochastic differential equation satisfied by $X_t$ for $t\ge 0$:
\begin{equation}\label{eq.stproc}
d X_t = \mu(X_t) dt + \sigma d W_t,
\end{equation}
where
\begin{equation}
\mu(x) = \sigma \left[\, \frac{\mu_y(\gamma^{-1}(x))}{\sigma_y(\gamma^{-1}(x))} - \frac{1}{2}\frac{\partial \sigma_y}{\partial y}(\gamma^{-1}(x))\right].
\label{driftX}
\end{equation}
Here, $y = \gamma^{-1}(x)$ is the inverse of the transformation (\ref{inttransf}).
The generalized AD densities (\ref{eq.ad}) for the processes $X_t$ and $Y_t$ are related by the Jacobian associated with (\ref{inttransf}), giving
\begin{align}
\psi_\lambda^Y(y_T, y_{0}, T) = \sigma \frac{\psi_\lambda(\gamma(y_T),x_0, T)}{\sigma_y (y_T)}~.
\end{align}
It is well known, see {\em e.g.}, \cite{andersen2010interest,karatzas1991brownian}, that the generalized AD density (\ref{eq.ad}) for the process (\ref{eq.stproc}) satisfies the following conjugate forward (Fokker-Planck)
partial differential equation (PDE)
\begin{align}\label{eq.fp}
\partial_t \psi_\lambda(x_t, x_0, t) = \Big( &-\lambda r(x_t) - \partial_x \mu(x_t) \nonumber \\ &+ \frac{1}{2} \sigma^2\partial_x^2 \Big) \psi_\lambda(x_t, x_0, t)~,
\end{align}
with the initial condition $\psi_\lambda(x_t, x_0, 0) = \delta(x_0 - x_t)$.
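As a concrete numerical benchmark of the forward PDE above, one can use a simple explicit finite-difference scheme. The sketch below is our own minimal reference implementation (the grid sizes, absorbing far boundaries, and the mollified delta initial condition are illustrative choices, not part of the method discussed in the text):

```python
import numpy as np

def fokker_planck_density(mu, r, sigma, x0, T, lam=0.0,
                          x_min=-3.0, x_max=3.0, nx=401, nt=4000):
    """Explicit finite-difference solve of the forward PDE
    d_t psi = -lam*r(x)*psi - d_x(mu(x)*psi) + 0.5*sigma^2*d_x^2 psi,
    started from a narrow Gaussian approximating delta(x - x0)."""
    x = np.linspace(x_min, x_max, nx)
    dx = x[1] - x[0]
    dt = T / nt
    eps = 2.0 * dx  # width of the mollified initial delta
    psi = np.exp(-(x - x0)**2 / (2.0*eps**2)) / np.sqrt(2.0*np.pi*eps**2)
    mu_x, r_x = mu(x), r(x)
    for _ in range(nt):
        dflux = np.gradient(mu_x * psi, dx)          # d_x (mu * psi)
        d2psi = np.zeros_like(psi)
        d2psi[1:-1] = (psi[2:] - 2.0*psi[1:-1] + psi[:-2]) / dx**2
        psi = psi + dt * (-lam*r_x*psi - dflux + 0.5*sigma**2*d2psi)
        psi[0] = psi[-1] = 0.0                       # absorbing far boundaries
    return x, psi
```

For production use one would prefer an implicit (e.g., Crank-Nicolson) scheme, since the explicit step above is only stable for $\Delta t \lesssim \Delta x^2/\sigma^2$.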
\begin{figure}[t]
\centerline{\includegraphics[width=80mm]{Fig1}}
\vspace*{8pt}
\caption{Black-Karasinski model: GTFK self-consistent parameters (left axis) $\omega^2(\bar x)$ (dashed line), $\alpha(\bar x)$ (dotted line) and diagonal trial reduced density matrix $\bar \rho_{\bar x}(x_0, x_0, T)$ (right axis) as a function of the average point $\bar x$ for different values of the time to maturity and volatility ({\em e.g.}, of the strength of the diffusive effects). The other parameters of the diffusion are mean-reversion speed $a = 0.1$, level $b=\ln 0.04$, and $x_0 = \ln 0.06$.}
\label{BKalphaomegarho}
\end{figure}
A path-integral representation of the AD density can be constructed \cite{BennatiRosaClot1999} starting from the Euler approximation, correct up to $O(\Delta t)$, for the solution of
the Fokker-Planck PDE (\ref{eq.fp})
\begin{align}\label{eq.Euler}
&\psi_\lambda(x_{\Delta t},x_0,\Delta t) = e^{-\lambda r\left(x_0\right)\Delta t} \times \nonumber \\
&\frac{1}{\sqrt{2\pi\sigma^2\Delta t}} \exp\left[ -\frac{(x_{\Delta t}-x_0-\mu\left(x_0\right)\Delta t)^2}{2\sigma^2\Delta t}\right]~.
\end{align}
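In code, the short-time kernel of Eq.~(\ref{eq.Euler}) is a single expression; the sketch below uses our own (hypothetical) function name and signature:

```python
import math

def euler_kernel(x_next, x0, dt, mu, sigma, r=lambda x: 0.0, lam=0.0):
    """Short-time AD density psi_lambda(x_next, x0, dt) of Eq. (eq.Euler):
    a Gaussian drift-diffusion step times the local discount factor."""
    var = sigma**2 * dt
    gauss = math.exp(-(x_next - x0 - mu(x0)*dt)**2 / (2.0*var)) \
            / math.sqrt(2.0*math.pi*var)
    return math.exp(-lam * r(x0) * dt) * gauss
```

For $\lambda = 0$ the kernel integrates to one in $x_{\Delta t}$ and is centered at $x_0 + \mu(x_0)\Delta t$, as expected of an Euler step.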
Using the Markov property, the equation above gives a prescription to write the solution of the Fokker-Planck equation in the form of a convolution product of short-time
AD densities as:
\begin{align}\label{eq.discrete}
&\psi_\lambda(x_T,x_0,T) = \left(\frac{1}{{2\pi\sigma^2\Delta t}}\right)^{N/2} \times \nonumber \\ &\int \ldots \int \prod_{i=1}^{N-1}dx_i \exp{\big[\tilde S(x_i,x_{i-1})\big]}~,
\end{align}
with $\Delta t = T / N$, $x_N\equiv x_T$ and
\begin{align}\label{eq.discaction}
&\tilde S(x_i,x_{i-1}) = -\frac{\Delta t}{2\sigma^2}\left [ \frac{(x_i - x_{i-1})}{\Delta t} - \mu((x_{i-1}+x_i)/2)\right]^2 - \nonumber \\
& \Delta t \big[\partial_x \mu((x_{i-1}+x_{i})/2)/2 +\lambda r((x_{i-1}+x_{i})/2)\big]~,
\end{align}
where the term
\begin{equation}
\Delta t \partial_x \mu((x_{i-1}+x_{i})/2)/2~,
\end{equation}
arises, at order $O(\Delta t)$, from using the analytically convenient Stratonovich mid-point discretization \cite{BennatiRosaClot1999}.
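The convolution product of Eq.~(\ref{eq.discrete}) can also be evaluated numerically on a grid by repeated application of the short-time kernel. The sketch below (our own illustrative implementation) uses the plain Euler kernel of Eq.~(\ref{eq.Euler}) rather than the mid-point discretization, which is sufficient for small time steps:

```python
import numpy as np

def convolve_density(mu, sigma, x0, T, n_steps=200,
                     x_min=-1.0, x_max=1.0, nx=301):
    """psi_0(x_T, x0, T) (lambda = 0 case) as an n_steps-fold convolution
    of short-time Gaussian kernels on a uniform grid."""
    x = np.linspace(x_min, x_max, nx)
    dx = x[1] - x[0]
    dt = T / n_steps
    var = sigma**2 * dt
    # K[i, j]: Euler kernel for a step from x_j to x_i
    K = np.exp(-(x[:, None] - x[None, :] - mu(x)[None, :]*dt)**2
               / (2.0*var)) / np.sqrt(2.0*np.pi*var)
    psi = np.zeros(nx)
    psi[int(np.argmin(np.abs(x - x0)))] = 1.0 / dx   # delta at x0
    for _ in range(n_steps):
        psi = K @ psi * dx
    return x, psi
```

For the OU drift $\mu(x) = a(b-x)$ the resulting density reproduces the known mean $b + (x_0-b)e^{-aT}$ to discretization accuracy.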
As a result, the limit $N\to \infty$ of Eq.~(\ref{eq.discrete}) can be formally written as
\begin{equation}\label{eq.adpi}
\psi_\lambda(x_T, x_0, T) = e^{-W(x_T, x_0)} \rho(x_T, x_0, T)~,
\end{equation}
where
\begin{equation}\label{eq.piSC}
\rho(x_T, x_0, T) = \int_{x(0) =x_0}^{x(T) = x_T} \hspace{-0.1cm}{\cal D} [x(t)] \, \,e^{S[x(t)]}~,
\end{equation}
has the same form as the {\em density matrix} in Eq.~(\ref{eq.pi}), the functional
\begin{equation}\label{eq.actionSC}
S[x(t)] = - \int_{0}^T dt \left[ \frac{1}{ 2\sigma^2 } \dot x^2(t) + V(x(t)) \right]~,
\end{equation}
has the same form as the {\em euclidean action} in Eq.~(\ref{eq.action}),
\begin{equation}\label{eq.driftpot}
V(x) = \frac{\mu(x)^2}{2\sigma^2} + \frac{\mu^\prime(x)}{2} + \lambda r(x)~,
\end{equation}
can be called {\em drift potential} and we have defined
\begin{equation}
W(x_T, x_0) = - \frac{1}{\sigma^2} \int_{x_0}^{x_T} dx \,\,\mu(x)~,
\end{equation}
in order to give Eq.~(\ref{eq.actionSC}) a suggestive Lagrangian structure as in Eq.~(\ref{eq.action}).
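The map from a drift $\mu$ to the drift potential of Eq.~(\ref{eq.driftpot}) is straightforward to code; the small helper below (function name ours) returns $V$ as a closure:

```python
def drift_potential(mu, mu_prime, r, lam, sigma):
    """Return V(x) = mu(x)^2/(2 sigma^2) + mu'(x)/2 + lam*r(x), Eq. (eq.driftpot)."""
    return lambda x: mu(x)**2 / (2.0*sigma**2) + 0.5*mu_prime(x) + lam*r(x)
```

For the OU drift $\mu(x)=a(b-x)$ with $r(x)=x$ this reproduces the quadratic Vasicek potential discussed in the next section.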
The key observation is that the path integral in Eq.~(\ref{eq.piSC}) is formally equivalent to the density matrix in Eq.~(\ref{eq.pi}) describing the quantum thermodynamics of a particle of mass $m = \hbar/\sigma^2$ in a potential $\hbar V(x)$, at temperature ${\cal T}= \hbar / k_B T$ (such that $\beta \hbar = T$).
The GTFK approximation can therefore be applied straightforwardly, and here for convenience we restate the results in the notation of stochastic calculus:
\begin{align}\label{eq.reddens2SC}
& \bar \rho_{\bar x}(x_T, x_0, T) = \sqrt{\frac{1}{2\pi\sigma^2T}} e^{-Tw(\bar x)} \frac{f}{\sinh f} \times \nonumber \\
&\frac{1}{\sqrt{2\pi\alpha}} \exp\left[ -\frac{\xi^2}{2\alpha} -\frac{\omega}{4\sigma^2}\coth f (x_T-x_0)^2 \right]~,
\end{align}
where $\xi = (x_T+x_0)/2 - \bar x$, $f = \omega(\bar x) T/2$ and
\begin{equation}\label{eq.alphaSC}
\alpha(\bar x) = \frac{\sigma^2}{2\omega(\bar x)}\left (\coth f(\bar x) -\frac{1}{f(\bar x)} \right )~,
\end{equation}
with $w(\bar x)$ and $\omega(\bar x)$ solutions of the self-consistent equations:
\begin{align}
\langle \langle V(\bar x + \xi) \rangle\rangle &= \langle \langle V_{\bar x}(\bar x + \xi) \rangle\rangle \nonumber \\ & = w(\bar x) + \frac{\omega^2(\bar x)\alpha(\bar x)}{2\sigma^2}~, \label{GTFK1_SC}\\
\langle \langle V^{\prime\prime}(\bar x + \xi) \rangle\rangle&=\langle \langle V_{\bar x}^{\prime\prime}(\bar x + \xi)\rangle\rangle = \frac{\omega^2(\bar x)}{\sigma^2}~.\label{GTFK2_SC}
\end{align}
The GTFK method becomes exact in the limit of short time to maturity $T\to 0$ and vanishing volatility $\sigma \to 0$, for which the parameter $\alpha$ vanishes as $\sigma^2 T/12$.
Furthermore, given the form of the chosen trial potential, the GTFK approximation is in fact exact for harmonic actions. This is for instance the case for the Vasicek model \cite{Vasicek1977}, as will be illustrated in the next section.
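In practice, Eqs.~(\ref{GTFK1_SC}) and (\ref{GTFK2_SC}) can be solved by a simple fixed-point iteration coupling $\omega$ and $\alpha$. The sketch below is our own minimal implementation (no convergence safeguards); it takes the Gaussian-smeared second derivative $\langle\langle V''(\bar x + \xi)\rangle\rangle$ as a user-supplied function of $(\bar x, \alpha)$:

```python
import math

def gtfk_fixed_point(smeared_Vpp, sigma, T, xbar, n_iter=100):
    """Iterate omega^2 = sigma^2 * <<V''(xbar + xi)>> (Eq. GTFK2_SC)
    together with alpha of Eq. (eq.alphaSC), starting from alpha = 0."""
    omega = math.sqrt(sigma**2 * smeared_Vpp(xbar, 0.0))
    alpha = 0.0
    for _ in range(n_iter):
        f = 0.5 * omega * T
        alpha = sigma**2 / (2.0*omega) * (1.0/math.tanh(f) - 1.0/f)
        omega = math.sqrt(sigma**2 * smeared_Vpp(xbar, alpha))
    return omega, alpha
```

For a harmonic potential the smeared $V''$ is constant, the iteration converges in one step, and for small $T$ the returned $\alpha$ reproduces the $\sigma^2 T/12$ limit quoted above.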
\begin{figure}[t]
\centerline{\includegraphics[width=80mm]{Fig2}}
\vspace*{8pt}
\caption{Black-Karasinski AD densities obtained with the GTFK method (dashed line) and a numerical solution of the Fokker-Planck PDE (continuous line) for different values of the time to maturity.
The parameters of the BK process are: mean-reversion speed $a = 0.1$, level $b = \ln 0.04$, volatility $\sigma = 0.85$, and initial rate $r_{0} = 0.060$. The inset is an enlargement of the region of the maximum where the discrepancy between the PDE result and the GTFK approximation is largest.}
\label{BKpsi}
\end{figure}
\section*{Numerical Results}
In this section we illustrate the effectiveness of the GTFK approach by discussing its application to a few diffusion processes of the form (\ref{eq.diffusion}), starting from two cases in which the method gives
exact results, namely the Vasicek and the so-called quadratic short-rate model. We then discuss the Black-Karasinski (BK) \cite{BK} and GARCH linear SDE \cite{MercurioGarch, EEGARCH} models -- for which the AD density (\ref{eq.ad}) or zero-coupon bonds (\ref{eq.zeroad}) are not known analytically -- by presenting the comparison of the GTFK results with those obtained by solving numerically the relevant PDEs and by employing other approximations.
\subsection{Vasicek model}
The Vasicek model \cite{Vasicek1977} is a simple example of affine process \cite{duffie}
\begin{equation}\label{eq.OU}
dX_t = a(b-X_t) dt + \sigma dW_t
\end{equation}
where $a$ is the mean-reversion speed, $b$ the mean-reversion level, $\sigma$ the volatility, and $r(X_t) = X_t$. The drift potential (\ref{eq.driftpot}) is given by the quadratic form
\begin{equation}
V_V(x) = \frac{a^2 (b-x)^2}{2\sigma^2} - \frac{a}{2} + \lambda x~.
\end{equation}
The path integral for quadratic potentials is known to be analytically tractable and corresponds in quantum Physics
to the so-called {\em harmonic oscillator} \cite{feynman2010quantum}. In this case, the GTFK self-consistent
conditions (\ref{GTFK1_SC}) and (\ref{GTFK2_SC}) read:
\begin{align}
w(\bar x) = V_V(\bar x)~,\,\,\,\,\,\,\, \omega^2(\bar x) = a^2~,
\end{align}
and the reduced density matrix (\ref{eq.reddens2SC}) reads:
\begin{align}\label{eq.reddensvasicek}
& \bar \rho_{\bar x}(x_T, x_0, T) = \sqrt{\frac{1}{2\pi\sigma^2T}} e^{-T V_V(\bar x)} \frac{f}{\sinh f} \times \nonumber \\
&\frac{1}{\sqrt{2\pi\alpha}} \exp\left[ -\frac{\xi^2}{2\alpha} -\frac{a}{4\sigma^2}\coth f (x_T-x_0)^2 \right]~,
\end{align}
with $\alpha = \frac{\sigma^2}{2a}\left(\coth f - \frac{1}{f}\right)$ and $f = aT/2$, both independent of $\bar x$. The integral over $\bar x$ in Eq.~(\ref{eq.PIADprice}) can then be
performed analytically giving, after a somewhat tedious but straightforward calculation,
\begin{align}
&\psi_\lambda(x_T, x_0, T) = \frac{1}{\sqrt{2\pi\bar \sigma^2}} e^{{\lambda}(x_T-x_0)/a} e^{-T(\lambda b - \lambda^2 \sigma^2 / 2 a^2) } \times \nonumber \\ &
\exp\left[ -\frac{\big((x_T - b + \frac{\lambda\sigma^2}{a^2}) - (x_0 - b + \frac{\lambda\sigma^2}{a^2}) e^{-aT}\big)^2}{2\bar\sigma^2} \right]
\end{align}
where $\bar \sigma^2 = \sigma^2 (1 - \exp(-2 a T))/ 2 a$, in agreement with the known result \cite{Jashmidian1989}.
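As a numerical sanity check of the expression above, its $\lambda = 0$ limit is the familiar Gaussian OU transition density with variance $\bar\sigma^2$, which can be verified directly (function name ours):

```python
import math

def vasicek_density(x_T, x0, T, a, b, sigma):
    """Exact OU transition density, the lambda = 0 limit of the formula above,
    with variance bar_sigma^2 = sigma^2 (1 - exp(-2 a T)) / (2 a)."""
    var = sigma**2 * (1.0 - math.exp(-2.0*a*T)) / (2.0*a)
    mean = b + (x0 - b) * math.exp(-a*T)
    return math.exp(-(x_T - mean)**2 / (2.0*var)) / math.sqrt(2.0*math.pi*var)
```

The density integrates to one and has mean $b + (x_0-b)e^{-aT}$, consistent with the mean reversion of Eq.~(\ref{eq.OU}).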
\subsection{Quadratic Short Rate Model}
In the quadratic short rate model, the short rate is defined as
\begin{equation}\label{eq.qr}
r(X_t) = 1 + \beta X_t + \gamma X_t^2~,
\end{equation}
with $X_t$ following the OU diffusion (\ref{eq.OU}); the short rate is positive for $\gamma>0$ and $\beta^2<4\gamma$. In this case, the drift potential (\ref{eq.driftpot}) reads
\begin{equation}
V_Q(x) = \frac{a^2(b-x)^2}{2\sigma^2} -\frac{a}{2} + \lambda (1 + \beta x + \gamma x^2)~,
\end{equation}
while the GTFK conditions, (\ref{GTFK1_SC}) and (\ref{GTFK2_SC}), can be determined as
\begin{equation}
w(\bar x) = V_Q(\bar x)~,\,\,\,\,\,\,\, \omega^2(\bar x) = a^2 + 2 \lambda \gamma \sigma^2~,
\end{equation}
which, as in the Vasicek model discussed above, give a frequency $\omega$ that does not depend on the average point and a function $w(\bar x)$ which is
quadratic in $\bar x$. Also in this case the Gaussian integration can be performed analytically, leading to the exact result.
\subsection{Black-Karasinski Model}
The BK \cite{BK} model is a prominent example of a diffusion that is particularly suitable
for financial applications because the short rate at any time horizon follows an intuitive lognormal distribution.
Unfortunately, it lacks the same degree of analytical tractability as that shown by affine models.
As a result, although widely used in practice, BK implementations rely on computationally intensive numerical simulations based on PDE or
Monte Carlo \cite{andersen2010interest}.
The short rate in the BK model is defined as
\begin{equation}
r(X_t) = \exp{X_t}~,
\end{equation}
with $X_t$ following the OU diffusion (\ref{eq.OU}). In this case, the drift potential (\ref{eq.driftpot}) reads
\begin{equation}
V_{BK}(x) = \frac{a^2(b-x)^2}{2\sigma^2} -\frac{a}{2} + \lambda e^x~,
\end{equation}
while the GTFK conditions, (\ref{GTFK1_SC}) and (\ref{GTFK2_SC}), can be determined with some straightforward algebra as
\begin{align}
w(\bar x) &= V_{BK}(\bar x) + \frac{a^2-\omega^2(\bar x)}{2\sigma^2}\alpha(\bar x) \nonumber \\ & + \lambda \left(e^{\alpha(\bar x)/2} - 1\right) e^{\bar x} ~, \\
\omega^2(\bar x) &= a^2 + \lambda \sigma^2e^{\alpha(\bar x)/2}e^{\bar x}~,
\end{align}
with the second to be solved self-consistently with the renormalization parameter in Eq.~(\ref{eq.alphaSC}).
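The following minimal sketch (our own implementation, without convergence safeguards) iterates the BK frequency equation together with Eq.~(\ref{eq.alphaSC}); for $\lambda = 0$ it returns $\omega = a$ exactly, while for $\lambda > 0$ the frequency is renormalized upwards:

```python
import math

def bk_gtfk_params(a, sigma, lam, xbar, T, n_iter=200):
    """Self-consistent omega(xbar), alpha(xbar) for the BK drift potential:
    omega^2 = a^2 + lam*sigma^2*exp(alpha/2 + xbar), alpha from Eq. (eq.alphaSC)."""
    omega, alpha = a, 0.0
    for _ in range(n_iter):
        f = 0.5 * omega * T
        alpha = sigma**2 / (2.0*omega) * (1.0/math.tanh(f) - 1.0/f)
        omega = math.sqrt(a**2 + lam * sigma**2 * math.exp(0.5*alpha + xbar))
    return omega, alpha
```

The resulting $(\omega, \alpha)$ pair is what enters the trial reduced density matrix of Eq.~(\ref{eq.reddens2SC}).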
In Fig.~\ref{BKalphaomegarho} we plot the GTFK self-consistent parameters $\omega^2(\bar x)$, and $\alpha(\bar x)$ and the diagonal trial reduced density matrix $\bar \rho_{\bar x}(x_0, x_0, T)$
in Eq.~(\ref{eq.reddens2SC}) as a function of the average point $\bar x$ for different strengths of the diffusive effects, namely of the time to maturity and volatility. For weak diffusive effects, the parameter $\alpha(\bar x)$ is relatively small and the trial reduced density matrix has a sharp peak around $x_0$. In this region, both $\alpha(\bar x)$ and $\omega^2(\bar x)$ display a weak dependence on $\bar x$, which signals the adequacy of a local harmonic approximation to capture the purely diffusive effects in the problem. However, as the diffusive effects increase, with larger volatility and/or time to maturity, the renormalization parameter $\alpha(\bar x)$ increases, the trial density broadens, and both $\alpha(\bar x)$ and $\omega^2(\bar x)$ display a more marked dependence on the average point $\bar x$, signaling that a non-local approximation is needed to best capture the diffusive effects given a harmonic ansatz for the effective potential.
An illustration of the accuracy of the BK AD densities (\ref{eq.ad}) obtained with the GTFK approximation is displayed for a high volatility case in Fig.~\ref{BKpsi}, for different values of time to maturity, by comparing with a numerical solution of the Fokker-Planck equation (\ref{eq.fp}). Here we observe that the GTFK approximation is hardly distinguishable from the PDE result up to $T=5$, and remains
very accurate even for large time horizons.
This is also confirmed by the results for zero-coupon bonds (\ref{eq.zeroad}) reported in Table \ref{tablevsEE}, illustrating how the GTFK method compares favorably with recently proposed semi-analytical approximations, namely the Exponent Expansion (EE) \cite{EEBK} and the Karhunen-Lo\`eve (KL) expansion \cite{daniluk2016}, when benchmarked against a numerical solution of the associated PDE. In particular, for short time horizons, the GTFK approximation has accuracy comparable with the EE. For larger time horizons, the GTFK becomes increasingly competitive and remains very accurate even when the EE, which has a finite convergence radius in $T$, eventually breaks down. Similarly, the GTFK method is more accurate than the first-order KL expansion, and of comparable accuracy with the second-order KL expansion for short time horizons, while it is significantly more accurate for large time horizons. Even for time horizons as large as 20 years the GTFK approximation produces zero-coupon bond prices within 50 basis points of the exact result, as also illustrated in Fig.~\ref{zeroiBK}. Similar conclusions can be drawn when comparing with other recently proposed approaches, such as those in Refs.~\cite{Hagan07, AntonovSpector2011}.
\begin{widetext}
\begin{table}[t]
{\begin{tabular}{@{}lccccc@{}} \toprule
$T$ & ${\rm EE}$ & KL(1) & KL(2) & GTFK & PDE \\ \colrule
0.1 & 0.9939 (0.00\%) & 0.9939 (0.00\%) & 0.9939 (0.00\%) & 0.9939 (0.00\%) & 0.9939 \\
0.5 & 0.9681 (0.00\%) & 0.9681 (0.00\%) & 0.9681 (0.00\%) & 0.9681 (0.00\%) & 0.9681 \\
1.0 & 0.9331 (0.00\%) & 0.9331 (0.00\%) & 0.9331 (0.00\%) & 0.9331 (0.00\%) & 0.9331 \\
2.0 & 0.8581 (0.01\%) & 0.8580 (0.02\%) & 0.8581 (0.01\%) & 0.8582 (0.00\%) & 0.8582 \\
3.0 & 0.7845 (0.01\%) & 0.7842 (0.05\%) & 0.7844 (0.02\%) & 0.7847 (0.01\%) & 0.7846 \\
5.0 & 0.6595 (0.04\%) & 0.6582 (0.24\%) & 0.6593 (0.08\%) & 0.6602 (0.06\%) & 0.6598 \\
10.0 & - & 0.4545 (1.69\%) & 0.4601 (0.48\%) & 0.4628 (0.10\%) & 0.4623 \\
20.0 & - & 0.2440 (9.06\%) & 0.2592 (3.38\%) & 0.2672 (0.41\%) & 0.2683 \\
\botrule
\end{tabular} }
\caption{Black-Karasinski $T$ maturity zero-coupon bonds obtained with the GTFK approximation, the Exponent Expansion (EE) of Ref.~\cite{EEBK}, the Karhunen-Lo\`eve (KL) expansion of Ref.~\cite{daniluk2016} to first and second order, and by solving numerically the associated PDE. The parameters of the BK process are: mean-reversion speed $a = 0.1$, level $b = \ln 0.04$, volatility $\sigma = 0.85$, and initial rate $r_{0} = 0.06$.}
\label{tablevsEE}
\end{table}
\end{widetext}
\begin{figure}[t]
\centerline{\includegraphics[width=80mm]{Fig0}}
\vspace*{8pt}
\caption{GTFK zero-coupon bond prices as a function of time to maturity for the Black-Karasinski model, with mean-reversion speed $a = 0.1$, level $b = \ln 0.04$, initial rate $r_{0} = 0.06$, and different values of the volatility. Crosses indicate the PDE results. The inset is an enlargement for short times to maturity.}
\label{zeroiBK}
\end{figure}
\subsection{GARCH Linear SDE}
As an example of a more challenging application, we then consider the GARCH linear SDE or Inhomogeneous Geometric Brownian Motion \cite{MercurioGarch,EEGARCH} model, which is a special case of
the so-called Constant Elasticity of Variance (CEV) diffusion \cite{CEV}, namely
\begin{equation}\label{eq.garch}
dY_t = a(b-Y_t)dt + \sigma Y_t dW_t~,
\end{equation}
with $r(Y_t) = Y_t$.
The process defined by the SDE in Eq.~(\ref{eq.garch}) can be shown to be strictly positive \cite{KloedenPlaten}. As a result, like the BK model, it is well suited to represent default intensities. It can also be shown to have probability density profiles which are more intuitive than those generated by the widely used square-root processes \cite{cir,MercurioGarch}. Unfortunately, even though it can be solved exactly \cite{KloedenPlaten}, it does not admit a closed form for the (generalized) AD prices (\ref{eq.ad}).
\begin{figure}[t]
\centerline{\includegraphics[width=80mm]{Fig3}}
\vspace*{8pt}
\caption{GARCH linear SDE AD densities obtained with the GTFK method (dashed line) and a numerical solution of the Fokker-Planck PDE (continuous line) for different values of the time to maturity and volatility. The other parameters of the process are: mean-reversion speed $a = 0.1$, level $b = 0.02$, and initial rate $y_{0} = 0.01$. The inset is an enlargement of the region of the maximum where the discrepancy between the PDE result and the GTFK approximation is the largest.}
\label{GARCHpsi}
\end{figure}
Under Lamperti's transformation (\ref{inttransf}) for this process, namely $X_t = \log Y_t$, Eq.~(\ref{eq.garch}) reads
\begin{equation}\label{eq.garchLam}
dX_t = \mu_G(X_t) dt +\sigma dW_t~,
\end{equation}
with
\begin{equation}
\mu_G(x) = a b \,e^{-x} - a -\sigma^2/2~.
\end{equation}
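The drift $\mu_G$ can be cross-checked against the general Lamperti formula, Eq.~(\ref{driftX}), evaluating the derivative of $\sigma_y$ numerically (a sketch with our own function name):

```python
import math

def lamperti_drift(mu_y, sigma_y, sigma, y, h=1e-6):
    """Drift of X_t = gamma(Y_t) from Eq. (driftX):
    mu(x) = sigma*[mu_y(y)/sigma_y(y) - sigma_y'(y)/2], with y = gamma^{-1}(x)."""
    dsigma = (sigma_y(y + h) - sigma_y(y - h)) / (2.0*h)
    return sigma * (mu_y(y)/sigma_y(y) - 0.5*dsigma)
```

With $\mu_y(y) = a(b-y)$ and $\sigma_y(y) = \sigma y$ this reproduces $\mu_G(x) = ab\,e^{-x} - a - \sigma^2/2$ at $x = \log y$.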
The drift potential (\ref{eq.driftpot}) associated with the SDE (\ref{eq.garchLam}) reads therefore
\begin{align}\label{eq.Gpot}
V_{G}(x) &= \frac{a^2b^2}{2\sigma^2} e^{-2x} - \frac{ab}{\sigma^2}e^{-x}(a+\sigma^2)+ \nonumber\\
&\frac{1}{2\sigma^2}\left(a+\frac{\sigma^2}{2}\right)^2 + \lambda e^{x}~,
\end{align}
which is related to the so-called Morse potential \cite{morse}. The GTFK conditions, (\ref{GTFK1_SC}) and (\ref{GTFK2_SC}), can be determined
with some straightforward algebra as
\begin{align}
&w(\bar x) = \frac{a^2b^2}{2\sigma^2} e^{-2\bar x} e^{2\alpha} - \frac{ab}{\sigma^2}e^{-\bar x}(a+\sigma^2) e^{\alpha/2}+ \nonumber \\
&\frac{1}{2\sigma^2}\left(a+\frac{\sigma^2}{2}\right)^2 + \lambda e^{\bar x}e^{\alpha/2}-\frac{\omega^2(\bar x)\alpha(\bar x)}{2\sigma^2}~, \\
&\omega^2(\bar x) = 2 a^2 b^2 e^{-2\bar x} e^{2\alpha} - ab e^{-\bar x}(a+\sigma^2) e^{\alpha/2} + \nonumber \\& \lambda \, \sigma^2 e^{\bar x}e^{\alpha/2}~.
\end{align}
Examples of AD densities (\ref{eq.ad}) obtained with the GTFK approximation for the GARCH linear SDE are displayed in Fig.~\ref{GARCHpsi}, for different values of the diffusion parameters, together
with a numerical solution of the Fokker-Planck equation (\ref{eq.fp}). Here we observe that the GTFK approximation, as in the BK case, is difficult to distinguish from the PDE result up to maturities of several years, even for large volatilities. As in the BK case, the accuracy of the approximation depends on the chosen model parameters and on the maturity being considered. The approximation becomes less accurate for larger maturities $T$ and volatilities. The behaviour with respect to the mean-reversion speed $a$ is instead less clear-cut, as this parameter affects both the variance of the process and the non-linearity of the drift potential
(\ref{eq.Gpot}).
The accuracy of the GTFK method for the GARCH linear SDE is also illustrated for zero-coupon bonds (\ref{eq.zeroad}) in Tables \ref{tableGARCH} and \ref{tableGARCH2} for two sets of model parameters, showing how the GTFK method compares favorably with the results obtained with recently proposed semi-analytical approximations, namely the EE \cite{EEGARCH},
when benchmarked against a numerical solution of the associated PDE. In general, although less accurate than in the BK case, due to the more complex form of the drift potential (\ref{eq.Gpot}), the approximation produces satisfactory results for maturities up to several years, even in regimes of high volatility.
\begin{table}[t]
{\begin{tabular}{@{}lccc@{}} \toprule
$T$ & ${\rm EE}$ & GTFK & PDE \\ \colrule
0.1 & 0.9940 (0.00\%) & 0.9940 (0.00\%) & 0.9940 \\
0.5 & 0.9707 (0.00\%) & 0.9707 (0.00\%) & 0.9707 \\
1.0 & 0.9429 (0.00\%) & 0.9429 (0.00\%) & 0.9429 \\
2.0 & 0.8914 (0.03\%) & 0.8920 (0.03\%) & 0.8917 \\
3.0 & 0.8459 (0.08\%) & 0.8472 (0.07\%) & 0.8466 \\
5.0 & 0.7834 (1.40\%) & 0.7717 (0.12\%) & 0.7726 \\
7.5 & - & 0.6923 (1.45\%) & 0.7025 \\
10.0 & - & 0.6223 (3.92\%) & 0.6477 \\
\botrule
\end{tabular} }
\caption{GARCH linear SDE $T$ maturity zero-coupon bonds obtained with the GTFK approximation, the Exponent Expansion (EE) of Ref.~\cite{EEGARCH}, and by solving numerically the associated PDE. The parameters of the process are: mean-reversion speed $a = 0.1$, level $b = 0.04$, volatility $\sigma = 0.6$, and initial rate $y_0 = 0.06$.}
\label{tableGARCH}
\end{table}
\begin{table}[t]
{\begin{tabular}{@{}lccc@{}} \toprule
$T$ & ${\rm EE}$ & GTFK & PDE \\ \colrule
0.1 & 0.9990 (0.00\%) & 0.9990 (0.00\%) & 0.9990 \\
0.25 & 0.9975 (0.00\%) & 0.9975 (0.00\%) & 0.9975 \\
0.5 & 0.9949 (0.00\%) & 0.9949 (0.00\%) & 0.9949 \\
1.0 & 0.9896 (0.00\%) & 0.9896 (0.00\%) & 0.9896 \\
2.5 & 0.9728 (0.02\%) & 0.9723 (0.03\%) & 0.9726 \\
5.0 & 0.9359 (0.62\%) & 0.9403 (0.15\%) & 0.9417 \\
10.0 & 0.8315 (5.10\%) & 0.8709 (0.60\%) & 0.8762 \\
\botrule
\end{tabular} }
\caption{GARCH linear SDE $T$ maturity zero-coupon bonds obtained with the GTFK approximation, the Exponent Expansion (EE) of Ref.~\cite{GaukharThesis}, and by solving numerically the associated PDE. The parameters of the process are: mean-reversion speed $a = 0.1$, level $b = 0.02$, volatility $\sigma = 0.5$, and initial rate $y_0 = 0.01$.}
\label{tableGARCH2}
\end{table}
\section*{Conclusions}
An effective-potential path-integral formalism of quantum statistical mechanics -- dubbed GTFK after the authors \cite{GiachettiTognetti1985, FeynmanKleinert1986} who originally introduced it -- has been widely utilized in Physics for the study of the quantum thermodynamics of condensed matter systems. The method is based on a self-consistent harmonic approximation of the pure-quantum contributions to the thermodynamics, while fully accounting for the classical behaviour of the system \cite{PQSCHA}. As a semiclassical approach, it is exact in the high-temperature and zero-quantum fluctuations limits but, remarkably, it also gives a meaningful representation in the zero-temperature limit, where it is equivalent to a self-consistent harmonic approximation of the potential.
By exploiting the path-integral formulation of stochastic calculus, we have shown how the GTFK approach can be used to develop an accurate semi-analytical approximation of (generalized) Arrow-Debreu densities and zero-coupon bonds for non-linear diffusions. The method is exact in the limit of zero volatility, zero time to maturity, and for Ornstein-Uhlenbeck diffusions.
The GTFK provides remarkably accurate results for the Black-Karasinski and GARCH linear SDE for interest rates or default intensities, even for high volatilities and long time horizons, with results that compare favorably with previously presented approximation schemes \cite{Hagan07, AntonovSpector2011, daniluk2016, EEBK, EEGARCH}, with expressions that are more compact and easier to
compute, and less severe limitations arising from a finite convergence radius in the time to maturity or volatility. Similarly to the approach in \cite{Capriotti2006}, the range of application of the expansion can be further extended to even larger time horizons by means of a fast numerical convolution \cite{BennatiRosaClot1999}.
The GTFK approximation can potentially be improved in one of two ways: by pursuing higher-order corrections, as in the so-called variational perturbation theory \cite{kleinert2009path}, or by its generalization
to Hamiltonian systems \cite{1992CTVVpra,PQSCHA}, which would allow one to avoid the non-linearities in the potential introduced ({\em e.g.}, as for the GARCH linear SDE) via Lamperti's transformation (\ref{inttransf}).
The accuracy and ease of computation of the GTFK method make it a computationally efficient alternative to fully numerical schemes such as binomial trees, PDE or Monte Carlo for the
calculation of transition densities -- whether for the maximization of classical likelihoods or the computation of posterior distributions -- and for the evaluation of European-style derivatives.
This is of practical utility, {\em e.g.}, for econometric applications \cite{sahalia1999}, for speeding up pricing or calibration routines for the valuation of derivatives \cite{andersen2010interest}, or in the context of time-consuming multi-factor simulations that are commonplace in financial engineering in a variety of applications \cite{hull2017options}.
\begin{acknowledgements}
It is a pleasure to acknowledge Jim Gatheral, Tao-Ho Wang and Mehdi Sonthonnax for useful discussions.
The authors are grateful to Prof. Valerio Tognetti for igniting in them the passion for Path Integrals, and for his warm
support throughout the years.
\end{acknowledgements}
\section{Introduction and Main Results}
\noindent
Maximizing or minimizing polynomial functions is a central problem in
science and engineering.
Typically, the polynomials have an
underlying structure, e.g., sparsity, small expansion with respect to a
particular basis, invariance with respect to a group action, etc.
In the setting of sparsity, Fewnomial Theory \cite{kho} has succeeded
in establishing bounds for the number of real solutions (or real
extrema) that depend just on the number of monomial terms.
However, the current general complexity bounds for real solving and
nonlinear optimization (see, e.g., \cite{bpr,eldin,parrilo}) are
still stated in terms of degree and number of variables, and all but ignore
any finer input structure. In this paper, we present new speed-ups for the
optimization of certain sparse multivariate polynomials, extended to allow
real exponents as well. Along the way, we also present two
new families of problems that are ${\mathbf{NP}}_\mathbb{R}$-complete, i.e., the analogue of
${\mathbf{NP}}$-complete for the {\bf BSS model over $\mathbb{R}$}. (The BSS model,
derived in the 1980s by Blum, Shub, and Smale \cite{bss}, is a
generalization of the classical Turing model of computation with an eye toward
unifying bit complexity and algebraic complexity.)
Our framework has both symbolic and numerical aspects in that
(a) we deal with real number inputs and (b) our algorithms give either
yes or no answers that are always correct, or numerically approximate answers
whose precision can be efficiently tuned. Linear Programming (LP) forms
an interesting parallel to the complexity issues we encounter.
In particular, while LP admits polynomial-time algorithms relative to the
Turing model, polynomial-time algorithms for linear programming relative to
the BSS model over $\mathbb{R}$ (a.k.a.\ strongly polynomial-time algorithms or
polynomial arithmetic complexity) remain unknown. Furthermore, the
arithmetic complexity of LP appears to be linked with a fundamental
invariant measuring the intrinsic complexity of numerical solutions:
the {\bf condition number} (see, e.g.,
\cite{vavasisye,cucker}). Our results reveal a class of non-linear
problems where similar subtleties arise when comparing discrete and
continuous complexity.
To state our results, let us first clarify some basic notation
concerning sparse polynomials and complexity classes over $\mathbb{R}$. Recall that
$\lfloor x \rfloor$ is the greatest integer not exceeding a real number $x$,
and that $R^*$ is the multiplicative group of nonzero elements in any ring $R$.
\newpage
\begin{dfn}
When $a_j\!\in\!\R^n$, the notations $a_j\!=$\linebreak
$(a_{1,j},\ldots,a_{n,j})$,
$x^{a_j}\!=\!x^{a_{1,j}}_1\cdots x^{a_{n,j}}_n$, and $x\!=\!(x_1,\ldots,x_n)$
will be understood. If $f(x)\!:=\!\sum^m_{j=1} c_jx^{a_j}$
where $c_j\!\in\!\R^*$ for all $j$,
and the $a_j$ are pair-wise distinct, then we call $f$ a
{\bf (real) $\pmb{n}$-variate $\pmb{m}$-nomial}, and we define
$\mathrm{Supp}(f)\!:=\!\{a_1,\ldots,a_m\}$ to be the {\bf support} of $f$.
We also let ${\mathcal{F}}_{n,m}$ denote the set of all $n$-variate
$\lfloor m\rfloor$-nomials\footnote{Here we allow real coefficients,
unlike \cite{finally} where the same notation included a restriction to integer
coefficients.} and, for any
$m\!\geq\!n+1$, we let ${\mathcal{F}}^*_{n,m}\!\subseteq\!{\mathcal{F}}_{n,m}$
denote the subset consisting of those $f$ with $\mathrm{Supp}(f)$ {\bf not}
contained in any $(n-1)$-flat. We also call any
$f\!\in\!{\mathcal{F}}^*_{n,m}$ an {\bf honest
$\pmb{n}$-variate $\pmb{m}$-nomial} (or {\bf honestly $\pmb{n}$-variate}). $\diamond$
\end{dfn}
For example, the dishonestly $4$-variate trinomial\\
\mbox{}\hfill $-1+\sqrt{7}x^2_1x_2x^7_3x^3_4-e^{43}
x^{198e^2}_1x^{99e^2}_2x^{693e^2}_3x^{297e^2}_4$ \hfill\mbox{}\\
(with support contained in a line segment) has the same supremum
over $\mathbb{R}^4_+$ as the {\bf honest uni}variate trinomial\\
\mbox{}\hfill
$-1+\sqrt{7}y_1-e^{43}y^{99e^2}_1$
\hfill\mbox{}\\
has over $\mathbb{R}_+$.
More generally, via a monomial change of variables,
it will be natural to restrict to ${\mathcal{F}}^*_{n,n+k}$ (with $k\!\geq\!1$) to
study the role of sparsity in algorithmic complexity over the real numbers.
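To make the change of variables concrete, here is a small numerical sanity check (ours, not from the text; Python is used purely for illustration) that the dishonest trinomial above and its honest univariate counterpart agree pointwise under the substitution $y_1\!=\!x^2_1x_2x^7_3x^3_4$:

```python
import math

SQRT7 = math.sqrt(7)
T = 99 * math.e**2  # the third exponent vector is T times the second

def dishonest(x1, x2, x3, x4):
    # -1 + sqrt(7)*x1^2*x2*x3^7*x4^3
    #    - e^43 * x1^(198e^2)*x2^(99e^2)*x3^(693e^2)*x4^(297e^2)
    return (-1 + SQRT7 * x1**2 * x2 * x3**7 * x4**3
            - math.exp(43) * x1**(2*T) * x2**T * x3**(7*T) * x4**(3*T))

def honest(y1):
    # -1 + sqrt(7)*y1 - e^43 * y1^(99e^2), with y1 = x1^2*x2*x3^7*x4^3
    return -1 + SQRT7 * y1 - math.exp(43) * y1**T
```

Since the exponent vectors of the two non-constant terms are parallel, the image of the positive orthant under $x\mapsto x^2_1x_2x^7_3x^3_4$ is all of $\mathbb{R}_+$, so the two suprema agree.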
We will work with some well-known complexity classes from the
BSS model over $\mathbb{R}$ (treated fully in \cite{bcss}), so we will only briefly
review a few definitions, focusing on a particular extension we need.
Our underlying notion of\linebreak input size, including a
variant of the
condition number, is clarified in Definition \ref{dfn:cond} of Section
\ref{sub:input} below, and\linebreak illustrated
in Example \ref{ex:input} immediately following our first main theorem.
So for now let us just recall the following basic\linebreak
inclusions of complexity
classes: $\mathbf{NC}^1_\mathbb{R}\!\subsetneqq\!\mathbf{P}_\mathbb{R}\!\subseteq\!{\mathbf{NP}}_\mathbb{R}$ \cite[Ch.\ 19,
Cor.\ 1, pg.\ 364]{bcss}. (The properness of the latter\linebreak
inclusion remains a
famous open problem, akin to the more famous classical
$\mathbf{P}\text{\scalebox{1}[.7]{$\stackrel{?}{=}$}}{\mathbf{NP}}$
question.) Let us also recall that $\mathbf{NC}^k_\mathbb{R}$ is the family of real-valued
functions (with real inputs)
computable by arithmetic
circuits\footnote{This is one of $2$ times we will mention circuits
in the sense of complexity theory: Everywhere else in this paper,
our circuits will be {\bf combinatorial} objects as in Definition
\ref{dfn:ckt} below.} with size polynomial in the input size
and depth $O\!\left(\log^k\!\left(\text{Input Size}\right)\right)$
(see \cite[Ch.\ 18]{bcss} for further discussion).
To characterize a natural class of problems with efficiently
computable numerical answers, we will define the notion of a
{\bf High Precision Polynomial Time Approximation Scheme}: We
let ${\mathbf{HPTAS}_\mathbb{R}}$ denote the class
of functions $\phi : \mathbb{R}^\infty\longrightarrow \mathbb{R}\cup\{+\infty\}$
such that, for any ${\varepsilon}\!>\!0$, there is an algorithm guaranteed to
approximate $\phi(x)$ to within a $1+{\varepsilon}$ factor, using
a number of arithmetic operations \linebreak
polynomial in $\mathrm{size}(x)$
{\bf and} $\log\log\frac{1}{{\varepsilon}}$.\footnote{When $\phi(x)\!=\!0$ we
will instead require an {\bf additive} error of ${\varepsilon}$ or less.
When $\phi(x)\!=\!+\infty$ we will require the approximation to
be $+\infty$, regardless of ${\varepsilon}$.}
Our notation is\linebreak
inspired by the more familiar classical family of
problems ${\mathbf{FPTAS}}$ (i.e., those problems admitting a {\bf Fully\linebreak
Polynomial Time
Approximation Scheme}), where\linebreak instead the input is
Boolean and the complexity
need only be polynomial in $\frac{1}{{\varepsilon}}$. The complexity class ${\mathbf{FPTAS}}$ was
\linebreak
formulated in \cite{acg} and a highly-nontrivial\linebreak
example of a problem
admitting a ${\mathbf{FPTAS}}$ is counting\linebreak
matchings in bounded degree graphs \cite{count}.
\newpage
\begin{rem}
For a vector function $\phi=(\phi_1,\ldots,\phi_k) : \mathbb{R}^\infty \longrightarrow
$(\mathbb{R}\cup\{+\infty\})^k$ it will be natural to say that $\phi$ admits an
${\mathbf{HPTAS}}$ iff each coordinate function $\phi_i$ admits an ${\mathbf{HPTAS}}$. $\diamond$
\end{rem}
\subsection{Sparse Real Optimization}
\noindent
The main computational problems we address are the\linebreak
following.
\begin{dfn}
Let $\mathbb{R}_+$ denote the positive real numbers, and let ${\mathbf{SUP}}$ denote the
problem of deciding, for a given $(f,\lambda)\!\in\!
\left(\bigcup\limits_{n\in\mathbb{N}}\mathbb{R}[x^a \; | \; a\!\in\!\R^n]\right)\times
\mathbb{R}$, whether $\sup_{x\in\R^n_+} f\!\geq\!\lambda$ or
not. Also, for any subfamily ${\mathcal{F}}\subseteq\bigcup_{n\in\mathbb{N}}
\mathbb{R}[x^a\; | \; a\!\in\!\R^n]$, we let
${\mathbf{SUP}}({\mathcal{F}})$ denote the natural restriction of ${\mathbf{SUP}}$
to inputs in ${\mathcal{F}}$. Finally, we let ${\mathbf{FSUP}}$ (resp.\ ${\mathbf{FSUP}}({\mathcal{F}})$)
denote the obvious functional analogue of ${\mathbf{SUP}}$ (resp.\
${\mathbf{SUP}}({\mathcal{F}})$) where (a) the input is instead $(f,{\varepsilon})\!\in\!
\left(\bigcup\limits_{n\in\mathbb{N}}\mathbb{R}[x^a \; | \; a\!\in\!\R^n]\right)\times
\mathbb{R}_+$ and (b) the output is instead a pair\\
\mbox{}\hfill $(\bar{x},\bar{\lambda})\!\in\!(\mathbb{R}_+\cup\{0,+\infty\})^n\times
(\mathbb{R}\cup\{+\infty\})$\hfill \mbox{}\\
with $\bar{x}\!=\!(\bar{x}_1,\ldots,\bar{x}_n)$ (resp.\
$\bar{\lambda}$) an ${\mathbf{HPTAS}}$ for $x^*$ (resp.\ $\lambda^{*}$)
where $\lambda^*\!:=\!\sup_{x\in\R^n_+} f\!=\!\lim_{x\rightarrow x^*}f(x)$
for some $x^{*}\!=\!(x^*_1,\ldots,x^*_n)\!\in\!(\mathbb{R}_+\cup\{0,+\infty\})^n$.
\end{dfn}
\begin{rem}
Taking logarithms, it is clear that our\linebreak problems above
are equivalent to maximizing a function of the form $g(y)\!=\!
\sum^m_{i=1}c_ie^{a_i\cdot y}$ over $\R^n$. When convenient, we will use
the latter notation but, to draw parallels with
the algebraic case, we will usually speak of ``polynomials'' with real
exponents. $\diamond$
\end{rem}
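Concretely (an illustrative sketch of ours, not part of the text): under the substitution $x_i\!=\!e^{y_i}$, the $m$-nomial and exponential-sum forms agree pointwise.

```python
import math

def f_mono(coeffs, A, x):
    # f(x) = sum_i c_i * x^{a_i}, real exponent vectors a_i, x in the positive orthant
    return sum(c * math.prod(xj**aj for xj, aj in zip(x, a))
               for c, a in zip(coeffs, A))

def g_exp(coeffs, A, y):
    # g(y) = sum_i c_i * exp(a_i . y); equals f at x = (e^{y_1}, ..., e^{y_n})
    return sum(c * math.exp(sum(aj*yj for aj, yj in zip(a, y)))
               for c, a in zip(coeffs, A))
```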
We will need to make one final restriction when\linebreak optimizing
$n$-variate $m$-nomials: we let ${\mathcal{F}}^{**}_{n,n+k}$ denote the
subset of ${\mathcal{F}}^*_{n,n+k}$ consisting of those $f$ with $\mathrm{Supp}(f)\!\ni\!
\mathbf{O}$. While technically convenient, this restriction is also natural
in that level sets of $(n+k)$-nomials in ${\mathcal{F}}^{**}_{n,n+k}$ become
zero sets of $(n+k')$-nomials with $k'\!\leq\!k$.
We observe that checking whether the zero set of an $f\!\in\!\mathbb{R}[x_1,
\ldots,x_n]$ is nonempty (a.k.a.\ the {\bf real (algebraic)\linebreak
feasibility
problem}) is equivalent to checking whether the maximum of
$-f^2$ is $0$ or greater. So it can be argued
that the ${\mathbf{NP}}$-hardness (and ${\mathbf{NP}}_\mathbb{R}$-hardness) of ${\mathbf{SUP}}$ has
been known at least since the 1990s \cite{bcss}. However, it appears that no
sharper complexity upper bounds in terms of sparsity were known earlier.
\begin{thm}
\label{THM:BIG}
We can efficiently optimize $n$-variate\linebreak $(n+k)$-nomials
over $\R^n_+$ for $k\!\leq\!2$. Also, for $k$ a slowly growing function
of $n$, optimizing $n$-variate $(n+k)$-nomials over $\R^n_+$ is
${\mathbf{NP}}$-hard. More precisely:
\begin{enumerate}
\addtocounter{enumi}{-1}
\item{Both ${\mathbf{SUP}}\!\left(\bigcup_{n\in\mathbb{N}} {\mathcal{F}}^{**}_{n,n+1}\right)$ and
${\mathbf{FSUP}}\!\left(\bigcup_{n\in\mathbb{N}} {\mathcal{F}}^{**}_{n,n+1}\right)$
are in $\mathbf{NC}^1_\mathbb{R}$. }
\item{${\mathbf{SUP}}\!\left(\bigcup_{n\in\mathbb{N}}{\mathcal{F}}^{**}_{n,n+2}\right)\!\in\!\mathbf{P}_\mathbb{R}$
and ${\mathbf{FSUP}}\!\left(\bigcup_{n\in\mathbb{N}}{\mathcal{F}}^{**}_{n,n+2}\right)\!\in\!{\mathbf{HPTAS}_\mathbb{R}}$. }
\item{For any fixed $\delta\!>\!0$, \scalebox{.85}[1]
{${\mathbf{SUP}}\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup\limits_{\substack{n\in\mathbb{N}\\
0<\delta'<\delta}} {\mathcal{F}}^{**}_{n,n+n^{\delta'}}
\cap\mathbb{R}[x_1,\ldots,x_n]\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}$}
\vspace{-.6cm}
is ${\mathbf{NP}}_\mathbb{R}$-complete.}
\end{enumerate}
\end{thm}
\begin{ex}
\label{ex:input}
Suppose ${\varepsilon}\!>\!0$.
A very special case of \linebreak
Assertion (1) of Theorem \ref{THM:BIG} then implies
that we can\linebreak approximate within a factor of $1+{\varepsilon}$ ---
for any real nonzero $c_1,\ldots,c_{n+2}$ and
$D$ --- the maximum of the function $f(x)$ defined to be\\
\scalebox{.8}[1]{$c_1+c_2(x^D_1\cdots x^{D^n}_n) +
c_3 x^{2D}_1\cdots x^{2^n D^n}_n +\cdots+c_{n+2}x^{(n+1)D}_1\cdots
x^{(n+1)^nD^n}_n$,}\\
using a number of arithmetic operations linear in\\
\mbox{}\hfill $n^2(\log(n)+\log D)+\log\log\frac{1}{{\varepsilon}}$.
\hfill\mbox{}\\
The best previous results in the algebraic setting
(e.g., the critical points method as detailed in \cite{eldin}, or
by combining \cite{bpr} and the efficient numerical approximation results of
\cite{mp98}) would yield a bound polynomial in\\
\mbox{}\hfill $n^nD^{n}+\log\log\frac{1}{{\varepsilon}}$ \hfill \mbox{}\\
instead, and only under the assumption that $D\!\in\!\mathbb{N}$.
Alternative approaches via semidefinite programming also appear to result in
complexity bounds superlinear in $n^nD^{n}$ (see, e.g.,
\cite{parrilo,lassere,niesparse,kojima}), and still require
$D\!\in\!\mathbb{N}$. Moving to Pfaffian/Noetherian function
techniques, \cite{gv} allows arbitrary real $D$ but still yields an arithmetic
complexity bound exponential in $n$. It should of course be pointed out that
the results of \cite{bpr,mp98,eldin,parrilo,lassere,niesparse,kojima,gv} apply
to real polynomials in complete generality. $\diamond$
\end{ex}
We thus obtain a significant speed-up for a particular class of
analytic functions, laying some preliminary groundwork for improved
optimization of $(n+k)$-nomials with $k$\linebreak
arbitrary. Our advance is possible
because, unlike earlier methods which essentially revolved around commutative
\linebreak
algebra (and were more suited to complex algebraic\linebreak geometry), we are
addressing a real analytic problem with real analytic tools.
Theorem \ref{THM:BIG} is proved in Section \ref{sub:thresh}\linebreak
below.
Our main new technique, which may be of\linebreak independent interest, is
an extension of ${\mathcal{A}}$-discriminants\linebreak (a.k.a.\ sparse
discriminants) to real
exponents (Theorem \ref{THM:DISC} of Section \ref{sub:disc}).
Our algorithms are quite implementable (see Algorithm \ref{algor:fsup}
of Section \ref{sub:thresh}) and derived via a
combination of\linebreak tropical geometric ideas and ${\mathcal{A}}$-discriminant
theory, both extended to real exponents. In particular, for $n$-variate
$(n+1)$-nomials, a simple change of variables essentially tells us that
tropical geometry rules (in the form of {\bf Viro diagrams} \cite[Ch.\ 5,
pp.\ 378--393]{gkz94}, but extended to real exponents), and in the case at
hand this means that one can
compute extrema by checking inequalities involving the coefficients (and
possibly an input $\lambda$).
Tropical geometry still applies to the $n$-variate $(n+2)$-nomial case, but
only after one evaluates the sign of a particular generalized
${\mathcal{A}}$-discriminant.\footnote{For $n$-variate $(n+3)$-nomials, knowing the sign
of a discriminant is no longer sufficient, and efficient optimization
still remains an open problem. Some of the intricacies are detailed in
\cite{drrs,reu08}.} More precisely, an $n$-variate $m$-nomial $f$ (considered
as a function on $\R^n_+$) with bounded supremum $\lambda^*$ must
attain the value $\lambda^*$ at a critical point of $f$ in the nonnegative
orthant. In particular, the nonnegative zero set of $f-\lambda^*$ must be
degenerate, and thus we can attempt to solve for $\lambda^*$ (and a
corresponding maximizer) if we have a sufficiently tractable notion of
discriminant to work with.
So our hardest case reduces to (a) finding efficient
formulas for discriminants of $n$-variate $m$-nomials and\linebreak (b)
efficiently
detecting unboundedness for $n$-variate $m$-\linebreak
nomials. When $m\!=\!n+2$, (a)
fortuitously admits a solution, based on a nascent theory developed further
in \cite{evy}. We can also reduce Problem (b) to Problem (a) via
some\linebreak tropical geometric tricks. So our development ultimately hinges
on deriving an efficient analogue of discriminant polynomials for discriminant
varieties that are no longer algebraic.
\newpage
\begin{ex}
Consider the trivariate pentanomial
$f:=$\linebreak
$c_1+c_2x^{999}_1+c_3x^{73}_1x^{\sqrt{363}}_3
+c_4x^{2009}_2+c_5x^{74}_1x^{108e}_2x_3$,
with $c_1,\ldots,$\linebreak
$c_4\!<\!0$ and $c_5\!>\!0$. Theorem \ref{thm:bigger}
of Section \ref{sub:degen} then\linebreak easily implies that
$f$ attains a maximum of $\lambda^*$ on $\mathbb{R}^3_+$
iff $f-\lambda^*$ has a degenerate root in $\mathbb{R}^3_+$. Via
Theorem \ref{THM:DISC} of Section \ref{sub:disc} below,
the latter occurs iff\\
\mbox{}\hfill $b^{b_5}_5(c_1-\lambda^*)^{b_1} c^{b_2}_2
c^{b_3}_3 c^{b_4}_4-b^{b_1}_1 b^{b_2}_2 b^{b_3}_3 b^{b_4}_4 c^{b_5}_5$
\hfill\mbox{}\\
vanishes, where $b\!:=\!(b_1,b_2,b_3,b_4,-b_5)$ is any generator of the
kernel of the map $\varphi : \mathbb{R}^5\longrightarrow \mathbb{R}^4$ defined by
the matrix\\
\mbox{}\hfill \scalebox{.7}[.7]{$\begin{bmatrix}1 & 1 & 1 & 1 & 1\\
0 & 999 & 73 & 0 & 74\\
0 & 0 & 0 & 2009 & 108e\\
0 & \sqrt{363} & 0 & 0 & 1 \end{bmatrix}$,}\hfill\mbox{}\\
normalized so that $b_5\!>\!0$.
In particular, such a $b$ can be computed easily via $5$
determinants of $4\times 4$ submatrices (via Cramer's Rule), and we thus see
that $\lambda^*$ is nothing more than $c_1$ minus a monomial
(involving real exponents) in $c_2,\ldots,c_5$. Via the
now classical fast algorithms for approximating $\log$ and
$\exp$ \cite{brent}, real powers of real numbers (and thus $\lambda^*$)
can be efficiently approximated. Similarly, deciding whether $\lambda^*$
exceeds a given $\lambda$ reduces to checking an inequality involving real
powers of positive numbers. $\diamond$
\end{ex}
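The kernel computation in the example can be carried out exactly as described, via the five signed $4\times 4$ minors (Cramer's Rule). A short illustrative script of ours (floating point rather than exact arithmetic, since the entries involve $e$ and $\sqrt{363}$):

```python
import math

def det(M):
    # Laplace expansion along the first row; fine for 4 x 4 matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

# The matrix of the map phi from the example (row of 1's, then the exponents)
phi = [[1, 1, 1, 1, 1],
       [0, 999, 73, 0, 74],
       [0, 0, 0, 2009, 108*math.e],
       [0, math.sqrt(363), 0, 0, 1]]

def kernel_generator(M):
    """The signed maximal minors of a 4 x 5 matrix of rank 4 span its kernel."""
    ncols = len(M[0])
    v = [(-1)**i * det([[row[j] for j in range(ncols) if j != i] for row in M])
         for i in range(ncols)]
    # normalize so the last coordinate is negative,
    # matching b = (b1, b2, b3, b4, -b5) with b5 > 0
    return v if v[-1] < 0 else [-t for t in v]
```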
\subsection{Related Work}
\noindent
The computational complexity of numerical analysis continues to be
an active area of research, both in theory and in practice. On the
theoretical side, the BSS model over $\mathbb{R}$ has proven quite useful
for setting a rigorous foundation. While this model involves exact
arithmetic and field operations, there are many results building upon this
model that elegantly capture round-off error and numerical
conditioning (see, e.g., \cite{cuckersmale,burgisser}).
Furthermore, results on $\mathbf{P}_\mathbb{R}$ and ${\mathbf{NP}}_\mathbb{R}$ do ultimately impact classical
complexity classes. For instance, the respective {\bf Boolean parts} of these
complexity classes, ${\mathbf{BP}}(\mathbf{P}_\mathbb{R})$ and ${\mathbf{BP}}({\mathbf{NP}}_\mathbb{R})$, are defined as
the respective restrictions of $\mathbf{P}_\mathbb{R}$ and ${\mathbf{NP}}_\mathbb{R}$ to integer inputs.
While the best known bounds for these Boolean parts are still rather loose
--- \\
\mbox{}\hfill $\ppoly\!\subseteq\!{\mathbf{BP}}(\mathbf{P}_\mathbb{R})\!\subseteq\!\pspoly$
\cite{burgisser},
\hfill\mbox{}\\
\mbox{}\hfill $\npoly\!\subseteq\!{\mathbf{BP}}({\mathbf{NP}}_\mathbb{R})\!\subseteq\!{\mathbf{CH}}$
\cite{burgisser},\hfill\mbox{}\\
--- good algorithms for the BSS model and good algorithms for the
Turing model frequently inspire one another, e.g., \cite{realkoiran,bpr}.
We recall that $\ppoly$, referred to as {\bf non-uniform
polynomial-time}, consists of those decision problems solvable by a
non-uniform family of circuits\footnote{i.e., there is no restriction
on the power of the algorithm specifying the circuit
for a given input size} of size polynomial in the input.
${\mathbf{CH}}$ is the {\bf counting hierarchy} ${\pp\pp}\cup {\pp\pp}^{{\pp\pp}}
\cup {\pp\pp}^{{\pp\pp}^{\pp\pp}}\cup \cdots$, which happens to be contained
in $\mathbf{PSPACE}$ (see \cite{burgisser} and the references therein).
Let us also point out that the number of natural\linebreak
problems known to be
${\mathbf{NP}}_\mathbb{R}$-complete remains much smaller than the number of natural
problems known to be ${\mathbf{NP}}$-complete: deciding the
existence of real roots for\linebreak multivariate polynomials (and
various subcases involving\linebreak
quadratic systems or single quartic
polynomials) \cite[Ch.\ 5]{bcss}, linear programming feasibility \cite[Ch.\
5]{bcss}, and bounding the real dimension of algebraic sets \cite{realkoiran}
are the main representative ${\mathbf{NP}}_\mathbb{R}$-complete problems.\linebreak
Optimizing $n$-variate $(n+n^\delta)$-nomials (with $\delta\!>\!0$
fixed and $n$ unbounded), and the corresponding feasibility
problem (cf.\ Corollary \ref{COR:RFEAS} below), now join this
short list.
While sparsity has been profitably explored in
the context of interpolation (see, e.g., \cite{ky,gll}) and factorization over
number fields \cite{lenstra,kk,aks},
it has been mostly ignored in numerical analysis (for nonlinear polynomials)
and the study of the BSS model over $\mathbb{C}$ and $\mathbb{R}$.
For instance, there appear to be no earlier
published complexity upper bounds of the form
${\mathbf{SUP}}\left({\mathcal{F}}_{1,m}\right)\!\in\!\mathbf{P}_\mathbb{R}$ (relative to the sparse encoding)
for any $m\!\geq\!3$, in spite of beautiful recent work in
semi-definite programming (see, e.g., \cite{lassere,niesparse,kojima})
that begins to address the optimization of sparse multivariate polynomials
over the real numbers. In particular, while the latter papers give significant
practical speed-ups over older techniques such as resultants and Gr\"obner
bases, the published complexity bounds are still exponential (relative to the
sparse encoding) for $n$-variate $(n+2)$-nomials, and require the assumption
of integer exponents.
We can at least obtain a glimpse of sparse optimization beyond $n$-variate
$(n+2)$-nomials by combining our framework with an earlier result from
\cite{rojasye}. The proof is in Section \ref{sub:4}.
\begin{cor}
\label{COR:4} \mbox{}\\
(0) Using the same notion of input size as for ${\mathbf{FSUP}}$ (cf.\linebreak
\mbox{}\hspace{.6cm}Definition \ref{dfn:cond} below), the positive roots of
any real\linebreak
\mbox{}\hspace{.6cm}trinomial in ${\mathcal{F}}_{1,3}\cap\mathbb{R}[x_1]$ admit an ${\mathbf{HPTAS}}$.\\
(1) ${\mathbf{SUP}}({\mathcal{F}}^{**}_{1,4}\cap\mathbb{R}[x_1])\!\in\!\mathbf{P}_\mathbb{R}$ and
${\mathbf{FSUP}}({\mathcal{F}}^{**}_{1,4}\cap\mathbb{R}[x_1])\!\in$\linebreak
\mbox{}\hspace{.6cm}${\mathbf{HPTAS}_\mathbb{R}}$.
\end{cor}
As for earlier complexity lower bounds for ${\mathbf{SUP}}$ in terms of sparsity,
we are unaware of any. For instance, it is not even known whether
${\mathbf{SUP}}(\mathbb{R}[x_1,\ldots,x_n])$ is ${\mathbf{NP}}_\mathbb{R}$-hard for some fixed $n$
(relative to the sparse encoding).
The paper \cite{finally}, which deals exclusively with decision
problems (i.e., yes/no answers) and bit complexity (as opposed to
arithmetic complexity), is an important precursor to the present work.
Here, we thus expand the context to real coefficients and
real exponents, work in the distinct setting of optimization, and derive
(and make critical use of) a new tool:
generalized ${\mathcal{A}}$-discriminants for exponential sums. As a consequence,
we are also able to extend some of the complexity lower bounds from
\cite{finally} as follows. (See Section \ref{sub:thresh} for the proof.)
\begin{dfn}
Let ${\mathbf{FEAS}}_\mathbb{R}$ (resp.\ ${\mathbf{FEAS}}_+$) denote the problem of deciding whether an
arbitrary system of equations from
$\bigcup_{n\in\mathbb{N}} \mathbb{R}[x^a\; | \; a\!\in\!\R^n]$ has a real root (resp.\
root with all coordinates positive). Also, for any collection ${\mathcal{F}}$ of tuples
chosen from $\bigcup_{k,n\in\mathbb{N}}(\mathbb{R}[x^a\; | \; a\!\in\!\R^n])^k$, we let
${\mathbf{FEAS}}_\mathbb{R}({\mathcal{F}})$ (resp.\ ${\mathbf{FEAS}}_+({\mathcal{F}})$) denote the natural restriction of
${\mathbf{FEAS}}_\mathbb{R}$ (resp.\ ${\mathbf{FEAS}}_+$) to inputs in ${\mathcal{F}}$. $\diamond$
\end{dfn}
\begin{cor}
\label{COR:RFEAS}
For any $\delta\!>\!0$,\\
\mbox{}\hfill
${\mathbf{FEAS}}_\mathbb{R}\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}
$\hfill \mbox{}\\
and \\
\mbox{}\hfill
${\mathbf{FEAS}}_+
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}
$\hfill \mbox{}\\
are each ${\mathbf{NP}}_\mathbb{R}$-complete.
\end{cor}
\section{Background}
\label{sec:back}
\subsection{Input Size}
\label{sub:input}
To measure the complexity of our algorithms,
let us fix the following definitions for input size and condition number.
\begin{dfn}
\label{dfn:cond}
Given any subset ${\mathcal{A}}\!=\!\{a_1,\ldots,a_m\}\!\subset\!\R^n$ of cardinality
$m$, let us define ${\hat{\cA}}$ to be the $(n+1)\times m$ matrix whose $j^{\text{\underline{th}}}$
column is $\{1\}\times a_j$, and $\beta_J$ the absolute value of the
determinant of the submatrix of ${\hat{\cA}}$ consisting of those columns
of ${\hat{\cA}}$ with index in a subset $J\!\subseteq\!\{1,\ldots,m\}$
of cardinality $n+1$. Then,
given any $f\!\in\!{\mathcal{F}}^*_{n,m}$ written $f(x)\!=\!\sum^{m}_{i=1}
c_ix^{a_i}$, we define its {\bf condition number}, ${\mathcal{C}}(f)$, to be\\
\mbox{}\hfill $\left(\prod\limits^{m}_{i=1}\max\!\left
\{3,|c_i|,\frac{1}{|c_i|}\right\}\right)\times
\prod\limits_{\substack{J\subseteq\{1,\ldots,m\}\\ \#J=n+1}}
\max^*\!\left(3,|\beta_J|,\frac{1}{|\beta_J|}\right)$,\hfill\mbox{}\\
where $\max^*(a,b,c)$ is $\max\{a,b,c\}$ or $a$, according as
$\max\{b,c\}$ is finite or not.
Throughout this paper, we will use the following notions of input size for
${\mathbf{SUP}}$ and ${\mathbf{FSUP}}$:
The size of any\linebreak
instance $(f,\lambda)$ of ${\mathbf{SUP}}$ (resp.\ an instance
$(f,{\varepsilon})$ of ${\mathbf{FSUP}}$) is $\log\!\left(\max^*\left(3,|\lambda|,\frac{1}
{|\lambda|}\right)\right)+\log {\mathcal{C}}(f)$
(resp.\ $\log {\mathcal{C}}(f)$). $\diamond$
\end{dfn}
While our definition of condition number may appear\linebreak unusual, it is
meant to concisely arrive at two important properties:
(1) $\log {\mathcal{C}}(f)$ is polynomial in $n\log \deg f$ when
$f\!\in\!{\mathcal{F}}_{n,n+k}\cap\mathbb{R}[x_1,\ldots,x_n]$ and $k$ is fixed, (2) ${\mathcal{C}}(f)$
is closely\linebreak
related to an underlying discriminant (see Theorem \ref{THM:DISC}\linebreak
below) that dictates how much numerical accuracy we will\linebreak
\scalebox{.89}[1]{need to solve
${\mathbf{FSUP}}$. We also point out that for $f\!\in\!\mathbb{Z}[x_1,\ldots,x_n]$,}\linebreak
it is easy to show that $\log {\mathcal{C}}(f)\!=\!O(nS(f))$ where $S(f)$ is the
{\bf sparse size} of $f$, i.e., $S(f)$ is the number of bits needed to write
down the monomial term expansion of $f$. For\linebreak sufficiently sparse
polynomials,
algorithms with\linebreak
complexity polynomial in $S(f)$ are much faster than those with
complexity polynomial in $n$ and $\deg(f)$. \cite{lenstra,kk,aks,ky,gll,
finally} provide other interesting\linebreak
examples of algorithms with complexity polynomial in $S(f)$.
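For concreteness, ${\mathcal{C}}(f)$ from Definition \ref{dfn:cond} can be computed directly from its formula; a small sketch of ours (floating point; the naive enumeration of all $(n+1)$-subsets is only practical for modest $m$):

```python
import math
from itertools import combinations

def det(M):
    # Laplace expansion; fine for the small matrices arising here
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def condition_number(coeffs, A):
    """C(f): product over coefficients, times the product over all
    (n+1)-subsets J of columns of hatA, with max* handling beta_J = 0."""
    n = len(A[0])
    cols = [[1.0] + list(a) for a in A]            # columns of hatA
    C = math.prod(max(3.0, abs(c), 1.0/abs(c)) for c in coeffs)
    for J in combinations(range(len(A)), n + 1):
        M = [[cols[j][r] for j in J] for r in range(n + 1)]
        bJ = abs(det(M))
        # max*(3, bJ, 1/bJ) is 3 when bJ = 0 (1/bJ infinite)
        C *= max(3.0, bJ, 1.0/bJ) if bJ > 0 else 3.0
    return C
```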
\subsection{Tricks with Exponents}
\label{sub:exp}
\noindent
A simple and useful change of variables is to use\linebreak
monomials in new variables.
\begin{dfn}
For any ring $R$, let $R^{m\times n}$ denote the set of $m\times n$ matrices
with entries in $R$. For any $M\!=\![m_{ij}]\!\in\!\mathbb{R}^{n\times n}$
and $y\!=\!(y_1,\ldots,y_n)$, we define the formal expression
$y^M\!:=\!(y^{m_{1,1}}_1\cdots y^{m_{n,1}}_n,\ldots,
y^{m_{1,n}}_1\cdots y^{m_{n,n}}_n)$. We call the substitution
$x\!:=\!y^M$ a {\bf monomial change of variables}. Also, for
any $z\!:=\!(z_1,\ldots,z_n)$, we let $xz\!:=\!(x_1z_1,\ldots,x_nz_n)$.
\linebreak
Finally, let $\mathbb{G}\mathbb{L}_n(\mathbb{R})$ denote the group of all
invertible matrices in $\mathbb{R}^{n\times n}$. $\diamond$
\end{dfn}
\begin{prop}
\label{prop:monochange}
(See, e.g., \cite[Prop.\ 2]{tri}.)
For any $U,V\!\in\!\mathbb{R}^{n\times n}$, we have the formal identity\\
\mbox{}\hfill $(xy)^{UV}\!=\!(x^U)^V(y^U)^V$.\hfill\mbox{}\\
Also, if $\det U\!\neq\!0$, then the function
$e_U(x)\!:=\!x^U$ is an\linebreak analytic
automorphism of $\R^n_+$, and preserves smooth points and singular
points of
positive zero sets of analytic functions.
Finally, $U\!\in\!\mathbb{G}\mathbb{L}_n(\mathbb{R})$
implies that $e^{-1}_U(\R^n_+)\!=\!\R^n_+$ and that $e_U$
maps distinct open orthants of $\R^n$ to distinct open orthants of $\R^n$. \qed
\end{prop}
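A quick numerical confirmation of the formal identity (ours, illustrative only):

```python
import math

def mono(x, M):
    """x^M per the definition: coordinate k is prod_j x_j^{M[j][k]}."""
    n = len(x)
    return tuple(math.prod(x[j]**M[j][k] for j in range(n)) for k in range(n))

def matmul(U, V):
    n = len(U)
    return [[sum(U[i][k]*V[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```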
A consequence follows: Recall that the {\bf affine span}
of a point set ${\mathcal{A}}\!\subset\!\R^n$, $\mathrm{Aff} {\mathcal{A}}$, is the set of real
linear\linebreak combinations $\sum_{a\in{\mathcal{A}}} c_a a$ satisfying
$\sum_{a\in{\mathcal{A}}}c_a\!=\!1$. To\linebreak
optimize an $f\!\in\!
{\mathcal{F}}^{**}_{n,n+1}$ it will help to have a much simpler canonical form.
In what follows, we use $\#$ for set cardinality and
$e_i$ for the $i^{\text{\underline{th}}}$ standard basis
vector of $\R^n$.
\vfill\eject
\begin{cor}
\label{cor:can}
For any $f\!\in\!{\mathcal{F}}^{**}_{n,n+1}$ we can compute
$c\!\in\!\mathbb{R}$ and $\ell\!\in\!\{0,\ldots,n\}$ within $\mathbf{NC}^1_\mathbb{R}$ such that\\
\mbox{}\hfill $\bar{f}(x)\!:=\!c+x_1+\cdots+x_\ell-x_{\ell+1}-\cdots -x_n$
\hfill\mbox{}\\
satisfies:\\
\mbox{}\hspace{.4cm}(1) $\bar{f}$ and $f$ have exactly the same number of
positive\\
\mbox{}\hspace{1cm}coefficients, and\\
\mbox{}\hspace{.4cm}(2)
$\bar{f}\!\left(\R^n_+\right)\!=\!f\!\left(\R^n_+\right)$.
\end{cor}
\noindent
{\bf Proof:} Suppose $f$ has support ${\mathcal{A}}\!=\!\{0,a_2,\ldots,a_{n+1}\}$
and corresponding coefficients $c_1,\ldots,c_{n+1}$.
Letting $B$ denote the $n\times n$ matrix whose $i^{\text{\underline{th}}}$ column is $a_{i+1}$,
Proposition \ref{prop:monochange}, via the substitution
$x\!=\!y^{B^{-1}}$, tells us that we may assume that
$f$ is of the form $c_1+c_2x_1+\cdots+c_{n+1}x_n$. Moreover,
to obtain $\bar{f}$, we need only perform a suitable positive rescaling and
reordering of the variables. In summary, $c$ is simply the constant
term of $f$ and $\ell$ is the number of positive
coefficients not belonging to the constant term --- both of
which can be computed simply by a search and a sort clearly belonging to
$\mathbf{NC}^1_\mathbb{R}$. \qed
\smallskip
\noindent
Note that we don't actually need to compute $B^{-1}$ to obtain $\ell$:
$B^{-1}$ is needed only for the proof of our corollary.
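The extraction of $c$ and $\ell$ can be sketched as follows (ours; as just noted, the monomial change of variables itself need not be computed):

```python
def canonical_form(coeffs, A):
    """Return (c, l) for the canonical form c + x_1+...+x_l - x_{l+1}-...-x_n
    of an honest n-variate (n+1)-nomial whose support contains the origin."""
    const = [c for c, a in zip(coeffs, A) if all(e == 0 for e in a)]
    assert len(const) == 1, "support must contain the origin exactly once"
    c = const[0]
    # l = number of positive coefficients among the non-constant terms
    l = sum(1 for ci, a in zip(coeffs, A) if any(e != 0 for e in a) and ci > 0)
    return c, l
```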
A final construction we will need is the notion of a\linebreak
{\bf generalized} Viro diagram. Recall that a {\bf triangulation} of a
point set ${\mathcal{A}}$ is simply a simplicial complex $\Sigma$ whose vertices lie in
${\mathcal{A}}$. We say that a triangulation of ${\mathcal{A}}$ is {\bf induced by a lifting} iff
it its simplices are exactly the domains of linearity for some function
that is convex, continuous, and piecewise linear on the convex hull
of\footnote{i.e., smallest convex set containing...}
${\mathcal{A}}$.
\begin{dfn}
Suppose ${\mathcal{A}}\!\subset\!\R^n$ is finite, $\dim \mathrm{Aff} {\mathcal{A}}\!=\!n$, and ${\mathcal{A}}$
is equipped with a triangulation $\Sigma$ induced by a lifting {\bf and} a
function $s : {\mathcal{A}} \longrightarrow \{\pm\}$ which we will call a
{\bf distribution of signs for ${\mathcal{A}}$}. We then define a
piece-wise linear manifold --- the {\bf generalized Viro diagram}
${\mathcal{V}}_{\mathcal{A}}(\Sigma,s)$ --- in the following local manner: For any $n$-cell
$C\!\in\!\Sigma$,
let $L_C$ be the convex hull of the set of midpoints of edges of
$C$ with vertices of opposite sign, and then define
${\mathcal{V}}_{\mathcal{A}}(\Sigma,s)\!:=\!\bigcup\limits_{C \text{ an } n\text{-cell}}
L_C$. When ${\mathcal{A}}\!=\!\mathrm{Supp}(f)$ and $s$ is the corresponding sequence of
coefficient signs, then we also call ${\mathcal{V}}_{\Sigma}(f)\!:=\!{\mathcal{V}}_{\mathcal{A}}(\Sigma,s)$
the {\bf (generalized) Viro diagram of $f$}. $\diamond$
\end{dfn}
\noindent
We use the appellation ``generalized'' since, to the best of our knowledge,
Viro diagrams have only been used in the special case ${\mathcal{A}}\!\subset\!\Z^n$
(see, e.g., Proposition 5.2 and Theorem 5.6 of
\cite[Ch.\ 5, pp.\ 378--393]{gkz94}). We give examples of
Viro diagrams in Section \ref{sub:degen} below.
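For intuition, in the univariate case (${\mathcal{A}}\!\subset\!\mathbb{R}$, with the triangulation of the convex hull into consecutive segments, which is induced by any strictly convex lifting) the construction reduces to marking midpoints of sign changes; a minimal sketch of ours:

```python
def viro_1d(signed_points):
    """Generalized Viro diagram on the line: each segment between consecutive
    points of opposite sign contributes its midpoint."""
    pts = sorted(signed_points)   # pairs (coordinate, sign), sign in {+1, -1}
    return [(a + b)/2 for (a, sa), (b, sb) in zip(pts, pts[1:]) if sa != sb]
```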
\subsection{Generalized Circuit Discriminants and\\ Efficient Approximations}
\label{sub:disc}
\noindent
Our goal here is to extract an extension of ${\mathcal{A}}$-discriminant theory
sufficiently strong to prove our main results.
\begin{dfn}
\label{dfn:adisc}
Given any ${\mathcal{A}}\!=\!\{a_1,\ldots,a_m\}\!\subset\!\R^n$ of
cardinality $m$ and $c_1,\ldots,c_m\!\in\!\C^*$, we
define $\nabla_{\mathcal{A}}\!\subset\!{\mathbb{P}}^{m-1}_\mathbb{C}$ ---
the {\bf generalized ${\mathcal{A}}$-discriminant variety} ---
to be the closure of the
set of all $[c_1:\cdots :c_m]\!\in\!{\mathbb{P}}^{m-1}_\mathbb{C}$ such that
$g(y)\!=\!\sum^m_{i=1} c_ie^{a_i\cdot y}$ has a degenerate root in $\C^n$.
In particular, we call $g$ an {\bf $n$-variate exponential $m$-sum}.
$\diamond$
\end{dfn}
To prove our results, it will actually suffice to deal with a small
subclass of ${\mathcal{A}}$-discriminants.
\begin{dfn}
\label{dfn:ckt}
We call ${\mathcal{A}}\!\subset\!\R^n$ a {\bf (non-degenerate)
circuit}\footnote{This terminology comes from matroid theory and
has nothing to do with circuits from complexity theory.}
iff ${\mathcal{A}}$ is affinely dependent, but every proper subset of ${\mathcal{A}}$ is affinely
independent. Also, we say that ${\mathcal{A}}$ is a {\bf degenerate circuit} iff
${\mathcal{A}}$ contains a point $a$ and a proper subset ${\mathcal{B}}$ such that $a\!\in\!{\mathcal{B}}$,
${\mathcal{A}}\setminus a$ is affinely independent, and ${\mathcal{B}}$ is a non-degenerate
circuit. $\diamond$
\end{dfn}
\noindent
For instance, both \epsfig{file=ckt2.eps,height=.35cm} and
\epsfig{file=ckt.eps,height=.35cm} are circuits, but
\epsfig{file=degenckt.eps,height=.35cm} is a degenerate circuit.
In general, for any degenerate circuit ${\mathcal{A}}$, the subset ${\mathcal{B}}$ named above is
always unique.
\begin{dfn}
\label{dfn:chamber}
For any ${\mathcal{A}}\!\subset\!\R^n$ of cardinality $m$, let ${\mathcal{G}}_{\mathcal{A}}$
denote the set of all $n$-variate exponential $m$-sums with support ${\mathcal{A}}$. $\diamond$
\end{dfn}
There is then a surprisingly succinct description for $\nabla_{\mathcal{A}}$ when ${\mathcal{A}}$
is a non-degenerate circuit. The theorem below is inspired by
\cite[Prop.\ 1.2, pg.\ 217]{gkz94} and \cite[Prop.\ 1.8, Pg.\ 274]{gkz94}
--- important precursors that covered the special case of integral
exponents.
\begin{thm}
\label{THM:DISC}
Suppose ${\mathcal{A}}\!=\!\{a_1,\ldots,a_{n+2}\}\!\subset\!\R^n$ is a non-degenerate
circuit, and let $b\!:=\!(b_1,\ldots,b_{n+2})$ where
$b_i$ is $(-1)^i$ times the determinant
of the matrix with columns $1\times a_1,\ldots,\widehat{1\times a_i},
\ldots,1\times a_{n+2}$ ($\widehat{(\cdot)}$ denoting omission). Then:
\begin{enumerate}
\item{$\nabla_{\mathcal{A}}\!\subseteq\!\left\{[c_1:\cdots:c_{n+2}]\!\in\!{\mathbb{P}}^{n+1}_\mathbb{C}\;
: \; \prod\limits^{n+2}_{i=1} \left|\frac{c_i}{b_i}\right|^{b_i}
\!=\!1\right\}$. Also, $(b_1,\ldots,b_{n+2})$ can be computed in
$\mathbf{NC}^2_\mathbb{R}$. }
\item{There is a $[c_1:\cdots:c_{n+2}]\!\in\!{\mathbb{P}}^{n+1}_\mathbb{R}$
with\\
\mbox{}\hspace{1cm}(i) $\mathrm{sign}(c_1b_1)\!=\cdots=\!\mathrm{sign}(c_{n+2}b_{n+2})$\\
and\\
\mbox{}\hspace{1cm}(ii) $\prod\limits^{n+2}_{i=1}
(\mathrm{sign}(b_ic_i)c_i/b_i)^{\mathrm{sign}(b_ic_i)b_i}\!=\!1$\\
iff the real zero set of
$g(y)\!:=\!\sum^{n+2}_{i=1}c_i e^{a_i\cdot y}$
contains a degenerate point $\zeta$. In particular, any such
$\zeta$ satisfies $e^{a_i\cdot \zeta}\!=\!\mathrm{sign}(b_1c_1)b_i/c_i$
for all $i$, and thus the real zero set of $g$ has at most one degenerate
point.}
\end{enumerate}
\end{thm}
\noindent
Theorem \ref{THM:DISC} is proved in Section \ref{sec:proofs} below.
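As a concrete sanity check of Assertion (1), the vector $b$ and the product condition can be verified numerically. The following Python/NumPy sketch is our own illustration (the function name and the example circuit are ours, not from the text): it computes $b$ as signed maximal minors of $\hat{{\mathcal{A}}}$ for the circuit $\{0,1,2\}\!\subset\!\mathbb{R}$ and checks the condition at the degenerate exponential sum $g(y)\!=\!1-2e^y+e^{2y}\!=\!(e^y-1)^2$, which has a degenerate real root at $y\!=\!0$.

```python
import numpy as np

def circuit_b(A):
    """Signed minors b_i = (-1)^i * det(hat_A with column i removed),
    where hat_A has columns (1, a_i)^T.  Python indices are 0-based,
    so we use (-1)**(i+1) to match the paper's 1-based convention."""
    A = np.asarray(A, dtype=float)        # shape (n+2, n)
    m, n = A.shape
    Ahat = np.vstack([np.ones(m), A.T])   # shape (n+1, n+2)
    b = np.empty(m)
    for i in range(m):
        cols = np.delete(Ahat, i, axis=1)
        b[i] = (-1) ** (i + 1) * np.linalg.det(cols)
    return Ahat, b

# Example: the circuit A = {0, 1, 2} in R^1 and the degenerate
# sum g(y) = 1 - 2 e^y + e^{2y} = (e^y - 1)^2.
Ahat, b = circuit_b([[0.0], [1.0], [2.0]])
c = np.array([1.0, -2.0, 1.0])
print(b)                            # a generator of the null space of hat_A
print(np.prod(np.abs(c / b) ** b))  # discriminant product, equal to 1
```

As the Cramer's-rule argument in Section \ref{sub:proof2} predicts, the computed $b$ generates the right null space of $\hat{{\mathcal{A}}}$, and the product $\prod_i |c_i/b_i|^{b_i}$ evaluates to $1$ at the degenerate coefficient vector.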
We will also need a variant of a family of fast algorithms discovered
independently by Brent and Salamin.
\begin{brent}
\cite{brent,salamin}
Given any\linebreak positive $x,{\varepsilon}\!>\!0$, we can approximate
$\log x$ and $\exp(x)$ within a factor of $1+{\varepsilon}$ using just
$O\!\left(|\log x| + \log\log\frac{1}{{\varepsilon}}\right)$ arithmetic operations. \qed
\end{brent}
\noindent
While Brent's paper \cite{brent} does not explicitly mention general
real numbers, he works with a model of floating-point numbers from which
it is routine to derive the statement above.
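For intuition, here is a minimal double-precision Python sketch (ours, and only a toy: Brent's actual algorithms run in arbitrary precision) of the AGM-based approximation of $\log x$ underlying the Brent--Salamin results. For large $s$ one has $\log s\approx \pi/(2\,\mathrm{AGM}(1,4/s))$, so one first scales $x$ by a power of $2$ and then subtracts the scaling back off.

```python
from math import pi, log

def agm(a, b, tol=1e-15):
    # Arithmetic-geometric mean; converges quadratically.
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, (a * b) ** 0.5
    return a

def ln_agm(x, m=40):
    # For s = x * 2^m large, log(s) ~ pi / (2 * AGM(1, 4/s));
    # subtracting m*log(2) recovers log(x).
    s = x * 2.0 ** m
    return pi / (2.0 * agm(1.0, 4.0 / s)) - m * log(2.0)

print(ln_agm(10.0))   # close to log(10) = 2.302585...
```

The quadratic convergence of the AGM is what yields the doubly logarithmic dependence on $1/{\varepsilon}$ in the operation count.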
\subsection{Unboundedness and Sign Checks}
\label{sub:degen}
\noindent
Optimizing an $f\!\in\!{\mathcal{F}}^{**}_{n,n+1}$ will ultimately
reduce to checking simple inequalities involving just the coefficients
of $f$. The optimum will then in fact be either $+\infty$ or the\linebreak
constant
term of $f$. Optimizing an $f\!\in\!{\mathcal{F}}^{**}_{n,n+2}$ would be as easy were
it not for two additional difficulties: deciding unboundedness already
entails checking the sign of a\linebreak
generalized ${\mathcal{A}}$-discriminant, and the optimum
can be a transcendental function of the coefficients.
To formalize the harder case, let us now work at the level of
exponential sums: let us define ${\mathcal{G}}_{n,m}$, ${\mathcal{G}}^*_{n,m}$, and
${\mathcal{G}}^{**}_{n,m}$ to be the obvious respective exponential
$m$-sum analogues of ${\mathcal{F}}_{n,m}$, ${\mathcal{F}}^*_{n,m}$, and ${\mathcal{F}}^{**}_{n,m}$.
Recall that $\mathrm{Conv} {\mathcal{A}}$ is the convex hull of ${\mathcal{A}}$.
\begin{thm}
\label{thm:bigger}
Suppose we write $g\!\in\!{\mathcal{G}}^{**}_{n,n+2}$ in the form
$g(y)\!=\!\sum^{n+2}_{i=1} c_ie^{a_i\cdot y}$ with ${\mathcal{A}}\!=\!\{a_1,
\ldots,a_{n+2}\}$. Let us also order the monomials of $f$ so that
${\mathcal{B}}\!:=\!\{a_1,\ldots,a_{j'}\}$ is the\linebreak
\scalebox{.92}[1]{unique
non-degenerate sub-circuit of ${\mathcal{A}}$ and let
$b\!:=\!(b_1,\ldots,b_{n+2})$}\linebreak where
$b_i$ is $(-1)^i$ times the determinant
of the matrix with columns $1\times a_1,\ldots,
\widehat{1\times a_i}, \ldots,1\times a_{n+2}$ ($\widehat{(\cdot)}$ denoting omission).
Then $\sup_{y\in\R^n} g(y)\!=\!+\infty
\Longleftrightarrow$ one of the following $2$ conditions holds:
\begin{enumerate}
\item{$c_j\!>\!0$ for some vertex $a_j$ of $\mathrm{Conv}{\mathcal{A}}$ not equal to $\mathbf{O}$.}
\item{$\mathbf{O}\!\not\in\!{\mathcal{B}}$, we can further order the monomials of $f$ so that
$a_{j'}$ is the unique point of ${\mathcal{B}}$ in the relative\linebreak
\scalebox{.92}[1]{interior of ${\mathcal{B}}$,
$c_{j'}\!>\!0$, and $\prod^{j'}_{i=1}
\left(\mathrm{sign}(b_{j'})\frac{c_i}{b_i}\right)^{\mathrm{sign}(b_{j'})b_i}\!
<\!1$.}}
\end{enumerate}
Finally, if $\sup_{y\in\R^n} g(y)\!=\!\lambda^*\!<\!+\infty$ and
$a_j\!=\!\mathbf{O}$, then $\lambda^*\!=\!c_j$, or $\lambda^*$ is
the unique solution to\\
\scalebox{.9}[1]{$\left(\mathrm{sign}(b_{j'})\frac{c_{j}-\lambda^*}{b_{j}}
\right)^{\mathrm{sign}(b_{j'})b_{j}} \times \prod
\limits_{i\in\{1,\ldots,j'\}\setminus\{j\}}
\left(\mathrm{sign}(b_{j'})\frac{c_i}{b_i} \right)^{\mathrm{sign}(b_{j'})b_i}\!=\!1$}
\linebreak
with $(c_j-\lambda^*)b_jb_{j'}\!>0$; where the equation for
$\lambda^*$ holds iff:
\begin{enumerate}
\addtocounter{enumi}{2}
\item{$\mathbf{O}\!\in\!{\mathcal{B}}$, we can further order the monomials of $f$ so that
$a_{j'}$ is the unique point of ${\mathcal{B}}$ in the relative interior of ${\mathcal{B}}$,
and $c_{j'}\!>\!0$. }
\end{enumerate}
\end{thm}
\noindent
It is easily checked that $c_1b_1b_{j'},\ldots,c_{j'-1}b_{j'-1}b_{j'}\!>\!0$
when Conditions 2 or 3 hold.
While the $3$ cases above may appear complicated, they are easily
understood from a tropical perspective: our cases above correspond to
$4$ different families of generalized Viro diagrams that characterize how the
function $g$ can be bounded from above (or not) on $\R^n$.
Some representative examples are illustrated below:\\
\begin{picture}(200,220)(0,-120)
\put(20,0){\epsfig{file=virocase1b.eps,height=1.3in}}
\put(46,74){Case 1} \put(92,1){{\small $y_1$}} \put(50,36){{\small $y_2$}}
\put(13,17){{\small $\mathbf{O}$}}
\put(130,0){\epsfig{file=virocase2b.eps,height=1.3in}}
\put(162,74){Case 2} \put(220,2){{\small $y_1$}} \put(125,25){{\small $y_2$}}
\put(129,1){{\small $\mathbf{O}$}}
\put(20,-110){\epsfig{file=virocase3b.eps,height=1.3in}}
\put(47,-40){{\bf Not} Case 3} \put(84,-106){{\small $y_1$}}
\put(57,-50){($\lambda^*\!<\!+\infty$)} \put(84,-106){{\small $y_1$}}
\put(64,-70){{\small $y_2$}} \put(13,-93){{\small $\mathbf{O}$}}
\put(135,-110){\epsfig{file=virocase5d.eps,height=1.3in}}
\put(166,-40){Case 3} \put(229,-102){{\small $y_1$}}
\put(178,-50){($\lambda^*\!<\!+\infty$)}
\put(126,-100){{\small $\mathbf{O}$}} \put(148,-115){{\small $y_2$}}
\end{picture}
\noindent
For example, the first two illustrations are meant to encode the
fact that there exist directions in the positive quadrant
along which $g$ increases without bound. Similarly,
the last $2$ illustrations respectively show cases where
$g$ either approaches a supremum as some $y_i\longrightarrow-\infty$
or has a unique maximum in the real plane.
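To make Case 3 concrete, consider the following univariate example (ours): order ${\mathcal{A}}$ as $a_1\!=\!0$, $a_2\!=\!2$, $a_3\!=\!1$, so that $a_{j'}\!=\!a_3$ is the interior point, and take $g(y)\!=\!1-e^{2y}+3e^{y}$. Substituting $t\!=\!e^y$ gives $1+3t-t^2$, whose supremum over $t\!>\!0$ is $\lambda^*\!=\!13/4$ at $t\!=\!3/2$. The Python check below (with $b$ computed by hand for this ordering) verifies that this $\lambda^*$ satisfies the defining product equation from Theorem \ref{thm:bigger}.

```python
import numpy as np

# Univariate illustration (ours) of Case 3:  A ordered as
# a_1 = 0, a_2 = 2, a_3 = 1, so a_{j'} = a_3 is the interior point,
# and g(y) = 1 - e^{2y} + 3 e^{y}.
b = np.array([1.0, 1.0, -2.0])   # signed minors for this ordering (by hand)
c = np.array([1.0, -1.0, 3.0])   # coefficients (c_1, c_2, c_3)
s = np.sign(b[-1])               # sign(b_{j'}) = -1

# With t = e^y, g = 1 + 3t - t^2 attains its supremum 13/4 at t = 3/2:
t = 1.5
lam = 1.0 + 3.0 * t - t ** 2     # lambda* = 3.25

# Defining equation for lambda*: the j = 1 factor uses c_1 - lambda*.
terms = np.array([
    (s * (c[0] - lam) / b[0]) ** (s * b[0]),
    (s * c[1] / b[1]) ** (s * b[1]),
    (s * c[2] / b[2]) ** (s * b[2]),
])
print(np.prod(terms))            # 1.0 up to rounding
```

One can also check the side condition $(c_j-\lambda^*)b_jb_{j'}\!=\!(1-\tfrac{13}{4})(1)(-2)\!>\!0$ for this example.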
\medskip
\noindent
{\bf Sketch of Proof of Theorem \ref{thm:bigger}:}
First, we identify the graph of $g$ over $\R^n$ with the real zero set $Z$ of
$z-g(y)$. Since the supremum of $g$ is unaffected by a linear change of
variables, we can then assume (analogous to Corollary \ref{cor:can}) that
$g$ is of the form\\
\mbox{}\hfill $c+e^{y_1}+\cdots+e^{y_\ell}-e^{y_{\ell+1}}-\cdots-e^{y_n}
+c'e^{\alpha\cdot y}$.\hfill\mbox{}\\
(Note in particular that a linear change of variables for an
exponential sum is, modulo applications of $\exp$ and $\log$, the
same as a monomial change of variables.) Note also that the
classical Hadamard bound for the determinant guarantees that
$\log {\mathcal{C}}(g)$ increases by at worst a factor of $n$ after our
change of variables. Let $P$ denote the convex hull of $\{\mathbf{O},e_1,
\ldots,e_n,e_{n+1},\alpha\}$.
Via a minor variation of the {\bf moment map} (see, e.g., \cite{tfulton})
one can then give a homeomorphism $\varphi : \mathbb{R}^{n+1} \longrightarrow
\mathrm{Int}(P)$ that extends to a map $\bar{\varphi}$ encoding the
``limits at toric infinity'' of $Z$
in terms of data involving $P$. (See also \cite[Sec.\ 6]{tri}.) In particular,
$\bar{\varphi}(Z)$ intersects the facet of $P$ parallel to the
$y_i$ coordinate hyperplane iff $Z$ contains points with $y_i$
coordinates approaching $-\infty$. Similarly, the function $g$
is unbounded iff $\bar{\varphi}(Z)$ intersects a face of $P$ incident to
$e_{n+1}$ and some point in $\{e_1,\ldots,e_n,\alpha\}$. This
correspondence immediately accounts for Condition 1.
This correspondence also accounts for Condition 2, but in
a more subtle manner. In particular, $Z$ has topology depending exactly on
which connected component of the complement of $\nabla_{\mathcal{A}}$ contains
$g$. Thanks to Theorem \ref{THM:DISC}, this can be decided
by determining the sign of an expression involving powers of
ratios of $c_i$ and $b_i$. In particular, Condition
2 is nothing more than an appropriate accounting
of when $\bar{\varphi}(Z)$ intersects a face of $P$ incident to
$e_{n+1}$ and some point in $\{e_1,\ldots,e_n,\alpha\}$.
To conclude, one merely observes that Condition 3 corresponds
to $\bar{\varphi}(Z)$ intersecting a face of $P$ incident to
$\mathbf{O}$ and $e_{n+1}$. In particular, the sign conditions merely guarantee that
$g$ has a unique maximum as some $y_i$ tend to $-\infty$. \qed
\section{The Proofs of Our Main Results: Theorems \ref{THM:DISC} and
\ref{THM:BIG}, and Corollaries \ref{COR:RFEAS} and \ref{COR:4}}
\label{sec:proofs}
\noindent
We go in increasing order of proof length.
\subsection{The Proof of Theorem \ref{THM:DISC}}
\label{sub:proof2}
\medskip
\noindent
{\bf Assertion (1):} It is easily checked
that $Z_\mathbb{C}(g)$ has a degenerate point $\zeta$ iff
\[ {\hat{\cA}} \begin{bmatrix}
c_1 e^{a_1\cdot \zeta}\\
\vdots \\
c_{n+2} e^{a_{n+2}\cdot \zeta}\\
\end{bmatrix}
= \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}. \]
\noindent
In which case, $(c_1 e^{a_1\cdot y},\ldots,c_{n+2} e^{a_{n+2}\cdot y})^T$
must be a generator of the right null space of ${\hat{\cA}}$. On the other hand,
by Cramer's Rule, one sees that $(b_1,\ldots,b_{n+2})^T$ is also
a generator of the right null space of ${\hat{\cA}}$. In particular,
${\mathcal{A}}$ a non-degenerate circuit implies that $b_i\!\neq\!0$ for all $i$.
We therefore obtain that\\
\mbox{}\hfill $(c_1 e^{a_1\cdot \zeta},\ldots,c_{n+2}
e^{a_{n+2}\cdot \zeta})\!=\!\alpha
(b_1,\ldots,b_{n+2})$\hfill\mbox{}\\
for some $\alpha\!\in\!\C^*$. Dividing
coordinate-wise and taking absolute values, we then obtain\\
$\left(|c_1/b_1|e^{a_1\cdot {\mathbf{Re}}(\zeta)},\ldots,|c_{n+2}/b_{n+2}|
e^{a_{n+2}\cdot {\mathbf{Re}}(\zeta)}\right) \!=\!(|\alpha|,\ldots,|\alpha|)$. Taking
both sides to the vector power $(b_1,\ldots,b_{n+2})$ we then clearly obtain\\
\scalebox{.84}[1]{$\left(|c_1/b_1|^{b_1}\cdots
|c_{n+2}/b_{n+2}|^{b_{n+2}}\right)
\left(e^{(b_1 a_1+\cdots+b_{n+2}a_{n+2})\cdot {\mathbf{Re}}(\zeta)}\right)\!=\!
|\alpha|^{b_1+\cdots+b_{n+2}}$.}\linebreak
Since ${\hat{\cA}}(b_1,\ldots,b_{n+2})^T\!=\!\mathbf{O}$, we thus obtain
$\prod\limits^{n+2}_{i=1} \left|\frac{c_i}{b_i}\right|^{b_i}\!=\!1$.
Since the last equation is homogeneous in the $c_i$, its zero set
in ${\mathbb{P}}^{n+1}_\mathbb{C}$ actually defines a closed set of $[c_1:\cdots:c_{n+2}]$.
So we obtain the containment for $\nabla_{\mathcal{A}}$.
\scalebox{.95}[1]{The assertion on the complexity of computing
$(b_1,\ldots,b_{n+2})$}\linebreak
then follows immediately from the classic efficient parallel
algorithms for linear algebra over $\mathbb{R}$ \cite{csanky}. \qed
\smallskip
\noindent
{\bf Assertion (2):}
We can proceed by
almost exactly the same argument as above, using one simple additional
observation:
$e^{a_i\cdot \zeta}\!\in\!\mathbb{R}_+$ for all $i$ when $\zeta\!\in\!\mathbb{R}$. So then,
we can replace our use of absolute value by a sign factor, so that
all real powers are well-defined. In particular, we immediately obtain
the ``$\Longleftarrow$'' direction of our desired equivalence.
To obtain the ``$\Longrightarrow$'' direction, note that when\\
\mbox{}\hfill
$Z_\mathbb{R}\!\left(\sum^{n+2}_{i=1}
c_i e^{a_i\cdot y}\right)$ \hfill \mbox{}\\
has a degeneracy $\zeta$, we directly
obtain $e^{a_i\cdot \zeta}\!=\!
\mathrm{sign}(b_1c_1)b_i/c_i$ for all $i$ (and the constancy of $\mathrm{sign}(b_ic_i)$
in particular). We thus obtain the system of equations\\
\mbox{}\hfill
$\left(e^{(a_2-a_1)\cdot \zeta},\ldots,e^{(a_{n+1}-a_1)\cdot \zeta}\right)
=\left(\frac{b_2c_1}{b_1c_2},\ldots,\frac{b_{n+1}c_1}{b_1c_{n+1}}\right)$,
\hfill\mbox{}\\
and $a_2-a_1,\ldots,a_{n+1}-a_1$ are linearly independent since ${\mathcal{A}}$ is a
circuit. So, employing Proposition \ref{prop:monochange}, we can easily solve
the preceding system for $\zeta$ by taking the logs of the coordinates of
$\left(\frac{b_2c_1}{b_1c_2},\ldots,
\frac{b_{n+1}c_1}{b_1c_{n+1}}\right)^{[a_2-a_1,\ldots,a_{n+1}-a_1]^{-1}}$.
\qed
\subsection{Proving Corollary \ref{COR:RFEAS} and Theorem \ref{THM:BIG} }
\label{sub:thresh}
\medskip
\noindent
{\bf Corollary \ref{COR:RFEAS} and Assertion (2) of Theorem
\ref{THM:BIG}:}\linebreak
\scalebox{.96}[1]{Since our underlying family of putative hard problems
shrinks}\linebreak as $\delta$ decreases, it clearly suffices to prove the
case $\delta\!<\!1$.
So let us assume henceforth that $\delta\!<\!1$. Let us also define
$\qsat_\mathbb{R}$ to be the problem of deciding whether an input
{\bf quartic} polynomial $f\!\in\!\bigcup_{n\in\mathbb{N}} \mathbb{R}[x_1,\ldots,x_n]$
has a real root or not. $\qsat_\mathbb{R}$ (referred to as $4$-FEAS in \cite{bcss})
is one of the fundamental ${\mathbf{NP}}_\mathbb{R}$-complete problems (see Chapter
4 of \cite{bcss}).
That ${\mathbf{SUP}}\!\in\!{\mathbf{NP}}_\mathbb{R}$ follows immediately from
the definition of ${\mathbf{NP}}_\mathbb{R}$. So it suffices to prove that\\
\mbox{}\hfill
${\mathbf{SUP}}\!\left(
\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]
\right)$\hfill\mbox{}\\
is ${\mathbf{NP}}_\mathbb{R}$-hard. We will do this by giving an explicit
reduction of $\qsat_\mathbb{R}$ to\\
\mbox{}\hfill ${\mathbf{SUP}}\!\left(
\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]\right)$,\hfill\mbox{}\\
passing through ${\mathbf{FEAS}}_+
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}$
\vspace{-.4cm}
\noindent
along the way.
\medskip
To do so, let $f$ denote any $\qsat_\mathbb{R}$ instance, involving, say,
$n$ variables. Clearly, $f$ has no more than \scalebox{.7}[.7]
{$\begin{pmatrix} n+4\\ 4\end{pmatrix}$}\linebreak
monomial terms.
Letting $\qsat_+$ denote the natural\linebreak variant of $\qsat_\mathbb{R}$ where one
instead asks if $f$ has a root in $\R^n_+$, we will first need to
show that $\qsat_+$ is ${\mathbf{NP}}_\mathbb{R}$-hard as an intermediate step.
This is easy, via the introduction of slack variables: using $2n$ new
variables $\left\{x^\pm_i\right\}^n_{i=1}$
and\linebreak
forming the polynomial $f^\pm(x^\pm)\!:=\!f\!\left(x^+_1-x^-_1,\ldots,
x^+_n-x^-_n\right)$, it is clear that $f$ has a root in $\R^n$ iff
$f^\pm$ has a root in $\mathbb{R}^{2n}_+$. Furthermore, we easily see that\\
\mbox{}\hfill $\mathrm{size}(f^\pm)\!=\!(16+o(1))\mathrm{size}(f)$.\hfill\mbox{}\\
So $\qsat_+$ is
${\mathbf{NP}}_\mathbb{R}$-hard. We also observe that we may\linebreak
restrict the inputs
to quartic polynomials with full-\linebreak
dimensional Newton polytope, since
the original proof for the ${\mathbf{NP}}_\mathbb{R}$-hardness of $\qsat_\mathbb{R}$ actually
involves polynomials having nonzero constant terms and nonzero
$x^4_i$ terms for all $i$ \cite{bcss}.
So now let $f$ be any $\qsat_+$ instance with, say, $n$\linebreak variables.
Let us also define, for any $M\!\in\!\mathbb{N}$, the polynomial
$t_M(z)\!:=\!1+z^{M+1}_1+\cdots+z^{M+1}_M-(M+1)z_1\cdots z_M$.
One can then check via the Arithmetic-Geometric Inequality
\cite{hlp} that $t_M$ is nonnegative on $\mathbb{R}^M_+$, with a unique
root at $z\!=\!(1,\ldots,1)$.
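The claimed properties of $t_M$ are easy to confirm numerically; the following quick Python check (our own sanity check, not part of the proof) evaluates $t_M$ at $z\!=\!(1,\ldots,1)$ and at random points of $\mathbb{R}^M_+$.

```python
import numpy as np

# Numerical sanity check (ours) of the AM-GM claim for t_M:
# t_M(z) = 1 + z_1^{M+1} + ... + z_M^{M+1} - (M+1) z_1 ... z_M
# is nonnegative on R_+^M and vanishes at z = (1, ..., 1).
rng = np.random.default_rng(0)
M = 6

def t(z):
    return 1.0 + np.sum(z ** (M + 1)) - (M + 1) * np.prod(z)

print(t(np.ones(M)))                                  # 0.0
samples = [t(rng.uniform(0.1, 2.0, size=M)) for _ in range(1000)]
print(min(samples) >= 0.0)                            # True
```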
Note also that $f^2$ has no more than \scalebox{.7}[.7]{$\begin{pmatrix}
n+4\\ 4\end{pmatrix}^2$} monomial terms.
Forming the polynomial $F(x,z)\!:=\!f(x)^2+t_M(z)$
with $M\!:=\!\left\lceil \begin{pmatrix} n+4\\ 4\end{pmatrix}^{2/\delta}
\right\rceil$, we see that $f$ has a root in $\R^n_+$ iff $F$ has a root in
$\mathbb{R}^{n+M}_+$. It is also easily checked that $F\!\in\!{\mathcal{F}}^{**}_{N,N+k}$ with
$k\!\leq\!N^{\delta'}$, where $N\!:=\!n+M$ and $0\!<\!\delta'\!\leq\!\delta$.
In particular,\\
\mbox{}\hfill
$k\!<\!\begin{pmatrix} n+4\\ 4\end{pmatrix}^2\!\leq\!\left\lceil\begin{pmatrix}
n+4\\ 4\end{pmatrix}^{2/\delta}\right\rceil^\delta\!=\!M^\delta\!<\!
(n+M)^\delta$.\hfill\mbox{}\\
So we must now have that\\
\mbox{}\hfill
${\mathbf{FEAS}}_+
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup_{\substack{n\in \mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^\delta}\cap\mathbb{R}[x_1,\ldots,x_n]
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}
$\hfill\mbox{}\\
is ${\mathbf{NP}}_\mathbb{R}$-hard. (A small digression allows us to succinctly prove that\\
\mbox{}\hfill ${\mathbf{FEAS}}_\mathbb{R}
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$($}}}
\bigcup_{\substack{n\in \mathbb{N}\\ 0<\delta'<\delta}}{\mathcal{F}}^{**}_{n,n+n^\delta}
\cap\mathbb{R}[x_1,\ldots,x_n]
\text{\raisebox{-.3cm}{\scalebox{1}[2.3]{$)$}}}
$\hfill\mbox{}\\
is ${\mathbf{NP}}_\mathbb{R}$-hard as well: we simply repeat the argument from the last
paragraph, but use $\qsat_\mathbb{R}$ in place of $\qsat_+$, and define
$F(x,z)\!:=\!f(x)^2+t_M(z^2_1,\ldots,z^2_M)$ instead.)
To conclude, note that $F(x,z)$ is nonnegative on $\mathbb{R}^{n+M}_+$.
So by checking whether $-F$ has supremum $\geq\!0$ in $\mathbb{R}^{n+M}_+$, we can
decide if $F$ has a root in $\mathbb{R}^{n+M}_+$. In other words,\\
\mbox{}\hfill
${\mathbf{SUP}}\!\left(\bigcup\limits_{\substack{n\in\mathbb{N}\\ 0<\delta'<\delta}}
{\mathcal{F}}^{**}_{n,n+n^{\delta'}}\cap\mathbb{R}[x_1,\ldots,x_n]\right)$\hfill \mbox{}\\
must be ${\mathbf{NP}}_\mathbb{R}$-hard as well. So we are done. \qed
\medskip
\noindent
{\bf Assertion (0) of Theorem \ref{THM:BIG}:} Letting $(f,{\varepsilon})$ denote
any instance of ${\mathbf{FSUP}}\!\left(\bigcup_{n\in\mathbb{N}} {\mathcal{F}}^{**}_{n,n+1}\right)$,
first note that via\linebreak Corollary \ref{cor:can}
we can assume that\\
\mbox{}\hfill $f(x)\!=\!c_1+x_1+\cdots+x_\ell-x_{\ell+1}-\cdots
-x_n$,\hfill\mbox{}\\
after a computation in $\mathbf{NC}^1_\mathbb{R}$. Clearly then, $f$ has an\linebreak
unbounded
supremum iff $\ell\!\geq\!1$. Also, if $\ell\!=\!0$, then
the\linebreak supremum of $f$ is exactly $c_1$. So
${\mathbf{FSUP}}\!\left(\bigcup_{n\in\mathbb{N}}
{\mathcal{F}}^{**}_{n,n+1}\right)\!\in\!\mathbf{NC}^1_\mathbb{R}$. That
${\mathbf{SUP}}\!\left(\bigcup_{n\in\mathbb{N}} {\mathcal{F}}^{**}_{n,n+1}\right)\!\in\!\mathbf{NC}^1_\mathbb{R}$ is
obvious as well: after checking the signs of the $c_i$, we need merely
decide the sign of $c_1-\lambda$. \qed
\begin{rem}
\label{rem:nc1}
Note that checking whether a given $f\!\in\!{\mathcal{F}}_{n,n+1}$
lies in ${\mathcal{F}}^*_{n,n+1}$ can be done within $\mathbf{NC}^2$:
one\linebreak simply finds $d\!=\!\dim \mathrm{Supp}(f)$ in $\mathbf{NC}^2$ by
computing the rank of the matrix whose columns are $a_2-a_1,
\ldots,a_m-a_1$ (via the parallel algorithm of Csanky \cite{csanky}),
and then checks whether $d\!=\!n$. $\diamond$
\end{rem}
\noindent
{\bf Assertion (1):} We will first derive the ${\mathbf{HPTAS}}$ result.
Let us assume $f\!\in\!{\mathcal{F}}^{**}_{n,n+2}$ and observe the following
algorithm:
\newpage
\begin{algor}
\label{algor:fsup}
\mbox{}\\
{\bf Input:} A coefficient vector $c\!:=\!(c_1,
\ldots,c_{n+2})$, a (possibly degenerate) circuit
${\mathcal{A}}\!=\!\{a_1,\ldots,a_{n+2}\}$ of cardinality $n+2$, and
a precision parameter ${\varepsilon}\!>\!0$. \\
{\bf Output:} A pair
\mbox{}\hfill $(\bar{x},\bar{\lambda})\!\in\!(\mathbb{R}_+\cup\{0,+\infty\})^n\times
(\mathbb{R}\cup\{+\infty\})$\hfill \mbox{}\\
with $\bar{x}\!=\!(\bar{x}_1,\ldots,\bar{x}_n)$ (resp.\
$\bar{\lambda}$) an ${\mathbf{HPTAS}}$ for $x^*$ (resp.\ $\lambda^{*}$)
where $f(x)\!:=\!\sum^{n+2}_{i=1}c_ix^{a_i}$ and
$\lambda^*\!:=\!\sup_{x\in\R^n_+} f\!=\!\lim_{x\rightarrow x^*}f(x)$
for some $x^{*}\!=\!(x^*_1,\ldots,x^*_n)\!\in\!(\mathbb{R}_+\cup\{0,+\infty\})^n$.
\medskip
\noindent
{\bf Description:}
\vspace{-.2cm}
\begin{enumerate}
\item{If $c_i\!>\!0$ for some $i$ with $a_i\!\neq\!\mathbf{O}$ a vertex of
$\mathrm{Conv} {\mathcal{A}}$ then output\\ ``{\tt $f$ tends to $+\infty$ along a
curve of the form\\ $\{ct^{a_i}\}_{t\rightarrow +\infty}$}''\\
and {\tt STOP}.}
\item{Let $b\!:=\!(b_1,\ldots,b_{n+2})$ where
$b_j$ is $(-1)^j$ times the\linebreak determinant
of the matrix with columns $1\times a_1,\ldots,$\linebreak
$\widehat{1\times a_j}, \ldots,a_{n+2}$ ($\widehat{(\cdot)}$ denoting omission).
If $b$ or $-b$ has a unique negative
coordinate $b_{j'}$, and $c_{j'}$ is the unique negative coordinate of $c$,
then do the following:
\begin{enumerate}
\item{Replace $b$ by $-\mathrm{sign}(b_{j'})b$
and then reorder $b$, $c$, and ${\mathcal{A}}$ by the same
permutation so that $b_{j'}\!<\!0$ and [$b_i\!>\!0$ iff $i\!<\!j'$]. }
\item{If $a_i\!\neq\!\mathbf{O}$ for all $i\!\in\!\{1,\ldots,j'\}$ and\\
\mbox{}\hfill
$\prod^{j'}_{i=1}
\left(\mathrm{sign}(b_{j'})\frac{c_i}{b_i}\right)^{\mathrm{sign}(b_{j'})b_i}\!<\!1$
\hfill\mbox{}\\
then output\\ ``{\tt $f$ tends to $+\infty$ along a
curve of the form\\ $\{ct^{a_{j'}}\}_{t\rightarrow +\infty}$}''\\
and {\tt STOP}.}
\item{If $a_j\!=\!\mathbf{O}$ for some $j\!\in\!\{1,\ldots,j'\}$ then
output\\
``{\tt $f(z)$ tends to a supremum of $\bar{\lambda}$ as
$z$ tends\\ to the point $\bar{x}$ on the $(j'-2)$-dimensional\\
sub-orbit corresponding to $\{a_1,\ldots,a_{j'}\}$.}'',\\
where $x\!\in\!\mathbb{R}^{j'-2}_+$ is the unique solution to the\linebreak
binomial system\\
\mbox{}\hfill
$\left(x^{a_2-a_1},\ldots,x^{a_{j'-1}-a_1}\right)
=\left(\frac{b_2c_1}{b_1c_2},\ldots,\frac{b_{j'-1}c_1}{b_1c_{j'-1}}\right)$,
\hfill\mbox{}\\
$\bar{x}$ is a $(1+{\varepsilon})$-factor approximation\footnote{We compute $\bar{x}$
and $\bar{\lambda}$ via Proposition \ref{prop:monochange} and the
Brent-Salamin Theorem.} of $x$, $\lambda$ is the unique solution of\\
\mbox{}\hspace{-1.3cm}
\scalebox{.8}[1]{$\left(\mathrm{sign}(b_{j'})\frac{c_{j}-\lambda}{b_{j}}
\right)^{\mathrm{sign}(b_{j'})b_{j}} \times \prod
\limits_{i\in\{1,\ldots,j'\}\setminus\{j\}}
\left(\mathrm{sign}(b_{j'})\frac{c_i}{b_i} \right)^{\mathrm{sign}(b_{j'})b_i}\!=\!1$}
\linebreak
with $(c_j-\lambda)b_jb_{j'}\!>0$, and
$\bar{\lambda}$ is
a $(1+{\varepsilon})$-factor approximation$^8$ of $\lambda$;
and {\tt STOP}.}
\end{enumerate} }
\item{Output\\
``{\tt $f$ approaches a supremum of $c_j$ as
all $x^{a_i}$ with\\ $a_i$ incident to $a_j$ approach $0$.}'',\\
where $a_j\!=\!\mathbf{O}$, and {\tt STOP}.}
\end{enumerate}
\end{algor}
Our proof then reduces to proving correctness and a suitable complexity
bound for Algorithm \ref{algor:fsup}. In particular,
correctness follows immediately from Theorem \ref{thm:bigger}. So we now
focus on a complexity analysis.
Steps 1 and 3 can clearly be done within $\mathbf{NC}^1_\mathbb{R}$, so let us
focus on Step 2.
For Step 2, the dominant complexity comes from Part (b).
(Part (a) can clearly be done in $\mathbf{NC}^1_\mathbb{R}$, and Part (c) can clearly
be done in $\mathbf{NC}^2_\mathbb{R}$ via Csanky's method \cite{csanky}.)
The latter can be done by taking the logarithm
of each term, thus reducing to checking the sign of a linear combination of
logarithms of positive real numbers.
So the arithmetic complexity of our algorithm is
$O\!\left(\log {\mathcal{C}}(f)+\log\log\frac{1}{{\varepsilon}}\right)$ and we thus
obtain our ${\mathbf{HPTAS}}$ result.
The proof that
${\mathbf{SUP}}\!\left(\bigcup_{n\in\mathbb{N}}{\mathcal{F}}^{**}_{n,n+2}\right)\!\in\!\mathbf{P}_\mathbb{R}$
is almost completely identical. \qed
\medskip
Note that just as in Remark \ref{rem:nc1}, checking whether a given
$f\!\in\!{\mathcal{F}}_{n,n+2}$ lies in ${\mathcal{F}}^*_{n,n+2}$ can be done within $\mathbf{NC}^2$
by \linebreak
computing $d\!=\!\dim \mathrm{Supp}(f)$ efficiently. Moreover,
deciding whether a circuit is degenerate (and
extracting ${\mathcal{B}}$ from ${\mathcal{A}}$ when ${\mathcal{A}}$ is degenerate) can be done in $\mathbf{NC}^2$
as well since this is ultimately the evaluation of $n+2$ determinants.
\subsection{The Proof of Corollary \ref{COR:4}}
\label{sub:4}
\noindent
{\bf Assertion (0):} Since the roots of $f$ in $\mathbb{R}_+$ are
unchanged under multiplication by monomials, we can clearly
assume $f\!\in\!{\mathcal{F}}^{**}_{1,3}\cap\mathbb{R}[x_1]$. Moreover, via
the classical Cauchy bounds on the size of roots of
polynomials, it is easy to show that the log of any root of $f$
is $O(\log{\mathcal{C}}(f))$. We can then invoke Theorem 1 of \cite{rojasye}
to obtain our desired ${\mathbf{HPTAS}}$ as follows:
If $D\!:=\!\deg(f)$,
\cite[Theorem 1]{rojasye} tells us that we can count exactly the number of
positive roots of
$f$ using $O(\log^2 D)$ arithmetic operations, and ${\varepsilon}$-approximate
all the roots of $f$ in $(0,R)$ within
$O\!\left((\log D)\log\left(D\log\frac{R}{{\varepsilon}}\right)\right)$ arithmetic
operations. Since we can take $\log R\!=\!O(\log {\mathcal{C}}(f))$ via our
root bound observed above, we are done. \qed
\medskip
\noindent
{\bf Assertion (1):}
Writing any $f\!\in\!{\mathcal{F}}^{**}_{1,4}\cap\mathbb{R}[x_1]$ as
$f(x)\!=\!c_1+c_2x^{a_2}+c_3x^{a_3}+c_4x^{a_4}$ with
$a_2\!<\!a_3\!<\!a_4$, note
that $f$ has unbounded supremum on $\mathbb{R}_+$ iff $c_4\!>\!0$.
So let us assume $c_4\!<\!0$.
Clearly then, the supremum of $f$ is attained either at a
critical point in $\mathbb{R}_+$ or at $0$.
But then, any positive critical point is a positive root of a trinomial,
and by Assertion (0), such critical points must admit an
${\mathbf{HPTAS}}$. Similarly, since $f$ is a tetranomial (and thus
evaluable within $O(\log \deg(f))$ arithmetic operations),
we can efficiently approximate (as well as
efficiently check inequalities involving) $\sup_{x\in\mathbb{R}_+} f$.
So we are done. \qed
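The strategy behind Assertion (1) is easy to illustrate numerically. The Python sketch below is our own toy version (it assumes integer exponents and uses floating-point root-finding via \texttt{numpy.roots} rather than the ${\mathbf{HPTAS}}$ of \cite{rojasye}): differentiate the tetranomial, find the positive roots of the resulting trinomial, and compare the critical values against the value at $0$.

```python
import numpy as np

# Illustrative tetranomial (our example): f(x) = 1 + 2x + x^2 - x^4 on R_+.
# The leading coefficient is negative, so the supremum is attained at a
# critical point in R_+ or at x = 0 (cf. Assertion (1)).
coeffs = {0: 1.0, 1: 2.0, 2: 1.0, 4: -1.0}

# f'(x) = 2 + 2x - 4x^3 is a trinomial; find its positive real roots.
dpoly = np.zeros(4)                      # degree-3 polynomial, highest first
for a, c in coeffs.items():
    if a > 0:
        dpoly[3 - (a - 1)] += a * c
roots = np.roots(dpoly)
crit = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

f = lambda x: sum(c * x ** a for a, c in coeffs.items())
sup = max([f(0.0)] + [f(x) for x in crit])
print(sup)                               # supremum 3.0, attained at x = 1
```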
\section*{Acknowledgements}
We thank Peter B\"urgisser, Felipe Cucker, Johan Hastad, and
Gregorio Malajovich for discussions on complexity classes over $\mathbb{R}$.
The second author would also like to thank MSRI and the Wenner Gren
Foundation for their support during the completion of this paper.
In particular, special thanks go to Mikael Passare and
Boris Shapiro of Stockholm University for their hospitality.
\bibliographystyle{abbrv}
\chapter{Introduction}
Intuitively, an \emph{open dynamical system} is a machine or worker with an interface by which to interact with whatever else is out there. Open dynamical systems can be organized as circuits or control loops, so that they affect each other by their outward expressions of internal work, and thereby possibly form a more complex worker. The framework here is fractal---or more precisely \emph{operadic}---in its structure: organizations of workers can be nested into arbitrary hierarchies of abstraction.
\begin{figure}[H]
\[
\begin{tikzpicture}[oriented WD, bb min width =.5cm, bbx=.5cm, bb port sep =1,bb port length=.08cm, bby=.14cm]
\path (0,0) pic {SmallNestingPic};
\end{tikzpicture}
\]
\caption{
A nesting of interacting open dynamical systems: the $X_{i,j}$ are wired together to form the $Y_i$, which are wired together to form $Z$; often these groupings are chosen to create new abstractions, e.g.\ in logical circuits or control systems. The permanence of the above-displayed wiring pattern is exactly what is relaxed in this paper; a dynamic organization is one in which interactions may change dynamically based on what flows within the system.
}\label{fig.nesting}
\end{figure}
But if we think about some things that interact to do work in the real world, we often notice that the organization itself---the connections themselves---change. Unlike what we see in \cref{fig.nesting}, the way we connect this hour may be different from the way we connect next hour; in particular, our interfaces go in and out of contact. At the end of this paragraph, look away from the page for a few seconds, think about some things you know that interact together or influence each other, and ask yourself three questions about them: Do these things ever stop interacting? If so, do they ever start interacting again? And how is it decided?
\section{Accounting for organizational change}
We propose that the metaphysical nature and scope of these questions should be complemented by some sort of guard rails to keep our contemplation on track. This is the role of mathematics in our work. It provides a symbolic \emph{accounting system} which is articulate enough to facilitate one in explicating an example and another in asking questions about it.
The category $\Cat{Poly}$ of polynomial functors in one variable is an ergonomic mathematical structure with many applications and spin-off categorical gadgets. We will begin in \cref{chap.org} by recalling one such gadget from \cite{spivak2021learners}: a category-enriched multicategory ${\mathbb{O}\Cat{rg}}$ that will be the conceptual centerpiece of our accounting system. Its objects are polynomial functors in one variable, and its morphisms are polynomial coalgebras related to a certain monoidal closed structure on $\Cat{Poly}$. We will see that the morphisms in ${\mathbb{O}\Cat{rg}}$ are intuitively ``collective organizational patterns that change dynamically''.
Leaving the mathematics aside until \cref{chap.org}---at which point we will have little more to say about the background philosophy---let's return to the question ``how is the organizational pattern between various systems decided, moment-by-moment?'' Let's mesh this question with the idea that the so-organized systems can be nested into arbitrary hierarchies of abstraction. And let's think about all this in the frame of a certain worldview which we invite you the reader to engage with like a fictional movie, not intended to convince you of fact but instead simply to convey an experience. Here goes.
In this worldview, we notice that everything that makes any sense to us happens to be a collective. A cell body, a human body, an antibody, Topos Institute, an idea, an airport, a sentence, a mathematical definition, a grain of sand, ... each is a collective of interacting parts that may themselves be collectives. Stay with an example or two---e.g.\ any of the above---for now, not the counterexamples or counterpoints yet, because there are plenty of examples (collectives) and they point more toward the subject-matter of this paper.
It's quite often the case that these collectives, like the ship of Theseus, are not permanent organizations that are fixed for all time; they are adapting to forces from within and without the system. Even a grain of sand can break or melt; even a mathematical definition can be refactored. So then what's outside the system, generating these forces that influence it? We imagine that what's outside is in fact more of the same kind of stuff as what's inside, just not as cohesive perhaps. Let's go full-on woo: if the universe is a big system, then maybe the sort of thing that happens in our head is---in some way---just like what happens outside of it. Maybe the motives that organize Brandon and David into a collaborative thinking and paper-writing unit are, in some reasonable account, of the same nature as the motives that organize each one of them into a body.
But is this right? How could you check such a claim? One would need to give a reasonable account of it, and since we as authors can't currently give such an account, we don't make this claim. Instead, what we present here is an \emph{accounting system} in which the woo-person (or would it instead be the reductive materialist?) who thought that what went on inside the head was somehow the same as what went on outside, could endeavor to provide such an account of their thinking.
\section{Dynamic categorical structures}
Our main definition in this paper is what we call a \emph{dynamic categorical structure}. We might poetically say that a \emph{dynamic} category is one where the morphisms between two objects change in response to what flows between those objects. To define it, we first refactor the definition of ${\mathbb{O}\Cat{rg}}$ from \cite{spivak2021learners} from an operad to a monoidal double category; we then define a dynamic *thing* to be a *thing* enriched in ${\mathbb{O}\Cat{rg}}$. Once these are defined, we give a couple examples: a \emph{prediction market} operad and a \emph{deep learning} monoidal category. In the prediction market, a population $Y$ predicts a distribution based on the predictions of its member populations $X_i$ weighted by their reputations, and the reputations change dynamically based on the returned outcome. A similar story holds with deep learning.
We thank you the reader for having postponed your counterpoints and counterexamples, and we ask you to reengage both skepticism and interest as you see fit. We invite you to ask openly: what's not a collective of interacting parts that are themselves collectives? Nature, love, or experience perhaps? It all depends on how you look. What we present here is an accounting system for making sense of a certain sort of experiential pattern; the matter itself is whatever it is.
\section{Acknowledgments}
The influences on this paper are too numerous and unranked to name, but in particular we thank Sophie Libkind for stimulating conversations, and we thank Scott Garrabrant for teaching us about Kelly betting, which partially inspired \cref{sec.kelley}.
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0348.
\chapter{The Monoidal Double Category ${\mathbb{O}\Cat{rg}}$}\label{chap.org}
In \cite{spivak2021learners}, the second author defined a category-enriched multicategory ${\mathbb{O}\Cat{rg}}$, whose objects are polynomials and whose morphisms are polynomial coalgebras. In this chapter, we describe how ${\mathbb{O}\Cat{rg}}$ in fact more naturally takes the form of a monoidal double category, with coalgebras as horizontal morphisms, maps of polynomials as vertical morphisms, and the Dirichlet tensor product $\otimes$ providing the monoidal structure.%
\footnote{In fact, ${\mathbb{O}\Cat{rg}}$ is a duoidal double category, with a second monoidal structure given by $\mathbin{\triangleleft}$, but we will not use that here.}
Before we begin, recall that a polynomial is a functor $p\colon \Cat{Set}\to\Cat{Set}$ which is isomorphic to a sum of representables; following \cite{spivak2021learners}, we denote $p,q\in\Cat{Poly}$ by
\begin{equation}\label{eqn.poly_notation}
p = \sum_{I \in p(1)} \mathcal{y}^{p[I]} \qquad\text{and}\qquad q = \sum_{J \in q(1)} \mathcal{y}^{q[J]}
\end{equation}
and refer to each $I\in p(1)$ as a \emph{$p$-position} and to each $i\in p[I]$ as a \emph{$p$-direction at $I$}. A map $\phi\colon p\to q$ of polynomials is a natural transformation. Combinatorially, $\phi$ provides: for each $I\in p(1)$ a choice of $\phi(I)\in q(1)$ and for each $j\in q[\phi(I)]$ a choice of $\phi(I,j)\in p[I]$.%
\footnote{In \cite{spivak2021learners}, what we denote $\phi(I)$ is denoted $\phi_1(I)$ and what we denote $\phi(I,j)$ is denoted $\phi^\sharp_I(j)$.}
\section{$[p,q]$-coalgebras}
We first recall the definitions of the internal-hom polynomials $[p,q]$ and concretely describe the category of $[p,q]$-coalgebras, which will form the category of morphisms from $p$ to $q$ in the underlying bicategory of ${\mathbb{O}\Cat{rg}}$.
\begin{definition}\label{coalgebras}
For polynomials $p,q\in\Cat{Poly}$ as in \eqref{eqn.poly_notation},
their \emph{internal hom} with respect to the tensor product $\otimes$ is the polynomial
\begin{equation}\label{eqn.internal_hom}
[p,q]\coloneqq \sum_{\phi\colon p \to q} \mathcal{y}^{\sum\limits_{\;I \in p(1)} q[\phi(I)]}
\end{equation}
It can also be written $[p,q]\cong\prod_{I\in p(1)}\sum_{J\in q(1)}\prod_{j\in q[J]}\sum_{i\in p[I]}\mathcal{y}$.
\end{definition}
For intuition, a $[p,q]$-coalgebra (denoted $p \xslashar{} q$) is a machine that outputs maps $\phi\colon p\to q$ and that inputs what \emph{flows} between them: pairs $(I,j)$ where $I\in p(1)$ is a position of $p$, which ``flows'' to $q$ as $J\coloneqq\phi(I)\in q(1)$, and $j\in q[J]$ is a direction of $q$, which ``flows'' backward to $p$ as $\phi(I,j)\in p[I]$. More precisely, using \cite[Definition 2.10]{spivak2021learners}, we define $[p,q]$-coalgebras as follows.
\begin{definition}
The category $[p,q]\tn{-}\Cat{Coalg}$ has as objects pairs $\S = (S,\beta)$ where $S$ is a set and $\beta\colon S \to [p,q] \mathbin{\triangleleft} S$ is a function, and where a morphism from $\S$ to $\S'$ is a function $f\colon S \to S'$ making \eqref{eqn.coalg_map} commute.
\begin{equation}\label{eqn.coalg_map}
\begin{tikzcd}
S \rar{\beta} \dar[swap]{f} & {[p,q]} \mathbin{\triangleleft} S \dar{{[p,q]} \mathbin{\triangleleft} f} \\
S' \rar[swap]{\beta'} & {[p,q]} \mathbin{\triangleleft} S'
\end{tikzcd}
\end{equation}
We refer to $S$ as the \emph{state set} and to each element $s\in S$ as a \emph{state}.
\end{definition}
Unwinding this definition, it is useful to break $\beta$ into two functions $\beta\coloneqq(\tn{act}^\beta,\tn{upd}^\beta)$, an \emph{action} function
\[\tn{act}^\beta\colon S\to\Cat{Poly}(p,q)=[p,q](1)\]
and, for each state $s \in S$, an \emph{update} function
\[\tn{upd}^\beta_s\colon \sum_{I \in p(1)} q\left[\tn{act}^\beta_s(I)\right] \to S.\]
For a state $s\in S$ and position $I\in p(1)$ we often write $\tn{act}^\beta_s\colon p\to q$ and $\tn{upd}^\beta_s(I)\colon q[\tn{act}^\beta_s(I)]\to S.$ We may suppress the $\beta$ when it is clear from context, writing $\tn{act}_s$ and $\tn{upd}_s$. A coalgebra map $\S\to\S'$ is a function $S\to S'$ between the state sets that preserves actions and updates.
When, for each $s \in S$, the update $\tn{upd}_s$ is the constant function sending everything to $s$, we say the coalgebra $\S$ is \emph{static}, as it remains constantly at $s$ regardless of the inputs $I \in p(1)$ and $j\in q[\tn{act}_s(I)]$ flowing between $p$ and $q$.
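To fix intuition, the action/update data of a coalgebra---and the static case---can be sketched computationally. The following is a toy encoding, not part of the paper's formalism: a polynomial map $\phi$ is represented as a pair of functions (on positions and backward on directions), and a coalgebra as states together with act and upd.

```python
# Toy encoding (hypothetical, for intuition only) of a [p,q]-coalgebra with a
# finite state set. A map of polynomials phi: p -> q is a pair
# (on_pos, backward): on_pos sends a p-position I to a q-position, and
# backward sends (I, j), with j a q-direction at on_pos(I), to a p-direction at I.

class Coalgebra:
    """States S with act: S -> Poly(p,q) and upd: (s, I, j) -> next state."""
    def __init__(self, states, act, upd):
        self.states, self.act, self.upd = states, act, upd

def singleton(phi):
    """The singleton coalgebra {phi}: one state, constant updates (static)."""
    return Coalgebra(["*"], act=lambda s: phi, upd=lambda s, I, j: "*")

# Example interface: p has positions 0,1 with directions {0,1}; q = y^3 has
# one position "c" with directions {0,1,2}.
phi = (lambda I: "c",        # every p-position maps to q's position "c"
       lambda I, j: j % 2)   # a q-direction 0,1,2 flows back to p as 0 or 1

S = singleton(phi)
on_pos, backward = S.act("*")
```

Here the singleton coalgebra never changes state, matching the definition of a static coalgebra above.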
\begin{example}\label{ex.single_state}
A special case of a static $[p,q]$-coalgebra is given by a map $\phi \in \Cat{Poly}(p,q)$. For each such $\phi$, there is a coalgebra $\{\phi\}$ with a singleton state set and with $\tn{act}^\beta$ sending the point to $\phi$; we call it a \emph{singleton} coalgebra.
A coalgebra is static iff it is a coproduct of singleton coalgebras.
\end{example}
\section{Composition of hom-coalgebras}\label{sec.compose_hom_coalg}
We now describe how $[p,q]$-coalgebras behave like morphisms from $p$ to $q$.
\begin{proposition}\label{prop.def_org}
The categories $[p,q]\tn{-}\Cat{Coalg}$ form the hom-categories in a bicategory ${\mathbb{O}\Cat{rg}}$, which has polynomials as objects.
\end{proposition}
We use ${\mathbb{O}\Cat{rg}}$ to denote both the bicategory from \cref{prop.def_org} and the categorical operad in \cite[Definition 2.19]{spivak2021learners}, as both are derived from the monoidal double category ${\mathbb{O}\Cat{rg}}$ described in the following sections. For now, we merely present the identities and composites in this bicategory. Identities are easy: the identity morphism in ${\mathbb{O}\Cat{rg}}(p,p)=[p,p]\tn{-}\Cat{Coalg}$ is given by the one-state coalgebra $\idcoalg{p}$.
The composition functor ${\mathbb{O}\Cat{rg}}(p,q)\times{\mathbb{O}\Cat{rg}}(q,r)\longrightarrow{\mathbb{O}\Cat{rg}}(p,r)$ is defined as the composite:
\[[p,q]\tn{-}\Cat{Coalg} \times [q,r]\tn{-}\Cat{Coalg} \to \left([p,q] \otimes [q,r]\right)\tn{-}\Cat{Coalg} \longrightarrow [p,r]\tn{-}\Cat{Coalg},\]
where the first functor is the lax monoidality of $(-)\tn{-}\Cat{Coalg}\colon\Cat{Poly} \to \Cat{Cat}$, as described in \cite[Proposition 2.13]{spivak2021learners}, and the second is given by applying $(-)\tn{-}\Cat{Coalg}$ to the usual ``composition'' map of internal-homs $[p,q] \otimes [q,r] \to [p,r]$ in $\Cat{Poly}$. Using \eqref{eqn.internal_hom} we see that on positions, this map takes the form\vspace{-.1cm}
\[\left([p,q] \otimes [q,r]\right)(1) = \Cat{Poly}(p,q) \times \Cat{Poly}(q,r) \To{\mathbin{\fatsemi}} \Cat{Poly}(p,r) = [p,r](1)\]
and on directions it is given for $\phi\colon p \to q$ and $\psi\colon q \to r$ by the function
\[\bigg(\sum_{I \in p(1)} q[\phi(I)]\bigg) \times \bigg(\sum_{J \in q(1)} r[\psi(J)]\bigg) \leftarrow \sum_{I \in p(1)} r[\psi(\phi(I))]\]
which sends $(I,k)$ to $\big((I,\psi(\phi(I),k)),(\phi(I),k)\big)$.
Concretely, the composite of a $[p,q]$-coalgebra $\S$ and a $[q,r]$-coalgebra $\S'$ is a $[p,r]$-coalgebra which we denote $\S\mathbin{\fatsemi}\S'$ and define as follows:
\begin{itemize}
\item its state set is given by $S \times S'$
\item the action of the pair $(s,s')$ is given by the composite
\[\tn{act}^{\beta\mathbin{\fatsemi}\beta'}_{s,s'}\coloneqq(\tn{act}^\beta_s\mathbin{\fatsemi}\tn{act}^{\beta'}_{s'})\colon p \to q \to r\]
\item the update function of $(s,s')$ is induced by the functions
\begin{align*}
\sum_{I \in p(1)} r\left[\tn{act}^{\beta\mathbin{\fatsemi}\beta'}_{s,s'}(I)\right] \To{(I,k)\mapsto\left(I,\tn{act}^{\beta'}_{s'}\left(\tn{act}^\beta_s(I),k\right)\right)} \sum_{I \in p(1)} q\left[\tn{act}^\beta_s(I)\right] \To{\tn{upd}^\beta_s} S,\\
\sum_{I \in p(1)} r\left[\tn{act}^{\beta\mathbin{\fatsemi}\beta'}_{s,s'}(I)\right] \To{(I,k)\mapsto\left(\tn{act}^\beta_s(I),k\right)} \sum_{J \in q(1)} r\left[\tn{act}^{\beta'}_{s'}(J)\right] \To{\tn{upd}^{\beta'}_{s'}} S'.
\end{align*}
\end{itemize}
Horizontal composition of coalgebra-morphisms---i.e.\ of the 2-cells in the bicategory ${\mathbb{O}\Cat{rg}}$---is given simply by the cartesian product. The coherence isomorphisms and axioms for a bicategory then follow from the essential uniqueness of finite products of sets, and the unitality and associativity of composition for polynomial maps.
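As a sanity check on the formulas above, the composite $\S\mathbin{\fatsemi}\S'$ can be sketched in code. This is a self-contained toy encoding (hypothetical, not the paper's formalism): a coalgebra is a triple (states, act, upd), where act(s) returns an (on_pos, backward) pair encoding a polynomial map.

```python
# Toy sketch of the composite of a [p,q]-coalgebra S and a [q,r]-coalgebra T:
# states are pairs, actions compose, and each side's update sees what flows
# through its own interface.

def compose(S, T):
    states = [(s, t) for s in S[0] for t in T[0]]

    def act(state):
        s, t = state
        f_pos, f_back = S[1](s)   # act of s:  p -> q
        g_pos, g_back = T[1](t)   # act of t:  q -> r
        on_pos = lambda I: g_pos(f_pos(I))
        backward = lambda I, k: f_back(I, g_back(f_pos(I), k))
        return on_pos, backward

    def upd(state, I, k):
        s, t = state
        f_pos, _ = S[1](s)
        _, g_back = T[1](t)
        j = g_back(f_pos(I), k)          # the r-direction flows back to q
        return (S[2](s, I, j),           # p-side sees (I, j)
                T[2](t, f_pos(I), k))    # q-side sees (phi(I), k)

    return states, act, upd

# Toy example: a state-toggling coalgebra followed by a static one, both
# acting by the identity map on the same interface.
ident = (lambda I: I, lambda I, j: j)
S = (["s0", "s1"], lambda s: ident,
     lambda s, I, j: "s1" if s == "s0" else "s0")
T = (["t"], lambda t: ident, lambda t, I, j: "t")

states, act, upd = compose(S, T)
on_pos, backward = act(("s0", "t"))
```

The update function mirrors the two displayed maps: the first component applies $\tn{upd}^\beta_s$ to $(I,j)$ with $j$ pulled back along the second action, the second applies $\tn{upd}^{\beta'}_{s'}$ to $(\phi(I),k)$.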
\section{Monoidal product of coalgebras}
It is shown in \cite[Proposition 2.13]{spivak2021learners} that the tensor product $\otimes$ of polynomials extends to make ${\mathbb{O}\Cat{rg}}$ a monoidal bicategory. That is, for polynomials $p,q,p',q'$ there is a functor
\[[p,q]\tn{-}\Cat{Coalg} \times [p',q']\tn{-}\Cat{Coalg} \to \left([p,q] \otimes [p',q']\right)\tn{-}\Cat{Coalg} \to [p \otimes p',q \otimes q']\tn{-}\Cat{Coalg}\]
derived from the map of polynomials $[p,q] \otimes [p',q'] \to [p {\otimes} p',q {\otimes} q']$ given on positions by
\[\Cat{Poly}(p,q) \times \Cat{Poly}(p',q') \To{\otimes} \Cat{Poly}(p \otimes p',q \otimes q')\]
and on directions by, for $\phi\colon p \to q$ and $\phi'\colon p' \to q'$,
\[\bigg(\sum_{I \in p(1)} q[\phi(I)]\bigg) \times \bigg(\sum_{I' \in p'(1)} q'[\phi'(I')]\bigg) \longleftarrow \sum_{(I,I') \in p(1) \times p'(1)} q[\phi(I)] \times q'[\phi'(I')]\]
sending $(I,I',j,j')$ to $(I,j,I',j')$.
Concretely, this tensor product takes a $[p,q]$-coalgebra $\S$ and a $[p',q']$-coalgebra $\S'$ to the $[p \otimes p',q \otimes q']$-coalgebra with states $S \times S'$, action
\[S \times S' \to \Cat{Poly}(p,q) \times \Cat{Poly}(p',q') \to \Cat{Poly}(p \otimes p',q \otimes q'),\]
and update described similarly componentwise. The tensor product of coalgebra morphisms is also given by the cartesian product of functions, and it is (very) tedious but ultimately straightforward to check that the essential uniqueness of products guarantees that $\otimes$ gives a monoidal structure on ${\mathbb{O}\Cat{rg}}$.
\section{${\mathbb{O}\Cat{rg}}$ as a double category}
Defining ${\mathbb{O}\Cat{rg}}$ as a monoidal bicategory is sufficient for most of the constructions of ${\mathbb{O}\Cat{rg}}$-enriched structures in \cref{chap.org_enrich}. However, a double category structure, which casts singleton coalgebras $\{\phi\}\in[p,q]\tn{-}\Cat{Coalg}$ (see \cref{ex.single_state}) as morphisms $\phi\colon p\to q$ in $\Cat{Poly}$, facilitates our eventual definition of maps between dynamic structures.
Specifically, the definition of ${\mathbb{O}\Cat{rg}}$ as a monoidal bicategory extends to a monoidal (pseudo-)double category with coalgebras as horizontal morphisms, maps in $\Cat{Poly}$ as vertical morphisms, and squares as in \eqref{eqn.square} given by maps of coalgebras from $\S\mathbin{\fatsemi}\{\psi\}$ to $\{\phi\}\mathbin{\fatsemi}\S'$.
\begin{equation}\label{eqn.square}
\begin{tikzcd}
p \rar[slash, ""{name=S, below}]{\S} \dar[swap]{\phi} & q \dar{\psi} \\
p' \rar[slash, ""{name=T, above},swap]{\S'} & q'
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}
\end{equation}
The symbol $\xslashar{}$ is intended to indicate that the map is ``dynamic'', changing in response to what flows between $p$ and $q$.
As $\{\phi\}$ and $\{\psi\}$ have only one state, and composition of coalgebras acts as the cartesian product on states, such a square amounts to a function $S \to S'$ making \eqref{eqn.coalg_square} commute:
\begin{equation}\label{eqn.coalg_square}
\begin{tikzcd}
S \rar{\beta} \dar[swap]{f} & {[p,q]} \mathbin{\triangleleft} S \rar{\psi_\ast} & {[p,q']} \mathbin{\triangleleft} S \ar[d, "{[p,q'] \mathbin{\triangleleft} f}" pos=.4] \\
S' \rar[swap]{\beta'} & {[p',q']} \mathbin{\triangleleft} S' \rar[swap]{\phi^\ast} & {[p,q']} \mathbin{\triangleleft} S'
\end{tikzcd}
\end{equation}
Identities and composites for these squares are determined by the bicategory structure, as this double category is a restriction in the vertical direction of the double category of lax-commuting squares in a bicategory.\footnote{It should be noted however that the vertical arrows in ${\mathbb{O}\Cat{rg}}$ are regarded as polynomial maps rather than coalgebras, so that they compose strictly unitally and associatively.}
We now proceed to discuss various categorical structures enriched in ${\mathbb{O}\Cat{rg}}$, which describe dynamical systems equipped with extra algebraic structure that allows us to remove abstraction barriers when considering nested layers and complex arrangements of the components of the system.
\chapter{Dynamic structures via ${\mathbb{O}\Cat{rg}}$-Enrichment}\label{chap.org_enrich}
A monoidal double category is a viable setting for enriching various categorical structures. Intuitively, enrichment in ${\mathbb{O}\Cat{rg}}$ replaces the usual set of arrows between two objects in a categorical structure with a $[p,q]$-coalgebra for some choice of polynomials $p,q$. Therefore not only can each arrow be realized as a map of polynomials $p \to q$, but this map carries dynamics that encode how a position in $p$ and a direction in $q$ determine a transition from one arrow to another. The morphism ``reacts'' to what's flowing between $p$ and $q$.
Different situations call for different categorical structures to model their dynamics: some systems primarily involve many-to-one arrangements such as the wiring diagrams in \cref{fig.nesting}, others such as gradient descent fit naturally into a many-to-many arrow framework, and we expect in future work to consider evolving systems in which different components operate at differing time scales. Rather than choose one such categorical form to favor, and then go through the tedious exercise of forcing all of the others to conform to it, we describe how to add dynamics to the definitions of many different structures.
\slogan{A dynamic *thing* is a *thing* enriched in ${\mathbb{O}\Cat{rg}}$.}
This slogan is intentionally imprecise, so as to be maximally inclusive of different notions of categorical structures (*things*) and notions of enrichment, and also to allow the reader who has an intuitive understanding and no need for precision to skip the remainder of this paragraph. Our intuition and examples come from the theories of enrichment described in \cite{leinster1999generalized} and \cite{shapiro2022enrichment}. In the former, a *thing* can be any suitable type of generalized multicategory, while in the latter a *thing* can be any structure defined as an algebra for a familial monad on a presheaf category equipped with a choice of ``higher'' and ``lower'' dimensional cell shapes. In both cases, *things* are algebras for a particular cartesian monad $T$ which admit an ``enriched'' analogue with respect to any $T$-multicategory. To define $T$-algebras enriched in ${\mathbb{O}\Cat{rg}}$ is then to identify ${\mathbb{O}\Cat{rg}}$ with a $T$-multicategory, and in all of our examples this identification arises from the natural way in which monoidal double categories give rise to $T$-multicategories.
We give specific instances of ${\mathbb{O}\Cat{rg}}$-enrichment in the following sections: in \cref{sec.org_cats} for dynamical categories, in \cref{sec.org_multicats} for dynamical multicategories and operads, and in \cref{sec:org_monoidalcats} for dynamical monoidal categories and PRO(P)s. We are also interested in using dynamic duoidal categories to describe dynamical systems in which different contributors to a system operate at different rates, using the duoidal structure on ${\mathbb{O}\Cat{rg}}$ based on $\mathbin{\triangleleft}$, but that is beyond the scope of this paper.
\section{Dynamic categories}\label{sec.org_cats}
Enrichment of categories only uses ${\mathbb{O}\Cat{rg}}$'s double category structure---not its monoidal structure---as any double category forms an $f\!c$-multicategory (also known as a virtual double category) in the sense of \cite[Section 2.1]{leinster1999generalized}.
The following definition of enrichment in ${\mathbb{O}\Cat{rg}}$ is an unwound version of the more general definition in \cite[Section 2.2]{leinster1999generalized}.
\begin{definition}\label{def.org_enriched_cat}
An ${\mathbb{O}\Cat{rg}}$-enriched (henceforth \emph{dynamic}) category $A$ consists of
\begin{itemize}
\item a set $A_0$ of objects;
\item for each $a \in A_0$, a polynomial $p_a$;
\item for each $a,b \in A_0$, a $[p_a,p_b]$-coalgebra $\S_{a,b}$;
\item for each $a \in A_0$, an ``identitor'' square in ${\mathbb{O}\Cat{rg}}$ as in \eqref{eqn.tors} left; and
\item for each $a,b,c \in A_0$, a ``compositor'' square in ${\mathbb{O}\Cat{rg}}$ as in \eqref{eqn.tors} right:
\begin{equation}\label{eqn.tors}
\begin{tikzcd}
p_a \dar[equals] \rar[slash, ""{name=S, below}]{\idcoalg{p_a}} & p_a \dar[equals] \\
p_a \rar[slash, ""{name=T, above},swap]{\S_{a,a}} & p_a
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}\qquad\qquad\begin{tikzcd}
p_a \dar[equals] \rar[slash]{\S_{a,b}} & p_b \rar[slash]{\S_{b,c}} & p_c \dar[equals] \\
p_a \ar[slash, ""{name=T, above}]{rr}[swap]{\S_{a,c}} & & p_c
\arrow[Rightarrow,shorten=4,from=1-2,to=T]
\end{tikzcd}
\end{equation}
\end{itemize}
such that these squares satisfy unit and associativity equations.
\end{definition}
The sets $S_{a,b}$ form an ordinary category which we say \emph{underlies} $A$.
In fact, a dynamic category could be equivalently defined as an ordinary category such that each object $a$ is assigned a polynomial $p_a$ and each set of arrows $\Hom(a,b)$ is the set of states for an assigned $[p_a,p_b]$-coalgebra $\S_{a,b}$, with composition and identities respecting the coalgebra structure. This means that the arrow $\mathrm{id}_a$ in $\Hom(a,a)$ acts as the identity map on $p_a$ and is unchanged by updates, while for $f$ in $\Hom(a,b)$ and $g$ in $\Hom(b,c)$ the composite $f\mathbin{\fatsemi} g$ acts as the composite $p_a \to p_b \to p_c$ of the actions of $f$ and $g$, and the update of their composite equals the composite of their updates.
\section{Dynamic multicategories}\label{sec.org_multicats}
A monoidal double category also gives rise to an $f\!m$-multicategory in the sense of \cite[Section 3.1]{leinster1999generalized},
so we can talk about multicategories enriched in ${\mathbb{O}\Cat{rg}}$ as in \cite[Section 3.2]{leinster1999generalized}.
\begin{definition}
An ${\mathbb{O}\Cat{rg}}$-enriched (henceforth \emph{dynamic}) multicategory $A$ consists of
\begin{itemize}
\item a set $A_0$ of objects;
\item for each $a \in A_0$, a polynomial $p_a$;
\item for each $a_1,...,a_n,b \in A_0$, a $[p_{a_1} \otimes \cdots \otimes p_{a_n},p_b]$-coalgebra $\S_{a_1,...,a_n\,;\,b}$;
\item for each $a \in A_0$, an ``identitor'' square in ${\mathbb{O}\Cat{rg}}$ as in \eqref{eqn.multi_tors} left; and
\item for each $a_{1,1},\ldots,a_{1,m_1},\;\ldots,\;a_{n,1},\ldots,a_{n,m_n},\;b_1,\ldots,b_n,$ and $c \in A_0$, a ``compositor'' square in ${\mathbb{O}\Cat{rg}}$ as in \eqref{eqn.multi_tors} right
\end{itemize}
\begin{equation}\label{eqn.multi_tors}
\begin{tikzcd}[ampersand replacement=\&]
p_a \dar[equals] \rar[slash, ""{name=S, below}]{\idcoalg{p_a}} \& p_a \dar[equals] \\
p_a \rar[slash, ""{name=T, above},swap]{\S_{a;a}} \& p_a
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}
\qquad\quad
\begin{tikzcd}[column sep=huge, ampersand replacement=\&]
p_{a_{1,1}} \otimes \cdots \otimes p_{a_{n,m_n}} \dar[equals] \ar[r, slash, "\bigotimes_i \S_{a_{i,1},...,a_{i,m_i};b_i}", ""' name=U] \&[20pt] p_{b_1} \otimes \cdots \otimes p_{b_n} \rar[slash]{\S_{b_1,...,b_n\,;\,c}} \&[-10pt] p_c \dar[equals] \\
p_{a_{1,1}} \otimes \cdots \otimes p_{a_{n,m_n}} \ar[slash, ""{name=T}]{rr}[swap]{\S_{a_{1,1},...,a_{n,m_n};c}} \& \& p_c
\arrow[Rightarrow,shorten=6,from=U-|T,to=T]
\end{tikzcd}
\end{equation}
such that these squares satisfy unit and associativity equations (\cref{operadequations}).
\end{definition}
The sets $S_{a_1,...,a_n\,;\,b}$ form an ordinary (set-enriched) multicategory, which underlies $A$ and has a description similar to the underlying category we described below \cref{def.org_enriched_cat}.
We will mostly be interested in what we call a \emph{dynamic operad}, the case when a dynamic multicategory $A$ has only one object, assigned the polynomial ``interface'' $p$. It consists simply of a $[p^{\otimes n},p]$-coalgebra $\S_n$ for each $n \in \mathbb{N}$, equipped with coalgebra maps
\begin{equation}\label{eqn.org_operad}
\idcoalg{p} \to \S_1
\qquad\text{and}\qquad
\bigotimes_{i\in I} \S_{n_i} \to \S_N
\end{equation}
where $I$ is any finite set and $N\coloneqq\sum_{i\in I}n_i$, which together satisfy the usual equations.
\begin{example}\label{ex.collective}
A \emph{collective} (as defined in \cite{niu2021collectives}) is a $\otimes$-monoid in $\Cat{Poly}$, meaning a polynomial $p$ equipped with a monoid structure on its positions $p(1)$ and co-unital co-associative ``distribution'' functions $p[I \cdot J] \to p[I] \times p[J]$ for each $I,J \in p(1)$. This can be viewed as a dynamic operad where $\S_n$ is given by $\{\cdot_n\}$, the singleton coalgebra on the $n$-ary monoidal product $(\cdot_n)\colon p^{\otimes n} \to p$, and where the maps of coalgebras in \eqref{eqn.org_operad} are isomorphisms deduced from the equations for a monoid.
\end{example}
\begin{example}
In \cref{ex.collective}, the coalgebras $\S_n$ are determined by a single map of polynomials, with trivial updates since the state sets are singletons. This can be generalized to an intermediate notion between collectives and dynamic multicategories, where the coalgebras are still static but may have multiple states.
Given any multicategory $M$ and multifunctor $F\colon M \to\Cat{Poly}$, where $\Cat{Poly}$ here denotes the multicategory underlying $(\Cat{Poly},\mathcal{y},\otimes)$, there is a dynamic multicategory $A_F$ with
\begin{itemize}
\item object set $\ob(M)$;
\item for each $a \in \ob(M)$, the polynomial interface $p_a \coloneqq F(a)$;
\item for each tuple $(a_1,...,a_n\,;\,b)$ in $\ob(M)$, state set $S_{a_1,...,a_n\,;\,b} \coloneqq M(a_1,...,a_n\,;\,b)$;
\item the action $\tn{act}^\beta\colon M(a_1,...,a_n\,;\,b) \to \Cat{Poly}(p_{a_1} \otimes \cdots \otimes p_{a_n},p_b)$ is given by $F$; and
\item this coalgebra is static, in that for any state $s$ in $M(a_1,...,a_n\,;\,b)$, the update function $\tn{upd}^\beta_s$ is the constant function at $s$.
\qedhere
\end{itemize}
\end{example}
\begin{example}
Let $\S$ be any $p$-coalgebra for a polynomial $p$. There is a dynamic operad on $p$ with $\S_0\coloneqq \S$, with $\S_1\coloneqq\idcoalg{p}$, and with all other $\S_n\coloneqq\varnothing$ assigned the empty coalgebra.
\end{example}
\begin{example}
Consider a dynamic operad with interface $\mathcal{y}\in\Cat{Poly}$. The internal hom polynomial $[\mathcal{y}^{\otimes n},\mathcal{y}]$ is simply $\mathcal{y}$, so this structure amounts to an operad $\cat{S}$ with a function $\cat{S}_n \to \cat{S}_n$ for each $n$, commuting with the operad structure. A dynamic operad on $\mathcal{y}$ can thus be identified with an operad $\cat{S}$ equipped with an operad map $\cat{S}\to\cat{S}$ to itself.
\end{example}
\section{Dynamic monoidal categories}\label{sec:org_monoidalcats}
A monoidal double category is precisely a representable $f\!m\!c$-multicategory as in \cite[Section 2]{shapiro2022enrichment},
so we can also enrich strict monoidal categories in ${\mathbb{O}\Cat{rg}}$.\footnote{We use throughout the notion \emph{strong} enrichment in a monoidal double category from \cite[Section 3]{shapiro2022enrichment}.} These are similar to ${\mathbb{O}\Cat{rg}}$-enriched multicategories, but include many-to-many coalgebras rather than just many-to-one.
\begin{definition}\label{enriched_monoidal}
An ${\mathbb{O}\Cat{rg}}$-enriched (henceforth \emph{dynamic}) strict monoidal category $A$ consists of
\begin{itemize}
\item a monoid $(A_0,e,*)$ of objects;
\item for each $a \in A_0$, a polynomial $p_a$;
\item an isomorphism of polynomials $\mathcal{y} \cong p_e$;
\item for each $a,a' \in A_0$, an isomorphism of polynomials $p_{a} \otimes p_{a'} \cong p_{a*a'}$;
\item for each $a,b \in A_0$, a $[p_a,p_b]$-coalgebra $\S_{a,b}$;
\item for each $a \in A_0$, an ``identitor'' square in ${\mathbb{O}\Cat{rg}}$ as in \cref{eqn.adaptive_tor} left;
\item for each $a,b,c \in A_0$, a ``compositor'' square in ${\mathbb{O}\Cat{rg}}$ as in \cref{eqn.adaptive_tor} center; and
\item for each $a,a',b,b' \in A_0$, a ``productor'' square in ${\mathbb{O}\Cat{rg}}$ as in \cref{eqn.adaptive_tor} right:
\end{itemize}
\begin{equation}\label{eqn.adaptive_tor}
\begin{tikzcd}[column sep=35pt]
p_a \dar[equals] \rar[slash, ""{name=S, below}]{\idcoalg{p_a}} & p_a \dar[equals] \\
p_a \rar[slash, ""{name=T, above},swap]{\S_{a,a}} & p_a
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}
\qquad
\begin{tikzcd}[column sep=30pt]
p_a \dar[equals] \rar[slash]{\S_{a,b}} & p_b \rar[slash]{\S_{b,c}} & p_c \dar[equals] \\
p_a \ar[slash, ""{name=T, above}]{rr}[swap]{\S_{a,c}} & & p_c
\arrow[Rightarrow,shorten=4,from=1-2,to=T]
\end{tikzcd}
\qquad
\begin{tikzcd}[column sep=50pt]
p_a \otimes p_{a'} \dar[equals,swap]{\wr} \rar[slash,""{name=S,below}]{\S_{a,b} \otimes \S_{a',b'}} & p_b \otimes p_{b'} \dar[equals,swap]{\wr} \\
p_{a*a'} \rar[slash, ""{name=T, above},swap]{\S_{a*a',b*b'}} & p_{b*b'}
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}
\end{equation}
satisfying unit, associativity, and interchange equations.
\end{definition}
Similar to \cref{sec.org_cats,sec.org_multicats}, the sets $S_{a,b}$ form the arrows in an ordinary strict monoidal category underlying $A$.
For the rest of this paper, we will only be interested in the restricted case of a dynamic monoidal category with object monoid $(\mathbb{N},0,+)$, which we call a dynamic PRO.%
\footnote{A PRO is the non-symmetric version of a PROP. While all of our examples are in fact symmetric, to keep the paper short we do not describe their symmetry operations.}
Concretely, this consists of a polynomial interface $p$ (so that in the notation above $p_n\coloneqq p^{\otimes n}$ for $n \in \mathbb{N}$) along with a $[p^{\otimes m},p^{\otimes n}]$-coalgebra $\S_{m,n}$ for each $m,n \in \mathbb{N}$, equipped with the maps of coalgebras as in \eqref{eqn.adaptive_tor}. The identitors, compositors, productors, and their equations amount to the ability to compose any string diagram of the usual sort for monoidal categories, with each $m$-to-$n$ box given by a state in $S_{m,n}$, into a new box (i.e.\ state) with the appropriate sources and targets. We denote a dynamic PRO as $(p,\S)$, where $\S$ encodes all of the coalgebras $\S_{m,n}$ that constitute the ${\mathbb{O}\Cat{rg}}$-enrichment and the structure maps are implicit.
\bigskip
We now turn to morphisms between dynamic PROs; the interested reader can hopefully find analogous definitions for dynamic categories and operads.
\begin{definition}
A \emph{morphism} of dynamic PROs from $(p,\S)$ to $(p',\S')$ is given by a map of polynomials $\phi\colon p \to p'$ and, for each $m,n \in \mathbb{N}$, ``commutor'' squares as in \eqref{eqn.adaptive_map} in ${\mathbb{O}\Cat{rg}}$ which commute with the identitor, compositor, and productor squares.
\setlength{\belowdisplayskip}{-5pt}
\begin{equation}\label{eqn.adaptive_map}
\begin{tikzcd}
p^{\otimes m} \rar[slash, ""{name=S, below}]{\S_{m,n}} \dar[swap]{\phi^{\otimes m}} & p^{\otimes n} \dar{\phi^{\otimes n}} \\
p'^{\otimes m} \rar[slash, ""{name=T, above}, swap]{\S'_{m,n}} & p'^{\otimes n}
\arrow[Rightarrow,shorten=5,from=S,to=T]
\end{tikzcd}
\end{equation}
\end{definition}
\setlength{\belowdisplayskip}{11pt}
This definition of morphism (taken from \cite[Section 3]{shapiro2022enrichment})
is the direct theoretical benefit of treating ${\mathbb{O}\Cat{rg}}$ as a monoidal double category rather than as a monoidal bicategory (closer to its description in \cite{spivak2021learners}). Otherwise morphisms could either only be easily defined between dynamic PROs with the same interface polynomial, which is needlessly restrictive, or take the form of a $[p,p']$-coalgebra, which seems to be too general to be of much use.
A morphism $(p,\S) \to (p',\S')$ can be interpreted as a way of telling the codomain how to run the domain. The map of polynomials $p \to p'$ specifies how the positions of $p$ can be modeled by those of $p'$ and how the directions of $p'$ are returned as directions of $p$, while the commutor squares describe how the states of $\S_{m,n}$ can be modeled by those of $\S'_{m,n}$ in a way that respects this change of interface. A type of theorem that we hope to instantiate in future work is of the form ``this dynamic structure that we're interested in can be run by (has a map to) this other dynamic structure that we already understand well.''
\begin{example}
For a fixed polynomial $p$, there is a terminal dynamic PRO with interface $p$, which we denote $\S^{p!}$; here $\S^{p!}_{m,n}$ is the terminal $[p^{\otimes m},p^{\otimes n}]$-coalgebra for each $m,n\in\mathbb{N}$.
A state in $\S^{p!}_{m,n}$ is a (not necessarily finite) $[p^{\otimes m},p^{\otimes n}]$-tree. By this we mean a tree co-inductively defined by a root node labeled with a polynomial map $\phi\colon p^{\otimes m} \to p^{\otimes n}$ together with an arrow---whose source is the root and whose target is another $[p^{\otimes m},p^{\otimes n}]$-tree---assigned to each tuple
\begin{equation}\label{eqn.pmn_directions}
\big((I_1,\ldots,I_m), i_1,\ldots,i_n\big) \in p^{\otimes m}(1)\times p^{\otimes n}[\phi(I_1,...,I_m)]
\end{equation}
The action of such a tree is simply the map $\phi$ labeling its root, and the update sends a tuple as in \eqref{eqn.pmn_directions} to the target of its assigned arrow.
The idea is that the state-set of the terminal dynamic PRO encodes all possible trajectories along different actions, and this coalgebra is terminal because from any other coalgebra there is a map to $\S^{p!}_{m,n}$ sending each state to the tree whose root is labeled by the action of the state and whose edges from the root go to the trees for each of the state's possible updates.
To define a dynamic PRO structure on the terminal coalgebra $\S^{p!}$, it only remains to define maps of coalgebras as in \cref{eqn.adaptive_tor}, and these are all taken to be the unique map to the terminal $[p^{\otimes m},p^{\otimes n}]$-coalgebra; the equations hold automatically. This is the terminal dynamic PRO with interface $p$ because for any other such dynamic PRO there is a morphism given by the identity map on $p$ and with commutor squares to $\S^{p!}_{m,n}$ the unique map to the terminal $[p^{\otimes m},p^{\otimes n}]$-coalgebra. In other words, $\S^{p!}$ \emph{uniquely runs} any other dynamic PRO with interface $p$.
\end{example}
\chapter{Dynamic Structures in Nature}
Our main results are that dynamic structures describe phenomena we see instantiated around us. In this paper, we focus on deep learning and a prediction market in which the reputations of various guess-makers evolve based on how successful they are.
\section{The prediction market dynamic operad}\label{sec.kelley}
Fix a finite set $X$, elements of which we call \emph{outcomes} and intuit to be ``all equally likely'', and define the set $\Delta^+_X$ of \emph{guesses on $X$} as\footnote{The assumption that every possible outcome is given some nonzero probability in each guess could be interpreted either as humility of the guess-makers or a strategic decision to avoid permanent loss of all trust. But the real reason for the choice is that it lets us avoid dividing by zero when updating.}
\[
\Delta^+_X\coloneqq\left\{\gamma\colon X\to(0,1]\;\;\middle|\;\;1=\sum_x\gamma(x) \right\}
\]
The monoid $((0,1],1,*)$ of nonzero subunital reals acts on guesses by scalar multiplication: for any $0 < m\leq 1$ and $\gamma\in\Delta^+_X$, we define the scaled function $m\cdot\gamma\colon X\to(0,1]$ (normalized only when $m=1$, but used below to form convex combinations) as follows:
\[
m\cdot \gamma\coloneqq \big(x\mapsto m\gamma (x)\big)
\]
Let $\Delta^+$ denote the operad of finite nowhere-zero probability distributions, where $\Delta^+_N$ is defined as above with the natural number $N$ regarded as the $N$-element set.
Then $\Delta^+_X$ is an algebra for it: for any $\mu\in\Delta^+_N$ and $\gamma \in (\Delta^+_X)^N$, we define
\[
\mu\cdot\gamma\coloneqq\bigg(x\mapsto\sum_{i\in N}(\mu_i\cdot\gamma_i)(x)\bigg)
\]
and it is easy to check that $(\mu\cdot\gamma)\in\Delta^+_X$, i.e.\ its components are in bounds $(\mu\cdot\gamma)(x)\in (0,1]$ and it is normalized $\sum_x(\mu\cdot\gamma)(x)=1$.
We now construct a dynamic operad with interface $p_X\in\Cat{Poly}$ defined as:
\[
p_X\coloneqq \Delta^+_X\,\mathcal{y}^X
\]
and use the $\Delta^+_N$ as our state spaces. The idea is that a state $\mu\in\Delta^+_N$ says how much the organization trusts each of its $N$ members (guess-makers) relative to each other. A member's position at a given moment is a report of how much confidence it has in each of the $X$-many possibilities, represented by its probability distribution.
The action of a trust distribution $\mu \in \Delta^+_N$ is the map of polynomials $p_X^{\otimes N} \to p_X$ which on positions sends $\gamma \in (\Delta^+_X)^N$ to $\mu \cdot \gamma$ and on directions sends $x \in X$ to $(x,\ldots,x) \in X^N$. The idea is that the organization aggregates its members' predictions according to its current trust-distribution, and the outcome is accurately communicated back to each member.
The most interesting part of the dynamic structure is how the trust distribution is updated once predictions are made and a result $x\in X$ is returned. When $N=0$, there's nothing to do: $\Delta^+_0=\varnothing$. For membership $N\geq 1$, trust distribution $\mu\in\Delta^+_N$, guesses $\gamma\in(\Delta^+_X)^N$, and outcome $x\in X$, we define the updated trust distribution $\gamma(x) * \mu \in\Delta^+_N$ as
\[
\gamma(x) * \mu \coloneqq \left( i \mapsto \frac{\gamma_i(x)\mu_i}{\sum_j \gamma_j(x)\mu_j}\right).
\]
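The aggregation and update rules above are easy to check numerically. The following minimal Python sketch (hypothetical helper names; a guess or trust distribution is represented as a list of floats summing to 1 with no zero entries) implements the action $\mu\cdot\gamma$ and the trust update:

```python
# Minimal numerical sketch of the prediction-market dynamics (hypothetical
# helper names; a guess or trust distribution is a list of floats summing
# to 1 with no zero entries).

def aggregate(mu, gammas):
    # action on positions: (mu . gamma)(x) = sum_i mu_i * gamma_i(x)
    return [sum(m * g[x] for m, g in zip(mu, gammas)) for x in range(len(gammas[0]))]

def update_trust(mu, gammas, x):
    # update after outcome x: mu_i -> gamma_i(x) mu_i / sum_j gamma_j(x) mu_j
    Z = sum(g[x] * m for g, m in zip(gammas, mu))
    return [g[x] * m / Z for g, m in zip(gammas, mu)]

mu = [0.5, 0.5]                     # equal initial trust in two members
gammas = [[0.9, 0.1], [0.1, 0.9]]   # member 0 bets on outcome 0, member 1 on outcome 1
print(aggregate(mu, gammas))        # [0.5, 0.5]: the organization is undecided
print(update_trust(mu, gammas, 0))  # [0.9, 0.1]: member 0 gains trust
```

Note that the update is exactly Bayes' rule with the trust distribution as prior and the members' guesses as likelihoods, which is why a member whose guess assigned high probability to the realized outcome gains trust.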
Finally, we describe the operadic structure maps. As $\Delta^+_1$ is a singleton set whose action is the identity on $p_X$, the identitor $\{\mathrm{id}_{p_X}\} \to \Delta^+_1$ is an isomorphism. The operadic compositor is given by the usual operad structure on (nowhere-zero) distributions:
\[
\Delta^+_N \times \Delta^+_{M_1} \times \cdots \times \Delta^+_{M_N} \to \Delta^+_{\sum_i M_i} \qquad\qquad (\mu,\nu_1,\ldots,\nu_N) \mapsto \mu \circ \nu \coloneqq \left( (i,j) \mapsto \mu_i(\nu_i)_j \right).
\]
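As a sanity check, the compositor can be sketched in Python (hypothetical code; a distribution on $\sum_i M_i$ is represented by concatenating the blocks, with the weight of the pair $(i,j)$ given by the product of $\mu_i$ and the $j$-th component of $\nu_i$):

```python
# Sketch of the operadic compositor on nowhere-zero distributions
# (hypothetical code; mu is a distribution on N, nus[i] a distribution
# on M_i, and the composite weights the pair (i, j) by mu_i * nus[i][j]).

def compose(mu, nus):
    return [m * w for m, nu in zip(mu, nus) for w in nu]

out = compose([0.5, 0.5], [[0.2, 0.8], [1.0]])
print(out)                          # [0.1, 0.4, 0.5]
```

The composite is again normalized, since each block $\mu_i\nu_i$ contributes total weight $\mu_i$.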
\begin{theorem}\label{predictionadaptive}
The maps defined above are maps of coalgebras and satisfy the coherence equations of a dynamic operad described in \cref{operadequations}.
\end{theorem}
This is proven in \cref{proofs}.
\section{The gradient descent dynamic PRO}
Deep learning uses the algorithm of gradient descent to optimize a choice of function, based on external feedback on its output. This naturally fits into the paradigm of dynamic structures, since functions $\mathbb{R}^m \to \mathbb{R}^n$ can form the states of a polynomial coalgebra, with the feedback providing the information needed to update the choice of function. These functions can be composed and juxtaposed in a way that preserves the updates. That is, the composite of gradient descenders is a gradient descender.
\begin{definition}\label{def.Smn}
For the rest of this section, we will use the state sets
\[
S_{m,n} \coloneqq \left\{(M \in \mathbb{N}, f\colon \mathbb{R}^{M+m} \to \mathbb{R}^n, p \in \mathbb{R}^M) \;\middle|\; f \textrm{ is differentiable}\right\}.
\qedhere
\]
\end{definition}
The idea is that these states are the possible parameters among which a gradient descender is meant to find the optimal choice, while $f$ dictates how the parameter affects the resulting function $f(p,-)$. In the dynamics of these states described below, only the value of the parameter $p$ will be updated; the parameter-space dimension $M$ and the parameterized function $f$ will remain fixed, though network composition of gradient descenders will involve combining these data in nontrivial ways. Fix a learning rate $\epsilon>0$.
For every $x\in\mathbb{R}$, let $T_x\mathbb{R}$ denote the tangent space at $x$; for all practical purposes $T_x \mathbb{R}$ can be regarded as simply $\mathbb{R}$, but in both the description of polynomials as bundles and the intuition for this example it makes sense to use the tangent space at $x$. We proceed to define a dynamic PRO with interface $t \coloneqq \sum_{x \in \mathbb{R}} \mathcal{y}^{T_x \mathbb{R}}$ and with coalgebras $\S_{m,n}$ which update the state sets $S_{m,n}$ from \cref{def.Smn} using gradient descent. The PRO structure maps encode how networks of gradient descenders can be composed into a single gradient descender with a larger parameter space.
\begin{definition}
The $[t^{\otimes m},t^{\otimes n}]$-coalgebra structure on $S_{m,n}$ is given by
\begin{itemize}
\item On positions, the action $\tn{act}^\beta_{M,f,p}\colon \mathbb{R}^m \to \mathbb{R}^n$ is given by $f(p,-)$.
\item For $x \in \mathbb{R}^m$, the action $\tn{act}^\beta_{M,f,p}(x,-)\colon T_{f(p,x)} \mathbb{R}^n \to T_x \mathbb{R}^m$ on directions sends $y\in T_{f(p,x)}\mathbb{R}^n$ to $\pi_m (Df)^\top \cdot y$.
\item The update function $\tn{upd}^\beta_{M,f,p}$ sends $x \in \mathbb{R}^m$ and $y \in T_{f(p,x)}\mathbb{R}^n$ to $(M,f,p+\epsilon \pi_M (Df)^\top \cdot y)$ for our fixed $\epsilon$.
\qedhere
\end{itemize}
\end{definition}
The action of a state as a map $t^{\otimes m} \to t^{\otimes n}$ is given by applying the parameterized function $f$ with the parameter $p$, resulting in a function $\mathbb{R}^m \to \mathbb{R}^n$ as desired. The transpose $(Df)^\top$ of the derivative of $f$ sends a feedback vector $y \in T_{f(p,x)} \mathbb{R}^n$, which can be interpreted as the difference in $\mathbb{R}^n$ between the ``correct'' result for $x$ and the current approximation $f(p,x)$, to the corresponding ``correction'' to $(p,x)$ in $\mathbb{R}^{M+m}$. The projection of this correction to $T_x \mathbb{R}^m$ provides the action of the state on directions, which in a network will then be further propagated back to the gradient descender which had output $x$. The projection to $T_p \mathbb{R}^M$ provides the direction and magnitude with which to update the parameters (scaled by the ``learning rate'' $\epsilon$).
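For concreteness, here is a minimal one-dimensional Python sketch of a single gradient-descender state with $M=m=n=1$ (hypothetical code; the parameterized function $f(p,x)=px$ and its partial derivatives are hard-coded, and the feedback is taken to be the difference between a target output and the current output):

```python
# One-dimensional sketch of a gradient-descender state with M = m = n = 1
# (hypothetical code; f(p, x) = p * x, so Df = (df/dp, df/dx) = (x, p)).

EPS = 0.1  # the fixed learning rate epsilon

def act(p, x):
    return p * x                    # action on positions: f(p, -)

def backprop(p, x, y):
    return p * y                    # action on directions: pi_m (Df)^T y = (df/dx) * y

def update(p, x, y):
    return p + EPS * x * y          # update: p + eps * pi_M (Df)^T y = p + eps * (df/dp) * y

p = 0.0
for _ in range(100):                # feedback: difference between a target 2x and f(p, x)
    x = 1.0
    y = 2.0 * x - act(p, x)
    p = update(p, x, y)
print(round(p, 3))                  # 2.0: the parameter converges to the target slope
```

The same triple of maps is what the coalgebra structure packages abstractly: the position part, the direction part, and the parameter update.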
Thus far, we have provided the data of the polynomial $t$ and the $[t^{\otimes m},t^{\otimes n}]$-coalgebras $\S_{m,n}$ needed to define a dynamic PRO. We now define the identitor, compositor, and productor morphisms of coalgebras presented by the squares in \cref{enriched_monoidal}.
\begin{itemize}
\item The identitors $\idcoalg{t^{\otimes n}} \to \S_{n,n}$ send the unique state in the domain to the state
\[(0,\mathrm{id}_{\mathbb{R}^n},0) \in S_{n,n}.\]
\item The compositors $\S_{\ell,m}\mathbin{\fatsemi}\S_{m,n} \to \S_{\ell,n}$ send the pair $((L,f,p),(M,g,q))$ to
\[\left( M+L,\,g(-,f(-,-))\colon \mathbb{R}^{M+L+\ell} \To{\mathrm{id} \times f} \mathbb{R}^{M+m} \To{g} \mathbb{R}^n,\, (q,p) \in \mathbb{R}^{M+L} \right).\]
\item The productors $\S_{m,n} \otimes \S_{m',n'} \to \S_{m+m',n+n'}$ send the pair $((M,f,p),(M',f',p'))$ to
\[(M+M',\,(f,f'),\,(p,p')).\]
\end{itemize}
These structure maps ensure that whenever two gradient descenders are combined in series or parallel, the resulting composite descender retains the parameter spaces of both. Likewise when the input or output of a descender is wired past some other descender in a network, it does not contribute any new parameters and merely preserves its input/output until plugged into a descender. The following is proven in \cref{proofs}.
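The compositor on states can likewise be sketched in Python (hypothetical code; states are tuples $(M,f,p)$ with the parameter stored as a list, and only the composite function and parameter space are shown, not the update):

```python
# Sketch of the compositor on gradient-descender states (hypothetical code;
# a state is a tuple (M, f, p) with the parameter p stored as a list, and
# only the composite function and parameter space are shown, not the update).

def compose_states(state1, state2):
    (L, f, p), (M, g, q) = state1, state2
    def h(params, x):               # h((q, p), x) = g(q, f(p, x))
        return g(params[:M], f(params[M:], x))
    return (M + L, h, q + p)

f = lambda p, x: [p[0] + x[0]]      # a 1-parameter shift
g = lambda q, x: [q[0] * x[0]]      # a 1-parameter scale
M, h, pq = compose_states((1, f, [3.0]), (1, g, [2.0]))
print(M, h(pq, [1.0]))              # 2 [8.0]: g(2, f(3, 1)) = 2 * (3 + 1)
```

As in the compositor above, the composite state simply concatenates the two parameter lists and chains the two parameterized functions.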
\begin{theorem}\label{gradientadaptive}
The maps defined above are maps of coalgebras and satisfy the coherence equations of a dynamic PRO described in \cref{PROequations}.
\end{theorem}
\section{Introduction}
Advancements in nano-fabrication techniques have made it possible to fabricate devices in which metallic leads are connected to quantum dots (QDs)\cite{Kouwenhoven2001, Franceschi2010,Martin2011}. In principle, such hybrid devices are experimental realizations of quantum impurities interacting with the sea of conduction-band electrons. Quantum dots are nanoscopic semiconductor structures (e.g., InAs nanowire, carbon nanotube, and graphene QDs) in which electrons are confined in all three spatial dimensions. Due to this quantum confinement, a quantum dot has discrete energy levels like an atom. These hybrid combinations of metal and QDs are used as single-electron transistors (SETs), devices that are highly conductive only at very specific gate voltages. If the metal is in the superconducting state, the device offers further important applications, such as nano-SQUIDs for the detection of individual magnetic molecules, sources of spin-entangled electrons, and detectors for mechanical resonators\cite{Franceschi2010}. A quantum dot connected to one or more superconducting leads can be used to study interesting phenomena, e.g., quantum phase transitions or the interplay between Kondo physics and superconductivity.\\
In his pioneering work, Anderson\cite{Anderson1961} analyzed the electronic structure of a metal containing a quantum impurity and studied the conditions necessary in metals for the presence or absence of localized magnetic moments at the impurity site. By using the self-consistent Hartree-Fock approximation (HFA), he showed that local magnetic moments may form under suitable conditions determined by an interplay of physical parameters such as the impurity energy level, the coupling between impurity and metal (hybridization energy or s-d interaction), and the on-site Coulomb repulsion.\\
The quantum dot (impurity) embedded in a superconducting bath has also been a topic of intensive research over the past few decades. Embedding a quantum impurity in a superconducting bath (with superconducting energy gap $\Delta_{sc}$) instead of a normal metal drastically modifies the electronic structure of the impurity. The proximity effect allows Cooper pairs to leak into the quantum impurity state, which thus shows an induced pairing in its spectral function in the energy range $-\Delta_{sc}<\omega<\Delta_{sc}$. Andreev reflection (i.e., conversion of electrons into Cooper pairs with a simultaneous reflection of holes) at the opposite interfaces gives rise to discrete sub-gap states known as Andreev bound states (ABSs). The presence of strong on-site Coulomb interaction opposes any double occupancy of the quantum impurity state (Coulomb blockade). At low temperatures, the magnetic impurity can be screened by the conduction electrons of the Fermi sea (i.e., a spin $S=0$ state forms by breaking Cooper pairs)\cite{Kondo1964,Glazman2001}. This so-called Kondo effect occurs if the impurity is strongly coupled to the bath (i.e., for small $\Delta_{sc}/\Gamma$, where $\Gamma$ is the coupling between impurity and superconducting bath), which then hybridizes with the impurity level. Thus both effects, i.e., the on-site Coulomb repulsion and the appearance of the Kondo singlet, compete with superconductivity.\\
The effect of superconductivity on the formation of localized magnetic moments has been studied previously within the Hartree-Fock approximation by various authors \cite{Tripathi1967,Kusakabe1971,Rossler1972}.
Early studies showed that superconductivity hinders spin localization or magnetic moment formation in comparison with normal metals, under the assumption that no pairing is induced by the superconductor on the impurity site\cite{Tripathi1967}.
Kusakabe \cite{Kusakabe1971} and Rossler and Kiwi \cite{Rossler1972} included the pairing induced on the localized site of the impurity. Kusakabe concluded that superconductivity either aided or hindered the formation of the localized magnetic moment according to the energy of the impurity level relative to the Fermi surface. Rossler and Kiwi, on the other hand, found that the magnetic region is slightly reduced relative to the normal state.
More recently, the spectroscopic properties of a quantum impurity embedded in a superconducting host have been studied in the superconducting atomic limit ($\Delta_{sc}\rightarrow\infty$) \cite{Bauer2007,Baranski2013,Domanski2008,Vecino2003,Meng2009}. Such a limiting case provides a useful way to handle the problem analytically.\\
Previous Hartree-Fock treatments of superconductor quantum dot nanostructures were either incomplete \cite{Rozhkov1999,Zhu2001} (i.e., excluded a self-consistent determination of the induced gap), computationally incorrect \cite{Yoshioka2000}, focused on the effect of a magnetic impurity on the superconductivity \cite{Shiba1973}, or studied the formation of the sub-gap Andreev bound states and the transport properties of superconductor quantum dot Josephson junctions \cite{Martin2012}. Recent experimental and theoretical studies of the effect of one or more magnetic impurities on a bulk superconductor are given in references \cite{Balatsky2006,Meng2015,Heinrich2018}. Further, the transport properties of superconductor quantum dot Josephson junctions have also been extensively analyzed by using second-order perturbation theory \cite{Vecino2003,Meng2009,Zonda2015,Zonda2016} and renormalization-group techniques \cite{Choi2004,Karrasch2008,Lim2008,Wentzell2016}. The experimental and theoretical studies of superconductor quantum dot nanostructures have been summarized in Ref.~\cite{Franceschi2010,Martin2011,Meden2019}.\\
\begin{figure}[h]
\includegraphics[scale=0.45]{parameters}
\caption {Schematic level diagram for quantum impurity (QD) embedded in BCS superconductor: an impurity level $\epsilon_d$ with on-site Coulomb repulsion $U$ is hybridized with a continuum of excitations in a superconductor with a gap $\Delta_{sc}$ through hybridization with strength $\Gamma$.}
\end{figure}\\
The tunneling spectroscopy of Andreev bound states in carbon-nanotube or InAs-nanowire quantum dots contacted with normal and/or superconducting leads has been studied in detail\cite{Deacon2010,Lee2014,Pillet2013,Buitelaar2002,vanDam2006,Maurand2012,Li2017}. The tunnel spectroscopy experiment by Deacon et al.\ \cite{Deacon2010} provides evidence of a singlet (non-magnetic) to doublet (magnetic) transition in an S-QD-N device (with negligible coupling to the normal lead) when the number of electrons changes from even to odd. Further, Lee et al.\ \cite{Lee2014} studied the magnetic properties of the S-QD system. Experimental studies of the electronic and transport properties of hybrid superconductor quantum dot Josephson junctions (S-QD-S) can be found in Ref.~\cite{Pillet2013,Buitelaar2002,vanDam2006,Maurand2012,Li2017}. Maurand et al.\ \cite{Maurand2012} studied a carbon-nanotube quantum dot embedded in a superconducting quantum interference device to investigate the competition of strong Coulomb correlation with the induced pairing. In the strong Coulomb blockade regime, the singlet to doublet transition ($0-\pi$ transition for Josephson junctions) is controlled by a change in the energy of the quantum dot level relative to the Fermi level. At larger coupling, the Kondo effect develops for a magnetic impurity (dot) and suppresses magnetism. The competition between the singlet and doublet states is governed by different energy scales: the superconducting gap ($\Delta_{sc}$), the coupling between the superconductor and the quantum dot ($\Gamma$), the on-site Coulomb correlation or charging energy ($U$), and the energy ($\epsilon_d$) of the dot level relative to the Fermi energy of the superconducting electrode (see Fig.~1).\\
Motivated by the above theoretical and experimental studies, we investigate the single-impurity Anderson model with a superconducting bath within the self-consistent HFA. This approximation gives qualitative insight into the lowest-order correlation effects but does not capture the Kondo physics. We focus on the sub-gap states and on the formation and stability of a finite magnetic moment at the site of a quantum impurity immersed in an s-wave superconducting host, both in the low-frequency limit $|\omega|<<\Delta_{sc}$ (i.e., a gap much larger than all characteristic impurity energies) and for the finite superconducting gap case. It is also assumed that the energy level spacing $\delta\epsilon$ of the quantum impurity is sufficiently large compared to the other energy parameters. Thus the impurity reduces to a single level with two-fold spin degeneracy (by the Pauli exclusion principle).\\
The Kondo effect can also arise for an impurity coupled to a superconductor in the low-frequency limit if there exists a fermionic state near the Fermi level \cite{Baranski2013}. For example, in N-QD-S junctions, itinerant electrons from the normal electrode can screen the magnetic impurity, giving rise to the Kondo effect. For $|\omega|<<\Delta_{sc}$, the single quantum impurity is coupled only to a superconductor (S-QD). Thus there are no single-particle states near the Fermi level, and as a result, the formation of the Kondo peak is suppressed. The coupling between the quantum impurity and the quasiparticle excitations also vanishes in the low-frequency limit. But the impurity is still coupled to the Cooper pairs at the Fermi level, which induces a superconducting gap (proportional to the hybridization $\Gamma$ between impurity and superconductor) in the spectral density of the impurity. Thus in the low-frequency limit, the on-site Coulomb repulsion competes with the induced on-dot pairing. For a finite superconducting gap $\Delta_{sc}$, the remnants of the Kondo effect can influence the singlet to doublet transition by suppressing the magnetism \cite{Maurand2012,Maurand2013}. However, the general picture of the singlet to doublet transition in the non-Kondo regime ($T_K<<\Delta_{sc}$, where $T_K=\sqrt{\frac{U\Gamma}{2}}e^{\left(\frac{-\pi U}{8\Gamma}\right)}$) is captured at the mean-field level. To describe the non-magnetic singlet to magnetic doublet transition in the Kondo regime ($T_K>>\Delta_{sc}$), it is, however, necessary to go beyond the mean-field approximation, taking into account higher-order dynamical correlations. A detailed discussion of the model Hamiltonian and the theoretical formulation is provided in the following section \Romannum{2}.
\section{Model Hamiltonian and Theoretical formulation}
The single-level Anderson impurity model provides the microscopic Hamiltonian for a single-level quantum impurity (QD) embedded in a BCS superconducting bath,
\begin{equation}
\begin{aligned}
\hat{H}=\hat{H}_{QD}+\hat{H}_S+\hat{H}_T
\end{aligned}
\end{equation}
where
\begin{gather}
\begin{aligned}
\hat{H}_{QD}=\sum_{\sigma} \epsilon_{d}\hat{n}_{d\sigma}+U\hat{n}_{d\uparrow}\hat{n}_{d\downarrow}
\end{aligned} \\
\begin{aligned}
\hat{H}_S=\sum_{k,\sigma}(\epsilon_{k}\hat{c}^\dagger_{k\sigma}\hat{c}_{k\sigma})-\sum_{k}(\Delta_{sc}\hat{c}^\dagger_{k\uparrow}\hat{c}^\dagger_{-k\downarrow}+\Delta^\ast_{sc}\hat{c}_{-k\downarrow}\hat{c}_{k\uparrow})
\end{aligned} \\
\begin{aligned}
\hat{H}_T=\sum_{k,\sigma}(V_{k}\hat{d}^\dagger_{\sigma}\hat{c}_{k\sigma}+{V^\ast_{k}}\hat{c}^\dagger_{k\sigma}\hat{d}_{\sigma})
\end{aligned}
\end{gather}
$\hat{H}_{QD}$ (Eq.~(2)) is the Hamiltonian for the single-level quantum impurity; $d_\sigma(d^\dagger_\sigma)$ is the annihilation (creation) operator of an electron with spin $\sigma$ on the impurity, and $n_{d\sigma}=d_\sigma^\dagger d_\sigma$ is the number operator. The impurity consists of a single electronic level of energy $\epsilon_d$ and can be occupied by up to two electrons. The Coulomb repulsion $U$ between electrons on the impurity state is also taken into account, which hinders an exact solution to the problem.\\
$\hat{H}_S$ (Eq.~(3)) is the BCS Hamiltonian; $c_{k\sigma}(c^\dagger_{k\sigma})$ is the annihilation (creation) operator of an electron with spin $\sigma$ and wave vector $\vec{k}$ in the superconducting bath.
In $\hat{H}_S$, the first term is the kinetic energy, and the second term represents the attractive interaction between the electrons of the superconducting bath, which is responsible for the formation of Cooper pairs. $\Delta_{sc}$ is the superconducting energy gap, i.e., the energy difference between the ground state of the superconductor and the energy of the lowest quasiparticle excitations.
The energy $\epsilon_k$ is measured with respect to the chemical potential $\mu_S=\epsilon_f=0$ at $T=0\,$K.\\
$\hat{H}_T$ (Eq.~(4)) represents the hybridization of the impurity with the external superconducting bath, i.e., the possibility of single-particle tunneling between the impurity state and the superconducting bath and vice versa. $V_{k}$ is the hybridization energy (or s-d interaction).\\
To diagonalize the BCS part of the above Hamiltonian, we employ the so-called Bogoliubov transformation. We define new fermionic quasiparticle operators $\gamma_{k\sigma}$ and coefficients $u_k$ and $v_k$:
\begin{equation}
\begin{aligned}
c_{k\uparrow} = u^\ast_k\gamma_{k\uparrow}+v_k\gamma^\dagger_{-k\downarrow}
\\
c^\dagger_{-k\downarrow} = u_k\gamma^\dagger_{-k\downarrow}-v^\ast_k\gamma_{k\uparrow}
\end{aligned}
\end{equation}
The normalization condition is $|{u_k}|^2+|{v_k}|^2=1$.
Substituting into Eq.~(1) yields
\begin{equation}
\begin{aligned}
H=\sum_{k,\sigma}E_{k}\gamma^\dagger_{k\sigma}\gamma_{k\sigma}+E_0
+\sum_{k\sigma}(V_ku^\ast_kd^\dagger_{\sigma}\gamma_{k\sigma}+h.c)+
\\
\sum_{k}[V^\ast_kv_k(d^\dagger_{\uparrow}\gamma^\dagger_{-k\downarrow}-d^\dagger_{\downarrow}\gamma^\dagger_{k\uparrow})+ h.c]
+\sum_{\sigma} \epsilon_{d}n_{d\sigma}+Un_{d\uparrow}n_{d\downarrow}
\end{aligned}
\end{equation}
where h.c.\ denotes the Hermitian conjugate, $E_0=\sum_k(\epsilon_k-E_k+\Delta_{sc}\langle{c^\dagger_{k\uparrow}c^\dagger_{-k\downarrow}}\rangle)$ is the ground-state energy of the bath, and $E_k = \sqrt{\epsilon^2_k+|\Delta_{sc}|^2}$ is the excitation (quasiparticle) energy of the bath.
We assume that the hybridization or s-d interaction is $k$-independent, i.e., $V_k=V$ for $V_k<<D$ (wide band), where $-D\leq\epsilon_k\leq D$, and the normal tunneling rate from dot to lead (or coupling constant) is defined by $\Gamma=\pi|V|^2\rho_0$, where the normal density of states $\rho_0$ is constant in the range of energies around the Fermi level (flat band).\\
The coefficients $u_k$ and $v_k$ read
\begin{gather}
|{u_k}|^2=\frac{1}{2}(1+\frac{\epsilon_k}{\sqrt{{\epsilon_k}^2+|\Delta_{sc}|^2}})
\\
|{v_k}|^2=\frac{1}{2}(1-\frac{\epsilon_k}{\sqrt{{\epsilon_k}^2+|\Delta_{sc}|^2}})
\end{gather} \\
For $\Delta_{sc}\rightarrow 0$, $|{u_k}|^2\rightarrow 1$ for $\epsilon_k>0$ and $|{u_k}|^2\rightarrow 0$ for $\epsilon_k<0$, whereas ${|v_k}|^2\rightarrow 1$ for $\epsilon_k<0$ and ${|v_k}|^2\rightarrow 0$ for $\epsilon_k>0$. Thus a Bogoliubon excitation in the normal state corresponds to creating an electron for energies above the Fermi level and a hole of opposite momentum and spin for energies below the Fermi level. In the superconducting state, a Bogoliubon becomes a superposition of electron and hole states.\\
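These limits of the coherence factors in Eqs.~(7)--(8) can be checked numerically; the following minimal Python sketch (hypothetical code, with energies measured in units of the gap) verifies the normalization and the $\Delta_{sc}\rightarrow 0$ behavior for $\epsilon_k>0$:

```python
# Numerical check of the coherence factors, Eqs. (7)-(8)
# (hypothetical code; energies in units of the gap).
import math

def uv2(eps, gap):
    Ek = math.sqrt(eps**2 + gap**2)          # quasiparticle energy
    u2 = 0.5 * (1 + eps / Ek)
    v2 = 0.5 * (1 - eps / Ek)
    return u2, v2

u2, v2 = uv2(eps=0.3, gap=1.0)
assert abs(u2 + v2 - 1.0) < 1e-12            # normalization |u_k|^2 + |v_k|^2 = 1
u2, v2 = uv2(eps=1.0, gap=1e-9)              # gap -> 0 with eps_k > 0
print(round(u2, 6), round(v2, 6))            # 1.0 0.0: a pure electron above E_F
```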
To solve the above single-level Anderson impurity model, we use the Green's function equation of motion (EOM) method \cite{Zubarev1960}.
We are mainly interested in the spectral properties of the quantum impurity, which can be extracted from the single-particle retarded Green's function defined as
\begin{equation}
\begin{aligned}
G^r_{d\sigma}(t) = \langle\langle{d_{\sigma}(t);d^\dagger_{\sigma}(0)}\rangle\rangle=-i\theta(t)\langle{[d_{\sigma}(t),d^\dagger_{\sigma}(0)]_+}\rangle
\end{aligned}
\end{equation}
In the framework of the Green's function method, the Fourier transform of the single-particle retarded Green's function satisfies the equation of motion
\begin{equation}
\begin{aligned}
\omega \langle\langle{d_{\sigma};d^\dagger_{\sigma}}\rangle\rangle_\omega=\langle{[d_{\sigma},d^\dagger_{\sigma}]_+}\rangle+\langle\langle{[d_{\sigma},H];d^\dagger_{\sigma}}\rangle\rangle_\omega
\end{aligned}
\end{equation}
For a correlated ($U\neq0$) quantum impurity embedded in a superconducting bath, the Hamiltonian is not exactly solvable due to the quartic Coulomb interaction term. Therefore, we analyze the above Hamiltonian by treating the Coulomb interaction within the HFA.\\
For a finite superconducting gap $\Delta_{sc}$, the interaction term in the HFA is written as \cite{Shiba1973, Yoshioka2000, Martin2012, Vecino2003}
\begin{gather}
\begin{aligned}
U{n}_{d\uparrow}{n}_{d\downarrow}=U\langle{n}_{d\uparrow}\rangle{n}_{d\downarrow}+U\langle{n}_{d\downarrow}\rangle{n}_{d\uparrow}+ \\
U\langle{d^\dagger_{\downarrow} d^\dagger_{\uparrow}}\rangle d_{\uparrow} d_{\downarrow}+ U\langle{d_{\uparrow} d_{\downarrow}}\rangle d^\dagger_{\downarrow} d^\dagger_{\uparrow}
\end{aligned}
\end{gather}
where $\langle{n}_{d\sigma}\rangle$ is the average occupation of spin $\sigma\in\{\uparrow,\downarrow\}$ on the dot, and the dimensionless parameters $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ and $\langle{d^\dagger_{\downarrow} d^\dagger_{\uparrow}}\rangle$ are qualitative measures of the induced on-dot pairing.\\
Thus the Hamiltonian (Eq.(6)) becomes
\begin{gather}
\begin{aligned}
H={} & \sum_{k,\sigma}E_{k}\gamma^\dagger_{k\sigma}\gamma_{k\sigma}+E_0+\sum_{k\sigma}(V_ku^\ast_kd^\dagger_{\sigma}\gamma_{k\sigma}+h.c) + \\
& \sum_{k}[V^\ast_kv_k(d^\dagger_{\uparrow}\gamma^\dagger_{-k\downarrow}-d^\dagger_{\downarrow}\gamma^\dagger_{k\uparrow})+ h.c] + \\
& \sum_{\sigma}E_{d\sigma}d^\dagger_{\sigma}d_{\sigma}+ U\langle{d_{\uparrow} d_{\downarrow}}\rangle \left( d_{\uparrow} d_{\downarrow} + d^\dagger_{\downarrow} d^\dagger_{\uparrow}\right)
\end{aligned}
\end{gather}
where $E_{d\sigma}=\epsilon_{d}+U\langle{n}_{d\bar{\sigma}}\rangle$, with $\bar{\sigma}$ the spin opposite to $\sigma$.\\
Both $\langle{n}_{d\sigma}\rangle$ and $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ have to be calculated self-consistently.
Again, by employing the Green's function EOM method, we derive the following coupled equations for the correlated quantum impurity.
\begin{gather}
\begin{aligned}
(\omega-E_{d\uparrow})\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=1+V\sum_ku^\ast_k\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V\sum_kv_k\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle-U\langle{d_{\uparrow} d_{\downarrow}}\rangle \langle\langle{d^\dagger_{\downarrow}; d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
\begin{gather}
\begin{aligned}
(\omega-E_k)\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=V^\ast u_k\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V v_k\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
\begin{gather}
\begin{aligned}
(\omega+E_k)\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle=-V u^\ast_k\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V^\ast v^\ast_k\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
and
\begin{gather}
\begin{aligned}
(\omega+E_{d\downarrow})\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle=-V^\ast\sum_ku_k\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V^\ast\sum_kv^\ast_k\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle-U\langle{d_{\uparrow} d_{\downarrow}}\rangle \langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
Solving the above closed set of equations (Eqs.~(13)--(16)) yields the single-electron retarded Green's function $\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle$ of the quantum dot.
\begin{equation}
\resizebox{1.0\hsize}{!}{$G^r_{d}(\omega)=\frac{\omega+E_{d\downarrow}-I_1}{(\omega+E_{d\downarrow}-I_1)(\omega-E_{d\uparrow}-I_2)-(I_3+ U\langle{d_{\uparrow} d_{\downarrow}}\rangle)^2}$}
\end{equation}
where
\begin{equation}
I_1=|V|^2\sum_k\left(\frac{|u_k|^2}{\omega+E_k}+\frac{|v_k|^2}{\omega-E_k}\right)
\end{equation}
\begin{equation}
I_2=|V|^2\sum_k\left(\frac{|u_k|^2}{\omega-E_k}+\frac{|v_k|^2}{\omega+E_k}\right)
\end{equation}
and
\begin{equation}
I_3=|V|^2\sum_ku_kv^\ast_k\left(\frac{1}{\omega+E_k}-\frac{1}{\omega-E_k}\right)
\end{equation}
where $I_1$ and $I_2$ are the diagonal parts and $I_3$ is the off-diagonal part of the self-energy (which corresponds to the induced pairing) due to the coupling between the QD and the superconducting host, in the Nambu representation \cite{Bauer2007,Baranski2013}.
By transferring the summation over $k$ into an integral over $\epsilon$, the multi-dimensional problem reduces to a one-dimensional one, and for $|\omega|<\Delta_{sc}$, i.e., within the superconducting gap, one has
$$I_1=I_2=-2|V|^2\rho_0\omega\int_{0}^{D\rightarrow\infty} \left[\frac{1}{\epsilon^2+\left(\Delta^2_{sc}-\omega^2\right)}\right]d\epsilon$$
\begin{equation}
=-\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}
\end{equation}
and
$$I_3=2|V|^2\rho_0\Delta_{sc}\int_{0}^{\infty} \left[\frac{1}{\epsilon^2+\left(\Delta^2_{sc}-\omega^2\right)}\right]d\epsilon$$
\begin{equation}
=\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}
\end{equation}
For $|\omega|>\Delta_{sc}$, a simple manipulation provides
\begin{equation}
I_1=I_2=-\frac{i\Gamma\omega}{\sqrt{\omega^2-\Delta_{sc}^2}}
\end{equation}
\begin{equation}
I_3=\frac{i\Gamma\Delta_{sc}}{\sqrt{\omega^2-\Delta_{sc}^2}}
\end{equation}
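The self-energies can be evaluated directly; the following minimal Python sketch (hypothetical code; $\omega$ and $\Gamma$ in units of $\Delta_{sc}$) returns the real sub-gap expressions inside the gap and the imaginary continuation, proportional to $1/\sqrt{\omega^2-\Delta^2_{sc}}$, outside:

```python
# Minimal sketch of the hybridization self-energies (hypothetical code;
# omega and Gamma are measured in units of the gap, Delta_sc = 1).

def self_energies(w, gamma, gap=1.0):
    """Return (I1, I3), with I2 = I1: real inside the gap, imaginary outside."""
    if abs(w) < gap:
        root = (gap**2 - w**2) ** 0.5
        return -gamma * w / root, gamma * gap / root
    root = (w**2 - gap**2) ** 0.5
    return -1j * gamma * w / root, 1j * gamma * gap / root

I1, I3 = self_energies(w=0.5, gamma=0.2)
print(round(I1, 4), round(I3, 4))   # -0.1155 0.2309
```

At the Fermi level ($\omega=0$) only the anomalous part $I_3=\Gamma$ survives, which is the induced on-dot pairing discussed above.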
In the next subsections, we calculate the single-electron retarded Green's function and the corresponding spectral density for an uncorrelated and a correlated quantum impurity, both for a finite superconducting gap and in the low-frequency limit $|\omega|<<\Delta_{sc}$.
\subsection{Non-interacting case ($U=0$)}
For the non-interacting case (uncorrelated quantum impurity), $E_{d\sigma}=\epsilon_d$; the Hamiltonian is exactly solvable, and the Green's function of the quantum dot is given by (from Eq.~(17))
\begin{gather}
G^r_{d0}(\omega)=\frac{\omega+\epsilon_d+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}}{\omega^2+\frac{2\Gamma\omega^2}{\sqrt{\Delta^2_{sc}-\omega^2}}-\epsilon^2_d-\Gamma^2} \; \; , \; \; |\omega|<\Delta_{sc}
\end{gather}
\begin{gather}
G^r_{d0}(\omega)=\frac{\omega+\epsilon_d+\frac{i\Gamma\omega}{\sqrt{\omega^2-\Delta^2_{sc}}}}{\omega^2+\frac{2i\Gamma\omega^2}{\sqrt{\omega^2-\Delta^2_{sc}}}-\epsilon^2_d-\Gamma^2} \; \; , \; \; |\omega|>\Delta_{sc}
\end{gather}
For $|\omega|<\Delta_{sc}$, the poles of the Green's function (i.e., the zeros of its denominator) give the energies of the localized excited states or Andreev bound states (ABSs), which can be obtained by solving the following equation:
\begin{gather}
\omega^2-\left[\frac{\left(\epsilon^2_d +\Gamma^2\right)}{\left(1+\frac{2\Gamma}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)}\right]=0
\end{gather}
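Equation~(27) is a transcendental equation for the bound-state energy, but since its left-hand side is monotonically increasing on $0<\omega<\Delta_{sc}$ it is easily solved by bisection. A minimal Python sketch (hypothetical code, energies in units of $\Delta_{sc}$):

```python
# Bisection solver for the bound-state equation, Eq. (27) (hypothetical
# code; all energies in units of Delta_sc, so the gap is 1).
import math

def abs_energy(eps_d, gamma, gap=1.0, tol=1e-12):
    # f is monotonically increasing on (0, gap) and changes sign there
    f = lambda w: w**2 * (1 + 2 * gamma / math.sqrt(gap**2 - w**2)) - (eps_d**2 + gamma**2)
    lo, hi = 0.0, gap * (1 - 1e-12)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

Eb = abs_energy(eps_d=0.0, gamma=0.5)
print(round(Eb, 3))   # a bound state well inside the gap
```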
The Green's function for $|\omega|<\Delta_{sc}$ can also be written as ($\omega\rightarrow\omega+i\delta$)
\begin{gather}
G^r_{d0}(\omega+i\delta)=\sum_{\alpha=\pm}{\frac{W^{\alpha}_b}{\omega-E^{\alpha}_b+i\delta}}
\end{gather}
where $E^{+}_b =E_b$ and $E^{-}_b =-E_b$ are the poles of the Green's function, i.e., the solutions of Eq.~(27), while their respective spectral weights $W^+_b$ and $W^-_b$ are calculated from the residues
\begin{equation}
W^{\alpha}_b = \left[\frac{\omega+\epsilon_d+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}} {\frac{d}{d\omega}\left(\omega^2+\frac{2\Gamma\omega^2}{\sqrt{\Delta^2_{sc}-\omega^2}}-\epsilon^2_d-\Gamma^2\right)}\right]_{\omega=E^{\alpha}_b}
\end{equation}\\
\begin{equation}
\resizebox{1.0\hsize}{!}{$W^{\alpha}_b = \frac{1}{2}\left(\Delta^2_{sc}-E^2_b\right)\left[\frac{\sqrt{\Delta^2_{sc}-E^2_b}(1+\frac{\alpha\epsilon_d}{E_b})+\Gamma}{(\Delta^2_{sc}-E^2_b)(\sqrt{\Delta^2_{sc}-E^2_b}+2\Gamma)+\Gamma E^2_b}\right]$}
\end{equation}
For the uncorrelated QD in the particle-hole symmetric case ($\epsilon_d=-U/2=0$), the weights of the ABSs become equal, i.e., $W^{+}_b=W^{-}_b$.\\
The parameter $\delta\rightarrow 0^+$ is equivalent to an infinitesimally weak coupling to the normal lead in the S-QD-N system \cite{Baranski2013, Domanski2008}. Weak coupling to the normal lead means that it serves only as a probe providing information about the quantum dot, without disturbing the quantum states there. A finite coupling to the normal lead changes the width of the sub-gap states, i.e., the lifetime of the quasiparticles ($\delta\propto\frac{1}{\tau}$). Thus, in our analysis of the S-QD system ($\delta=10^{-5}\Gamma$), the sub-gap states become Dirac delta functions, i.e., they represent quasiparticles of infinite lifetime.\\
The total spectral density on the quantum impurity is obtained from the imaginary part of the retarded Green's function, $\rho_{d0}=-\frac{1}{\pi}Im\{G^r_{d0}(\omega)\}$ (here we consider only spin up; the total spectral density is the sum of the spin-up and spin-down spectral densities, which are equal by symmetry).
\begin{gather}
\rho_{d0}(\omega)= \frac{\delta}{\pi}\sum_{\alpha=\pm}\left[\frac{W^{\alpha}_b}{(\omega-E^{\alpha}_b)^2+\delta^2}\right]+\rho_{cont}(\omega)
\end{gather}
with
\begin{equation}
\resizebox{1.0\hsize}{!}{$\rho_{cont}(\omega)=\frac{1}{\pi}\frac{\Gamma\omega}{\sqrt{\omega^2-\Delta^2_{sc}}}\left[\frac{\omega^2+2\epsilon_d\omega+\Gamma^2+\epsilon^2_d}{\left(\omega^2-\Gamma^2-\epsilon^2_d\right)^2+\left(\frac{2\Gamma\omega^2}{\sqrt{\omega^2-\Delta^2_{sc}}}\right)^2}\right]$}
\end{equation}\\
where the first term is the discrete spectral density for $|\omega|<\Delta_{sc}$ and the second term $\rho_{cont}(\omega)$ is the continuum spectral density for $|\omega|>\Delta_{sc}$.
\subsection{Interacting case ($U\neq0$): finite superconducting gap ($\Delta_{sc}$)}
For the interacting (correlated) quantum impurity, the single-particle retarded Green's functions (Eq.~(17)) and the anomalous Green's function, which corresponds to the pairing parameter, are calculated by using Eqs.~(13)-(16) above \cite{Bauer2007,Martin2012,Wentzell2016}.
\begin{equation}
\resizebox{1.0\hsize}{!}{$G^r_{d,11}(\omega)=\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=\left[\frac{\left(\omega+E_{d\downarrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)}{\left(\omega+E_{d\downarrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)\left(\omega-E_{d\uparrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)-\left(\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}+U\langle{d_{\uparrow} d_{\downarrow}}\rangle\right)^2}\right]$}
\end{equation}
where $\omega\rightarrow\omega+i\delta$
\begin{equation}
\resizebox{1.0\hsize}{!}{$G^r_{d,22}(\omega)=\langle\langle{d^\dagger_{\downarrow};d_{\downarrow}}\rangle\rangle=\left[\frac{\left(\omega-E_{d\downarrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)}{\left(\omega-E_{d\downarrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)\left(\omega+E_{d\uparrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)-\left(\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}+U\langle{d_{\uparrow} d_{\downarrow}}\rangle\right)^2}\right]$}
\end{equation}
\begin{equation}
\resizebox{1.0\hsize}{!}{$G^r_{d,21}(\omega)=\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle=\left[\frac{\left(\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}+U\langle{d_{\uparrow} d_{\downarrow}}\rangle\right)}{\left(\omega+E_{d\downarrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)\left(\omega-E_{d\uparrow}+\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}\right)-\left(\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}+U\langle{d_{\uparrow} d_{\downarrow}}\rangle\right)^2}\right]$}
\end{equation}
The self-consistent equations for the average occupation number $\langle{n_{d\sigma}}\rangle$ at the quantum dot level for a given spin $\sigma$ and for the pairing parameter $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ are obtained by integrating the corresponding spectral densities over the continuum of energies up to the Fermi level $\epsilon_f$ (at $T=0$ K).
\begin{gather}
\langle{n_{d\sigma}}\rangle=-\frac{1}{\pi}\int_{-\infty}^0 Im\{\langle\langle{d_{\sigma};d^\dagger_{\sigma}}\rangle\rangle \} d\omega
\end{gather}
\begin{gather}
\langle{d_{\uparrow} d_{\downarrow}}\rangle=-\frac{1}{\pi}\int_{-\infty}^0 Im\{\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle \} d\omega
\end{gather}
Starting from the initial guess for the occupation, $\langle n_{d\uparrow}\rangle_{(1)} = 0.5$, we iterate the three self-consistent Hartree-Fock equations for $\langle n_{d\uparrow}\rangle$, $\langle n_{d\downarrow}\rangle$ and $\langle{d_{\uparrow} d_{\downarrow}}\rangle$, $k$ times, until $|\langle n_{d\uparrow}\rangle_{(k+1)}-\langle n_{d\uparrow}\rangle_{(k)}| \leq 10^{-6}$.\\
The spectral density of the quantum dot features discrete ABSs inside the superconducting gap and is given by \cite{Wentzell2016}
\begin{equation}
\rho_d(\omega)=-\frac{1}{\pi}Im\left[G^r_{d,11}(\omega)+G^r_{d,22}(\omega)\right]
\end{equation}
\subsection{Interacting case ($U\neq0$): low-frequency limit ($|\omega|\ll\Delta_{sc}$)}
For the correlated quantum impurity, a simple solvable limit is that of a large gap, i.e., $\Delta_{sc}\rightarrow\infty$, which has been discussed previously \cite{Bauer2007,Baranski2013,Domanski2008,Vecino2003,Meng2009}. This limit is not realized in real experiments, but it allows one to obtain an exact analytical solution, and it becomes especially useful for complex multi-terminal and/or multi-dot nanostructures.
We take the low-frequency limit $|\omega|\ll\Delta_{sc}$ after taking $D\rightarrow\infty$, so that the proximity effect survives; this is equivalent to the $\Delta_{sc}\rightarrow\infty$ limit.\\
The Hamiltonian for the correlated quantum impurity coupled to a BCS superconductor (Eq.~(6)) is
\begin{gather}
\begin{aligned}
H={} & \sum_{k,\sigma}(E_{k}\gamma^\dagger_{k\sigma}\gamma_{k\sigma}+E_0)+\sum_{k\sigma}(V u^\ast_kd^\dagger_{\sigma}\gamma_{k\sigma}+h.c)+ \\
& \sum_{k}[V^\ast v_k(d^\dagger_{\uparrow}\gamma^\dagger_{-k\downarrow}-d^\dagger_{\downarrow}\gamma^\dagger_{k\uparrow})+ h.c]+\sum_{\sigma}E_{d\sigma}d^\dagger_{\sigma}d_{\sigma}
\end{aligned}
\end{gather}
where $E_{d\sigma}=\epsilon_{d}+U\langle{n}_{d\bar{\sigma}}\rangle$ and $\sigma\neq\bar{\sigma}$\\
By employing the Green's function EOM method, we derive the following coupled equations for the case of the correlated quantum dot.
\begin{gather}
\begin{aligned}
(\omega-E_{d\uparrow})\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=1+V\sum_ku^\ast_k\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V\sum_kv_k\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
\begin{gather}
\begin{aligned}
(\omega-E_k)\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=V^\ast u_k\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V v_k\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
\begin{gather}
\begin{aligned}
(\omega+E_k)\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle=-V u^\ast_k\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V^\ast v^\ast_k\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
and
\begin{gather}
\begin{aligned}
(\omega+E_{d\downarrow})\langle\langle{d^\dagger_{\downarrow};d^\dagger_{\uparrow}}\rangle\rangle=-V^\ast\sum_ku_k\langle\langle{\gamma^\dagger_{-k\downarrow};d^\dagger_{\uparrow}}\rangle\rangle+\\
V^\ast\sum_kv^\ast_k\langle\langle{\gamma_{k\uparrow};d^\dagger_{\uparrow}}\rangle\rangle
\end{aligned}
\end{gather}
We can then solve this closed set of equations (Eqs.~(40)-(43)) to obtain the single-electron retarded Green's function as follows
\begin{equation}
\resizebox{1.0\hsize}{!}{$G^r_{d}(\omega)=\langle\langle{d_{\uparrow};d^\dagger_{\uparrow}}\rangle\rangle=\frac{\omega+E_{d\downarrow}-I_1}{(\omega-E_{d\uparrow}-I_2)(\omega+E_{d\downarrow}-I_1)-(I_3)^2}$}
\end{equation}
Where for $|\omega|<\Delta_{sc}$ we have
\begin{equation}
I_1=I_2=-\frac{\Gamma\omega}{\sqrt{\Delta^2_{sc}-\omega^2}}
\end{equation}
and
\begin{equation}
I_3=\frac{\Gamma\Delta_{sc}}{\sqrt{\Delta^2_{sc}-\omega^2}}
\end{equation}
In the low-frequency regime $|\omega|\ll\Delta_{sc}$, Eqs.~(45) and (46) become
$$I_1=I_2=0, I_3=\Gamma.$$
Thus the Green's function of the correlated quantum dot (Eq.~(44)) in the above limit becomes
\begin{equation}
G^r_{d}(\omega)=\frac{\omega+E_{d\downarrow}}{(\omega-E_{d{\uparrow}})(\omega+E_{d\downarrow})-\left(\Gamma\right)^2}
\end{equation}
where $\omega\rightarrow\omega+i\delta$\\
The energies of the ABSs are given by the poles of the Green's function, i.e.,
$$D(\omega)=(\omega-E_{d\bar{\sigma}})(\omega+E_{d\sigma})-\left(\Gamma\right)^2=0$$
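Since $D(\omega)$ is quadratic in $\omega$, its roots can be written down explicitly; for the spin-$\uparrow$ Green's function of Eq.~(47) (this intermediate step is ours, spelled out for convenience),

```latex
\omega_{\pm}=\frac{E_{d\uparrow}-E_{d\downarrow}}{2}
\pm\sqrt{\left(\frac{E_{d\uparrow}+E_{d\downarrow}}{2}\right)^{2}+\Gamma^{2}}
```

In the non-magnetic particle-hole symmetric case ($E_{d\uparrow}=E_{d\downarrow}=0$), this reduces to $\omega_{\pm}=\pm\Gamma$, i.e., the ABSs sit at the edges of the induced gap $\Delta_d=\Gamma$.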
The corresponding spectral density and average occupation number for spin $\sigma$ on the quantum dot are given by
\begin{gather}
\rho_{d\sigma}(\omega)=-\frac{1}{\pi}Im\{G^r_{d\sigma}(\omega)\}
\end{gather}
\begin{equation}
\langle n_{d\sigma}\rangle=\left[\frac{1}{2}-\frac{E_{d\sigma}}{\pi\sqrt{E^2_{d\sigma}+\Delta^2_d}} \tan^{-1}\left(\frac{\sqrt{E^2_{d\sigma}+\Delta^2_d}}{\delta}\right)\right]
\end{equation}
where $E_{d\sigma}=\epsilon_d+U\langle n_{d\bar{\sigma}}\rangle$ and $\Delta_d = \Gamma$ is the proximity-induced superconducting gap, i.e., in the low-frequency limit $|\omega|\ll\Delta_{sc}$ the quantum dot itself becomes a superconducting grain with an induced gap equal to $\Gamma$ (similar to the uncorrelated quantum dot with $\Gamma\ll\Delta_{sc}$).\\
The above equation has the same form as that obtained in Ref.~\cite{Anderson1961} for a quantum impurity embedded in a normal metallic host.
These two coupled equations for $\langle n_{d\uparrow}\rangle$ and $\langle n_{d\downarrow}\rangle$ give the occupations of the spin-up and spin-down states at the impurity site and the magnetic moment, $m=\langle n_{d\uparrow}\rangle-\langle n_{d\downarrow}\rangle$.\\
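In the $\delta\rightarrow 0^{+}$ limit the arctangent in Eq.~(49) tends to $\pi/2$, and the self-consistency reduces to $\langle n_{d\sigma}\rangle=\frac{1}{2}-E_{d\sigma}/(2\sqrt{E_{d\sigma}^{2}+\Gamma^{2}})$. A minimal damped fixed-point iteration (our sketch; the function names, mixing parameter and initial guess are arbitrary choices) then reproduces the non-magnetic solution for $U/\Gamma<2$ and the magnetic pair, e.g. $\langle n_{d\uparrow}\rangle\simeq0.87266$ at $U/\Gamma=3$, quoted in the results section.

```python
import math

def hf_occupations(eps_d, U, Gamma, n_up=0.9, n_dn=0.1, mixing=0.5, tol=1e-12):
    """Damped self-consistent iteration of Eq. (49) in the delta -> 0+ limit:
        <n_sigma> = 1/2 - E_sigma / (2*sqrt(E_sigma^2 + Gamma^2)),
    with E_sigma = eps_d + U*<n_{-sigma}>.  The asymmetric initial guess lets
    the iteration fall into a magnetic solution whenever one exists."""
    def occ(n_other):
        E = eps_d + U*n_other
        return 0.5 - E/(2.0*math.sqrt(E*E + Gamma*Gamma))
    for _ in range(5000):
        new_up = (1.0 - mixing)*n_up + mixing*occ(n_dn)
        new_dn = (1.0 - mixing)*n_dn + mixing*occ(new_up)
        converged = abs(new_up - n_up) < tol and abs(new_dn - n_dn) < tol
        n_up, n_dn = new_up, new_dn
        if converged:
            break
    return n_up, n_dn
```

For $\epsilon_d=-U/2$ and $\Gamma=1$, the magnetic moment $m=\langle n_{d\uparrow}\rangle-\langle n_{d\downarrow}\rangle$ obtained this way vanishes at $U/\Gamma=1$ and is finite at $U/\Gamma=3$, in line with the phase boundary $U=2\Gamma$ discussed below.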
In the next section, we discuss the results obtained by the numerical computations for various parameter regimes.
\section{Results and Conclusion}
The competition between the non-magnetic (singlet) and magnetic (doublet) ground states at an impurity site embedded in a superconducting host is governed by several energy scales: $\Delta_{sc}$, $\Gamma$, $U$ and $\epsilon_d$.\\
In Fig.~2 we present the spectral density $\rho_{d0}(\omega)$ (Eq.~(31)) of the uncorrelated ($U=0$) quantum impurity for different coupling strengths $\Gamma$. The spectral density $\rho_{d0}(\omega)$ vanishes for $|\omega|<\Delta_{sc}$, i.e., inside the superconducting gap, except at certain discrete values. These resonant sub-gap states (ABSs) represent quasiparticles of infinite lifetime ($\tau=1/\delta$) and are a signature of the proximity effect.
\begin{figure}[h]
\includegraphics[scale=0.28]{sqdU0}
\centering
\caption{The spectral density $\rho_{d0}(\omega)$ of the uncorrelated quantum impurity obtained for various values of $\Gamma/\Delta_{sc}$ in the electron-hole symmetric case ($\epsilon_d=-U/2=0$) and $\delta\rightarrow 0^+$ at $T=0$ K.}
\label{fig:nonfloat}
\end{figure}\\
In the superconducting atomic limit ($\Gamma\ll\Delta_{sc}$ or $\Delta_{sc}\rightarrow\infty$), the poles of the Green's function are $\omega=\pm\sqrt{\epsilon^2_d+\Gamma^2}$, which shows that in this limit the impurity itself becomes superconducting, with the induced pairing gap $\Delta_d=|\Gamma|$, as shown in Fig.~2(a). This superconducting singlet $|S\rangle$ ($S=0$) is a superposition of the empty, $|0\rangle$, and doubly occupied, $|\uparrow\downarrow\rangle$, states, i.e., $|S\rangle= -v^{\ast}_d|\uparrow\downarrow\rangle+u_d|0\rangle$.
On the other hand, for $\Gamma\gg\Delta_{sc}$, i.e., in the strong-coupling regime, the resonant sub-gap quasiparticle states merge with the gap-edge singularities at $\pm\Delta_{sc}$, as shown in Fig.~2(d). Figures~2(b) and 2(c) show the well-defined sub-gap ABSs for intermediate values of $\Gamma/\Delta_{sc}$.\\
In the low-frequency regime $|\omega|\ll\Delta_{sc}$, the proximity-induced on-dot pairing $|\Delta_d|=\Gamma$ competes with the Coulomb repulsion $U$. For the $U=0$ case, it follows from Eq.~(49) that $$\langle n_{d\uparrow}\rangle=\langle n_{d\downarrow}\rangle$$
which is a non-magnetic solution as discussed above.
\begin{figure}[h]
\includegraphics[scale=0.26]{sc}
\caption{Self-consistent plot of $\langle n_{d\uparrow}\rangle$ vs $\langle n_{d\downarrow}\rangle$ for a) the non-magnetic case and b) the magnetic case for electron-hole symmetry ($\epsilon_d=\frac{-U}{2}$).}
\label{fig:nonfloat}
\end{figure}\\
The self-consistent solution of Eq.~(49) for $\langle n_{d\uparrow}\rangle$ and $\langle n_{d\downarrow}\rangle$ shows the existence of the singlet ($|S\rangle$) and magnetic doublet ($|\uparrow\rangle$, $|\downarrow\rangle$) solutions for different values of $U/\Gamma$ (see Fig.~3 and Fig.~4).
For small $U$, i.e., $U/\Gamma=1.0$, there exists only one non-magnetic solution, at $\langle n_{d\uparrow}\rangle$=$\langle n_{d\downarrow}\rangle=1/2$. But for $U/\Gamma=3.0$ we find the \enquote{localized} case with three possible solutions: one non-magnetic solution at $\langle n_{d\uparrow}\rangle$=$\langle n_{d\downarrow}\rangle=1/2$ and a pair of stable magnetic solutions at $\langle n_{d\uparrow}\rangle$=$1-\langle n_{d\downarrow}\rangle=0.87266$. This spurious spin-symmetry breaking ($\langle n_{d\uparrow}\rangle\neq\langle n_{d\downarrow}\rangle$), or spontaneous generation of a Zeeman field, is not present in the exact solution. Despite this spurious symmetry breaking, the HFA contains the physics of magnetization due to Coulomb correlation and provides a qualitatively sound description of the singlet-to-doublet transition and the corresponding phase diagram for weak ($\Gamma\ll\Delta_{sc}$) and intermediate ($\Gamma\leq\Delta_{sc}$) coupling strengths with small on-site Coulomb interaction $U$.
\begin{figure}[h]
\includegraphics[scale=0.27]{fig}
\centering
\caption{$U/\Gamma$ dependence of the magnetic moment $m$ at the impurity site embedded in a superconductor for $\epsilon_d=-U/2$ at $T=0$ K in the low-frequency regime $|\omega|\ll\Delta_{sc}$. The inset shows the $U/\Gamma_N$ dependence of the magnetic moment $m$ at an impurity site embedded in a normal-metal (N.M.) host.}
\label{fig:nonfloat}
\end{figure}\\
Fig.~4 shows the $U/\Gamma$ dependence of the magnetic moment $m=\langle n_{d\uparrow}\rangle-\langle n_{d\downarrow}\rangle$ at the impurity site for the electron-hole symmetric case, i.e., $\epsilon_d=-U/2$. The ground state is a singlet as long as $U\leq 2\Gamma$ and a doublet otherwise. Thus the ground-state transition occurs at $U/2\Gamma\simeq 1$. In the superconducting case, the Andreev bound states within the superconducting gap cause the singlet-to-doublet transition. For a quantum impurity embedded in a normal metallic host (N-QD) with tunnel coupling $\Gamma_N$, the HFA leads to a magnetic phase transition at $U/\pi\Gamma_N=1$ (see inset of Fig.~4). Thus the singlet-to-doublet transition in N-QD (in the non-Kondo regime) occurs at a larger value of the on-site Coulomb interaction $U$ as compared to S-QD ($\Delta_{sc}\rightarrow\infty$).\\
Let us now discuss the magnetic moment as a function of the energy level of quantum impurity, $\epsilon_d$, away from the electron-hole symmetric case ($\epsilon_d\neq -U/2$). The phase diagram depicts the stability of the magnetic doublet versus that of the spin-singlet.\\
Fig.~5(b) is the phase diagram for an impurity embedded in a superconducting host; it shows the magnetic moment in a color-scale representation as a function of $\epsilon_d/U$ and $\Gamma/U$. This singlet-to-doublet transition is consistent with previous results (exact for the particle-hole symmetric case) \cite{Bauer2007,Meng2009}. These authors considered an effective localized model with $\Delta_{sc}\rightarrow\infty$ to study the sub-gap states and to obtain the phase boundary analytically. The equation of the phase boundary is $\Gamma/U=\sqrt{1/4-(\epsilon_d/U+1/2)^2}$ (black dashed line in Fig.~5(b)).\\
Fig.~5(a) shows the phase diagram for an impurity embedded in a normal metal \cite{Anderson1961}. If the metal is in the superconducting state with $|\omega|\ll\Delta_{sc}$, then the magnetic region is enhanced, as compared to the normal metallic state, by a factor of 1.9 (i.e., the area of the magnetic doublet region in the superconducting case $\approx$ 1.9 $\times$ the area of the magnetic doublet region in the normal case). The enhancement of the magnetic region is due to the change in the density of states near the Fermi level of the host metal.\\
\begin{figure}[h]
\includegraphics[scale=0.26]{phase1}
\centering
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.26]{phase2}
\centering
\caption{Ground-state phase diagram showing the non-magnetic (singlet) and magnetic (doublet) regions for a quantum impurity embedded in the metal in a) the normal state ($\Gamma_N$ is the coupling between the impurity and the normal metallic lead) and b) the superconducting state for $|\omega|\ll\Delta_{sc}$. The black dashed line shows the phase boundary obtained from the effective Hamiltonian \cite{Bauer2007}.}
\label{fig:nonfloat}
\end{figure}\\
For a finite superconducting gap, the self-consistent treatment of $\langle n_{d\uparrow}\rangle$, $\langle n_{d\downarrow}\rangle$ and $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ (Eqs.~(36) and (37)) is necessary to study the ABSs and the BCS-singlet to magnetic-doublet transition.\\
\begin{figure}[h]
\includegraphics[scale=0.255]{magind}
\centering
\caption{$U/\Gamma$ dependence of the {\bf{(a)}} magnetic moment $m$ and {\bf{(b)}} scaled pairing parameter $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ for different values of $\Delta_{sc}/\Gamma$ at the impurity site embedded in a superconductor for $\epsilon_d=-U/2$ at $T =0$ K.}
\label{fig:nonfloat}
\end{figure}
Figs.~6(a) and 6(b) show the dependence of the magnetic moment $m$ and the pairing parameter $\langle{d_{\uparrow} d_{\downarrow}}\rangle$ on the Coulomb interaction $U$ for various $\Delta_{sc}$. The discontinuous jump of the magnetic moment at the singlet-to-doublet transition differs from the N-QD case \cite{Anderson1961}, from S-QD in the $|\omega|\ll\Delta_{sc}$ limit, and from Yoshioka \cite{Yoshioka2000}, where the magnetic moment increases continuously from zero. Also, the pairing parameter decreases with increasing on-site Coulomb interaction, due to the suppression of superconducting correlations by the repulsive Coulomb interaction, and becomes discontinuous at the singlet-to-doublet transition. The ground state is always a singlet for $U/\Gamma<2$, and it can become a doublet when $U/\Gamma$ is increased. The value of the Coulomb interaction $U$ at which the transition occurs decreases with increasing superconducting gap $\Delta_{sc}$. This indicates that the magnetic doublet region is enhanced by increasing the superconducting gap. For a large superconducting gap, i.e., $\Delta_{sc}=100\Gamma$, the non-magnetic to magnetic transition occurs close to $U/\Gamma=2$, and the induced gap becomes equal to $\Gamma$ in the singlet state and zero in the doublet state, similar to the low-frequency or superconducting atomic limit for the electron-hole symmetric case.
\begin{figure}[h]
\includegraphics[scale=0.44]{absvsu}
\centering
\caption{Subgap ABSs as a function of the dot energy level $\epsilon_d/\Delta_{sc}$ (a,b,c) and the Coulomb interaction $U/\Delta_{sc}$ (d) for different $\Delta_{sc}$ values in the electron-hole symmetric case. (This is a color map in which white and black correspond to $\rho_d(\omega)=0$ and $\rho_d(\omega)=1$, respectively.)}
\label{fig:nonfloat}
\end{figure}\\
To provide a complete picture of S-QD within the HFA, we plot the Andreev bound states as functions of the dot parameters. In Figs.~7(a), 7(b) and 7(c), we plot the ABSs as a function of $\epsilon_d/\Delta_{sc}$ for different $U/\Delta_{sc}$ and $\Delta_{sc}/\Gamma$ ratios. The number of Andreev bound states depends on the ratios $U/\Delta_{sc}$, $\Delta_{sc}/\Gamma$ and $\epsilon_d/\Delta_{sc}$. In the singlet region, only two ABSs appear, symmetrically with respect to the Fermi level, whereas in the doublet case the number of ABSs is doubled. Fig.~7(d) shows the ABSs as a function of $U/\Delta_{sc}$ for $\Delta_{sc}=5\Gamma$. It is also clear that the outer ABSs merge with the gap edge for larger values of $U/\Delta_{sc}$. These results qualitatively agree with recent experimental studies of superconductor quantum dot nanostructures \cite{Lee2014,Pillet2013}.\\
In conclusion, to gain insight into the physics of hybrid superconductor-quantum dot devices, we considered uncorrelated and correlated quantum impurities embedded in a BCS superconducting host. For the correlated quantum impurity ($U>0$), we analyzed the low-frequency limit ($|\omega|\ll\Delta_{sc}$) and the finite-$\Delta_{sc}$ case within the HFA. Our study of the weak and intermediate coupling regimes ($\Gamma\ll\Delta_{sc}$ and $\Gamma\leq\Delta_{sc}$) can be regarded as complementary to the previous numerical renormalization group analysis of S-QD by J. Bauer et al. \cite{Bauer2007} in the strong-coupling regime ($\Gamma\gg\Delta_{sc}$). The major difference between the two regimes is the nature of the singlet. In the weak-coupling regime, the singlet ground state corresponds to an s-wave pair, i.e., a BCS singlet, whereas in the strong-coupling regime the screened local spin, i.e., a Kondo singlet, competes with the BCS singlet (more precisely, the superconducting gap $\Delta_{sc}$ competes with the Kondo temperature $T_K$).\\
The low-frequency limit is difficult to realize exactly in experiments, but it indicates the necessary condition for the formation of a magnetic moment at the impurity site and allows one to study the sub-gap states to an extent. The competition between the proximity-induced local pairing $\Delta_d$ and the Coulomb interaction $U$ on the dot results in a transition from the BCS-like state to the singly occupied one. It is clear from our analysis that the effective local Hamiltonian phase diagram \cite{Bauer2007,Meng2009} is recovered by the Hartree-Fock treatment of the complete S-QD Hamiltonian in the low-frequency limit. For the finite superconducting gap case, we studied the singlet-doublet transition and the proximity-induced pairing parameter as functions of $U/\Gamma$ for different superconducting gaps $\Delta_{sc}$ in the electron-hole symmetric case. We also analyzed the quantum dot's spectral density to study the ABSs as a function of the dot parameters. We found that superconductivity assists the formation of the magnetic moment and that the magnetic doublet region is maximal for $\Delta_{sc}\gg\Gamma$; it shrinks with decreasing superconducting gap and can even become smaller than the magnetic doublet phase in N-QD. However, in the strong-coupling regime with strong on-site Coulomb interaction $U$, and below the Kondo temperature $T_K$, the local magnetic moment at the impurity site can be screened by the conduction electrons at the Fermi energy (Kondo effect), and this screening competes with the superconducting gap $\Delta_{sc}$ in the singlet-doublet transition \cite{Bauer2007,Maurand2012,Zitko2015}. Our mean-field analysis provides a basis for studying such strongly correlated cases toward a one-to-one quantitative comparison between experimental and theoretical results.
Further, the above Hartree-Fock mean-field analysis can be extended to study the magnetic, spectral, and transport properties of multi-dot and multi-terminal superconductor-quantum dot devices.
\begin{acknowledgements}
One of the authors, Sachin Verma, is presently a research scholar at the department of physics IIT Roorkee and is highly thankful to the Ministry of Human Resource Development (MHRD), India, for their financial support, in the form of Ph.D. fellowship.
\end{acknowledgements}
\section{Introduction}\label{intro}
The study of the effects of periodic forces on a variety of systems, including classical~\cite{Higashikawa2018,Salerno2016}, quantum~\cite{Bukov2015,Eckardt2015,Kohler1997,Lewis1969,Brandner2016} and statistical systems~\cite{Jung1993,Brandner2015,Dutta2003,Dutta2004,Wang2015,Knoch2019,Gammaitoni1998,Kim2010,Fiore2019,Koyuk2018,Oberreiter2019,Tociu2019}, has been of continual interest for various and diverse reasons.
Periodically driven many-particle systems, under certain conditions, can exist in a state exhibiting periodic thermodynamic properties. It may not be possible to uncover the features of this {\it oscillating state} either from the knowledge of the equilibrium properties of the corresponding systems in the absence of driving or by studying the effects of weak periodic forces on the thermodynamics of such systems. Instead it may be required to explore the behavior of such systems by necessarily treating the driving nonperturbatively.
A suitable framework to investigate the properties of an oscillating state is presumably some sort of stochastic thermodynamics, wherein the periodic driving is appropriately incorporated. The fluctuations of a macroscopic variable of a thermodynamic system can be assumed to follow a continuous Markov process even in the presence of driving. It is expected that the stochastic process that describes the fluctuations is continuous due to the macroscopic nature of the variable. On the other hand, it is not evident whether or not the Markov property is a reasonable assumption. This is due to the fact that, as the frequency of the driving increases, the contributions due to the higher order time-derivative terms can become increasingly significant. Hence these higher order terms may have to be accommodated in effecting the stochastic process. Essentially, we could include certain additional variables, apart from the ones that are relevant in the absence of driving, and then assume that the extended set of variables follows continuous Markov process and that its asymptotic distribution describes the oscillating state.
The study of an underdamped Brownian motion, subjected to periodic driving, can be of considerable interest for multiple reasons. First, it is a prototypical example of the Langevin dynamics. Since any continuous Markov process is represented by a Langevin equation, it may be possible to draw useful analogies between the macroscopic variables that follow such a process and the position and velocity variables of the Brownian particle. Second, the velocity variable can be thought of as an additional degree of freedom that is relevant when driving is introduced to the overdamped Brownian motion. The extent of the relevance, for instance, can be estimated by comparing the marginal distribution of the underdamped motion, wherein the velocity is eliminated, with the distribution of the overdamped motion. In the special case of linearly driven Langevin equation, the correlation between the position and velocity degrees can become significant, depending on the driving~\cite{Awasthi2020}. Third, we can track almost analytically the underdamped Brownian motion even in the presence of driving, and hence the study may prove useful in finding some generic features of the oscillating state.
In an earlier work~\cite{Awasthi2020}, we found that any linearly driven Langevin equation can be solved exactly upon exploiting the underlying $SL_2$~symmetry. The exact solution could reveal the presence of oscillating states under certain conditions, could expose interesting properties of some observables and even could establish certain relations among them. This motivates us to extend the enquiry beyond linear driving and search for solutions, if possible exactly, in the presence of anharmonic perturbations. It is also natural to wonder whether we could capitalize on the symmetry even in the nonlinear case. We essentially ask the following central questions. Do driven Langevin systems with anharmonic perturbations reach an oscillating state asymptotically? Can we find the probability distributions of the oscillating state exactly? What are the conditions under which these oscillating states exist and are stable?
The layout of the current work is as follows. In the next section, we consider a generic class of driven nonlinear Langevin equations and formulate a perturbation scheme wherein the underlying $SL_2$~symmetry is not only manifest but also can be exploited to obtain the corresponding asymptotic distributions to any order. In Sec.~\ref{stability}, we explore the conditions under which the driven systems reach an oscillating state and can remain stable under perturbations. Finally, we summarize and conclude briefly with comments and remarks in Sec.~\ref{conc}.
\section{Driven underdamped Brownian particle: Asymptotic distribution}\label{Bparticle}
We will assume that the dynamics of a Brownian particle, when subjected to periodic driving, is governed by the Langevin equation of an underdamped Brownian motion with time-dependent parameters that are $T$-periodic. If we view the Langevin dynamics as an effective description, obtained upon eliminating the bath degrees of freedom, then it is reasonable to assume that both the viscous coefficient and the noise strength will have to vary with the same period. Though there is no compelling reason for the external potential to vary commensurably, we nevertheless will restrict all time-dependent modulations to the same period. In this section, we shall obtain the asymptotic distribution of the periodically driven underdamped Brownian particle, by perturbatively treating the nonlinear part of the external potential, while accounting for the periodic time dependence nonperturbatively.
\subsection{Driven nonlinear Langevin dynamics}\label{Langevin-dynamics}
We shall consider the stochastic dynamics of the position~$X_t$ and velocity~$V_t$ of a driven Brownian particle, governed by the following set of equations,
\begin{align}\label{stoc-dyn}
\dot{X}_t& = V_t ~,\nonumber \\
\dot{V}_t& = -\gamma V_t + f \left( X_t, \lambda \right) + \eta(t)~,
\end{align}
where the viscous coefficient~$\gamma$, the set of parameters~$\lambda$ of the external force~$f$ and the Gaussian noise~$\eta$ have $T$-periodic time dependence. The noise is assumed to have zero mean and nonzero variance~${\langle \eta(t) \eta(t') \rangle_{\eta}= 2D(t) \delta(t-t')}$, where the time-dependent diffusion coefficient~$D$ is $T$-periodic. The external force~$f$ may also contain nonlinear terms in~$X_t$.
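For orientation only (this sketch is ours, not part of the analysis), Eq.~(1) with a purely harmonic force $f=-k(t)X_t$ can be integrated with the Euler--Maruyama scheme; the function names and the specific form of the drive below are our assumptions.

```python
import numpy as np

def simulate(gamma, k, D, T_total=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of Eq. (1) with f(x, t) = -k(t) * x.
    gamma, k and D are callables of time, so a T-periodic drive is obtained
    simply by passing T-periodic functions."""
    rng = np.random.default_rng(seed)
    n = int(T_total / dt)
    x = np.empty(n)
    v = np.empty(n)
    x[0] = v[0] = 0.0
    for i in range(1, n):
        t = (i - 1) * dt
        # Gaussian increment with variance 2*D(t)*dt, from <eta(t)eta(t')> = 2D(t)delta(t-t')
        xi = rng.normal(0.0, np.sqrt(2.0 * D(t) * dt))
        x[i] = x[i - 1] + v[i - 1] * dt
        v[i] = v[i - 1] + (-gamma(t) * v[i - 1] - k(t) * x[i - 1]) * dt + xi
    return x, v
```

With constant parameters the long-time velocity variance should approach the equilibrium value $D/\gamma$, a useful sanity check before switching on $T$-periodic modulations such as $k(t)=k_0(1+a\sin\Omega t)$.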
The associated Fokker-Planck (FP) equation for the probability distribution~$P(x,v,t)$ of the above nonlinear Langevin dynamics is given by
\begin{equation}\label{FP-eqn}
\frac{\partial}{\partial t}P(x,v,t) =\mathcal{L}(x,v, g(t)) P(x,v,t)~,
\end{equation}
where~$g$ denotes all the parameters, including~$\gamma$ and~$D$, and the FP operator~$\mathcal{L}$ is defined as
\begin{equation}\label{FP-op}
\mathcal{L}(x,v, g) := -\frac{\partial}{\partial x}v - \frac{\partial}{\partial v}\left[ -\gamma v + f(x,\lambda) \right] + D \frac{\partial^{2}}{\partial v^{2}} ~.
\end{equation}
It should be remarked that though it is common to assume the form of the external force to be~${f=f(x,\lambda)}$, the analysis that we shall employ to obtain the asymptotic distribution is oblivious to this restriction and in fact holds equally for~${f=f(x,v,\lambda)}$. Essentially, it is sufficient to assume that~$f$ is Taylor expandable in the $x,v$~variables and thus can be written as~${f = - \sum_{n,m} \lambda_{n,m} x^n v^m}$, where~${n,m}$ are non-negative integers, and that the time dependence of the external force is implicitly provided by the parameters~${\lambda_{n,m}}$. We will set the parameter~$\lambda_{0,1}$ to zero, since the linear term~$\gamma v$ is already included, and denote the linear coefficient~$\lambda_{1,0}$ as~$k$, for notational familiarity.
The asymptotic distribution of the FP equation for periodically driven Brownian particle can be expected to be $T$-periodic under certain conditions. In other words, the driven stochastic system can exist in a state, that we refer to as an oscillating state, wherein various relevant observables exhibit periodic properties. Suppose we consider the domain~$\mathcal{D}_{eq}$ in the parameter space~$\gamma, D, k$ and~$\lambda_{n,m}$ which is defined as the set of all points for which the asymptotic distribution of the FP equation, when the parameters are time independent, is an equilibrium distribution. Let us consider the domain~$\mathcal{D}_{eq}$ to be simply connected and the equilibrium state therein to be continuous. If we now drive the parameters of the FP equation in this domain continuously, with a period much larger than the corresponding equilibrium relaxation timescales, then the asymptotic state essentially will be a periodic trajectory in the equilibrium state space. As we decrease the period, the trajectories of course will not be restricted to the equilibrium state space but are likely to be continuous in some extended space, wherein the closed trajectories presumably can characterize the oscillating states of a driven system. While it is far from obvious what this extended space is or how to characterize the oscillating states, we shall see that it may still be possible to specify some of the conditions under which these states can exist.
The asymptotic distribution is of course obtained by taking the large time limit of the solution of the FP equation~\cite{Awasthi2020}. In periodically driven systems, we could choose the prescription for approaching the limit, for instance, by first decomposing the time,~${t = NT + \tau}$, in terms of an integer number~$N$ of periods and a remainder~$\tau$ such that~$0 \le \tau < T$, and then taking the~$N \to \infty$ limit. Thus the large time limit of the solution
\begin{equation}\label{form-sol}
P(x, v, t) = \mathcal{U}(x, v; t) P(x, v, 0) ~,
\end{equation}
of the FP equation~\eqref{FP-eqn}, results in a $T$-periodic asymptotic distribution
\begin{equation}\label{asymp-sol}
P_{os}(x, v, \tau) = \mathcal{U}(x, v; \tau) P_{\infty}(x, v, 0)~,
\end{equation}
provided the time-independent asymptotic distribution~$P_{\infty}(x, v, 0)$ exists. In the above equation,~$\mathcal{U}$ denotes the evolution operator, which is formally expressed as
\begin{equation}\label{ker}
\mathcal{U}(x, v; t) := \mathcal{T}\big\lbrace e^{\int_{0}^{t} \mathcal{L}(x, v, g(t)) dt} \big\rbrace ~,
\end{equation}
where~$\mathcal{T}$ indicates time ordering, and the time-independent distribution
\begin{equation}\label{largeN-asym}
P_{\infty}(x, v, 0) := \lim_{N\to\infty} \left[ \mathcal{U}(x, v; T) \right]^N P(x, v, 0)~,
\end{equation}
where~$P(x, v, 0)$ is the initial distribution. The existence and uniqueness of~${P_{\infty}(x, v, 0)}$ is dictated by the spectrum of the monodromy operator~${\mathcal{U}(x, v; T)}$. If a unique distribution exists, then it necessarily satisfies the eigenvalue equation
\begin{equation}\label{t-indep-asym}
\mathcal{U}(x, v; T)P_{\infty}(x, v, 0)=P_{\infty}(x, v, 0)~.
\end{equation}
When the parameters are chosen from the domain~$\mathcal{D}_{eq}$ with their values kept fixed in time, then the above equation is equivalent to~${\mathcal{L} P_{\infty} =0}$ and the asymptotic distribution is an equilibrium distribution~${P_{eq}(x, v)}$. Hence we could equivalently define the domain~$\mathcal{D}_{eq}$ as the set of all~$g$ for which the FP operator~$\mathcal{L}(x,v, g)$ is negative semi-definite, with a unique normalizable eigenfunction corresponding to the zero eigenvalue, and with a nonzero gap between zero and the real part of any other eigenvalue. These properties of the FP operator are also required for the equilibrium state to avoid any singular behavior in~$\mathcal{D}_{eq}$. Suppose we now drive the parameters~$g=g(t)$ piecewise continuously, though not necessarily respecting the negative semi-definite property of~$\mathcal{L}(x,v, g(t))$, but necessarily ensuring that the modulus of none of the eigenvalues of~$\mathcal{U}(x, v; T)$ exceeds unity. Then under such conditions the time-independent asymptotic distribution~${P_{\infty}(x, v, 0)}$ can exist, and hence the system can reach an oscillating state described by the distribution~$P_{os}(x, v, \tau)$. In some sense, the domain~$\mathcal{D}_{os}$ of the parameter space for which the oscillating states exist can be larger than the domain~$\mathcal{D}_{eq}$ for which the equilibrium states exist. This may even suggest that the stability of the states of macroscopic systems can presumably be controlled and manipulated by periodic driving.
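The limit in Eq.~\eqref{largeN-asym} is formally a power iteration of the monodromy operator. A minimal finite-dimensional caricature, with the operator replaced by an illustrative column-stochastic matrix whose spectrum obeys the stated conditions (a simple eigenvalue~$1$, all others strictly inside the unit circle), shows how the iteration converges to the fixed point of Eq.~\eqref{t-indep-asym} irrespective of the initial distribution.

```python
import numpy as np

# Toy stand-in for U(x,v;T): column-stochastic, eigenvalues {1, 0.7}
U = np.array([[0.9, 0.2],
              [0.1, 0.8]])

p = np.array([1.0, 0.0])     # arbitrary initial distribution
for _ in range(100):         # P_inf = lim_{N -> infinity} U^N P(0)
    p = U @ p

# The fixed point solves U p = p, the analogue of Eq. (t-indep-asym)
w, V = np.linalg.eig(U)
p_exact = V[:, np.argmax(w.real)].real
p_exact /= p_exact.sum()     # normalize like a probability distribution
```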
In order to comprehend the role of driving in the oscillating states, it is natural to ask the following questions. What are the solutions of the FP equation for a generic driving~$g(t)$? Do these solutions have a large time limit? Namely, does the system reach an oscillating state independent of the initial distribution? Can we specify the necessary conditions that guarantee an oscillating state without having to solve the FP equation explicitly? In other words, can we find a criterion that tells us whether or not a given~$g(t)$ is in the domain~$\mathcal{D}_{os}$? In the case of a linear external force, using certain techniques from representation theory, we could find the oscillating state almost exactly and establish rigorously the necessary conditions for its existence~\cite{Awasthi2020}. We shall now show, even in cases where the external forces contain nonlinear terms, that similar techniques can be employed to answer these questions, provided we deal with the nonlinearity perturbatively.
\subsection{Perturbative analysis to all orders}\label{pert-theory}
Following the standard perturbative analysis, we can decompose the FP operator,
\begin{equation}
\mathcal{L}=\mathcal{L}_0+ \epsilon \mathcal{L}_p~,
\end{equation}
into a solvable part that includes only the linear force terms, given by
\begin{equation}
\mathcal{L}_0 = \mathcal{L}_0 (x, v, g_0) = -\frac{\partial}{\partial x}v + \frac{\partial}{\partial v} \left[ \gamma v + kx \right] + D \frac{\partial^{2}}{\partial v^{2}}~,
\end{equation}
where~$g_0$ denotes~$\gamma, D$ and~$k$, and a perturbative part
\begin{equation}
\mathcal{L}_p = \mathcal{L}_p (x, v, \lambda) := \frac{\partial}{\partial v} \mathcal{O}_p~,
\end{equation}
where~$\mathcal{O}_p = \mathcal{O}_p(x, v, \lambda) = -kx - f(x, v, \lambda)$ consists of only the nonlinear terms of the force~$f$.
The dimensionless parameter~$\epsilon$ is introduced to conveniently track the order of nonlinearity. Expanding the probability distribution~$P(x, v, t)$, in the FP equation~\eqref{FP-eqn}, as the following series in the perturbative parameter~$\epsilon$,
\begin{equation}
P(x, v, t) := \sum_{n=0}^{\infty} \epsilon^n P^{(n)}(x, v, t)~,
\end{equation}
leads to a sequence of dynamical equations for~$P^{(n)}$.
The zeroth-order equation is given by
\begin{equation}\label{zero-FP-eq}
\frac{\partial}{\partial t}P^{(0)}(x,v,t) =\mathcal{L}_0(x,v, g_0(t)) P^{(0)}(x,v,t)~,
\end{equation}
which is the FP equation of a driven Brownian particle in harmonic potential~\cite{Awasthi2020}. The other higher-order corrections~$P^{(n)}$ are governed by the recursive equations,
\begin{equation}\label{nth-FP-eq}
\frac{\partial}{\partial t}P^{(n)} =\mathcal{L}_0 P^{(n)} + \mathcal{L}_p P^{(n\!-\!1)}~,
\end{equation}
for any integer~$n \ge 1$. The dependence on the coordinates~$x, v, t$ and on other parameters is not explicitly exhibited here for notational simplicity.
We will assume~$P(x, v, t)$ is normalized to any given order of~$\epsilon$. In other words,~$P^{(0)}(x, v, t)$ is normalized, and~$\int dx dv P^{(n)}(x, v, t) =0$ for all~$n \ge 1$. We denote the asymptotic perturbative components of~$P(x, v, t)$ as~$P^{(n)}_{\infty}(x, v, t)$, which are obtained by taking the large time limit of the solutions of Eqs.~\eqref{zero-FP-eq} and~\eqref{nth-FP-eq}. We could choose the initial conditions for these equations such that the initial zeroth-order distribution~${P^{(0)}(x, v, 0)= P(x,v,0)}$, and hence all the initial higher-order corrections~${P^{(n)}(x, v, 0)=0}$, for~${n \ge 1}$.
Let us consider the driving parameters such that the necessary conditions required for the solution of Eq.~\eqref{zero-FP-eq} to asymptotically approach a well-defined limit are satisfied. The asymptotic distribution~$P^{(0)}_{\infty}(x,v,t)$ is in fact a Gaussian distribution with zero mean and a $T$-periodic covariance matrix
\begin{equation}\label{cov-mat}
\Sigma(t) :=
\begin{bmatrix}
\langle x^2 \rangle_0 & \langle xv \rangle_0 \\
\langle vx \rangle_0 &\langle v^2 \rangle_0
\end{bmatrix}
\equiv
\begin{bmatrix}
\widetilde{X}_{2,0}(t) & \widetilde{X}_{1,1}(t)\\
\widetilde{X}_{1,1}(t) & \widetilde{X}_{0,2}(t)
\end{bmatrix}~,
\end{equation}
whose matrix elements depend on the parameters~$g_0(t)$.
The methods employed to find the conditions for the existence of the asymptotic limit and to obtain the distribution~$P^{(0)}_{\infty}(x,v,t)$ are detailed in Ref.~\cite{Awasthi2020}. Since we shall extend these methods to obtain the higher-order asymptotic functions~${P^{(n)}_{\infty}(x, v, t)}$, for~${n \ge 1}$, we will briefly spell them out as and when required.
The algorithm that we shall employ to evaluate the asymptotic $n$-th order correction is as follows. We first substitute~$P^{(n-1)}$ in Eq.~\eqref{nth-FP-eq} with its asymptotic function~${P^{(n-1)}_{\infty}}$, which we assume exists, and then find the solution of the modified equation. This solution is valid only for times that are large compared to the time required for the $(n-1)$th-order distribution to reach its asymptotic limit. In other words, we decompose the time~${t= (N_0 + N_1)T + \tau}$ and solve Eq.~\eqref{nth-FP-eq} for~${t= N_1T + \tau}$ with the new initial condition given at time~$N_0T$, where~$N_0$ is chosen such that the substitution of~$P^{(n-1)}$ with~${P^{(n-1)}_{\infty}}$ is justifiable, and then take the limit~${N_1 \to \infty}$. We can then show that the asymptotic limit~${P^{(n)}_{\infty}(x, v, t)}$ of the solution to the modified Eq.~\eqref{nth-FP-eq} exists when a certain condition holds for any arbitrary initial function~${P^{(n)}(x, v, N_0T)}$. We shall later discuss and analyse this specific condition in detail; it is in fact independent of~$n$. Thus when this condition holds and the zeroth-order distribution~$P^{(0)}_{\infty}(x,v,t)$ exists, we establish iteratively the existence of~${P^{(n)}_{\infty}(x, v, t)}$ for any~$n \ge 1$.
The calculations for determining explicitly the $n$-th order correction are similar to those for determining the first-order correction. We begin by rewriting the first-order correction
\begin{equation}\label{P1-A1}
P^{(1)}(x,v,t) = - \left( A^{(1)} - \langle A^{(1)} \rangle_0 \right)P^{(0)}_{\infty}(x,v,t)~,
\end{equation}
where~$A^{(1)}= A^{(1)}(x,v,t)$ is a function yet to be determined and~$\langle A^{(1)} \rangle_0$ is the average of~$A^{(1)}$ with respect to~$P^{(0)}_{\infty}(x,v,t)$. If the perturbation~$\mathcal{O}_p(x, v, \lambda)$ is a polynomial function in~$x$ and~$v$ then~$A^{(1)}(x,v,t)$ will also be a polynomial in these variables. Owing to the underlying $SL_2$~symmetry of the unperturbed FP equation, it is beneficial to choose the basis functions
\begin{equation}
\mathcal{O}^r_L := x^{L-r} v^r~,
\end{equation}
where~$L$ is a positive integer and denotes the degree of homogeneity and~$r$, for a given~$L$, runs over the integers from~$0$ to~$L$. Let us consider the perturbation~$\mathcal{O}_p$ to be a nonlinear polynomial of degree~$L_p$, and hence can be written in the form
\begin{equation}\label{Op-poly}
\mathcal{O}_p(x, v, \lambda) = \sum_{L=2}^{L_p} \sum_{r=0}^L \lambda^L_r \mathcal{O}^r_L~,
\end{equation}
where the parameters~$\lambda^L_r$, in case they depend on time, are~$T$-periodic. We can represent the function~$A^{(1)}$ in this basis as
\begin{equation}\label{A1-poly}
A^{(1)}(x, v, t) = \sum_{L=1}^{L_1} \sum_{r=0}^L a^L_r(t) \mathcal{O}^r_L~,
\end{equation}
where the coefficients~$a^L_r(t)$ are time dependent and~$L_1$ is a finite positive integer such that~${L_1 \gg L_p}$. We shall henceforth refer to the coefficients of~$\mathcal{O}^r_L$ in an expansion as level-$L$ coefficients.
We now proceed to determine the coefficients~$a^L_r(t)$. Substituting Eqs.~\eqref{P1-A1},~\eqref{Op-poly} and~\eqref{A1-poly} in the recursive equation~\eqref{nth-FP-eq}, for~$n=1$, straightforwardly leads to a polynomial equation. Then equating the coefficients of each monomial in the polynomial equation results in an ordinary differential equation for the variables~$a^L_r$. These dynamical equations can be written in the form
\begin{equation}\label{a-dyn}
\frac{d}{dt} a^L_r = H^L_r + N^L_r + R^L_r + S^L_r ~,
\end{equation}
where the first term~$H^L_r$ contains only the level-$L$ coefficients of~$A^{(1)}$ and is given by
\begin{equation}\label{hom-def}
H^L_r = - (L+1-r) a^L_{r-1} + r \gamma_p a^L_r + (r+1) k_p a^L_{r+1} ~,
\end{equation}
and involves the $T$-periodic parameters
\begin{eqnarray}\label{gp-kp}
\gamma_p := \gamma - 2 D (\Sigma^{-1})_{22}~, \nonumber \\
k_p := k - 2 D (\Sigma^{-1})_{12}~;
\end{eqnarray}
the second term~$N^L_r$ contains only a level-$(L+2)$ coefficient of~$A^{(1)}$ and is given by
\begin{eqnarray}\label{nhom-def}
N^L_r = \begin{cases}
D (r+1)(r+2) a^{L+2}_{r+2}~, \text{ for } {1 \le L \le L_1 -2}~, \\
0~, \text{ for } L \ge L_1-1~;
\end{cases}
\end{eqnarray}
the third term~$R^L_r$ contains only a level-$(L+1)$ coefficient of the perturbation~$\mathcal{O}_p$ and is given by
\begin{eqnarray}\label{pert1-def}
R^L_r = \begin{cases}
- (r+1) \lambda^{L+1}_{r+1}~, \text{ for } {1 \le L \le L_p -1}~, \\
0~, \text{ for } L \ge L_p~;
\end{cases}
\end{eqnarray}
while the fourth term contains only level-$(L-1)$ coefficients of the perturbation~$\mathcal{O}_p$ and is given by
\begin{eqnarray}\label{pert2-def}
S^L_r = \begin{cases}
(\Sigma^{-1})_{12} \lambda^{L-1}_{r} + (\Sigma^{-1})_{22} \lambda^{L-1}_{r-1}~, \text{ for } {3 \le L \le L_p +1}~, \\
0~, \text{ for either } { L \ge L_p +2}~\text{ or } {1 \le L \le 2}~.
\end{cases}
\end{eqnarray}
Furthermore, the constant term of the polynomial equation leads to an additional equation
\begin{equation}
\frac{d}{dt} \langle A^{(1)} \rangle_0 + 2 D a^2_2 =0~,
\end{equation}
which, however, is not an independent equation. Note that the dynamical equations~\eqref{a-dyn} are such that the level-$L$ coefficients~$a^L_r$ are all linearly coupled amongst themselves. They further contain inhomogeneous terms involving both the level-$(L+2)$ coefficients~$a^{L+2}_{r+2}$ and the given interaction terms~$R^L_r$ and~$S^L_r$. The interaction terms, though, are absent from the dynamical equations with~${L \ge L_p + 2}$. Hence it is evident that we need to solve for~$a^{L+2}_{r+2}$ in order to solve for~$a^L_r$.
In order to solve Eqs.~\eqref{a-dyn} which are inhomogeneous, it is useful to first discuss the symmetries and solutions of the corresponding homogeneous equations obtained from Eqs.~\eqref{a-dyn} by dropping $N^L_r $, $ R^L_r $ and $ S^L_r $ terms. The homogeneous equations can be rewritten as
\begin{equation}\label{a-hom}
\frac{d}{dt} {\bf a}_L= \left[ - \mathbf{J}^{-}_{L} +\frac{\gamma_p }{2} \left( L I_L +\mathbf{J}_{L} \right) + k_p \mathbf{J}^{+}_{L}\right]^T {\bf a}_L~,
\end{equation}
where~${\bf a}_L$ is a $(L+1)$-component vector whose elements are~$a^L_r$, namely~${\bf a}_L^T := \left( a^L_0, a^L_1, \cdots, a^L_L \right)$, the superscript~$T$ on a vector or a matrix denotes their transpose, the matrix~$I_L$ is the identity matrix of dimension~$(L+1)$, and~$\lbrace \mathbf{J}^{\pm}_{L},\mathbf{J}_{L} \rbrace$ are matrices of the same dimension with matrix elements
\begin{eqnarray}
\left( \mathbf{J}^{+}_{L} \right)_{r,s}&=& r \delta_{r,s\!+\!1} ~,\nonumber \\
\left( \mathbf{J}^{-}_{L} \right)_{r,s}&=& (L-r) \delta_{r,s\!-\!1}~, \nonumber \\
\left( \mathbf{J}_{L} \right)_{r,s}&=& (2r-L) \delta_{r,s}~,
\end{eqnarray}
and whose indices~${r,s}$ run over all the integers from~$0$ to~$L$. We can easily verify that these matrices satisfy the commutation relations of the $sl_2$~algebra, namely
\begin{equation}\label{sl2alg}
\left[ \mathbf{J}_{L} , \mathbf{J}^{\pm}_{L} \right] = \pm 2\mathbf{J}^{\pm}_{L} ~,\quad \left[\mathbf{J}^{+}_{L}, \mathbf{J}^{-}_{L} \right] = \mathbf{J}_{L} ~.
\end{equation}
In fact these matrices form an irreducible representation of the generators of the group~$SL_2(R)$.
This dynamical symmetry exhibited by the homogeneous equations is indeed induced by the underlying $SL_2$~symmetry of the unperturbed FP operator.
It may be remarked in passing that Eq.~\eqref{a-hom} can be mapped to a transposed version by a similarity transformation induced by an anti-diagonal matrix~${\mathcal S}$ with matrix elements~${\mathcal S}_{r,s} = \delta_{r+s,L} r! (L-r)!/L!$. Under this transpose map $\mathbf{J}^{+}_{L}$ and $\mathbf{J}^{-}_{L}$ reverse their role in the algebra as is evident from the relations ${\mathcal S}^{-1} \mathbf{J}_{L}{\mathcal S} = -\left( \mathbf{J}_{L} \right)^T$ and
${\mathcal S}^{-1} \mathbf{J}^{\pm}_{L}{\mathcal S} = \left( \mathbf{J}^{\pm}_{L} \right)^T$.
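As a numerical consistency check (no new content, just a verification), one can construct $\mathbf{J}^{\pm}_{L}$ and $\mathbf{J}_{L}$ for an arbitrary~$L$, verify the commutation relations~\eqref{sl2alg}, and confirm that the matrix in Eq.~\eqref{a-hom} reproduces the coefficients~$H^L_r$ of Eq.~\eqref{hom-def}; the values of $L$, $\gamma_p$ and $k_p$ below are arbitrary.

```python
import numpy as np

def sl2_matrices(L):
    """J+, J-, J in the (L+1)-dimensional representation."""
    r = np.arange(L + 1)
    Jp = np.zeros((L + 1, L + 1))
    Jm = np.zeros((L + 1, L + 1))
    Jp[r[1:], r[1:] - 1] = r[1:]          # (J+)_{r,s} = r delta_{r,s+1}
    Jm[r[:-1], r[:-1] + 1] = L - r[:-1]   # (J-)_{r,s} = (L-r) delta_{r,s-1}
    J = np.diag(2 * r - L)                # (J)_{r,s} = (2r-L) delta_{r,s}
    return Jp, Jm, J

L, gamma_p, k_p = 4, 0.7, 1.3             # arbitrary test values
Jp, Jm, J = sl2_matrices(L)

# sl2 commutation relations: [J, J+-] = +-2 J+-, [J+, J-] = J
comm = lambda X, Y: X @ Y - Y @ X
ok_alg = (np.allclose(comm(J, Jp), 2 * Jp)
          and np.allclose(comm(J, Jm), -2 * Jm)
          and np.allclose(comm(Jp, Jm), J))

# Matrix of Eq. (a-hom) ...
A = (-Jm + 0.5 * gamma_p * (L * np.eye(L + 1) + J) + k_p * Jp).T

# ... versus the coefficients read off from H^L_r in Eq. (hom-def):
# da^L_r/dt = -(L+1-r) a^L_{r-1} + r gamma_p a^L_r + (r+1) k_p a^L_{r+1}
M = np.zeros((L + 1, L + 1))
for r in range(L + 1):
    if r >= 1:
        M[r, r - 1] = -(L + 1 - r)
    M[r, r] = r * gamma_p
    if r <= L - 1:
        M[r, r + 1] = (r + 1) * k_p
ok_match = np.allclose(A, M)
```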
The explicit $L$-dependent term in Eq.~\eqref{a-hom} can be removed by writing the vector~${\bf a}_L$ in terms of another vector~${\bf b}_L = {\bf a}_L \exp(-L\Gamma_p/2)$, where~$\Gamma_p(t)= \int_0^t dt' \gamma_p(t')$, and thus obtain the equation
\begin{equation}\label{b-hom}
\frac{d}{dt} {\bf b}_L= \left[ - \mathbf{J}^{-}_{L} +\frac{\gamma_p }{2} \mathbf{J}_{L} + k_p \mathbf{J}^{+}_{L}\right]^T {\bf b}_L~,
\end{equation}
which has a form independent of any specific $sl_2$~representation. This form makes it amenable to determine its solutions by exploiting the theorem which states that any irreducible representation of~$sl_2$ is a symmetric power of the standard representation~\cite{Fulton2004}. In other words, we obtain the solutions of Eq.~\eqref{b-hom} by taking the symmetrized tensor products of the solutions of its corresponding equation in the standard representation, which is
\begin{equation}\label{b-hom1}
\frac{d}{dt} {\bf b}_1= \left[ - \mathbf{J}^{-}_{1} +\frac{\gamma_p }{2} \mathbf{J}_{1} + k_p \mathbf{J}^{+}_{1}\right]^T {\bf b}_1~,
\end{equation}
or, when written in terms of components of the vector~${\bf b}_1^T := (b_0, b_1)$, is
\begin{align}\label{standard-b}
\frac{d}{dt}
\begin{pmatrix}
b_0\\
b_1 \\
\end{pmatrix} =
\begin{pmatrix}
- \frac{1}{2}\gamma_p & k_p \\
-1 & \frac{1}{2}\gamma_p
\end{pmatrix}
\begin{pmatrix}
b_0 \\
b_1 \\
\end{pmatrix} ~.
\end{align}
Notice that the component~$b_1$ satisfies the Hill equation
\begin{equation}\label{hill-eq}
\frac{d^2}{dt^2} b_1 + \nu_p b_1 =0~,
\end{equation}
where the $T$-periodic parameter
\begin{equation}\label{nu-p}
\nu_p = k_p - \frac{1}{2} \dot{\gamma}_p - \frac{1}{4} \gamma_p^2~.
\end{equation}
Hence the solutions of Eqs.~\eqref{b-hom1}, \eqref{b-hom} and \eqref{a-hom} can be expressed solely in terms of the two independent solutions of the Hill equation, denoted~$u(t)$ and~$w(t)$, which can be chosen to be pseudo-periodic with Floquet exponents~$\mu_p$ and~$-\mu_p$, respectively, namely~${u(t+T)=u(t) \exp(\mu_p T)}$ and~${w(t+T)=w(t) \exp(-\mu_p T)}$. The pseudo-periodic solutions of Eq.~\eqref{b-hom1} are
\begin{align}\label{sol-b1}
\mathbf{b}_{1}^{(0)} =
\begin{pmatrix}
\frac{1}{2} \gamma_p u-\dot{u} \\
u\\
\end{pmatrix} ~,~
\mathbf{b}_{1}^{(1)} =
\begin{pmatrix}
\frac{1}{2} \gamma_p w - \dot{w} \\
w\\
\end{pmatrix}~,
\end{align}
while those of the homogeneous equation~\eqref{a-hom} are
\begin{equation}\label{sol-bL}
\mathbf{a}_{L}^{(r)} = e^{\frac{L}{2} \Gamma_p} \text{Sym} \left[ \left( \mathbf{b}_{1}^{(0)} \right)^{\otimes(L-r)} \otimes \left( \mathbf{b}_{1}^{(1)} \right)^{\otimes r} \right]~,
\end{equation}
where~$r$ runs over the integers from~$0$ to~$L$ and $\text{Sym}$ denotes the symmetrization of the tensor products of the vectors in the bracket. The solutions $ \mathbf{a}_{L}^{(r)}$ are pseudo-periodic with corresponding Floquet exponents
\begin{equation}\label{Fexp-L}
\mu_L^{(r)} = \frac{1}{2}L\overline{\gamma}_p + (L-2r)\mu_p~,
\end{equation}
where~$\overline{\gamma}_p$ is the time average of~$\gamma_p$ over a period~$T$, and they vanish in the large-time limit provided the modulus of the real part of the fundamental Floquet exponent satisfies
\begin{equation}\label{cond-pert}
\left| \text{Re} \left( \mu_p \right) \right| < -\frac{1}{2} \overline{\gamma}_p~.
\end{equation}
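The reduction of the system~\eqref{standard-b} to the Hill equation~\eqref{hill-eq} can be cross-checked numerically by integrating both with matched initial data. The smooth $T$-periodic $\gamma_p$ and $k_p$ below are illustrative stand-ins; the true ones, Eq.~\eqref{gp-kp}, involve the covariance matrix.

```python
import numpy as np

T = 1.0
w = 2.0 * np.pi / T
gamma_p  = lambda t: 0.4 + 0.2 * np.cos(w * t)     # illustrative choices
dgamma_p = lambda t: -0.2 * w * np.sin(w * t)
k_p      = lambda t: 3.0 + 0.5 * np.sin(w * t)
# nu_p of Eq. (nu-p)
nu_p = lambda t: k_p(t) - 0.5 * dgamma_p(t) - 0.25 * gamma_p(t) ** 2

def rk4(f, y0, ts):
    """Classical Runge-Kutta integration of y' = f(t, y)."""
    y = np.array(y0, float)
    out = [y.copy()]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = f(t0, y); k2 = f(t0 + h/2, y + h/2 * k1)
        k3 = f(t0 + h/2, y + h/2 * k2); k4 = f(t1, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        out.append(y.copy())
    return np.array(out)

ts = np.linspace(0.0, 3.0 * T, 3001)

# Eq. (standard-b): b0' = -(gamma_p/2) b0 + k_p b1,  b1' = -b0 + (gamma_p/2) b1
f_sys  = lambda t, y: np.array([-0.5 * gamma_p(t) * y[0] + k_p(t) * y[1],
                                -y[0] + 0.5 * gamma_p(t) * y[1]])
# Eq. (hill-eq): b1'' + nu_p b1 = 0, written as a first-order system
f_hill = lambda t, y: np.array([y[1], -nu_p(t) * y[0]])

b0_init, b1_init = 0.3, 1.0
sol_sys  = rk4(f_sys, [b0_init, b1_init], ts)
# matched initial slope: b1'(0) = -b0(0) + gamma_p(0)/2 * b1(0)
sol_hill = rk4(f_hill, [b1_init, -b0_init + 0.5 * gamma_p(0.0) * b1_init], ts)

max_dev = np.max(np.abs(sol_sys[:, 1] - sol_hill[:, 0]))
```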
Henceforth we will assume that the parameters~${k, \gamma}$ and~$D$ are chosen such that the condition~\eqref{cond-pert} holds. We can now deduce the large-time solutions~$a^L_r(t)$ of Eq.~\eqref{a-dyn} for each~$L$ sequentially, in reverse order starting from~$L=L_1$ down to~$L=1$. For~${L_p+2 \le L \le L_1}$, the set of equations~\eqref{a-dyn} is homogeneous and hence the corresponding large-time solutions~$a^L_r(t)$ vanish for any arbitrary initial conditions. The equations~\eqref{a-dyn} for~$L=L_p+1$ and~$L=L_p$, though inhomogeneous, contain only the given $T$-periodic inhomogeneous terms~${h^L_r =R^L_r + S^L_r}$. The solutions of these equations can formally be written as
\begin{equation}\label{a-L-soln}
\mathbf{a}_L(t)
= K_L(t,0) \mathbf{a}_L(0)
+ \int_{0}^{t} ds K_L(t,s)\mathbf{h}_L(s)~,
\end{equation}
where for any~$L$ the vector~$\mathbf{h}_L$ is defined by the components~$ h^L_r$, and the matrix
\begin{equation}
K_L(t,s)= \Phi_L(t)\Phi^{-1}_L(s)
\end{equation}
is defined by the fundamental matrix~$\Phi_L(t)$ of Eq.~\eqref{a-hom} constructed with~$ \mathbf{a}_{L}^{(r)}(t)$ as column vectors. Now to determine the large-time limit of the solutions, we use the following two properties of the matrix~$K_L(t,s)$. First, the matrix can be decomposed as~${K_L(t,s)= K_L(t,t')K_L(t',s)}$ for any~$t'$, which is a consequence of its definition. Second, it is invariant under discrete time translation by a period, namely ${K_L(t+T,s+T)= K_L(t,s)}$, which is due to the pseudo-periodic nature of the fundamental matrix,~${\Phi_L(t+T) = \Phi_L(t) \Lambda_L}$, where~$\Lambda_L$ is a diagonal matrix with elements~$\exp(\mu_L^{(r)}T)$. Using these properties, for any time~$t= NT + \tau$ the first term on the right hand side of Eq.~\eqref{a-L-soln} can be written as
\begin{equation}\label{term1}
K_L(\tau,0) \left[ K_L(T,0)\right]^N \mathbf{a}_L(0)~,
\end{equation}
while for any $T$-periodic function~$\mathbf{h}_L(t)$ the second term can be written as
\begin{equation}\label{term2}
K_L(\tau,0) \left( 1 \!-\! K_L(T,0)^N \right) Z_L(T;\mathbf{h}_L) + Y_L(\tau;\mathbf{h}_L)~,
\end{equation}
where the vectors~$Y_L(\tau;\mathbf{h}_L)$ and~$Z_L(T;\mathbf{h}_L)$ are independent of~$N$ and are defined for any given $T$-periodic vector~$\mathbf{f}_L(t)$ as
\begin{eqnarray}\label{termYZ}
Y_L(\tau;\mathbf{f}_L ) &=& \int_{0}^{\tau} \!ds K_L(\tau,s)\mathbf{f}_L(s)~,\nonumber \\
Z_L(T; \mathbf{f}_L) &=& \left({ 1- K_L(T,0) }\right)^{-1}Y_L(T;\mathbf{f}_L )~.
\end{eqnarray}
Note that~$K_L(T,0) = \Phi_L(0) \Lambda_L \Phi^{-1}_L(0)$, and hence its eigenvalues are the same as those of~$\Lambda_L$, namely~$\exp(\mu_L^{(r)}T)$. Now when the condition~\eqref{cond-pert} holds, not only is the matrix~${1- K_L(T,0)}$ ensured to be nonsingular, so that~$Z_L(T;\mathbf{h}_L)$ is well defined, but also the matrix~$K_L(T,0)^N$ approaches zero in the limit~${N \to \infty}$. Consequently, a well-defined asymptotic limit of the solution that is independent of the initial conditions exists, and we obtain the asymptotic solution as
\begin{equation}\label{asy-sol-L}
\mathbf{a}_L(\tau) = K_L(\tau,0) Z_L(T;\mathbf{h}_L )+Y_L(\tau;\mathbf{h}_L )~,
\end{equation}
for~$L=L_p+1$ and~$L=L_p$. It is straightforward to verify that ${\mathbf{a}_L(\tau+T)= \mathbf{a}_L(\tau)}$. This also implies that~$N^L_r(t)$ becomes $T$-periodic for~$L=L_p-1$ and~$L=L_p-2$. It should be emphasized that the asymptotic solution~\eqref{asy-sol-L} is $T$-periodic in spite of the pseudo-periodic nature of the homogeneous part of the solution~\eqref{a-L-soln}.
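The construction behind Eq.~\eqref{asy-sol-L} can be mimicked numerically for a toy two-component system~$\dot{\mathbf{a}} = A(t)\mathbf{a} + \mathbf{h}(t)$ with stable Floquet multipliers: build $K(T,0)$ and $Y(T;\mathbf{h})$ by integrating over a single period, form $Z$ as in Eq.~\eqref{termYZ}, and compare with a brute-force evolution over many periods. The $T$-periodic $A(t)$ and $\mathbf{h}(t)$ below are arbitrary illustrative choices.

```python
import numpy as np

T = 1.0
w = 2.0 * np.pi / T
A = lambda t: np.array([[-1.0 + 0.3 * np.cos(w * t),  1.0],
                        [-1.0, -1.0 + 0.3 * np.sin(w * t)]])
h = lambda t: np.array([np.cos(w * t), 0.5])

n = 2000                      # RK4 steps per period
dt = T / n

def rk4_step(f, t, y):
    k1 = f(t, y); k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2); k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

def over_one_period(y0, f):
    # coefficients are T-periodic, so every period can start at t = 0
    y = np.array(y0, float)
    for i in range(n):
        y = rk4_step(f, i * dt, y)
    return y

f_hom = lambda t, Y: A(t) @ Y          # also propagates 2x2 matrices
f_inh = lambda t, y: A(t) @ y + h(t)

K_T = over_one_period(np.eye(2), f_hom)      # K(T,0)
Y_T = over_one_period(np.zeros(2), f_inh)    # Y(T; h): zero initial data

# Eq. (termYZ): Z = (1 - K(T,0))^{-1} Y(T; h); Eq. (asy-sol-L) gives a(0) = Z
Z = np.linalg.solve(np.eye(2) - K_T, Y_T)

# Brute force: evolve arbitrary initial data over many periods
a = np.array([5.0, -3.0])
for _ in range(40):
    a = over_one_period(a, f_inh)
err = np.linalg.norm(a - Z)
multipliers = np.abs(np.linalg.eigvals(K_T))
```

The initial data is forgotten geometrically, at the rate set by the Floquet multipliers of $K(T,0)$.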
By hierarchically repeating the arguments that lead to Eq.~\eqref{asy-sol-L}, we further obtain the other $T$-periodic asymptotic solutions
\begin{equation}\label{asy-sol-L-2}
\mathbf{a}_L(\tau) = K_L(\tau,0) Z_L(T;\mathbf{h}_L+ \mathbf{n}_L )+Y_L(\tau;\mathbf{h}_L+\mathbf{n}_L )~,
\end{equation}
for $1 \le L \le L_p -1$, where the components of the vector~$\mathbf{n}_L$ are defined to be the asymptotic values of~$ N^L_r$.
We can of course determine the next-order correction~${P^{(2)}_{\infty}(x, v, t)}$ by going through exactly the same mathematical manipulations as performed earlier to determine~${P^{(1)}_{\infty}(x, v, t)}$, after replacing the nonlinear term~$\mathcal{O}_p$ with~$ ( \langle A^{(1)} \rangle_0 - A^{(1)} )\mathcal{O}_p$. Thus, proceeding iteratively, the correction term~${P^{(n)}_{\infty}(x, v, t)}$ to any order~$n$ can be obtained.
To summarize this section, we have shown that the asymptotic probability distribution of the periodically driven nonlinear FP equation~\eqref{FP-eqn} to any order of perturbation in anharmonicity exists and is $T$-periodic provided the unperturbed oscillating state exists and the condition~\eqref{cond-pert} holds. Furthermore all the coefficients of this $T$-periodic asymptotic distribution to any order can be determined exactly in terms of the solutions of the Hill equation~\eqref{hill-eq}.
\section{Stability of the Oscillating state}\label{stability}
In this section, we study the stability of the oscillating states and the effect of the perturbations. In particular, we will survey the domains of the unperturbed oscillating states and then examine whether perturbations can coexist within these domains.
\subsection{Unperturbed oscillating states}
The zeroth-order asymptotic distribution~$P^{(0)}_{\infty}(x,v,t)$ exists when the first moments vanish at large times and the asymptotic covariance matrix is positive definite~\cite{Awasthi2020}. The first moments of the distribution~$P^{(0)}(x,v,t)$ are related to the solutions~$Y_{10}$ of the following Hill equation
\begin{equation}\label{hill-eq-0}
\frac{d^2}{dt^2} Y_{10} + \nu Y_{10} =0~,
\end{equation}
where~${\nu = k - \dot{\gamma}/2 - \gamma^2/4}$. When the Floquet exponents~$\pm \mu$ of this Hill equation satisfy the condition
\begin{equation}\label{cond-mu-g0}
\left| \text{Re} \left( \mu \right) \right| < \frac{1}{2} \overline{\gamma}~,
\end{equation}
then the first moments vanish asymptotically. This condition also means that ${\exp(-\overline{\gamma} t/2) Y_{10}(t)}$ should remain bounded at all times. Furthermore, the condition~\eqref{cond-mu-g0} ensures that the second moments also remain bounded and become $T$-periodic asymptotically, independent of the initial conditions of the distribution. We can deduce this fact since the second moments can also be written in terms of the Floquet solutions of Eq.~\eqref{hill-eq-0}. Essentially, the dynamical equations of the second moments $X_{2,0}, X_{1,1}$ and $X_{0,2}$ are
\begin{eqnarray}\label{mom-2-as}
\frac{d}{dt}X_{2,0} &=& 2 X_{1,1}~, \nonumber \\
\frac{d}{dt}X_{1,1} &=&-k X_{2,0} -\gamma X_{1,1} + X_{0,2}~,\nonumber \\
\frac{d}{dt}X_{0,2} &=& -2k X_{1,1} -2 \gamma X_{0,2} +2D~,
\end{eqnarray}
and the moments $X_{2,0}, X_{1,1}$ and $X_{0,2}$ approach their asymptotic values $\widetilde{X}_{2,0}, \widetilde{X}_{1,1}$ and $\widetilde{X}_{0,2}$, respectively, in the large time limit. The homogeneous part of these equations with parameters $k$ and $\gamma$ has the same structure as Eq.~\eqref{a-hom} for $L=2$ with corresponding parameters $k_p$ and $-\gamma_p$. Hence the manner in which Eq.~\eqref{a-hom}, its solution~\eqref{sol-bL} and its exponents~\eqref{Fexp-L} are related to the Hill equation~\eqref{hill-eq} is exactly the manner in which the homogeneous part of Eq.~\eqref{mom-2-as}, its solution and its exponents are related to the Hill equation~\eqref{hill-eq-0}. In fact the Floquet exponents associated with the covariance matrix can also be read off from Eq.~\eqref{Fexp-L} and are ${-\overline{\gamma} + (2-2r)\mu}$, where $r=0,1,2$.
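For time-independent parameters, setting the left-hand sides of Eqs.~\eqref{mom-2-as} to zero gives the equilibrium values ${\widetilde{X}_{1,1}=0}$, ${\widetilde{X}_{0,2}=D/\gamma}$ and ${\widetilde{X}_{2,0}=D/(\gamma k)}$, and the relaxation towards them is easy to verify numerically; the parameter values below are arbitrary.

```python
import numpy as np

gamma, k, D = 2.0, 4.0, 1.0        # constant (undriven) parameters

def rhs(X):
    """Right-hand side of Eqs. (mom-2-as) for (X20, X11, X02)."""
    X20, X11, X02 = X
    return np.array([2.0 * X11,
                     -k * X20 - gamma * X11 + X02,
                     -2.0 * k * X11 - 2.0 * gamma * X02 + 2.0 * D])

X = np.array([3.0, -1.0, 2.0])     # arbitrary initial second moments
dt = 1e-3
for _ in range(20_000):            # integrate to t = 20 >> 1/gamma
    k1 = rhs(X); k2 = rhs(X + dt/2 * k1)
    k3 = rhs(X + dt/2 * k2); k4 = rhs(X + dt * k3)
    X = X + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Fixed point of Eqs. (mom-2-as): <x^2> = D/(gamma k), <xv> = 0, <v^2> = D/gamma
X_eq = np.array([D / (gamma * k), 0.0, D / gamma])
```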
The condition~\eqref{cond-mu-g0} involves Floquet exponents which depend on the parameters~$k$ and $\gamma$ implicitly, and hence it is far from obvious without explicit verification whether a given driving allows the system to be in an oscillating state. We can rewrite the condition in a form that is more amenable to computation using Floquet theory of the Hill equation. The Floquet exponents essentially can be expressed in terms of the solutions of the Hill equation at time~$T$~\cite{Magnus2013,Eastham1975}. Let $u_1(t)$ and $u_2(t)$ be two independent solutions of the Hill equation~\eqref{hill-eq-0} with the initial conditions:~$u_1(0)=1$, $u_2(0)=0$, $\dot{u}_1(0)=0$ and $\dot{u}_2(0)=1$. Since the parameter~$\nu$ is $T$-periodic,
\begin{align}\label{sol-T}
\begin{pmatrix}
u_1(t+T)\\
u_2(t+T) \\
\end{pmatrix} =
\Phi(T)
\begin{pmatrix}
u_1(t) \\
u_2(t) \\
\end{pmatrix}~,
\end{align}
where the monodromy matrix
\begin{align}\label{mon-mat}
\Phi(T) =
\begin{pmatrix}
u_1(T) & \dot{u}_1(T) \\
u_2(T) & \dot{u}_2(T)
\end{pmatrix}~.
\end{align}
The Floquet coefficients~$\exp(\pm \mu T)$ are the eigenvalues of the matrix~$\Phi(T)$, namely the roots of the equation ${\rho^2 - \rho \Delta + 1 =0}$, where the trace of the matrix
\begin{equation}\label{tr-1}
\Delta = u_1(T) + \dot{u}_2(T)~,
\end{equation}
while the determinant is unity since the Wronskian of two independent solutions is constant. In other words, the coefficients
\begin{equation}\label{floq-coeff}
e^{\pm \mu T} = \frac{\Delta}{2} \pm \sqrt{ \frac{\Delta^2}{4}-1 }~.
\end{equation}
The Floquet coefficients are real for~$|\Delta| \ge 2$ and complex with modulus one for~$|\Delta| < 2$. Substituting Eq.~\eqref{floq-coeff} in condition~\eqref{cond-mu-g0} leads to the relation
\begin{equation}\label{eff-cond-1}
|\Delta| < 2\cosh\left( \frac{1}{2} \overline{\gamma} T \right)~.
\end{equation}
We can now distinguish different regions. When~${|\Delta| > 2\cosh( \overline{\gamma} T /2)}$ the oscillating states do not exist, since the function ${\exp(-\overline{\gamma} t/2) Y_{10}(t)}$ blows up at large times. In the region ${2 < |\Delta| < 2\cosh( \overline{\gamma} T /2)}$, any given initial distribution relaxes into an oscillating state on a time scale $\tau_R \sim 1/ [\overline{\gamma} /2 - | \text{Re} (\mu) |] $ that depends on the value of~$|\Delta|$.
In the region $ |\Delta| \le 2 $, the oscillating states exist and can be reached in a time $\tau_R \sim 2/\overline{\gamma}$. The case ${|\Delta| =2\cosh( \overline{\gamma} T /2)}$ is physically less attractive, as it not only requires a fine-tuned driving but also results in an asymptotic state with a memory of the initial conditions.
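In practice, condition~\eqref{eff-cond-1} is checked by integrating the Hill equation~\eqref{hill-eq-0} over a single period for the two solutions $u_1, u_2$ and reading off $\Delta$ from Eq.~\eqref{tr-1}. A sketch with illustrative first-harmonic driving values (not those used in the figures) is given below.

```python
import numpy as np

omega = 5.0
T = 2.0 * np.pi / omega
k0, k1_, g0, g1 = 4.0, 2.0, 3.0, 1.0      # illustrative driving amplitudes

k_t  = lambda t: k0 + k1_ * np.cos(omega * t)
g_t  = lambda t: g0 + g1 * np.cos(omega * t)
dg_t = lambda t: -g1 * omega * np.sin(omega * t)
# nu of Eq. (hill-eq-0): nu = k - (dgamma/dt)/2 - gamma^2/4
nu = lambda t: k_t(t) - 0.5 * dg_t(t) - 0.25 * g_t(t) ** 2

def monodromy(nu, T, n=8000):
    """Integrate u'' + nu(t) u = 0 over one period for the solutions with
    (u, u')(0) = (1, 0) and (0, 1); rows of Y hold (u_i, u_i')."""
    dt = T / n
    Y = np.eye(2)
    f = lambda t, Y: np.column_stack([Y[:, 1], -nu(t) * Y[:, 0]])
    t = 0.0
    for _ in range(n):
        k1 = f(t, Y); k2 = f(t + dt/2, Y + dt/2 * k1)
        k3 = f(t + dt/2, Y + dt/2 * k2); k4 = f(t + dt, Y + dt * k3)
        Y = Y + dt/6 * (k1 + 2*k2 + 2*k3 + k4); t += dt
    return Y

Phi = monodromy(nu, T)
Delta = Phi[0, 0] + Phi[1, 1]     # Eq. (tr-1): u1(T) + u2'(T)
wronskian = np.linalg.det(Phi)    # must stay equal to 1

gbar = g0                          # period average of gamma(t)
stable = abs(Delta) < 2.0 * np.cosh(0.5 * gbar * T)   # Eq. (eff-cond-1)
```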
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{MuPlotsN2.eps}
\caption{The absolute value of the real part of the Floquet exponent, $|\text{Re}(\mu)|$, controls the stability of the system. If its value lies below the dotted line at $\gamma_0/2$ ($= \overline{\gamma}/2$) in plots (a) and (b), then the system is asymptotically stable. (a) $|\text{Re}(\mu)|$ as a function of $k_1$ with $k_0 = 10$, $\gamma_0=8$, and $\gamma_1 = 4$ for frequencies $\omega \in \lbrace 4,6,12 \rbrace$. (b) $|\text{Re}(\mu)|$ as a function of $\gamma_1$ with $k_0 = 4$, $k_1=2$, and $\gamma_0 = 15$ for frequencies $\omega \in \lbrace 6,8,10 \rbrace$. (c) The inverse relaxation time $\tau_R^{-1}$ as a function of $k_0$ with $k_1=2$ and $\gamma_0 = 15$ for frequencies $\omega \in \lbrace 6,8,10 \rbrace$. The dotted line denotes the inverse relaxation time when the parameters $k$ and $\gamma$ are taken to be time-independent~($k_1 = \gamma_1 = 0$). }
\label{MuPlots}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{SRPlotsN2.eps}
\caption{Stable and unstable regions for the periodically driven system in (a) the $k_1-\omega$ plane with $\gamma_1 = 0$, $k_0 = 10.0625$, and $\gamma_0 = 0.5$; (b) the $\gamma_1-k_1$ plane with $\gamma_0 = 15$, $\omega = 5$, and $k_0 = 15$; (c) the $k_1-k_0$ plane with $\gamma_0 = 1$, $\gamma_1 = 0$, and $\omega = 5$. }
\label{Stability Region}
\end{figure*}
In general it may not be possible to analytically determine the explicit dependence of~$\Delta$ on the time-dependent parameters~$k$ and~$\gamma$, or, more precisely, on the function~$\nu$ of these parameters that appears in equation~\eqref{hill-eq-0}. Nevertheless, we can gain considerable qualitative understanding of this dependence by probing it numerically. To this end, we consider cases where the parameters are restricted to the first harmonics, evaluate~$\Delta$, and in turn determine the relaxation times and chart the domains of the oscillating states.
Let us drive the parameters as follows,
\begin{eqnarray}\label{kg-1har}
&k(t) = k_0 + k_1 \cos(\omega t)~, \nonumber \\
&\gamma(t) = \gamma_0 + \gamma_1 \cos(\omega t)~,
\end{eqnarray}
where~$k_0, k_1, \gamma_0, \gamma_1$ and~$\omega=2\pi/T$ are constants. Note that the Hill equation can be viewed as an eigenvalue problem or, equivalently, as a steady state Schr\"odinger equation for an electron in a one-dimensional periodic potential. The parameter~$k_0$, for instance, can be viewed as an eigenvalue provided the corresponding eigenfunction exists. Unlike the steady state wavefunctions in the electron case, here the eigenfunctions can grow asymptotically, though not faster than~$\exp(\gamma_0 t/2)$. As we vary~$k_0$, keeping the other constants fixed, we expect to encounter both forbidden regions, where the eigenfunctions do not exist, and allowed regions, where the eigenfunctions and hence the oscillating states exist. Similar features are indeed expected to show up when the other parameters are varied too.
We find it convenient to plot the absolute value of the real part of the Floquet exponent~$\mu$ as a function of the parameters in the chosen domain, since $\Delta$ grows exponentially with a linear increase of~$\mu$ and hence a log scale can be avoided. The dependence of~$|Re(\mu)|$ on~$k_1$ and on~$\gamma_1$ for different values of~$\omega$ is illustrated in Fig.~\hyperref[MuPlots]{1(a)} and Fig.~\hyperref[MuPlots]{1(b)}, respectively. All the numerical error bars are essentially smaller than the thickness of the lines used in the plots.
As we vary~$k_1$, we pass through three different regions. In the first, the value of~$|Re(\mu)|$ varies but remains less than~$\overline{\gamma}/2$, indicating that the system can relax into a stable oscillating state on a time scale that depends on~$k_1$. In the second, $|Re(\mu)| =0$ and the system relaxes to an oscillating state in a time that is independent of~$k_1$. In the third, $|Re(\mu)| > \overline{\gamma}/2$ and the system cannot exist in an oscillating state. Note that the regions where~$|Re(\mu)|$ approaches zero for $\omega=4$ and~$6$ appear as points in Fig.~\hyperref[MuPlots]{1(a)} only due to the choice of the scale of the axis. The rate of convergence to an oscillating state can of course be read from Fig.~\hyperref[MuPlots]{1(a)} and Fig.~\hyperref[MuPlots]{1(b)}, but is explicitly plotted in Fig.~\hyperref[MuPlots]{1(c)} to demonstrate its nontrivial dependence on the parameters, say~$k_0$.
A caveat is in order: even though for small~$k_1$ values the relaxation time decreases with increasing driving frequency, this is not a generic feature for arbitrary values of~$k_1$, as is evident from Fig.~\hyperref[MuPlots]{1(a)}. A similar conclusion can be drawn from Fig.~\hyperref[MuPlots]{1(b)}, where the relaxation time is not always monotonically related to the value of~$\gamma_1$. Nevertheless, observe in Fig.~\hyperref[MuPlots]{1(b)} that the value of~$\gamma_1$ that saturates the stability of the oscillating state increases with the driving frequency, a result in tune with the intuition that periodic driving enhances stability. Notice also, what is far from evident a priori, that there is a range of~$\gamma_1$ values for which~$\gamma(t)$ becomes negative at times and yet the system remains in a stable oscillating state.
The non-monotonic behavior of the relaxation times and the lack of simple algebraic relations to determine stability make it hard to envisage the exotic landscape of the parameter space of the driven system, containing regions that either favor or forbid oscillating states. We survey and chart some of the planes of the parameter space, for instance, the ${k_1-\omega}$ plane as shown in Fig.~\hyperref[Stability Region]{2(a)}, the ${\gamma_1-k_1}$ plane as shown in Fig.~\hyperref[Stability Region]{2(b)}, and the ${k_1-k_0}$ plane as shown in Fig.~\hyperref[Stability Region]{2(c)}. The maps clearly indicate that, as we move in these planes, we can pass through a variety of terrain, from vast stretches of stable regions to tentacles of allowed zones separated by impermissible gaps. It should be emphasized that the stable regions are identified not only by ensuring that condition~\eqref{eff-cond-1} holds but also by confirming in parallel that the covariance matrix~\eqref{cov-mat} is positive definite.
\subsection{Concurrence of the perturbations}
Suppose we choose the driven parameters~$k$, $\gamma$ and~$D$ such that a normalizable asymptotic distribution exists in the absence of the nonlinear forces. We still need to address whether such a choice is compatible with condition~\eqref{cond-pert}, for if it is not, the perturbations will destroy the oscillating state. This condition essentially ensures periodicity and boundedness of the perturbative corrections to the oscillating state.
The difficulty in determining the compatibility is that the condition~\eqref{cond-pert} involves Floquet exponents of a Hill equation and hence has to be verified explicitly. We first recast this condition to a form similar to Eq.~\eqref{eff-cond-1} that is more convenient for distinguishing various regions where the perturbative coefficients~$a_r^L$ exist and are periodic. We shall then proceed to address the issue of compatibility.
Let $p_1(t)$ and $p_2(t)$ be two independent solutions of the Hill equation~\eqref{hill-eq} with the initial conditions~$p_1(0)=1$, $p_2(0)=0$, $\dot{p}_1(0)=0$ and $\dot{p}_2(0)=1$. The corresponding monodromy matrix is
\begin{align}\label{mon-mat-2}
\Phi_p(T) =
\begin{pmatrix}
p_1(T) & \dot{p}_1(T) \\
p_2(T) & \dot{p}_2(T)
\end{pmatrix}~,
\end{align}
and the Floquet coefficients are
\begin{equation}\label{floq-coeff-2}
e^{\pm \mu_p T} = \frac{\Delta_p}{2} \pm \sqrt{ \frac{\Delta_p^2}{4}-1 }~,
\end{equation}
where the trace is
\begin{equation}\label{tr-2}
\Delta_p = p_1(T) + \dot{p}_2(T)~.
\end{equation}
Substituting Eq.~\eqref{floq-coeff-2} in condition~\eqref{cond-pert} leads to the relation
\begin{equation}\label{eff-cond-2}
|\Delta_p| < 2\cosh\left( \frac{1}{2} \overline{\gamma}_p T \right)~,
\end{equation}
which when satisfied guarantees the coexistence of the perturbations.
Since the quantities~$\Delta_p$, $\overline{\gamma}_p$ and~$\Delta$ are fixed by the parameters~$k$, $\gamma$ and~$D$, it is natural to ask whether there are regions in the parameter space where the two relations~\eqref{eff-cond-1} and~\eqref{eff-cond-2} are mutually incompatible. Astonishingly, we found strong numerical evidence to the contrary, and noticed the following two relations,
\begin{eqnarray}
\label{sup-1}&\overline{\gamma}_p = - \overline{\gamma} ~, \\
\label{sup-2}&\Delta_p = \Delta~,
\end{eqnarray}
for any given~$k$, $\gamma$ and~$D$ that supports an unperturbed oscillating state. We have in fact sampled a class of periodic driving that included higher harmonics up to tenth order, and numerically explored a 64-dimensional parameter space either randomly or by continuously varying one of the parameters.
Though $\gamma_p$ depends nontrivially on the two additional parameters $k$ and $D$, we find that its time average is independent of them. Though the functions~$\nu$ and $\nu_p$ are in general found to be wildly different from each other, the corresponding Floquet exponents appear to be equal without any exception. The prime reason for not foreseeing these relations a priori is that, unlike~$\nu$, the function~$\nu_p$ depends in addition on $D$, not only implicitly through $\Sigma^{-1}$ but also explicitly.
We now prove that the relations~\eqref{sup-1} and~\eqref{sup-2} indeed hold. The first relation easily follows once we note that the dynamics of the determinant~$|\Sigma|$ of the asymptotic covariant matrix~$\Sigma$ which can be obtained from Eq.~\eqref{mom-2-as} can be written as
\begin{equation}\label{det-dyn}
\frac{d}{dt} \ln|\Sigma| = -2 \gamma + 2D (\Sigma^{-1})_{22}~.
\end{equation}
The right hand side of the above equation reduces to~$-(\gamma+\gamma_p)$ upon using Eq.~\eqref{gp-kp}. The average of the left hand side over one period vanishes since the moments of the asymptotic distribution are $T$-periodic, and thus relation~\eqref{sup-1} follows.
We need a chain of arguments to establish the second relation. Using Eqs.~\eqref{mom-2-as} it is straightforward to obtain the equations of motion for all the elements of the inverse covariance matrix~$S$ and write down the following system of nonlinear equations,
\begin{equation}\label{inv-corr-eqn}
\frac{d}{dt} {\bf c} = M({\bf c}) {\bf c} + {\bf d}({\bf c})~,
\end{equation}
where the transpose of ${\bf c}$ and ${\bf d}({\bf c})$ are
\begin{eqnarray}\label{c-d-def}
&{\bf c}^T = \Big( S_{11} , 2 S_{12} , S_{22} \Big)~,\nonumber \\
&{\bf d}({\bf c})^T = 2 D \Big( S^2_{12} , 2 S_{12} S_{22} , S^2_{22} \Big)~,
\end{eqnarray}
respectively, and the matrix~$M({\bf c})$ is
\begin{align}\label{M-def}
M({\bf c})=
\begin{pmatrix}
0 & k' & 0 \\
-2 & \gamma' & 2 k' \\
0 & -1 & 2 \gamma'
\end{pmatrix}~,
\end{align}
where ${k' = k -2D S_{12} }$ and ${\gamma' = \gamma - 2 D S_{22} }$. In the limit~$t \to \infty$, the matrix~$S(t)$ approaches the inverse of the asymptotic covariance matrix~$\Sigma(t)$ and hence ${k' \to k_p}$, ${\gamma' \to \gamma_p}$, the vector~${\bf d}({\bf c})$ becomes $T$-periodic and the matrix
\begin{equation}\label{M-asy}
M({\bf c}) \to M_{\infty} := \left[ - \mathbf{J}^{-}_{2} +\frac{\gamma_p }{2} \left( 2 I_2 +\mathbf{J}_{2} \right) + k_p \mathbf{J}^{+}_{2}\right]^T .
\end{equation}
The asymptotic solution of Eq.~\eqref{inv-corr-eqn} is unique and bounded since the asymptotic unperturbed distribution is unique and normalizable.
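The relaxation of the inverse covariance matrix onto its periodic asymptote can be observed by integrating Eq.~\eqref{inv-corr-eqn} directly. The sketch below implements the system for ${\bf c} = (S_{11}, 2S_{12}, S_{22})^T$ with the first-harmonic driving of Eq.~\eqref{kg-1har}; the parameter values and the constant diffusion coefficient $D$ are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-harmonic driving (illustrative values); D is assumed constant
k0, k1, g0, g1, w, D = 10.0, 2.0, 8.0, 1.0, 6.0, 1.0
k = lambda t: k0 + k1 * np.cos(w * t)
gam = lambda t: g0 + g1 * np.cos(w * t)

def rhs(t, c):
    """Right-hand side of dc/dt = M(c) c + d(c) for c = (S11, 2*S12, S22)."""
    S11, twoS12, S22 = c
    S12 = 0.5 * twoS12
    kp = k(t) - 2.0 * D * S12        # k'  -> k_p asymptotically
    gp = gam(t) - 2.0 * D * S22      # gamma' -> gamma_p asymptotically
    M = np.array([[0.0,  kp,  0.0],
                  [-2.0, gp,  2.0 * kp],
                  [0.0, -1.0, 2.0 * gp]])
    d = 2.0 * D * np.array([S12**2, 2.0 * S12 * S22, S22**2])
    return M @ c + d

# Relax from an arbitrary positive-definite initial S towards the periodic asymptote
T = 2.0 * np.pi / w
sol = solve_ivp(rhs, (0.0, 60.0 * T), [1.0, 0.0, 1.0],
                t_eval=[59.0 * T, 60.0 * T], rtol=1e-9, atol=1e-11)
drift = np.abs(sol.y[:, 1] - sol.y[:, 0]).max()  # ~0 once c(t+T) = c(t) is reached
```

Comparing the state at $t=59T$ and $t=60T$ probes the $T$-periodicity of the asymptotic solution; for driving parameters in a stable region the drift is negligible.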
Any solution of the equation for a given initial condition approaches the asymptotic solution, while the differences between the solutions vanish asymptotically at a rate dictated by the Floquet exponents. The dynamics of the difference~$\delta {\bf c} $ of infinitesimally separated solutions follows from Eq.~\eqref{inv-corr-eqn} and is given by
\begin{equation}\label{deviate-eq}
\frac{d}{dt} \delta {\bf c} = M({\bf c}) \delta{\bf c}~,
\end{equation}
which asymptotically takes exactly the same form as Eq.~\eqref{a-hom} satisfied by ${\bf a}_L$ for $L=2$. Hence, using Eq.~\eqref{Fexp-L}, we conclude that the Floquet exponents associated with the elements of the inverse covariance matrix are~${\mu_2^{(r)} = \overline{\gamma}_p + (2-2r)\mu_p}$, where ${r=0,1,2}$. The variation of the covariance matrix~$\delta S^{-1}$ is related to the variation of the inverse covariance matrix~$\delta S$ by the relation
\begin{equation}\label{var-C}
{\delta S^{-1} = - S^{-1} {\delta S} S^{-1} \to - \Sigma {\delta S} \Sigma} .
\end{equation}
The Floquet exponents associated with the variation of the covariance matrix, $\delta S^{-1}$, can be obtained independently and, as mentioned earlier, are ${-\{\overline{\gamma} + (2-2r)\mu\}}$. Now, using Eqs.~\eqref{var-C} and~\eqref{sup-1}, we conclude~$\mu_p = \pm \mu$ and thus establish Eq.~\eqref{sup-2}.
Essentially, the two conditions~\eqref{cond-pert} and~\eqref{cond-mu-g0} or the corresponding equivalent conditions~\eqref{eff-cond-2} and~\eqref{eff-cond-1} are not just compatible with each other but are in fact one and the same. In other words, we find that the perturbations can coexist with the oscillating states in the entire domain of their existence.
\section{Concluding comments}\label{conc}
We have considered a periodically driven Brownian particle under nonlinear forces and analysed the time dependence of the asymptotic distribution. The unperturbed Brownian particle is known to exist in an oscillating state under the conditions that the inequality~\eqref{cond-mu-g0} holds and that the covariance matrix is positive definite. To any order in the nonlinear perturbations, we find that the oscillating state is either sustained or destroyed depending on whether or not the condition~\eqref{cond-pert} holds. The reason that it is the same condition that arbitrates the existence of the oscillating state at every non-zero order of perturbation is the presence of the underlying $SL_2$ symmetry. We have essentially formulated the perturbative analysis suitable both for identifying the symmetry, as given in Eq.~\eqref{a-hom}, and for obtaining exactly the asymptotic distribution in terms of the solution of the Hill equation~\eqref{hill-eq}. We have obtained the formal expression of the first order coefficients~$\mathbf{a}_L(t)$ of the asymptotic distribution, as given in Eqs.~\eqref{asy-sol-L} and~\eqref{asy-sol-L-2}, and outlined the procedure to determine the higher order coefficients.
We have addressed the issue of the stability of the oscillating state, which is essentially related to the compatibility of the two conditions~\eqref{cond-mu-g0} and~\eqref{cond-pert}. These conditions depend implicitly on the driving parameters in a nontrivial way, through the Floquet exponents of the corresponding Hill equations. We have charted out some of the terrains of the oscillating states in the parameter space of driving involving first harmonics, so as to explicitly demonstrate the nontrivial relation between the driving parameters and the stated conditions. More importantly, we have proved the equivalence of these conditions despite their implicit dependence on the parameters, and established that the oscillating states are stable against nonlinear perturbations to all orders of perturbation.
The ubiquity of the driven systems and their access to experiments is a strong motivation to study the effect of different perturbations on the oscillating states. The perturbative formulation developed here could prove valuable not only in understanding the properties of these states but also in establishing the appropriate description of driven physical systems.
Since a variety of macroscopic systems undergo stochastic dynamics similar to that of Brownian motion, such systems, when driven, may exist in stable oscillating states. The necessary conditions that we have obtained could be effective in identifying the type of driving required to maintain an oscillating state.
The analysis that we have employed here can also be easily extended to study driven stochastic systems possessing other symmetries. It would be interesting to know the role of symmetry in the necessary and sufficient conditions under which the oscillating states can exist and be sustained.
This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No.\ OSR-2017-CRG6-3434.02. The sea surface temperature data studied in this paper were provided by GHRSST, Met Office and CMEMS.
\newpage
\section{Introduction}\label{sec:intro}
\subsection{Statistical modeling of spatial extremes}
The availability of increasingly detailed spatial and spatio-temporal datasets has motivated a recent surge in methodological developments to model such data. In this work, we are concerned with modeling extreme values of spatial or spatio-temporal processes, which we denote by $\{Y(s): s \in \mathcal{S} \subseteq \mathbb{R}^2\}$ and $\{Y(s,t): (s,t) \in \mathcal{S}\times\mathcal{T} \subseteq \mathbb{R}^2\times\mathbb{R}_+\}$. The goal of modeling spatio-temporal extremes is often to enable extrapolation from observed extreme values to future, more intense episodes, and consequently requires careful selection of models suited to this delicate task.
Early work on spatial extremes focused almost exclusively on max-stable processes \citep{Smith.1990,Coles.1993,Schlather.2002,Padoan.al.2010,Davison.Gholamrezaee.2012}. These are the limiting objects that arise through the operation of taking pointwise maxima of $n$ weakly dependent and identically distributed copies of a spatial process. However, this is a poor strategy when data exhibit a property known as \emph{asymptotic independence}, which means that the limiting process of maxima consists of everywhere-independent random variables. Moreover, even when the process is \emph{asymptotically dependent}, meaning that the limit has spatial coherence, the fact that the resulting process is formed from many underlying original events \citep{Dombry2018} can hinder both interpretability and inference. More recently, analogues of max-stable processes suited to event-level data have been developed \citep{Ferreira.deHaan.2014,Dombry.Ribatet.2015,Thibaud.Opitz.2016,deFondeville.Davison.2020}, but use of these generalized Pareto or $r$-Pareto processes also requires strong assumptions on the extremal dependence structure.
Broadly, spatial process data can be split according to whether they exhibit asymptotic independence or asymptotic dependence. As mentioned, these can be characterized by whether the data display independence or dependence in the limiting distribution of pointwise maxima, but when considering threshold exceedances other definitions are more useful. Consider two spatial locations $s, s+h \in \mathcal{S}$. For $Y(s) \sim F_s$, define the tail correlation function \citep{Strokorb2015} as
\begin{align}
\chi(s,s+h) = \lim_{q \to 1} \Pr\{F_{s+h}(Y(s+h))>q \mid F_{s}(Y(s))>q\}. \label{eq:chi}
\end{align}
If $\chi(s,s+h) = 0$ for all $h \neq 0$ then $Y$ is asymptotically independent; it is asymptotically dependent when the limit in~\eqref{eq:chi} is positive for all $h$. Intermediate scenarios of asymptotic dependence up to a certain distance are also possible. Asymptotic dependence is a minimum requirement for use of max-stable or Pareto process models, but in practice more rigid assumptions are imposed, as these models do not allow for any weakening of dependence with the level of the event --- a feature common in most environmental datasets.
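In practice, $\chi(s,s+h)$ is estimated empirically by fixing a high quantile level $q<1$ and replacing the marginal distributions $F_s$ by rescaled ranks. The following sketch (function name and rank convention are our own) computes such an estimate from paired samples at two sites.

```python
import numpy as np

def chi_empirical(u, v, q):
    """Empirical tail correlation: Pr{F_V(V) > q | F_U(U) > q} from paired samples.

    The marginal distributions are replaced by ranks rescaled to (0, 1)."""
    u, v = np.asarray(u), np.asarray(v)
    n = len(u)
    fu = (np.argsort(np.argsort(u)) + 1.0) / (n + 1.0)  # approximate F_U(u_i)
    fv = (np.argsort(np.argsort(v)) + 1.0) / (n + 1.0)  # approximate F_V(v_i)
    exceed = fu > q
    return float(np.mean(fv[exceed] > q)) if exceed.any() else float("nan")
```

Evaluating this estimate on a grid of $q$ approaching $1$ gives an empirical check of extremal dependence at a pair of sites: estimates tending to zero suggest asymptotic independence, while estimates stabilizing at a positive value suggest asymptotic dependence.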
Recent work on modeling of spatial extremes has focused on the twin challenges of incorporating flexible extremal dependence structures, and developing models and inference techniques that allow for large numbers of observation locations. \citet{Huser.al.2017,Huser.al.2020} suggest Gaussian scale mixture models, for which both types of extremal dependence can be captured depending on the distribution of the scaling variable. The model of \citet{Huser.Wadsworth.2018} was the first to offer a smooth transition between dependence classes, meaning it is not necessary to make a choice before fitting the model. However, owing to complicated likelihoods, each of these models is limited in practice to datasets with tens of observation locations. Modifications in \citet{Zhang.al.2019} suggest that hundreds of sites might be possible, but further scalability looks elusive for now.
\cite{Wadsworth.Tawn.2019} proposed an alternative approach based on a spatial adaptation of the multivariate conditional extreme value model \citep{Heffernan.Tawn.2004, Heffernan.Resnick.2007}, which has been further extended to the space-time case by \citet{Simpson.Wadsworth.2020}. Both types of extremal dependence can be handled and the likelihoods involved are much simpler. However, application thus far has still been limited to hundreds of observation locations. In this work, we seek to exploit the power of Gaussian Markov random fields and the integrated nested Laplace approximation (INLA) in this context in order to permit \edit{substantially higher} dimensional inference and prediction, and to achieve more flexible modeling by replacing parametric structures with semi-parametric extensions. We note that, in the restricted case of Pareto processes, \citet{deFondeville.Davison.2018} perform inference for a 3600-dimensional problem via a gradient-score algorithm. In this work, we handle inference for problems of comparable dimension, using the more flexible conditional extremes models with likelihood-based inference. \cite{Opitz.al.2018} have previously used INLA in an extreme value analysis context, focusing on regression modeling of threshold exceedances, but this is the first time it has been utilized in the conditional extremes framework.
\edit{In the remainder of the introduction, we provide background on the conditional extremes approach, and introduce briefly the idea of the INLA methodology. In Section~\ref{sec:modelFormulations}, we detail modifications to the conditional modeling approach that allow for inference through a latent variable framework using INLA; it is these tools that enable inference to become feasible at thousands of observation locations.}
\subsection{\edit{Conditional extremes models}}\label{subsec:CSEintro}
The aforementioned conditional extremes approaches, originating with \cite{Heffernan.Tawn.2004} in the multivariate case, involve the construction of models by conditioning on exceedances of a high threshold in a single variable. The spatial setting studied by \cite{Wadsworth.Tawn.2019}, and subsequent spatio-temporal extension of \cite{Simpson.Wadsworth.2020}, require conditioning on threshold exceedances at a single spatial or spatio-temporal location, with additional structure being introduced by exploiting the proximity of the other locations to this conditioning site.
In the spatial setting, denote by $\{X(s): s \in \mathcal{S}\}$ a stationary and isotropic process with marginal distributions possessing exponential-type upper tails, i.e., $\Pr\{X(s)>x\}\sim \edit{c}e^{-x}$ as $x\rightarrow\infty$, $\edit{c>0}$. This is achieved in practice via a marginal transformation, explained further in Section~\ref{subsec:redsea}. Let $s_0$ denote the conditioning site. We assume that $\{X(s)\}$ possesses a joint density, so that conditioning on the events $\{X(s_0)>u\}$ or $\{X(s_0)=u\}$ as $u \to \infty$ leads to the same limiting process \citep{Wadsworth.Tawn.2019}; see also \citet{Drees.Janssen.2017} for further discussion on the conditioning event in a multivariate setting. We comment on handling anisotropy in Section~\ref{sec:discussion}.
For a finite set of locations $s_1,\dots,s_d$, \cite{Wadsworth.Tawn.2019} assume that there exist normalizing functions $a_{s-s_0}(\cdot)$ and $b_{s-s_0}(\cdot)$ such that as $u\rightarrow\infty$,
\begin{align}
\Pr\left(\left[\frac{X(s_i)-a_{s_i-s_0}\left\{X(s_0)\right\}}{b_{s_i-s_0}\left\{X(s_0)\right\}}\right]_{i=1,\dots,d} \leq \bm{z}\bigg{\vert}~X(s_0)=u\right) ~\rightarrow \Pr\left(\{Z^0(s_i)\}_{i=1,\dots,d} \leq \bm{z}\right),
\label{eqn:CSEassumption}
\end{align}
where $\bm{z} = (z_1,\ldots, z_d)$, \edit{and the vector $\{Z^0(s_1),\ldots,Z^0(s_d)\}$ represents a finite-dimensional distribution of some stochastic process $\{Z^0(s)\}$, referred to as the residual process}. Several theoretical examples are provided therein to illustrate this assumption. The first of the normalizing functions is constrained to take values $a_0(x)=x$ and $a_{s-s_0}(x)\in[0,x]$, and is usually non-increasing as the distance between $s$ and $s_0$ increases: the residual process therefore satisfies $Z^0(s_0)=0$. Furthermore, under assumption~\eqref{eqn:CSEassumption}, the excess of the conditioning variable $X(s_0)-u\mid X(s_0)>u$ is exponentially distributed, and independent of the residual process.
Assumption~\eqref{eqn:CSEassumption} is exploited for modeling by assuming that it holds approximately above a high threshold $u$. In particular, we can assume that
\begin{align}
\{X(s):s\in\mathcal{S}\} \mid \left[X(s_0)=x\right] = a_{s-s_0}(x) + b_{s-s_0}(x)\{Z^0(s):s\in\mathcal{S}\}, \qquad x>u.
\label{eqn:modelingAssumption}
\end{align}
Suitable choices for $a_{s-s_0}(\cdot),b_{s-s_0}(\cdot)$ and $\{Z^0(s)\}$ lead to models with different characteristics. \cite{Wadsworth.Tawn.2019} propose a theoretically-motivated parametric form for the normalizing function $a_{s-s_0}(\cdot)$, as well as three different parametric models for $b_{s-s_0}(\cdot)$ that are able to capture different tail dependence features. They propose constructing the residual process by first considering some stationary Gaussian process $\{Z(s)\}$, and either subtracting $Z(s_0)$ or conditioning on $Z(s_0)=0$ to ensure the condition $Z^0(s_0)=0$ on $\{Z^0(s)\}$ is satisfied. \edit{Marginal transformations of $\{Z^0(s)\}$ are considered therein in order to increase the flexibility of the models.}
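To illustrate the modeling assumption~\eqref{eqn:modelingAssumption}, the sketch below simulates conditional realizations on a one-dimensional transect. The specific choices $a_h(x) = x\exp\{-(h/\lambda)^{\kappa}\}$, $b_h(x)=1$, and a residual process built by subtracting the value at $s_0$ from an exponential-covariance Gaussian process are purely illustrative assumptions, not the parametric forms proposed by \cite{Wadsworth.Tawn.2019}.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_conditional_field(s, s0_idx, u, lam=2.0, kappa=1.0, n_rep=1000):
    """Draw replicates of X(s) | X(s0) > u under the conditional model.

    Illustrative choices: a_h(x) = x * exp(-(h/lam)**kappa), b_h(x) = 1, and a
    residual process obtained by subtracting the value at s0 from a Gaussian
    process with exponential covariance, so that Z0(s0) = 0."""
    h = np.abs(s - s[s0_idx])
    cov = np.exp(-np.abs(s[:, None] - s[None, :]) / lam)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(s)))
    x0 = u + rng.exponential(size=n_rep)                # exceedances at s0
    Z = (L @ rng.standard_normal((len(s), n_rep))).T    # Gaussian replicates
    Z0 = Z - Z[:, [s0_idx]]                             # enforce Z0(s0) = 0
    a = x0[:, None] * np.exp(-(h / lam) ** kappa)       # satisfies a_0(x) = x
    return a + Z0

s = np.linspace(0.0, 10.0, 21)   # transect with conditioning site s0 = s[0]
X = simulate_conditional_field(s, 0, u=3.0)
```

By construction, the simulated field equals the exceedance $X(s_0) = u + E$, $E\sim\mathrm{Exp}(1)$, at the conditioning site, and the influence of the conditioning value decays with distance through $a_h$.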
\edit{We note that assumptions~\eqref{eqn:CSEassumption} and \eqref{eqn:modelingAssumption} depend on the choice of $s_0$. In some applications, there may be a location of particular interest that would make a natural candidate for $s_0$, but for other scenarios the choice is not evident. However, under the assumption that $\{X(s)\}$ possesses a stationary dependence structure, in the sense that the joint distributions are invariant to translation, the form of the normalization functions $a_{s-s_0},b_{s-s_0}$ and the form of the residual process $\{Z^0(s)\}$ do not in fact depend on $s_0$, so that inference made using one conditioning location is applicable at any location. We discuss this issue further in Section~\ref{sec:sensitivity}.}
\edit{The approach to inference taken by \cite{Wadsworth.Tawn.2019} involves a composite likelihood. This allows different locations to play the role of the conditioning site, and combines information across each of these.} \edit{Inference under this ``vanilla'' version of the model, with $\{Z^0(s)\}$ constructed from a Gaussian process and parametric forms for the normalizing functions, can currently be performed for hundreds of observation locations.} However, scalability to thousands of locations is impeded by the $O(d^3)$-complexity of matrix inversion in the Gaussian process part of the likelihood and the fact that, in contrast to other areas of spatial statistics, we have $n$ replicates of the process to be used for inference.
\subsection{\edit{INLA and the latent variable approach}}
\edit{In order to facilitate higher dimensional inference, we represent our model for observations at $d$ locations in terms of an $m$-dimensional latent Gaussian model, where $m \ll d$. This has the effect of creating a model that is amenable to use of the INLA framework, which allows for fast and accurate inference on the Bayesian posterior distribution of a parameter vector of interest, $\bm{\theta}$. It is particularly computationally convenient when the $m$-dimensional latent Gaussian component is endowed with a Gaussian Markov covariance structure, so that the precision matrix is sparse.}
\edit{The general form of the likelihood for models amenable to inference via INLA is
\begin{align}
\pi(\bm{v}|\bm{\eta},\bm{\theta}) = \prod_{i=1}^d \pi(v_i|\eta_i,\bm{\theta}), \label{eq:inlamodel}
\end{align}
where the observations $\bm{v} =(v_1,\ldots,v_d)^\top \in \mathbb{R}^d$, but the vector $\bm{\eta} = (\eta_1,\ldots,\eta_d)$ is a linear function of the $m$-dimensional latent Gaussian process. This specification of the distribution of $\bm{\eta}$, via a sparse precision matrix, allows for inference on the posterior distribution of interest, $\pi(\bm{\theta}|\bm{v})$, through a Laplace approximation to the necessary integrals. Note that the vector $\bm{\theta}$ also includes parameters of the latent Gaussian model, and is usually termed the \emph{hyperparameter} vector.}
\edit{A benefit of this Bayesian approach to statistical modeling with latent variables, over alternatives like the EM algorithm or Laplace approximations applied in a frequentist setting, is that parameters, predictions and uncertainties can be estimated simultaneously, and prior distributions can be used to incorporate expert knowledge and control the complexity of the model or its components with respect to simpler baselines. Moreover, the availability of the \texttt{R} software package \texttt{R-INLA} \citep{Rue.al.2017} facilitates the implementation, reusability and adaptation of our models and code, making them suitable for use with datasets other than the one considered in this paper. One of the main challenges we face is reconciling the form of the conditional extremes models with the formulation in equation~\eqref{eq:inlamodel} that is allowed under this framework. We outline our general strategy in Section~\ref{sec:modelFormulations}, but defer more detailed computational and implementation details to Section~\ref{sec:computation}. Our motivation for this is to provide readers with a general understanding of the methodology, unimpeded by extensive technical details. However, we note that implementation is a substantial part of the task and therefore Section~\ref{sec:computation} provides interested parties with the necessary particulars.}
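The structure in equation~\eqref{eq:inlamodel} can be made concrete with a small simulation: a latent Gaussian vector with a sparse (here tridiagonal) precision matrix is mapped through an observation matrix to a linear predictor, which then parametrizes conditionally independent non-Gaussian observations. All model choices below (random-walk-type precision, nearest-node observation matrix, Poisson likelihood) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 30, 100          # latent dimension m << d observations

# Sparse (tridiagonal) precision matrix: first-order random walk plus a ridge
Q = 2.1 * np.eye(m)
Q[np.arange(m - 1), np.arange(1, m)] = -1.0
Q[np.arange(1, m), np.arange(m - 1)] = -1.0

# Latent field W ~ N(0, Q^{-1}): if Q = L L^T, then W = L^{-T} z has precision Q
Lq = np.linalg.cholesky(Q)
W = np.linalg.solve(Lq.T, rng.standard_normal(m))

# Observation matrix A maps latent nodes to sites; here each site picks one node
A = np.zeros((d, m))
A[np.arange(d), rng.integers(0, m, size=d)] = 1.0
eta = A @ W

# Conditionally independent, non-Gaussian likelihood: V_i ~ Poisson(exp(eta_i))
V = rng.poisson(np.exp(eta))
```

The likelihood of $\bm{v}$ given $\bm{\eta}$ factorizes over $i$ exactly as in equation~\eqref{eq:inlamodel}, which is the structural property that INLA exploits.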
\subsection{Overview of paper}
The remainder of the paper is structured as follows. \edit{In Section~\ref{sec:modelFormulations}, we provide a discussion on flexible forms for the conditional spatial extremes model that are possible under the latent variable framework. We discuss details of our inferential approach in Section~\ref{sec:spatialInference}, then apply this to a dataset of Red Sea surface temperatures in Section~\ref{sec:spatialApplication}, considering a range of diagnostics to aid model selection and the assessment of model fit. A spatio-temporal extension is presented in Section~\ref{sec:spacetimeInference}. Section~\ref{sec:computation} is aimed at those readers interested in the specifics of implementation, and includes more detail on INLA, the construction of Gauss-Markov random fields with approximate Mat\'ern covariance, and implementation of our models in \texttt{R-INLA}.} Section~\ref{sec:discussion} concludes with a discussion. Supplementary Material contains code for implementing the models we develop, and is available at \url{https://github.com/essimpson/INLA-conditional-extremes}.
\section{\edit{The latent variable approach and model formulations}}\label{sec:modelFormulations}
\subsection{Overview}
\edit{In this section, we begin by outlining details of the latent variable approach. We then build on the conditional extremes modeling assumption given in~\eqref{eqn:modelingAssumption} to allow for higher-dimensional inference under this latent variable framework. We discuss specific variants of the conditional extremes model that are possible in this case, summarizing the options in Section~\ref{subsec:modelSummary}.}
\subsection{Generalities on the latent variable approach}\label{sec:generalities}
Here, we provide some general details on the latent variable approach for spatial modeling, denoting the observed data generically by $\bm V = (V_1,\ldots,V_d)^\top$, which in our context will correspond to observations at $d$ spatial locations. When modeling spatial extreme values, it is always necessary to have replications of the spatial process in question in order to distinguish between marginal distributions and dependence structures, and to define extreme events. We comment further on the handling of temporal replication in Sections~\ref{subsec:spacetime} and~\ref{sec:implementation}; we will later also explicitly model temporal, as well as spatial, dependence.
In hierarchical modeling with latent Gaussian processes, we define a latent, unobserved Gaussian process $\bm W=(W_1,\ldots,W_m)^\top$, with $m$ denoting the number of \edit{`locations'} of the latent process. \edit{These could encompass the spatial locations used to discretize the spatial domain, or the knots used in spline functions, for example.} We assume conditional independence of the observations $\bm V$ with respect to $\bm W$, and use the so-called \emph{observation matrix} $A\in\mathbb{R}^{d\times m}$ to define a linear predictor
$$
\bm\eta = \bm\eta (\bm W) = A \bm W
$$
that linearly combines the latent variables in $\bm W$ into components $\eta_i$ associated with $V_i$, $i=1,\ldots,d$. Each component $\eta_i$ represents a parameter of the probability distribution of $V_i$. The matrix $A$ is deterministic and is fixed before estimating the model. For instance, $A$ handles the piecewise linear spatial interpolation from the \edit{$m$ locations represented by the latent Gaussian vector $\bm W$ towards the $d$ observed sites; for this, $\bm W$ may contain the values of a spatial field} at locations $\tilde{s}_1,\ldots,\tilde{s}_m$, and $A$ has $i$-th row $A_i=(0,\ldots,0,1,0,\ldots,0)$ if the observation location $s_i$ of $V_i$ coincides with one of the locations $\tilde{s}_{j_0}$, where the $1$-entry is at the $j_0$th position. Otherwise, several entries of $A_i$ could have non-zero weight to implement interpolation between the $\tilde{s}_{j}$-locations. The distribution of $\bm\eta$ is also multivariate Gaussian due to the linear transformation. The univariate probability distribution of $V_i$, often referred to as the \emph{likelihood model}, can be Gaussian or non-Gaussian and is parametrized by the linear predictor $\eta_i$, and potentially by other hyperparameters. The vector of hyperparameters (i.e., parameters that are not components of one of the Gaussian vectors $\bm W$ and $\bm \eta$), such as those related to variance, spatial dependence range, or smoothness of a spline curve, is denoted by $\bm \theta$. Letting $\pi(\cdot)$ denote a generic probability distribution, the hierarchical model is structured as follows:
\begin{align*}
\bm \theta &\sim \pi(\cdot) & \text{hyperparameters,}\\
\bm W\mid \bm \theta & \sim \mathcal{N}_m\left(\bm 0, Q(\bm
\theta)^{-1}\right) &\text{latent Gaussian components,} \\
V_i\mid \bm W, \bm \theta &\sim \pi(\cdot \mid \eta_i, \bm
\theta), ~~\mbox{independent} &\text{likelihood\ of\ observations.}
\end{align*}
The matrix $Q(\bm\theta)$ denotes the precision matrix of the latent Gaussian vector $\bm W$, whose variance-covariance structure may depend on some of the hyperparameters in $\bm \theta$ that we seek to estimate.
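To make the role of the observation matrix concrete, the following sketch (a hypothetical one-dimensional example, not the paper's implementation) constructs $A$ for piecewise-linear interpolation from latent locations $\tilde{s}_1,\ldots,\tilde{s}_m$ to observation sites $s_1,\ldots,s_d$; a row reduces to a unit vector exactly when an observation site coincides with a latent location.

```python
import numpy as np

def interpolation_matrix(obs_locs, latent_locs):
    """Observation matrix A mapping latent values at the sorted latent_locs
    to observation sites by piecewise-linear interpolation. Rows sum to 1;
    a row is a unit vector when a site coincides with a latent location."""
    m = len(latent_locs)
    A = np.zeros((len(obs_locs), m))
    for i, s in enumerate(obs_locs):
        j = np.searchsorted(latent_locs, s)
        if j == 0:
            A[i, 0] = 1.0            # at or below the first latent location
        elif j == m:
            A[i, m - 1] = 1.0        # above the last latent location
        else:
            w = (s - latent_locs[j - 1]) / (latent_locs[j] - latent_locs[j - 1])
            A[i, j - 1], A[i, j] = 1.0 - w, w
    return A

latent = np.array([0.0, 1.0, 2.0, 3.0])   # locations of the latent vector W
obs = np.array([0.5, 2.0, 2.25])          # observation sites
A = interpolation_matrix(obs, latent)
```

The linear predictor is then obtained as `A @ W` for any realization `W` of the latent vector; the second row of `A` is a unit vector because the site $2.0$ coincides with a latent location.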
In the case of observations $V_i$ having a Gaussian distribution\edit{, we can set the Gaussian mean as the linear predictor $\eta_i$. Then,} the conditional variance $\sigma^2$ of $V_i$ given $\eta_i$ is a hyperparameter, and we define
\begin{align}
V_i \mid \eta_i, \sigma^2 \sim \mathcal{N}(\eta_i, \sigma^2), \quad i=1,\ldots,d.
\label{eqn:latentV}
\end{align}
\edit{Under the latent variable framework, if $d>m$ as in our setting, we always need a small positive variance $\sigma^2>0$ in~\eqref{eqn:latentV}, since there is no observation matrix $A$ that would allow for an exact solution to the equation $\bm V = A\bm W$ with given $\bm V$. The presence of this independent component with positive variance is therefore a necessity for the latent variable approach. In some modeling contexts it can be interpreted as a useful model feature, e.g., to capture measurement errors. We discuss the consequences of this $\sigma^2$ parameter on our conditional model in Section~\ref{sec:spatialModels}. With the Gaussian likelihood \eqref{eqn:latentV}, the multivariate distribution of $\bm V$ given $\sigma^2$ is still Gaussian, and in this case the Laplace approximation to the posterior $\pi(\bm{\theta}|\bm{v})$ with the observation $\bm{v}$ of $\bm V$ is exact and therefore does not induce any approximation biases.}
A major benefit of the construction with latent variables is that the dimension of the latent vector $\bm W$ is not directly determined by the number of observations $d$. The computational complexity and stability of matrix operations (e.g., determinants, matrix products, solution of linear systems) arising in the likelihood calculations for the above Bayesian hierarchical model is therefore mainly determined by the tractability of the precision matrix $Q(\bm\theta)$, whose dimension can be controlled independently from the number of observations. Such matrix operations can be implemented very efficiently if precision matrices are sparse \citep{Rue.Held.2005}. If data are replicated with dependence between replications, such as spatial data observed at regular time steps in spatio-temporal modeling, the sparsity property can be preserved in the precision matrix of the latent space-time process $\bm W$. In this work, we will make assumptions related to the separability of space and time, which allows us to generate sparse space-time precision matrices by combining sparse precision matrices of a purely spatial and a purely temporal process \edit{using the Kronecker product of the two matrices}; see Sections~\ref{subsec:spacetime} and~\ref{sec:implementation} for further details.
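As a small numerical illustration of this sparsity argument (a toy sketch using simple AR(1)-type tridiagonal precision matrices in place of the SPDE-based spatial precision), the Kronecker product of two sparse precision matrices yields a separable space-time precision whose number of non-zero entries is the product of those of the two factors:

```python
import numpy as np
from scipy import sparse

def ar1_precision(n, rho, tau=1.0):
    """Sparse tridiagonal precision of a stationary AR(1) process
    (interior diagonal tau(1 + rho^2), endpoints tau, off-diagonal -tau*rho)."""
    main = tau * (1.0 + rho**2) * np.ones(n)
    main[0] = main[-1] = tau
    off = -tau * rho * np.ones(n - 1)
    return sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

Q_time = ar1_precision(50, rho=0.7)    # purely temporal precision
Q_space = ar1_precision(200, rho=0.4)  # stand-in for a sparse spatial precision
# separable space-time precision: the Kronecker product preserves sparsity
Q_st = sparse.kron(Q_time, Q_space, format="csc")
```

Here a $10000\times10000$ precision matrix is stored with fewer than $10^5$ non-zero entries, which is what keeps the matrix operations in the likelihood computations tractable.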
\subsection{\edit{The latent variable approach for conditional extremes model inference}}\label{sec:spatialModels}
\edit{Here, we explain how the latent variable approach can be used within the conditional extremes framework to reduce the dimension of the residual process $\{Z^0(s)\}$ for inferential purposes. We begin by presenting the conditional extremes model with parametric forms for the normalizing functions $a_{s-s_0}$ and $b_{s-s_0}$, following the approach of \cite{Wadsworth.Tawn.2019}. In Section~\ref{sec:proposed.a}, we propose a more flexible semi-parametric form for $a_{s-s_0}$ that further exploits the latent variable framework.}
\edit{Consider the conditional extremes model presented in~\eqref{eqn:modelingAssumption}. Fixing $a_{s-s_0}(x)=x$ and $b_{s-s_0}(x)=1$ enforces asymptotic dependence, but setting $a_{s-s_0}(x) = x\alpha(s-s_0)$ and allowing the form of $\alpha(s-s_0)$ to depend on the distance from the conditioning location is the key aspect that enables modeling of asymptotic independence as well. To capture asymptotic independence, \cite{Wadsworth.Tawn.2019} propose a parametric form for $\alpha(\cdot)$, defining
\begin{align}
\alpha(s-s_0) = \exp\left\{-\left(\|s-s_0\|/\lambda\right)^\kappa\right\},\qquad \lambda>0, \qquad 0\leq\kappa\leq 2.
\label{eqn:parametric.a}
\end{align}
The resulting function $a_{s-s_0}$ satisfies the constraint that $a_0(x)=x$ and has $a_{s-s_0}(x)$ decreasing as the distance to $s_0$ increases. \cite{Wadsworth.Tawn.2019} propose three different parametric forms for the normalizing function $b_{s-s_0}$, each with different modeling aims. We focus on the option of $b_{s-s_0}(x)=x^\beta$, with $\beta\in[0,1)$, throughout the rest of the paper, including the simple special case where we fix $\beta=0$.}
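As a quick numerical check of~\eqref{eqn:parametric.a} (a sketch, not taken from the paper's code), the function below evaluates $\alpha$ and confirms that $\alpha(0)=1$, so that $a_0(x)=x$, and that $\alpha$ decays monotonically with distance:

```python
import numpy as np

def alpha_param(h, lam, kappa):
    """alpha(h) = exp{-(h/lambda)^kappa} with h = ||s - s0||,
    lambda > 0 and 0 <= kappa <= 2 (Wadsworth & Tawn, 2019)."""
    return np.exp(-(np.asarray(h, dtype=float) / lam) ** kappa)

h = np.linspace(0.0, 5.0, 6)          # distances from the conditioning site
a = alpha_param(h, lam=2.0, kappa=1.5)  # illustrative parameter values
```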
\edit{The existing conditional extremes approach must be simplified and slightly modified to allow for inference using the latent variable framework. First, in Section~\ref{subsec:CSEintro}, we discussed that \cite{Wadsworth.Tawn.2019} consider marginal transformations of the residual process to increase the flexibility of their approach. A special case of this is to simply restrict $\{Z^0(s)\}$ to have Gaussian margins. This is the approach we take in order to adopt the framework described in Section~\ref{sec:generalities} and facilitate computation. Moreover, as highlighted in equation~\eqref{eqn:latentV}, the use of a latent process of dimension $m<d$ requires the introduction of a variance term $\sigma^2>0$ common to each of the observation locations. Taking this into consideration, our conditional extremes model now has the form
\begin{align}
X(s_i)\mid [X(s_0)=x] = x\alpha(s_i-s_0) + x^\beta Z^0(s_i) + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0,\sigma^2)~~ \mbox{i.i.d.},
\label{eqn:latentConditionalForm}
\end{align}
for $i=1,\dots,d$, where we assume $\epsilon_0 = 0$ at $s_0$. A key point here is that the Gaussian noise does not represent a model feature to capture measurement error or add extra roughness to the process; it is simply included for computational feasibility.}
\edit{The latent variable approach described in Section~\ref{sec:generalities} can be applied to the residual process in~\eqref{eqn:latentConditionalForm}, providing us with a ``low-rank'' representation of $\{Z^0(s)\}$. The constraint that $Z^0(s_0)=0$ can be enforced by manipulating the observation matrix $A$; further detail on this is provided in Section~\ref{subsec:residualConstraint}. In this case, assuming the parametric form~\eqref{eqn:parametric.a} for $\alpha(s-s_0)$ requires estimation of the parameters $(\lambda,\kappa)$, in addition to the parameter $\beta$ of the $b_{s-s_0}$ function. Under the latent variable framework, these are included as part of the hyperparameter vector $\bm \theta$. The dimension of $\bm{\theta}$ must remain moderate (say, at most $10$ to $20$ components), since INLA requires numerical integration with integrand functions defined on the hyperparameter space. In the implementation of INLA using \texttt{R-INLA}, estimation of the parameters $(\lambda,\kappa,\beta)$ requires the use of specific user-defined (``\texttt{generic}'') models}, which we describe in Section~\ref{sec:implementation}. We emphasize that the use of \texttt{generic} \texttt{R-INLA} models allows for the implementation of other relevant parametric forms for the functions $a_{s-s_0}$ and $b_{s-s_0}$, if the above choices do not provide a satisfactory fit.
\edit{For the function $\alpha$, an alternative to parametric forms is to adopt a semi-parametric approach by constructing $x \alpha(\cdot)$ as an additive contribution to the linear predictor with multivariate Gaussian prior distribution. However, the function $b_{s-s_0}(x)$ must have a parametric form with a small number of parameters included in the hyperparameter vector $\bm\theta$, because in the INLA framework it is not possible to represent both $b_{s-s_0}(x)$ and $\{Z^0(s)\}$ as latent Gaussian components, given the restriction of using a linear transformation from $\bm W$ to $\bm \eta$. This is achieved by our choice to set $b_{s-s_0}(x)=x^\beta$.} \edit{Some frequentist estimation approaches for generalized additive models implement Laplace approximation techniques where semiparametric forms of the variance of the Gaussian likelihood are possible \citep{Wood2011}. However, this approach is currently not available within \texttt{R-INLA} and may come at the price of less stable estimation due to identifiability problems and less accurate Laplace approximations, and estimation can become particularly cumbersome with large datasets, such that we do not pursue it here.}
\subsection{\edit{Semi-parametric modeling of $a_{s-s_0}$}}\label{sec:proposed.a}
\edit{Continuing to adopt the form $a_{s-s_0}(x) = \alpha(s-s_0)x$, we now consider semi-parametric modeling of $\alpha$. In subsequent sections} we focus on this solution for its novelty, increased flexibility and computational convenience. \edit{Semi-parametric forms can be implemented by using a B-spline function for $\alpha(s-s_0)$, which forms an additive component of the linear predictor $\bm \eta$. This is computationally convenient since INLA can handle a large number of latent Gaussian variables in $\bm W$ when calculating accurate deterministic approximations to posterior distributions, via the Laplace approximation. We constrain this function to have $\alpha(0)=1$, ensuring that $a_{0}(x)=x$.}
Extending the models for conditional spatial and spatio-temporal extremes developed by \citet{Wadsworth.Tawn.2019} and \citet{Simpson.Wadsworth.2020}, we can further increase the flexibility of the conditional mean model by explicitly including a second spline function, denoted $\gamma(s-s_0)$ and with $\gamma(0)=0$, that is not multiplied by the value of the process at the conditioning site. \edit{To clarify, this implies that we have $a_{s-s_0}(x)=\alpha(s-s_0)x+\gamma(s-s_0)$, with $\gamma(s-s_0)$ also incorporated as a component of the linear predictor $\bm \eta$.} An example where such a deterministic component arises is given by the conditional extremes model corresponding to the Brown--Resnick type max-stable processes \citep{Kabluchko.Schlather.deHaan.2009} with log-Gaussian spectral function \citep[see Proposition 4 of][]{Dombry.al.2016}, which are widely used in statistical approaches based on the asymptotically dependent limit models mentioned in Section~\ref{sec:intro}; in this case, we obtain
\[
X(s)\mid [X(s_0)=x] \stackrel{d}{=} x + Z(s)-Z(s_0)-\text{Var}(Z(s)-Z(s_0))/2,
\]
with a centered Gaussian process $\{Z(s)\}$. \edit{Therefore, by setting $\alpha(s-s_0)=1$}, in this model the $\gamma$-term corresponds to the semi-variogram $\text{Var}(Z(s)-Z(s_0))/2$. We note that for the Brown--Resnick process, $\gamma$ should indeed correspond to a valid semi-variogram, although we will not constrain it as such in our implementation \edit{to allow for greater flexibility. However, we underline that in the INLA framework there is no impediment to using parametric forms of $\gamma$ with parameters included in the hyperparameter vector $\bm{\theta}$.}
\subsection{Proposed models}\label{subsec:modelSummary}
\edit{To summarize, in the implementation of the conditional spatial extremes modeling assumption~\eqref{eqn:latentConditionalForm} using \texttt{R-INLA}, we propose to explore several options for the form of the model:} setting $\alpha(s-s_0)=1$ everywhere or using a spline function; whether or not to include the second spline term $\gamma(s-s_0)$; and whether or not to include the parameter $\beta$. Together, this means that all models can be written as special cases of the representation
\begin{align}
X(s_i)\mid [X(s_0)=x] = \alpha(s_i-s_0) x + \gamma(s_i-s_0) + x^\beta Z^0(s_i) + \epsilon_i, \qquad x>u, \qquad i=1,\ldots,d, \label{eq:modelgeneral}
\end{align}
where we suppose that $\{Z^0(s)\}$ has a Gaussian structure, further described in Sections~\ref{subsec:SPDE} and~\ref{subsec:residualConstraint}, and $\epsilon_i \sim \mathcal{N}(0,\sigma^2)$ i.i.d. This opens up the framework of conditional Gaussian models and the potential for efficient inference via INLA, while closely following the conditional extremes formulation. In particular, the joint distribution of $\{X(s_i): i=1,\ldots, d\}$, not conditional on the value of $X(s_0)$, is non-Gaussian.
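To make the generative structure of~\eqref{eq:modelgeneral} concrete, the following sketch simulates one replicate on a one-dimensional transect. It uses an exponential-covariance Gaussian field conditioned to vanish at $s_0$ as a toy stand-in for the SPDE-based residual $\{Z^0(s)\}$, and the specific functional forms chosen for $\alpha$ and $\gamma$ are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2024)

def simulate_conditional_field(locs, s0_idx, x, alpha, gamma, beta=0.0,
                               sigma_eps=0.05, range_=1.0, sd=1.0):
    """Simulate one replicate of
    X(s_i) | X(s0)=x = alpha(h_i)*x + gamma(h_i) + x**beta * Z0(s_i) + eps_i,
    where Z0 is a Gaussian field with exponential covariance, conditioned
    to equal 0 at the conditioning site (toy stand-in for the SPDE field)."""
    d = len(locs)
    h = np.abs(locs - locs[s0_idx])
    Sigma = sd**2 * np.exp(-np.abs(locs[:, None] - locs[None, :]) / range_)
    # condition on Z0(s0) = 0 via the Schur complement of the s0 entry
    Sigma_c = Sigma - np.outer(Sigma[:, s0_idx],
                               Sigma[s0_idx, :]) / Sigma[s0_idx, s0_idx]
    L = np.linalg.cholesky(Sigma_c + 1e-8 * np.eye(d))  # jitter for stability
    Z0 = L @ rng.standard_normal(d)
    eps = sigma_eps * rng.standard_normal(d)
    eps[s0_idx] = 0.0  # epsilon_0 = 0 at the conditioning site
    return alpha(h) * x + gamma(h) + x**beta * Z0 + eps

locs = np.linspace(0.0, 4.0, 21)
X = simulate_conditional_field(
    locs, s0_idx=0, x=3.0,
    alpha=lambda h: np.exp(-(h / 2.0) ** 1.5),  # parametric alpha, illustrative
    gamma=lambda h: -0.1 * h,                   # illustrative gamma with gamma(0)=0
    beta=0.0)
```

By construction, the simulated value at the conditioning site recovers the conditioning value $x$, since $\alpha(0)=1$, $\gamma(0)=0$, $Z^0(s_0)=0$ and $\epsilon_0=0$.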
\edit{Finally, we give an illustration, linking model~\eqref{eq:modelgeneral} to the general notation and principles outlined in Section~\ref{sec:generalities}.} Our observation vector $\bm{V}$ is the process $\{X(s)\}$ observed at $d$ locations: $\bm{X}=(X(s_1),\ldots,X(s_d))^\top$. The latent Gaussian component $\bm{W}$ consists of components for $\alpha,\gamma$ and $\{Z^0(s)\}$: $\bm{W} = (\bm{W}_\alpha^\top, \bm{W}_\gamma^\top, \bm{W}_Z^\top)^\top \in \mathbb{R}^{m_{\alpha}} \times \mathbb{R}^{m_\gamma} \times \mathbb{R}^{m_Z}$, with $m_\alpha+m_\gamma+m_Z = m$. The observation matrix $A \in \mathbb{R}^{d \times m}$ is the concatenation of matrices for each component: $A_\alpha \in \mathbb{R}^{d \times m_\alpha}$, $A_\gamma \in \mathbb{R}^{d \times m_\gamma}$, and $A_S \in \mathbb{R}^{d \times m_Z}$. We include the $x^\beta$-term into the process $\bm{W}_Z$ if we want to estimate the parameter $\beta$, such that it does not appear in the fixed observation matrix $A_S$; if $\beta$ is fixed, we could instead include the $x^\beta$-term into $A_S$.
\edit{We emphasize that the model is applied to replicates of the observed process $\bm{X}$ and that while the $\alpha$, $\gamma$ and $\beta$ components are fixed, the residual process and error term generally vary across replicates. Altogether, for the $j$th replicate $\bm{X}_j$, we get
\[
\bm{X}_j|[X_j(s_0)=x_j] = (x_j A_\alpha,A_\gamma,A_{S})(\bm{W}_\alpha^\top, \bm{W}_\gamma^\top, \bm{W}_{Z,j}^\top)^\top + \bm{\epsilon}_j,
\]
with i.i.d.\ Gaussian components in $\bm{\epsilon}_j = (\epsilon_{1,j},\ldots,\epsilon_{d,j})^\top$, and where the $j$-subscripts highlight the components that vary with replicate.} \edit{To implement model~\eqref{eq:modelgeneral} in an efficient manner for a large number of observation locations, we need to carefully consider computations related to the residual process $\{Z^0(s)\}$; this is explained in detail in Section~\ref{subsec:residualConstraint}.}
\section{\edit{Inference for conditional spatial extremes}}\label{sec:spatialInference}
\subsection{\edit{Overview}}
\edit{ In Section~\ref{sec:spatialApplication}, we apply variants of model~\eqref{eq:modelgeneral} to the Red Sea surface temperature data, with the different model forms summarized in Table~\ref{tab:spatialModels}. In this section, we discuss certain considerations necessary to carry out inference and techniques to compare the candidate models. In Section~\ref{subsec:marginalTransform}, we begin with a discussion of the transformation to exponential-tailed marginal distributions that are required for conditional extremes modeling. We discuss construction of the observation matrix $A$ and choices of prior distributions for the hyperparameters in Section~\ref{sec:hyperprior}. In Sections~\ref{subsec:spatialModelSelection}~and~\ref{subsec:loocv}, we present two approaches for model selection and validation, both of which are conveniently implemented in the \texttt{R-INLA} package and therefore straightforward to apply in our setting.}
\subsection{\edit{Marginal transformation}}\label{subsec:marginalTransform}
To ensure the marginal distributions of the data have the required exponential upper tails, we suggest transforming to Laplace scale, as proposed by \cite{Keef.al.2013}. This is achieved using a semiparametric transformation. Let $Y$ denote the surface temperature observations at a single location. We assume these observations follow a generalized Pareto distribution above some high threshold $v$ to be selected, and use an empirical distribution function below $v$, denoted by $\tilde{F}(\cdot)$. That is, we assume the distribution function
\begin{align}
F(y)=
\begin{cases}
1 - \lambda_v\left\{1+\frac{\xi(y-v)}{\sigma_v}\right\}^{-1/\xi}_+, &y\geq v\\
\tilde{F}(y), &y<v,
\end{cases}
\label{eqn:marginalDist}
\end{align}
for $\lambda_v=1-F(v)$, $\sigma_v>0$ and $y_+=\max(y,0)$. Having fitted this model, we obtain standard Laplace margins via the transformation
\[
X=
\begin{cases}
\log\left\{2F(Y)\right\}, &F(Y)\leq1/2,\\
-\log\left[2\{1-F(Y)\}\right], &F(Y)>1/2.
\end{cases}
\]
\edit{This transformation should be applied separately for each spatial location, and we estimate the parameters of the generalized Pareto distributions using the \texttt{ismev} package in \texttt{R} \citep{ismev}.} It is possible to include covariate information in the marginal parameters and impose spatial smoothness on these, for instance by using flexible generalized additive models (see \citet{CastroCamilo2020} for details), but we do not take this approach.
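The transformation for a single location's series can be sketched as follows (an illustrative implementation that assumes the generalized Pareto parameters $(\sigma_v,\xi)$ have already been estimated, and takes $\lambda_v$ as the empirical exceedance proportion):

```python
import numpy as np

def to_laplace(y, v, sigma_v, xi):
    """Transform one location's observations to standard Laplace margins:
    empirical cdf below the threshold v, generalized Pareto tail above it."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    F = (np.argsort(np.argsort(y)) + 1) / (n + 1.0)   # empirical cdf (tilde F)
    lam_v = np.mean(y > v)                            # empirical 1 - F(v)
    above = y > v
    F[above] = 1.0 - lam_v * np.maximum(
        1.0 + xi * (y[above] - v) / sigma_v, 0.0) ** (-1.0 / xi)
    # Laplace quantile transform of the fitted distribution function
    return np.where(F <= 0.5, np.log(2.0 * F), -np.log(2.0 * (1.0 - F)))

rng = np.random.default_rng(7)
y = rng.gumbel(size=2000)                 # toy data for one location
v = np.quantile(y, 0.95)                  # threshold at the 0.95 quantile
x = to_laplace(y, v, sigma_v=1.0, xi=0.05)  # assumed GPD parameter values
```

The transformation is strictly increasing, so the ordering of the observations is preserved, and the resulting margins have exponential upper and lower tails.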
\subsection{\edit{Spatial discretization and prior distributions for hyperparameters}}\label{sec:hyperprior}
\edit{We now discuss the distribution of the latent processes $\bm{W}_\alpha, \bm{W}_\gamma$ and $\bm{W}_Z$, as defined in Section~\ref{subsec:modelSummary}. Gauss--Markov distributions for these components, with approximate Mat\'ern covariance, are achieved through the \edit{stochastic partial differential equation (SPDE)} approach of \citet{Lindgren.al.2011}. The locations of the components of the multivariate Gaussian vector $\bm{W}_Z$ defining the latent spatial process are placed at the nodes of a triangulation covering the study area. To generate this spatial discretization of the latent Gaussian process, and the observation matrix $A$ to link it to observations,} \edit{we use a mesh. An example of this will be discussed for our Red Sea application in Section~\ref{subsec:RedSeaMeshPriors} and demonstrated pictorially in Figure~\ref{fig:largeSpatialMesh}.} \edit{Full technical details of the construction of spatial and spatio-temporal precision matrices $Q$ for each component, and observation matrices $A$, are provided in Section~\ref{sec:computation}.}
\edit{For the one-dimensional spline functions we propose for modeling $\alpha(s-s_0)$ and $\gamma(s-s_0)$, we suggest choosing knots that are equally spaced and relate to the distance from the conditioning site, $s-s_0$. One knot should be placed at a distance of $s-s_0=0$ to allow us to enforce the constraints that $\alpha(0)=1$ and $\gamma(0)=0$. We use quadratic spline functions for both $\alpha(s-s_0)$ and $\gamma(s-s_0)$, which we have found to provide more flexibility than their linear counterparts. For the SPDE priors corresponding to these spline components, we suggest fixing the range and standard deviation, since we consider that estimating these parameters is not crucial. This avoids the very high computational cost that can arise when we estimate too many hyperparameters with \texttt{R-INLA}.}
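As a simplified illustration of how the constraint $\alpha(0)=1$ can be built into a quadratic spline parameterization (using a truncated-power basis as an easy stand-in for the B-spline components used in our actual implementation), one can model $\alpha(h)-1$ with basis functions that all vanish at $h=0$:

```python
import numpy as np

def quad_spline_basis(h, knots):
    """Truncated-power basis for a quadratic spline in distance h, with no
    intercept column, so that every basis function equals zero at h = 0."""
    h = np.asarray(h, dtype=float)[:, None]
    k = np.asarray(knots, dtype=float)[None, :]
    return np.hstack([h, h**2, np.maximum(h - k, 0.0) ** 2])

h = np.linspace(0.0, 4.0, 9)                       # distances from s0
B = quad_spline_basis(h, knots=[1.0, 2.0, 3.0])    # equally spaced knots
coef = np.array([-0.4, 0.05, -0.02, 0.01, 0.01])   # illustrative coefficients
alpha = 1.0 + B @ coef                             # alpha(0) = 1 by construction
```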
For specifying the prior distributions of the hyperparameters (e.g., variances, spatial ranges, autocorrelation coefficients \edit{for the space-time extension}) we use the concept of penalized complexity (PC) priors \citep{Simpson.al.2017}, which has become the standard approach in the INLA framework. PC priors control model complexity by shrinking model components towards a simpler baseline model, using a constant rate penalty expressed through the Kullback-Leibler divergence of the more complex model with respect to the baseline. In practice, only the rate parameter has to be chosen by the modeler, and it can be determined indirectly---but in a unique and intuitive way---by setting a threshold value $r$ and a probability $p\in (0,1)$ such that $\mathrm{Pr}(\text{hyperparameter}>r)=p$, with $>$ replaced by $<$ in some cases, depending on the role of the parameter. For example, the standard baseline model for a variance parameter of a latent and centered Gaussian prior component $\bm W$ contributing to the linear predictor $\bm \eta$ is a variance of $0$, which corresponds to the absence of this component from the model, and the PC prior of the standard deviation corresponds to an exponential distribution. Analogously, we can fix the PC prior for the variance parameter $\sigma^2$ of the observation errors $\epsilon_i$ in \eqref{eq:modelgeneral}. \edit{If the data-generating process is smooth then $\sigma^2$ could instead be fixed to a very small value, but for reasons of stabilizing the estimation procedure, we prefer to estimate its value from the data.} For the Mat\'ern covariance function, PC priors are defined jointly for the range and the variance, with the baseline given by infinite range and variance $0$; in particular, the inverse of the range parameter has a PC prior given by an exponential distribution, see \citet{Fuglstad.al.2019} for details. 
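The calibration $\mathrm{Pr}(\text{hyperparameter}>r)=p$ translates directly into the rate of the exponential PC prior for a standard deviation (a minimal numerical sketch; the parameter values are illustrative):

```python
import math

def pc_rate_sd(r, p):
    """Rate of the exponential PC prior on a standard deviation,
    calibrated so that Pr(sd > r) = p (baseline model: sd = 0)."""
    return -math.log(p) / r

rate = pc_rate_sd(r=1.0, p=0.01)
tail_prob = math.exp(-rate * 1.0)  # survival function of Exp(rate) at r
```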
As explained by \citet{Simpson.al.2017}, PC priors are not useful to ``regularize" models, i.e., to select a moderate number of model components among a very large number of possible model components. Rather, they are used to control the complexity of a moderate number of well-chosen components that always remain present in the posterior model, and they do not put any positive prior mass on the baseline model.
\subsection{Model selection using WAIC}\label{subsec:spatialModelSelection}
Conveniently, implementation in \texttt{R-INLA} allows for automatic estimation of certain information criteria that can be used for model selection. Two such criteria are the deviance information criterion (DIC), and the widely applicable or Watanabe-Akaike information criterion (WAIC) of \cite{Watanabe2013}. We favour the latter since it captures posterior uncertainty more fully than the DIC. This, and other, benefits of the WAIC over the DIC are discussed by \cite{Vehtari.etal.2017}, where an explanation of how to estimate the WAIC is also provided. Using our general notation for latent variable models, as in Section~\ref{sec:generalities}, suppose that the posterior distribution of the vector of model parameters $\bm{\tilde{\theta}}=(\bm \theta^\top, \bm W^\top)^\top$ is represented by a sample $\tilde{\bm\theta}^s$, $s=1,\dots,S$, with the corresponding sample variance operator given by $\mathbb{V}^S_{s=1}(\cdot)$. Given the observations $v_i$, $i=1,\ldots,d$, the WAIC is then estimated as
\[
\sum_{i=1}^d\log\left\{\frac{1}{S}\sum_{s=1}^S\pi(v_i\mid \tilde{\bm\theta}^s)\right\} - \sum_{i=1}^d \mathbb{V}_{s=1}^S\left\{\log \pi(v_i\mid \tilde{\bm\theta}^s)\right\},
\]
with the first term providing an estimate of the log predictive density, and the second an estimate of the effective number of parameters. Within \texttt{R-INLA}, we do not generate a representative sample, but the sample means and variances with respect to $\tilde{\bm\theta}^s$ in the above equation are estimated based on \texttt{R-INLA}'s internal representation of posterior distributions; see also the estimation technique for the DIC explained in \citet[][Section~6.4]{Rue.al.2009}. \edit{Smaller values of the WAIC indicate more successful model fits, and we will use this criterion to inform our choice of model for the Red Sea data in Section~\ref{subsec:spatialModels}.}
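The estimator above maps directly to code. The sketch below is illustrative only (it uses a toy Gaussian likelihood with simulated posterior draws, whereas \texttt{R-INLA} works from its internal posterior representation), and is stated on the log-predictive scale of the displayed formula; deviance-scale definitions multiply this quantity by $-2$, so that smaller values indicate better fits.

```python
import numpy as np

def waic_log_scale(loglik):
    """Sample-based WAIC on the log-predictive scale: the estimated log
    predictive density minus the effective number of parameters, computed
    from an (S x d) matrix of values log pi(v_i | theta^s)."""
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
    p_eff = np.sum(np.var(loglik, axis=0, ddof=1))
    return lppd - p_eff

rng = np.random.default_rng(0)
S, d = 400, 50
v = rng.normal(size=d)                      # toy observations
mu = rng.normal(scale=0.1, size=S)          # toy posterior draws of a mean
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (v[None, :] - mu[:, None]) ** 2
w = waic_log_scale(loglik)
```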
\subsection{\edit{Cross validation procedures}}\label{subsec:loocv}
As mentioned previously, the main aim of fitting conditional extremes models is usually to extrapolate to extreme levels that have not been previously observed. However, the INLA framework also lends itself to the task of interpolation, e.g., making predictions for unobserved locations. Although interpolation is not our aim, here we discuss some procedures that allow for the assessment of models in this setting. For model selection, we can use cross-validated predictive measures, based on leave-one-out cross-validation (LOO-CV). These are relatively quick to estimate with INLA without the need to re-estimate the full model; see \citet[][Section~6.3]{Rue.al.2009}. Here, one possible summary measure is the \emph{conditional predictive ordinate} (CPO), corresponding to the predictive density of observation $v_i$ given all the other observations $\bm v_{-i}$, i.e.,
$$
\text{CPO}_i = \pi(v_i\mid \bm v_{-i}),
$$
for $i=1,\dots,d$. Log-transformed values of CPO define the log-scores often used in the context of prediction and forecasting \citep{Gneiting2007}. A model with higher CPO values usually indicates better predictions. We note that the CPO is not usually used for extreme value models, \edit{where interpolation is often not considered the main goal.} \edit{It will not be particularly informative in our application} since the loss of information from holding out a single observation is negligible in the case of densely observed processes with very smooth surfaces. However, we include it as it may be useful for other applications where spatial and temporal interpolation are important, \edit{for example when data are observed at irregularly scattered meteorological stations,} and due to the simplicity of its calculation in \texttt{R-INLA}.
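When a posterior sample is available, the CPO admits a simple Monte Carlo estimate via the harmonic-mean identity $\mathrm{CPO}_i^{-1} = \mathbb{E}\{1/\pi(v_i\mid \tilde{\bm\theta})\mid \bm v\}$, which holds under conditional independence of the observations given the latent model. The sketch below (again using a toy Gaussian likelihood; \texttt{R-INLA} instead relies on its internal posterior representation) implements this estimator:

```python
import numpy as np

def cpo_harmonic(loglik):
    """Harmonic-mean Monte Carlo estimate of CPO_i = pi(v_i | v_{-i}),
    using 1/CPO_i = E_posterior[1 / pi(v_i | theta)], from an (S x d)
    matrix of pointwise log-likelihood values."""
    return 1.0 / np.mean(np.exp(-loglik), axis=0)

rng = np.random.default_rng(1)
S, d = 300, 40
v = rng.normal(size=d)                      # toy observations
mu = rng.normal(scale=0.1, size=S)          # toy posterior draws of a mean
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (v[None, :] - mu[:, None]) ** 2
cpo = cpo_harmonic(loglik)
```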
One can also consider the \emph{probability integral transform} (PIT) corresponding to the distribution function of the predictive density, evaluated at the observed value $v_i$, i.e.,
$$
\text{PIT}_i = \int_{-\infty}^{v_i}\pi(v\mid \bm v_{-i})\mathrm{d}v.
$$
If the predictive model for single hold-out observations appropriately captures the variability in the observed data, then the PIT values will be close to uniform. \edit{If in a histogram of PIT values the mass concentrates strongly towards the boundaries, then predictive credible intervals (CIs) will be too narrow; by contrast, if mass concentrates in the middle of the histogram, then predictive CIs will be too large.} \edit{We refer the reader to \citet{Czado2009predictive} for more background on PITs. We discuss such cross validation procedures in Section~\ref{subsec:spatialModels}.}
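A quick simulation illustrates this calibration check (an assumed Monte Carlo approximation of the integral above, using a deliberately well-calibrated toy predictive distribution, so that the resulting PIT values are approximately uniform):

```python
import numpy as np

rng = np.random.default_rng(3)

def pit_values(obs, pred_samples):
    """Monte Carlo PIT: the proportion of predictive draws that fall at
    or below each held-out observation (one column per observation)."""
    return np.mean(pred_samples <= obs[None, :], axis=0)

d, S = 500, 1000
obs = rng.normal(size=d)
pred = rng.normal(size=(S, d))   # predictive draws from the true model
pit = pit_values(obs, pred)      # should look approximately uniform
```

A histogram of `pit` concentrating near 0 and 1 would instead signal predictive credible intervals that are too narrow.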
\section{Application to modeling Red Sea surface temperature extremes}\label{sec:spatialApplication}
\subsection{Overview}
\edit{In this section, we propose specific model structures for the general model \eqref{eq:modelgeneral} and illustrate an application of our approach using a dataset of Red Sea surface temperatures, the spatio-temporal extremes of which have also been studied by \cite{Hazra.Huser.2020} and \cite{Simpson.Wadsworth.2020}, for instance. We focus on the purely spatial case here, and consider further spatio-temporal modeling extensions for this dataset in Section~\ref{sec:spacetimeInference}. We use the methods discussed in Sections~\ref{subsec:spatialModelSelection}~and~\ref{subsec:loocv} to assess the relative suitability of the proposed models for the Red Sea data.} For the best-fitting model, we present additional results, and conclude with a discussion of consequences of using a single, fixed conditioning site. Throughout this section, the threshold $u$ in conditional model~\eqref{eq:modelgeneral} is taken to be the 0.95 quantile of the transformed data, following \cite{Simpson.Wadsworth.2020}. For the sake of brevity, we do not compare results for different thresholds in the following.
\subsection{Red Sea surface temperature data}\label{subsec:redsea}
The surface temperature dataset comprises daily observations for the years 1985 to 2015 for 16703 locations across the Red Sea, corresponding to $0.05\degree\times0.05\degree$ grid cells. We focus only on the months of July to September to approximately eliminate the effects of seasonality. More information on the data, which were obtained from a combination of satellite and in situ observations, can be found in \cite{Donlon.etal.2012}. Extreme events in this dataset are of interest, since particularly high water temperatures can be detrimental to marine life, e.g., causing coral bleaching, and in some cases coral mortality.
\cite{Simpson.Wadsworth.2020} apply their conditional spatio-temporal extremes model to a subset of 54 grid cells located across the north of the Red Sea. In this paper, we instead focus on a southern portion of the Red Sea, where coral bleaching is currently more of a concern \citep{Fine.etal.2019}. We demonstrate our approach using datasets of two different spatial dimensions; the first dataset contains 6239 grid cells, corresponding to all available locations in our region of interest, while the second dataset is obtained by taking locations at every third longitude and latitude value in this region, leaving 678 grid cells to consider. These two sets of spatial locations are shown in Figure~\ref{fig:spatialLocations}. \cite{Simpson.Wadsworth.2020} consider their 54 spatial locations at five time-points, resulting in a lower dimensional problem than both the datasets we consider here. On the other hand, \cite{Hazra.Huser.2020} model the full set of 16703 grid cells, but they ensure computational feasibility by implementing so-called `low-rank' modeling techniques using spatial basis functions given by the dominant empirical orthogonal functions, obtained from preliminary empirical estimation of the covariance matrix of the data.
\edit{There are two transformations that we apply to these data as a preliminary step.} First, as our study region lies away from the equator, one degree in latitude and one degree in longitude correspond to different distances. To correct for this we apply a transformation, multiplying the longitude and latitude values by 1.04 and 1.11, respectively, such that spatial coordinates are expressed in units of approximately $100$ km. \edit{Our resulting spatial domain measures approximately $400$ km between the east and west coasts and $500$ km from north to south.} We use the Euclidean distance on these transformed coordinates to measure spatial distances in the remainder of the paper. \edit{We also transform the margins to Laplace scale using the approach outlined in Section~\ref{subsec:marginalTransform}. We take $v$ in~\eqref{eqn:marginalDist} to be the empirical 0.95 quantile of $Y$, here representing the observed sea surface temperature at an individual location, so that $\lambda_v=0.05$.} \edit{Any temporal trend in the marginal distributions could also be accounted for at this stage, e.g., using the approach of \cite{Eastoe.Tawn.2009}, but we found no clear evidence that this was necessary in our case.}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{spatial-locations.pdf}
\caption{Location of the Red Sea (grey), and the subsets of grid cells in the two datasets we consider (black).}
\label{fig:spatialLocations}
\end{figure}
\edit{In the remainder of this section, we apply a variety of conditional spatial extremes models to the Red Sea data with the large spatial dataset in Figure~\ref{fig:spatialLocations}, and apply several model selection and diagnostic techniques. For the conditioning site, we select a location lying towards the centre of the spatial domain of interest.} We discuss this choice further in Section~\ref{sec:sensitivity}, where we present additional results based on the moderate dataset. In Appendix~\ref{app:chi}, we provide an initial assessment of the spatial extremal dependence properties of the sea surface temperature data, based on the tail correlation function defined in~\eqref{eq:chi}. These results demonstrate that there is weakening dependence in the data at increasingly extreme levels, \edit{which provides an initial indication} that models exhibiting asymptotic independence should be more appropriate here.
\subsection{\edit{The mesh and prior distributions for the Red Sea data application}}\label{subsec:RedSeaMeshPriors}
\edit{As discussed in Section~\ref{sec:hyperprior}, in order to carry out our latent variable approach to model inference, we require a triangulation of the area of interest. The mesh we use for the spatial domain in the southern Red Sea is shown in Figure~\ref{fig:largeSpatialMesh}. This was generated using \texttt{R-INLA}, with the densest region corresponding to the area where we have observations. The spatial triangulation mesh has $541$ nodes, i.e., the dimension of the latent process is approximately 8.7\% of the size of the large dataset, and similar in size to the moderate dataset.} {In this case, the extension of the mesh beyond the study region is fairly wide, in order to avoid boundary effects of the SPDE for the sea surface temperatures, whose spatial dependence range is known to be relatively large; see \citet{Simpson.Wadsworth.2020}.} We use a coarser resolution in the extension region to keep the number of latent Gaussian variables as small as reasonably possible.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{largeMesh.pdf}
\caption{The large set of spatial locations (red dots) and the corresponding triangulation mesh used for the SPDE model with an inner and outer boundary (blue lines). The inner boundary delimits a high-resolution zone covering the study area, while the outer boundary delimits an extension zone with lower resolution to prevent effects of SPDE boundary conditions in the study area.}
\label{fig:largeSpatialMesh}
\end{figure}
Due to the availability of many observations in the Red Sea dataset, we found the hyperparameter priors to only have a small impact on posterior inference in our application, and that the credible intervals of the hyperparameters are very narrow. We have chosen moderately informative PC priors through the following specification:
$$
\mathrm{Pr}(\sigma > 0.1)=0.5, \quad \mathrm{Pr}(\sigma_{Z} > 1) = 0.5, \quad \mathrm{Pr}(\rho_{Z} > 100\ \mathrm{km})=0.5,
$$
where $\sigma_{Z}$ and $\rho_{Z}$ are the standard deviation and the empirical range, respectively, of the unconstrained spatial Mat\'ern fields $\{Z(s)\}$. \edit{Where the $\beta$-parameter is to be estimated as part of a specified model, we choose a log-normal prior where the normal has mean $-\log(2)$ and variance $1$. This does not guarantee estimates of $\beta<1$, but such a constraint could be included within the \texttt{generic}-model framework if required.} \edit{The Mat\'ern covariance function is also specified by a shape parameter $\nu$. We fix $\nu=0.5$, corresponding to an exponential correlation function. Sensitivity to $\nu$} can be checked by comparing fitted models across different values, as demonstrated in Appendix~\ref{app:zetaSensitivity}. We find this to have little effect on the results for our data.
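For scale parameters such as $\sigma$ and $\sigma_Z$, the PC prior is an exponential distribution on the standard deviation, so median-type specifications like those above translate directly into prior rates. The following is a generic illustration of that conversion; \texttt{R-INLA} performs it internally.

```python
import math

def pc_prior_rate(u, alpha):
    """Rate of the exponential PC prior on a standard deviation sigma,
    chosen so that Pr(sigma > u) = alpha."""
    return -math.log(alpha) / u

# median-type specifications from the text:
# Pr(sigma > 0.1) = 0.5 and Pr(sigma_Z > 1) = 0.5
lam_sigma = pc_prior_rate(0.1, 0.5)
lam_sigmaZ = pc_prior_rate(1.0, 0.5)
print(round(lam_sigma, 3), round(lam_sigmaZ, 3))  # log(2)/0.1 and log(2)
```

Larger rates correspond to stronger shrinkage of $\sigma$ towards zero, i.e., towards the simpler base model.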
\edit{As discussed in Section~\ref{sec:hyperprior}, for each spline function, we place one knot at the boundary where $s=s_0$ and use a further 14 equidistant interior knots.} This quantity provides a reasonable balance between the reduced flexibility that occurs when using too few knots, and the computational cost and numerical instability (owing to near singular precision matrices) that may arise with using too many. \edit{For these spline components, we have fixed the range to $100$~km and the standard deviation to $0.5$. If we wished to obtain very smooth posterior estimates of the spline function, we could choose parameters that lead to stronger constraints on the (prior) variability of the spline curve.} We will demonstrate the estimated spline functions for some of our models in Section~\ref{subsec:Model3}.
\subsection{Variants of the spatial model and model comparison}\label{subsec:spatialModels}
\edit{In Section~\ref{sec:proposed.a}, we discussed choices of the normalizing functions $a_{s-s_0}(x)$ and $b_{s-s_0}(x)$ that are possible under the INLA framework. In Table~\ref{tab:spatialModels}, we summarize the models we consider based on the structures outlined in equation~\eqref{eq:modelgeneral}. We fit models with these different forms and subsequently select the best model for our data. Model~0 has $\alpha(s-s_0)=1$, $\beta=0$, resulting in a very simple asymptotically dependent model.} Model~2 is also asymptotically dependent, but allows for weaker dependence than in Model~0 due to the drift captured by the $\gamma(s-s_0)$ term, which is expected to be negative in practice. In Model~6, the residual process has been removed, so that all variability is forced to be captured by the term $\epsilon_i$ in \eqref{eq:modelgeneral}. Models~0~and~6 are meant to act as simple baselines to which we can compare the other models, but we would not expect them to perform sufficiently well in practice. \edit{As mentioned in Section~\ref{subsec:CSEintro}, \cite{Wadsworth.Tawn.2019} propose two options for constructing the residual process $\{Z^0(s)\}$, each based on manipulation of a stationary Gaussian process $\{Z(s)\}$. For all models in Table~\ref{tab:spatialModels}, we focus on a residual process of the form $\{Z^0(s)\}=\{Z(s)\}-Z(s_0)$, further detail on which is given in Section~\ref{subsec:residualConstraint}.}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c c c|c|}
\hline
Model number & Model form & WAIC & CPO & RMSE & Run-time\\
\hline
0
& $x + \{Z^0(s)\}$ & 2438 & -0.0061 & 0.019 & 20\\
1
& $x\cdot\alpha(s-s_0) + \{Z^0(s)\}$ & 614 & -0.0028 & 0.001 & 22\\
2
& $x + \gamma(s-s_0) + \{Z^0(s)\}$ & 743 & -0.0035 & 0.005 & 35\\
3
& $x\cdot\alpha(s-s_0) + \gamma(s-s_0) + \{Z^0(s)\}$ & 4 & -0.0018 & 0 & 32\\
4
& $x\cdot\alpha(s-s_0) + \gamma(s-s_0) + x^\beta \cdot \{Z^0(s)\}$ & 0 & 0 & 0.003 & 107\\
5
& $x\cdot\alpha(s-s_0) + x^\beta \cdot \{Z^0(s)\}$ & 611 & -0.0004 & 0.010 & 86\\
6
& $x\cdot\alpha(s-s_0) + \gamma(s-s_0)$ & 4394961 & -2.8042 & 0.514 & 43\\
\hline
\end{tabular}
\caption{Summary of conditional spatial models, model selection criteria, and total run-times (minutes). The minimum WAIC value (-1460982 for Model~4); maximum CPO value (3.0305 for Model~4); and minimum RMSE value (0.862 for Model~3) have been subtracted from their respective columns. We estimate $\beta$ as 0.29 with a 95\% credible interval of $(0.27,0.31)$ (Model~4) and 0.33 $(0.31,0.34)$ (Model~5).}
\label{tab:spatialModels}
\end{table}
\edit{Alongside the models in Table~\ref{tab:spatialModels}, we provide the corresponding WAIC and CPO values, as discussed in Sections~\ref{subsec:spatialModelSelection}~and~\ref{subsec:loocv}, respectively. The computation times for each model are also included, as this information may also aid model selection where there is similar performance under the other criteria.}
\edit{Beginning with the WAIC, we first recall that smaller values of this criterion are preferred. Models~1~and~3 are simplified versions of Models~5~and~4, respectively, in that the value of $\beta$ is fixed to 0 rather than estimated directly in \texttt{R-INLA}. In both cases, the results are very similar whether $\beta$ is estimated or fixed, suggesting the simpler models with $\beta=0$ are still effective. The estimated WAIC values suggest Models~3~and~4 provide the best fit for our data. The structure common to these models is the inclusion of the terms $\alpha(s-s_0)x$ and $\gamma(s-s_0)$, indicating that both are important. In Model~4, the posterior mean estimate of $\beta$ is 0.29, but the simpler Model~3, with $\beta=0$ fixed, remains competitive.}
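For reference, the WAIC underlying these comparisons can be computed from a matrix of pointwise log-likelihoods evaluated at posterior samples. The sketch below, in Python with synthetic data, uses the $-2(\mathrm{lppd}-p_{\mathrm{WAIC}})$ convention under which smaller is better; it is a generic stand-in for the values reported by \texttt{R-INLA}.

```python
import numpy as np

def waic(loglik):
    """WAIC from an (S, n) array of pointwise log-likelihoods:
    S posterior samples, n observations. Smaller is better under
    the -2 * (lppd - p_waic) convention."""
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = np.sum(np.logaddexp.reduce(loglik, axis=0) - np.log(loglik.shape[0]))
    # effective number of parameters: sum of posterior variances of the log-likelihood
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# toy check: a well-specified Gaussian model with unit variance
rng = np.random.default_rng(0)
y = rng.normal(size=200)
mu_samples = rng.normal(0, 0.05, size=500)  # posterior draws of the mean
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_samples[:, None]) ** 2
w = waic(loglik)
print(w > 0)
```

In practice only differences in WAIC between models matter, which is why the table reports values relative to the minimum.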
\edit{On the other hand, the CPO results are relatively similar across Models 0~to~5, but all substantially better than those for Model~6, which we include purely for comparison. Model~6 performs poorly here since all spatially correlated residual variation has been removed. We provide a histogram of PIT values for Model~3 in Appendix~\ref{app:pit}, with equivalent plots for Models~0~to~5 being very similar. The histogram has a peak in the middle, suggesting that the posterior predictive densities for single observations are generally slightly too ``flat''; however, here the variability in the posterior predictive distributions is very small throughout. Therefore, slightly overestimating this already very small variability does not raise serious concerns about the model fit.} If the PIT values concentrated strongly at $0$ and $1$, this would indicate that posterior predictive distributions would not allow for enough uncertainty, i.e., the model would be overconfident with its predictions; however, this is not the case here. Due to the smoothness of our data we essentially have perfect predictions using Models~0~to~5, and these plots are not particularly informative, but again may be useful in settings where spatio-temporal interpolation is a modeling goal.
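The central peak in the PIT histogram can be reproduced schematically: if the predictive standard deviation is overstated relative to the true spread of the observations, the PIT values concentrate around $0.5$. The following is a toy Python illustration with Gaussian predictive distributions; all quantities are synthetic.

```python
import math
import numpy as np

def pit_values(obs, mu, sd):
    """PIT: Gaussian predictive CDF evaluated at each observation."""
    z = (np.asarray(obs) - mu) / sd
    return np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])

rng = np.random.default_rng(6)
mu = rng.normal(size=2000)
obs = mu + rng.normal(0, 0.5, size=2000)  # true spread smaller than claimed
pit = pit_values(obs, mu, sd=1.0)         # predictive sd overstated at 1.0
counts, _ = np.histogram(pit, bins=10, range=(0, 1))
# overstated predictive variance yields a peak in the middle of the histogram
print(counts[4] + counts[5] > counts[0] + counts[9])
```

Conversely, understating the predictive spread would push PIT values towards $0$ and $1$, the overconfident case described in the text.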
\edit{Finally, we consider using a further cross validation procedure to compare the different models.} This involves removing all data for locations lying in a quadrant to the south-east of the conditioning site, and using our methods to estimate these values. The difference between the estimates and original data, on the Laplace scale, can be summarized using the root mean square error (RMSE). These results are also provided in Table~\ref{tab:spatialModels}, where Model~3 gives the best results, although it is only slightly favoured over the rest of Models~0~to~5.
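Schematically, this quadrant-deletion diagnostic reduces to masking the held-out locations and computing the RMSE of the predictions there. The Python sketch below uses synthetic coordinates and predictions purely for illustration; in our analysis the predictions come from the fitted models.

```python
import numpy as np

def quadrant_rmse(coords, obs, pred, s0):
    """RMSE over locations lying south-east of the conditioning site s0.
    coords: (n, 2) array of (x, y); obs/pred: (n,) arrays on Laplace scale."""
    mask = (coords[:, 0] > s0[0]) & (coords[:, 1] < s0[1])  # east of and south of s0
    err = obs[mask] - pred[mask]
    return np.sqrt(np.mean(err ** 2))

rng = np.random.default_rng(2)
coords = rng.uniform(0, 5, size=(400, 2))  # toy domain in units of ~100 km
obs = rng.normal(size=400)
pred = obs + rng.normal(0, 0.1, size=400)  # predictions close to the truth
r = quadrant_rmse(coords, obs, pred, s0=(2.5, 2.5))
print(r < 0.2)
```
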
\subsection{Results for Model~3}\label{subsec:Model3}
For our application, it is difficult to distinguish between the performance of Models 0~to~5 using the cross validation approaches, but Models~3~and~4 both perform well in terms of the WAIC. We note that the run-time for Model~3, provided in Table~\ref{tab:spatialModels}, is less than one third of the run-time of Model~4 for this data, so we choose to focus on Model~3 here due to its simplicity. In general, simpler models have quicker computation times, but this is not necessarily always the case; we comment further on this in Section~\ref{subsec:space-time-diagnostics}. We provide a summary of the fitted model parameters for Model~3, excluding the spline functions, in Table~\ref{tab:parameterEstimates}. The estimated value of $\sigma^2$ is very small, as expected. The Mat\'{e}rn covariance of the process $\{Z(s)\}$ has a reasonably large dependence range, estimated to be 428.2~km.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c c|}
\hline
Parameter & Posterior mean & 95\% credible interval \\
\hline
$\sigma^2$ & 0.0107 & (0.0106, 0.0107) \\
$\sigma_Z$ & 1.557 & (1.496, 1.618) \\
$\rho_Z$ & 428.2 & (409.5, 446.8) \\
\hline
\end{tabular}
\caption{Estimated parameters for Model 3.}
\label{tab:parameterEstimates}
\end{table}
We now consider the estimated spline functions $\alpha(s-s_0)$ and $\gamma(s-s_0)$ for Model~3; these are shown in Figure~\ref{fig:model4splines}. For comparison, we also show the estimate of $\alpha(s-s_0)$ for Model~1 and $\gamma(s-s_0)$ for Model~2. These two models are similar to Model~3, in that $\beta=0$, but $\gamma(s-s_0)=0$ for Model~1 and $\alpha(s-s_0)=1$ for Model~2, for all $s\in\mathcal{S}$. For Model~1, the $\alpha(s-s_0)$ spline function generally decreases monotonically with distance, as would be expected in spatial modeling. For Model~3, the interaction between the two spline functions makes this feature harder to assess, but further investigations have shown that although $\alpha(s-s_0)$ and $\gamma(s-s_0)$ are not monotonic in form, the combination $\alpha(s-s_0) x+\gamma(s-s_0)$ is usually decreasing for $x\geq u$; i.e., there is posterior negative correlation, and transfer of information between the two spline functions. Some examples of this are given in Figure~\ref{fig:diffs0splines} and will be discussed in Section~\ref{sec:sensitivity}. The behaviour of the $\gamma(s-s_0)$ spline function for Model~2 is similar to that of the $\alpha(s-s_0)$ functions for Models~1~and~3, highlighting that all three models are able to capture similar features of the data despite their different forms. The success of Model~3 over Models~1~and~2 can be attributed to the additional flexibility obtained via the inclusion of both spline functions. We note that in terms of representing the data, there may be little difference between suitable models, as we see here. However, we should also consider the diagnostics relating to our specific modeling purpose, which in our case is extrapolation. The results in Appendix~\ref{app:chi} demonstrate that there is weakening dependence in the Red Sea data. The asymptotic dependence of Model~2 means that it cannot capture this feature, and is therefore unsuitable here. 
\edit{In Appendix~\ref{app:chi}, we compare the empirical tail correlation estimates to ones obtained using simulations from our fitted Model~3. This model goes some way to capturing the observed weakening dependence, although the estimates do not decrease as quickly as for the empirical results. We comment further on this issue below.}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{splinesM1M2M3.pdf}
\caption{Posterior mean estimates of the spline functions $\alpha(s-s_0)$ (left) and $\gamma(s-s_0)$ (right) for Model~1 (blue), Model 2 (orange) and Model~3 (black). \edit{The dashed lines show approximate 95\% pointwise credible intervals in each case.}}
\label{fig:model4splines}
\vspace{1cm}\centering
\includegraphics[width=0.9\textwidth]{ProportionExtremeLocations.pdf}
\caption{Left: the spatial domain separated into 17 regions; the region labels begin at 1 in the centre of the domain, and increase with distance from the centre. The conditioning site $s_0$ is shown in red. Right: the estimated proportion of locations that exceed the 0.95 quantile, given it is exceeded at $s_0$ using Model~3 (green) and equivalent empirical results (purple).}
\label{fig:extremeProb1}
\end{figure}
Our fitted models can be used to obtain estimates of quantities relevant to the data application. For sea surface temperatures, we may be interested in the spatial extent of extreme events. High surface water temperatures can be an indicator of potentially damaging conditions for coral reefs, so it may be useful to determine how far-reaching such events could be. To consider such results, we fix the model hyperparameters and spline functions to their posterior means, and simulate directly from the spatial residual process of Model~3. If a thorough assessment of the uncertainty in these estimates was required, we could take repeated samples from the posterior distributions of the model parameters fixed to their posterior mean, and use each of these to simulate from the model. However, assessing the predictive distribution in this way is computationally more expensive, so we proceed without this step.
We separate the spatial domain into the 17 regions demonstrated in the left panel of Figure~\ref{fig:extremeProb1}. Given that the value at the conditioning site exceeds the 0.95 quantile, we estimate the proportion of locations in each region that also exceed this quantile. Results obtained via 10,000 simulations from Model~3 are shown in the right panel of Figure~\ref{fig:extremeProb1}, alongside empirical estimates from the data. These results suggest that Model~3 provides a successful fit of the extreme events, particularly within the first ten regions, which correspond to distances up to approximately 200~km from the conditioning site. At longer distances, the results \edit{do differ, which may be due to the comparatively small number of locations that contribute to the model fit in these regions and to some mild non-stationarities arising close to the coastline}. \edit{In Appendix~\ref{app:extrapolationDiagnostic}, we present a similar diagnostic where we instead extrapolate to the 0.99 quantile. Comparing the empirical results to those in Figure~\ref{fig:extremeProb1}, we again see that the data exhibits weakening dependence as we increase the threshold level. This suggests that an asymptotically independent model, as we have with Model~3, is appropriate; an asymptotically dependent model would not have captured this feature. However, these diagnostics do suggest that the dependence does not weaken quickly enough in our fitted model. It is possible that this could have been improved by a different threshold choice, but investigating this is beyond the scope of the paper.}
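The construction of the right panel of Figure~\ref{fig:extremeProb1} can be sketched as follows: bin locations by distance from $s_0$ and, for each extreme event, record the proportion of locations per bin exceeding their 0.95 quantile. The Python version below uses synthetic event fields whose dependence decays with distance; the real computation uses the 10,000 simulations from Model~3.

```python
import numpy as np

def exceedance_by_region(fields, dists, q, bins):
    """Mean proportion of sites exceeding their quantile q, per distance bin.
    fields: (n_events, n_sites) event fields on a common marginal scale;
    dists: (n_sites,) distances to the conditioning site."""
    exceed = fields > q                # (n_events, n_sites) exceedance indicators
    labels = np.digitize(dists, bins)  # distance-bin index for each site
    return np.array([exceed[:, labels == k].mean() for k in range(1, len(bins))])

rng = np.random.default_rng(3)
dists = rng.uniform(0, 4, size=500)
# toy events: co-exceedance with s0 decays with distance through the mean shift
fields = rng.normal(size=(200, 500)) + 2.0 * np.exp(-dists / 1.5)
q = 1.645  # 0.95 quantile of a standard normal
props = exceedance_by_region(fields, dists, q, bins=np.linspace(0, 4, 9))
print(props[0] > props[-1])  # exceedance proportions decay with distance
```
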
\subsection{Sensitivity to the conditioning site}
\label{sec:sensitivity}
A natural question when applying the conditional approach to spatial extreme value modeling is how to select the conditioning location. Under an assumption of spatial stationarity in the dependence structure, the parameters of the conditional model defined in~\eqref{eqn:modelingAssumption} should be the same regardless of the location $s_0$. However, since the data are used in slightly different ways for each conditioning site, and because the stationarity assumption is rarely perfect, we can expect some variation in parameter estimates for different choices of $s_0$.
In \citet{Wadsworth.Tawn.2019} and \citet{Simpson.Wadsworth.2020}, this issue was circumvented by using a composite likelihood that combined all possible individual likelihoods for each conditioning site, leading to estimation of a single set of parameters that reduced sampling variability and represented the data well on average. However, bootstrap methods are needed to assess parameter uncertainty, and as the composite likelihood is formed from the product of $d$ full likelihoods, the approach scales poorly with the number of conditioning sites. Composite likelihoods do not tie in naturally to Bayesian inference as facilitated by the INLA framework, and so to retain scalability and a straightforward interpretation of parameter uncertainty, we focus on implementations with a single conditioning location. Sensitivity to the particular location can be assessed similarly to other modeling choices, such as the threshold above which the model is fitted.
In particular, different conditioning sites may lead us to select different forms of the models described in Table~\ref{tab:spatialModels}, as well as the resulting parameter estimates. To assess this, we fit all seven models to the moderate dataset, using 39 different conditioning sites on a grid across the spatial domain, with the mesh and prior distributions selected as previously. We compare the models using the WAIC, as described in Section~\ref{subsec:spatialModelSelection}.
The results are shown in Figure~\ref{fig:diffs0results}, where we demonstrate the best two models for each conditioning site. For the majority of cases, Model~4 performs the best in terms of the WAIC, and in fact, it is in the top two best-performing models for all conditioning sites. The best two performing models are either Models~3~and~4 or Models~4~and~5 for all conditioning locations. This demonstrates that there is reasonable agreement across the spatial domain, and suggests that using just one conditioning location should not cause an issue in terms of model selection.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ConditioningSiteModelComparison.pdf}
\caption{Maps showing the `best' and `second best' models using different conditioning sites, based on minimizing the WAIC: Model~3, blue; Model~4, orange; Model~5, purple.}
\label{fig:diffs0results}
\vspace{1cm}
\includegraphics[width=\textwidth]{ConditioningSiteSplineComparison-Model3-39sites.pdf}
\caption{\edit{Posterior mean estimates of $\alpha(s-s_0)u + \gamma(s-s_0)$} for Model~3 (right), with $u$ representing the threshold used in the model fits. The colours of the lines correspond to the conditioning sites used, as shown in the left panel.}
\label{fig:diffs0splines}
\end{figure}
To further consider how restrictive it is to only fit models at one conditioning site, we can compare the spline functions estimated using different locations for $s_0$. We again focus on results for Model~3, as in Section~\ref{subsec:Model3}, and consider estimates of $\alpha(s-s_0)u + \gamma(s-s_0)$, with $u$ representing the threshold used for fitting. We demonstrate the estimates of this function in Figure~\ref{fig:diffs0splines}, for the same 39 conditioning sites used in Figure~\ref{fig:diffs0results}, highlighting results for four of these sites situated across the spatial domain. Overall, the estimated functions are reasonably similar, particularly for shorter distances. There is one function that appears to be an outlier, corresponding to a conditioning site located on the coast. Although the other coastal conditioning sites we consider do not have this issue, it does suggest that some care should be taken here.
As a final test of the sensitivity to the conditioning site, we consider the implications if we fit Model~3 at one conditioning site, and use this for inference at another location. In particular, we take the results from Section~\ref{subsec:Model3}, using a conditioning site near the centre of the spatial domain, and use these to make inference at a conditioning site located on the coast. We use a method analogous to the one used to create Figure~\ref{fig:extremeProb1}. That is, we separate the spatial domain into regions, and for each one, we estimate the proportion of locations that take values above their 0.95 quantile, given that this quantile is exceeded at the conditioning location. In Figure~\ref{fig:extremeProb_diffs0}, we compare results based on simulations from the fitted model to empirical estimates. Although a different conditioning location was used to obtain the model fit, the results are still good, particularly up to moderate distances, supporting our use of a single conditioning site for inference. One issue that is highlighted here is that by fitting the model at a central conditioning site, the maximum distance to $s_0$ is around 391~km, so we are not able to make inference about the full domain for a conditioning site near the boundary, where the maximum distance to other locations is much larger. This aspect should be taken into account when choosing a conditioning site for inference. This issue is specific to the use of spline functions for $\alpha(s-s_0)$ and $\gamma(s-s_0)$, and there is no such problem for parametric functions such as the one proposed by \cite{Wadsworth.Tawn.2019} for $\alpha(s-s_0)$. There is therefore a trade-off here between the flexibility of the splines and the spatial extrapolation possible using parametric functions.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{ProportionExtremeLocations_diffs0.pdf}
\caption{Left: the spatial domain separated into regions; the region labels begin at 1 at $s_0$ (red), and increase with distance from this location. Right: the estimated proportion of locations that exceed the 0.95 quantile, given it is exceeded at $s_0$ using Model~3 (green) and equivalent empirical results (purple).}
\label{fig:extremeProb_diffs0}
\end{figure}
\section{Inference for conditional space-time extremes}\label{sec:spacetimeInference}
\subsection{Conditional spatio-temporal extremes models}
\cite{Simpson.Wadsworth.2020} extend assumption~\eqref{eqn:CSEassumption} to a spatio-temporal setting. The aim is to model the stationary process $\{X(s,t):(s,t)\in\mathcal{S}\times\mathcal{T}\}$ which also has marginal distributions with exponential upper tails. The conditioning site is now taken to be a single observed space-time location $(s_0,t_0)$, and the model is constructed for a finite number of points $(s_1,t_1),(s_1,t_2),\dots,(s_d,t_\ell)$ pertaining to the process at $d$ spatial locations and $\ell$ points in time, where data may be missing for some of the space-time points. The structure of the conditional extremes assumption is very similar to the spatial case, in particular, it is assumed that there exist functions $a_{(s,t)-(s_0,t_0)}(\cdot)$ and $b_{(s,t)-(s_0,t_0)}(\cdot)$ such that as $u\rightarrow\infty$,
$$
\Pr\left(\left[\frac{X(s_i,t_j)-a_{(s_i,t_j)-(s_0,t_0)}\left\{X(s_0,t_0)\right\}}{b_{(s_i,t_j)-(s_0,t_0)}\left\{X(s_0,t_0)\right\}}\right]_{\substack{i=1,\dots,d,\\j=1,\dots,\ell}} \leq \bm{z}~\bigg{\vert}~X(s_0,t_0)=u\right) ~\to \Pr\left[\{Z^0(s_i,t_j)\}_{\substack{i=1,\dots,d,\\j=1,\dots,\ell}} \leq \bm{z}\right],
$$
for \edit{$\{Z^0(s_i,t_j)\}_{\substack{i=1,\dots,d,\\j=1,\dots,\ell}}$ representing finite-dimensional realizations of} a spatio-temporal residual process $\{Z^0(s,t)\}$. Once more the excesses $X(s_0,t_0)-u|X(s_0,t_0)>u$ are independent of the residual process as $u \to \infty$, and the constraints on the residual process $\{Z^0(s,t)\}$ and normalizing function $a_{(s,t)-(s_0,t_0)}(\cdot)$ are analogous to the spatial case. We consider spatio-temporal variants of spatial Models~1, 3, 4 and 5, which provided the best WAIC values in Section~\ref{subsec:spatialModels}; see the model summary in Table~\ref{tab:spacetimeResults}. In order to preserve sparsity in the precision matrix \edit{of the relevant latent variables}, a simple autoregressive structure is employed for the temporal aspect of the residual process; further details are provided in Section~\ref{subsec:spacetime}. Specifically, we construct the process $\{Z^0(s,t)\}$ as $\{Z(s,t)\}-Z(s_0,t_0)$ using the first-order autoregressive structure in combination with the spatial SPDE model as described in equation~\eqref{eq:ar1}. \edit{For the temporal auto-correlation coefficient $\rho$ in~\eqref{eq:ar1}, we again opt for a PC prior. The baseline could be either $\rho=0$ (no dependence) or $\rho=1$ (full dependence); here, we choose $\rho=0$ and a moderately informative prior through the specification $\mathrm{Pr}(\rho > 0.5) = 0.5$.} The prior distributions for $\alpha(s-s_0,t-t_0)$ and $\gamma(s-s_0,t-t_0)$ are constructed according to \eqref{eq:ar1}, with a \edit{one-dimensional} SPDE model for a quadratic spline with $14$ interior knots deployed for spatial distance \edit{and replicated for each of the $\ell$ time points, with prior temporal dependence of spline coefficients for consecutive time lags controlled by a first-order autoregressive structure; the resulting Gaussian prior processes} are conditioned to have $\alpha(0,0)=1$ and $\gamma(0,0)=0$.
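A dense toy version of this residual construction, with exponential spatial covariance and first-order autoregressive dynamics in time, can be written as follows. This is an illustrative Python sketch only: the actual model uses the sparse SPDE representation within \texttt{R-INLA}, and treating the conditioning point as the first site at the first time is our simplification.

```python
import numpy as np

def simulate_ar1_spatial(coords, times, rho, range_, sd, rng):
    """Separable space-time Gaussian field: exponential spatial covariance,
    AR(1) innovations over time (a dense stand-in for the sparse SPDE/AR(1)
    construction)."""
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sd ** 2 * np.exp(-h / range_) + 1e-10 * np.eye(len(coords))
    L = np.linalg.cholesky(cov)
    z = np.empty((len(times), len(coords)))
    z[0] = L @ rng.normal(size=len(coords))
    for j in range(1, len(times)):
        # stationary AR(1) update with spatially correlated innovations
        z[j] = rho * z[j - 1] + np.sqrt(1 - rho ** 2) * (L @ rng.normal(size=len(coords)))
    return z

rng = np.random.default_rng(4)
coords = rng.uniform(0, 5, size=(40, 2))
z = simulate_ar1_spatial(coords, times=range(7), rho=0.7, range_=2.0, sd=1.0, rng=rng)
z0 = z - z[0, 0]  # constrained residual: zero at the conditioning point (s0, t0)
print(z0.shape, z0[0, 0] == 0.0)
```
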
\edit{In contrast to $Z^0$, the components $\alpha$ and $\gamma$ are deterministic in the conditional extremes framework, but through the semiparametric formulation we can handle them in the same way within the INLA framework. Using Gaussian process priors for spline coefficients allows for high modeling flexibility through a relatively large number of basis functions, with hyperparameters ensuring an appropriate smoothness of estimated functions.}
\subsection{Spatio-temporal Red Sea surface temperature data}\label{subsec:RedSea_spacetime}
Since the spatio-temporal models are more computationally intensive than their spatial counterparts due to a larger number of hyperparameters and more complex precision matrices, we focus only on the moderate set of spatial locations demonstrated in Figure~\ref{fig:spatialLocations}, which contains 678 spatial locations; this will still result in a substantial number of dimensions when we also take the temporal aspect into account.
To carry out inference for the conditional spatio-temporal model, \edit{we must separate the data into temporal blocks of equal length, with the aim that each block corresponds to an independent observation from the process $\{X(s,t)\}$. We first apply a version of the runs method of \cite{Smith.Weissman.1994} to decluster the data. Each cluster corresponds to a series of observations starting and ending with an exceedance of the threshold $u$ at the conditioning site, with clusters separated by at least $r$ non-exceedances of $u$. Once these clusters are obtained, we take the first observation in each one as the start of an extreme episode, with the following six days making up the rest of the block. Declustering is applied only with respect to the spatial conditioning site $s_0$, but we still consider observations across all spatial locations at the corresponding time-points. We select the tuning parameter in the runs method to be $r=12$; this is chosen following the approach of \cite{Simpson.Wadsworth.2020}, who check for stability in the number of clusters obtained using different values of $r$,} and note that since we focus only on summer months, blocks should not be allowed to span multiple years. This declustering approach yields $28$ \edit{non-overlapping} blocks of seven days to which we can apply our four spatio-temporal models.
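The runs declustering step at the conditioning site can be implemented directly, as a Python sketch of the scheme just described, applied here to a synthetic series; \texttt{x0} denotes the series at $s_0$.

```python
import numpy as np

def runs_decluster(x0, u, r):
    """Indices starting each cluster of exceedances of u in the series x0,
    with clusters separated by at least r consecutive non-exceedances."""
    exceed = np.flatnonzero(x0 > u)
    if exceed.size == 0:
        return []
    starts = [int(exceed[0])]
    for i, j in zip(exceed[:-1], exceed[1:]):
        if j - i > r:  # at least r non-exceedances between consecutive exceedances
            starts.append(int(j))
    return starts

rng = np.random.default_rng(5)
x0 = rng.normal(size=400)
starts = runs_decluster(x0, u=float(np.quantile(x0, 0.95)), r=12)
# each cluster start opens a seven-day block, as in the text
blocks = [x0[s:s + 7] for s in starts if s + 7 <= len(x0)]
print(len(starts) >= 1, all(len(b) == 7 for b in blocks))
```

Since $r=12$ exceeds the block length of seven days, consecutive blocks cannot overlap.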
\subsection{Model selection, forecasting and cross validation}\label{subsec:space-time-diagnostics}
We compare the four models using similar criteria as in the spatial case. The WAIC and average CPO values are presented in Table~\ref{tab:spacetimeResults}, where the most complex Model~4 performs best in terms of the WAIC, while a slightly better CPO value arises for Model~3. We note that the model selected using the WAIC has the same form in both the spatial and spatio-temporal cases.
We also compare fitted and observed values using a variant of the root mean square error (RMSE). The within-sample RMSEs are almost identical across the four models, each yielding a value of $0.077$, and are therefore not included in Table~\ref{tab:spacetimeResults}. To assess predictive performance, it is more interesting to consider an additional variant of cross validation in the spatio-temporal case to test the forecasting ability of the models. We carry out seven-fold cross validation by randomly separating our 28 declustered blocks into groups of four, and for each of these groups we remove the observations at all locations for days two to seven. We then fit the model using the remaining data in each case, and obtain predictions for the data that have been removed. This cross validation procedure is straightforward to implement, as in \texttt{R-INLA} it is possible to obtain predictions (e.g., posterior predictive means), including for time-points or spatial locations without observations. We compare the predicted values with the observations that were previously removed, presenting the cross validation root mean square error (RMSE$_{\text{CV}}$) in Table~\ref{tab:spacetimeResults}. Again, the results are quite similar, but Model~3 performs slightly better than the others. Finally, run-times are reported in the table and range between 1 hour and 4 hours, using 2 cores on machines with 32GB of memory.
When comparing the spatial models with the corresponding space-time models having the same spatial component, the order of run-times changes in our results. We emphasize that, on average, more complex latent models will require longer run-times with INLA if the observations remain the same. However, the Laplace approximations conducted by INLA require iterative optimization steps to find modes of high-dimensional functions, and in some cases these optimization steps may be substantially more computationally intensive for a simpler model, for instance when the mode is relatively hard to identify. Therefore, there is no contradiction in the reported results.
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c c c|c|}
\hline
Model number & Model form & WAIC & CPO & RMSE$_{\text{CV}}$ & Run-time\\
\hline
1 & $x\cdot\alpha(s-s_0,t-t_0) + \{Z^0(s,t)\}$ & 108 & -0.0003 & 0.001 & 99\\
3 & $x\cdot\alpha(s-s_0,t-t_0) + \gamma(s-s_0,t-t_0) + \{Z^0(s,t)\}$ & 59 & 0 & 0 & 206\\
4 & $x\cdot\alpha(s-s_0,t-t_0) + \gamma(s-s_0,t-t_0) + x^\beta \cdot \{Z^0(s,t)\}$ & 0 & -0.0018 & 0.091 & 71\\
5 & $x\cdot\alpha(s-s_0,t-t_0) + x^\beta \cdot \{Z^0(s,t)\}$ & 47 & -0.0004 & 0.094 & 89\\
\hline
\end{tabular}}
\caption{Summary of conditional space-time models, model selection criteria, and total run-times (minutes). The minimum WAIC value ($-215973$ for Model~4); maximum CPO value ($2.92$ for Model~3); and minimum RMSE$_\text{CV}$ value ($1.09$ for Model~3) have been subtracted from their respective columns. We estimate $\beta$ as $0.55$ with a 95\% credible interval of $(0.49,0.64)$ (Model~4) and $0.55$ $(0.50,0.65)$ (Model~5).}
\label{tab:spacetimeResults}
\end{table}
\section{Computational and implementation details}\label{sec:computation}
\subsection{Introduction}
This section provides further details on INLA, the SPDE approach, and specifics of implementation that are necessary to gain a full understanding of our methods, but not to appreciate the general ideas behind the approach.
\subsection{Bayesian inference with the integrated nested Laplace approximation}
The integrated nested Laplace approximation \citep[INLA;][]{Rue.al.2009,Rue.al.2017,Opitz.2017b,Niekerk.et.al.2019} provides relatively fast and accurate analytical approximations for posterior inference in models with latent Gaussian processes. The distribution of the observed variables may be non-Gaussian conditional on the latent Gaussian process. \edit{Although here the focus of our modeling approach for conditional extremes is on Gaussian responses,} this does not imply a joint Gaussian assumption on our data, as explained in Section~\ref{sec:spatialModels}. The method astutely combines Laplace approximations \citep{Tierney.Kadane.1986}, used to compute expectations with respect to high-dimensional multivariate Gauss--Markov random vectors (denoted by $\bm W$ in Section~\ref{sec:generalities}, with up to tens of thousands of components), with efficient numerical integration schemes for integration with respect to a relatively small number of hyperparameters (denoted by $\bm \theta$) governing variance and \edit{correlation} of Gaussian components, and the shape of the distribution of observations. Therefore, it bypasses issues that may arise with simulation-based Markov chain Monte Carlo (MCMC) inference, where the design of stable algorithms for fast exploration of the posterior distribution may be hampered by intricate dependencies between the components of the model \citep[e.g.,][]{Rue.Held.2005}. With Gaussian distributions for the likelihood as in our model assumption in \eqref{eq:modelgeneral}, the Laplace approximation is exact. 
INLA is implemented in the \texttt{INLA} package \citep{Lindgren.Rue.2015} of the \texttt{R} statistical software, also referred to as \texttt{R-INLA}, and over the last decade it has been widely adopted for Bayesian additive regression modeling of spatial and spatio-temporal data due to its integration with the stochastic partial differential equation (SPDE) approach \citep{Lindgren.al.2011,Krainski.al.2018}, which provides convenient Gauss--Markov approximations to the Mat\'ern covariance function. The Bayesian framework of INLA allows for joint estimation and uncertainty assessment of latent components, hyperparameters and predictions. Recently, the speed and stability of INLA with high-dimensional latent Gaussian structures were further leveraged through its integration with the sparse matrix computation library \texttt{PARDISO} \citep{Niekerk.et.al.2019}.
\edit{We further point out that the approach of generalized additive models with quadratic penalty terms on coefficients $\bm W$ using frequentist instead of Bayesian estimation of hyperparameters $\bm \theta$ puts no prior distribution on $\bm \theta$, but the interpretation of $\bm W$ as a Gaussian process is maintained to provide joint estimation of hyperparameters $\bm\theta$ and regression coefficients $\bm W$ through Laplace approximation \citep{Wood2011}, similar to the INLA method.}
\edit{For a concrete example of how Laplace approximation of integrals representing posterior estimates (i.e., certain expectations) works, we show how to use it for obtaining the posterior distribution of $\theta_1$, the first component of the hyperparameter vector $\bm\theta$. We denote by $\bm{\theta}_{-1}=(\theta_2,\ldots)^T$ the hyperparameter vector with the component to estimate removed. Since the function arguments are considered as non-stochastic, we here use lower-case notation $\bm v$ and $\bm w$ for $\bm V$ and $\bm W$, respectively. We have
$$
\pi(\theta_1\mid \bm v) = \int_{\bm \Theta_{-1}}\int_{\mathbb{R}^m} \pi(\bm \theta,\bm w \mid \bm v) \,\mathrm{d}\bm w\mathrm{d}\bm{\theta}_{-1},
$$
where the outer integral $\mathrm{d}\bm{\theta}_{-1}$ can be disregarded if $\bm{\theta}=\theta_1$ has only one component. The joint posterior density of $\bm w$ and $\bm \theta$ can be calculated up to a constant as follows,
$$
\pi(\bm \theta,\bm w \mid \bm v) \propto \pi(\bm v \mid \bm \theta, \bm w) \times \pi(\bm w\mid \bm \theta)\times \pi(\bm\theta) = \exp\left(\log \pi(\bm w\mid \bm \theta) + \sum_{j=1}^d \log \pi(v_j \mid \bm \theta, \bm w)\right) \times \pi(\bm\theta),
$$
where the proportionality factor $\pi(\bm v)^{-1}$ is constant for a fixed dataset $\bm v$.
Writing
$$
g(\bm w)= \log \pi(\bm w\mid \bm \theta) + \sum_{j=1}^d \log \pi(v_j \mid \bm \theta, \bm w)
$$
for the function in the exponent, we replace it by a quadratic approximation using its second-order Taylor expansion around its modal configuration $\bm w^\star$ with $g(\bm w^\star) = \max_{\bm w} g(\bm w)$, i.e.,
$$
g(\bm w) \approx g(\bm w^\star) + \tfrac{1}{2}(\bm w-\bm w^\star)^T g''(\bm w^\star) (\bm w-\bm w^\star)
$$
with the Hessian matrix $g''(\bm w^\star)$ of $g$; the first-order term vanishes since $\bm w^\star$ is a mode. This defines an approximation of the integrand $\pi(\bm \theta,\bm w \mid \bm v)$ via a multivariate Gaussian density, and therefore the value of the integral with respect to $\mathrm{d}\bm w$ can be calculated straightforwardly. If the likelihood $\pi(v_j\mid \bm \theta, \bm w)$ is Gaussian, then $g$ is itself quadratic, so the Gaussian approximation of the integrand is exact and $g''(\bm w^\star)$ is easy to calculate directly. In the general case, numerical implementations such as the \texttt{R-INLA} software use iterative algorithms to find $\bm w^\star$. Finally, the outer integral with respect to $\mathrm{d} \bm \theta_{-1}$ in relatively small dimension is calculated through an appropriate discretization scheme. Note that the Laplace approximation of the inner integral has to be calculated for each discretization point of $\bm \theta_{-1}$. A similar approximation scheme can then be applied for posterior densities of some component $w_j$ of $\bm w$, or of some linear combination of components of $\bm w$ (e.g., components of the linear predictor $\bm \eta$), where Laplace approximation is used to calculate the integral with respect to $\mathrm{d}\bm w_{-j}$. When the likelihood is Gaussian as in our case, then one can simply use the exact conditional distributions $\pi(w_j \mid \bm w_{-j}, \bm v, \bm \theta)$, which are univariate Gaussian. }
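As a concrete numerical check of the exactness claim for Gaussian likelihoods, the following Python sketch (a toy conjugate model, not the paper's \texttt{R-INLA} implementation; all names and numerical values are illustrative) applies the Laplace approximation to $\int \exp\{g(w)\}\,\mathrm{d}w$ for a scalar latent $w$ with a Gaussian prior and Gaussian observations, and recovers the exact marginal likelihood of the data:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5
v = rng.normal(0.3, sigma, size=6)   # toy observations v_1, ..., v_d
d = len(v)

def log_norm(x, mean, sd):
    # univariate Gaussian log-density
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mean) ** 2 / (2 * sd**2)

def g(w):
    # log pi(w) + sum_j log pi(v_j | w): prior W ~ N(0, 1), likelihood N(w, sigma^2)
    return log_norm(w, 0.0, 1.0) + log_norm(v, w, sigma).sum()

# mode w* and negative Hessian -g''(w*), available in closed form here
prec = 1.0 + d / sigma**2
w_star = (v.sum() / sigma**2) / prec

# Laplace approximation:  int exp(g(w)) dw  ~  exp(g(w*)) sqrt(2 pi / -g''(w*))
laplace = np.exp(g(w_star)) * np.sqrt(2 * np.pi / prec)

# exact marginal likelihood: v ~ N(0, 1 1^T + sigma^2 I)
cov = np.ones((d, d)) + sigma**2 * np.eye(d)
sign, logdet = np.linalg.slogdet(cov)
quad = v @ np.linalg.solve(cov, v)
exact = np.exp(-0.5 * (d * np.log(2 * np.pi) + logdet + quad))
```

Because $g$ is quadratic in this Gaussian setting, the two quantities agree to floating-point precision, in line with the statement above that the Laplace approximation is exact for Gaussian likelihoods.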
\subsection{The SPDE approach}\label{subsec:SPDE}
The latent variable framework allows us to choose the spatial resolution of the latent model separately from that of the observed locations. Moreover, we can use the results of \citet{Lindgren.al.2011}, known as the stochastic partial differential equation (SPDE) approach, to work with numerically convenient Markovian approximations to the Mat\'ern covariance function, leading to sparse precision matrices. We consider random fields defined on $\mathbb{R}^D$; for the residual process $\{Z^0(s)\}$, $D=2$, but we will also use this framework with $D=1$ to define the spline functions with respect to the distance to the conditioning site. The SPDE is given by
\begin{equation}
\label{eq:spde}
\left(\kappa^2 - \Delta \right)^{\zeta/2} \tau \{W(s):s\in \mathbb{R}^D\} = \{B(s):s\in \mathbb{R}^D\},
\quad \zeta=\nu+D/2,
\end{equation}
with the Laplace operator $\Delta y=\sum_{j=1}^D \partial^2 y/\partial x_j^2$, a standard Gaussian white noise process $\{B(s)\}$, and parameters $\kappa>0$ (controlling correlation range) and $\tau>0$ (controlling the variance). It has a unique stationary solution given by a zero-mean Gaussian process $\{W(s)\}$ with Mat\'ern covariance function. Here, $\nu$ is the shape
parameter of the Mat\'ern, with
$\nu=0.5$ yielding the exponential covariance model. The marginal
variance \edit{of $\{W(s)\}$ is $\sigma_{Z}^2=\Gamma(\nu)/(\Gamma(\nu+D/2)(4\pi)^{D/2}\kappa^{2\nu}\tau^2)$, and the \emph{empirical range}, where a correlation of approximately $0.1$ is attained between two points, is approximately $\rho_{Z}=\sqrt{8\nu}/\kappa$}. Note that this range parameter is different from the range in the classical Mat\'ern parametrization.
In practice, the domain is finite, i.e., different from $\mathbb{R}^D$, and appropriate boundary conditions must be imposed to ensure a solution that is unique in terms of finite-dimensional distributions. An approximation to the exact solution satisfying the boundary conditions is constructed through the representation $W(s) = \sum_{j=1}^{m} W_j \Psi_j(s)$ with locally supported basis functions $\Psi_j(s)$ (e.g., linear or quadratic B-splines for $D=1$, and finite elements for $D=2$). The basis functions do not depend on SPDE parameters. The stochastic solution $\{W(s)\}$ of the SPDE in the subspace of functions spanned by the linear combination of basis functions then yields $\bm{W} = (W_1,\ldots,W_{m})^T\sim \mathcal{N}_{m}(0,Q^{-1})$ with precision matrix $Q$ known in analytical form. \edit{We emphasize that $\{W(s)\}$ here could represent the splines used for $\alpha(s-s_0)$ or $\gamma(s-s_0)$, or the spatial process $\{Z(s)\}$ used in the construction of $\{Z^0(s)\}$.} In Section~\ref{sec:proposed.a}, we labelled the corresponding latent variables $\bm{W}_\alpha\in\mathbb{R}^{m_\alpha}$, $\bm{W}_\gamma\in\mathbb{R}^{m_\gamma}$ and $\bm{W}_Z\in\mathbb{R}^{m_Z}$. For $D=2$, we use Neumann boundary conditions where the outward derivative of the realizations of the Gaussian field is zero, which is the default choice for spatial modeling with \texttt{INLA}. For $D=1$ and a support given by an interval, a unique approximation to the SPDE solution exists with free boundaries. In our models where spline functions are constrained to value zero at the origin, we use constructions with a Dirichlet boundary on the left side of the interval, such that the solution satisfies the constraint. Theoretical results in \citet{Lindgren.al.2011} show that the approximation to the solution is good in general and can be made arbitrarily close by choosing a finer finite element mesh.
The value of $\zeta$ in \eqref{eq:spde} determines how the approximate solution of the SPDE can be constructed in practice \citep{Lindgren.al.2011}, and it must be fixed when estimating the model with INLA. The INLA implementation currently supports using $\zeta\in[1,2]$, i.e., $\nu\in[0,1]$ for $D=2$.
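To illustrate the SPDE-to-precision construction concretely, the following Python sketch (illustrative parameters, and a dense implementation for readability; \texttt{R-INLA} assembles the analogous sparse matrices internally) builds the finite-element precision matrix for $D=1$ and $\zeta=2$ (so $\nu=3/2$) from lumped mass and stiffness matrices on a regular grid, and checks that its inverse reproduces the Mat\'ern covariance $\sigma_Z^2(1+\kappa d)\exp(-\kappa d)$ away from the boundary:

```python
import numpy as np

# 1-D SPDE (kappa^2 - Delta)^{zeta/2} tau W = B with zeta = 2, i.e. nu = 3/2 for D = 1
kappa, tau, h, n = 1.0, 1.0, 0.05, 401      # regular grid on [0, 20] (illustrative)

# lumped mass matrix C and stiffness matrix G for piecewise-linear basis functions
c = np.full(n, h); c[0] = c[-1] = h / 2
C = np.diag(c)
G = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
G[0, 0] = G[-1, -1] = 1.0 / h               # Neumann boundary conditions

K = kappa**2 * C + G
Q = tau**2 * K @ np.linalg.solve(C, K)      # precision matrix for zeta = 2

# covariance between the midpoint and all grid nodes: solve Q x = e_mid
mid = n // 2
cov = np.linalg.solve(Q, np.eye(n)[:, mid])

# exact Matern (nu = 3/2) covariance with sigma_Z^2 = 1 / (4 kappa^3 tau^2)
sigma2 = 1.0 / (4 * kappa**3 * tau**2)
dist = np.abs(np.arange(n) - mid) * h
matern = sigma2 * (1 + kappa * dist) * np.exp(-kappa * dist)
err = np.max(np.abs(cov - matern))          # small relative to sigma2 = 0.25

# correlation at the empirical range sqrt(8 nu) / kappa is roughly 0.1
d_emp = np.sqrt(8 * 1.5) / kappa
corr_at_range = (1 + kappa * d_emp) * np.exp(-kappa * d_emp)
```

The last two lines also confirm numerically that the correlation at the empirical range $\sqrt{8\nu}/\kappa$ is close to $0.1$ (about $0.14$ for $\nu=3/2$).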
The vector $\bm{W}_Z$ contains the variables used to represent a single replicate of the Gaussian process. When modeling conditional extremes, we usually extract $n>1$ extreme episodes satisfying $X(s_0)>u$. To represent the unconstrained residual spatial process $\{Z(s)\}$, we therefore need independent replicates $\bm{W}_{Z,j}$, $j=1,\ldots,n$, of $\bm{W}_Z$. Moreover, for the purpose of space-time modeling, we may assume that single episodes span $\ell\geq 1$ time steps. Then, for the unconstrained residual process $\{Z(s)\}$ associated with each episode, we will define a Gaussian vector with $\ell\times m_Z$ components, and there will be $n$ replicates of this vector. We will write the precision matrices of Gaussian vectors comprising several blocks of the initial variables $\bm{W}_Z$ through Kronecker products of matrices; see Section~\ref{subsec:spacetime}.
\subsection{Imposing the condition $Z^0(s_0)=0$ on the residual process}\label{subsec:residualConstraint}
As mentioned in Section~\ref{subsec:CSEintro}, the residual process $\{Z^0(s)\}$ in the spatial conditional extremes model can be constructed by starting with a Gaussian process $\{Z(s)\}$ and imposing the ($Z(s_0)=0$)-constraint in some way. \cite{Wadsworth.Tawn.2019} propose two options: either subtract the value at the conditioning site, i.e., set the residual process to be $\{Z(s)\}-Z(s_0)$; or use the conditional process $\{Z(s)\}\mid Z(s_0)=0$.
In the latent variable framework, we can obtain a residual process of form $\{Z(s)\}-Z(s_0)$ without losing the latent Markovian structure, since we only need to manipulate the representation for $\{Z(s)\}$, which has a sparse precision matrix. The latent variables representing $\{Z(s)\}$ are handled as usual, but we modify the observation matrix $A_S$ of the spatial process $\{Z(s)\}$ to obtain $A_S^0$, the observation matrix associated with the process $\{Z(s)\}-Z(s_0)$. Therefore, let $A_{s_0}$ denote the observation matrix for the conditioning site of dimension $1\times m_Z$, and $A_S$ the observation matrix for the observation locations with dimension $d\times m_Z$. Then, we apply the transformation
\begin{equation}\label{eq:Aspatial}
A_S^0 = A_S - \begin{pmatrix} A_{s_0} \\ \vdots \\ A_{s_0} \end{pmatrix} \in \mathbb{R}^{d \times m_{Z}}
\end{equation}
to obtain the new observation matrix.
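A minimal numerical sketch of this transformation (Python with hypothetical dimensions; in practice the entries of $A_S$ and $A_{s_0}$ come from \texttt{R-INLA}'s mesh projection) shows that each row of $A_S^0$ evaluates $Z(s_i)-Z(s_0)$ for the same latent weights:

```python
import numpy as np

rng = np.random.default_rng(0)
m_Z, d = 5, 3                       # latent basis size and number of sites (illustrative)
A_S = rng.random((d, m_Z))          # observation matrix at the d locations
A_s0 = rng.random((1, m_Z))         # observation row at the conditioning site s0

A_S0 = A_S - np.repeat(A_s0, d, axis=0)   # transformation in the displayed equation

w = rng.standard_normal(m_Z)        # one draw of the latent weights W_Z
z = A_S @ w                         # Z evaluated at the d locations
z0 = (A_s0 @ w).item()              # Z(s0)
# rows of A_S0 now evaluate the residual process Z(s_i) - Z(s0)
```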
The alternative approach is to impose the ($Z(s_0)=0$)-constraint via conditioning, in the sense of the conditional probability distribution. In general, if $\bm W \sim \mathcal{N}_m(\bm 0,Q^{-1})$ is an $m$-dimensional Gaussian random vector with precision matrix $Q$, we may want to impose a linear constraint of the form
$$
B \bm W = \bm e, \quad B\in\mathbb{R}^{k\times m}, \quad \bm e \in \mathbb{R}^k,
$$
where $k$ is small.
For instance, $B=(1,0,\ldots,0)$ and $\bm e=0$ if we constrain the Gaussian vector to satisfy $W_1=0$, or $B=(1/m,\ldots,1/m)$ and $\bm e=0$ if we constrain the average value to $0$.
The linear transformation
$$
\bm W\mid (B\bm W = \bm e) \quad \stackrel{d}{=} \quad \bm W - Q^{-1} B^T\left(BQ^{-1}B^T\right)^{-1}(B\bm W -\bm e)
$$
of the unconstrained vector $\bm W$ imposes this constraint in the sense of generating a realization of the conditional distribution given $B\bm W = \bm e$. In practice, one can calculate $BQ^{-1}$ by solving $k$ linear systems without explicitly calculating and storing $Q^{-1}$, and fast implementations exist when $Q$ is sparse and $k$ is very small. This approach is known as \emph{conditioning by kriging} \citep[see, e.g., equation (8) in ][]{Rue.al.2009,Cressie.1993}; it is available in \texttt{R-INLA}, and we use it for the implementation of the models presented here. Another possibility, applicable in a more specific setting by allowing us to directly condition the Gaussian vector $\bm W$ on $W_1=0$ (here using the first component without loss of generality), is to remove $W_1$ from $\bm W$, resulting in $\bm W_{-1}$. The precision matrix of $\bm W_{-1}$ conditional on $W_1=0$ then corresponds to $Q$ but with the first row and the first column removed. Since this approach is less general (specifically, in order to impose $Z^0(s_0)=0$, we require that a knot is placed at $s_0$), we here prefer the approach of conditioning through kriging. With respect to model structure, the difference between the two approaches is that conditioning through kriging does not fix the constraint in the prior model, but imposes it in the posterior model by applying the conditioning transformation during the Laplace approximations of INLA, while the second approach directly fixes the constraint in the prior model. In both cases, the condition is appropriately incorporated into the posterior model, and no notable differences arise in the posterior models returned by \texttt{R-INLA}.
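The conditioning-by-kriging transformation can be verified numerically. In the Python sketch below (a small dense toy precision matrix for illustration; in practice $Q$ is sparse and $Q^{-1}B^T$ is obtained by solving $k$ linear systems, never by forming $Q^{-1}$ explicitly), the transformed vector satisfies the constraint exactly, and the covariance of the linear map equals the usual Gaussian conditional covariance:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
L = rng.standard_normal((m, m))
Q = L @ L.T + m * np.eye(m)          # toy symmetric positive-definite precision
Sigma = np.linalg.inv(Q)             # Q^{-1}, explicit only for this small demo

B = np.zeros((1, m)); B[0, 0] = 1.0  # constrain W_1 = e
e = np.zeros(1)

# W | (BW = e)  =  W - Q^{-1} B^T (B Q^{-1} B^T)^{-1} (B W - e)
M = Sigma @ B.T @ np.linalg.inv(B @ Sigma @ B.T)
w = rng.standard_normal(m)           # stand-in for one realization of W
w_cond = w - M @ (B @ w - e)

# the linear map w -> (I - M B) w has covariance equal to the conditional covariance
P = np.eye(m) - M @ B
cov_transformed = P @ Sigma @ P.T
cov_conditional = Sigma - Sigma @ B.T @ np.linalg.inv(B @ Sigma @ B.T) @ B @ Sigma
```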
For our Red Sea data application, we found that the choice of residual process does not have a large impact on results. The option of using the form $\{Z(s)\}-Z(s_0)$ performed slightly better overall, and we therefore used this method for the results presented in Sections~\ref{sec:spatialInference} and~\ref{sec:spacetimeInference}. A comparison of results using the two different approaches is provided in Appendix~\ref{app:Z0comparison}.
\subsection{Space-time Gauss--Markov models}\label{subsec:spacetime}
Inference on spatial conditional extremes is usually based on replicated observations, corresponding to extreme events of the spatial process $\{X(s)\}$, and in the case of space-time conditional extremes on replicated observations of extreme episodes stretching over several time steps. In this section, we detail how to combine Kronecker products of precision matrices, appropriate observation matrices, and the conditioning approaches outlined in Section~\ref{subsec:residualConstraint}, to generate the latent variable representations of the residual processes $\{Z^0(s)\}$ using sparse precision matrices.
In a setting with $\ell \geq 1$ independent and identically distributed replicates of spatial Gaussian fields, the joint precision matrix of the $\ell$ fields considered at a fixed set of spatial locations can be represented as the Kronecker product $Q_{ST} = I_\ell \otimes Q_S$, where $I_\ell$ is the $\ell\times \ell$ identity matrix and $Q_S$ is a purely spatial precision matrix. More general time-stationary but temporally dependent sparse precision matrices are possible using the assumption of separable space-time dependence. Given sparse precision matrices $Q_S$ and $Q_T$, the latter representing the purely temporal covariance structure, the precision matrix for $\ell$ time steps of the space-time process corresponds to the Kronecker product $Q_{ST} = Q_T \otimes Q_S$. The precision matrix $Q_T$ corresponds to a stationary Gaussian time series (e.g., of a first-order auto-regressive process), assumed to have variance $1$ for the sake of identifiability of variance parameters; see Section~\ref{sec:implementation} for further details.
With \texttt{R-INLA}, the standard choice for modeling spatio-temporal dependence is temporal auto-correlation for $Q_T$. Using discrete and equidistant time steps, we consider the stationary space-time process $\{W(s,t)\}$, with auto-correlation parameter $\rho\in(-1,1)$, given as
\begin{align}
\{W(s,1)\} &= \{\varepsilon_1(s)\}, \notag \\
\{W(s,t+1)\} &= \rho \{W(s,t)\} + \sqrt{1-\rho^2}\{\varepsilon_{t+1}(s)\}, \quad t=1,2,\ldots, \label{eq:ar1}
\end{align}
where $\{\varepsilon_t(s)\}$, $t=1,2,\ldots$, are independent Gaussian random fields with Mat\'ern covariance, and $\{W(s,t)\}$ and $\{\varepsilon_t(s)\}$ possess the same variance. \edit{In our setting, this auto-regressive structure is only used to model temporal dependence within single extreme episodes, and there is no assumption of dependence between different extreme events.} The space-time precision matrix for the Cartesian product of a collection of sites and times corresponds to the Kronecker product of the corresponding purely spatial Mat\'ern precision matrix $Q_S$, and the purely temporal $\ell\times\ell$ precision matrix $Q_T^{\mathrm{AR1}}$ of a stationary first-order auto-regressive process with marginal variance $1$, defined as follows for $\ell\geq 1$ time steps:
$$
Q_T^{\mathrm{AR1}} = \frac{1}{1-\rho^2}
\begin{pmatrix}
1 & -\rho & & & & \\
-\rho & 1+\rho^2 & -\rho & & & \\
& -\rho & 1+\rho^2 & -\rho & & \\
& & \ddots & \ddots & \ddots & \\
& & & -\rho & 1+\rho^2 & -\rho \\
& & & & -\rho & 1
\end{pmatrix}.
$$
The Kronecker product $Q_T^{\mathrm{AR1}}\otimes Q_S$ then has the following form:
$$
Q_{ST} = \frac{1}{1-\rho^2}
\begin{pmatrix}
Q_S & -\rho Q_S & \cdots & \cdots & 0 \\
-\rho Q_S & (1+\rho^2)Q_S & -\rho Q_S & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & -\rho Q_S & (1+\rho^2)Q_S& -\rho Q_S \\
0 & \cdots & \cdots & -\rho Q_S & Q_S
\end{pmatrix}.
$$
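Both building blocks can be checked directly in a few lines of Python (toy dimensions; \texttt{R-INLA} performs the analogous computations with sparse matrices): inverting the stationary AR(1) covariance $\rho^{|i-j|}$ recovers the tridiagonal $Q_T^{\mathrm{AR1}}$ above, and the covariance implied by the Kronecker precision is the Kronecker product of the marginal covariances, which is exactly separable space-time dependence:

```python
import numpy as np

rho, ell = 0.6, 5
idx = np.arange(ell)
Sigma_T = rho ** np.abs(idx[:, None] - idx[None, :])   # Cov[i,j] = rho^{|i-j|}

# tridiagonal AR(1) precision with unit marginal variance, as displayed above
Q_T = (np.diag(np.r_[1.0, np.full(ell - 2, 1 + rho**2), 1.0])
       - rho * np.eye(ell, k=1) - rho * np.eye(ell, k=-1)) / (1 - rho**2)

# a toy symmetric positive-definite spatial precision matrix Q_S
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
Q_S = A @ A.T + 4 * np.eye(4)

Q_ST = np.kron(Q_T, Q_S)             # separable space-time precision
Sigma_ST = np.linalg.inv(Q_ST)       # equals kron(Sigma_T, Sigma_S)
```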
We can modify the spatio-temporal Gaussian process to enforce $Z^0(s_0,t_0)=0$ in the corresponding residual process by analogy with the spatial setting in Section~\ref{subsec:residualConstraint}. The procedure for conditioning on $Z(s_0,t_0)=0$ carries over from the spatial setting without notable differences, and we now detail the alternative approach of using the construction $\{Z^0(s,t)\}=\{Z(s,t)\}-Z(s_0,t_0)$. Assume, without loss of generality, that the time $t_0$ with the observed conditioning value corresponds to the first time step of each extreme episode, as outlined in Section~\ref{subsec:RedSea_spacetime}, and that the same locations are observed during the $\ell$ time steps. We first define $A_{s_0,t_0}$ as the observation matrix for the conditioning site and time with dimension $1\times (m_Z\times \ell)$, where $m_Z$ is the number of latent variables for a single spatial replicate, as before. For instance, if $Z(s_0,t_0)$ corresponds to the first latent variable, then $A_{s_0,t_0}=(1,0,\ldots,0)$.
Using the notation for spatial observation matrices as defined in \eqref{eq:Aspatial}, the observation matrix $A_{ST}^0$ for one episode of the residual process $\{Z^0(s,t)\}=\{Z(s,t)\}-Z(s_0,t_0)$
is given by the modified block-diagonal matrix
$$
A_{ST}^0 =
\begin{pmatrix}
A_S & 0 & \cdots & \cdots & 0 \\
0 & A_S & 0& \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & A_S& 0 \\
0 & \cdots & \cdots & 0 & A_S
\end{pmatrix}
-
\begin{pmatrix}A_{s_0,t_0} \\ \vdots\\ \vdots \\ \vdots \\ A_{s_0,t_0}\end{pmatrix},
$$
with $\ell$ blocks on the diagonal; the subtracted matrix has one or several non-zero columns, each constant across its rows, representing the term $-Z(s_0,t_0)$. The representation $A_{ST}^0$ then coincides with $A_S^0$ in the case of purely spatial extreme episodes ($\ell=1$).
Finally, we take into account the replication structure with $n$ observed replicates of extreme spatial or spatio-temporal episodes.
By assuming that each replicate has the same design of spatial locations observed over $\ell$ time steps, we can write the overall observation matrix as the Kronecker product $A_{\mathrm{repl}}=I_n\otimes A_{ST}^0$ with the $n\times n$ identity matrix $I_n$.
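The assembly of $A_{ST}^0$ and of the replicated design can be sketched with Kronecker products as follows (Python with toy dimensions; for illustration we assume that $Z(s_0,t_0)$ corresponds to the first latent variable, as in the example above):

```python
import numpy as np

rng = np.random.default_rng(4)
m_Z, d, ell, n = 4, 3, 2, 3          # toy dimensions (illustrative)
A_S = rng.random((d, m_Z))           # spatial observation matrix for one time step

# row picking out Z(s0, t0); here assumed to be the first latent variable
A_s0t0 = np.zeros((1, m_Z * ell))
A_s0t0[0, 0] = 1.0

A_block = np.kron(np.eye(ell), A_S)                 # block-diagonal part
A_ST0 = A_block - np.repeat(A_s0t0, d * ell, axis=0)

w = rng.standard_normal(m_Z * ell)   # latent variables for one extreme episode
# every row of A_ST0 evaluates Z(s_i, t) - Z(s0, t0)

# n independent replicates (extreme episodes) sharing the same observation design
A_repl = np.kron(np.eye(n), A_ST0)
```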
We emphasize that we use the constructions of spatio-temporal processes based on \eqref{eq:ar1} for two purposes. First, we can specify the residual process $\{Z^0(s,t)\}$ by using $\{Z^0(s)\}$, with $s\in\mathcal{S}\subset\mathbb{R}^2$, in~\eqref{eq:ar1}. \edit{Second, we can define the prior structure for the functions $\alpha(s-s_0,t-t_0)$ or $\gamma(s-s_0,t-t_0)$ by using independent copies of $\alpha(s-s_0)$ or $\gamma(s-s_0)$, respectively, for the innovation process $\varepsilon$ in \eqref{eq:ar1}, with $t$ running from $1$ to $\ell$.} The latter case can be seen as the use of a Gaussian process prior for the coefficients of a tensor product spline basis, defined with respect to the dimensions of spatial distance and time lag.
The form of the process $\{W(s,t)\}$ used here exhibits separable dependence in space and time. At present, no more flexible, non-separable models indexed over continuous space are readily implemented within \texttt{R-INLA}, although the possibility of such models has been discussed by \citet{Bakka.al.2018}. \edit{\cite{Simpson.Wadsworth.2020} consider the case for using non-separable dependence forms within spatio-temporal conditional extremes models. They conclude that allowing for non-separability in the normalizing functions (e.g., where $a_{(s,t)}$ cannot be decomposed additively or multiplicatively into a purely spatial and a purely temporal term) is more important than in the residual process} since these capture more of the structure in the model. Within \texttt{R-INLA}, the semiparametric specification of the first normalizing function $a_{(s,t)-(s_0,t_0)}$ allows for flexible, non-separable structure in the posterior estimate of the function (which is not to be confused with the fact that its prior dependence structure is separable). More complex, non-separable parametric forms of the function $b_{(s,t)-(s_0,t_0)}$ could be estimated by analogy with the spatial case. We conclude that using a separable form of $\{W(s,t)\}$ here should be sufficient.
\subsection{Implementation using the \texttt{R-INLA} software}\label{sec:implementation}
\edit{For examples of the implementation of our proposed approach in \texttt{R-INLA}, we have made annotated code available on GitHub (\url{https://github.com/essimpson/INLA-conditional-extremes}). When fitting models using this approach, the default \texttt{R-INLA} output consists of discretized versions of univariate posterior densities for the components of the latent Gaussian model terms (including the linear predictor components) and hyperparameters of the model. Moreover, standard summaries of these univariate posteriors are provided, including posterior means, medians and $95\%$ credible intervals. It is also possible to request further specific outputs when calling the \texttt{inla} function, which carries out the estimation in \texttt{R}, such as estimates of the WAIC and CPO values. If required, one can also obtain approximate posterior samples after having estimated the model, which allows for posterior inference using Monte Carlo estimation for more complex quantities that are not part of the output that can be provided directly.}
While standard functionality available in the \texttt{R-INLA} library allows for straightforward implementation of the unconstrained SPDE model and auto-regressive structures for dimensions $D=1,2$, as presented in Sections~\ref{subsec:SPDE} and \ref{subsec:spacetime}, respectively, more specific extensions are required for imposing the condition $Z^0(s_0,t_0)=0$.
The \texttt{R-INLA} package provides the precision matrices of the unconstrained latent spatial process $\{Z(s)\}$. Space-time processes $\{Z(s,t)\}$, and independent replications of spatial or spatio-temporal processes, are then handled internally by \texttt{R-INLA}. To estimate model components of type \edit{$x^\beta\left[\{Z(s,t)\}-Z(s_0,t_0)\right]$ where the parameter $\beta$ does not need to be estimated through INLA}, we can simply modify the observation matrix $A_{\mathrm{repl}}$ and give it as an input to the estimation routine. As to imposing the constraint where we condition on $Z(s_0,t_0)=0$, the conditioning-by-kriging approach using a matrix $B$ and a vector $\bm e$ is already implemented for the spatial $\{Z(s)\}$ process ($D=2$), and can be used for spatial extreme episodes with $\ell=1$. Similarly, for $D=1$ the condition $Z(0)=0$ can be set through a flag in \texttt{R-INLA}, and we will deploy this mechanism to constrain priors of spline functions used to model the functions $\alpha(s-s_0)$ and $\gamma(s-s_0)$ in \eqref{eq:modelgeneral}.
However, space-time models ($\ell>1$) with temporal auto-regression, where the condition is active only for exactly one of the $\ell$ time steps, are not possible through this mechanism in \texttt{R-INLA}. Similarly, \edit{variances of the residual space-time process $x^\beta\{Z^0(s,t)\}$ that vary over the $\ell$ time steps, with non-stationarity expressed through hyperparameters to be estimated, are not directly available.}
Many additive components of the latent model that are not directly available through standard mechanisms in \texttt{R-INLA} can be implemented manually through its \texttt{rgeneric} function. This requires us to manually define functions that return the precision matrix, the (deterministic) mean function (if different from $0$), and the prior densities of the hyperparameters of the component to be set up.
\edit{In particular, the parameter $\beta$ may be treated as a hyperparameter to be estimated.} Moreover, \edit{we could estimate a parametric mean function $a_{(s,t)-(s_0,t_0)}(x)$ in $a_{(s,t)-(s_0,t_0)}(x)+x^\beta\{Z^0(s,t)\}$}, where $a_{(s,t)-(s_0,t_0)}(x)$ depends on hyperparameters but does not involve any of the latent Gaussian components gathered in the vector $\bm W$. Finally, the conditioning on $Z(s_0,t_0)=0$ in the spatio-temporal setting ($\ell>1$) can be imposed by combining an unconditional \texttt{rgeneric}-model with the conditioning through kriging technique available as a standard mechanism within \texttt{R-INLA}. In all operations involving large precision matrices, it is crucial to use appropriate sparse matrix objects and sparse matrix operations in the \texttt{R} language.
In Section~\ref{subsec:spatialModels}, we proposed a cross validation procedure that involved removing and subsequently predicting observations in a particular region of the spatial domain. We now highlight that in \texttt{R-INLA}, this is straightforward to achieve by replacing the data for the cross validation by missing data flags, since these values will automatically be estimated, e.g., through the posterior mean of the ``fitted values''.
\section{Discussion}\label{sec:discussion}
The aim of this paper was to develop an inferential approach for spatial and spatio-temporal conditional extremes models, by exploiting latent Gaussian processes within the SPDE framework, and with efficient inference carried out using \texttt{R-INLA}. A benefit of this method is that we are able to handle more spatial or spatio-temporal locations than is possible using existing likelihood-based techniques. In principle, the Laplace approximations carried out within INLA could also be used for frequentist inference without specifying prior distributions for hyperparameters, but we emphasize that the Bayesian framework comes with some valuable benefits, such as the control of model complexity via the use of penalized complexity priors. High-dimensional inference was facilitated by accepting some modest restrictions on the modeling setup. Firstly, we only considered inference based on a single conditioning location. As mentioned in Section~\ref{sec:sensitivity}, in other contexts sensitivity to the choice of conditioning location has been reduced by use of composite likelihoods to incorporate all potential conditioning locations. However, this comes at a computational cost, with a further much larger cost to assess uncertainty via the bootstrap. Secondly, we only allow for the residual process to have a Gaussian form. In many applications this is likely to be adequate, but may lead to problems if the domain of the data is sufficiently large that there is approximate independence in the extremes at long distances. This is because under independence, we expect $\alpha(s-s_0) =0$, $\gamma(s-s_0) =0$ and $\beta = 0$, such that $\{X(s)\}|[X(s_0)=x] = \{Z^0(s)\}$, but there is a mismatch between the marginals of $\{X(s)\}$ (Laplace) and $\{Z^0(s)\}$ (Gaussian). \citet{Wadsworth.Tawn.2019} dealt with this by allowing more general forms for the margins of $\{Z^0(s)\}$, but where this is not necessary, use of untransformed Gaussian processes is certainly more efficient.
In principle, non-Gaussian responses can be handled within \texttt{R-INLA} by using a response distribution (i.e., a ``likelihood model'') different from the Gaussian; however, due to the conditional independence assumption with respect to the latent Gaussian process, it may be difficult to obtain models that realistically reflect the spatio-temporal smoothness of observations. In contrast to the existing inferential approach, INLA allows us to estimate flexible semiparametric specifications for the functions arising in the mean of the Gaussian process of conditional extremes, and the estimation and uncertainty assessment is performed jointly with all other model parameters.
Since we construct models using a single conditioning site, $s_0$, but may subsequently assume that the fitted model applies at other conditioning locations, another important consideration is the choice of a suitable position for $s_0$. There are certain aspects to take into account here, as highlighted in Section~\ref{sec:sensitivity}. For instance, one may wish to choose $s_0$ so that $\max_{i=1,\dots,d}\|s_i-s_0\|$ takes its largest value, as this will provide more reliable estimates for the spline functions $\alpha(s-s_0)$ and $\gamma(s-s_0)$ at the longest distances. On the other hand, choosing $s_0$ towards the edge of the spatial domain may mean that it is less likely to be representative of the full set of locations. These two considerations should be balanced in the selection of $s_0$. Even when inference on parameters has been made using a single conditioning location, our assumption of spatial stationarity means that it is still possible to infer conditional probabilities or expectations for alternative conditioning sites or events. In particular, \citet{Wadsworth.Tawn.2019} demonstrate how to make inference on quantities of the form
\[
\mathrm{E}[g(\{X(s)\})|\max_{1\leq i \leq d} X(s_i)>u],
\]
for a function $g(\cdot)$ of interest, which could be exploited in our setting just as easily.
In other application contexts, the analysis of non-stationarities in conditional extremes with respect to $s_0$ may be of interest, such that the assessment of differences between models fitted at different conditioning locations $s_0$ is an inferential goal in itself. The local modeling suggested by the conditional extremes approach makes sense if we have a large study area with possible non-stationarities, but are mostly interested in inferences on local features. For example, with the Red Sea surface temperature data, one could choose $s_0$ as a representative site of one coral reef, or several closely located coral reefs, although we have not done this here. In climate studies, loosely speaking ``climate'' is often considered as pertaining to the characteristics of the marginal distribution, while ``weather'' is additionally driven by the local spatio-temporal dependence; with the conditional extremes models, we could also consider the properties of the $s_0$-conditioned model as part of the ``climate'' at $s_0$.
As discussed in Section~\ref{sec:sensitivity}, \cite{Wadsworth.Tawn.2019} and \cite{Simpson.Wadsworth.2020} use a composite likelihood approach to combine information across several conditioning sites. While this is also a possibility in our setting, we would lose some of the benefits over the classical likelihood framework since the uncertainty estimation becomes awkward in a Bayesian context, though consistency of estimators is preserved \citep{Soubeyrand2015}. An alternative may be to obtain separate estimates for different conditioning locations, and combine these via some weighted approach, i.e., perform model averaging either in the domain of the models' likelihood or in the domain of their predictions. This provides a potential avenue for further work.
While this was not necessary for our Red Sea data example, it would be relatively straightforward to include covariates within the latent Gaussian structure. For instance, a distant-dependent variance model may be more appropriate in certain cases, and such an adaptation would be possible within the INLA framework by suitable modification of the models outlined in Table~\ref{tab:spatialModels}. As is generally the case with covariate modeling, the difficulty here is in choosing relevant covariates whose influence can be easily interpreted. For some scenarios, the effect of a particular covariate may already be known, and the modeling may benefit from this approach. Another technique that may be useful in certain settings is censoring, which is also possible within the INLA framework via the inclusion of a censored Gaussian response for the likelihood of observations in Section~\ref{sec:generalities}; see \citet{Zhang.al.2019} for an MCMC-implementation in a similar context.
A further issue linked to modeling non-stationarity is the assumption of isotropy that we place on the underlying spatial process. Indeed, the results in Appendix~\ref{app:chi} show that there is some violation of this assumption in our application. \cite{Wadsworth.Tawn.2019} deal with anisotropy by including a transformation of the spatial coordinates within the modeling procedure. This approach is also adopted by \cite{Simpson.Wadsworth.2020}, and the resulting transformation for the sea surface temperature data in the northern Red Sea is very small. Such a transformation is not incorporated within the standard \texttt{R-INLA} set-up, but the anisotropy parameters could be estimated using a \texttt{generic} model. In the spatial setting, \cite{Richards2021} propose a deformation technique to deal with non-stationarity in extremal dependence features. This allows for anisotropy to be handled as a preliminary modeling step, which it would be possible to do in the context of conditional spatial extremes.
Our approach is not fully Bayesian, since the marginal transformation of the data at each spatial location to a standard Laplace distribution is carried out separately from the dependence modeling. This approach appears sufficient, particularly since simultaneous estimation of the margins and the dependence within the INLA framework would be intricate and would likely force us to resort to MCMC estimation, sacrificing the simplicity and speed of the INLA implementation with big datasets.
There are several aspects that we have had to consider as part of the implementation of the conditional extremes models in INLA, with some of these being more important than others. We found the priors of the hyperparameters to have minimal importance, as indicated by posteriors with small credible intervals. This is likely due to the large number of observations we had available. We also observed very similar results when setting the SPDE parameter to $\zeta=1.5$ or $\zeta=2$, so for our data, the smoothness of the Gaussian field does not have a significant impact compared to other aspects of the model. As we may have expected, choosing appropriate normalizing functions is hugely important, and the forms of these may need to be tailored to the specific data application.
\paragraph{Data and code} Code to implement the models in this paper is available online in the GitHub repository \url{https://github.com/essimpson/INLA-conditional-extremes}, and the sea surface temperature data can be downloaded at \url{https://marine.copernicus.eu}.
\bibliographystyle{apalike}
\section{Acknowledgements} We thank Beno\^{i}t Fauqué for sharing the STO data of Fig.~\ref{fig:STOcomparison}. C.H.L. acknowledges support from NSF GRFP DGE-1746045. G.G.G.-V. acknowledges support from the Vice-rectory for Research at the University of Costa Rica under project No. 816-C1-601.
\newpage
\section{Introduction}
The size of a galaxy, as measured by its half-mass radius $R$, for example,
is among the most basic of its properties. Together with the mass $M$,
the size $R$ determines the binding energy, $-E\approx{GM^2}/(4R)$, and hence
the energy radiated away during the formation of the galaxy. For galactic disks,
with stars and gas on nearly circular orbits with rotation velocity $V_{\rm{rot}}$,
the size $R$ is determined by the angular momentum $J\approx{MRV_{\rm{rot}}}$,
which in turn determines the energy $E=-\tfrac{1}{2}MV_{\rm{rot}}^2\approx-{G^2M^5}/(8J^2)$.
The basic description of galaxies in general consists of $M$, $R$, and $V_{\rm{rot}}$,
or equivalently $M$, $E$, and $J$, while for disk-dominated galaxies, any two of
these quantities suffice.
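The coefficient in the last relation follows from combining the two preceding expressions; as a quick check, using the same approximations as above,

```latex
\begin{align*}
  \tfrac{1}{2}MV_{\rm{rot}}^2 \approx \frac{GM^2}{4R}
    \quad&\Rightarrow\quad V_{\rm{rot}}^2 \approx \frac{GM}{2R},\\
  R = \frac{J}{MV_{\rm{rot}}}
    \quad&\Rightarrow\quad V_{\rm{rot}} \approx \frac{GM^2}{2J},\\
  E = -\tfrac{1}{2}MV_{\rm{rot}}^2
    \quad&\Rightarrow\quad E \approx -\frac{G^2M^5}{8J^2}.
\end{align*}
```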
As a result of the hierarchical growth of galaxies, we expect their masses and
radii to increase with cosmic time and thus to decrease with redshift. In the
simplest models of galaxy formation, the sizes of the baryonic components of galaxies are,
on average, proportional to the sizes of their surrounding dark matter halos.
For galactic disks, this proportionality in sizes follows directly from the assumed
proportionality of the specific angular momentum of baryons and dark matter resulting from
tidal torques in the early stages of galaxy formation \citep{Fall:1980up,Mo:1998hg}.
This assumption underlies practically all of the semianalytical models of galaxy formation
in current use \citep[e.g.,][]{Cole:2000fl,Croton:2016jg}. Recent hydrodynamical simulations
of galaxy formation confirm the approximate proportionality between the specific angular momentum
of galaxies and their dark matter halos \citep{Genel:2015kp,Pedrosa:2015dh,Teklu:2015ev,Zavala:2016ki}.
There have been numerous searches for the expected decrease in galactic sizes with redshift
based on measurements of deep images taken with the \emph{Hubble Space Telescope} (\emph{HST}\xspace) over the
past dozen years \citep[e.g.,][]{Ferguson:2004dt,Hathi:2008ca,Mosleh:2012cw}. These searches
all find that galaxies were smaller in the past, by roughly the predicted amount,
although there are significant differences in the precise decline of galactic sizes with
redshift among these studies (compare, e.g. \citealt{Shibuya:2015bj} and \citealt{CurtisLake:2016bm}).
Part of the discrepancy among these results stems from the fact that the apparent evolution in
sizes depends on how galaxies at different redshifts are compared, whether at fixed
stellar mass or luminosity or at variable stellar mass or luminosity.
\cite{Kravtsov:2013cy} used stellar mass--halo mass (SMHM) relations derived via the
technique of abundance matching to compare the observed sizes of present-day galaxies
with the sizes of their matched dark matter halos in cosmological $N$-body simulations.
He found that the sizes of galaxies at $z=0$ are proportional on average to the sizes of
their halos. Furthermore, the coefficient of proportionality is consistent with a simple
model in which galactic disks grow with approximately the same specific angular momentum as their
halos until $z \sim 2$ and then stop growing after that. The question immediately arises whether
the same or a different relation holds between the sizes of galaxies and their halos at
high redshifts. The purpose of this paper is to answer this question.
The advantage of comparing the sizes of galaxies at multiple redshifts with
the sizes of their matched halos at the same redshifts, as we do here,
is that the results are then expressed directly in simple, physically meaningful terms.
This framework also helps to clarify the results of previous searches
for the evolution of galactic sizes.
There are already a couple of indications that the sizes of galaxies and their
halos evolve in lockstep. First, semiempirical models of galaxy formation that
make this assumption agree better with deep \emph{HST}\xspace images than the same models with
different assumptions about the evolution of galactic sizes \citep{TaghizadehPopp:2015hp}.
Second, recent measurements of the sizes and rotation velocities of galactic disks
at $1 < z < 3$ and $0.2 < z < 1.4$ indicate that they have approximately the same
specific angular momenta
as their dark matter halos \citep{Burkert:2016fr,Contini:2016fn}. While these results
are suggestive, it is still important to make a direct, independent comparison of the
sizes of high-redshift galaxies with the sizes of their matched halos,
the investigation we describe here.
The plan for the remainder of this paper is the following. In Section \ref{sec:data},
we describe our sample of galaxies and measurements of their sizes and other properties.
In Section \ref{sec:matching}, we discuss the abundance-matching method and its implementation
with four different SMHM relations. In Section \ref{sec:results},
we present the results of our comparison of galaxy and halo sizes, and
in Section \ref{sec:errors}, we discuss the uncertainties in these results.
We discuss some implications of our results in Section \ref{sec:disc}.
We show the connection between the galaxy size--halo size relation and the
more familiar galaxy size--stellar mass relation in an appendix.
All magnitudes quoted in this paper are in the AB system, and we assume the following
cosmological parameters: $h=0.7$, $\Omega_m=0.27$, and $\Omega_\Lambda=0.73$.
\floattable
\begin{deluxetable}{lcccccc}
\tabletypesize{\small}
\tablecolumns{7}
\tablecaption{Galaxy Sample Sizes\label{tab:sample}}
\tablehead{\colhead{Redshift} & \colhead{Wide} & \colhead{Deep} & \colhead{HUDF} & \colhead{Total} & \colhead{${z_{\rm{med}}}$} & \colhead{$M_{*,\rm{low}}$\tablenotemark{a}} \\
& & & & & & \colhead{($M_\odot$)}}
\startdata
$0.0 < z < 0.5$ & 4388 & 923 & 50 & 5361 & 0.34 & $1.0\times10^7$ \\
$0.5 < z < 1.0$ & 9706 & 2435 & 116 & 12257 & 0.73 & $5.0\times10^7$ \\
$1.0 < z < 1.5$ & 6666 & 1395 & 113 & 8174 & 1.23 & $8.2\times10^7$ \\
$1.5 < z < 2.0$ & 5152 & 1224 & 90 & 6466 & 1.70 & $1.7\times10^8$ \\
$2.0 < z < 2.5$ & 2580 & 727 & 47 & 3354 & 2.23 & $2.1\times10^8$ \\
$2.5 < z < 3.0$ & 1483 & 497 & 54 & 2034 & 2.69 & $3.8\times10^8$ \\
\hline
All Redshifts & 29975 & 7201 & 470 & 37646 & \nodata & \nodata \\
\enddata
\tablenotetext{a}{Typical stellar mass of the galaxies from HUDF with
$26.6$ mag $< H_{160} < 26.8$ mag
and near the median of each redshift bin. In the lowest redshift bin, we impose a hard cut
in stellar mass at $10^7\,M_\odot$.}
\end{deluxetable}
\section{Observations}\label{sec:data}
For this study, we need a galaxy sample with homogeneous data quality
that enables accurate size measurements. \emph{HST}\xspace images are required because
galaxies at $z>1$ are generally smaller
than $1\arcsec$. We also need a galaxy sample
with good constraints on redshifts, stellar masses, and star formation rates, so that
we can connect galaxies to dark matter halos and distinguish star forming galaxies
from quiescent galaxies. The Cosmic Assembly Near-infrared Deep Extragalactic Legacy
Survey (CANDELS) is the best data set currently
available for this study: all five CANDELS fields, covering $\approx 800$ arcmin$^2$ in total,
have \emph{HST}\xspace images at optical and near-IR wavelengths with uniform quality
\citep{Grogin:2011hx,Koekemoer:2011br}. The high
angular resolution of \emph{HST}\xspace ($\lesssim0\farcs15$ in the near-IR) is able to resolve
most galaxies at $z\leq3$. In addition, ancillary spectroscopic and imaging data
combine with \emph{HST}\xspace data to provide tight constraints on galaxy redshifts, stellar masses,
and star formation rates. CANDELS has three tiers of depth. The Wide region covers
$\sim 675$ arcmin$^2$ to a $5\sigma$ limiting magnitude $H_{\rm 160} \sim 27.3$ mag in a
$0\farcs17$ aperture. The Deep region covers $\sim 125$ arcmin$^2$ to $H_{\rm 160} \sim 28.1$ mag.
The survey also encompasses the Hubble Ultra-Deep Field (HUDF)---the HUDF09
\citep{Bouwens:2010dk} and HUDF12 (\citealp{2013ApJ...763L...7E,2013ApJS..209....3K};
see also \citealp{2013ApJS..209....6I})---covers
$\sim 5$ arcmin$^2$ to $H_{\rm 160} \sim 29.7$ mag.
We take the photometry, spectroscopic and photometric redshifts, and stellar-mass estimates from
the CANDELS-team catalogs (\citealt{Guo:2013ig,Galametz:2013dd,Santini:2015hh,2016arXiv161207364N};
G. Barro et al. 2017, in preparation; M. Stefanon et al. 2017, in preparation). The size estimates
are taken from \cite{vanderWel:2012eu}.
We select galaxies in the CANDELS survey at $0 < z < 3$ for this study. We cap our
galaxy redshifts at $z=3$ because this is the highest redshift that \emph{HST}\xspace still
samples redward of rest-frame 4000\AA, and because selection biases induced by
cosmological surface brightness dimming are expected to be relatively mild for $z \leq 3$
\citep{TaghizadehPopp:2015hp}. Sources are detected using SExtractor \citep{Bertin:1996ww}
in $H_{160}$. Roughly 10\% of these sources have high-quality spectroscopic
redshifts, which are used in calibrating the photometric redshifts
for the remaining sources.
Galaxy sizes are measured in $H_{160}$ and $J_{125}$
by fitting a single S\'ersic profile to each galaxy using
GALFIT \citep{Peng:2010eh}. We define galaxy sizes as effective radii
($R_{\mathrm{eff}}\xspace$) along the major axis, the radii within which S\'ersic profiles contain half of the total
integrated light. We discuss the deprojection from 2D to 3D later when
comparing with theoretical expectations. Our overall sample is dominated by late-type
galaxies at all redshifts, whose disk components
have the same 2D and 3D half-light radii.
Using simulations with artificial galaxies and comparisons of
measurements in different imaging depths, \cite{vanderWel:2012eu} concluded that
brighter than $H_{160}=24.5$ mag in the Wide region, the systematic (random)
errors of $R_{\mathrm{eff}}\xspace$ measurements are below $\sim$20\% (30\%). Meanwhile, the
systematic (random) errors of S\'ersic index $n$ measurements are below
$\sim$50\% (60\%). The quoted errors here are for galaxies with $n>3$, which
tend to have larger errors than galaxies with $n<3$. Therefore, we select all
galaxies brighter than $H_{160}=24.5$ mag in the Wide region,
$H_{160}=25.2$ mag in the Deep region, and $H_{160}=26.7$ mag in the HUDF
(SExtractor-measured magnitudes). These
magnitude limits correspond to similar signal-to-noise limits.
In addition to magnitude cuts, we prune the sample as follows.
We reject all sources that have problematic photometry (generally
those at the borders of the image or falling on stellar diffraction
spikes). We eliminate sources that are identified as active galactic nuclei
(AGNs) via X-ray or IR spectral energy distributions (SEDs).
We discard as point sources all objects that have
half-light radii (measured by SExtractor) smaller than 2.6 pixels.
We enforce the following criteria to eliminate
galaxies with poor GALFIT fits: (1) the GALFIT measurement is
flagged as poor in
the catalogs from \cite{vanderWel:2012eu}; (2) the error in the measured $R_{\mathrm{eff}}\xspace$
exceeds $0.3R_{\mathrm{eff}}\xspace$; (3) the measured $n$ lies outside the range $0.1<n<8$,
which usually signals problematic fits. The GALFIT, AGN, and point-source
criteria combined reject roughly one-fourth of the sources that satisfy the magnitude cuts.
The numbers of sources that pass all the cuts above are listed in Table \ref{tab:sample}.
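The numerical cuts above amount to a simple boolean mask over the catalog columns. The sketch below is illustrative only: the column names and array layout are hypothetical stand-ins for the actual catalog fields.

```python
import numpy as np

def good_fit_mask(flag, re, re_err, n, hmag, maglim):
    """Boolean mask implementing the pruning cuts described in the text.

    flag    -- GALFIT quality flag (0 = good fit); hypothetical column
    re      -- measured effective radius R_eff
    re_err  -- uncertainty on R_eff
    n       -- measured Sersic index
    hmag    -- H_160 magnitude
    maglim  -- magnitude limit of the region (Wide/Deep/HUDF)
    """
    return ((flag == 0)                  # fit not flagged as poor
            & (re_err <= 0.3 * re)       # R_eff error not exceeding 0.3 R_eff
            & (n > 0.1) & (n < 8.0)      # Sersic index in the trusted range
            & (hmag <= maglim))          # brighter than the region's limit
```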
The existence of the very deep HUDF data allows us to test whether
selection effects, measurement biases, or the pruning procedure are
biasing our samples near their faint limits. In the top panels of Figure \ref{fig:pruning},
we compare the size distributions in the Wide region and the HUDF for
the magnitude range $23.5$ mag $<H_{\rm 160} < 24.5$ mag before and after pruning,
finding no significant difference. If the HUDF were picking up many
more low surface brightness objects, we would have expected to see them
show up in the tail of the distribution. Instead, we see more large-radius
objects in the Wide sample, most of which are pruned away as bad fits,
but without having much impact on the median $R_{\mathrm{eff}}\xspace$. A Kolmogorov-Smirnov test
yields $p$ values consistent with the samples being drawn from
the same underlying distribution. The bottom panels of Figure \ref{fig:pruning}
show the same comparison for the Deep region in the magnitude range
$24.2$ mag $<H_{160}<25.2$ mag.
We made a similar comparison for the stellar mass distributions,
also finding no statistically significant difference between the HUDF
and the Deep and Wide samples.
We have also estimated the completeness of our sample from the detection
efficiencies for the CANDELS survey derived by \cite{Guo:2013ig}. They inserted
artificial galaxies into images from the Wide, Deep, and HUDF regions and
analyzed them with SExtractor in the same way as the real survey to determine
the detection efficiency as a function of apparent magnitude $H_{160}$,
effective radius $R_{\mathrm{eff}}\xspace$, and S\'ersic index $n$ (see their Fig. 5). From
these results, we estimate that our sample as a whole is more than 85\%
complete. This high level of completeness helps to ensure that selection
biases have relatively little impact on our galaxy size--halo size
relations (estimated in Section \ref{sec:errors}).
\begin{figure*}[ht]
\plotone{re_comparison.pdf}
\caption{
Histograms of effective radius $R_{\mathrm{eff}}\xspace$ for galaxies in narrow magnitude ranges
in the Wide, Deep, and HUDF regions of our sample.
The top panels compare the distributions of $R_{\mathrm{eff}}\xspace$ in the Wide and HUDF
regions in the magnitude range $23.5$ mag $<H_{160}<24.5$ mag, while the bottom panels
compare the distributions of $R_{\mathrm{eff}}\xspace$ in the Deep and HUDF regions in the
magnitude range $24.2$ mag $<H_{160}<25.2$ mag.
For reference, the selection limits of our sample in these regions are $H_{160}
= 24.5$ (Wide), 25.2 (Deep), and 26.7 mag (HUDF).
The left and right panels compare the distributions before and after the sample
pruning described in Section \ref{sec:data}.
The legends in the panels list the median values of $R_{\mathrm{eff}}\xspace$ in the four histograms,
and the Kolmogorov-Smirnov probabilities that the histograms are drawn from the
same underlying distribution.
The consistency of the histograms in regions with different depths,
before and after pruning, indicates that the distribution of galactic sizes in
our sample is unbiased even near the selection limits.
\label{fig:pruning}}
\end{figure*}
Studying galaxy size evolution demands that we compare $R_{\mathrm{eff}}\xspace$ values at a similar
rest-frame wavelength across redshift bins, so that we can eliminate the
contributions from dust and stellar age gradients to the observed size evolution.
We follow the procedure in \cite{vanderWel:2014hi} to correct for galaxy color
gradients and place galaxy sizes on the same rest-frame wavelength. To do this,
we use galaxy sizes measured in $H_{160}$ for galaxies at $z>1.5$ and use the
sizes measured in $J_{125}$ at $z<1.5$. Color gradients that lead to different
galaxy sizes at different wavelengths are accounted for by a correction factor
that is a function of galaxy redshift, stellar mass, and galaxy type (late-type
or early-type). As the result of this color gradient correction, the measurements
are converted into the $R_{\mathrm{eff}}\xspace$ near rest-frame 5000\AA. The size
correction is typically only a few percent, but it does reach $\sim$60\% in some cases.
For more details about the color gradient correction, we refer the readers to
\cite{vanderWel:2014hi}, Section 2.2, and their equations (1) and (2).
Stellar masses and star formation rates are estimated by comparing our photometry
with model SEDs, adopting a
\cite{Chabrier:2003ki} initial mass function (IMF).
Here the stellar masses of galaxies include all luminous stars
and dark remnants at the time of observation (but not stellar ejecta).
This method of estimating
stellar masses has been extensively
tested in \cite{Mobasher:2015gp}, and they found that typical stellar mass
uncertainties are $\sim0.25$ dex for the magnitude limits adopted here. The
primary sources of systematic uncertainties are IMF and stellar evolution
models; for galaxies with strong nebular emission lines, systematic
uncertainties for stellar mass can be up to $\sim 0.4$ dex.
We restrict this study to galaxies with stellar masses $M_* > 10^7\,M_{\odot}$.
Above this limit, we include all galaxies brighter than the magnitude limits mentioned
above, where we are confident that our measurements are robust and unaffected
by size-dependent biases. For each redshift interval, we estimate the typical
stellar mass of the faintest galaxies $M_{*,\rm{low}}$ by taking the median SED-fitted
stellar mass estimate of galaxies within 0.1 mag of the HUDF magnitude limit.
The values of $M_{*,\rm{low}}$ are listed in Table \ref{tab:sample} and shown as
thick tick marks at the bottoms of Figures \ref{fig:RR_z0}--\ref{fig:RR_allz_ssfr}. SED-based
star formation rates can be uncertain by $\sim0.4$ dex \citep{Salmon:2015iz}; therefore, the
uncertainties in the specific star formation rates (sSFRs) are roughly $\lesssim 0.6$ dex for
our galaxy sample. In this paper, we select subsamples in the upper and lower 20\%
tails of the sSFR distribution. Because we are making a differential
comparison between the relatively large populations in these tails,
our results are not sensitive to the sSFR uncertainties.
\section{Abundance Matching}\label{sec:matching}
In this study, we employ the technique of abundance matching to estimate the mass
and hence the size of the dark matter halo associated with each galaxy in our
sample. In essence, this technique compares the measured sizes of observed galaxies
with the inferred sizes of matched halos in cosmological dark matter simulations.
The basic assumption is that the rank ordering of galaxy (stellar) masses $M_*\xspace$
reflects on average the rank ordering of halo (virial) masses $M_{200c}\xspace$, i.e.,
that the cumulative number densities of galaxy masses and halo masses are equal:
$n_g(>M_*\xspace) = n_h(>M_{200c}\xspace)$. This ansatz leads directly to a correspondence
between $M_*\xspace$ and $M_{200c}\xspace$ known as the stellar mass--halo mass
relation. While the assumption that galaxy masses and halo masses follow the same
rank ordering is a reasonable approximation for statistical studies based on large
samples such as ours, it cannot be exactly true for individual galaxies, which
experience stochastic events such as mergers and starbursts throughout their histories.
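In its zero-scatter form, this rank-ordering ansatz amounts to only a few lines of code. The sketch below is illustrative: `mstar` and `mhalo` are hypothetical arrays of galaxy stellar masses and simulated halo masses drawn from the same comoving volume (in practice, as described below, the matching is done between fitted mass functions rather than raw catalogs).

```python
import numpy as np

def abundance_match(mstar, mhalo):
    """Zero-scatter abundance matching: the i-th most massive galaxy is
    assigned the i-th most massive halo, so that n_g(>M_*) = n_h(>M_200c)
    holds by construction.  Requires len(mhalo) >= len(mstar)."""
    order_g = np.argsort(mstar)[::-1]        # galaxy indices, descending mass
    mhalo_desc = np.sort(mhalo)[::-1]        # halo masses, descending
    matched = np.empty_like(mstar, dtype=float)
    matched[order_g] = mhalo_desc[:mstar.size]
    return matched
```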
Given an SMHM relation, we compute the halo mass $M_{200c}\xspace$ of each galaxy
in our sample from its stellar mass $M_*\xspace$. We then compute the virial
halo radius $R_{200c}\xspace$ using the standard formula
\begin{equation}\label{eq:rvir}
R_{200c}\xspace = \left[\frac{3M_{200c}\xspace}{4\pi\cdot200\rho_{\rm{crit}}(z)}\right]^{1/3},
\end{equation}
where $\rho_{\rm{crit}}(z)$ is the critical density of the universe at redshift $z$.
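The formula above is simple to evaluate numerically. A minimal sketch, assuming the flat cosmology adopted in this paper ($h=0.7$, $\Omega_m=0.27$, $\Omega_\Lambda=0.73$) and working in units of $M_\odot$ and kpc:

```python
import math

G = 4.30091e-6          # gravitational constant in kpc (km/s)^2 / Msun
H0_KPC = 70.0 / 1e3     # Hubble constant in km/s/kpc (h = 0.7)
OM, OL = 0.27, 0.73     # matter and dark-energy density parameters

def rho_crit(z):
    """Critical density of the universe at redshift z, in Msun/kpc^3."""
    hz2 = H0_KPC**2 * (OM * (1.0 + z)**3 + OL)   # H(z)^2 for flat LCDM
    return 3.0 * hz2 / (8.0 * math.pi * G)

def r200c(m200c, z):
    """Halo virial radius R_200c in kpc for M_200c in Msun (Eq. (1))."""
    return (3.0 * m200c / (4.0 * math.pi * 200.0 * rho_crit(z)))**(1.0 / 3.0)
```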
In order to
assess how sensitive our results are to the choice of SMHM relation, we perform all
of our calculations with four different SMHM relations. All of these SMHM relations are
based on the \cite{Chabrier:2003ki} stellar IMF and
the same halo mass definition ${M_{200c}\xspace}$. They are plotted in Figures
\ref{fig:SMHM1}, \ref{fig:SMHM_gtypes}, and \ref{fig:smhm} and discussed below.
\emph{SMHM relation 1}. We have derived this new SMHM relation specifically for
this study so that it is as consistent as possible with the CANDELS data set, selection
criteria, and SED fitting procedure for our sample of galaxies with size measurements.
In particular, we combine the stellar mass function $n_g(>M_*\xspace)$
from \cite{Tomczak:2014hw} with our
determination of the halo mass function $n_h(>M_{200c}\xspace)$ from the Millennium-II
simulation \citep{BoylanKolchin:2009co}.
\cite{Tomczak:2014hw} derived the stellar mass function of galaxies at $0.2 < z < 3$
in three of the five CANDELS fields, using selection criteria and
procedures for estimating stellar masses similar to those for our sample,
as described in Section \ref{sec:data}.
We have compared our stellar masses with those derived by
\cite{Tomczak:2014hw}\footnote{These stellar masses are published by the
ZFOURGE team \citep{2016ApJ...830...51S} and can be downloaded from
\url{http://zfourge.tamu.edu}.}
and find no systematic offset and only a small scatter ($\sim 0.1$ dex).
\citeauthor{Tomczak:2014hw} fitted a double Schechter function to the
observed stellar mass function in
differential form $dn_g(>M_*\xspace)/dM_*\xspace$ in each of eight redshift bins. We adopt the
\citeauthor{Tomczak:2014hw} results directly for the three bins of width
$\Delta z=0.5$ covering the range $1.5 < z < 3.0$. However, for simplicity, we
combine their results for the four bins of width $\Delta z=0.25$ covering the range
$0.5 < z < 1.5$ into two bins of width $\Delta z=0.5$. In this step, we weight the
observed comoving densities of galaxies by the comoving volume in each $\Delta z=0.25$
bin and then fit a double Schechter function to the combined comoving densities in each
$\Delta z=0.5$ bin. For our lowest redshift bin, $0 < z < 0.5$,
we adopt the \citeauthor{Tomczak:2014hw}
stellar mass function in their lowest redshift bin, $0.2 < z < 0.5$, because it
agrees well with the one at $\langle z \rangle = 0.1$ derived by \cite{Moustakas:2013il}.
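For reference, the double Schechter form used in these fits, and the comoving-volume weighting used to merge the $\Delta z=0.25$ sub-bins, can be sketched as follows; any parameter values in an example call are illustrative, not the fitted CANDELS values.

```python
import numpy as np

def double_schechter(logm, logmstar, phi1, alpha1, phi2, alpha2):
    """Double Schechter stellar mass function in differential form,
    Phi(M) per dex per Mpc^3, with x = M / M*:
    Phi = ln(10) * exp(-x) * [phi1 * x^(alpha1+1) + phi2 * x^(alpha2+1)]."""
    x = 10.0 ** (logm - logmstar)
    return np.log(10.0) * np.exp(-x) * x * (phi1 * x**alpha1 + phi2 * x**alpha2)

def combine_bins(phi_a, vol_a, phi_b, vol_b):
    """Comoving-volume-weighted mean density of two redshift sub-bins,
    the weighting described in the text for merging dz = 0.25 bins."""
    return (phi_a * vol_a + phi_b * vol_b) / (vol_a + vol_b)
```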
Finally, we have derived the halo mass function $n_h(>M_{200c}\xspace)$ from the
Millennium-II simulation \citep{BoylanKolchin:2009co} at the snapshot closest
to the middle of each redshift bin and then matched this to the stellar mass function
as described above to obtain the SMHM relation.
As a check on this procedure, we have independently derived our own stellar mass
function from scratch by the $1/V_{\rm{max}}$ method for the galaxies in all five
CANDELS fields in the six $\Delta z=0.5$ bins (albeit with approximate
$K$-corrections in our estimates of $V_{\rm{max}}$). The resulting stellar mass function
is nearly identical to the rebinned one from \cite{Tomczak:2014hw}. This adds to
our confidence in the validity of SMHM relation 1,
which we regard as the primary SMHM relation in this study.
Because our galaxy sample covers a wider range in stellar mass than the
\citeauthor{Tomczak:2014hw} sample, we linearly extrapolate the SMHM relation in log--log space
to both lower and higher masses. The solid lines in Figure \ref{fig:SMHM1}
show the SMHM relation derived directly from the \citeauthor{Tomczak:2014hw} data,
while the dashed lines show the extrapolated parts of the SMHM relation.
\emph{SMHM relation 2}. \cite{Behroozi:2013fg} derived this SMHM relation from published
stellar mass and halo mass functions over a wide range of redshifts ($0 < z < 8$).
This is probably the most prevalent
SMHM relation in the literature. However, since it is based on stellar mass functions
that are quite different from those derived using CANDELS data, it is not ideal
for the present study. We use it mainly to gauge the sensitivity of our results to
different SMHM relations.
For consistency, we convert their halo mass $M_{\rm{vir}}$, defined using a
redshift-dependent overdensity factor $\Delta_{\rm{vir}}(z)$ \citep{Bryan:1998cc},
to our halo mass definition
$M_{200c}\xspace$. The conversion assumes an NFW halo mass profile and the halo mass--concentration
model calibrated in \cite{Diemer:2015bd}. The corrections are very small
in general ($< 0.1$ dex).
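The halo mass conversion can be sketched as follows for an NFW profile. This is an illustration of the procedure, not the exact implementation: the concentration is passed in by hand here, whereas in practice it would come from the adopted mass--concentration model.

```python
import math

def nfw_mu(x):
    """NFW mass-profile shape: M(<r) is proportional to nfw_mu(r / r_s)."""
    return math.log(1.0 + x) - x / (1.0 + x)

def mvir_to_m200c(m_vir, c_vir, delta_vir, rho_c):
    """Convert M_vir (mean overdensity delta_vir * rho_c) to M_200c for an
    NFW halo with concentration c_vir = R_vir / r_s.  rho_c is the critical
    density at the halo's redshift, in units consistent with m_vir."""
    r_vir = (3.0 * m_vir / (4.0 * math.pi * delta_vir * rho_c)) ** (1.0 / 3.0)
    r_s = r_vir / c_vir
    m_norm = m_vir / nfw_mu(c_vir)           # M(<r) = m_norm * nfw_mu(r/r_s)

    def excess(r):                            # mean enclosed density - 200 rho_c
        return 3.0 * m_norm * nfw_mu(r / r_s) / (4.0 * math.pi * r**3) - 200.0 * rho_c

    # The mean enclosed density decreases monotonically with r,
    # so bisect for the radius R_200c where it equals 200 rho_c.
    lo, hi = 1e-3 * r_vir, 10.0 * r_vir
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0.0 else (lo, mid)
    return m_norm * nfw_mu(0.5 * (lo + hi) / r_s)
```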
\emph{SMHM relation 3}. This is the same SMHM relation adopted by \cite{Kravtsov:2013cy}.
He derived his own SMHM relation out of concerns that previous relations used stellar
mass functions that are biased at both the high-mass and low-mass ends.
By using the same SMHM relation as \cite{Kravtsov:2013cy}, we can directly compare our
galaxy size--halo size relation with his at $z=0$.
\emph{SMHM relation 4}. There are several SMHM relations separated by
galaxy type at $z < 0.5$ in the literature, which we plot
in Figure \ref{fig:SMHM_gtypes}. These relations use different approaches
to deriving the ratio between stellar masses and
halo masses, ranging from abundance matching \citep{RodriguezPuebla:2015bk} to
weak lensing \citep{2015MNRAS.447..298H, Mandelbaum:2016eb} to a mixture of
the two methods \citep{Dutton:2010hf}. We adopt the SMHM relation from
\cite{RodriguezPuebla:2015bk}
because it has the largest dynamic range in halo mass and is in the middle of
the range spanned by the other type-dependent relations from the literature.
We use the \citeauthor{RodriguezPuebla:2015bk} SMHM relations for blue and
red central galaxies at $z=0$ for galaxies in our sample with S\'ersic index
$n$ below and above 2.5, respectively.
Since \citeauthor{RodriguezPuebla:2015bk} defined their halo mass using
$\Delta_{\rm{vir}}(z)$, we have applied the same conversion to $M_{200c}\xspace$ as we did
for SMHM relation 2.
We compare the four SMHM relations in Figure \ref{fig:smhm}. Evidently,
there are significant discrepancies among these SMHM relations, especially
the first and second, for which the differences can be up to $\sim 0.5$
dex at $z \sim 3$. Our SMHM relation 1, derived specifically for the CANDELS sample
at $0 < z < 3$, shows stronger redshift evolution than SMHM relation 2 from
\cite{Behroozi:2013fg}. As already noted, this difference comes mainly from the different
stellar mass functions used as input to these SMHM relations. Fortunately, as we
show in Sections \ref{sec:results} and \ref{sec:errors}, our main scientific results
are relatively insensitive to the adopted SMHM relation, largely due to the weak
dependence of halo size on halo mass ($R_{200c}\xspace \propto M_{200c}\xspace^{1/3}$).
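The last point is simple arithmetic: because $R_{200c}\xspace \propto M_{200c}\xspace^{1/3}$, any systematic offset between SMHM relations is diluted by a factor of three in log space.

```python
# A halo-mass offset in dex maps to one third of that offset in halo
# size.  For example, the ~0.5 dex disagreement between SMHM relations
# 1 and 2 at z ~ 3 shifts the inferred R_200c by under 50%.
dex_mass = 0.5
dex_size = dex_mass / 3.0        # about 0.17 dex
size_factor = 10.0 ** dex_size   # about 1.47 in R_200c
```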
\begin{figure}[ht]
\plotone{SMHM_1.pdf}
\caption{Ratio of galaxy stellar mass $M_*$ to halo virial mass $M_{200c}\xspace$ plotted
against $M_{200c}\xspace$ for our primary SMHM relation in six redshift bins
covering the
range $0 < z < 3$. We derived this SMHM relation by abundance matching
from an evolving stellar mass function appropriate for the CANDELS sample
\citep{Tomczak:2014hw} and the evolving halo mass function in the
Millennium-II simulation \citep{BoylanKolchin:2009co} as described in Section \ref{sec:matching}.
Solid lines are based directly on the stellar mass function
from \cite{Tomczak:2014hw}; we linearly extrapolate the SMHM relation in log--log space
to cover the stellar mass range of our sample (dashed lines).
\label{fig:SMHM1}}
\end{figure}
\begin{figure}[h]
\plotone{SMHM_gtypes.pdf}
\caption{Ratio of galaxy stellar mass $M_*$ to halo virial mass $M_{200c}\xspace$ plotted
against $M_{200c}\xspace$ for four low-redshift SMHM relations from the literature
that depend on galaxy color or type. These were derived by abundance
matching \citep{RodriguezPuebla:2015bk}, weak lensing
\citep{2015MNRAS.447..298H, Mandelbaum:2016eb},
or a combination of both techniques \citep{Dutton:2010hf}.
Three of the SMHM relations pertain to $z = 0$ and one to
$z = 0.5$ \citep{2015MNRAS.447..298H}.
Note the large discrepancies among these color- and
type-dependent SMHM relations. \label{fig:SMHM_gtypes}}
\end{figure}
\begin{figure}[h]
\plotone{SMHM_all.pdf}
\caption{Ratio of galaxy stellar mass $M_*$ to halo virial mass $M_{200c}\xspace$ plotted
against $M_{200c}\xspace$ for the four SMHM relations adopted in this work.
\emph{SMHM relation 1}: derived as described in Section 3 for all
galaxies at $0< z <3$ and displayed here at $0 < z < 0.5$ and
$2.5 < z < 3.0$, which bracket the relation at intermediate redshifts.
\emph{SMHM relation 2}: derived by \cite{Behroozi:2013fg} for all
galaxies at $0<z<8$ and displayed here at $z = 0.1$ and $z = 3.0$.
\emph{SMHM relation 3}: derived by \cite{Kravtsov:2013cy} for all
galaxies only at $z=0$.
\emph{SMHM relation 4}: derived by \cite{RodriguezPuebla:2015bk}
separately for blue and red galaxies only at $z = 0$.
Note that there are significant differences among these SMHM
relations, but because halo size depends weakly on halo mass
($R_{200c}\xspace \propto M_{200c}\xspace^{1/3}$), our main results are not sensitive to these
differences.\label{fig:smhm}}
\end{figure}
\section{Results}\label{sec:results}
The main results of this paper are displayed in Figures
\ref{fig:RR_z0}--\ref{fig:RR_allz_ssfr} and described in this section. The uncertainties
in these results, mostly stemming from the SMHM relation and morphological
classification, are discussed in Section \ref{sec:errors}.
Our first main result is that galaxy sizes are proportional to halo sizes
over a wide range of size and mass. Figure \ref{fig:RR_z0} shows galaxy
$R_{\mathrm{eff}}\xspace$ plotted against halo $R_{200c}\xspace$ at $0 < z < 0.5$ for the four different SMHM relations.
In each panel, the medians of
$\log R_{\mathrm{eff}}\xspace$ in bins of width $\Delta\log R_{200c}\xspace=0.15$ dex are plotted as pentagons,
and the 16th--84th percentile ranges as vertical bars; only the bins with
more than five galaxies are shown. The halo
radius limit corresponding to the reference stellar mass $M_{*,\rm{low}}$
from Table \ref{tab:sample}
is shown as a thick tick mark at the bottom of each panel.
The coefficient of
proportionality $\alpha$ in the relation $R_{\mathrm{eff}}\xspace=\alpha R_{200c}\xspace$ is nearly the same
in all four cases; the median values of $\alpha$ are 0.021, 0.025, 0.023,
and 0.024 for SMHM relations 1--4, respectively. These $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations
are approximately linear, but with some subtle differences depending on the adopted
SMHM relation.
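The binned statistics described above can be sketched as follows (a minimal illustration, assuming NumPy; variable and function names are ours, not from the paper's actual pipeline):

```python
import numpy as np

# Medians and 16th-84th percentile ranges of log Reff in bins of width
# 0.15 dex in log R200c, keeping only bins with more than five galaxies,
# as in the figures of this section.
def binned_medians(logR200c, logReff, width=0.15, min_count=6):
    edges = np.arange(logR200c.min(), logR200c.max() + width, width)
    centers, med, p16, p84 = [], [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (logR200c >= a) & (logR200c < b)
        if sel.sum() < min_count:   # "more than five galaxies" per bin
            continue
        y = logReff[sel]
        centers.append(0.5 * (a + b))
        med.append(np.median(y))
        p16.append(np.percentile(y, 16))
        p84.append(np.percentile(y, 84))
    return (np.array(centers), np.array(med),
            np.array(p16), np.array(p84))
```

Fitting a line of fixed unit slope to the binned medians then yields the normalization $\alpha$ quoted in the text.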
\citet{Kravtsov:2013cy} also found a linear relation, using completely independent
samples of galaxies at $z = 0$ and de-projected 3D half-\emph{mass}
radii $R_{1/2}$ rather than the projected 2D half-\emph{light} radii $R_{\mathrm{eff}}\xspace$. The solid line
in Figure \ref{fig:RR_z0} shows his
derived relation $R_{1/2}=\alpha^\prime R_{200c}\xspace$ with $\alpha^\prime=0.015$, assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$ for
pure-disk galaxies. The bulk
of our sample by number lies above this relation by ${\sim0.2}$ dex, agreeing better
at the high- and low-mass ends. There are a number of possible explanations for this offset,
one of them being the difference between 2D half-light (effective) and 2D half-mass radii.
\cite{Szomoru:2013gh} noted that for
the galaxies more massive than $5\times10^{10}~M_\odot$ at $0 < z < 2.5$, rest-frame $g$-band
2D half-light radii are on average $\sim$25\% larger than 2D half-mass radii
(presumably due to the influence of bulges), which could account for $\sim 0.1$ dex of the offset.
We will address other explanations below in connection with morphological
types, deprojection effects, and the redshift evolution.
Our second main result is that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations are offset for
late-type and early-type galaxies.
To separate morphological types, we split our
sample in two different ways: (1) high-$n$ (early-type)
and low-$n$ (late-type) subsamples, and (2) low-sSFR
(early-type) and high-sSFR (late-type) subsamples. We only include the highest
and lowest 20\% of the sample in either $n$ or sSFR in the hope that
this procedure will isolate disk-dominated from spheroid-dominated
galaxies. The resulting
$R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for late- and early-type galaxies using all four
SMHM relations are shown in Figures \ref{fig:RR_z0_sersic} and
\ref{fig:RR_z0_ssfr}.
We see in both Figures \ref{fig:RR_z0_sersic} and \ref{fig:RR_z0_ssfr} that
galaxies of different types follow sequences roughly parallel to the
$R_{\mathrm{eff}}\xspace\propto R_{200c}\xspace$ line with
an offset of $\sim 0.2$ dex at $0 < z < 0.5$. This result is relatively robust against
SMHM relation and morphological classification method: early-type
(high-$n$ or low-sSFR) galaxies have smaller $R_{\mathrm{eff}}\xspace$ than late-type (low-$n$ or
high-sSFR) galaxies at the same halo masses. The effect persists even if we
compare 3D half-light radii rather than 2D half-light radii $R_{\mathrm{eff}}\xspace$, although
with a smaller separation between the sequences. The parallel sequences of early-
and late-type galaxies in the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ diagram are reminiscent of the
parallel sequences of spheroid- and disk-dominated galaxies in the $J/M$ vs. $M$
diagram \citep{Fall:1983wu,Romanowsky:2012kb,Fall:2013du}. The latter is due to
a combination of different sizes (by a factor of $\sim$2) and different
rotation velocities (also by a factor of $\sim$2--3) of spheroid- and disk-dominated
galaxies of the same stellar mass.
This helps explain why our overall relation in Figure \ref{fig:RR_z0} is higher than
Kravtsov's at intermediate masses.
Our sample is dominated by late-type galaxies
($\sim$90\% have $n<2.5$), while Kravtsov's sample is dominated
by early-type galaxies ($\sim 80\%$ by number).
He noted that late-type galaxies are systematically larger in $R_{1/2}$ than early-type galaxies at intermediate
stellar masses, which is where we see the largest offset between these sequences in
Figure \ref{fig:RR_z0}.
The changing morphological
mix as a function of mass also helps explain the apparent curvature of the
overall relation in Figure \ref{fig:RR_z0}, because early-type galaxies
dominate the high- and low-mass ends of the relation.
Our third main result is that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation for
late-type galaxies is close to the predictions of the simple analytic model of
disk formation. The scale radius and
effective radius of an exponential disk embedded in a dark matter halo
with a virial (outer) radius $R_{200c}\xspace$ and a spin parameter $\lambda$ are given by
\begin{equation}\label{eq:RR_disk}
R_d = \frac{\lambda}{\sqrt{2}}R_{200c}\xspace
\end{equation}
and
\begin{equation}\label{eq:RR_eff}
R_{\mathrm{eff}}\xspace = 1.68~R_d,
\end{equation}
when the disk and halo have the same specific angular momentum ($J/M$). Equation
(\ref{eq:RR_disk}) is exact for isothermal halos (\citealp{Fall:1980up}; see their
Figure 3 and equation 42; \citealp{Fall:1983wu}, see his equation 4) and is
approximate for NFW halos with typical concentrations \citep{Mo:1998hg,Burkert:2016fr}.
This prediction is shown as the dashed lines in Figures \ref{fig:RR_z0_sersic} to
\ref{fig:RR_allz_ssfr} for $\lambda=0.035$, the peak of the universal spin
parameter distribution \citep{Bullock:2001kb,2007MNRAS.376..215B}. We find that
late-type galaxies at $0 < z < 0.5$ lie $\sim 0.2$ dex below the $J/M$
equality line; in other words, our late-type galaxies have slightly less specific
angular momentum than their dark matter halos. This offset is consistent with direct
measurements of specific angular momentum at $z=0$, which indicate $J/M$ retention factors
$\eta_j \sim 80\%\pm20\%$ for galactic disks \citep{Fall:2013du}.
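A quick numeric evaluation of the disk-model prediction quoted above (illustrative only; the 0.034 slope is read off Figure \ref{fig:RR_allz_sersic} as stated later in the text):

```python
import math

# Predicted proportionality constant for disks with the same J/M as
# their halos: Reff = 1.68 * (lambda / sqrt(2)) * R200c, evaluated at
# the peak of the spin-parameter distribution, lambda = 0.035.
lam = 0.035
alpha_pred = 1.68 * lam / math.sqrt(2.0)
print(f"alpha_pred = {alpha_pred:.4f}")     # ≈ 0.042

# Offset (in dex) of an observed late-type slope alpha ≈ 0.034 from
# this J/M-equality prediction:
offset_dex = math.log10(alpha_pred / 0.034)
print(f"offset = {offset_dex:.2f} dex")     # ≈ 0.09 dex
```

The $\sim$0.1 dex offset is consistent with retention factors $\eta_j$ somewhat below unity, as stated above.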
\begin{figure*}[ht]
\begin{center}
\plotone{RR_z0.pdf}
\figcaption{Galaxy effective radius $R_{\mathrm{eff}}\xspace$ plotted against halo virial radius $R_{200c}\xspace$ in the
lowest redshift interval ($0 < z < 0.5$) for the full sample of galaxies.
The four panels show results for SMHM relations 1, 2, 3, and 4 as indicated.
The faint gray dots represent individual galaxies, while the filled pentagons and vertical
bars indicate the median values and 16th--84th percentile ranges of $R_{\mathrm{eff}}\xspace$ in bins of
width 0.15 in $\log R_{200c}\xspace$.
The diagonal lines show the $R_{1/2}$--$R_{200c}\xspace$ relation at $z=0$ from \cite{Kravtsov:2013cy}
assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$.
The thick tick mark at the bottom of each panel indicates the halo size corresponding
to the reference stellar mass $M_{*,\rm{low}}$ listed in Table \ref{tab:sample}.
Note that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations are similar for the four different SMHM relations
and are roughly consistent with Kravtsov's results.
The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations are linear in a first approximation but exhibit some
curvature at high and low masses as a result of the changing mix of galaxy morphologies.
Compare with Figures \ref{fig:RR_z0_sersic} and \ref{fig:RR_z0_ssfr}.
\label{fig:RR_z0}}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\plotone{RR_z0_sersic_20pc.pdf}
\figcaption{Galaxy effective radius $R_{\mathrm{eff}}\xspace$ plotted against halo virial radius $R_{200c}\xspace$ in the
lowest redshift interval ($0 < z < 0.5$) for subsamples of galaxies with the lowest
and highest 20\% of the measured S\'ersic index $n$ as proxies for late-
and early-type galaxies, respectively.
The four panels show results for SMHM relations 1, 2, 3, and 4 as indicated.
The faint blue and red dots represent individual low-$n$ and high-$n$ galaxies,
respectively, while the filled blue squares, open red circles, and vertical bars
indicate the corresponding median values and 16th--84th percentile ranges of $R_{\mathrm{eff}}\xspace$
in bins of width 0.15 in $\log R_{200c}\xspace$.
The diagonal solid lines show the $R_{1/2}$--$R_{200c}\xspace$ relation at $z=0$ from
\cite{Kravtsov:2013cy} assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$, while the diagonal dashed
lines show the prediction for galactic disks with the same $J/M$ as
their surrounding halos.
The thick tick mark at the bottom of each panel indicates the halo size corresponding
to the reference stellar mass $M_{*,\rm{low}}$ listed in Table \ref{tab:sample}.
Note that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation for low-$n$ galaxies is systematically
above, and roughly parallel to, the relation for high-$n$ galaxies.
The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for both subsamples of galaxies are more linear
than the relations for the full sample. Compare with Figures \ref{fig:RR_z0} and \ref{fig:RR_z0_ssfr}.
\label{fig:RR_z0_sersic}}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\plotone{RR_z0_ssfr_20pc.pdf}
\figcaption{Galaxy effective radius $R_{\mathrm{eff}}\xspace$ plotted against halo virial radius $R_{200c}\xspace$ in the
lowest redshift interval ($0 < z < 0.5$) for subsamples of galaxies with the highest
and lowest 20\% of the measured sSFR as proxies
for late- and early-type galaxies, respectively.
The four panels show results for SMHM relations 1, 2, 3, and 4 as indicated.
The faint blue and red dots represent individual high-sSFR and low-sSFR galaxies,
respectively, while the filled blue squares, open red circles, and vertical bars
indicate the corresponding median values and 16th--84th percentile ranges of $R_{\mathrm{eff}}\xspace$
in bins of width 0.15 in $\log R_{200c}\xspace$.
The diagonal solid lines show the $R_{1/2}$--$R_{200c}\xspace$ relation at $z=0$ from
\cite{Kravtsov:2013cy} assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$, while the diagonal dashed
lines show the prediction for galactic disks with the same $J/M$ as
their surrounding halos.
The thick tick mark at the bottom of each panel indicates the halo size corresponding
to the reference stellar mass $M_{*,\rm{low}}$ listed in Table \ref{tab:sample}.
Note that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation for high-sSFR galaxies is systematically
above, and roughly parallel to, the relation for low-sSFR galaxies.
The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for both subsamples of galaxies are more linear
than the relations for the full sample. Compare with Figures \ref{fig:RR_z0} and \ref{fig:RR_z0_sersic}.
\label{fig:RR_z0_ssfr}}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\plotone{RR_T14_sersic_20pc.pdf}
\figcaption{Galaxy effective radius $R_{\mathrm{eff}}\xspace$ plotted against halo virial radius $R_{200c}\xspace$ at
different redshifts for subsamples of galaxies with the lowest and highest 20\%
of the measured S\'ersic index $n$ as proxies for late- and early-type
galaxies, respectively.
The six panels show results computed from SMHM relation 1
in redshift intervals of $\Delta z = 0.5$ covering the range $0 < z < 3$.
The faint blue and red dots represent individual low-$n$ and high-$n$ galaxies,
respectively, while the filled blue squares, open red circles, and vertical bars
indicate the corresponding median values and 16th--84th percentile ranges of $R_{\mathrm{eff}}\xspace$
in bins of width 0.15 in $\log R_{200c}\xspace$.
The diagonal solid lines show the $R_{1/2}$--$R_{200c}\xspace$ relation at $z=0$ from
\cite{Kravtsov:2013cy} assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$, while the diagonal dashed
lines show the prediction for galactic disks with the same $J/M$ as
their surrounding halos.
The thick tick mark at the bottom of each panel indicates the halo size corresponding
to the reference stellar mass $M_{*,\rm{low}}$ listed in Table \ref{tab:sample}.
Note that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for both low-$n$ and high-$n$ galaxies are
nearly constant with redshift, and that the one for low-$n$ galaxies is close to the
predicted relation for equality of $J/M$ in disks and halos. Compare with Figure \ref{fig:RR_allz_ssfr}.
\label{fig:RR_allz_sersic}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\plotone{RR_T14_ssfr_20pc.pdf}
\figcaption{Galaxy effective radius $R_{\mathrm{eff}}\xspace$ plotted against halo virial radius $R_{200c}\xspace$ at
different redshifts for subsamples of galaxies with the highest and lowest 20\%
of the measured sSFR as proxies for late- and early-type
galaxies, respectively.
The six panels show results computed from SMHM relation 1
in redshift intervals of $\Delta z = 0.5$ covering the range $0 < z < 3$.
The faint blue and red dots represent individual high-sSFR and low-sSFR galaxies,
respectively, while the filled blue squares, open red circles, and vertical bars
indicate the corresponding median values and 16th--84th percentile ranges of $R_{\mathrm{eff}}\xspace$
in bins of width 0.15 in $\log R_{200c}\xspace$.
The diagonal solid lines show the $R_{1/2}$--$R_{200c}\xspace$ relation at $z=0$ from
\cite{Kravtsov:2013cy} assuming $R_{\mathrm{eff}}\xspace=R_{1/2}$, while the diagonal dashed
lines show the prediction for galactic disks with the same $J/M$ as
their surrounding halos.
The thick tick mark at the bottom of each panel indicates the halo size corresponding
to the reference stellar mass $M_{*,\rm{low}}$ listed in Table \ref{tab:sample}.
Note that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for both high-sSFR and low-sSFR galaxies are
nearly constant with redshift, and that the one for high-sSFR galaxies is close to the
predicted relation for equality of $J/M$ in disks and halos. Compare with Figure \ref{fig:RR_allz_sersic}.
\label{fig:RR_allz_ssfr}}
\end{center}
\end{figure*}
Our fourth main result is that there is remarkably little evolution in the
$R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation from $z=3$ to $z=0$. This is shown
in Figures \ref{fig:RR_allz_sersic} and \ref{fig:RR_allz_ssfr}.
As in the previous diagrams, we select the highest
and lowest 20\% tails of the $n$ and sSFR distributions. We only show
results for SMHM relation 1, but we have checked that they
are similar for the other SMHM relations. Figures
\ref{fig:RR_allz_sersic} and \ref{fig:RR_allz_ssfr} show again that in all
redshift bins, late-type galaxies follow a nearly linear relation: $R_{\mathrm{eff}}\xspace=\alpha R_{200c}\xspace$.
At $0.5 < z < 3$, late-type galaxies have ${\alpha \approx 0.034}$ in
Figure \ref{fig:RR_allz_sersic} (${\alpha \approx 0.029}$ in
Figure \ref{fig:RR_allz_ssfr}) and lie close to the $J/M$ equality line
(within $\lesssim$ 0.1--0.2 dex) with no discernible evolution.
(There is a slight offset to smaller sizes in the late-type sample
when selected by sSFR rather than S\'ersic index.) This result agrees with recent direct
measurements of specific angular momentum at $0.2 < z < 1.4$ \citep{Contini:2016fn} and at
$1 < z < 3$ \citep{Burkert:2016fr}, which show that $J/M$ in galactic disks is
nearly the same as in their dark matter halos.
\cite{Kravtsov:2013cy} speculated that the sizes of galaxies grew in
proportion to the sizes of their halos until $z \sim 2$ and then stopped,
while their halos continued to grow in mass and size.
We find instead that the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations at $z < 2$ are
very similar to those at $z > 2$. Our $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for the
late-type galaxies at $z < 0.5$ have smaller amplitudes than those at
$z > 0.5$, indicating a possible slowdown in the growth of disks, but this
deviation is mild ($\sim$0.2 dex) and not established beyond all doubt (see below).
The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation for early-type galaxies
is also nearly constant. We see
in Figures \ref{fig:RR_allz_sersic} and \ref{fig:RR_allz_ssfr} that the
trend for early-type galaxies at all redshifts roughly parallels that
for late-type galaxies, but shifted down by $\sim 0.2$ dex at $0 < z < 0.5$
and by $\sim$ 0.2--0.3 dex at $0.5 < z < 3$. There is a slight hint of a ``turnover''
at the most massive end at $0 < z < 0.5$ (see Figures \ref{fig:RR_allz_sersic} and
\ref{fig:RR_allz_ssfr}). This turnover, if real, could be due to either
size-measurement biases (due to diffuse
outer halos surrounding central galaxies in groups and clusters) or
the breakdown of abundance matching for the group- or cluster-mass halos.
\section{Uncertainties}\label{sec:errors}
How robust are these results? The uncertainties in this study
potentially include measurement and statistical errors internal to the
CANDELS data set, as well as external systematic errors from the adopted
SMHM relations and stellar population models. Here we provide a brief
assessment of these uncertainties.
As noted in Section \ref{sec:data}, errors in the measurements of
effective radii $R_{\mathrm{eff}}\xspace$ (from fits to S\'ersic profiles) are relatively
small: $< 20\%$ (systematic) to $30\%$ (random). Even if these errors
were at the upper end of this range for all galaxies and varied
systematically with galactic masses and sizes, they would have a negligible
influence on the coefficient and exponent of the galaxy size--halo size relation:
$R_{\mathrm{eff}}\xspace = \alpha R_{200c}\xspace^{\beta}$ with $|\Delta\alpha/\alpha| \la 0.02$ and
$|\Delta\beta| \la 0.08$ (assuming a $\sim 20\%$ or smaller systematic deviation
in $R_{\mathrm{eff}}\xspace$ for a factor of 10 or more variation in $R_{200c}\xspace$). Because the
sample size in this study is so large ($N \sim 38000$), the effects of
random errors in the size measurements on the mean $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations
are even smaller. In a situation like this, with negligible internal errors,
formal tests of goodness of fit are not informative, and we do not attempt them.
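The quoted bound on the exponent follows from back-of-the-envelope arithmetic (the bound on $\alpha$ depends additionally on the pivot point of the fit and is not reproduced here):

```python
import math

# A systematic deviation of up to 20% in Reff, drifting coherently over
# a factor of >= 10 (one dex) in R200c, can tilt the power-law fit
# Reff = alpha * R200c**beta by at most:
dbeta = math.log10(1.2) / 1.0   # 20% drift over 1 dex in R200c
print(f"|dbeta| <= {dbeta:.2f}")   # ≈ 0.08
```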
The dominant uncertainties in our galaxy size--halo size relations are most
likely caused by possible systematic errors in our adopted SMHM relations.
We can judge the magnitude of these errors by comparing the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$
relations plotted in Figures \ref{fig:RR_z0} to \ref{fig:RR_z0_ssfr} for the
four different SMHM relations. This comparison indicates that the SMHM relation
may be responsible for systematic errors at the level of $\sim 0.1$--$0.2$ dex,
perhaps a little less for the combined sample of galaxies, perhaps a little more
for the subsamples split by morphological type. Quantitative measures of the
deviations among the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations at $0 < z < 0.5$ confirm these impressions.
The contributions to the error budget from the adopted stellar population models,
which determine the stellar masses and specific star formation rates, are
smaller than those from the adopted SMHM relations. Systematic errors in stellar
masses could affect the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations at about the same level as
systematic errors in $R_{\mathrm{eff}}\xspace$. The classification of the 3D shapes of galaxies
(i.e., flat disks vs. round spheroids) by S\'ersic index is another source of uncertainty,
because it is based only on the radial decline of the projected 2D surface brightness
profiles. Fitting a single S\'ersic profile instead of a detailed disk/bulge
decomposition possibly adds further uncertainty.
Nevertheless, the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations we obtain from subsamples
split by S\'ersic index agree at the $\lesssim 0.1$ dex level with those from
subsamples split by specific star formation rate.
We estimate the impact of selection biases on our galaxy size--halo size
relations from the detection efficiencies for the CANDELS survey derived by
\cite{Guo:2013ig} as follows. They divide the $R_{\mathrm{eff}}\xspace$--$H_{160}$ plane into regions
that are 0--50\%, 50--90\%, and 90--100\% complete. Most of our sample (88\%)
lies in the region of 90--100\% completeness, while the remainder (12\%) lies in the
region of 50--90\% completeness. To place an upper limit on the impact of
selection biases, we adopt the lower limits of 90\% and 50\% on the completeness
in these two regions of the $R_{\mathrm{eff}}\xspace$--$H_{160}$ plane, assign weights 2.0 (i.e., $1/0.5$)
and 1.1 (i.e., $1/0.9$) to the galaxies in our sample in these regions, and then
recompute the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations. For $R_{200c}\xspace \gtrsim 100$ kpc, we
find negligible corrections to the median $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations, while
for $R_{200c}\xspace \lesssim 100$ kpc, we find corrections below 0.1 dex for all galaxy types
and redshifts $0 < z < 3$. We conclude from this exercise that selection biases
are likely to be subdominant sources of uncertainty in our $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$
relations.
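The weighting scheme used in this exercise can be sketched as follows (an illustrative implementation, assuming NumPy; the weighted-median routine here is ours):

```python
import numpy as np

# Upper-limit correction for selection bias: galaxies in the 90-100%
# complete region of the Reff-H160 plane get weight 1/0.9, those in the
# 50-90% region get weight 1/0.5, and the binned medians are recomputed
# with these weights.
def completeness_weight(in_high_completeness_region):
    return np.where(in_high_completeness_region, 1.0 / 0.9, 1.0 / 0.5)

def weighted_median(values, weights):
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]
```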
Based on this assessment of uncertainties, most of the results of this paper
appear to be robust. In particular, there is a strong, approximately linear
correlation between the sizes of galaxies and their dark matter halos over the full
range of redshifts examined here, $0 < z < 3$. The coefficient of proportionality
is larger for late-type galaxies than for early-type galaxies, which follow roughly
parallel sequences, except possibly at the highest redshifts. For late-type galaxies,
the observed $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation is generally consistent with simple models
in which galactic disks grow with the same specific angular momentum as their dark matter halos.
There is some evidence for a slowdown in disk growth at $z < 0.5$, but the
apparent deviation from the $J/M$ equality line is only $\sim 0.2$ dex.
\newpage
\floattable
\begin{deluxetable*}{lcccc}
\tabletypesize{\small}
\tablewidth{\textwidth}
\tablecaption{Verification of Main Results \label{tab:truth}}
\tablehead{\colhead{} & \colhead{SMHM 1} & \colhead{SMHM 2} & \colhead{SMHM 3} & \colhead{SMHM 4}}
\startdata
1. The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations are roughly linear in all redshift bins. & T & T & T & T \\
2. The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations are offset for early- and late-type galaxies. & T & T & T & T \\
3. The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation for late-type galaxies is close to the $J/M$ equality line. & T & T & T & T \\
4. The $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation shows little evolution between $z=0$ and $z=3$. & T & T & T & T \\
\enddata
\end{deluxetable*}
We have plotted and examined the $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations at all redshifts ($0 < z < 3$)
for all four SMHM relations to determine whether or not they support the four main results
discussed in Section \ref{sec:results}. The outcome of this test is recorded in Table \ref{tab:truth} by a T (for true) or F (for false) for each combination of
SMHM relation and result. All of the entries are Ts. Table \ref{tab:truth}
therefore reinforces our conclusion that
the main scientific results of this study are robust relative to discrepancies among
the SMHM relations (because of the weak dependence of $R_{200c}\xspace$ on $M_{200c}\xspace$).
\section{Discussion}\label{sec:disc}
We have found that the sizes of galaxies are proportional on average to the
sizes of their dark matter halos over a wide range of galaxy and halo masses and over
the entire redshift range $0 < z < 3$ studied here: $R_{\mathrm{eff}}\xspace=\alpha R_{200c}\xspace$
with $\alpha \approx 0.03$. In particular,
we confirm the basic relation found by \cite{Kravtsov:2013cy} at $z=0$ with only
minor adjustment, some of which is related to the difference between 2D
half-light radii and 3D half-mass radii. There
is some curvature at the upper end of our overall $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relation, which
is due to the larger abundance and smaller average size of
early-type galaxies compared with late-type galaxies of the same stellar mass. Indeed, we
find that early- and late-type galaxies follow distinct, roughly parallel
$R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations offset by a factor of $\sim 2$ for the upper and lower
20th percentiles of S\'ersic index and specific star formation rate,
which are meant to be proxies for disk-dominated and spheroid-dominated
galaxies.
Given the proportionality between galaxy and halo sizes, it is now straightforward to predict
how galaxy sizes evolve with redshift, from the following alternative forms of equation (\ref{eq:rvir}):
\begin{equation}\label{eq:size_evol}
R_{\mathrm{eff}}\xspace=\alpha R_{200c}\xspace=\alpha\left[\frac{GM_{200c}\xspace}{100H^2(z)}\right]^{1/3}=\alpha\frac{{V_{200c}}}{10H(z)}.
\end{equation}
Here $H(z)$ is the Hubble parameter at redshift $z$, and ${V_{200c}}$ is the
circular velocity of the halo in question \citep[see][]{Mo:1998hg}. Thus, we
expect $R_{\mathrm{eff}}\xspace\propto{H^{-2/3}(z)}$ or $R_{\mathrm{eff}}\xspace\propto{H^{-1}(z)}$ depending on
whether galaxies at different $z$ are compared at the same $M_{200c}\xspace$ or ${V_{200c}}$.
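Equation (\ref{eq:size_evol}) is straightforward to evaluate numerically. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m = 0.3$ for illustration (these parameter values are ours, chosen only to make the example concrete):

```python
import math

G = 4.301e-9   # gravitational constant in Mpc (km/s)^2 / Msun

def H(z, H0=70.0, Om=0.3):
    """Hubble parameter in km/s/Mpc for a flat LCDM cosmology."""
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def R200c_kpc(M200c_Msun, z):
    """Halo virial radius R200c = [G M200c / (100 H(z)^2)]^(1/3)."""
    return 1000.0 * (G * M200c_Msun / (100.0 * H(z) ** 2)) ** (1.0 / 3.0)

def Reff_kpc(M200c_Msun, z, alpha=0.03):
    """Implied galaxy effective radius Reff = alpha * R200c."""
    return alpha * R200c_kpc(M200c_Msun, z)

# A Milky-Way-mass halo at z = 0:
print(f"R200c ~ {R200c_kpc(1e12, 0.0):.0f} kpc")
print(f"Reff  ~ {Reff_kpc(1e12, 0.0):.1f} kpc")
```

At fixed $M_{200c}\xspace$ the predicted size scales as $H^{-2/3}(z)$, so the same halo mass corresponds to a noticeably smaller galaxy at higher redshift.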
As a result of gravitational clustering, the characteristic halo
mass evolves with redshift roughly as
$\sigma({M^*_{200c}}, z) \propto \delta_c(z) / D(z)$, where $\sigma({M^*_{200c}}, z)$
is the RMS deviation of the linear density field smoothed over the scale ${R(M^*_{200c})}$,
$\delta_c(z)$ is the critical linear overdensity for collapse \citep{Kitayama:1996in},
and $D(z)$ is the linear growth factor \citep{Carroll:1992kw}.
The corresponding galactic size $R_{\mathrm{eff}}\xspace^*(z)$ at the
knee of the galaxy mass function should evolve
according to equation (\ref{eq:size_evol}) with $M_{200c}\xspace \rightarrow {M^*_{200c}}(z)$.
This expression for $R_{\mathrm{eff}}\xspace^*(z)$ relates the typical
sizes of progenitor--descendant pairs of galaxies at different redshifts,
although there will be a large dispersion about it as a result of stochasticity
in the hierarchical growth of galaxies.
Our $R_{\mathrm{eff}}\xspace$--$R_{200c}\xspace$ relations for late-type galaxies (defined by low $n$, high
sSFR) at $0.5 < z < 3$ are within $\lesssim$ 0.1--0.2 dex of the predictions of simple
models in which galactic disks acquire and retain the same specific angular
momentum as induced by tidal torques in their surrounding dark matter halos. At
$z < 0.5$, late-type galaxies are $\sim 0.2$ dex below this prediction. However,
given possible systematic errors in the measurements of galactic sizes
($\lesssim20\%$ for low-$n$ galaxies), our results are consistent with a range
$\eta_j \sim 80\%\pm20\%$ for the retained fraction of specific angular momentum.
Our results therefore agree nicely with recent, direct
measurements of the specific angular momentum of galactic disks at $z=0$
\citep{Fall:2013du}, at $0.2<z<1.4$ \citep{Contini:2016fn}, and at $1 < z < 3$
\citep{Burkert:2016fr}, all of which indicate retention factors $\eta_j$ near unity or
slightly below.
The notion of angular momentum conservation was introduced as a simplifying
approximation in the era of analytical models of galaxy formation
\citep{Fall:1980up}. Since then, hydrodynamical models have revealed a much
more complex situation. In particular, it is now clear that several physical
processes may change the specific angular momentum of galaxies or
parts of galaxies during their formation and evolution, including merging,
feedback, inflows, outflows, and gravitational interactions between baryons and
dark matter. Some of these processes cause gains in specific angular momentum,
while others cause losses (see \citealt{Romanowsky:2012kb} and
\citealt{Genel:2015kp} for summaries and references to earlier work).
The galactic disks that form in recent hydrodynamical simulations have nearly
the same specific angular momentum on average as their dark matter halos, in good
agreement with observations
\citep{Genel:2015kp,Pedrosa:2015dh,Teklu:2015ev,Zavala:2016ki}. Evidently, the
processes responsible for gains and losses are either weak or in rough balance,
leading to an apparent (if not strict) conservation of angular momentum during
the formation of galactic disks. Simulations and now observations indicate that
galaxies of all types grow in a quasi-homologous (or self-similar) relationship
with their dark matter halos. The details of how this happens are a topic of ongoing research.\\
We thank Gerard Lemson for help with the Millennium Simulation, Adam Tomczak
for useful discussions of stellar mass functions, and Andrey Kravtsov for
providing conversion factors between different halo mass definitions.
We also thank Avishai Dekel, Sandra Faber, Steve Finkelstein,
Andrey Kravtsov, Yu Lu, and Rachel Somerville for comments
on a near-final draft of this paper. This work is based on observations taken by the
CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST,
which is operated by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555.
Spin coherent states are well known in the literature \cite{radcliffe1971,arecchi1972,perelomov1972,klauder1979,klauderBook,auerbachBook,aravind1999}, where their primary purpose is for constructing the spin coherent state path integral. Here we introduce them in a way that is useful for our purposes, and we briefly state their properties.
The reader should not be daunted by the math, which is presented here for completeness but is unnecessary for conceptual understanding.
For convenience we will set $\hbar=1$.
Consider a spin with a fixed total angular momentum quantum number
$s \in \{ 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, \dotsc \}$.
The usual arguments show that the eigenstates of $z$-angular momentum, $\ket{sm}$, form a ladder with $m = -s, -s+1, -s+2, \dotsc, s$. The state with maximal $z$-angular momentum is $\ket{ss}$. Since
$\hat{S}_z \ket{ss} = s \ket{ss}$ and
$\hat{S}_x \ket{ss} = \hat{S}_y \ket{ss} = \ket{\mathrm{null}}$,
we see that $\ket{ss}$ is an eigenstate of the \emph{vector} spin operator in the sense that
\begin{align}
\hat{\SSS} \ket{ss} = s \eee_z \ket{ss}
,
\label{SpecificEigenrelation}
\end{align}
where $\eee_z$ is the unit vector in the $z$ direction.
Now, let $\sss$ be a vector of length $s$ and direction $(\theta,\phi)$, so that
$\sss=(s,\theta,\phi)$ in spherical polars and
$\sss=(s \sin \theta \cos \phi,s \sin \theta \sin \phi,s \cos \theta)$ in Cartesians.
Define the \emph{spin coherent state} $\ket{\sss}$ as the state obtained by rotating $\ket{ss}$ counterclockwise by angle $\theta$ about the $y$ axis, and then by angle $\phi$ about the $z$ axis:
\begin{align}
\ket{\sss}
\equiv
\ket{s \theta \phi}
&\equiv
e^{ i \phi \hat{S}_z } e^{ i \theta \hat{S}_y } \ket{ss}
.
\label{SCSDefinition}
\end{align}
Using the Wigner $D$-matrix \cite{sakuraiBook}, we may write $\ket{\sss}$ explicitly as a linear combination of $\ket{sm}$ states:
\begin{align}
\ket{s \theta \phi}
&= \sum_{m=-s}^s
\sqrt{ \tfrac{(2s)!}{(s+m)!(s-m)!} }
\big( \cos \tfrac{\theta}{2} \big)^{s+m}
\big( \sin \tfrac{\theta}{2} \big)^{s-m}
e^{-i m \phi}
\ket{sm}
.
\label{WignerDMatrixFormula}
\end{align}
From the properties of spin operators it can be shown that Eqs.~\eqref{SpecificEigenrelation} and \eqref{SCSDefinition} give
\begin{align}
\hat{\SSS} \ket{\sss} = \sss \ket{\sss}
.
\label{GeneralEigenrelation}
\end{align}
In other words, the spin coherent state $\ket{\sss}$ is an eigenstate of the \emph{vector} spin operator with \emph{vector} eigenvalue $\sss$. From here it is easy to show that
$
\sss \cdot \hat{\SSS} \ket{\sss} = s^2 \ket{\sss}
$
and
$
\bra{\sss} \hat{\SSS} \ket{\sss} = \sss
$,
and that the spin coherent states are normalized as
\begin{align}
\braket{\sss}{\sss} = 1.
\end{align}
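A quick numerical sanity check of Eq.~\eqref{WignerDMatrixFormula} is to verify that the coefficients $\braket{sm}{s\theta\phi}$ are normalized and reproduce $\bra{\sss} \hat{S}_z \ket{\sss} = s\cos\theta$. The following Python sketch does this for integer $s$ (half-integer $s$ would need gamma functions in place of the factorials); the particular values of $s$, $\theta$, $\phi$ are arbitrary.

```python
import math, cmath

def coherent_coeffs(s, theta, phi):
    """Coefficients <s m | s theta phi> from the Wigner D-matrix formula."""
    c = {}
    for m in range(-s, s + 1):
        binom = math.factorial(2*s) / (math.factorial(s+m) * math.factorial(s-m))
        c[m] = (math.sqrt(binom)
                * math.cos(theta/2)**(s+m)
                * math.sin(theta/2)**(s-m)
                * cmath.exp(-1j*m*phi))
    return c

s, theta, phi = 9, 1.1, 0.7
c = coherent_coeffs(s, theta, phi)
norm = sum(abs(v)**2 for v in c.values())       # should equal 1
sz = sum(m * abs(v)**2 for m, v in c.items())   # should equal s*cos(theta)
```

The normalization follows from the binomial theorem, and the expectation value is the mean of a shifted binomial distribution with success probability $\cos^2(\theta/2)$.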
\section{Spin wavefunctions and orbital wavefunctions}
From Eq.~\eqref{WignerDMatrixFormula} we see that
\begin{align}
\braket{sm}{s \theta \phi}
&=
\sqrt{ \tfrac{(2s)!}{(s+m)!(s-m)!} }
\big( \cos \tfrac{\theta}{2} \big)^{s+m}
\big( \sin \tfrac{\theta}{2} \big)^{s-m}
e^{-i m \phi}
.
\end{align}
We define the ``spin wavefunction'' of a spin state $\ket{sm}$ as the ``coefficients'' of $\ket{sm}$ in the basis of spin coherent states $\ket{s \theta \phi}$, including a normalization factor:
\begin{align}
F_{sm} (\theta, \phi)
&= \sqrt{ \tfrac{2s+1}{4\pi} }
\braket{s \theta \phi} {sm}
\nonumber\\
&=
\sqrt{ \tfrac{2s+1}{4\pi} \tfrac{(2s)!}{(s+m)!(s-m)!} }
\big( \cos \tfrac{\theta}{2} \big)^{s+m}
\big( \sin \tfrac{\theta}{2} \big)^{s-m}
e^{i m \phi}
.
\label{FsmFormula}
\end{align}
For comparison, we also define ``orbital wavefunctions'' as the orbital angular momentum eigenfunctions for integer $l$ and $m$. These are the well-known spherical harmonics\cite{sakuraiBook}, which can be written in terms of associated Legendre functions $P_l^m$:
\begin{align}
Y_{lm} (\theta, \phi)
&= \sqrt{ \tfrac{2l+1}{4\pi} \tfrac{(l-m)!}{(l+m)!} }
P_l^m (\cos\theta)
e^{i m \phi}
.
\label{YlmFormula}
\end{align}
With these definitions, both types of wavefunctions are normalized such that
\begin{align}
\int_0^\pi d\theta~ \sin \theta
\int_0^{2\pi} d\phi~
\abs{ Y_{lm} (\theta, \phi) }^2
&= 1
, \nonumber\\
\int_0^\pi d\theta~ \sin \theta
\int_0^{2\pi} d\phi~
\abs{ F_{sm} (\theta, \phi) }^2
&= 1
.
\end{align}
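The normalization of $F_{sm}$ can also be confirmed numerically: since $\abs{F_{sm}}$ does not depend on $\phi$, the $\phi$ integral contributes a factor $2\pi$ and a one-dimensional midpoint rule in $\theta$ suffices. A Python sketch for integer $s$ and $m$:

```python
import math

def F_sm_sq(s, m, theta):
    """|F_sm(theta, phi)|^2 from the spin-wavefunction formula; phi-independent."""
    binom = math.factorial(2*s) / (math.factorial(s+m) * math.factorial(s-m))
    return ((2*s + 1) / (4*math.pi) * binom
            * math.cos(theta/2)**(2*(s+m))
            * math.sin(theta/2)**(2*(s-m)))

def sphere_norm(s, m, n=20000):
    """Midpoint-rule approximation of the integral of |F_sm|^2 over the sphere."""
    h = math.pi / n
    total = sum(F_sm_sq(s, m, (k + 0.5)*h) * math.sin((k + 0.5)*h) for k in range(n))
    return 2*math.pi * h * total   # the phi integral contributes the factor 2*pi
```

The integral can also be done in closed form with the beta function, which is how the normalization factor $\sqrt{(2s+1)/4\pi}$ arises.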
The spin wavefunction and orbital wavefunction have the same $\phi$ dependence, but the $\theta$ dependence is different. The relationship between the two representations is vaguely reminiscent of the duality between position and momentum.
\section{Visualizations}
We will borrow geographical techniques for visualizing functions over the surface of a sphere. In particular, we will use the Hammer projection\cite{snyderBook}, which is an equal-area cartographic projection that maps the entire surface of the Earth (or any sphere) to the interior of an ellipse of semiaxes $\sqrt{8}$ and $\sqrt{2}$. This may be thought of as making a cut along the ``International Dateline'' (the meridian $\phi=180^\circ$), so that the cut sphere is topologically equivalent to a flat sheet, and then flattening the resulting shape into an ellipse. The Hammer projection is described mathematically by the following transformations between $(\theta,\phi)$ and $(x,y)$:
\begin{align}
x = \frac{\sqrt{8} \sin \theta \sin \tfrac{\phi}{2}}{
\sqrt{1 + \sin\theta \cos \tfrac{\phi}{2}}
}
,\qquad
y = \frac{\sqrt{2} \cos \theta }{
\sqrt{1 + \sin\theta \cos \tfrac{\phi}{2}}
}
,\qquad
0 \leq \theta \leq \pi
\text{~and~}
-\pi < \phi \leq \pi
;
\\
\theta = \arccos \Big( y\sqrt{1 - \tfrac{x^2}{16} - \tfrac{y^2}{4} } \Big)
,\qquad
\phi = 2\arctan \frac{ x\sqrt{1 - \tfrac{x^2}{16} - \tfrac{y^2}{4} } }{
4 (1 - \tfrac{x^2}{16} - \tfrac{y^2}{4}) - 2
}
,\qquad
\tfrac{x^2}{8} + \tfrac{y^2}{2} < 1
.
\label{HammerFormulas}
\end{align}
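The following Python sketch implements Eq.~\eqref{HammerFormulas}; in the inverse map we use the two-argument arctangent (an implementation choice) so that the correct branch of $\phi$ is selected automatically.

```python
import math

def hammer_forward(theta, phi):
    """Map colatitude theta in [0, pi] and longitude phi in (-pi, pi] to (x, y)."""
    d = math.sqrt(1 + math.sin(theta) * math.cos(phi/2))
    x = math.sqrt(8) * math.sin(theta) * math.sin(phi/2) / d
    y = math.sqrt(2) * math.cos(theta) / d
    return x, y

def hammer_inverse(x, y):
    """Inverse map for points inside the ellipse x^2/8 + y^2/2 < 1."""
    z2 = 1 - x*x/16 - y*y/4
    z = math.sqrt(z2)
    theta = math.acos(y * z)
    phi = 2 * math.atan2(x * z, 4*z2 - 2)
    return theta, phi
```

A round trip through both maps returns the original angles to machine precision.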
\begin{figure}
\begin{center}
\includegraphics[width=0.85\columnwidth]{HammerProjection}
\end{center}
\caption{
Latitudes (lines of constant $\theta$)
and meridians (lines of constant $\phi$)
according to the Hammer projection,
which maps an entire spherical surface to a flat ellipse.
\label{HammerFigure}
}
\end{figure}
\begin{figure}
\subfigure[$Y_{99}$]{
\includegraphics[width=0.4\textwidth]{Y99}
\label{Y99}
}
\subfigure[$F_{99}$]{
\includegraphics[width=0.4\textwidth]{F99}
\label{F99}
}
\subfigure[$Y_{90}$]{
\includegraphics[width=0.4\textwidth]{Y90}
\label{Y90}
}
\subfigure[$F_{90}$]{
\includegraphics[width=0.4\textwidth]{F90}
\label{F90}
}
\subfigure[$Y_{96}$]{
\includegraphics[width=0.4\textwidth]{Y96}
\label{Y96}
}
\subfigure[$F_{96}$]{
\includegraphics[width=0.4\textwidth]{F96}
\label{F96}
}
\caption{
Visualizations of ``orbital wavefunctions'' $Y_{lm} (\theta,\phi)$
and ``spin wavefunctions'' $F_{sm} (\theta,\phi)$
over the surface of the sphere
according to the Hammer projection.
Brightness indicates the magnitude of the complex wavefunction
and hue indicates the argument.
\label{YandF}
}
\end{figure}
Let us now compare spin and orbital wavefunctions for various states.
First consider the state of maximal $z$-angular momentum, $\ket{ss}$. The orbital wavefunction is
\begin{align}
Y_{ss} (\theta, \phi)
&= (-1)^s \sqrt{ \tfrac{(2s+1)!!}{ 2^s s!~ 4\pi } } ~ \sin^s \theta ~ e^{i s \phi}
\label{Yss}
\end{align}
(Fig.~\ref{Y99}). This represents travelling waves going around the equator of the sphere, which jibes with the heuristic classical picture of a particle orbiting in a horizontal circle. The orbital wavefunction is the probability amplitude for finding the particle at a position
$\rrr=(r,\theta,\phi)$. The spin wavefunction is
\begin{align}
F_{ss} (\theta, \phi)
&= \sqrt{ \tfrac{2s+1}{ 4\pi } } ~ \cos^{2s} \tfrac{\theta}{2} ~ e^{i s \phi}
\label{Fss}
\end{align}
(Fig.~\ref{F99}). Since $\ket{ss}$ is identical to the spin coherent state
$\ket{s \eee_z}$ with $\theta=\phi=0$, we might have expected that the spin wavefunction would be a Dirac delta function of the form $\delta(\theta) \delta(\phi)$. However, because spin coherent states form an overcomplete non-orthonormal basis, the spin wavefunction is actually a smooth function with maximum amplitude near the north pole ($\theta=0$). This can be interpreted in terms of a semiclassical spin vector that points toward the north pole on average, but undergoes quantum fluctuations away from this direction. Larger values of $s$ lead to smaller quantum fluctuations.
Now consider the state $\ket{s0}$. The orbital wavefunction is
\begin{align}
Y_{s0} (\theta, \phi)
&= \sqrt{ \tfrac{2s+1}{ 4\pi } } ~ P_s (\cos \theta)
\label{Ys0}
\end{align}
(Fig.~\ref{Y90}), where $P_s$ is a Legendre polynomial. This state has total angular momentum $s$, but its average $z$-angular momentum is zero. Classically, this suggests that the angular momentum vector $\SSS$ lies in the $xy$ plane and that a particle executes circular orbits in a vertical plane perpendicular to $\SSS$. Quantum mechanically, there are standing waves formed by the interference of northbound and southbound waves along every meridian.
The spin wavefunction is
\begin{align}
F_{s0} (\theta, \phi)
&= \sqrt{ \tfrac{2s+1}{ 4\pi } \tfrac{(2s-1)!!}{2^s s!} } ~ \sin^{s} \theta
\label{Fs0}
\end{align}
(Fig.~\ref{F90}). The plot can be understood as the distribution of a semiclassical spin whose quantum fluctuations allow it to explore the whole equator, as well as making excursions toward the ``tropics''. This is an improvement over the textbook picture (Fig.~\ref{SpinCartoon}). It is mathematically precise, and it captures extra nuances: not only does the spin precess in a circle due to quantum fluctuations, its ``latitude'' also fluctuates.
Finally, consider $\ket{96}$ as an example of a generic state. For this state the orbital wavefunction (Fig.~\ref{Y96}) consists of travelling waves along several latitudes, whereas the spin wavefunction (Fig.~\ref{F96}) is concentrated near a single latitude. This latitude corresponds to a vertical position $m$ on a sphere of radius $\sqrt{s(s+1)}$, where $m=6$ and $s=9$.
The orbital wavefunction $Y_{lm}$ is only meaningful when $l$ and $m$ are integers. If $l$ is a half-integer, $Y_{lm}$ diverges at the poles and is generally non-normalizable, due to the Legendre functions in Eq.~\eqref{YlmFormula}. In contrast, the spin wavefunction $F_{sm}$ is well-defined even for half-integer values of $s$ and $m$, as can be seen from Eq.~\eqref{FsmFormula} and Fig.~\ref{YandFTable}. This is the key advantage of spin wavefunctions.
A careful reader will notice that if $s$ is a half-integer, the function $F_{sm} (\theta, \phi)$ is discontinuous at the ``International Dateline'' $\phi=\pi$: upon crossing this branch cut, the spin wavefunction changes by a factor of $-1$. This is not a bug, but a feature! It illustrates the peculiar nature of spinor rotation: rotation by $2\pi$ gives a factor of $-1$, and a spinor is only invariant under a full $4\pi$ rotation.
\begin{figure}
\subfigure[$Y_{lm}$]{
\includegraphics[width=0.47\textwidth]{Ylm}
\label{Ylm}
}
\subfigure[$F_{sm}$]{
\includegraphics[width=0.47\textwidth]{Fsm}
\label{Fsm}
}
\caption{
Tables of visualizations
of ``orbital wavefunctions'' $Y_{lm} (\theta,\phi)$
and ``spin wavefunctions'' $F_{sm} (\theta,\phi)$.
The latter are defined for both integer and half-integer $s$.
\label{YandFTable}
}
\end{figure}
\section{Spin wavefunction of a spin coherent state}
So far we have considered ``spin wavefunctions'' for spin angular momentum eigenstates $\ket{sm}$. Now let us consider the spin wavefunction for a spin coherent state $\ket{\sss'} = \ket{s \theta' \phi'}$:
\begin{align}
F_{\theta'\phi'} (\theta, \phi)
&\equiv
\sqrt{ \tfrac{2s+1}{ 4\pi } }
\braket{s\theta\phi}{s \theta' \phi'}
\nonumber\\
&=
\sqrt{ \tfrac{2s+1}{ 4\pi } }
\sum_{m=-s}^s
\tfrac{(2s)!}{(s+m)!(s-m)!}
\big( \cos \tfrac{\theta}{2} \cos \tfrac{\theta'}{2} \big)^{s+m}
\big( \sin \tfrac{\theta}{2} \sin \tfrac{\theta'}{2} \big)^{s-m}
e^{i m (\phi - \phi')}
.
\label{Fthetaphithetaphi}
\end{align}
Although the form of Eq.~\eqref{Fthetaphithetaphi} is not illuminating, the plots in Fig.~\ref{Fscs} show that the ``spin wavefunction'' $ F_{\theta'\phi'} (\theta, \phi)$ has largest magnitude near $\theta=\theta'$ and $\phi=\phi'$, as one would expect. The angular ``spread'' of the wavefunction shrinks as $1/\sqrt{s}$, so that the limit $s\rightarrow\infty$ is indeed the semiclassical limit.
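In fact, the binomial theorem collapses the sum in Eq.~\eqref{Fthetaphithetaphi}, and the overlap magnitude takes the well-known closed form $\abs{\braket{s\theta\phi}{s\theta'\phi'}} = \big(\cos\tfrac{\gamma}{2}\big)^{2s} = \big(\tfrac{1+\cos\gamma}{2}\big)^{s}$, where $\gamma$ is the angle between the directions $(\theta,\phi)$ and $(\theta',\phi')$. A Python sketch checking this numerically for integer $s$:

```python
import math, cmath

def overlap(s, t1, p1, t2, p2):
    """<s t1 p1 | s t2 p2>: the sum in the overlap formula, without the
    sqrt((2s+1)/(4 pi)) prefactor of the spin wavefunction."""
    tot = 0j
    for m in range(-s, s + 1):
        binom = math.factorial(2*s) / (math.factorial(s+m) * math.factorial(s-m))
        tot += (binom
                * (math.cos(t1/2) * math.cos(t2/2))**(s+m)
                * (math.sin(t1/2) * math.sin(t2/2))**(s-m)
                * cmath.exp(1j*m*(p1 - p2)))
    return tot

def closed_form_mag(s, t1, p1, t2, p2):
    """((1 + cos gamma)/2)^s, gamma being the angle between the two directions."""
    cosg = (math.cos(t1)*math.cos(t2)
            + math.sin(t1)*math.sin(t2)*math.cos(p1 - p2))
    return ((1 + cosg) / 2)**s
```

The overlap equals one when the two directions coincide, and decays with the angle between them at a rate controlled by $s$.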
\begin{figure}
\subfigure[$F_{60^\circ,-45^\circ}$]{
\includegraphics[width=0.3\textwidth]{F-19over2-60-m45}
\label{Fscs1}
}
\subfigure[$F_{60^\circ,45^\circ}$]{
\includegraphics[width=0.3\textwidth]{F-19over2-60-45}
\label{Fscs2}
}
\subfigure[$F_{60^\circ,135^\circ}$]{
\includegraphics[width=0.3\textwidth]{F-19over2-60-135}
\label{Fscs3}
}
\caption{
Spin wavefunctions $F_{\theta'\phi'} (\theta,\phi)$
for three spin coherent states $\ket{s\theta'\phi'}$
with spin quantum number $s=19/2$,
colatitude parameter $\theta'=60^\circ$,
and azimuthal parameters $\phi'=-45^\circ, 45^\circ, 135^\circ$.
\label{Fscs}
}
\end{figure}
\section{Time-dependent spin wavefunctions}
A spin in a constant magnetic field $\BBB$ is governed by the Hamiltonian $\hat{H} = -\gamma \BBB \cdot \hat{\SSS}$ (where $\gamma$ is the gyromagnetic ratio)\cite{griffithsBook}. Ehrenfest's theorem shows that the average spin precesses around the direction of $\BBB$ at the Larmor frequency $f_L = \gamma B/2\pi$.
Furthermore, it can be shown that if the initial state is a spin coherent state, $\ket{\sss}$, then the state at a later time $t$ is also a spin coherent state with a rotated vector $\sss(t)$ as well as a phase factor. This implies that Larmor precession can be visualized in the classroom by animating the time-dependent spin wavefunction $F(\theta,\phi,t)$. Successive frames in such an animation might look like Fig.~\ref{Fscs}.
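For $\BBB$ along $z$ this is easy to verify directly: under $\hat{H} = -\gamma B \hat{S}_z$ each basis state $\ket{sm}$ acquires the phase $e^{i\gamma B m t}$, which in the expansion \eqref{WignerDMatrixFormula} simply shifts $\phi \to \phi - \gamma B t$ in the coherent-state coefficients. A minimal Python check (with $\hbar=1$; the values of $s$, $\omega=\gamma B$, and $t$ are arbitrary choices):

```python
import math, cmath

def coeffs(s, theta, phi):
    """Coefficients <s m | s theta phi> for m = -s, ..., s."""
    out = []
    for m in range(-s, s + 1):
        binom = math.factorial(2*s) / (math.factorial(s+m) * math.factorial(s-m))
        out.append(math.sqrt(binom)
                   * math.cos(theta/2)**(s+m)
                   * math.sin(theta/2)**(s-m)
                   * cmath.exp(-1j*m*phi))
    return out

s, theta, phi = 5, 1.0, 0.4
omega, t = 2.0, 0.3                       # omega = gamma*B, arbitrary units
# evolve under H = -omega*S_z: each |s m> picks up the phase exp(+i*omega*m*t)
evolved = [c * cmath.exp(1j*omega*t*m)
           for m, c in zip(range(-s, s + 1), coeffs(s, theta, phi))]
target = coeffs(s, theta, phi - omega*t)  # the precessed coherent state
```

The evolved coefficients coincide with those of the coherent state at azimuth $\phi-\omega t$, which is exactly the Larmor precession described above.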
\section{Further discussion}
It is well known that the orbital wavefunctions for $\ket{lm}$ states, which are the spherical harmonics $Y_{lm} (\theta, \phi)$, are polynomials in $x$, $y$, and $z$, where $x=\sin\theta\cos\phi$, $y=\sin\theta\sin\phi$, and $z=\cos\theta$. Starting from Eq.~\eqref{FsmFormula}, it can be shown that the spin wavefunctions for $\ket{sm}$ states, $F_{sm} (\theta, \phi)$, are square roots of rational functions in $x$, $y$, and $z$:
\begin{align}
F_{sm} (\theta, \phi)
&=
\sqrt{ \tfrac{2s+1}{4\pi} \tfrac{(2s)!}{(s+m)!(s-m)! 4^s} }
(1+z)^{s/2}
(1-z)^{s/2-m}
(x+iy)^m
.
\end{align}
The spin wavefunctions for coherent states $\ket{s\theta\phi}$ can be expressed in a similar form:
\begin{align}
F_{\theta'\phi'} (\theta, \phi)
&=
\tfrac{1}{4^s}\sqrt{\tfrac{2s+1}{4\pi}}
\sum_{m=-s}^s
\tfrac{(2s)!}{(s+m)!(s-m)!}
(1+z)^{s/2}
(1+z')^{s/2}
(1-z)^{s/2-m}
(1-z')^{s/2-m}
(x+iy)^m
(x'-iy')^{m}
,
\end{align}
where $x'$, $y'$, and $z'$ correspond to $\theta'$ and $\phi'$. The sum can be written in closed form in terms of hypergeometric functions, but this is not illuminating.
The spin coherent states form a massively overcomplete non-orthonormal basis. Thus, the spin wavefunction $F(\theta,\phi)$ contains a large amount of redundant information.
However, we conjecture that the $F_{sm} (\theta, \phi)$ for \emph{all} $s=0, \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, \dotsc$ and $m=-s,-s+1,\dotsc,s$ may form a complete, non-redundant basis for functions on the sphere, just like the $Y_{lm} (\theta, \phi)$. If this is true, it would allow an \emph{arbitrary} wavefunction to be expanded as $F(\theta,\phi) = \sum_{sm} c_{sm} F_{sm} (\theta,\phi)$. We are not aware of a proof of this statement.
At the time of writing it is also unclear whether the time-dependent Schr{\"o}dinger equation for $F(\theta,\phi;t)$ can be written down in differential form, or if it is inherently an integrodifferential equation. Measurement-induced collapse of the ``spin wavefunction'' can certainly be discussed within our picture, although this may not serve a useful pedagogical purpose.
\section{Closing remarks}
We have developed the concept of the ``spin wavefunction'' for spin-$s$ spins, using the basis of spin coherent states. This works for both integer and half-integer values of $s$. We provide explicit formulas and striking visualizations of spin eigenstates $\ket{sm}$, spin coherent states $\ket{\sss}$, and Larmor precession.
We also demonstrate that cartographic projections such as the Hammer projection are useful for visualizing wavefunctions defined on spherical surfaces.
Students bring a variety of learning styles to the classroom \cite{montgomery1999}.
Some take well to a deductive approach going from general theorems to specific phenomena,
whereas others prefer an inductive approach starting with concrete examples.
We feel that the spin wavefunction visualizations presented here will be very useful for reaching out to the latter class of students.
\input{spinwf.bbl}
\end{document}

\section{Introduction}
In compressed sensing one is confronted with the inverse problem of
recovering unknown but structured signals from only few observations,
far less the dimension of the signal. This methodology is reasonable
if the effective dimension of the signals of interest is much smaller
then its ambient dimension. A prominent example is the set of
$s$--sparse and $N$-dimensional vectors where $s\ll N$. The original
recovery problem has combinatorial nature and is computationally infeasible
since one essentially has to implicitly search over the exponentially
$\binom{N}{s}$ many support combinations of the unknown signal.
The first fundamental theoretical breakthroughs \cite{Candes2005b,candes:stablesignalrecovery,Donoho2006a} show that
for a linear and real-valued measurement model and under further
assumptions on the so called measurement matrix, it is possible to
recover the unknown vector in the noiseless case by a linear program.
In the noisy case it is also possible to obtain provable guarantees
for certain convex programs, see here for example \cite{Foucart2013},
which usually require a tuning parameter
that often depends on further properties of the noise contribution,
in most cases the $\ell^2$--norm of the noise. However, there are several
signal processing problems where it is difficult to acquire this
knowledge. For example, in the Poisson noise model it depends on the
unknown signal itself. Another example is sparse
covariance matching, where the error contribution comes from the
deviation of the empirical from the true covariance matrix, which in turn
depends on the sparse parameter to be recovered. There are some
concepts known in the literature for dealing with convex compressed
sensing programs in the absence of this a-priori information. To
mention some examples, the quotient bounds \cite{Wojtaszczyk2010} of
the measurement matrix can provide guarantees for the basis pursuit or
the basis pursuit denoising, see for example also \cite[Chapter
11]{Foucart2013}. Empirical approaches and modifications of the convex
programs are also known to get rough estimates for the noise power,
see for example \cite{herrmann:wsa18}. Interestingly, it has been observed
also in \cite{Donoho92,Bruckstein,Slawski,Meinshausen2013} that nonnegativity of the
unknowns together with particular properties of the measurement matrix
yield a ``self-tuning'' approach, which has been worked out in
\cite{Kabanava2015} for the nuclear norm and in \cite{Kueng2018} for the
$\ell^1$--norm with respect to guarantees formulated in the
terminology of the robust nullspace property.
\section{Main Results}
Motivated by covariance matching problems, briefly also sketched
below, we shall consider the problem of recovering
nonnegative and sparse vectors from the noisy matrix
observation
\begin{equation}
Y=\mathcal{A}(x)+E,
\label{eq:measmodel}
\end{equation}
where $\mathcal{A} : \mathbb{R}^N \mapsto \mathbb{C}^{n\times n}$ is a given linear measurement
map.
We establish recovery guarantees for the generic convex
program
\begin{equation}
x^\sharp=\arg\min_{z\geq 0}\|\mathcal{A}(z)-Y\|,
\label{eq:generic:nnnorm}
\end{equation}
where $\norm{\cdot}$ is a given norm on $\mathbb{C}^{n\times n}$. We shall
write $\norm{\cdot}_p$ for the $\ell^p$-norms for vectors and matrices
(when seen as a vector). In particular, for the Frobenius norm
$\|\cdot\|_{\text{F}}=\|\cdot\|_2$ the problem \eqref{eq:generic:nnnorm}
is the so called {\em Nonnegative Least-Squares} (NNLS). Guarantees
for this case have been established already in
\cite{Slawski,Meinshausen2013,Kueng2018}. See here also
\cite{Kabanava2015} for a similar approach for the low-rank and
positive-semidefinite case (instead of sparse and elementwise
nonnegative).
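For concreteness, the following Python sketch solves an NNLS instance $\min_{x\geq 0}\|Mx-y\|_2$ by a plain projected-gradient iteration. This is only an illustration (the dimensions, the data, and the crude step size $1/\|M\|_F^2 \leq 1/\lambda_{\max}(M^\top M)$ are our own choices, not part of the theory above); production code would use a dedicated active-set solver.

```python
import random

def nnls_pg(M, y, iters=20000):
    """Projected-gradient sketch of min_{x >= 0} ||M x - y||_2."""
    rows, cols = len(M), len(M[0])
    # step size 1/||M||_F^2 is a (crude) lower bound on 1/lambda_max(M^T M)
    lr = 1.0 / sum(M[i][j]**2 for i in range(rows) for j in range(cols))
    x = [0.0] * cols
    for _ in range(iters):
        r = [sum(M[i][j] * x[j] for j in range(cols)) - y[i] for i in range(rows)]
        g = [sum(M[i][j] * r[i] for i in range(rows)) for j in range(cols)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(cols)]
    return x

random.seed(0)
M = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
x_true = [1.0, 0.0, 2.0, 0.0]                      # sparse and nonnegative
y = [sum(M[i][j] * x_true[j] for j in range(4)) for i in range(8)]
x_hat = nnls_pg(M, y)
```

With noiseless data and a full-column-rank $M$, the iterate converges to the unique nonnegative least-squares solution, here $x_{\mathrm{true}}$ itself.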
In this work we follow techniques established mainly in
\cite{Kueng2018} and extend our work to the special matrix-valued
observation model \eqref{eq:measmodel}. Then we investigate a
structured random measurement map $\mathcal{A}$ which can be represented as a
matrix with independent heavy-tailed columns containing vectorized
outer products of random vectors with itself. Such matrices are
sometimes also called (self-)Khatri-Rao products. By construction
such random matrices are biased, which is essential for the recovery of
nonnegative vectors via \eqref{eq:generic:nnnorm} (further details
below or see for example also the discussion \cite{Shadmi:isit19} for
the unstructured case). Recent results on the RIP property of such
matrices after centering, in the real case, have been obtained in
\cite{Fengler:krrip:2019}. These investigations have been worked out
towards a NNLS recovery guarantee in
\cite{Hag:isit18,alex2019nonbayesian} using the nullspace property in
the special case where the vectors are drawn from the complex
sphere. In this work we focus on the complex subgaussian case and
establish the corresponding compressed sensing recovery guarantee.
To state our main results we need the following definitions.
For the case of a generic norm $\norm{\cdot}$ on $\mathbb{C}^{n\times n}$ we
let $\norm{\cdot}\dualnormsymbol$ be the corresponding dual norm
defined as
\begin{equation*}
\norm{Y}\dualnormsymbol:=\sup_{\norm{X}\leq 1}\langle Y,X\rangle,
\end{equation*}
where by $\scp{X}{Y}:=\trace{X^*Y}$ we denote the Hilbert-Schmidt (Frobenius)
inner product on $\mathbb{C}^{n\times n}$. To simplify notation we will stick
to square matrices in the space $\mathbb{C}^{n\times n}$ but the first part of this work
can be easily rewritten for the non-square case or even for a generic
inner product space.
The following notion \cite[Definition 4.21]{Foucart2013} is essential for our analysis.
\begin{defi}\label{def:robust-NSP}
Let $q\in [1,\infty)$ and $s\in\mathbb{N}$. We say that a linear map $\mathcal{A} : \mathbb{R}^N \mapsto \mathbb{C}^{n\times n}$
satisfies the $\ell^q$-\textit{robust nullspace property ($\ell^q$-NSP)} of
order $s$ with respect to $\norm{\cdot}$ with parameters
$\rho\in(0,1)$ and $\tau>0$ if for all
$S\subseteq [N]\coloneqq\set{1,\ldots,N}$ with cardinality $|S|\leq s$
\begin{equation}\label{eq:NSP}
\norm{v_S}_q \leq \frac{\rho}{s^{1-1/q}} \norm{v_{S^c}}_1 + \tau \norm{\mathcal{A}(v)}
\end{equation}
holds for all $v \in \mathbb{R}^N$. Here, $v_S\in\mathbb{R}^N$ denotes the vector
containing the same entries as $v$ at the positions in $S$ and zeros
at the others and $S^c=[N]\backslash S$.
\end{defi}
Furthermore, by $\sigma_s(x)_1=\min_{|S|\leq s}\|x-x_S\|_1$ we denote
here the best $s$-term approximation to $x\in\mathbb{R}^N$ in the
$\ell^1$-norm. The nullspace property is essential for recovery via
$\ell^1$-based convex recovery programs like basis pursuit and basis
pursuit denoising, see for example \cite[Theorem
4.22]{Foucart2013}. When recovering nonnegative vectors, the following
additional property, often called ${\cal M}^+$-criterion, controls
the $\ell^1$-norms of all feasible vectors such that
$\ell^1$-regularization becomes superfluous.
\begin{defi}
A linear map $\mathcal{A} : \mathbb{R}^N \mapsto \mathbb{C}^{n\times n}$ satisfies the
${\cal M}^+$-criterion if there exists a matrix $T\in\mathbb{C}^{n\times n}$
such that $w\coloneqq\mathcal{A}^*(T)>\vc{0}$ componentwise. For a given $T$,
we then define the condition number
$\kappa(w)=\max_{i\in[N]}|w_i|/\min_{i\in[N]}|w_i|$.
\label{def:kappa}
\end{defi}
Note that $\kappa(w)$ is scale-invariant, i.e.,
$\kappa(w)=\kappa(t w)$ for all $t\neq 0$. For further
illustration of this property, consider the noiseless setting and
assume for simplicity that we can find $T$ such that
$w=\mathcal{A}^*(T)=(1,\dots,1)=:1_N$ is the all-one vector. Then
\begin{equation*}
\begin{split}
\|x\|_1
&\overset{x\geq 0}{=}\langle 1_N,x\rangle=\langle\mathcal{A}^*(T),x\rangle\\
&\,\,=\langle T,\mathcal{A}(x)\rangle=\langle T,Y\rangle=\text{const}
\end{split}
\end{equation*}
shows that all feasible vectors $x$ have the same $\ell^1$-norm. As
we shall show below, a similar conclusion follows for the general case
$w>0$ and the tightness of such an argument will depend on $\kappa(w)$.
The following theorem essentially extends and refines \cite[Theorem
3]{Kueng2018} to the case of matrix observations and generic norms.
\begin{thm}
Let $q\geq1$ and let $\mathcal{A}: \mathbb{R}^N \rightarrow \mathbb{C}^{n\times n}$ be a linear map which (i) satisfies the
$\ell^q$--NSP of order $s$ with respect to $\|\cdot\|$ and with
parameters $\rho\in[0,1)$ and $\tau>0$ and (ii) fulfills the
${\cal M}^+$-criterion for $T\in\mathbb{C}^{n\times n}$ with
$\kappa=\kappa(\mathcal{A}^*(T))$. If $\rho\kappa<1$, then, for any nonnegative $x\in\mathbb{R}^{N}$
and all $E\in\mathbb{C}^{n\times n} $, the solution $x^\sharp$ of \eqref{eq:generic:nnnorm} for
$Y=\mathcal{A}(x)+E$ obeys
\begin{equation}
\|x^\sharp-x\|_p \leq \frac{C'\kappa\sigma_s(x)_1}{s^{1-\frac{1}{p}}}
+ \frac{D'\kappa}{s^{\frac{1}{q}-\frac{1}{p}}}\br{\tau + \frac{\theta}{s^{1-\frac{1}{q}}}}\norm{E}
\label{eq:thm:main:nonneg}
\end{equation}
for all $p\in[1,q]$, where
$C' \coloneqq 2\frac{(1+\kappa\rho)^2}{1-\kappa\rho}$ and
$D' \coloneqq 2\frac{3 + \kappa\rho}{1 -
\kappa\rho}$ and $\theta={\smallnorm{\mathcal{A}^*({T})}_\infty^{-1}}\cdot{\norm{T}\dualnormsymbol}$.
\label{thm:main:nonneg}
\end{thm}
We prove this theorem in Section \ref{sec:nonneg:generic}.
As a second main result, we show that it is applicable to the
following random observation model:
\begin{model}
Let $a_i= (a_{i,k})_{k\in[n]}\in\mathbb{C}^n$ for $i=1,\ldots, N$ be independent
random vectors with independent subgaussian real and
imaginary parts $\re(a_{i,k})$ and $\im(a_{i,k})$ satisfying
\begin{align*}
\expec{a_{i,k}}
&=\expec{\re(a_{i,k})} =\expec{\im(a_{i,k})} = 0 \\
\expec{\re(a_{i,k})^2} &= \expec{\im(a_{i,k})^2} = 1/2,
\end{align*}
so that $\expec{|a_{i,k}|^2}=1$ and $\expec{\norm{a_i}_2^2}=n$.
We consider the following map $\mathcal{A}:\mathbb{R}^N\rightarrow\mathbb{C}^{n\times n}$:
\begin{equation}
\mathcal{A}(x):=\sum_{i=1}^N x_i a_i a_i^*
\label{eq:rankone:model}
\end{equation}
Let $\psi_2\geq 1$ be a uniform bound on the subgaussian norms
$\normpsi{\re(a_{i,k})}{2}$ and $\normpsi{\im(a_{i,k})}{2}$ for all $i\in [N],k\in
[n]$, see \eqref{eq:def:psip} below for the
definition.
\label{model:aa}
\end{model}
The case where the vectors $a_i$ are drawn uniformly from the complex sphere has been
discussed already in \cite{Hag:isit18} and the full proof of the
recovery guarantee can be found in \cite{alex2019nonbayesian}.
In this work we instead discuss the subgaussian iid case, where
additionally the distribution of $\|a_i\|_2$ affects the probability bounds.
We have the following second main result.
\begin{thm}
Let $\mathcal{A}: \mathbb{R}^N \rightarrow \mathbb{C}^{n\times n}$ be a random measurement
map following Model \ref{model:aa}. Set
$m\coloneqq2n(n-1)$ and assume
\begin{equation}
s\lesssim m\log^{-2}(N/m),
\label{eq:thm:main:subgaussian:phasetransition}
\end{equation}
$n\gtrsim \log(N)$ and $N\geq m$.
With probability at least $1-4\exp(-c_1\cdot n)$ it holds that for
all $p\in[1,2]$, all $x\in\mathbb{R}_{\geq 0}^N$ and $E\in\mathbb{C}^{n\times n}$, the
solution $x^\sharp$ of the NNLS (the convex program \eqref{eq:generic:nnnorm}
for the Frobenius norm $\|\cdot\|_\text{F}$) for $Y=\mathcal{A}(x)+E$ obeys
\begin{equation}\begin{split}
\|&x^\sharp-x\|_p
\leq \frac{c_2\sigma_s(x)_1}{s^{1-\frac{1}{p}}}
+ \frac{c_3\br{c_4 +
\sqrt{\frac{n}{s}}}}{s^{\frac{1}{2}-\frac{1}{p}}}\frac{\norm{E}_\text{F}}{n},
\end{split}
\label{eq:thm:main:subgaussian}
\end{equation}
where $C_1,c_1,c_2,c_3,c_4$ are absolute constants depending on $\psi_2$
but not on the dimensions.
\label{thm:main:subgaussian}
\end{thm}
The proof of this theorem will be presented in Section
\ref{sec:nonneg:subgaussian}. We have not optimized the constants but
some concrete numbers are for example $c_2=11.36$, $c_3=15.55$ and
$c_4=3.07$, more details are in the proof below. The constants $C_1$ and $c_1$
depend on the subgaussian norm $\psi_2$ in Model \ref{model:aa} and
can also be obtained from the proof.
\subsection{Motivating Application}
We will briefly mention an application of the results above in the area of
wireless communication \cite{Hag:isit18,Che2019,alex2019nonbayesian}. An important task in
wireless networks is to estimate the nonnegative large-scale
path-gain coefficients (product of transmit power and attenuation due
to path-loss) and user activity using multiple antennas. Here, a small
subset of $s\ll N$ devices indicate activity by transmitting specific
length-$n$ sequences which superimpose at each receive antenna with
individual and unknown instantaneous channel coefficients. Let us
denote this nonnegative vector of large-scale path-gains by
$\gamma\in\mathbb{R}^N$; due to the sparse activity, $\gamma$ is essentially
$s$--sparse. For a single receive antenna, the received (noiseless)
signal would be:
\begin{equation*}
y=A\diag(\sqrt{\gamma})h
\end{equation*}
Here $h\in\mathbb{C}^N$ is the vector of unknown small-scale fading
coefficients and $A=(a_1|\dots|a_N)\in\mathbb{C}^{n\times N}$ is the matrix
containing all the sequences $a_i$ registered in the network (in real
applications for example pseudo-random sequences seeded by the device
id). Well-known results in compressed sensing show that when using
sufficiently random sequences of length
$n \simeq s\cdot \text{polylog}(N)$ for given $s$ and $N$, one can recover per antenna
w.h.p. the complex-valued channel coefficients $\diag(\sqrt{\gamma})h$
and the activity pattern (the essential support).
However, since in the future even the number of active devices $s$ will
grow considerably, the question is how to gain further from a massive
number of receive antennas {\em when one is only interested in
reconstructing $\gamma$ or its support}.
A very promising approach is then to recover the sparse and nonnegative vector
from covariance information, an approach which has been investigated
already in \cite{Pal2015d}.
In more detail, assuming that the
small-scale fading coefficient vectors $h$ for different receive
antennas and different users are uncorrelated, we can view the
received signal $y$ at each receive antenna as a new realization of
the same random process having a covariance matrix which is
parametrized by $\gamma$, i.e., this leads precisely to the following
covariance model:
\begin{equation*}
\mathcal{A}(\gamma)=\mathbb{E} yy^*=A\diag(\gamma)A ^*
\end{equation*}
Here $\gamma$ is an unknown nonnegative and sparse parameter which
should match (in a reasonable norm) the empirical covariance
\begin{equation*}
Y=\frac{1}{M}\sum_{k=1}^My_ky_k^*\overset{(!!)}{=}\mathcal{A}(\gamma)+E
\end{equation*}
computed from the received vectors $\{y_k\}_{k=1}^M\subset\mathbb{C}^n$ at $M$
receive antennas. The error $E$ accounts therefore for the fact of
having only finite $M$ (and obviously further unknown disturbances
like adversarial noise and interference always present in communication
systems). Note that the error $E$ above usually depends then on the
unknown parameter $\gamma$ as well.
Our result, Theorem \ref{thm:main:subgaussian}, now shows that
pathloss coefficients and activity of up to
$s\leq O(n^2 / \log^2(N/n^2))$ devices can be robustly recovered from
the empirical covariance $Y$ over sufficiently many receive antennas
when matching the model in the Frobenius norm. Note that, although not
further considered in this work, errors due to having finite $M$ will
vanish with increasing $M$ for moderate assumptions on the
distribution of $h$ and one could obviously also make concrete
statements about the concentration of $\|E\|_\text{F}$ in
\eqref{eq:thm:main:subgaussian} in terms of $M$, see
\cite{alex2019nonbayesian}.
\section{Generic Nonnegative Recovery Guarantee via the Nullspace Property}
\label{sec:nonneg:generic}
In this section, we follow \cite{Kueng2018}, aiming to prove
Theorem \ref{thm:main:nonneg}, which is a more general and
refined version of the deterministic guarantee given in
\cite{Kueng2018}. The proof of Theorem \ref{thm:main:nonneg} is given
at the end of this section. First, we will need \cite[Theorem
4.25]{Foucart2013}:
\begin{thm}[Theorem 4.25 in \cite{Foucart2013}]\label{thm:1}
Let $q \in [1,\infty)$
and $s \in \mathbb{N}$. Assume $\mathcal{A}$ satisfies the $\ell^q$-NSP of order $s$
with respect to $\norm{\cdot}$ and with constants
$\rho \in (0,1)$ and $\tau > 0$. Then, for any $p \in [1,q]$ and for all
$x,z \in \mathbb{R}^N$,
\begin{align}
\norm{x - z}_p
&\leq \frac{C(\rho)}{s^{1-1/p}} \br{\norm{z}_1 - \norm{x}_1 + 2 \sigma_s(x)_1} \nonumber\\
&+ D(\rho)\tau s^{1/p - 1/q} \norm{\mathcal{A} (x-z)}
\label{eq:thm:1}
\end{align}
holds, where $C(\rho) \coloneqq \frac{( 1 + \rho )^2}{1 - \rho}\leq
\frac{(3 + \rho)}{1 - \rho}=:D(\rho)$.
\end{thm}
First we show a modified version of \cite[Lemma 5]{Kueng2018}. Recall
that for a diagonal matrix
$W = \diag(w_1, \ldots, w_N) \in \mathbb{R}^{N\times N}$ considered as a
linear operator from $\mathbb{R}^N$ to $\mathbb{R}^N$ equipped with $\norm{\cdot}_p$
for any $p \in [1, \infty ]$, the operator norm is given as
$\norm{W}_o = \max \set{\abs{w_1}, \ldots, \abs{w_N}}$. Furthermore,
$W$ is invertible if and only if all the diagonal entries are nonzero,
with inverse
$W^{-1} = \diag (w_1^{-1}, \ldots, w_N^{-1})$.
Thus, in this case we can also write the condition number
in Definition \ref{def:kappa} as $\kappa(w)=\|W\|_o\|W^{-1}\|_o$.
\begin{lem}\label{lem:1}
Let $W = \diag(w)$ for some
$0<w\in\mathbb{R}^N$. If $\mathcal{A}$ satisfies the assumption in
Theorem \ref{thm:1} and
$\kappa=\kappa(w) <
\frac{1}{\rho}$, then $\mathcal{A} \circ W^{-1}$ satisfies the
$\ell^q$-NSP of order $s$ with respect to $\norm{\cdot}$ and
with constants $\tilde{\rho} \coloneqq \kappa \rho$,
$\tilde{\tau} \coloneqq \norm{W}_o \tau$.
\end{lem}
\begin{proof}
Let $S \subseteq [N]$ with $|S|\leq s$ and $v \in \mathbb{R}^N$. Since $W$ is diagonal, we have $(W^{-1}v)_S=W^{-1}v_S$ (same for $S^c$). We get:
\begin{align*}
&\norm{v_S}_q \leq\norm{W}_o \norm{(W^{-1}v)_S}_q \\
&\leq \norm{W}_o\br{\frac{\rho}{s^{1-1/q}}\norm{(W^{-1}v)_{S^c}}_1 + \tau \norm{\mathcal{A} (W^{-1}v)} } \\
&\leq \frac{\norm{W}_o\norm{W^{-1}}_o\rho}{s^{1-1/q}}\norm{v_{S^c}}_1 + \norm{W}_o \tau \norm{\mathcal{A}(W^{-1}v)} \\
&= \frac{\tilde{\rho}}{s^{1-1/q}}\norm{v_{S^c}}_1 + \tilde{\tau}\norm{(\mathcal{A}\circ W^{-1})v}
\end{align*}
\end{proof}
The next lemma is a generalization of \cite[Lemma 6]{Kueng2018}.
\begin{lem}\label{lem:2}
Assume $w \coloneqq \mathcal{A}^*(T) \in \mathbb{R}^N$ is strictly positive for some
$T \in \mathbb{C}^{n \times n}$ and set $W \coloneqq \diag(w)$. For any nonnegative $x,z \in \mathbb{R}^N$ it holds that
\begin{equation*}
\norm{Wz}_1 - \norm{Wx}_1 \leq \norm{T}\dualnormsymbol\norm{\mathcal{A} (z-x)}.
\end{equation*}
\end{lem}
\begin{proof}
Let $x,z \in \mathbb{R}^N$ be nonnegative. By construction, we have $W = W^*$ and $Wx$ is nonnegative. This implies
\begin{align*}
\norm{Wz}_1 &= \scp{1_N}{Wz} = \scp{w}{z} \\
&= \scp{\mathcal{A}^*(T)}{z} = \scp{T}{\mathcal{A}(z)},
\end{align*}
where $1_N$ denotes the vector in $\mathbb{R}^N$ containing only ones. With an analogous reformulation for $x$ we get
\begin{align*}
\norm{Wz}_1 - \norm{Wx}_1 &= \scp{T}{\mathcal{A} (z-x)}\\
&\leq \norm{T}\dualnormsymbol\norm{\mathcal{A}(z-x)}.
\end{align*}
\end{proof}
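As a sanity check (ours, not from \cite{Kueng2018}), the inequality of Lemma \ref{lem:2} can be verified numerically for a randomly drawn rank-one map with certificate $T=\Id_n$, anticipating Model \ref{model:aa}; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 30

# random rank-one PSD measurement matrices A_i = a_i a_i^*
a = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))
A_mats = np.einsum('ik,il->ikl', a, a.conj())

T = np.eye(n)                                   # certificate: A^*(T) = (||a_i||_2^2) > 0
w = np.array([np.real(np.trace(T @ Ai)) for Ai in A_mats])
assert np.all(w > 0)

x = np.abs(rng.standard_normal(N))              # arbitrary nonnegative vectors
z = np.abs(rng.standard_normal(N))
lhs = w @ z - w @ x                             # ||Wz||_1 - ||Wx||_1 for nonnegative x, z
Adiff = np.tensordot(z - x, A_mats, axes=1)     # A(z - x)
rhs = np.linalg.norm(T, 'fro') * np.linalg.norm(Adiff, 'fro')
assert lhs <= rhs + 1e-9                        # the inequality of Lemma lem:2
```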
We can now show a more general version of
\cite[Theorem 3]{Kueng2018} which holds for general $p\in [1,\infty)$
and generic norms on matrices. It parallels Theorem \ref{thm:1} in
the nonnegative case.
\begin{thm}\label{thm:M+}
Suppose that $\mathcal{A}$ satisfies the assumptions in Theorem
\ref{thm:1} and that there exists some ${T} \in \mathbb{C}^{n\times n}$ such that
$\mathcal{A}^*({T})$ is strictly positive. Set $W\coloneqq\diag(\mathcal{A}^*(T))$ and $\kappa\coloneqq\kappa(\mathcal{A}^*(T))=\norm{W}_o\norm{W^{-1}}_o$. If $\kappa\rho < 1$, then
\begin{align*}
\norm{z-x}_p
&\leq
\frac{2C(\kappa\rho)\kappa}{s^{1-1/p}}\sigma_s(x)_1 \\
&+ \frac{D(\kappa\rho)}{s^{1/q-1/p}}
\Bigbr{\kappa\tau +
\frac{\norm{W^{-1}}_o \norm{T}\dualnormsymbol }{s^{1-1/q}} }\norm{\mathcal{A} (z-x)}
\end{align*}
holds for all $p \in [1,q]$ and all nonnegative $x,z \in \mathbb{R}^N$.
\end{thm}
Note that we used here the definition of $C(\rho)$ and $D(\rho)$ from
Theorem \ref{thm:1}. Using this result for $p=q=2$ and with
$s^{1-1/q}\geq 1$ yields essentially \cite[Theorem 3]{Kueng2018}.
\begin{proof}
Let $p \in [1,q]$ and $x,z \in \mathbb{R}^N$ be nonnegative. By Lemma \ref{lem:1}, $\mathcal{A}\circ W^{-1}$ satisfies the NSP with parameters $\tilde{\rho}=\kappa\rho$ and
$\tilde{\tau}=\norm{W}_o\tau$. Therefore,
we can now use Theorem \ref{thm:1} for $Wx$ and $Wz$ (instead of $x$
and $z$) and $\mathcal{A}\circ W^{-1}$ (instead of $\mathcal{A}$):
\begin{align*}
\|W&(z-x)\|_p \\
\overset{\eqref{eq:thm:1}}{\leq}& C(\kappa\rho)\frac{\norm{Wz}_1 - \norm{Wx}_1 +2\sigma_s(Wx)_1}{s^{1-1/p}}\\
&+D(\kappa\rho)\norm{W}_o\tau s^{1/p-1/q}\norm{\mathcal{A}(z-x)}
\end{align*}
By Lemma \ref{lem:2} and invoking $\sigma_s(Wx)_1 \leq \norm{W}_o \sigma_s(x)_1$, this is at most
\begin{align*}
&C(\kappa\rho)\frac{\|{T}\|\dualnormsymbol\norm{\mathcal{A} (z-x)} +2\norm{W}_o\sigma_s(x)_1}{s^{1-1/p}}\\
+&D(\kappa\rho)\norm{W}_o\tau s^{1/p-1/q}\norm{\mathcal{A}(z-x)}
\end{align*}
which we can further upper bound, using $C(\kappa\rho)\leq D(\kappa\rho)$, by:
\begin{align*}
&2 C(\kappa\rho)\norm{W}_o\frac{\sigma_s(x)_1}{s^{1-1/p}}\\
+&D(\kappa\rho)s^{1/p-1/q}\br{\norm{W}_o\tau +
\frac{\norm{T}\dualnormsymbol}{s^{1-1/q}}}\norm{\mathcal{A}(z-x)}.
\end{align*}
This yields
\begin{align*}
&\norm{z-x}_p\leq\|W^{-1}\|_o\norm{W(z-x)}_p\\
&\leq 2C(\kappa\rho)\kappa\frac{\sigma_s(x)_1}{s^{1-1/p}}\\
&+ \frac{D(\kappa\rho)}{s^{1/q-1/p}}\Bigbr{\kappa\tau +
\frac{\norm{W^{-1}}_o\norm{T}\dualnormsymbol}{s^{1-1/q}}}\norm{\mathcal{A}(z-x)}
\end{align*}
\end{proof}
\begin{proof}[Proof of Main Theorem \ref{thm:main:nonneg}]
Applying Theorem \ref{thm:M+} above for $Y=\mathcal{A}(x) + E$
we obtain
\begin{align*}
&\|x^\sharp-x\|_p
\leq \frac{2C(\kappa\rho)\kappa}{s^{1-1/p}}\sigma_s(x)_1\\
&+
\frac{D(\kappa\rho)}{s^{1/q-1/p}}\Bigbr{\kappa \tau + \frac{ \norm{W^{-1}}_o\norm{T}\dualnormsymbol}{s^{1-1/q}}}
\left(\norm{\mathcal{A} (x^\sharp)-Y}+\norm{E}\right).
\end{align*}
This indeed suggests using the convex program \eqref{eq:generic:nnnorm} for recovery.
Obviously, the minimizer $x^\sharp$ of \eqref{eq:generic:nnnorm} fulfills
\begin{equation*}
\|\mathcal{A}(x^\sharp)-Y\|\leq\|\mathcal{A}(x)-Y\|=\|E\|,
\end{equation*}
and therefore we have:
\begin{equation}\begin{split}
\smallnorm{x^\sharp-x}_p
&\leq \frac{2C(\kappa\rho)\kappa}{s^{1-1/p}}\sigma_s(x)_1 \\
&+ \frac{2D(\kappa\rho)}{s^{1/q-1/p}}\Bigbr{\kappa\tau + \frac{\norm{W^{-1}}_o \norm{T}\dualnormsymbol}{s^{1-1/q}}}\norm{E}
\label{eq:main:unscaled}
\end{split}
\end{equation}
Note that $T$ can be rescaled by any positive factor: the $\mathcal{M}^+$--criterion remains fulfilled, and the terms $\kappa$ and $\norm{W^{-1}}_o \norm{T}\dualnormsymbol$ in Theorem \ref{thm:M+} above do not change. However, replacing $T$ by
$\norm{\mathcal{A}^*(T)}_\infty^{-1}\cdot T$, which yields $\norm{W}_o=1$ and therefore $\kappa = \smallnorm{W^{-1}}_o$, allows us to write \eqref{eq:main:unscaled} in the more convenient form \eqref{eq:thm:main:nonneg}.
\end{proof}
\section{The Rank-one and Sub-Gaussian Case}
\label{sec:nonneg:subgaussian}
In this part we will prove our second main result, Theorem
\ref{thm:main:subgaussian}. We will consider a random linear map
$\mathcal{A}:\mathbb{R}^N\rightarrow\mathbb{C}^{n\times n}$ given by Model \ref{model:aa},
i.e., with the special form
\begin{align}
\mathcal{A}(x)=\sum_{i=1}^N x_i a_i a_i^*=:\sum_{i=1}^N x_i A_i
\label{eq:defA:rankone}
\end{align}
where $A_i \coloneqq a_i a_i^* \in \mathbb{C}^{n\times n}$ are independent
random positive-semidefinite rank-one matrices. The adjoint map $\mathcal{A}^*: \mathbb{C}^{n\times n}\rightarrow\mathbb{R}^N$ is given as
\begin{align*}
\mathcal{A}^*(T)=(\scp{A_i}{T})_{i=1}^N = ( a^{*}_i T a_i)_{i=1}^N.
\end{align*}
Note that, even in the more general case where the $A_i$ are (non-zero)
positive-semidefinite matrices, it can easily be seen that $\mathcal{A}$
fulfills the $\mathcal{M}^+$ criterion, since for $T=\Id_n$ we get that
$\mathcal{A}^*(T)=(\trace A_i)_{i=1}^N>0$.
Now, according to Model \ref{model:aa},
$\{a_i\}_{i=1}^N$ are independent complex random vectors with
independent subgaussian real and imaginary part components. To make
this precise we will need the following characterization. For a
real-valued random variable $X$ and $r\in [1,\infty)$ define
\begin{equation}
\norm{X}_{\psi_r} \coloneqq \inf\set{t>0 : \expec{\exp(\abs{X}^r/t^r)} \leq 2}.
\label{eq:def:psip}
\end{equation}
This is a norm on the \textit{Orlicz space} of random variables $X$
with $\normpsi{X}{r}<\infty$. For $r=2$ these random variables are called \textit{sub-Gaussian} and for $r=1$ \textit{sub-exponential}. More information about these spaces can be found in \cite{Foucart2013} and \cite{Vershynin:datasciencebook} for example.
\subsection{The ${\cal M}^+$--Criterion}
We already discussed above that the measurement map $\mathcal{A}$ in
\eqref{eq:defA:rankone} fulfills the $\mathcal{M}^+$-criterion (by
choosing $T=\Id_n$ or a scaled version). However, its ``quality'' depends (for a chosen
$T$) on the condition number $\kappa$, which is a random variable.
We follow
the ideas of \cite{Kueng2018} again.
\begin{lem}\label{lem:M+AM}
Assume that $\mathcal{A}$ is given by Model \ref{model:aa}.
For given $\eta\in(0,1)$, it holds with probability at least
\begin{equation*}
1 - 2N\exp \bigbr{-\frac{c\eta^2}{2\psi_2^4}\cdot n },
\end{equation*}
that for all $i\in[N]$
\begin{align}\label{eq:M+bound1}
& n(1-\eta)\leq \norm{a_i}_2^2 \leq n(1+\eta),
\end{align}
where $c>0$ is the constant appearing in the Hanson-Wright inequality \eqref{eq:Hanson-Wright}.
In particular,
\begin{align}\label{eq:M+bound2}
&\frac{\max_{i\in [N]} \norm{a_i}_2^2}{\min_{i\in [N]} \norm{a_i}_2^2} \leq \frac{1 + \eta}{1 - \eta}.
\end{align}
\end{lem}
A variant of Lemma \ref{lem:M+AM} is possible for random vectors
beyond the i.i.d.\ model if a convex concentration property holds, see
\cite{Adamczak2015}. Let us already indicate how we will use this
result later on. In the context of proving Theorem
\ref{thm:main:subgaussian} applied to Model \ref{model:aa} with
$T=t\Id_n$ we have
$\kappa = \frac{\max_{i\in [N]} \norm{a_i}_2^2}{\min_{i\in [N]}
\norm{a_i}_2^2}$ and
$\norm{\mathcal{A}^*(T)}_\infty=t\max_{i\in [N]} \norm{a_i}_2^2$. Thus, Lemma
\ref{lem:M+AM} allows us to control the terms related to
the $\mathcal{M}^+$-criterion. We will do this more explicitly below
when proving Theorem \ref{thm:main:subgaussian}.
\begin{proof}
Note that $\expec{\|a_i\|_2^2}=n$. We will show that
with high probability it holds for all
$i\in[N]$ that
\begin{equation*}
|\|a_i\|_2^2 - n| \leq \eta n.
\end{equation*}
This directly implies \eqref{eq:M+bound1} and \eqref{eq:M+bound2}.
Using the Hanson-Wright inequality \eqref{eq:Hanson-Wright}
(which is a Bernstein inequality in this case) yields that for all $i\in [N]$ it holds that
\begin{align*}
&\prob{\smallabs{ \norm{a_i}_2^2 - n} \geq n\eta}\\
\leq&\, 2 \exp \big( -c n \min \{ \frac{\eta^2}{2\psi_2^4}, \frac{\eta}{\psi_2^2}\}\big)\\
=& \,2\exp \bigbr{-\frac{c\eta^2}{2\psi_2^4}\cdot n },
\end{align*}
using $\psi_2\geq 1$ and $\eta<1$.
By taking the union bound it follows that
\eqref{eq:M+bound1} and \eqref{eq:M+bound2} hold with probability
\begin{align*}
&\geq 1 - 2N\exp \bigbr{-\frac{c\eta^2}{2\psi_2^4}\cdot n },
\end{align*}
depending on $\psi_2$, the dimensions $n$ and $N$ and some
$\eta\in (0,1)$.
\end{proof}
\subsection{The Nullspace Property}
We will now establish that the $\ell^2$--NSP holds
with overwhelming probability once the sparsity $s$ is below a certain
threshold, in detail $s\lesssim m/\log^2(N/m)$ where $m=2n(n-1)$.
This shows that the well-known compressed sensing phase
transition holds (up to the order of the logarithm) also for such structured
random matrices.
It is well-known that the $\ell^2$--NSP is implied by the \textit{restricted isometry property}
(with respect to the $\ell^2$--norm).
\begin{defi}
For $s\in [N]$, the \textit{restricted isometry constant}
$\delta_s=\delta_s(\Phi)$ of order $s$ of a matrix
$\Phi\in\mathbb{C}^{m\times N}$ is defined as the
smallest $\delta\geq0$ satisfying
\begin{equation}
(1-\delta)\norm{x}_2^2 \leq \norm{\Phi x}_2^2 \leq (1+\delta)\norm{x}_2^2
\end{equation}
for all $s$-sparse vectors $x\in\mathbb{R}^N$, i.e., vectors with at most $s$
non-zero components. If $\delta_s(\Phi) <1$, the matrix $\Phi$ is
said to have the {\em restricted isometry property} ($\ell^2$--RIP) of order $s$.
\end{defi}
If
$\delta_{2s}(\Phi)<1/\sqrt{2}$, then $s$-sparse vectors $x\in\mathbb{R}^N$ can be
recovered in a stable way from given measurements $\Phi x$ using
$\ell^1$-based convex algorithms (basis pursuit, etc.) \cite{Cai2014}. The following
theorem \cite[Theorem 6.13]{Foucart2013} shows the important
relation to the nullspace property.
\begin{thm}
\label{thm:RIPNSP}
If the $2s$-th restricted isometry constant
$\delta_{2s}=\delta_{2s}(\Phi)$ of a matrix $\Phi\in\mathbb{C}^{m\times N}$
obeys $\delta_{2s}\leq\delta <\frac{4}{\sqrt{41}}$, then $\Phi$
satisfies the $\ell^2$--NSP of order $s$ with constants
\begin{align}\label{eq:rho_tau_RIP}
&\rho \leq \frac{\delta}{\sqrt{1-\delta^2}-\delta/4}\quad\text{and}\quad
\tau \leq \frac{\sqrt{1+\delta}}{\sqrt{1-\delta^2}-\delta/4}.
\end{align}
\end{thm}
For the proof see \cite[Theorem 6.13]{Foucart2013}.
For example, as seen in Figure \ref{fig:nspparam}, $\delta=0.5$ gives
$\rho\lessapprox 0.7$, $\tau\lessapprox 1.5$ and the constants in Theorem \ref{thm:1}
are $C(\rho)\approx 8.6$ and $D(\rho)\approx 11.3$.
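The quoted values can be reproduced directly from \eqref{eq:rho_tau_RIP} and the definitions of $C(\rho)$, $D(\rho)$ in Theorem \ref{thm:1}; a short Python sketch (the function name is ours):

```python
import math

def nsp_constants(delta):
    """NSP parameters (rho, tau) from the RIP bound eq:rho_tau_RIP
    and the constants C(rho), D(rho) from Theorem thm:1."""
    denom = math.sqrt(1 - delta**2) - delta / 4
    rho = delta / denom
    tau = math.sqrt(1 + delta) / denom
    C = (1 + rho)**2 / (1 - rho)
    D = (3 + rho) / (1 - rho)
    return rho, tau, C, D

rho, tau, C, D = nsp_constants(0.5)
assert rho < 0.7                      # rho is roughly 0.67
assert abs(C - 8.6) < 0.1             # C(rho) is roughly 8.6
assert abs(D - 11.3) < 0.1            # D(rho) is roughly 11.3
```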
\begin{figure}
\hspace*{-1em}
\includegraphics[width=1.1\linewidth]{fig/nsp_param}
\caption{Dependency of the NSP parameter (bounds) $\rho=\rho(\delta)$,
$\tau=\tau(\delta)$ in Theorem \ref{thm:RIPNSP}
and the constants
$C(\rho(\delta))$ and $D(\rho(\delta))$
in Theorem \ref{thm:1}. For example, $\delta=0.5$ gives $\rho\approx 0.7$, $\tau=1.5$,
$C(\rho)\approx 8.6$ and $D(\rho)\approx 11.3$.}
\label{fig:nspparam}
\end{figure}
Our first step will be to show that in the considered regime a
modified version $\Phi$ of $\mathcal{A}$ has with high probability $\ell^2$--RIP
with a sufficiently small RIP-constant. This then
implies that $\Phi$ and also $\mathcal{A}$ satisfy the $\ell^2$--NSP.
To this end, we define an operator
$P:\mathbb{C}^{n\times n} \rightarrow \mathbb{R}^{m}$, where $m\coloneqq 2n(n-1)$,
that maps a complex matrix to a real valued vector containing the real
and imaginary parts of all off-diagonal entries scaled by $\sqrt{2}$.
Hence, for any $M\in\mathbb{C}^{n\times n}$ we have $\|P(M)\|_2\leq\sqrt{2}\|M\|_F$.
Furthermore, we define the real vectors
\begin{equation}\begin{split}
X_i:&=P(a_ia_i^*)\\
&=\sqrt{2}\,[(\re(a_{i,k}\bar{a}_{i,l}))_{k\neq l},(\im(a_{i,k}\bar{a}_{i,l}))_{k\neq l}].
\end{split}
\label{eq:def:P}
\end{equation}
These are independent and have subexponential
zero-mean entries. The factor $\sqrt{2}$ normalizes the resulting
vector so that
\begin{equation*}
\mathbb{E}\|X_i\|_2^2
=2\mathbb{E}\sum_{k\neq l}\abs{a_{i,k}\bar{a}_{i,l}}^2=m.
\end{equation*}
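A quick Monte Carlo check of this normalization (our sketch; the standard complex Gaussian entries are one illustrative instance of Model \ref{model:aa}):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
m = 2 * n * (n - 1)

def P(Mat):
    """Real and imaginary parts of the off-diagonal entries, scaled by sqrt(2)."""
    off = Mat[~np.eye(n, dtype=bool)]
    return np.sqrt(2) * np.concatenate([off.real, off.imag])

# components with independent real/imaginary parts of variance 1/2, so E||a||_2^2 = n
samples = 20000
norms = np.empty(samples)
for j in range(samples):
    a = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    X = P(np.outer(a, a.conj()))
    norms[j] = X @ X

assert X.shape == (m,)                       # X_i lives in R^m with m = 2n(n-1)
assert abs(norms.mean() - m) / m < 0.05      # E||X_i||_2^2 = m (Monte Carlo)
```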
To show $\ell^2$--RIP, we will use the following result on matrices with independent heavy-tailed columns from \cite{Adamczak2011}:
\begin{thm}[Theorem 3.3 in \cite{Adamczak2011} for the $\psi_1$-case]
Let $X_1,\ldots,X_N \in \mathbb{R}^m$ be independent subexponential
random vectors with $\mathbb{E} \|X_i\|_2^2 = m$ and let
$\psi = \max_{i\in[N]}\|X_i\|_{\psi_1}$. Assume $s\leq \min \{N,m\}$ and let
$\theta \in(0,1)$, $K,K^\prime \geq 1$ and set
$\xi = \psi K + K^\prime$. Then for $\Phi := (X_1|...|X_N)$ it holds that
\begin{equation}
\delta_s\left(\frac{\Phi}{\sqrt{m}}\right) \leq C \xi^2\sqrt{\frac{s}{m}}
\log\left(\frac{eN}{s\sqrt{\frac{s}{m}}}\right)
+ \theta
\end{equation}
with probability larger than
\begin{equation}\begin{split}
1 &- \exp(-\hat{c}K\sqrt{s}\log(\frac{eN}{s\sqrt{\frac{s}{m}}})) \\
&- \prob{\max_{i\in [N]}\|X_i\|_2\geq K^\prime\sqrt{m}} \\
&- \prob{\max_{i\in [N]}|\frac{\|X_i\|_2^2}{m} - 1|\geq \theta},
\label{eq:adam_error}
\end{split}
\end{equation}
where $C,\hat{c} > 0$ are universal constants.
\label{thm:RIP:heavy}
\end{thm}
The last term in \eqref{eq:adam_error} shows the intuitive behavior
that the concentration of the column norms $\|X_i\|^2_2/m$ has a direct
impact on the RIP (take for example $s=1$). In our case we will apply
Theorem \ref{thm:RIP:heavy} above to the vectors defined in \eqref{eq:def:P}. The norm $\|X_i\|^2_2$ is in
general a $4$th order polynomial in the $2n$ real subgaussian random variables $\re (a_{i,k}), \im (a_{i,k})$.
In Appendix \ref{app:psir:moments} we show how to calculate tail bounds for a polynomial of this form; the summary for our specific case is the following corollary:
\begin{cor}\label{cor:tailPaa}
Consider the model \ref{model:aa} and $X_i\coloneqq P(a_i a_i^*)$ as
defined in \eqref{eq:def:P} so that $\mathbb{E}(\|X_i\|_2^2)=m$. Assume $n\geq \psi_2^4$.
For $\omega\in [0,1]$ it holds that
\begin{align*}
\prob{ \abs{\|X_i\|_2^2 - m} \geq m\omega} \leq 2 \exp\bigbr{ -\gamma \frac{\omega^2}{{\psi_2^4}} \cdot n },
\end{align*}
where $\gamma\in (0,1)$ is some absolute constant.
\end{cor}
\begin{proof}
This follows from Proposition \ref{prop:conc:4ord} in Appendix
\ref{app:psir:moments}. In our case we have
$\mu=0$ and $\sigma^2=1$ and $L=\psi_2\geq 1$, hence the
minimum in \eqref{eq:minZeta} can be computed as
$\min\{\frac{\omega^2}{\psi_2^8}\cdot n,\frac{\omega^2}{\psi_2^4}\}=\frac{\omega^2}{\psi_2^4}$, using $n\geq \psi_2^4$.
\end{proof}
Now we are ready to show $\ell^2$--RIP for the matrix
$\Phi \coloneqq \frac{1}{\sqrt{m}}P\circ\mathcal{A}$. A similar result for
the real case where (informally) ``$P$ is replaced by centering''
has been established in \cite{Fengler:krrip:2019}. However, to
establish the NSP it is more direct to remove the diagonal part with
the definition of $P$ in \eqref{eq:def:P}.
\begin{thm}\label{thm:RIP}
Assume that $\mathcal{A}:\mathbb{R}^N\rightarrow\mathbb{C}^{n\times n}$ is given by Model
\ref{model:aa}
and let $\delta\in(0,1]$. Assume
$N\geq m=2n(n-1)$. If
\begin{equation}
2s\leq \alpha m\log^{-2}(\frac{eN}{\alpha m})
\label{eq:thm:RIP}
\end{equation}
and $n\geq \frac{2 \log(4N)}{C_1}$, then,
with probability
\begin{equation}
\geq 1-2\exp(- \min\smallset{\hat{c} \sqrt{\alpha},\, \frac{1}{2}C_1} \cdot n ),\label{eq:RIPprob}
\end{equation}
the matrix $\Phi=\frac{1}{\sqrt{m}}P\circ \mathcal{A}\in\mathbb{R}^{m\times N}$
has $\ell^2$--RIP of order $2s$ with RIP-constant $\delta_{2s}(\Phi) \leq \delta$.
The constants $C,\hat{c}$ are the same as in Theorem \ref{thm:RIP:heavy} and $C_1, \alpha$ are given as $C_1= \frac{\gamma \delta^2}{4\psi_2^4}$, with $\gamma$ as in Corollary \ref{cor:tailPaa}, and $\alpha\coloneqq\min\{1,
\big(\frac{\delta}{6C(\psi_1+\sqrt{1+\delta/2})^2}\big)^2\}$, where $\psi_1\coloneqq \max_{i\in [N]}\norm{P(a_i a_i^*)}_{\psi_1}$.
\end{thm}
\begin{proof}
We will apply Theorem \ref{thm:RIP:heavy} and use ideas already
presented in \cite[Theorem 5 and Corollary 1]{Fengler:krrip:2019}.
Define the $N$ real-valued random vectors
$X_i=P(a_ia_i^*), i\in [N]$. The number
$\psi_1=\max_{i\in[N]}\normpsi{X_i}{1}$ defined above is finite,
independent of the dimension and depends quadratically on $\psi_2$, see Appendix
\ref{app:psir:moments}. Let $\alpha\in (0,1]$, the value will be
specified later, and set
$s^* \coloneqq \alpha m/\log^2(\frac{eN}{\alpha m})$. Since
$\log(\frac{eN}{\alpha m})\geq 1$, we ensure $s^*\leq m\leq N$. By
Theorem \ref{thm:RIP:heavy}, the RIP-constant
$\delta_{s^*}\coloneqq \delta_{s^*}(\Phi)$ of the matrix
$\Phi\coloneqq\frac{1}{\sqrt{m}}P\circ \mathcal{A}$ satisfies
\begin{equation}
\delta_{s^*} \leq C \xi^2 \sqrt{\frac{s^*}{m}} \log(\frac{eN}{s^* \sqrt{(s^*/m)}}) + \theta \label{deltaStar}
\end{equation}
with probability larger than
\begin{align}
1 &-\exp\bigbr{-\hat{c} K \sqrt{s^*} \log (\frac{eN}{s^* \sqrt{s^*/m}})} \label{1stprob}\\
&- \prob{\max_{i\in [N]} \smallnorm{X_i}_2 \geq K' \sqrt{m}} \label{2ndprob}\\
&-\prob{\max_{i\in [N]} \smallabs{ \frac{\norm{X_i}_2^2}{m} - 1} \geq \theta}.\label{3rdprob}
\end{align}
By definition of $s^*$, we can estimate \eqref{deltaStar} as
\begin{align}
\delta_{s^*} &\leq \frac{C \xi^2\sqrt{\alpha}}{\log(\frac{eN}{\alpha m})} \log\Big((\frac{eN}{\alpha m})^{3/2} \log^3(\frac{eN}{\alpha m})\Big) + \theta\nonumber \\
&= C\xi^2 \sqrt{\alpha} \Big( \frac{3}{2} + 3\frac{\log \log(\frac{eN}{\alpha m})}{\log(\frac{eN}{\alpha m})}\Big) + \theta\nonumber\\
&\leq C\xi^2 \sqrt{\alpha } \big( \frac{3}{2} + \frac{3}{e}\big)+ \theta\nonumber\\
&\leq 3C\xi^2\sqrt{\alpha }+ \theta,\label{eq:deltaEstimate}
\end{align}
where we used $\frac{\log \log x}{\log x}\leq \frac{1}{e}$ for $x>1$ in the last line.
For \eqref{2ndprob}, \eqref{3rdprob}, taking union
bounds and rewriting gives
\begin{align}
&\prob{\max_{i\in [N]} \smallabs{ \frac{\norm{X_i}_2^2}{m} - 1} \geq \theta}\nonumber \\
&\leq N\cdot\prob{\smallabs{ \norm{X_i}_2^2 - m} \geq \theta m}\label{eq:tailTheta}
\end{align}
and
\begin{align*}
& \prob{\max_{i\in [N]} \smallnorm{X_i}_2 \geq K' \sqrt{m}}\\
&\leq N\cdot\prob{\smallnorm{X_i}^2_2 \geq K'^2 m}\\
&\leq N\cdot\prob{\smallabs{ \norm{X_i}_2^2 - m} \geq (K'^2-1) m}.
\end{align*}
Choosing $K'\coloneqq \sqrt{1 + \theta}$, both terms above are
equal. We set $\theta= \frac{\delta}{2}$.
Note that $n\geq\frac{2\log(4N)}{C_1}$ yields $n\geq \psi_2^4$ since $C_1= \frac{\gamma \delta^2}{4\psi_2^4}$ and $\gamma,\delta\leq 1$. Hence, using Corollary \ref{cor:tailPaa} with $\omega=\frac{\delta}{2}$, the probabilities above can be bounded by
$2N \exp(-C_1\cdot n)$. Since
$n\geq\frac{2 \log (4N)}{C_1}$, we can estimate
\begin{align}
4Ne^{-C_1\cdot n} =e^{\log(4 N) - C_1\cdot n}\leq e^{-
\frac{1}{2}C_1\cdot n}.
\label{eq:unionbound:1}
\end{align}
Now set $K=1$ and choose $\alpha $ sufficiently
small so that we get $\delta_{s^*}\leq \delta$ from
\eqref{eq:deltaEstimate}, i.e., $\alpha \leq \big(\frac{\delta}{6C(\psi+\sqrt{1+\delta/2})^2}\big)^2$. The term \eqref{1stprob} can be estimated
in the following way using $s^*=\alpha m/\log^2(\frac{eN}{\alpha m})\leq \alpha ^{2/3}m$
:
\begin{align}
&\exp\bigbr{-\hat{c} \sqrt{s^*} \log (\frac{eN}{s^* \sqrt{s^*/m}})}\nonumber\\
\leq
& \exp\bigbr{ -\hat{c } \sqrt{s^*} \log ( \frac{eN}{\alpha m}) } \nonumber\\
=&\exp\bigbr{ -\hat{c} \sqrt{\alpha }\cdot \sqrt{m} }
\nonumber\\
\leq & \exp\bigbr{-\hat{c}\sqrt{\alpha }\cdot n} \label{eq:unionbound:2}
\end{align}
Using \eqref{eq:unionbound:1}, \eqref{eq:unionbound:2} we get
\begin{align*}
\prob{\delta_{s^*} \leq \delta}&\geq
1 - \exp\bigbr{- \hat{c} \sqrt{\alpha } \cdot n} -
\exp\bigbr{-\frac{1}{2} C_1 \cdot n}\\
&\geq 1 - 2\exp\bigbr{-\min\smallset{\hat{c}\sqrt{\alpha },\frac{1}{2}C_1}\cdot n}.
\end{align*}
By monotonicity of the RIP-constant we get the same lower bound for $\prob{\delta_{2s} \leq \delta}$, whenever $2s\leq s^*$.
\end{proof}
From this it easily follows that $\Phi$ and also $\mathcal{A}$ itself satisfy the $\ell^2$--NSP.
\begin{thm}\label{thm:NSP}
Assume that $\mathcal{A}:\mathbb{R}^N\rightarrow\mathbb{C}^{n\times n}$ is given by Model
\ref{model:aa}.
Let $N\geq m=2n(n-1)$, $\delta\in (0, \frac{4}{\sqrt{41}})$ and
assume
\begin{equation*}
s\lesssim m\log^{-2}(N/m)
\end{equation*}
and $n\gtrsim \log(N)$ as in Theorem \ref{thm:RIP}.
Then, with probability
\begin{equation}\label{eq:NSPprob}
\geq {1-2\exp(-c_\delta\cdot n )},
\end{equation}
$\mathcal{A}$ has the $\ell^2$--NSP of order $s$ w.r.t. the
Frobenius norm $\|\cdot\|_\text{F}$ with parameters $\rho$ and $\tau\sqrt{2}/\sqrt{m}$. The number ${c_\delta}$ is defined so that \eqref{eq:NSPprob} coincides with \eqref{eq:RIPprob} and $\rho, \tau$ satisfy \eqref{eq:rho_tau_RIP} with the chosen $\delta$.
\end{thm}
\begin{proof}
We set
$\Phi=\frac{1}{\sqrt{m}}P\circ \mathcal{A}\in\mathbb{R}^{m\times N} $. By Theorem \ref{thm:RIP}, with probability \eqref{eq:NSPprob} $\Phi$ has $\ell^2$--RIP of order $2s$ with RIP-constant $\delta_{2s}(\Phi)\leq\delta$. Theorem \ref{thm:RIPNSP}
implies that $\Phi$ in this case satisfies the
$\ell^2$--NSP with parameters $(\rho,\tau)$ depending on $\delta$ as
given in \eqref{eq:rho_tau_RIP}. Hence,
for all $v\in\mathbb{R}^{N}$ and $S\subset [N]$ with
$|S| \leq s$ it holds that
\begin{align*}
\norm{v_S}_2
&\leq \frac{\rho}{\sqrt{s}} \norm{v_{S^c}}_1 + \tau
\norm{\Phi v}_2 \\
&\leq \frac{\rho}{\sqrt{s}} \norm{v_{S^c}}_1 + \frac{\tau}{\sqrt{m}} \norm{P(\mathcal{A}(v))}_2 \\
&\leq
\frac{\rho}{\sqrt{s}} \norm{v_{S^c}}_1+ \frac{\tau\sqrt{2}}{\sqrt{m}} \norm{\mathcal{A}(v)}_\text{F},
\end{align*}
showing that the linear map $\mathcal{A}$ has the $\ell^2$--NSP of order $s$
with respect to $\|\cdot\|_\text{F}$ and with parameters
$(\rho,\tau\sqrt{2}/\sqrt{m})$.
\end{proof}
\subsection{Proof of the Main Recovery Guarantee for Model \ref{model:aa}}
Now we are ready to proceed with the proof of the second main result,
Theorem \ref{thm:main:subgaussian}.
\begin{proof}[Proof of Main Theorem \ref{thm:main:subgaussian}]
We start from our first main result, Theorem \ref{thm:main:nonneg},
for the case of the Frobenius norm $\|\cdot\|_\text{F}$. The convex
program \eqref{eq:generic:nnnorm} is then {\em Nonnegative
Least-Squares} (NNLS) and Theorem \ref{thm:main:nonneg} states
that if the linear map $\mathcal{A}$ has the $\ell^2$--NSP with respect to
$\|\cdot\|_\text{F}$ and fulfills the ${\cal M}^+$-criterion for some
matrix $T$ with a sufficiently well-conditioned
$\kappa=\kappa(\mathcal{A}^*(T))$, then NNLS obeys a recovery guarantee of
the form \eqref{eq:main:unscaled}. It will be more convenient here to
choose a different scaling for $T$ than we did at the end of the
proof of Theorem \ref{thm:main:nonneg}.
Theorem \ref{thm:NSP} states that with high probability $\mathcal{A}$ has the
$\ell^2$--NSP with parameters $(\rho, \sqrt{2} \tau/\sqrt{m})$,
where $\rho,\tau$ depend on the number $\delta$ from Theorem
\ref{thm:RIPNSP} and \ref{thm:RIP}. We know that the
$\mathcal{M}^+$--criterion for $\mathcal{A}$ is fulfilled for
$T=t\cdot \Id_n$ with $t>0$. Lemma \ref{lem:M+AM} furthermore states
that with overwhelming probability the resulting vector
$w=t\mathcal{A}^*(\Id_n)$ is well-conditioned and concentrates around its
mean. Set $\kappa \coloneqq \kappa(w)$ and $W\coloneqq \diag (w)$.
Conditioned on the event that
$\mathcal{A}$ indeed has the $\ell^2$--NSP and that $\kappa\rho<1$, we have from
\eqref{eq:main:unscaled} that for any $1\leq p\leq q=2$ it holds that
\begin{equation}\begin{split}\label{eq:main:subgaussian:1}
\|&x^\sharp-x\|_p\\
&\leq \frac{2C(\kappa\rho)\kappa}{s^{1-\frac{1}{p}}}\sigma_s(x)_1\\
&+ \frac{2D(\kappa\rho)}{s^{\frac{1}{2}-\frac{1}{p}}}\Bigbr{\kappa\frac{\sqrt{2}\tau}{\sqrt{m}} +
\frac{\norm{W^{-1}}_o\norm{T}\dualnormsymbol}{\sqrt{s}}}\norm{E}_\text{F}.
\end{split}
\end{equation}
Equation \eqref{eq:M+bound2} in this setting translates to $\kappa(w)\leq
\frac{1+\eta}{1-\eta}=:\kappa_\eta$, where $\eta\in (0,1)$ will be specified later. Recall that the condition number is invariant under scaling of $w$; hence $\kappa=\kappa(w)$ does not depend on $t$. The dual norm in
\eqref{eq:thm:main:nonneg} is
$\|T\|^\circ=\|T\|_\text{F}=t\|\Id_n\|_\text{F}=t\sqrt{n}$ and $\norm{W^{-1}}_o = (t\min_{i\in[N]} \norm{a_i}_2^2)^{-1}\leq \bigbr{tn(1-\eta)}^{-1}$. Choosing $t\coloneqq \bigbr{n(1+\eta)}^{-1}$ we achieve $\norm{W^{-1}}_o \leq \kappa_\eta$ and $\norm{T}\dualnormsymbol =\bigbr{\sqrt{n}(1+\eta)}^{-1} $. With these bounds and setting
$C_{\eta,\rho}=2C(\kappa_\eta\rho)\kappa_\eta$,
$D_{\eta,\rho}=2D(\kappa_\eta\rho)\kappa_\eta$, we can further estimate \eqref{eq:main:subgaussian:1} as
\begin{equation}\begin{split}\label{eq:main:subgaussian:2}
&\leq \frac{C_{\eta,\rho}\sigma_s(x)_1}{s^{1-\frac{1}{p}}}
+ \frac{D_{\eta,\rho}}{s^{\frac{1}{2}-\frac{1}{p}}}\Bigbr{\frac{n \sqrt{2}\tau}{\sqrt{m}} + \sqrt{\frac{n}{s}}(1+\eta)^{-1}}\frac{\norm{E}_\text{F}}{n}\\
&\leq \frac{C_{\eta,\rho}\sigma_s(x)_1}{s^{1-\frac{1}{p}}}
+ \frac{D_{\eta,\rho}}{s^{\frac{1}{2}-\frac{1}{p}}}\Bigbr{2\tau +
\sqrt{\frac{n}{s}}(1+\eta)^{-1}}\frac{\norm{E}_\text{F}}{n}.
\end{split}
\end{equation}
The last step may be improved further by explicitly accounting
for the bound in \eqref{eq:thm:RIP}; instead, we have only assumed $n>1$,
so that
$\frac{n}{\sqrt{m}}=\frac{n}{{\sqrt{2n(n-1)}}}\leq \sqrt{2}$.
A possible concrete choice of the not yet specified numbers is
$\eta=1/3$ and $\delta = 1/6$, see here also Figure
\ref{fig:nspparam1}. In this case we have $\kappa_{\eta} = 2$ and
$\rho\leq 0.18$, hence $\kappa\rho < 1$ is fulfilled,
$\tau\leq 1.15 $ and
$C_{\eta,\rho}\leq 11.36, D_{\eta,\rho}\leq 20.73$.
Plugging into \eqref{eq:main:subgaussian:2} yields the desired inequality \eqref{eq:thm:main:subgaussian}
\begin{align*}
\smallnorm{x^\sharp-x}_p
\leq \frac{c_2\sigma_s(x)_1}{s^{1-\frac{1}{p}}}
+ \frac{c_3\br{c_4 +
\sqrt{\frac{n}{s}}}}{s^{\frac{1}{2}-\frac{1}{p}}}\frac{\norm{E}_\text{F}}{n}
\end{align*}
with constants
\begin{align*}
c_2&=C_{\eta,\rho}\leq 11.36,\\ c_3&=D_{\eta,\rho}(1+\eta)^{-1}\leq 15.55,\\ c_4&=2\tau(1+\eta)\leq 3.07.
\end{align*}
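These constants can be verified numerically by chaining the bounds from \eqref{eq:rho_tau_RIP} through \eqref{eq:main:subgaussian:2} (a sketch of ours):

```python
import math

eta, delta = 1/3, 1/6
kappa_eta = (1 + eta) / (1 - eta)
assert abs(kappa_eta - 2) < 1e-12                    # kappa_eta = 2

denom = math.sqrt(1 - delta**2) - delta / 4          # bound eq:rho_tau_RIP
rho = delta / denom
tau = math.sqrt(1 + delta) / denom
assert kappa_eta * rho < 1                           # NSP survives preconditioning

C = (1 + kappa_eta * rho)**2 / (1 - kappa_eta * rho) # C(kappa_eta * rho)
D = (3 + kappa_eta * rho) / (1 - kappa_eta * rho)    # D(kappa_eta * rho)
c2 = 2 * C * kappa_eta
c3 = 2 * D * kappa_eta / (1 + eta)
c4 = 2 * tau * (1 + eta)
assert c2 <= 11.36 and c3 <= 15.55 and c4 <= 3.07
```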
The probability for \eqref{eq:M+bound1}, \eqref{eq:M+bound2} to hold can be estimated as
\begin{align*}
&1 - 2N\exp \bigbr{-\frac{c}{18\psi_2^4}\cdot n }\\
\geq & 1 - 2\exp\bigbr{-\frac{c}{36\psi_2^4}\cdot n}
\end{align*}
if $n\geq \frac{36\psi_2^4 \log(N)}{c}$. Taking a union bound with \eqref{eq:NSPprob} gives a probability of at least $1 - 4 \exp\bigbr{-c_1\cdot n }$ with
$c_1\coloneqq\min \{ \frac{c}{36\psi_2^4}, \hat{c}\sqrt{\alpha}, \frac{1}{2}C_1 \}$
for \eqref{eq:thm:main:subgaussian} to hold if also $n\geq\frac{2\log(4N)}{C_1}$, where $c$ is the constant from the Hanson-Wright inequality and $\hat{c}, \alpha, C_1$ are the same as in Theorem \ref{thm:RIP} and depend on $\psi_2$ but not on the dimensions.
\begin{figure}
\hspace*{-1em}
\includegraphics[width=1.1\linewidth]{fig/nsp_param1}
\caption{Dependency of the constants
$C_{\eta,\rho}$ and $D_{\eta,\rho}$
in \eqref{eq:main:subgaussian:2}
depending on $\delta$ for fixed $\eta=1/3$ (yielding
$\kappa_\eta=2$ and therefore $C_{\eta,\rho}=4C(2\rho(\delta))$
and $D_{\eta,\rho}=4D(2\rho(\delta))$, where $C$ and $D$ are
defined as in Theorem \ref{thm:1} and shown in Figure \ref{fig:nspparam}).}
\label{fig:nspparam1}
\end{figure}
\end{proof}
\section{Numerical Experiments}
In the following we validate our theoretical result
\eqref{eq:thm:main:subgaussian:phasetransition} in Theorem
\ref{thm:main:subgaussian} about the phase transition for successful
recovery via NNLS for Model \ref{model:aa} with numerical experiments.
We performed recovery experiments for dimensions $n=20,\dots, 30$ and
sparsity range $s=20,\dots, 150$. For every pair $(n,s)$ we have performed
$20$ experiments with randomly generated vectors $\{a_i\}_{i=1}^N$
with independent standard normal entries and a nonnegative sparse
vector $x\in\mathbb{R}^N$. The support of $x$ is generated uniformly over all
possible $\binom{N}{s}$ combinations. The nonnegative values on the support are
generated independently as absolute values from a standard normal
distribution. Given the noiseless measurement $Y=\mathcal{A}(x)$, we then used
the MATLAB function \texttt{lsqnonneg} to solve the NNLS problem (the
convex program \eqref{eq:generic:nnnorm} for the Frobenius norm)
yielding the estimate $x^\sharp$. We declare the vector
successfully recovered if $\|x-x^\sharp\|_2\leq 10^{-4}$. The
corresponding result is shown in Figure \ref{fig:phasetrans}.
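For illustration, one trial of this experiment can be sketched in Python, with \texttt{scipy.optimize.nnls} taking the role of MATLAB's \texttt{lsqnonneg}; the dimensions $(n,N,s)=(20,500,10)$ below are reduced, illustrative values rather than those of the full experiment.

```python
import numpy as np
from scipy.optimize import nnls

def nnls_recovery_trial(n=20, N=500, s=10, seed=0):
    """One noiseless recovery trial for Y = sum_i x_i a_i a_i^T with x >= 0 s-sparse."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((N, n))               # rows are the vectors a_i
    # measurement operator in matrix form: column i is vec(a_i a_i^T)
    M = np.stack([np.outer(a, a).ravel() for a in A], axis=1)
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)
    x[support] = np.abs(rng.standard_normal(s))   # nonnegative values on the support
    y = M @ x                                     # vec(Y), noiseless measurements
    x_hat, _ = nnls(M, y)                         # min ||M z - y||_2  s.t.  z >= 0
    return np.linalg.norm(x - x_hat)

err = nnls_recovery_trial()
print("recovery error:", err)                     # success criterion: below 1e-4
```

Sweeping this trial over a grid of $(n,s)$ and averaging the successes produces a phase-transition plot of the kind shown in Figure \ref{fig:phasetrans}.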
\begin{figure}[h]
\includegraphics[width=1\linewidth]{fig/phasetrans_nnls_d=2000_20190524_165116_nnls_matlab-succ}
\caption{Phase transition for NNLS (the convex program
\eqref{eq:generic:nnnorm} for the Frobenius norm) in the noiseless
case (success=light/yellow and failure=blue/dark).
The function $x\mapsto x^2/4-x-25$ is overlaid in
black.}
\label{fig:phasetrans}
\end{figure}
\section*{Acknowledgments}
We thank Alexander Fengler, Radoslaw Adamczak and Saeid
Haghighatshoar.
PJ has been supported by DFG grant JU 2795/3.
The work was partially supported by DAAD grant 57417688.
\begin{appendices}
\section{Hanson-Wright Inequality}
The Hanson-Wright inequality is an important tool to calculate tail bounds for sub-Gaussian random vectors. We first state it for the real case, taken from \cite[Theorem 1.1]{Rudelson2013}.
\begin{thm}[Hanson-Wright inequality]
Let $a = (a_1,\ldots,a_n)\in\mathbb{R}^n$ be a random vector with
independent and centered
sub-Gaussian components and $Z\in\mathbb{R}^{n \times n}$. For all $t\geq 0$ it holds that
\begin{equation}\begin{split}
\label{eq:Hanson-Wright}
&\prob{\abs{\langle a, Z a\rangle - \expec{\langle a, Z a\rangle}} > t} \\
&\leq 2 \exp(-c \min\{\frac{t^2}{K^4 \norm{Z}_F^2}, \frac{t}{K^2 \norm{Z}_o}\}),
\end{split}
\end{equation}
where $K$ is a bound on the $\psi_2$-norms of the components of $a$ and $c>0$ a universal constant.
\end{thm}
The complexifications have been discussed in \cite[Sec. 3.1]{Rudelson2013}.
One important application for us is bounding the deviation of the Euclidean norm squared of a complex vector $a\in\mathbb{C}^n$ from its mean by writing
\begin{align*}
\norm{a}_2^2= \norm{\tilde{a}}_2^2 = \scp{\tilde{a}}{I_{2n}\tilde{a}},
\end{align*}
where $\tilde{a}\coloneqq\begin{bmatrix}\re(a)\\ \im(a)\end{bmatrix}\in\mathbb{R}^{2n}$ and $I_{2n}$ is the $2n\times 2n$ identity matrix with $\norm{I_{2n}}_F^2=2n$ and $\norm{I_{2n}}_o=1$.
But we can furthermore even state a complete complex version.
\begin{thm}[Hanson-Wright inequality, complex version]
\label{thm:HWcomplex}
Let $a = (a_1,\ldots,a_n)\in\mathbb{C}^n$ be a random vector so that $\re(a_i),\im(a_i)$ are
independent and centered
sub-Gaussian random variables and let $Z\in\mathbb{C}^{n \times n}$. For all $t\geq 0$ it holds that
\begin{equation}\begin{split}
\label{eq:HWcomplex}
&\prob{\abs{\langle a, Z a\rangle - \expec{\langle a, Z a\rangle}} > t} \\
&\leq 4 \exp(-c \min\{\frac{t^2}{4K^4 \norm{Z}_F^2}, \frac{t}{\sqrt{2}K^2 \norm{Z}_o}\}),
\end{split}
\end{equation}
where $K$ is a bound on the $\psi_2$-norms of the real and imaginary parts of the components of $a$ and $c>0$ the same constant as in \eqref{eq:Hanson-Wright}.
\end{thm}
\begin{proof}
Taking squares on both sides and using $\abs{\cdot}^2 = \re(\cdot)^2 + \im(\cdot)^2$ yields
\begin{align}
&\prob{ \abs{\scp{a}{Za} - \expec{\scp{a}{Za}}}>t}\nonumber\\
= & \prob{\re^2(\scp{a}{Za} - \expec{\scp{a}{Za}})\nonumber\\
&+\im^2(\scp{a}{Za} - \expec{\scp{a}{Za}}) >t^2}\nonumber\\
\leq & \prob{\abs{\re(\scp{a}{Za} - \expec{\scp{a}{Za}})} \geq \frac{1}{\sqrt{2}}t}\label{term:HWrealPart}\\
&+\prob{ \abs{\im(\scp{a}{Za} - \expec{\scp{a}{Za}})} \geq \frac{1}{\sqrt{2}}t}.\label{term:HWimPart}
\end{align}
Writing
\begin{align*}
&\scp{a}{Za}\\
=& \bigbr{ \re(a)^T - \i \im(a)^T }\bigbr{ \re(Z) + \i \im(Z) }\\
&\cdot \bigbr{ \re(a) + \i \im(a) }\\
=& \begin{bmatrix} \re(a)^T & \im(a)^T \end{bmatrix} \begin{bmatrix} \re(Z) & -\im(Z) \\ \im(Z) & \re(Z) \end{bmatrix}
\begin{bmatrix} \re(a) \\ \im(a) \end{bmatrix}\\
&+ \i \begin{bmatrix} \re(a)^T & \im(a)^T \end{bmatrix} \begin{bmatrix} \im(Z) & \re(Z) \\ -\re(Z) & \im(Z) \end{bmatrix}
\begin{bmatrix} \re(a) \\ \im(a) \end{bmatrix}\\
\eqqcolon& \tilde{a}^T\tilde{Z}_1\tilde{a} + \i\,\tilde{a}^T\tilde{Z}_2\tilde{a} ,
\end{align*}
we can apply the Hanson-Wright inequality for the real case to \eqref{term:HWrealPart} and \eqref{term:HWimPart} with $\smallnorm{\tilde{Z}_{1/2}}_{F} = \sqrt{2} \norm{Z}_{F}$ and $\smallnorm{\tilde{Z}_{1/2}}_{o} = \norm{Z}_o$, to obtain the result.
\end{proof}
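As a numerical sanity check (a small illustrative sketch, not part of the proof), the decomposition $\scp{a}{Za}=\tilde{a}^T\tilde{Z}_1\tilde{a} + \i\,\tilde{a}^T\tilde{Z}_2\tilde{a}$ and the norm identities $\smallnorm{\tilde{Z}_1}_F=\sqrt{2}\norm{Z}_F$ and $\smallnorm{\tilde{Z}_1}_o=\norm{Z}_o$ can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

x, y = a.real, a.imag
A, B = Z.real, Z.imag
a_tilde = np.concatenate([x, y])
Z1 = np.block([[A, -B], [B, A]])     # real part of <a, Za> as a quadratic form
Z2 = np.block([[B, A], [-A, B]])     # imaginary part as a quadratic form

lhs = np.vdot(a, Z @ a)              # <a, Za> = conj(a)^T Z a
rhs = a_tilde @ Z1 @ a_tilde + 1j * (a_tilde @ Z2 @ a_tilde)
assert np.isclose(lhs, rhs)

# Frobenius norm picks up a factor sqrt(2), the operator norm is preserved
assert np.isclose(np.linalg.norm(Z1), np.sqrt(2) * np.linalg.norm(Z))
assert np.isclose(np.linalg.norm(Z1, 2), np.linalg.norm(Z, 2))
```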
\section{Concentration of $4$th order Polynomials - Full Approach}
\label{app:conc4}
To calculate the probabilities of the form $\prob{\abs{\smallnorm{X_i}^2_2 - m} \geq \omega m}$ appearing in \eqref{2ndprob}, \eqref{3rdprob}, we observe that $\norm{X_i}_2^2$ is essentially a $4$th order polynomial in the sub-Gaussian random variables $\re(a_{i,k})$ and $\im(a_{i,k})$.
Setting $v_k \coloneqq \re(a_{i,k})$ and
$v_{n+k} \coloneqq \im(a_{i,k})$, a quick calculation shows that we
can write this as
\begin{align*}
\norm{X_i}_2^2
&=\sum_{k,l\in [n],k\neq l}(v^2_k + v^2_{n+k})(v^2_l + v^2_{n+l})\\
&=\sum_{(k,l)\in I} v_k^2 v_l^2,
\end{align*}
setting
\begin{equation}\label{eq:setI}
I=\{(k,l)\in [2n]\times[2n] : k\neq l, k\neq n+l, l\neq n+k\}.
\end{equation}
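As a quick numerical check (illustrative sketch), the identity $\norm{X_i}_2^2=\sum_{(k,l)\in I}v_k^2v_l^2=\sum_{k\neq l}\abs{a_{i,k}}^2\abs{a_{i,l}}^2$ and the count $\# I = 2n(2n-2)$ used below can be confirmed for a random draw:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = np.concatenate([a.real, a.imag])       # v_k = Re(a_k), v_{n+k} = Im(a_k)

# index set I from the definition above (0-based indices)
I = [(k, l) for k in range(2 * n) for l in range(2 * n)
     if k != l and k != n + l and l != n + k]
assert len(I) == 2 * n * (2 * n - 2)       # matches #I = 2n(2n-2)

lhs = sum(v[k] ** 2 * v[l] ** 2 for k, l in I)
m = np.abs(a) ** 2
rhs = m.sum() ** 2 - (m ** 2).sum()        # = sum_{k != l} |a_k|^2 |a_l|^2
assert np.isclose(lhs, rhs)
```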
The following theorem, which can be seen as a generalization of the Hanson-Wright inequality, allows us to analyze these terms.
\newcommand{\mathcal{J}}{\mathcal{J}}
\begin{thm}[Theorem 1.6 in \cite{Goetze2019}]\label{thm:concentrationPoly}
Let $Z=(Z_1,\dots,Z_\ell)$ be a random vector with independent
components, such that $\|Z_i\|_{\psi_2}\leq L$ for all $i\in [\ell]$. Then, for
every polynomial $f:\mathbb{R}^\ell\rightarrow\mathbb{R}$ of degree $D$ and all $t>0$, it holds that
\begin{align*}
& \prob{ |f(Z)-\mathbb{E} f(Z)|\geq t}\\
&\leq 2\exp\big(-\frac{1}{C_D}\min_{1\leq d\leq D}\min_{\mathcal{J}\in
P_d}\eta_{\mathcal{J}}(t)\big)
\end{align*}
where
\begin{equation}
\eta_{\mathcal{J}}(t)=\left(\frac{t}{L^d\|\mathbb{E}\mathbf{D}^d f(Z)\|_{\mathcal{J}}}\right)^{2/\#\mathcal{J}}.\label{eq:defEta}
\end{equation}
\end{thm}
\newcommand{\mathbf{D}}{\mathbf{D}}
Here $\mathbf{D}^d f$ is the $d$-th derivative of $f$ and for a
multi-index array $W=(w_{i_1\dots i_d})_{i_1\dots i_d=1}^\ell$ the $\norm{\cdot}_\mathcal{J}$-norm is defined as
\begin{equation}
\begin{split}
\|W\|_\mathcal{J}:=\sup\{&\sum_{\mathbf{i}\in[\ell]^d}w_{\mathbf{i}}\prod_{l=1}^k x^{(l)}_{\mathbf{i}_{J_l}}\,|\,
\|x^{(l)}\|_2\leq1\\
& \text{for all } l\in [k]\},
\end{split}
\end{equation}
where $\mathcal{J}=(J_1,\dots,J_k)\in P_d$ is a partition of $[d]$
into non-empty, pairwise disjoint sets. Some examples are:
\begin{equation*}
\begin{split}
\|W\|_{\{1,2\}} &=\|W\|_F\\
\|W\|_{\{1\}\{2\}} &=\|W\|_o\\
\|W\|_{\{1,2\}\{3\}} &=\sup_{\|x\|_F\leq 1\,\&\,\|y\|_2\leq 1}\sum_{ijk} w_{ijk}x_{ij}y_k.
\end{split}
\end{equation*}
Our first calculation allows the analysis of the deviation of $\norm{Z}_2^2$ from its mean for a complex sub-Gaussian random vector $Z$ with i.i.d.\ components.
\begin{prop}
Let $Z=(Z_1,\dots,Z_{2n})$ be a random vector with independent
components, such that $\|Z_i\|_{\psi_2}\leq L$,
$|\expec{ Z_i} |\leq \mu$ and $\expec{Z_i^2}\leq \frac{1}{2} \sigma^2$ for some $L\geq 1,\, \mu,\sigma^2\geq 0$ and all
$i\in [2n]$. Consider the $4$-th order polynomial
\begin{align*}
f: \,\mathbb{R}^{2n} \rightarrow \mathbb{R},\quad
v \mapsto \sum_{(k,l)\in I} v_k^2 v_l^2 ,
\end{align*}
where $I$ is given as in \eqref{eq:setI}.
Assume $n\geq 2$.
Then for all $\omega>0$ it holds that
\begin{align*}
&\prob{ |f(Z)-\mathbb{E} f(Z)|\geq n(n-1)\omega}\\
&\leq 2\exp(-\gamma\,\zeta\cdot n)
\end{align*}
where $\gamma\in (0,1)$ is an absolute constant and
\begin{equation}\begin{split}\label{eq:minZeta}
\zeta=&\min\{\frac{\omega^2}{L^2\mu^2\sigma^4},\frac{\omega}{L^2(\sigma^2 + 2 \mu^2)},\frac{\omega^2}{L^4 (\sigma^2 + \mu^2)^2},\\
&\frac{\omega^{2/3}}{L^2\mu^{2/3}}, \frac{\omega}{L^3\mu},\frac{\omega^2}{L^6 \mu^2}\cdot n, \frac{\omega^{1/2}}{L^2},
\frac{\omega^{2/3}}{L^{8/3}}, \frac{\omega}{L^4},\frac{\omega^2}{L^8}\cdot n
\}.
\end{split}\end{equation}
\label{prop:conc:4ord}
\end{prop}
Note that two of the terms in \eqref{eq:minZeta} contain a factor $n$ and will therefore not play a role for large $n$.
\begin{proof}
The partial derivatives are
\begin{align*}
\partial_{i} f(v)
&=4v_i\sum_{k\in [2n],(i,k)\in I} v_k^2 \\
\partial_{i,i} f(v)
&= 4\sum_{k\in [2n], (i,k)\in I} v_k^2 ,\\
\partial_{i,j} f(v)
&= 8v_iv_j,\\
\partial_{i,i,j}f(v)
&= 8v_j\\
\partial_{i,i,j,j} f(v) &=8,\\
\end{align*}
for all $(i,j)\in I$. All combinations not mentioned here are zero or follow from the calculations above by Schwarz's theorem about mixed partial derivatives.
We have to estimate \eqref{eq:defEta} for all possible partitions $\mathcal{J}$ and $t=\omega m$, where $m\coloneqq 2n(n-1)$. We will only state some of the calculations here; the others follow in a similar manner. Note that $\# I =2n(2n-2) $ and that for any $i\in [2n]$ there are $2(n-1)$ indices $k\in [2n]$ such that $(i,k)\in I$.
For the case $\mathcal{J}=\{1\}$,
\begin{align*}
&\norm{\expec{ \mathbf{D}^1 f(Z)}}_{\{1\}}\\
=&\sup\{ \sum_{i\in[2n]}\expec{\partial_{i} f(Z)}x_{i} \,|\,x\in\mathbb{R}^{2n} \text{ with } \norm{x}_2\leq 1\}.
\end{align*}
Let $x\in\mathbb{R}^{2n}$ with $\norm{x}_2\leq 1$. Since
\begin{align*}
&\sum_{i\in[2n]}\expec{\partial_{i} f(Z)} x_{i} =4\sum_{(i,k)\in I}\expec{Z_i}\,\expec{Z_k^2} x_{i} \\
\leq& 4 \mu \sigma^2 \sum_{(i,k)\in I} x_{i}
\leq 4 \mu\sigma^2 (2n-2) \norm{x}_1 \\
\leq& 8 \mu\sigma^2 (n-1)\sqrt{2n},
\end{align*}
we get the estimate
\begin{align*}
\eta_{\{1\}}(\omega m) &\geq \br{\frac{2\omega n(n-1) }{L^1 \cdot 8\mu \sigma^2 (n-1)\sqrt{2n}}}^{2/1} \\
&= \frac{\omega^2}{32L^2 \mu^2 \sigma^4 }\cdot n.
\end{align*}
\newcommand{\bold{F}}{\bold{F}}
To illustrate another important technique, consider
$\mathcal{J}=\{1,2\}\{3\}$ and $x\in\mathbb{R}^{2n\times2n}, y\in\mathbb{R}^{2n}$ with
$\|x\|_\text{F}=\|y\|_2=1$. We can assume $x,y\geq 0$, entrywise, to calculate the upper bound.
\begin{align*}
&\expec{\mathbf{D}^3 f(Z)}(x,y) \\
=& \sum_{i,j,k\in [2n]} \expec{\partial_{i,j,k}f(Z)}x_{ij}y_k \\
=&\sum_{(i,j) \in I}8\expec{Z_j}x_{ii}y_j + 8\expec{Z_j}x_{ij}y_i + 8\expec{Z_j}x_{ji}y_i\\
\leq &8\mu \sum_{(i,j)\in I} x_{ii} y_j+ x_{ij}y_i + x_{ji} y_i\\
\leq& 8 \mu \big( \smallnorm{\diag(x)}_1 \smallnorm{y}_1 + \sum_{j\in [2n]} (\smallscp{x_j}{y} + \smallscp{{}_jx}{y}) \big)\\
\leq & 8\mu \big( \sqrt{2n}\cdot\sqrt{2n} + \sum_{j\in [2n]} (1+ 1)\big)\\
= &48\mu n,
\end{align*}
where $x_j$ and ${}_jx$ denote the $j$-th row and column of $x$, respectively, and $\diag(x)$ denotes the $2n$-vector containing its diagonal elements. This shows
\begin{align*}
\eta_{\{1,2\}\{3\}}(\omega m) \geq \br{\frac{2\omega n(n-1)}{L^3\cdot 48\mu n}}^{2/2} \geq \frac{\omega}{48 \mu L^3}\cdot n,
\end{align*}
where we used that $n-1\geq \frac{1}{2}n$ because $n\geq 2$.
The other cases follow in a similar manner: the sums can be estimated directly, or via the Cauchy-Schwarz inequality, by Euclidean or $1$-norms of tensors with unit norm, or by norms of their columns, rows, or diagonal elements. We
only state the results here:
\begin{align*}
\eta_{\{1\}\{2\}}(\omega m) &\geq \frac{\omega}{4L^2(\sigma^2 + 2 \mu^2)} \cdot n
\\
\eta_{\{1,2\}}(\omega m) &\geq \frac{\omega^2}{32 L^4(\sigma^2 + \mu^2 )^2}\cdot n\\
\eta_{\{1\}\{2\}\{3\}}(\omega m) &\geq\frac{\omega^{2/3}}{192^{2/3}\mu^{2/3} L^2}\cdot n\\
\eta_{\{1,2,3\}} (\omega m) &\geq \frac{\omega^{2}}{48^2 \mu^2 L^6}\cdot n^2\\
\eta_{\{1\}\{2\}\{3\}\{4\}}( \omega m) &\geq \frac{\omega^{1/2}}{24^{1/2}L^2}\cdot n\\
\eta_{\{1,2\}\{3\}\{4\}}( \omega m) &\geq \frac{\omega^{2/3}}{64^{2/3} L^{8/3}}\cdot n\\
\eta_{\{1,2\}\{3,4\}}( \omega m) &\geq \frac{\omega}{128L^{4}}\cdot n\\
\eta_{\{1,2,3\}\{4\}}( \omega m) &\geq\frac{\omega}{48 L^{4}}\cdot n\\
\eta_{\{1,2,3,4\}}( \omega m) &\geq \frac{\omega^2}{48^2L^{8}}\cdot n^2.
\end{align*}
\end{proof}
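The analytic derivatives listed in the proof can be spot-checked against central finite differences (illustrative sketch with an arbitrary small $n$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
I = [(k, l) for k in range(2 * n) for l in range(2 * n)
     if k != l and k != n + l and l != n + k]

def f(v):
    return sum(v[k] ** 2 * v[l] ** 2 for k, l in I)

v = rng.standard_normal(2 * n)
i = 0
# analytic: d f / d v_i = 4 v_i * sum_{k : (i,k) in I} v_k^2
analytic = 4 * v[i] * sum(v[k] ** 2 for k in range(2 * n) if (i, k) in I)
h = 1e-6
e = np.zeros(2 * n)
e[i] = h
numeric = (f(v + e) - f(v - e)) / (2 * h)
assert abs(analytic - numeric) < 1e-4
```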
\section{The $\psi_r$--norm via Moments}
\label{app:psir:moments}
It is well-known, see \cite{Vershynin:datasciencebook}, that
\begin{equation}\label{eq:psi_r_equivalent}
\norm{X}_{\psi_r} = \sup_{p\geq 1} p^{-1/r} (\expec{\abs{X}^p})^{1/p}
\end{equation}
is equivalent to \eqref{eq:def:psip}. Now, let $a\in\mathbb{C}^n$ be a
random vector with sub-Gaussian entries and
$\norm{\re(a_i)}_{\psi_2},\norm{\im(a_i)}_{\psi_2}\leq \psi_2$ for a
constant $\psi_2$. In this section we show how to estimate the
$\psi_r$-norm for $r\geq 1$ of the matrix $aa^*$ by $\psi_2$. The $\psi_r$-norm of
a random matrix $A\in\mathbb{C}^{n\times n}$ is defined as
\begin{equation*}
\norm{A}_{\psi_r}\coloneqq\sup_{\norm{Z}_F\leq 1}\norm{\scp{A}{Z}}_{\psi_r}.
\end{equation*}
For the matrix $aa^*-\expec{aa^*}$ this can be written as
\begin{align*}
\norm{aa^*-\expec{aa^*}}_{\psi_r} = \sup_{\norm{Z}_F\leq 1}\norm{\scp{a}{Za} - \expec{\scp{a}{Za}}}_{\psi_r}.
\end{align*}
Set $Y_Z\coloneqq\scp{a}{Za} - \expec{\scp{a}{Za}}$ for some arbitrary $Z\in\mathbb{C}^{n\times n}$ with $0<\norm{Z}_F \leq 1$. Using \eqref{eq:psi_r_equivalent} we can compute its $\psi_r$-norm as
\begin{equation}\label{eq:psi_r_alternative}
\norm{Y_Z}_{\psi_r} = c_r \cdot \sup_{p\geq 1} \frac{\expec{\abs{Y_Z}^p}^{1/p}}{p^{1/r}},
\end{equation}
with some constant $c_r>0$.
The expectation can be expressed as
\begin{equation}\label{eq:LPintegral}
\expec{\abs{Y_Z}^p} = p\int_0^\infty t^{p-1} \prob{ \abs{Y_Z} \geq t } \dif t.
\end{equation}
The Hanson-Wright inequality \eqref{eq:HWcomplex} yields
\begin{align*}
&\prob{\abs{Y_Z} \geq t} \\
\leq& 4 \exp (-c \min \{ \frac{t^2}{4 \psi_2^4\norm{Z}_{\text{F}}^2 },\frac{t}{\sqrt{2}\psi_2^2 \norm{Z}_o}\})\\
\leq& 4 \exp (-c \min \{ \frac{t^2}{4 \psi_2^4 },\frac{t}{\sqrt{2}\psi_2^2 }\})\\
=& 4\max \bigset{e^{-t^2/a^2}, e^{-t/b}},
\end{align*}
where we used $\norm{Z}_o\leq \norm{Z}_F\leq 1$ and abbreviated
$a\coloneqq\frac{2 \psi_2^2}{\sqrt{c}}$ and $b\coloneqq \frac{\sqrt{2}\psi_2^2}{c}$. Plugging into \eqref{eq:LPintegral} and substituting $s\coloneqq t/a$, respectively $s\coloneqq t/b$, we obtain
\begin{align*}
\expec{\abs{Y_Z}^p} \leq& 4p \int_0^\infty s^{p-1} \max\smallset{a^p e^{-s^2}, b^p e^{-s} } \dif s\\
\leq & 4p\bigbr{\frac{1}{2}a^p \Gamma(\frac{p}{2}) + b^p \Gamma(p)},
\end{align*}
where we estimated the maximum by the sum of both terms and expressed the integrals in terms of the Gamma function. Using the identity $\Gamma(x)x = \Gamma(1+x)$ for $x>0$, and the asymptotic estimate $\Gamma(x+1) \lesssim x^x$ derived from Stirling's formula, we obtain
\begin{align*}
\expec{\abs{Y_Z}^p} &\leq 4 \bigbr{ a^p \Gamma(\frac{p}{2}+1) + b^p \Gamma(p+1) }\\
&\leq c' \bigbr{a^p (\frac{p}{2})^{p/2} + b^p p^p}\\
&\leq 2^{p/2} c' \psi_2^{2p} p^p \bigbr{ c^{-p/2} + c^{-p}},
\end{align*}
for some constant $c'>0$. Plugging this into \eqref{eq:psi_r_alternative} yields
\begin{align}
\norm{Y_Z}_{\psi_1} &\leq \sqrt{2} c_1 \psi_2^2 \cdot \sup_{p\geq 1} \bigbr{ c^{-p/2} + c^{-p}}^{1/p}\nonumber\\
&= c'' \psi_2^2,\label{eq:psi_1_2}
\end{align}
where $c''$ is some constant that does not depend on the dimensions.
Since for $r,p\geq 1$ it holds that $p^{-1/r}\leq p^{-1}$, we have $c_r^{-1} \normpsi{Y_Z}{r} \leq c_1^{-1} \normpsi{Y_Z}{1}$ whenever $r\geq 1$. Plugging into \eqref{eq:psi_1_2} and taking the supremum over all $Z\in\mathbb{C}^{n\times n}$ with $\norm{Z}_F \leq 1$ shows that $\normpsi{aa^*-\expec{aa^*}}{r} \leq c'_r \psi_2^2$, for some constant $c'_r$.
\end{appendices}
\section*{Acknowledgments}
We thank Harald Grie\ss hammer for generating the graphs shown here, Arman Margayan for identifying the errors and correcting the code, and Bruno Strandberg for his help running the corrected code. This work was supported by the US DOE under grant number DE-FG02-93ER-40756.
\section{Introduction}\label{sec:introduction}
Nowadays, energy storage systems have established their efficacy for more than a dozen power system applications, which cover all stages in the energy supply chain: bulk power and energy; ancillary services; transmission and distribution infrastructure applications; customer energy management \cite{akhil2013doe}. Among all storage technologies used in power systems, lithium-ion (Li-ion) batteries are the fastest-growing energy storage technology \cite{navigant2019}, characterized by high efficiency, high power and energy density, long cycle lifetime, and environmental friendliness \cite{hu2017technological}. As with any other equipment utilized in power systems, a techno-economic analysis should be performed for Li-ion storage systems prior to their installation and operation, which is usually done employing various optimization methods \cite{zidar2016review}. The result of such an analysis is typically an optimal choice for storage unit siting, sizing, and technology selection as well as the optimal charge/discharge scheduling, i.e., the operation strategy.
In early optimization problem formulations, such as in \cite{dvorkin2016ensuring,xu2013optimal}, constant charge and discharge efficiencies were considered when modelling battery behavior. In practice, efficiency is a function of the battery output current and of the battery state parameters, which include internal resistance and open-circuit voltage, that change significantly with the battery State of Charge (SoC), temperature, and State of Health (SoH) \cite{plett2015battery}. For instance, it was shown in \cite{alvaro2020} that charge and discharge efficiencies may vary significantly: they can drop as much as $33\%$ from their maximum values depending on the battery operating conditions. To account for the influence of power output and SoC on battery efficiency, \cite{morstyn2017model} proposed a second-order polynomial formulation, which can be considered within the convex programming approach. Then, a Mixed-Integer Linear Programming (MILP) compatible representation of the Li-ion battery has been proposed in \cite{sakti2017enhanced}, where efficiency was modelled using a piece-wise linear approximation of the simulated sample data. As an efficient alternative, \cite{alvaro2020} proposed a Linear Programming (LP) framework to account for efficiency based on the equivalent circuit model, while still considering the MILP formulation in \cite{sakti2017enhanced} as a benchmark.
While focusing on a more accurate representation of battery efficiency, the above mentioned references did not account for an \emph{operation-aware lifetime} and, most importantly, for the \emph{available energy capacity} of the Li-ion battery storage, which decreases gradually over its lifetime due to degradation. The very first attempts to represent operation-aware battery lifetime were models based on the total energy throughput, as in \cite{sayfutdinov2018incorporating}. To respect the nonlinear relationship between battery operation strategy, i.e., Depth of Discharge (DoD), and its cycle lifetime, \cite{alsaidan2018comprehensive} approximated the dependency using a piece-wise linear formulation and considered it within a MILP framework for optimal battery sizing problem. Next, in \cite{padmanabhan2019battery} previous approaches were enhanced by incorporating $C$-rate as an additional factor of battery wear-and-tear. However, the methods above did not account for inevitable capacity loss of Li-ion battery over its lifetime, which plays one of the most important roles in techno-economic analysis of battery storage.
Extensive experimental results \cite{rodrigues1999ac,wang2017online,spotnitz2003simulation,grolleau2014calendar,stroe2016degradation} suggest that the battery degradation depends in a more complicated (often non-linear) way on a number of factors, such as battery SoC, temperature, and DoD. Thus, certain approximations have to be made to account for these effects when formulating an optimization problem for techno-economic analysis. In an early attempt \cite{qiu2017stochastic}, a constant capacity fade rate of the Li-ion battery was introduced for the storage investment problem. Even though the degradation rate was considered to be fixed, irrespective of the battery operation, the results suggest that capacity fade is among the most important factors to account for. In addition to the previous effect, in \cite{miranda2016holistic,parra2015optimum} the battery available capacity was considered to be fading in time proportionally to the energy throughput. Considering the degradation rate to be dependent on operation variables, i.e., battery power output, made the optimization problem bilinear and required an exhaustive enumeration search to find the globally optimal solution. In our recent study \cite{sayfutdinov2019degradation}, dynamic programming and mixed-integer problem reformulation approaches were proposed to consider operation-aware degradation from SoC and DoD, while still respecting the formal optimization requirements. In \cite{maheshwari2020optimizing}, the short-term operation strategy of the Li-ion battery storage has been investigated using the MILP problem formulation, where the nonlinear cycling degradation effect from SoC, DoD and C-rate has been captured using a piece-wise linear approximation.
In \cite{li2020design,berrueta2018combined}, comprehensive Li-ion battery models were formulated for the optimal sizing problem, where the capacity fade effect from both idling and cycling mechanisms was complemented with the phenomenon known as internal resistance growth, which affects the battery maximum power output and efficiency. Both models are nonlinear and were approached with two distinct methods. Particularly, the Particle Swarm Optimization heuristic has been used in \cite{li2020design}, while the formal approach of dynamic programming has been applied in \cite{berrueta2018combined}, where the former method cannot guarantee optimality of a solution and the latter carries a high computational burden.
In contrast to the previous references, we develop a comprehensive battery modelling approach that takes into account a variety of physical phenomena and can be used in a MILP problem formulation that allows finding the globally optimal solution in a computationally efficient way. Based on the existing experimental literature, we propose a lithium iron phosphate (LiFePO$_4$) battery model that includes the realistic dependencies of efficiency, lifetime, and available capacity on its operation strategy and linearize them using the \emph{Special Order Sets 2}. We then provide the formulation of an optimization problem for the optimal choice of battery size and operation strategy for realistic case-studies, where the operation strategy can be adjusted for each battery lifetime period individually, i.e., it is treated as a set of optimization variables. Our findings suggest that there exist a number of trade-offs when deciding on a particular battery size and operation strategy, where the former might be significantly bigger than the minimum required capacity and the latter should be modified over the whole battery lifetime to provide the economically optimal result. Particularly, to achieve optimal utilization of the LiFePO$_4$ battery, its capacity may exceed the minimum service requirement by at least 77.3\%, its average SoC needs to be altered by up to 20\%, while the duration of the charging process is required to increase by up to 75\% during the battery lifetime. The associated economic effect of the proposed approach, compared to the state-of-the-art methodology, amounts to a 12.1\% reduction of battery investment and operating costs. Even though the proposed approach has been demonstrated for the LiFePO$_4$ battery, the methodology is applicable to other batteries of the Li-ion family.
To summarize, the main contributions of the present manuscript are the following:
\begin{enumerate}
\item A MILP compatible battery model that is based on the experimental results on Li-ion technology and accounts for realistic operation-aware efficiency and degradation, including capacity fade and internal resistance growth.
\item We illustrate that the LiFePO$_4$ battery operation strategy requires significant life-long modifications to achieve the optimal battery utilization.
\item We validate our findings on real case-studies and demonstrate that there exist a number of trade-offs in LiFePO$_4$ battery operation, which impact the operation strategy.
\end{enumerate}
\section{Li-ion Battery Modelling}\label{sec:model}
The central part in energy storage modelling is a storage continuity differential equation, which tracks the battery charge. In a general form, it looks as follows
\begin{equation}\label{eq:charge_diff}
\dot{e}=P^\text{B},
\end{equation}
where $e$ is a battery charge and $P^\text{B}$ is a battery power input. While the former cannot take negative values, the latter is positive when the battery charges and negative when it discharges.
The battery power input $P^\text{B}$ accounts for the amount of power drawn in and out of the battery cells. Due to power losses present in real cells, the battery power input $P^\text{B}$ is different from the power seen at the terminals $P^\text{T}$, i.e., the power that goes to/from the grid. In the simplest representation, the ratio of $P^\text{B}$ and $P^\text{T}$ is considered to be constant, which corresponds to constant battery efficiency. In reality, the efficiency depends on the battery operation parameters as well as on its SoH. In the present study, we use the equivalent circuit representation to approximate the relationship between $P^\text{B}$ and $P^\text{T}$.
\subsection{Equivalent circuit model}
Equivalent circuit modelling is an efficient tool to represent complex phenomena using circuit theory. A comprehensive electric circuit model for Li-ion cells derived from a physics-based phenomenological model has been provided in \cite{greenleaf2014application}. The model incorporates a number of $RLC$ circuits connected in series that represent the dynamics of electrochemical processes, and it is mainly used for dynamic studies. However, due to its non-linearity, such a detailed model is intractable for optimization tasks. In fact, this level of detail is redundant for applications where the time-scale is significantly longer than the transient time constant, i.e., scheduling and sizing. Thus, given the fact that the aggregate time constant of transient processes of Li-ion batteries is in the order of minutes \cite{plett2015battery}, a steady-state model can be effectively used for the optimal siting and scheduling problems, where the characteristic time-scale is of the order of hours or half-hours. The equivalent steady-state model corresponds to a circuit that contains a voltage source and an effective series resistance, as depicted in Fig. \ref{fig:rint}, known as the $Rint$ model \cite{morstyn2017scalable}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Figs/Rint_model.eps}
\caption{Rint model of a Li-ion cell}
\label{fig:rint}
\end{figure}
Given the $Rint$ model of Fig. \ref{fig:rint}, the battery power input $P^\text{B}$ can be expressed as a function of the power at terminals $P^\text{T}$ and battery state parameters, i.e., open-circuit voltage $V^\text{OC}$ and internal resistance $R^\text{in}$,
\begin{equation}\label{eq:Pb}
P^\text{B}=\frac{V^\text{OC}\sqrt{{V^\text{OC}}^2 + 4 P^\text{T} R^\text{in}} - {V^\text{OC}}^2}{2 R^\text{in}}.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Voc_vs_SoC.eps}
\caption{LiFePO$_4$ battery open-circuit voltage vs SoC}
\label{fig:ocv_soc}
\end{figure}
The first element of the $Rint$ model is a voltage source, with voltage level $V^\text{OC}$ dependent on the battery SoC. Fig. \ref{fig:ocv_soc} illustrates the dependence of the LiFePO$_4$ battery open-circuit voltage on SoC at 25\degree C \cite{plett2015battery}. For Li-ion chemistries, the dependence is considered to be linear within a wide range of SoC. Particularly, for LiFePO$_4$ batteries it is found to be linear between 10\% and 98\% SoC. Thus, it can be effectively approximated using the following linear relation:
\begin{equation}\label{eq:Voc}
V^\text{OC}(SoC)=\text{k}^\text{V} SoC + V^\text{0},
\end{equation}
where $\text{k}^\text{V}$ is a voltage slope and $V^\text{0}$ is an offset value, e.g., for LiFePO$_4$ battery $\text{k}^\text{V}=0.15$ V/pu, $V^\text{0}=3.2$ V.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Rin_vs_SoC.eps}
\caption{Resistance vs SoC}
\label{fig:Rin}
\end{figure}
The second element of the $Rint$ model is the internal resistance $R^\text{in}$, which incorporates a series of resistive elements of the original model \cite{greenleaf2014application} and depends on the state of the battery, including SoC and SoH, where the latter is sometimes expressed in equivalent full cycles. Fig. \ref{fig:Rin} illustrates the dependence of the internal battery resistance on SoC and the number of equivalent full cycles at 25\degree C \cite{wang2017online}. It can be noted that the value of the internal resistance is a non-monotonic function of SoC, which can be effectively linearized using three linear segments. At the same time, the value of internal resistance increases monotonically with the equivalent full cycles and can be approximated with a single linear function. Thus, the battery internal resistance can be represented with a combination of linear functions as follows:
\begin{equation}\label{eq:Rin}
R^\text{in}=\sum_{k=1}^\text{K}(\text{a}_k^\text{SoC}SoC_k + \text{b}_k^\text{SoC}) + \text{a}^\text{FC}N^\text{FC},
\end{equation}
where $SoC_k$ is the $k$-th segment of the battery SoC, $\text{a}_k^\text{SoC}$ and $\text{b}_k^\text{SoC}$ are the corresponding coefficients of the linear functions, $\text{a}^\text{FC}$ is the rate of internal resistance growth, and $N^\text{FC}$ is the number of equivalent full cycles. The latter is found as the ratio of the energy throughput to twice the battery capacity.
To estimate the losses obtained by the proposed $Rint$ model and the dependencies above, the charge and discharge efficiencies can be found as a ratio between $P^\text{B}$ and $P^\text{T}$, depending on the power flow direction. Fig. \ref{fig:efficiency_3d} illustrates battery discharge efficiencies derived from (\ref{eq:Pb}) for RCR123A 0.45Ah LiFePO$_4$ cell from \cite{greenleaf2014application} at the beginning of its lifetime. It can be noted that even at a moderate discharge rate of 1C, one-way efficiency may drop below 90\%.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Figs/Efficiency_3d_discharge.eps}
\caption{Discharge efficiency vs SoC and C-rate}
\label{fig:efficiency_3d}
\end{figure}
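For concreteness, \eqref{eq:Pb} and \eqref{eq:Voc} can be evaluated numerically. In the Python sketch below the resistance value is an illustrative assumption (not cell data); the script also checks the circuit identity $P^\text{T}=P^\text{B}+(P^\text{B}/V^\text{OC})^2R^\text{in}$, which is an equivalent form of \eqref{eq:Pb}:

```python
import numpy as np

# illustrative LiFePO4 cell parameters (assumptions, not datasheet values)
K_V, V0 = 0.15, 3.2      # open-circuit voltage law: V_oc = 0.15*SoC + 3.2 [V], SoC in p.u.
R_IN = 0.05              # internal resistance at the operating point [ohm]

def battery_power_input(p_terminal, soc, r_in=R_IN):
    """Battery power input P^B for a given terminal power P^T (Rint model)."""
    v_oc = K_V * soc + V0
    return (v_oc * np.sqrt(v_oc**2 + 4.0 * p_terminal * r_in) - v_oc**2) / (2.0 * r_in)

soc = 0.5
v_oc = K_V * soc + V0

# charging at 1 W seen at the terminals
p_t = 1.0
p_b = battery_power_input(p_t, soc)
# power balance: P^T = P^B + I^2 R with battery-side current I = P^B / V_oc
assert np.isclose(p_t, p_b + (p_b / v_oc) ** 2 * R_IN)
charge_eff = p_b / p_t                       # one-way charge efficiency (< 1)

# discharging 1 W at the terminals
p_b_dis = battery_power_input(-1.0, soc)
discharge_eff = -1.0 / p_b_dis               # |P^T| / |P^B| (< 1)
print(round(charge_eff, 4), round(discharge_eff, 4))
```

Evaluating this over a grid of SoC and C-rate values yields efficiency surfaces of the kind shown in Fig. \ref{fig:efficiency_3d}.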
\subsection{Degradation model}
From the operational perspective, the most important aspects of the Li-ion battery degradation are internal resistance growth and capacity fade. While the former influences the maximum power output and losses, the latter affects the available energy capacity during the battery lifetime.
The battery internal resistance growth is associated with the Solid Electrolyte Interface (SEI) formation on the surface of the anode \cite{rodrigues1999ac}. The SEI resistance increases with every cycle through the whole battery lifetime, which is captured by the second term in (\ref{eq:Rin}). As reported in \cite{wang2017online}, the total internal resistance increases nearly linearly with the number of equivalent full cycles, rising by as much as 20\% per $1,000$ full cycles.
The next aspect of the battery degradation is a continuous decrease of available capacity, known as capacity fade. There are two main degradation mechanisms considered in the literature, namely, idling $\delta^\text{idl}$ and cycling $\delta^\text{cyc}$, and the total capacity loss $\delta^\text{CF}$ can be approximated as a sum of both contributions \cite{schmalstieg2014holistic}:
\begin{equation}\label{eq:cap_fade}
\delta^\text{CF} \approx \delta^\text{idl}+\delta^\text{cyc}.
\end{equation}
Degradation from cycling implies that the available capacity decreases after each charge-discharge cycle, and the amount of capacity loss is driven by the charge and discharge rate (C-rate), cycle DoD and SoC range, and cell temperature during the cycle \cite{stroe2016degradation}. At the same time, idling degradation implies that the available capacity is lost even when the battery is not being cycled. The rate of capacity fade in this case depends on the state of the battery, i.e., SoC and cell temperature \cite{grolleau2014calendar}. In \cite{stroe2016degradation}, empirical capacity fade models due to both cycling and idling are provided based on accelerated aging test results:
\begin{equation}\label{eq:cap_fade_cyc}
\delta^\text{cyc}= 0.00568 e^{-1.943 SoC^\text{cyc}} DoD^{0.7162} \sqrt{n},
\end{equation}
\begin{equation}\label{eq:cap_fade_idl}
\delta^\text{idl}= 0.000112 e^{0.7388 SoC^\text{idl}} \tau^{0.8},
\end{equation}
where $SoC^\text{cyc}$ is the SoC level around which a cycle is made, i.e., median cycle SoC, $DoD$ is the cycle DoD, $n$ is the number of cycles, $SoC^\text{idl}$ is the average battery SoC and $\tau$ is time in days.
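For illustration, the empirical fade laws (\ref{eq:cap_fade_cyc}) and (\ref{eq:cap_fade_idl}) can be evaluated directly. The sketch below is our own and assumes that SoC and DoD are expressed as fractions in $[0,1]$ (consistent with the fitted coefficients) and that the cell temperature is fixed at $25$\degree C; function and variable names are illustrative.

```python
import math

def cap_fade_cycling(soc_cyc, dod, n):
    # Eq. (cap_fade_cyc): relative capacity loss from cycling at 25 C.
    # soc_cyc: median cycle SoC in [0, 1]; dod: cycle DoD in [0, 1]; n: cycles.
    return 0.00568 * math.exp(-1.943 * soc_cyc) * dod**0.7162 * math.sqrt(n)

def cap_fade_idling(soc_idl, tau):
    # Eq. (cap_fade_idl): relative capacity loss from idling at 25 C.
    # soc_idl: average SoC in [0, 1]; tau: elapsed time in days.
    return 0.000112 * math.exp(0.7388 * soc_idl) * tau**0.8

def total_cap_fade(soc_cyc, dod, n, soc_idl, tau):
    # Eq. (cap_fade): total fade approximated as the sum of both terms.
    return cap_fade_cycling(soc_cyc, dod, n) + cap_fade_idling(soc_idl, tau)

# Example: one cycle per day at 70 % DoD around 50 % median SoC for five
# years, with a 40 % average idling SoC (illustrative numbers).
fade = total_cap_fade(0.5, 0.7, n=5 * 365, soc_idl=0.4, tau=5 * 365)
```

Evaluating such functions over a grid of SoC and DoD values reproduces the qualitative trends shown in Figs. \ref{fig:cap_fade_idl} and \ref{fig:cap_fade_cyc}.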
It can be noted that both (\ref{eq:cap_fade_cyc}) and (\ref{eq:cap_fade_idl}) are formulated for a cell temperature of $25$\degree C, which is considered constant in our study. The reason for this is twofold. First, the battery thermodynamics depend on many application- and chemistry-agnostic factors, including ambient conditions, battery system form factor, and the design of the cooling system. Second, most battery storage applications correspond to a C-rate that does not exceed one, meaning that power losses are moderate and do not influence the cell temperature significantly \cite{patsios2016integrated}.
Figs. \ref{fig:cap_fade_idl} and \ref{fig:cap_fade_cyc} depict the capacity fade characteristics of the LiFePO$_4$ battery due to idling and cycling, respectively, both assuming a constant cell temperature of $25$\degree C. Particularly, Fig. \ref{fig:cap_fade_idl} illustrates that capacity fade from idling is slower when the battery SoC is kept low. From this figure, one can infer that it is generally better to keep the battery discharged when the service is not required. On the other hand, Fig. \ref{fig:cap_fade_cyc} suggests that capacity loss from cycling is most severe for high DoD and low median SoC. Thus, to decrease capacity loss from cycling, one would want to charge and discharge the battery around the highest possible SoC. Clearly, these two degradation mechanisms favor opposite strategies and require a balanced trade-off to ensure optimal battery utilization.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Cap_fade_idling.eps}
\caption{Capacity fade due to idling}
\label{fig:cap_fade_idl}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Cap_fade_cycling.eps}
\caption{Capacity fade due to cycling}
\label{fig:cap_fade_cyc}
\end{figure}
\section{Optimization Problem Formulation}\label{sec:case_study}
In the present section we formulate a generic optimization problem for the optimal scheduling and sizing of Li-ion battery storage, which takes into account the phenomena formulated in the previous section. The battery is assumed to deliver power according to predetermined demand profiles, e.g., peak-shaving or an EV drive cycle. The corresponding objective function is formulated as follows
\begin{equation}\label{eq:objective}
\min \left[\frac{\text{C}^\text{E} \bar{\text{E}} + \text{C}^\text{P} \bar{P}}{365 \: \text{T}^\text{Lt}} + \sum_{y\in Y}\pi_y \sum_{t\in T}\left(-\text{C}^\text{LL} P_{y,t}^\text{LL} + \text{C}^\text{En} (P_{y,t}^{\text{T}_\text{Ch}} + P_{y,t}^{\text{T}_\text{Dis}})\right)\Delta t\right],
\end{equation}
where $Y$ is a set of operation scenarios (e.g., years) indexed by $y$, and $T$ is a set of time intervals indexed by $t$ with a time step $\Delta t$. $\pi_y$ is the normalized probability of a scenario $y$. $\bar{\text{E}}$ and $\bar{P}$ are the installed energy and power capacities of the battery, while $\text{C}^\text{E}$ and $\text{C}^\text{P}$ are the corresponding prices for the installed capacities, which together constitute the investment cost of energy storage. To consider the investment cost on the same time-scale as the daily operating costs, the former is divided by the battery lifetime in days, $365 \: \text{T}^\text{Lt}$, which also corresponds to the planning horizon. The battery power input at the terminals is broken into positive charge $P_{y,t}^{\text{T}_\text{Ch}}$ and negative discharge $P_{y,t}^{\text{T}_\text{Dis}}$ to avoid a nonlinear problem formulation. $\text{C}^\text{En}$ is the price for energy, necessary to translate the power losses accounted for in (\ref{eq:Pb}) into pecuniary losses. $P_{y,t}^\text{LL}$ is a slack variable that allows minor deviations from the power balance equality (\ref{eq:balance}), penalized by the value of lost load $\text{C}^\text{LL}$.
To ensure that the battery delivers power according to predetermined demand profiles, the following power balance and thermal line limit constraints are applied
\begin{align}
P_{y,t}^\text{G} + \text{P}_{y,t}^\text{D} + P_{y,t}^{\text{T}_\text{Ch}} + P_{y,t}^{\text{T}_\text{Dis}} & \: + P_{y,t}^\text{LL}= 0 \label{eq:balance},\\
-\bar{\text{P}}^\text{G} \leq P_{y,t}^\text{G} \leq& \: 0 \label{eq:thermal_limit},
\end{align}
where $P_{y,t}^\text{G}$ is a power supplied from the grid, $\bar{\text{P}}^\text{G}$ is the line thermal limit and $\text{P}_{y,t}^\text{D}$ is a power demand profile.
To model the battery storage, linear and mixed-integer linear constraints are formulated below. First, the storage continuity differential equation (\ref{eq:charge_diff}) in discrete form reads as follows
\begin{equation}\label{eq:SOC}
e_{y,t+1}=(1-\text{k}^\text{sd})e_{y,t} + (P_{y,t}^{\text{B}_\text{Ch}} + P_{y,t}^{\text{B}_\text{Dis}}){\Delta t},
\end{equation}
where $\text{k}^\text{sd}$ is the self-discharge rate and the battery power input $P^\text{B}$ from (\ref{eq:Pb}) is broken into positive charge $P_{y,t}^{\text{B}_\text{Ch}}$ and negative discharge $P_{y,t}^{\text{B}_\text{Dis}}$ to avoid a nonlinear problem formulation.
Net storage charge, power rating, available storage capacity and maximum capacity fade are enforced through (\ref{eq:net_charge})-(\ref{eq:eol_cond})
\begin{align}
e_{y,1} = e_{y,\text{T}+1}& \label{eq:net_charge},\\
0 \leq P_{y,t}^{\text{T}_\text{Ch}} \leq \bar{P} & \label{eq:ch_limit},\\
-\bar{P} \leq P_{y,t}^{\text{T}_\text{Dis}} \leq 0 & \label{eq:dis_limit},\\
0 \leq e_{y,t} \leq \bar{\text{E}}(1 - & \: \delta_y^\text{CF}) \label{eq:charge_limit},\\
\delta_y^\text{CF} \leq 1 - \text{EoL} & \label{eq:eol_cond},
\end{align}
where $\delta_y^\text{CF}$ is a battery capacity fade and $\text{EoL}$ is End of Life criterion, i.e., minimum remaining battery capacity threshold.
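To make the discrete dynamics concrete, the continuity equation (\ref{eq:SOC}) together with the available-capacity bound (\ref{eq:charge_limit}) can be checked with a short forward simulation. This is an illustrative sketch outside the MILP; parameter values and names are our own, and, as in the formulation, charge power is positive and discharge power is negative.

```python
def simulate_charge(e0, p_ch, p_dis, k_sd=0.0, dt=1.0, e_bar=30.5, cap_fade=0.0):
    # Eq. (SOC): e[t+1] = (1 - k_sd) * e[t] + (P_ch[t] + P_dis[t]) * dt,
    # subject to Eq. (charge_limit): 0 <= e[t] <= E_bar * (1 - cap_fade).
    cap = e_bar * (1.0 - cap_fade)
    e = [e0]
    for pc, pd in zip(p_ch, p_dis):
        e_next = (1.0 - k_sd) * e[-1] + (pc + pd) * dt
        if not 0.0 <= e_next <= cap + 1e-9:
            raise ValueError(f"available-capacity bound violated: {e_next:.2f} MWh")
        e.append(e_next)
    return e

# Two hours of 2 MW charging followed by two hours of 3 MW discharging.
trajectory = simulate_charge(10.0, [2.0, 2.0, 0.0, 0.0], [0.0, 0.0, -3.0, -3.0])
```

Note how the bound shrinks with the capacity fade $\delta_y^\text{CF}$, so a schedule that is feasible in the first year may become infeasible in later years.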
Before approximating the nonlinear battery power input and capacity fade using \emph{Special Order Sets 2}, the reference variables must be broken into segments as in (\ref{eq:pt_ch})-(\ref{eq:soc_cur})
\begin{equation}\label{eq:pt_ch}
P_{y,t}^{\text{T}_\text{Ch}} = \sum_{g=1}^{\text{G}}P_{y,t,g}^{\text{T}_\text{Ch}},
\end{equation}
\begin{equation}\label{eq:pt_dis}
P_{y,t}^{\text{T}_\text{Dis}} = \sum_{h=1}^\text{H}P_{y,t,h}^{\text{T}_\text{Dis}},
\end{equation}
\begin{equation}\label{eq:dod_cyc}
\frac{1}{2 \bar{\text{E}}}\sum_{t \in T_c}(P_{y,t}^{\text{B}_\text{Ch}} - P_{y,t}^{\text{B}_\text{Dis}}) \Delta t = \sum_{i=1}^\text{I}DoD_{y,c,i}^\text{cyc},
\end{equation}
\begin{equation}\label{eq:soc_cyc}
\frac{\min_{t \in T_c}\{e_{y,t}\}}{\bar{\text{E}}} + \frac{\sum_{i=1}^\text{I}DoD_{y,c,i}^\text{cyc}}{2} = \sum_{l=1}^\text{L}SoC_{y,c,l}^\text{cyc},
\end{equation}
\begin{equation}\label{eq:soc_idl}
\frac{1}{\bar{\text{E}} \text{T}}\sum_{t \in T} e_{y,t} \Delta t = \sum_{j=1}^\text{J}SoC_{y,j}^\text{idl},
\end{equation}
\begin{equation}\label{eq:soc_cur}
\frac{e_{y,t}}{\bar{\text{E}}} = \sum_{k=1}^\text{K}SoC_{y,t,k},
\end{equation}
where the segmented $P_{y,t,g}^{\text{T}_\text{Ch}}$ and $P_{y,t,h}^{\text{T}_\text{Dis}}$ are the charge and discharge power outputs, $DoD_{y,c,i}^\text{cyc}$ and $SoC_{y,c,l}^\text{cyc}$ are the cycle DoD and median SoC, $SoC_{y,j}^\text{idl}$ is the average daily SoC and $SoC_{y,t,k}$ is the momentary SoC. $T_c$ is the time range of a cycle $c$, $\text{T}_c$ is the cycle duration and $\text{G}, \text{H}, \text{I}, \text{J}, \text{K}, \text{L}$ are the numbers of segments. In (\ref{eq:soc_cyc}), the minimum battery charge during a cycle is found with the following reformulation
\begin{equation}\label{eq:e_min}
\min_{t \in T_c}\{e_{y,t}\} = e_{y,c}^\text{min},
\end{equation}
\begin{equation}\label{eq:e_min_con}
e_{y,c}^\text{min} \leq e_{y,t} \: \forall t \in T_c.
\end{equation}
To ensure that the segments in (\ref{eq:pt_ch})-(\ref{eq:soc_cur}) are filled in a consecutive manner, the following constraints are applied
\begin{align}
|\text{P}_g^{\text{T}_\text{Ch}}| \alpha_{y,t,g+1} \leq P_{y,t,g}^{\text{T}_\text{Ch}} \leq |\text{P}_g^{\text{T}_\text{Ch}}| \alpha_{y,t,g},g=1..\text{G} \label{eq:pt_ch_aux},\:\:\:\:\:\: \\
|\text{P}_h^{\text{T}_\text{Dis}}| \beta_{y,t,h+1} \leq P_{y,t,h}^{\text{T}_\text{Dis}} \leq |\text{P}_h^{\text{T}_\text{Dis}}| \beta_{y,t,h} ,h=1..\text{H} \label{eq:pt_dis_aux},\:\:\:\:\: \\
|\text{DoD}_i^\text{cyc}| \gamma_{y,c,i+1} \leq DoD_{y,c,i}^\text{cyc} \leq |\text{DoD}_i^\text{cyc}| \gamma_{y,c,i},i=1..\text{I} \label{eq:dod_cyc_aux},\\
|\text{SoC}_l^\text{cyc}| \zeta_{y,c,l+1} \leq SoC_{y,c,l}^\text{cyc} \leq |\text{SoC}_l^\text{cyc}| \zeta_{y,c,l},l=1..\text{L} \label{eq:soc_cyc_aux},\:\: \\
|\text{SoC}_j^\text{idl}| \eta_{y,j+1} \leq SoC_{y,j}^\text{idl} \leq |\text{SoC}_j^\text{idl}| \eta_{y,j},j=1..\text{J} \label{eq:soc_idl_aux},\:\:\:\:\: \\
|\text{SoC}_k| \theta_{y,t,k+1} \leq SoC_{y,t,k} \leq |\text{SoC}_k| \theta_{y,t,k},k=1..\text{K} \label{eq:soc_cur_aux}, \:\:
\end{align}
where $\alpha_{y,t,g}, \beta_{y,t,h}, \gamma_{y,c,i}, \zeta_{y,c,l}, \eta_{y,j}, \theta_{y,t,k}$ are auxiliary binary variables indicating whether a particular segment is used, while the binaries for the indices $\text{G}+1, \text{H}+1, \text{I}+1, \text{J}+1, \text{K}+1, \text{L}+1$ are fixed to zero and treated as parameters. Finally, $|\cdot|$ denotes the length of a particular segment.
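The effect of constraints (\ref{eq:pt_ch_aux})-(\ref{eq:soc_cur_aux}) is that a quantity split over segments must fill them from left to right: segment $g+1$ can only open once segment $g$ is full. The sketch below mimics this behavior procedurally (outside the MILP) to illustrate the intended segment pattern; names are our own.

```python
def fill_consecutive(value, seg_lengths):
    # Fill segments left to right, each to its full length before the next
    # opens, mirroring the ordering enforced by (pt_ch_aux)-(soc_cur_aux).
    parts, remaining = [], value
    for length in seg_lengths:
        fill = min(remaining, length)
        parts.append(fill)
        remaining -= fill
    if remaining > 1e-9:
        raise ValueError("value exceeds the total segment length")
    return parts

def indicators(parts):
    # alpha_g = 1 if segment g is used; a valid pattern is non-increasing in g.
    return [1 if p > 0 else 0 for p in parts]

parts = fill_consecutive(2.5, [1.0, 1.0, 1.0])   # -> [1.0, 1.0, 0.5]
alpha = indicators(parts)                        # -> [1, 1, 1]
```

In the MILP itself this ordering is obtained purely from the binary bounds: $\alpha_{y,t,g+1}=1$ forces segment $g$ to be completely full, while $\alpha_{y,t,g}=0$ forces segment $g$ to be empty.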
Now capacity fade can be approximated as follows
\begin{multline}\label{eq:cap_fade_lin}
\delta_{y+1}^\text{CF} = \delta_{y}^\text{CF} + \sum_{c=1}^\text{C} [ \sum_{i=1}^\text{I}(\gamma_{y,c,i}-\gamma_{y,c,i+1}) \sum_{l=1}^\text{L}(\zeta_{y,c,l}-\zeta_{y,c,l+1}) \\ \cdot \frac{\partial \delta^\text{cyc}(\hat{\text{DoD}}_{y,c,i}^\text{cyc}, \hat{\text{SoC}}_{y,c,l}^\text{cyc}, 365\text{C}(y-0.5))}{\partial n}365] + \\ + \sum_{j=1}^\text{J}(\eta_{y,j}-\eta_{y,j+1}) \frac{\partial \delta^\text{idl}(\hat{\text{SoC}}_{y,j}^\text{idl},365(y-0.5))}{\partial \tau}365,
\end{multline}
where $\text{C}$ is the number of cycles performed during a scenario. The partial derivatives of capacity fade from cycling (\ref{eq:cap_fade_cyc}) and idling (\ref{eq:cap_fade_idl}) are evaluated at the corresponding lifetime moments, i.e., time, number of performed cycles, cycle DoD $\hat{\text{DoD}}_{y,c,i}^\text{cyc}$, median cycle SoC $\hat{\text{SoC}}_{y,c,l}^\text{cyc}$ and average daily SoC $\hat{\text{SoC}}_{y,j}^\text{idl}$. The latter three are found as follows
\begin{equation}\label{eq:dod_cyc_str}
\hat{\text{DoD}}_{y,c,i}^\text{cyc} = \sum_{i'=1}^\text{i-1}|\text{DoD}_{i'}^\text{cyc}| + \frac{|\text{DoD}_{i}^\text{cyc}|}{2},
\end{equation}
\begin{equation}\label{eq:soc_cyc_str}
\hat{\text{SoC}}_{y,c,l}^\text{cyc} = \sum_{l'=1}^\text{l-1}|\text{SoC}_{l'}^\text{cyc}| + \frac{|\text{SoC}_l^\text{cyc}|}{2},
\end{equation}
\begin{equation}\label{eq:soc_idl_str}
\hat{\text{SoC}}_{y,j}^\text{idl} = \sum_{j'=1}^\text{j-1}|\text{SoC}_{j'}^\text{idl}| + \frac{|\text{SoC}_j^\text{idl}|}{2}.
\end{equation}
The product of binary variables in (\ref{eq:cap_fade_lin}) is substituted with a variable $u_{y,c,i,l} = \gamma_{y,c,i} \zeta_{y,c,l}$, which is linearized as in (\ref{eq:bin_prod1_lin})
\begin{equation}\label{eq:bin_prod1_lin}
\begin{gathered}
0 \leq u_{y,c,i,l} \leq 1,\\
u_{y,c,i,l} \leq \gamma_{y,c,i},\\
u_{y,c,i,l} \leq \zeta_{y,c,l},\\
u_{y,c,i,l} \geq \gamma_{y,c,i} + \zeta_{y,c,l} - 1.
\end{gathered}
\end{equation}
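The standard AND-linearization in (\ref{eq:bin_prod1_lin}) can be verified by enumeration: for binary $\gamma$ and $\zeta$, the four inequalities admit exactly $u=\gamma\zeta$. The following self-check is our own and purely illustrative.

```python
from itertools import product

def feasible_u(u, g, z):
    # The four constraints of (bin_prod1_lin) for binary g, z.
    return 0.0 <= u <= 1.0 and u <= g and u <= z and u >= g + z - 1

# For every binary pair, the only feasible value on a grid is the product g*z.
for g, z in product((0, 1), repeat=2):
    grid = [u / 4 for u in range(5)]            # 0, 0.25, 0.5, 0.75, 1
    assert [u for u in grid if feasible_u(u, g, z)] == [float(g * z)]
```

This is why $u_{y,c,i,l}$ can be declared continuous in the MILP: the constraints pin it to the binary product without adding integrality.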
Next, charge and discharge battery power output is approximated as follows
\begin{multline}\label{eq:Pb_ch}
P_{y,t}^{\text{B}_\text{Ch}} = \prod_{y'=1}^y \sum_{i=1}^\text{I}(\gamma_{y',c,i} - \gamma_{y',c,i+1}) \sum_{k=1}^\text{K}(\theta_{y,t,k} - \theta_{y,t,k+1}) \\ \sum_{g=1}^\text{G} \frac{\partial P^\text{B}(\hat{\text{P}}_g^{\text{T}_\text{Ch}},\hat{\text{SoC}}_{y,t,k},\hat{\text{N}}_{I(y)}^{\text{FC}})}{\partial P^\text{T}} P_{y,t,g}^{\text{T}_\text{Ch}}
\end{multline}
\begin{multline}\label{eq:Pb_dis}
P_{y,t}^{\text{B}_\text{Dis}} = \prod_{y'=1}^y \sum_{i=1}^\text{I}(\gamma_{y',c,i} - \gamma_{y',c,i+1}) \sum_{k=1}^\text{K}(\theta_{y,t,k} - \theta_{y,t,k+1}) \\ \sum_{h=1}^\text{H} \frac{\partial P^\text{B}(-\hat{\text{P}}_h^{\text{T}_\text{Dis}},\hat{\text{SoC}}_{y,t,k},\hat{\text{N}}_{I(y)}^{\text{FC}})}{\partial P^\text{T}} P_{y,t,h}^{\text{T}_\text{Dis}}
\end{multline}
where the partial derivative of the battery power output (\ref{eq:Pb}) is evaluated for each segment of the terminal power outputs $\hat{\text{P}}_g^{\text{T}_\text{Ch}}$ and $\hat{\text{P}}_h^{\text{T}_\text{Dis}}$, momentary SoC $\hat{\text{SoC}}_{y,t,k}$ and number of full equivalent cycles $\hat{\text{N}}_{I(y)}^{\text{FC}}$, which are found as follows
\begin{equation}\label{eq:Pb_ch_aux1}
\hat{\text{P}}_g^{\text{T}_\text{Ch}} = \sum_{g'=1}^{g-1}|\text{P}_{g'}^{\text{T}_\text{Ch}}| + \frac{|\text{P}_{g}^{\text{T}_\text{Ch}}|}{2},
\end{equation}
\begin{equation}\label{eq:Pb_dis_aux1}
\hat{\text{P}}_h^{\text{T}_\text{Dis}} = \sum_{h'=1}^{h-1}|\text{P}_{h'}^{\text{T}_\text{Dis}}| + \frac{|\text{P}_{h}^{\text{T}_\text{Dis}}|}{2},
\end{equation}
\begin{equation}\label{eq:Pb_ch_aux2}
\hat{\text{SoC}}_{y,t,k} = \sum_{k'=1}^{k-1}|\text{SoC}_{k'}| + \frac{|\text{SoC}_{k}|}{2},
\end{equation}
\begin{equation}\label{eq:etp}
\hat{\text{N}}_{I(y)}^{\text{FC}} = \sum_{y'=1}^y \sum_{i'=1}^\text{i(y')-1} |\text{DoD}_{i'}^\text{cyc}| + \frac{|\text{DoD}_{i(y')}^\text{cyc}|}{2},
\end{equation}
where $I(y)$ is a set of segments used in a particular year $y$.
Finally, to linearize the product of binary and continuous variables in (\ref{eq:Pb_ch}) and (\ref{eq:Pb_dis}), the product of binary variables $\gamma_{1,c,I(1)}..\gamma_{y,c,I(y)} \theta_{y,t,k} = v_{I(y),k}$ has been linearized similarly to the previous instance
\begin{equation}\label{eq:bin_prod2_lin}
\begin{gathered}
0 \leq v_{I(y),k} \leq 1,\\
v_{I(y),k} \leq \gamma_{1,c,I(1)}, ...\\
v_{I(y),k} \leq \gamma_{y,c,I(y)},\\
v_{I(y),k} \leq \theta_{y,t,k},\\
v_{I(y),k} \geq \gamma_{1,c,I(1)} + ... + \gamma_{y,c,I(y)} + \theta_{y,t,k} - y,
\end{gathered}
\end{equation}
while the products of binary and continuous variables $v_{I(y),k} P_{y,t,g}^{\text{T}_\text{Ch}} = w_{I(y),k,g}$ and $v_{I(y),k} P_{y,t,h}^{\text{T}_\text{Dis}} = x_{I(y),k,h}$ have been linearized as in (\ref{eq:bin_con_prod1_lin}) and (\ref{eq:bin_con_prod2_lin}), respectively
\begin{equation}\label{eq:bin_con_prod1_lin}
\begin{gathered}
w_{I(y),k,g} \leq |\text{P}_g^{\text{T}_\text{Ch}}| v_{I(y),k},\\
P_{y,t,g}^{\text{T}_\text{Ch}} - |\text{P}_g^{\text{T}_\text{Ch}}|(1 - v_{I(y),k}) \leq w_{I(y),k,g} \leq P_{y,t,g}^{\text{T}_\text{Ch}}
\end{gathered}
\end{equation}
\begin{equation}\label{eq:bin_con_prod2_lin}
\begin{gathered}
x_{I(y),k,h} \leq |\text{P}_h^{\text{T}_\text{Dis}}| v_{I(y),k},\\
P_{y,t,h}^{\text{T}_\text{Dis}} - |\text{P}_h^{\text{T}_\text{Dis}}|(1 - v_{I(y),k}) \leq x_{I(y),k,h} \leq P_{y,t,h}^{\text{T}_\text{Dis}}
\end{gathered}
\end{equation}
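Constraints (\ref{eq:bin_con_prod1_lin}) and (\ref{eq:bin_con_prod2_lin}) are big-M-style envelopes: together with the implicit nonnegativity of the segmented charge power, they force $w = v\,P$ exactly, with the segment length $|\text{P}_g^{\text{T}_\text{Ch}}|$ playing the role of the big-M constant. The sanity check below covers the charge case only and uses illustrative names and values of our own.

```python
def feasible_w(w, v, p, p_len):
    # Eq. (bin_con_prod1_lin) plus the implicit bound w >= 0
    # (w is the product of a binary and a nonnegative charge power).
    return 0.0 <= w <= p_len * v and p - p_len * (1 - v) <= w <= p

p_len = 5.0                                      # segment length |P_g^T_Ch|
for v in (0, 1):
    for p in (0.0, 2.0, 5.0):                    # any 0 <= p <= p_len
        grid = (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)
        assert [w for w in grid if feasible_w(w, v, p, p_len)] == [v * p]
```

When $v=0$ the first bound collapses $w$ to zero, and when $v=1$ the remaining bounds collapse it to $p$, so no integrality is needed on $w$ itself.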
\section{Numerical Study}\label{sec:num}
\subsection{Case Study}
\iffalse
were considered in a separate numerical studies, where the red dashed line indicate the maximum amount of power that can be exported from the grid, e.g., line thermal limit. The first numerical study has been performed for one daily peak demand scenario, where the demand profile (Demand 1 in Fig. \ref{fig:demand}) represents conventional daily energy consumption by a domestic load \cite{networkrevolution}. The second numerical study has been performed for two peak demand scenario, where the demand profile (Demand 2 in Fig. \ref{fig:demand}) represents the net load in a network with high share of photovoltaic energy sources, i.e., duck curve \cite{iso2012duck}.
\fi
For our particular examples we consider two peak-shaving scenarios, given in Fig. \ref{fig:demand}, where the blue and purple curves represent demand profiles with one and two peaks, respectively. The red dashed line represents the maximum desired demand level. Both cases illustrate widespread practical scenarios: the first corresponds to a typical evening peak situation \cite{networkrevolution}, while the second corresponds to a ``duck curve'' pattern caused by massive photovoltaics integration \cite{iso2012duck}. In both cases, the minimum storage power and energy required to shave the highest peak are $7$ MW and $17.2$ MWh, respectively.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Demand.eps}
\caption{Case study demand profiles}
\label{fig:demand}
\end{figure}
To focus on the optimal operation of the LiFePO$_4$ battery storage driven by its internal characteristics, we fix external factors to constants, i.e., the demand profiles remain unchanged during the battery lifetime and the energy price $\text{C}^\text{En}$ is fixed at $80$ \$/MWh \cite{ofgem_inicators}. It is worth noting that the proposed approach allows considering a variable energy price and a set of demand profiles for an increasing load or a stochastic problem formulation. Capital costs for the battery power $\text{C}^\text{P}$ and energy $\text{C}^\text{E}$ capacities are $90$ \$/kW and $290$ \$/kWh, respectively \cite{fu20182018}. The End of Life (EoL) criterion is set to 75\%, while the planning horizon corresponds to the battery operational lifetime $\text{T}^\text{Lt}$, i.e., an optimization problem variable.
\subsection{Results}
The main results of the formulated optimization problem applied to the case study above are provided in Table \ref{Table:results}. For the one peak demand scenario, the optimal solution corresponds to a $25.4$ MWh/$7$ MW battery system, which results in per diem battery investment and operating costs of $1512.1$ \$/day over $15$ years of operational lifetime. For the two peak demand scenario, the optimal solution suggests installing $30.5$ MWh/$7$ MW battery storage, which corresponds to $2233.3$ \$/day of per diem battery investment and operating costs over $12$ years of operation. Fig. \ref{fig:Obj} illustrates the maps of the objective function in the battery storage capacity $\bar{\text{E}}$ and operational lifetime $\text{T}^\text{Lt}$ space for the two demand scenarios, while Figs. \ref{fig:SOC_res} and \ref{fig:deg_res} depict the SoC, operation and degradation characteristics of the optimal solutions. Before analyzing the results, let us state three major findings:
\begin{enumerate}
\item The optimal capacity of the LiFePO$_4$ battery is driven by the operating requirements, e.g., considerable capacity headroom becomes economically feasible for the case of two peaks per day.
\item Given the gradient near the optimal solutions in Fig. \ref{fig:Obj}, it is safer to overestimate the capacity and underestimate operational lifetime than the opposite.
\item The operation strategy should be altered over the whole battery lifetime to ensure optimal utilization of the LiFePO$_4$ battery.
\end{enumerate}
\begin{table}[t]
\caption{Solutions of the optimization problem}\label{Table:results}
\begin{center}
\begin{tabular}{c | c | c | c}
\textbf{Demand} & \textbf{Objective} & \textbf{Capacity} & \textbf{Lifetime}\\ \hline
One peak & 1512.1 \$/day & 25.4 MWh / 7 MW & 15 years\\
Two peak & 2233.3 \$/day & 30.5 MWh / 7 MW & 12 years\\
\hline
\end{tabular}
\end{center}
\end{table}
As mentioned in the previous subsection, the minimum power and energy capacities required to perform peak-shaving are $7$ MW and $17.2$ MWh, respectively. Even though the optimal solutions match the minimum power capacity requirement, there exists significant headroom in terms of energy capacity. For instance, the optimal battery energy capacities for the one and two peak demand scenarios (Table \ref{Table:results}) are $25.4$ MWh and $30.5$ MWh, which are 47.7\% and 77.3\% higher than the energy actually required to cover the highest peak. A large part of this headroom (33.3\%) compensates for capacity fade and around 2.5\% compensates for discharge losses, while the remaining capacity margin is related to the operation strategy. Particularly, for the one peak demand scenario this accounts for the remaining 11.9\% of the energy capacity margin, while for the two peak demand scenario, where the battery is used more extensively, it accounts for the remaining 41.5\% of headroom needed to achieve optimal utilization of the LiFePO$_4$ battery storage. In contrast to the above solutions, a naive strategy would be to choose the battery capacity accounting only for the minimum required energy capacity, the EoL criterion and the discharge efficiency, e.g., $17.2$ MWh $/0.75/0.98 = 23.4$ MWh. Even though this capacity would require less capital investment than the obtained solutions, the resulting per diem investment and operating costs would be higher due to the shorter operational lifetime (11 and 8 years for the one peak and two peaks demand scenarios, respectively).
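The sizing arithmetic quoted above is easy to reproduce. The snippet below (illustrative; values taken from Table \ref{Table:results} and the case study) recomputes the naive capacity and the headroom figures:

```python
E_MIN = 17.2          # MWh, minimum energy to shave the highest peak
EOL = 0.75            # End-of-Life criterion (remaining capacity fraction)
ETA_DIS = 0.98        # discharge efficiency used in the naive estimate

# Naive sizing: scale the minimum energy by the EoL criterion and efficiency.
naive_capacity = E_MIN / EOL / ETA_DIS          # about 23.4 MWh

# Headroom of the optimal solutions relative to the minimum peak energy.
headroom_one_peak = 25.4 / E_MIN - 1.0          # about 47.7 %
headroom_two_peak = 30.5 / E_MIN - 1.0          # about 77.3 %
```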
\begin{figure*}
\begin{center}
\subfigure[One peak demand]{\includegraphics[width=0.48\textwidth]{Figs/Obj_1cyc2.eps}}
\hspace{0.01\textwidth}
\subfigure[Two peaks demand]{\includegraphics[width=0.48\textwidth]{Figs/Obj_2cyc2.eps}}
\caption{Objective function value as a function of installed energy capacity and operational lifetime}
\label{fig:Obj}
\end{center}
\end{figure*}
Fig. \ref{fig:Obj} illustrates the positions of the optimal solutions on the objective function value map, presented as a function of installed energy capacity and operational lifetime. The red stars indicate the minimum objective function value positions, i.e., the optimal solutions. For the one peak demand scenario (a), the minimum objective function value equals $1512.1$ \$/day, corresponding to $25.4$ MWh of installed energy capacity and $15$ years of battery lifetime. For the two peak demand scenario (b), the minimum objective function value is found at the intersection of $30.5$ MWh of installed energy capacity and $12$ years of operational lifetime, and equals $2233.3$ \$/day. As can be seen from Fig. \ref{fig:Obj}, both solutions are located very close to a high gradient of the objective function, meaning that a small disturbance (error) of the optimal solution may result in a significant increase of the objective function value. Particularly, the profitability of a solution might be significantly compromised if the capacity is underestimated and the operational lifetime is overestimated. However, one might want to overestimate the installed energy capacity and underestimate the operational lifetime to reduce the sensitivity and investment risks at the cost of a minor increase of the investment and operating costs.
\begin{figure*}
\begin{center}
\subfigure[One peak demand]{\includegraphics[width=0.48\textwidth]{Figs/SOC_1cyc.eps}}
\hspace{0.01\textwidth}
\subfigure[Two peaks demand]{\includegraphics[width=0.48\textwidth]{Figs/SOC_2cyc.eps}}
\caption{State of charge profiles of the optimal solution}
\label{fig:SOC_res}
\end{center}
\end{figure*}
Fig. \ref{fig:SOC_res} illustrates the optimal LiFePO$_4$ battery scheduling over the whole operational lifetime. In the case of the one peak demand scenario (a), the SoC profile variation changes from the [27\%;95.8\%] range at the beginning of the battery lifetime to [5.4\%;75\%] during the terminal year. A similar picture is observed for the two peak demand scenario (b), where the SoC ranges of two consecutive cycles change from [58.8\%;95.2\%] and [38\%;95.2\%] during the first year of operation to [38.4\%;75\%] and [17.3\%;75\%] during the terminal year, respectively. Even though the span of the ranges, i.e., the DoD, increases only by 0.8\% for the one peak demand scenario (a) and by 0.2\% and 0.5\% for the two consecutive peaks of the two peak demand scenario (b), the battery SoC strategy changes quite significantly over the whole lifetime. For instance, a gradual decrease of the average battery SoC can be observed in Fig. \ref{fig:deg_res}, where in the case of the one peak demand scenario (a) it drops from 39.3\% to 19.1\%, and in the case of the two peak demand scenario (b) it falls from 61.8\% to 42.1\%. Since the DoD is tied to the amount of energy required to shave the peak, it cannot be changed once the battery capacity is chosen. Thus, the only operation characteristic that can be altered is the SoC, which is observed in the numerical study.
\begin{figure*}
\begin{center}
\subfigure[One peak demand]{\includegraphics[width=0.48\textwidth]{Figs/DOD_SOC_AvCap_1cyc.eps}}
\hspace{0.01\textwidth}
\subfigure[Two peaks demand]{\includegraphics[width=0.48\textwidth]{Figs/DOD_SOC_AvCap_2cyc.eps}}
\caption{Operation and degradation characteristics of the optimal solution}
\label{fig:deg_res}
\end{center}
\end{figure*}
Given the constant peak-shaving requirements over the entire battery lifetime, the small increase in the DoD strategy is explained by the need to compensate for the increased discharge losses associated with the internal resistance growth. The substantial alteration of the battery operation strategy relates to both the internal resistance growth and the capacity fade characteristics. As per (\ref{eq:cap_fade_cyc}) and (\ref{eq:cap_fade_idl}), the battery SoC is in direct relation to the capacity fade from idling, while the median cycle SoC is in inverse relation to the capacity fade from cycling. Thus, in Fig. \ref{fig:SOC_res} we can observe a rapid charge of the battery just before it is required to discharge. This way it is possible to keep the average battery SoC low while the median cycle SoC is high, which slows down the degradation process. However, given that the average daily SoC decreases asymptotically with the available capacity (see Fig. \ref{fig:deg_res}), it can be concluded that capacity fade from cycling is the dominating factor. Also, it can be noted that over the course of the battery lifetime, the battery charging duration increases from four hours at the beginning of the battery lifetime to seven hours during the terminal year (see Fig. \ref{fig:SOC_res}), which negatively affects the average daily SoC. This reflects the time-varying trade-off between power losses and capacity fade from idling, where the latter dominates during the early battery lifetime, while the former comes to the fore afterwards.
\subsection{Comparative analysis}
To quantify the advantages of the proposed modelling approach, it has been compared to two existing battery sizing methodologies. The first methodology (referred to as ``Cyc.Lt.(DoD,C-rate)'') is taken from \cite{padmanabhan2019battery}, where the nonlinear relationship between the battery DoD, C-rate and cycle lifetime is considered with a piece-wise linear function. However, in contrast to the proposed methodology, the battery efficiency and available battery capacity are kept constant. The second methodology (referred to as ``Deg.(SoC,DoD,C-rate);Rint(SoC)'') has been proposed in \cite{berrueta2018combined}, where dynamic programming optimization is used to resolve a comprehensive Li-ion battery model that accounts for battery degradation (i.e., capacity fade and internal resistance growth from both idling and cycling) and an SoC-dependent equivalent circuit $Rint$ model. In contrast to the proposed approach, both methodologies allow choosing only one battery operation strategy for the whole planning horizon, while, as shown in the previous subsection, the strategy needs to be substantially altered over the whole operational lifetime to achieve optimal battery utilization (see Figs. \ref{fig:SOC_res} and \ref{fig:deg_res}). All three methodologies have been applied to the same LiFePO$_4$ benchmark model from the literature and the same case study of the one peak demand scenario described in the present section. It is worth noting that, given the same disposition of the sizing methodologies to possible errors (investment risks), the obtained solutions are indicative of the relative expected benefit of one method over another when the underlying model is the same. Thus, we derive the advantage of the proposed methodology over the state of the art based on the obtained optimal solutions.
\begin{table}[t]
\caption{Comparative study}\label{Table:compare}
\begin{center}
\begin{tabular}{c | c | c | c}
\textbf{Model} & \textbf{Objective} & \textbf{Capacity} & \textbf{Lifetime}\\ \hline
Cyc.Lt.(DoD,C-rate) \cite{padmanabhan2019battery} & 1879.5 \$/day & 23.4 MWh / 7 MW & 11 years\\
Deg.(SoC,DoD,C-rate);Rint(SoC) \cite{berrueta2018combined} & 1695.5 \$/day & 29.0 MWh / 7 MW & 15 years\\
Proposed & 1512.1 \$/day & 25.4 MWh / 7 MW & 15 years\\
\hline
\end{tabular}
\end{center}
\end{table}
The results of the three approaches under comparison are given in Table \ref{Table:compare}. In the case of the variable battery lifecycle (Cyc.Lt.(DoD,C-rate)), the solution suggests installing a $23.4$ MWh/$7$ MW battery system, which results in daily investment and operating costs of 1879.5 \$/day. The optimal DoD is found to be 75\%, which corresponds to the EoL criterion and leads to 4,000 cycles or 11 years. In the case of the comprehensive battery modelling approach (Deg.(SoC,DoD,C-rate);Rint(SoC)), the optimal solution suggests installing a $29.0$ MWh/$7$ MW battery system, which results in an objective function value of 1695.5 \$/day. The solution corresponds to the battery dispatch depicted in Fig. \ref{fig:SOC_els}, where the operation strategy is found to be 25.5\% average battery SoC, 44.7\% cycle median SoC, and 60.7\% cycle DoD over the whole battery lifetime, which in this case is found to be $15$ years. In turn, the optimal solution obtained by the proposed approach corresponds to a $25.4$ MWh/$7$ MW battery system, with an objective function value of 1512.1 \$/day for 15 years of expected battery lifetime. As per Figs. \ref{fig:SOC_res} (a) and \ref{fig:deg_res} (a), the optimal battery utilization corresponds to operation characteristics that evolve over the whole lifetime. Particularly, the average battery SoC changes from 39.3\% at the beginning of the battery lifetime to 19.1\% during the terminal year, the cycle median SoC changes from 61.4\% to 40.2\%, and the cycle DoD changes from 68.8\% to 69.6\%. Compared to the previous approach, such an adjustable operation strategy allows providing the same service over the same planning horizon with a substantially smaller battery capacity. Particularly, the battery energy capacity found by the approach in \cite{berrueta2018combined} is 14.2\% higher than the one found by the proposed method, which leads to a 12.1\% reduction of the objective function value, i.e., the investment and operating costs.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figs/SOC_els.eps}
\caption{State-of-the-art solution \cite{berrueta2018combined} - State of charge profile}
\label{fig:SOC_els}
\end{figure}
\section{Conclusions}\label{sec:con}
This paper has presented a new battery modelling approach, which accounts for numerous Li-ion battery characteristics, i.e., degradation from idling and cycling, internal resistance as a function of both degradation and SoC, as well as an equivalent circuit model to account for battery efficiency. The nonlinear characteristics have been linearized using \emph{Special Order Sets 2} to be suitable for use within MILP problems, e.g., optimal scheduling and sizing. The distinctive advantage of the proposed methodology resides in the fact that the operation strategy of a battery storage system can be adjusted for each lifetime period separately, i.e., via separate variables of the optimization problem. Even though the proposed modelling approach has been based on the LiFePO$_4$ battery models available in the literature, the methodology can be applied to other Li-ion chemistries.
Applying the developed LiFePO$_4$ battery model to realistic case studies, it has been found that the optimal utilization of the battery corresponds to a nonconstant operation strategy over the whole battery lifetime. This includes increasing the DoD to compensate for the growing internal resistance and the associated charge and discharge losses, decreasing the median cycle SoC to minimize battery degradation from cycling, and adjusting the average SoC and the battery charging duration as a trade-off between degradation from idling and growing charge and discharge losses. Finally, the proposed model has been applied to the optimal battery sizing problem and compared to the state-of-the-art methodologies, where an improvement of $12.1\%$ in the investment and operating costs has been demonstrated.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
The formation and evolution of galaxies is intimately linked to their interstellar medium (ISM). Indeed, the ISM provides the fuel for star formation and in turn, the physical and chemical properties of the ISM are affected by stars (through UV radiation, cosmic rays, winds, enrichment by metals and dust, mechanical energy injection, etc.). The ISM presents several phases:
the cold dense phases (cold neutral medium, CNM, itself including molecular phases) that may eventually collapse to form stars, the warmer, less dense phases (warm neutral and ionized medium, WNM and WIM, respectively), and the hot ionized medium (HIM) \citep{Field1969, McKee1977}.
These phases are well studied in the local Universe via analysis of emission over a large range of wavelengths in the electromagnetic spectrum from X-ray \citep{Snowden1997} to radio \citep{Heiles2003a}, but their description is still limited at high redshift, due to flux dimming at cosmological distances and significantly coarser spatial resolution available for emission-line studies of most tracers of the ISM.
This problem can be overcome by absorption-line spectroscopy.
Both the WNM and CNM at high redshift are detectable in the spectra of background quasars and $\gamma$-ray burst (GRB) afterglows as Damped Lyman-$\alpha$ systems (DLAs) -- absorption-line systems with the highest column densities of neutral hydrogen, \ion{H}{I} ($\log N({\rm \ion{H}{I}}) > 20.3$\footnote{Here and in what follows, $N$ is the column density in cm$^{-2}$.}) and a collection of associated metal lines \citep[for a review, see][]{Wolfe2005}.
Most DLAs actually represent WNM \citep{Srianand2005, Neeleman2015}, while CNM is much more rarely detected \citep[in a few percent of DLAs,
see e.g.,][]{Balashev2018}.
One of the main tracers of CNM is molecular hydrogen (H$_2$), the most abundant molecule in the Universe.
Using UV absorption lines of H$_2$ in the Lyman and Werner bands, one can probe diffuse and translucent molecular clouds along the line of sight \citep{Ledoux2003, Noterdaeme2008, Noterdaeme2010, Balashev2017, Ranjan2018}.
If the H$_2$ column density is large enough, the less abundant isotopologue, HD, can also be detected \citep{Varshalovich2001}.
To date, HD lines have been detected only in twelve intervening systems among $\sim40$
confirmed H$_2$-bearing DLAs at high redshift ($>0$)
\citep{Noterdaeme2008, Balashev2010, Tumlinson2010, Ivanchik2010, Noterdaeme2010, Klimenko2015, Ivanchik2015, Klimenko2016, Noterdaeme2017, Balashev2017, Rawlins2018, Kosenko2018}. This number remains limited
since the detailed analysis of H$_2$ and HD lines can be done only in high-resolution quasar spectra, which require observations with
the largest optical telescopes. Also, as mentioned before, the incidence rate of the cold ISM in DLAs at high $z$ is quite low. Hence, blind searches for HD/H$_2$ are very inefficient \citep{Jorgenson2014}.
Nevertheless, in recent years several efficient techniques have been proposed to pre-select DLAs with saturated $\rm H_2$ lines, in which HD is then easier to detect \citep[][]{Balashev2014, Ledoux2015, Noterdaeme2018}.
Some of the high-redshift $N({\rm HD})/2N(\rm H_2)$
measurements lie close to the primordial isotopic (D/H)$_p$ ratio, triggering discussion on whether the molecular isotopic ratio could serve as a proxy for D/H, in particular at high column densities where the cloud is thought to be fully molecularized \citep[e.g.,][]{Ivanchik2010}. However, models
suggest that the HD/H$_2$ ratio varies significantly with depth into the clouds \citep{LePetit2002, Liszt2015, Balashev2020} since HD and H$_2$ have different main
formation mechanisms: H$_2$ is forming mainly on the surface of dust grains, while HD is mostly formed via fast ion-molecular reactions.
At the same time, destruction of both HD and H$_2$ mainly occurs via photo-dissociation by UV photons.\footnote{Photo-dissociation is the main destruction process for molecules but there can be additional reactions such as destruction by cosmic rays or reversed reaction (Eq.~\ref{H2+D+}) that lead to a non-unity molecular fraction even in the fully self-shielded part of the clouds.}
This implies that the HD/H$_2$ ratio is sensitive to a combination of physical conditions, and that the HD/H$_2$ ratio can differ from the isotopic ratio even at high column densities in self-shielded regions \citep{Balashev2020}.
Moreover, under some conditions the D/HD transition may take place earlier than the H/H$_2$ transition \citep{Balashev2020}, which leads to HD/2H$_2$ $>$ D/H \citep{Tumlinson2010, Noterdaeme2017}, and therefore HD/H$_2$ may not be used as a lower limit for the isotopic ratio.
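As an illustration, the molecular isotopic ratio implied by measured column densities can be compared directly with the primordial value. The short Python sketch below uses the column densities of two HD-bearing systems listed in Table~\ref{tab:known_qso}; the adopted $({\rm D/H})_p \simeq 2.5\times10^{-5}$ is an assumed fiducial value for the comparison.

```python
import math

# log10 column densities (cm^-2) for two HD-bearing systems (Table 2)
systems = {
    "J0843+0221": {"logN_HD": 17.35, "logN_H2": 21.21},
    "J1232+0815": {"logN_HD": 15.53, "logN_H2": 19.57},
}

log_DH_p = math.log10(2.5e-5)  # assumed fiducial primordial D/H, ~ -4.60

for name, s in systems.items():
    # molecular isotopic ratio N(HD)/2N(H2), in log10
    log_ratio = s["logN_HD"] - s["logN_H2"] - math.log10(2.0)
    print(f"{name}: log[HD/2H2] = {log_ratio:+.2f}, log(D/H)_p = {log_DH_p:+.2f}")
```

For J\,0843$+$0221 the molecular ratio ($\simeq -4.16$) lies above the assumed primordial value, illustrating numerically why HD/2H$_2$ cannot be blindly taken as a lower limit on D/H.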
From the known HD-bearing systems, it was found that the relative HD/H$_2$ abundance tends to systematically be higher at high redshift than in the Galaxy \citep{Snow2008, Balashev2010, Tumlinson2010, Ivanchik2015}. This discrepancy cannot be solely explained by the progressive destruction of deuterium, since the astration of D through stellar evolution is expected to be small \citep{Dvorkin2016}. Therefore, the most probable explanation is to be sought in differences in physical conditions between the ISM of the Galaxy and that of distant galaxies. Indeed, models of ISM chemistry show that the HD/H$_2$ ratio is sensitive to the physical conditions in the ISM -- UV flux, cosmic-ray ionization rate (CRIR), metallicity, number density, and cloud depth (\citealt{LePetit2002, Cirkovic2006, Liszt2015, Balashev2020}).
Among these parameters,
the cosmic-ray ionization rate seems to play a major role. Indeed, cosmic rays are an important source of heating and the main ionizing source in the ISM, and therefore drive almost all of its chemistry. In the case of $\rm HD$, cosmic rays promote the main channel of its formation as follows:
\begin{equation}
\label{eq:HD_equations}
{\rm H} \xrightarrow{\rm CR} {\rm H^+} \xrightarrow{ \rm D } {\rm D^+} \xrightarrow{\rm H_2} {\rm HD}
\end{equation}
Therefore, HD can, in principle, be used to constrain the CRIR \citep[e.g.,][]{Balashev2020}. Such an independent constraint would be extremely valuable, given the still loose constraints on the CRIR in both the local Universe \citep[see, e.g.,][]{Hartquist1978a, vanDishoeck1986, Federman1996, Indriolo2007, Neufeld2017, Gonzalez_Alfonso2013, Gonzalez_Alfonso2018, vanderTak2016} and at high redshift \citep{Muller2016, Shaw2016, Indriolo2018}.
Additionally, a $\rm HD$-based method to constrain CRIR has an important advantage compared to other, widely-used methods based on oxygen-bearing molecules: the abundance of $\rm HD$ is found to increase relative to H$_2$ when the metallicity decreases \citep[mostly due to chemistry;][]{Liszt2015, Balashev2020}.
Therefore, $\rm HD$ allows us to probe the CNM in lower metallicity environments.
Motivated by this emerging possibility of using $\rm HD$ as a probe of the CRIR and the small number of HD detections known to date,
we performed a systematic search for $\rm HD$ in recently-published and archival H$_2$-bearing DLAs at high redshift.
We report four new HD detections. Additionally, we refit the HD, $\rm H_2$, and \ion{C}{i} lines in a few systems to obtain reliable upper limits on the HD column density and consistent constraints on the physical parameters. Finally, in one DLA we find $\rm H_2$ that has not been reported before (while we only obtain an upper limit on $\rm HD$).
This paper is organized as follows: in Sect.~\ref{sec:data} we present the sample in which we searched for HD lines. Sect.~\ref{sec:analysis} describes the data analysis and details on individual systems. The measurements of $\rm HD/H_2$ abundances are summarized in Sect.~\ref{sec:results} and used in Sect.~\ref{sec:phys_cond} to constrain the physical conditions in the absorbing medium. In Sect.~\ref{sec:discussion}, we discuss some implications on the derived CRIR and limitations of the model. Lastly, in Sect.~\ref{sec:conclusion} we offer our concluding remarks.
\section{Data}
\label{sec:data}
To search for HD absorption lines in high-$z$ DLAs, we used quasar spectra obtained at medium and high resolving power with X-shooter ($R\sim6000$; \citealt{Vernet2011}) and the Ultraviolet and Visual Echelle Spectrograph (UVES, $R\sim50\,000$; \citealt{Dekker2000}) on the Very Large Telescope (VLT) as well as the High Resolution Echelle Spectrograph (HIRES, $R\sim50\,000$; \citealt{Voht1994}) on the Keck telescope.
Most of the data come from X-shooter and include the spectra of quasars with recently reported
high-$z$ H$_2$-bearing DLAs from \citet{Noterdaeme2018, Ranjan2018, Balashev2019, Ranjan2020}.
The detailed description of the observations and data reduction is presented in the above-mentioned papers. Typically, these quasars were observed with 1-4 exposures, each about one hour long.
The UVES data include the system at $z=3.09$ towards J\,1311+2225, recently reported by \citet{Noterdaeme2018}, where \ion{C}{I} was detected together with H$_2$ and CO molecules; the three well-known ESDLA systems at $z=2.402$ towards HE\,0027$-$1836 \citep{Noterdaeme2007} (for which further data were obtained by \citet{Rahmani2013}, leading to an improved-quality spectrum), at $z=3.29$ towards J\,0816+1446 \citep{Guimaraes2012}, and at $z=2.34$ towards J\,2140$-$0321 \citep{Noterdaeme2010}; and the DLA system towards J\,2340$-$0053, where HD was reported independently and almost simultaneously by \citet{Kosenko2018} and \citet{Rawlins2018}. For this latter system,
we refitted HD together with $\rm H_2$ and \ion{C}{I} lines to obtain self-consistent priors on physical parameters that are used to derive the CRIR.
For these systems we used the spectra from the original publications, the SQUAD UVES database \citep{Murphy2019}, or the KODIAQ DR2 database \citep{OMeara2017}.
We also looked at all other
known H$_2$-bearing DLAs at high $z$ to search for HD absorption lines that were not detected or considered in the original studies.
\citet{Kosenko2018} reported a new H$_2$-bearing system at $z=2.067$ towards Q\,0812+3208. Unfortunately, only the weakest HD transition (L0-0 band) is covered by the HIRES spectrum (see details in \citealt[][]{Balashev2010}), so that we were only able to place an upper limit on $N$(HD).
A summary of the H$_2$-bearing DLAs analysed in this paper is provided in Table~\ref{tab:qso}. In Table~\ref{tab:known_qso}, we provide information on the high-$z$ HD/H$_2$-bearing systems known to date, which are used later to derive physical conditions.
\begin{center}
\begin{table*}
\centering
\caption{H$_2$-bearing DLA systems searched for HD.}
\label{tab:qso}
\begin{tabular}{lccccccc}
\hline
Quasar &$z_{\rm em}$ & $z_{\rm abs}$ & $\log N(\rm HI)$ & [X/H]$^a$ & X & $\log N(\rm H_2)$ & References$^b$\\
\hline
\multicolumn{8}{c}{VLT/X-shooter data:} \\
J\,0136+0440 & 2.78 & 2.779 & $20.73\pm0.01$ & $-0.58\pm0.03$ & S & $18.65^{+0.06}_{-0.07}$ & 1 \\
J\,0858+1749 & 2.65 & 2.625 & $20.40\pm0.01$ & $-0.63\pm0.02$ & S & $19.72^{+0.01}_{-0.02}$ & 1\\
J\,0906+0548 & 2.79 & 2.567 & $20.13\pm0.01$ & $-0.18^{+0.05}_{-0.08}$ & S & $18.88\pm0.02$ & 1\\
J\,0917+0154 & 2.18 & 2.107 & $20.75\pm0.04$ & $0.17\pm0.07$ & Zn & $20.11\pm0.06$ & 2, 3\\
J\,0946+1216 & 2.66 & 2.607 & $21.15\pm0.02$ & $-0.48\pm0.01$ & S & $19.97^{+0.01}_{-0.02}$ & 1\\
J\,1143+1420 & 2.58 & 2.323 & $21.64\pm0.06$ & $-0.80\pm0.06$ & Zn
& $18.3\pm0.1$ & 4 \\
J\,1146+0743 & 3.03 & 2.840 & $21.54\pm0.01$ & $-0.57\pm0.02$ & Zn & $18.82^{+0.03}_{-0.02}$ & 1\\
J\,1236+0010 & 3.02 & 3.033 & $20.78\pm0.01$ & $-0.58^{+0.04}_{-0.03}$ & S & $19.76\pm0.01$ & 1\\
J\,1513+0352 & 2.68 & 2.46 & $21.83\pm0.01$ & $-0.84\pm 0.23$ & Zn & $21.31\pm0.01$ & 5\\
J\,2232+1242 & 2.30 & 2.230 & $21.75\pm0.03$ & $-1.48\pm0.05$ & Zn & $18.56\pm0.02$ & 4 \\
J\,2347$-$0051 & 2.62 & 2.588 & $20.47\pm0.01$ & $-0.60^{+0.06}_{-0.09}$ & S & $19.44\pm0.01$ & 1 \\
\hline
\multicolumn{8}{c}{High-resolution (Keck/HIRES and VLT/UVES) data:} \\
HE0027$-$1836 & 2.56 & 2.402 & $21.75\pm0.10$ & $-1.63\pm0.10$ & Zn & $17.43\pm0.02$ & 4, 6 \\
J\,0812+3208 & 2.70 & 2.067 & $21.50\pm0.20$ & $-1.83\pm0.20$ & Si & $19.28\pm0.01^{c}$ & 7, 8 \\
J\,0816+1446 & 3.85 & 3.287 & $22.00\pm0.10$ & $-1.10\pm0.10$ & Zn & $18.48\pm0.02^{c}$ & 9 \\
J\,1311+2225 & 3.14 & 3.093 & $20.62\pm0.10$ & $-0.34^{+0.13}_{-0.14}$$^c$ & Zn & $19.69\pm0.01^{c}$ & 2 \\
J\,2140$-$0321 & 2.48 & 2.339 & $22.41\pm0.03$ & $-1.52\pm0.08$ & Zn & $20.13\pm0.07$ & 4, 10 \\
\hline
\end{tabular}
\begin{tablenotes}
\item $(a)$ Metallicity with respect to solar \citep{Asplund2009}: $[{\rm X}/{\rm H}] = \log({\rm X/H}) - \log({\rm X/H})_{\odot}$.
\item $(b)$ References:
(1) \citet{Balashev2019},
(2) \citet{Noterdaeme2018},
(3) \citet{Zou2018},
(4) \citet{Ranjan2020},
(5) \citet{Ranjan2018},
(6) \citet{Noterdaeme2007},
(7) \citet{Kosenko2018},
(8) \citet{Jorgenson2010},
(9) \citet{Guimaraes2012},
(10) \citet{Noterdaeme2015}.
\item $(c)$ This work.
\end{tablenotes}
\end{table*}
\end{center}
\begin{center}
\begin{table*}
\centering
\caption{Known HD-bearing DLA systems.}
\label{tab:known_qso}
\begin{tabular}{lcccccccc}
\hline
Quasar &$z_{\rm em}$ & $z_{\rm abs}$ & $\log N(\rm HI)$ & [X/H]$^a$ & X & $\log N(\rm H_2)$ & $N({\rm HD})$ & References$^b$\\
\hline
J\,0000+0048 & 3.03 & 2.5255 & $20.8\pm 0.1$ & $0.46\pm0.45$ & Zn & $20.43\pm0.02$ & $16.64^{+0.16}_{-0.18}$ & 1 \\
B\,0120$-$28 & 0.434 & 0.18562 & $20.50\pm0.10$ & $-1.19^{+0.15}_{-0.21}$ & S & $20.00\pm0.10$ & $14.82\pm0.15$ & 2 \\
Q\,0528$-$2505 & 2.77 & 2.81112 & $21.35\pm0.10$ & $-0.68\pm0.02$ & Zn & $17.85\pm0.02$ & $13.33\pm0.02$ & 3, 4 \\
J\,0643$-$5041 & 3.09 & 2.658601 & $21.03\pm0.08$ & $-0.91\pm0.09$ & Zn & $18.54\pm0.01$ & $13.65\pm0.07$ & 5 \\
J\,0812+3208 & 2.7 & 2.626443 & $21.35\pm0.10$ & $-0.81\pm0.10$ & Zn & $19.93\pm0.04$ & $15.71\pm0.07$ & 6, 7 \\
& & 2.626276 & & $-0.81\pm0.10$ & Zn & $18.82\pm0.37$ & $12.98\pm0.22$ & 6, 7 \\
J\,0843+0221 & 2.92 & 2.786 & $21.82\pm0.11$ & $-1.52^{+0.08}_{-0.10}$ & Zn & $21.21\pm0.02$ & $17.35^{+0.15}_{-0.34}$ & 8 \\
J\,1232+0815 & 2.57 & 2.3377 & $20.90^{+0.08}_{-0.10}$ & $-1.32\pm0.12$ & S & $19.57^{+0.10}_{-0.13}$ & $15.53^{+0.17}_{-0.12}$ & 9, 10 \\
J\,1237+0647 & 2.78 & 2.68959 & $20.00\pm0.15$ & $0.34\pm0.12$ & Zn & $19.20\pm0.13$ & $14.48\pm0.05$ & 11 \\
J\,1331+170 & 2.08 & 1.77637 & $21.18\pm0.04$ & $-1.22\pm0.10$ & Zn & $19.43\pm0.10$ & $14.83\pm0.15$ & 6, 12 \\
& & 1.77670 & & $-1.22\pm0.10$ & Zn & $19.39\pm0.11$ & $14.61\pm0.20$ & 6, 12 \\
J\,1439+1117 & 2.58 & 2.41837 & $20.10\pm0.10$ & $0.16\pm0.11$ & Zn & $19.38\pm0.10$ & $14.87\pm0.03$ & 13, 14 \\
J\,2100$-$0641 & 3.14 & 3.09149 & $21.05\pm0.15$ & $-0.73\pm0.15$ & Si & $18.76\pm0.04$ & $13.83\pm0.06$ & 15, 16 \\
J\,2123$-$0050 & 2.261 & 2.0593 & $19.18\pm0.15$ & $-0.19\pm0.10$ & S & $17.94\pm0.01$ & $13.87\pm0.06$ & 17 \\
J\,2340$-$0053 & 2.083 & 2.05 &
$20.35\pm 0.05$ & $-0.52\pm 0.06$
& S & 18.62$^{+0.02}_{-0.01}$ $^{c}$ & $14.11\pm0.06$ $^{c}$ & 18 \\
\hline
\end{tabular}
\begin{tablenotes}
\item $(a)$ Metallicity with respect to solar \citep{Asplund2009}: $[{\rm X}/{\rm H}] = \log({\rm X/H}) - \log({\rm X/H})_{\odot}$.
\item $(b)$ References: (1) \citet{Noterdaeme2017}, (2) \citet{Oliveira2014}, (3) \citet{Klimenko2015}, (4) \citet{Balashev2020b}, (5) \citet{Albornoz2014},
(6) \citet{Balashev2010}, (7) \citet{Jorgenson2009}, (8) \citet{Balashev2017}, (9) \citet{Ivanchik2010}, (10) \citet{Balashev2011}, (11) \citet{Noterdaeme2010}, (12) \citet{Carswell2011}, (13) \citet{Srianand2008}, (14) \citet{Noterdaeme2008}, (15) \citet{Ivanchik2015}, (16) \citet{Jorgenson2010}, (17) \citet{Klimenko2016}, (18) \citet{Rawlins2018}.
\item $(c)$ This work.
\end{tablenotes}
\end{table*}
\end{center}
\section{Analysis}
\label{sec:analysis}
We analyzed the absorption lines using multi-component Voigt profile\footnote{The Voigt profile is a convolution of Lorentzian and Gaussian functions, arising from natural broadening and thermal/turbulent motions of the gas, respectively.} fitting.
The unabsorbed continuum was typically constructed by eye using spline interpolation constrained by regions free from any evident absorption lines \citep[see e.g.][]{Balashev2019}.
The lines were fitted simultaneously, and the spectral pixels used to constrain the model were selected by eye to avoid blends (mainly with Ly-$\alpha$ forest lines). The best values and interval estimates of the fitting parameters (Doppler parameter $b$, column density $N$, and redshift $z$) were obtained with a Bayesian approach, using a standard $\chi^2$ likelihood to compare the data and the model. To sample the posterior distribution function of the parameters, we used Markov Chain Monte Carlo (MCMC) \citep[see e.g.][]{Balashev2017} with affine-invariant sampling \citep{Goodman2010}. By default, the priors on most parameters ($b$, $\log N$ and $z$) were assumed to be flat.
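The core of this fitting scheme can be sketched as follows: a Voigt-profile model, a flat-prior $\chi^2$ likelihood, and a minimal affine-invariant (stretch-move) sampler. All numbers below (line parameters, noise level, prior boxes) are invented for the demonstration, and no instrumental convolution is applied.

```python
import numpy as np
from scipy.special import wofz

def voigt_tau(wl, z, logN, b, wl0=1215.67, f=0.4164, gamma=6.265e8):
    """Optical depth of one transition: wl, wl0 in Angstrom, b in km/s,
    N in cm^-2, gamma (natural broadening) in s^-1."""
    c_A = 2.998e18                           # speed of light [Angstrom/s]
    nu = c_A / wl
    nu0 = c_A / (wl0 * (1.0 + z))            # observed-frame line centre
    dnu_D = b * 1e13 / wl0 / (1.0 + z)       # observed-frame Doppler width [Hz]
    a = gamma / (4.0 * np.pi * dnu_D * (1.0 + z))
    H = wofz((nu - nu0) / dnu_D + 1j * a).real   # Voigt function H(a, x)
    return 10.0**logN * 0.02654 * f * H / (np.sqrt(np.pi) * dnu_D)

def log_posterior(theta, wl, flux, err):
    z, logN, b = theta
    if not (2.58 < z < 2.60 and 12.0 < logN < 16.0 and 0.5 < b < 30.0):
        return -np.inf                       # flat (box) priors
    model = np.exp(-voigt_tau(wl, z, logN, b))
    return -0.5 * np.sum(((flux - model) / err) ** 2)   # chi^2 likelihood

def stretch_move(log_post, p0, nsteps, a=2.0, rng=None):
    """Minimal affine-invariant ensemble sampler (Goodman & Weare 2010)."""
    rng = rng or np.random.default_rng(1)
    pos = p0.copy()
    nwalkers, ndim = pos.shape
    lp = np.array([log_post(p) for p in pos])
    chain = np.empty((nsteps, nwalkers, ndim))
    for t in range(nsteps):
        for k in range(nwalkers):
            j = rng.integers(nwalkers - 1)
            j += j >= k                      # pick a different walker
            zf = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # stretch factor
            prop = pos[j] + zf * (pos[k] - pos[j])
            lp_prop = log_post(prop)
            if np.log(rng.random()) < (ndim - 1) * np.log(zf) + lp_prop - lp[k]:
                pos[k], lp[k] = prop, lp_prop
        chain[t] = pos
    return chain

# synthetic Ly-alpha line at z = 2.59 with 2% flux noise
rng = np.random.default_rng(0)
wl = np.linspace(4362.0, 4366.5, 200)
truth = (2.59, 13.5, 6.0)
flux = np.exp(-voigt_tau(wl, *truth)) + rng.normal(0.0, 0.02, wl.size)
err = np.full_like(wl, 0.02)

p0 = np.array(truth) + 1e-3 * rng.standard_normal((16, 3))
chain = stretch_move(lambda p: log_posterior(p, wl, flux, err), p0, 800)
z_med, logN_med, b_med = np.median(chain[300:].reshape(-1, 3), axis=0)
print(f"z = {z_med:.5f}, logN = {logN_med:.2f}, b = {b_med:.1f}")
```

Note that for a saturated line, the recovered $\log N$ and $b$ are strongly correlated, which is exactly the degeneracy discussed below for the medium-resolution spectra.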
However, for most X-shooter spectra, the resolution is not high enough to accurately resolve the velocity structure, and some HD lines can be in the saturated regime. In these cases, we found that the column densities and Doppler parameters can be highly degenerate, resulting in poorly constrained values. Therefore, we used priors on the number of components, their redshifts, and Doppler parameters from the analysis of H$_2$ or \ion{C}{I} absorption lines \citep[see e.g.][]{Balashev2019}. This is adequate, since H$_2$ is usually constrained by a large number of lines ($\sim50-100$) and \ion{C}{I} is fitted in the region outside the Ly$\alpha$ forest.
We mostly considered components where the H$_2$ column density exceeds $\log N (\rm H_2) \gtrsim 18$, since for lower H$_2$ column densities the expected HD column densities are much lower than what the data can constrain, i.e., even upper limits would be uninformative.
Moreover, we found that in X-shooter spectra the continuum placement for some HD lines is non-trivial.
We estimated the resulting uncertainty independently using the following procedure. We performed a large number ($\sim500$) of realizations, where we randomly shifted the continuum level for each line. The values of the shifts were drawn from a normal distribution with dispersion corresponding to the mean uncertainty of spectral pixels at the positions of absorption lines. For each realization, we also randomly drew an HD Doppler parameter using constraints obtained from H$_2$. The redshift uncertainty from H$_2$ (or \ion{C}{I}) in most cases is quite low and has only marginal effect on the results.
We then fitted each realization $i$ with fixed $b$ and $z$
and obtained the best fit column density $N^i$(HD). We obtained the final HD column density measurement from the distribution of $N^i$(HD).
We found that the uncertainties on HD column densities increase in most cases by a factor of $\sim 2$ compared to the MCMC fit with a fixed continuum, meaning that the continuum placement uncertainty contributes significantly to the total $N($HD) uncertainty budget at medium resolution.
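Schematically, this resampling procedure can be written as follows. The function `fit_logN`, the number of lines, and all numerical values below are placeholders standing in for the actual per-realization profile fit.

```python
import numpy as np

def continuum_mc(fit_logN, n_real=500, n_lines=5, cont_sigma=0.02,
                 b_prior=(7.9, 0.4), rng=None):
    """Propagate continuum-placement uncertainty into N(HD).
    fit_logN(shifts, b) -> best-fit log N(HD) for one realization,
    with fixed z and b; cont_sigma is the mean relative pixel
    uncertainty at the positions of the HD lines."""
    rng = rng or np.random.default_rng(2)
    best = np.empty(n_real)
    for i in range(n_real):
        shifts = rng.normal(0.0, cont_sigma, n_lines)  # one offset per line
        b = rng.normal(*b_prior)                       # Doppler prior from H2
        best[i] = fit_logN(shifts, b)
    # final value and uncertainty from the distribution of best fits
    lo, med, hi = np.percentile(best, [15.865, 50.0, 84.135])
    return med, med - lo, hi - med                     # value, -err, +err

# toy stand-in for the real fit: N(HD) responds linearly to the offsets
toy_fit = lambda shifts, b: 14.87 + 5.0 * shifts.mean() + 0.02 * (b - 7.9)
med, em, ep = continuum_mc(toy_fit)
print(f"log N(HD) = {med:.2f} -{em:.2f} +{ep:.2f}")
```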
We summarize the results of fitting HD lines in Table~\ref{tab:fit_results}
and provide specific comments on each system as follows:
\subsection{VLT/X-shooter data:}
\subsubsection{J\,0136$+$0440}
We only tentatively detected HD absorption lines at the expected positions based on the redshift of the main H$_2$ component ($z=2.779430$) with column density $\log N(\rm H_2)=18.64^{+0.06}_{-0.08}$ and Doppler parameter $b=7.7^{+2.4}_{-1.9}$\,km\,s$^{-1}$. Therefore, fixing $z$ and using priors on Doppler parameter from H$_2$ analysis, we placed only an upper limit to the HD column density in this component, $\log N({\rm HD})<14.5$. The fits to the unblended HD absorption lines are shown in Fig.~\ref{fig:J0136}. Here and in the following figures we show only those HD absorption lines that are not totally blended with other absorption lines (from Ly$\alpha$ forest and/or H$_2$ and metal lines from corresponding DLA).
\subsubsection{J\,0858$+$1749}
We detected HD absorption lines at the position of H$_2$ component ($z=2.62524$) that has $\log N(\rm H_2)=19.72^{+0.01}_{-0.02}$ and
$b=7.9^{+0.4}_{-0.4}$\,km\,s$^{-1}$. To fit HD lines we fixed $z$ and used $b$ as a prior from H$_2$ analysis. Using the HD\,L8-0R(0) line and red wings of HD\,L4-0R(0), HD\,L7-0R(0), HD\,L11-0R(0) and HD\,L12-0R(0) absorption lines (see Fig.~\ref{fig:J0858}), we constrained $\log N({\rm HD}) = 14.87^{+0.06}_{-0.09}$.
\subsubsection{J\,0906$+$0548}
We only tentatively detected HD absorption lines at the position of the main H$_2$ component ($z=2.56918$) that has
$\log N(\rm H_2)=18.87\pm0.02$ and
$b=6.8^{+0.1}_{-0.1}$\,km\,s$^{-1}$. Although we did find HD lines at the expected positions, all of them are partially or fully blended with other absorption lines (see Fig.~\ref{fig:J0906}). Therefore, using $z$ and priors on $b$ obtained from H$_2$ analysis, we were only able to place an upper limit to the HD column density in this component to be $\log N({\rm HD})<14.7$.
\subsubsection{J\,0917$+$0154}
This system was selected by \citet{Ledoux2015} in their search for cold gas at high redshift through \ion{C}{I} lines. The detection and analysis of H$_2$ was presented by \citet{Noterdaeme2018} (they reported total column density $N(\rm H_2) =20.11\pm0.06$) and the metal lines were studied by \citet{Zou2018}.
Unfortunately, due to the low resolution and the relatively high velocity extent of the H$_2$ lines, almost all HD lines are blended, including the usually available L3-0R0, L4-0R0, and W0-0R0 lines. The only unblended line, L0-0R0, has a very low oscillator strength, and therefore we were only able to place a very conservative upper limit on the HD column density, using priors on the redshifts and Doppler parameters of the three-component fit obtained from jointly refitting the C\,{\sc i}\ and H$_2$ absorption lines. The fit to the \ion{C}{I} and HD lines is shown in Fig.~\ref{fig:J0917}, and the H$_2$ line profiles are presented in Fig.~\ref{fig:J0917_H2}. The detailed fit result is given in Table~\ref{tab:J0917}.
\subsubsection{J\,0946$+$1216}
The detection of HD at
the position of the main H$_2$ component ($z=2.60642$, $\log N(\rm H_2)=19.96^{+0.01}_{-0.02}$, $b=9.8^{+0.8}_{-0.3}$\,km\,s$^{-1}$) for this system is also tentative. Unfortunately, the spectrum is very noisy and significantly contaminated by highly saturated H$_2$ lines and intervening Ly$\alpha$ forest absorption.
Therefore, we fixed $z$ and used the Doppler parameter from the H$_2$ analysis as a prior. Hence, we were only able to obtain a relatively loose constraint on the HD column density in this component, $\log N({\rm HD})<15.2$ (see Fig.~\ref{fig:J0946}).
\subsubsection{J\,1143$+$1420}
This extremely strong DLA at $z = 2.3228054$ was previously analysed by \citet{Ranjan2020}, and the H$_2$ column density was found to be $\log N({\rm H_2}) = 18.3\pm0.1$. We looked for HD lines associated with H$_2$ and were able to place an upper limit on the HD column density. Using fixed $z$ and priors on the Doppler parameter from the H$_2$ analysis, we obtained $\log N({\rm HD}) < 15$.
The fit to HD lines is shown in Fig.~\ref{fig:J1143}.
\subsubsection{J\,1146$+$0743}
We did not detect HD absorption lines at the positions of the two H$_2$ components ($z=2.84163$ and $2.83946$, with $\log N(\rm H_2) = 18.76\pm0.01$ and $17.94^{+0.11}_{-0.13}$, respectively). We therefore constrained $\log N({\rm HD})<14.4$ and $\log N({\rm HD})<14.5$ for the red and blue components, respectively, using a combination of the HD\,L3-0R(0), HD\,L8-0R(0), HD\,W0-0R(0), HD\,W1-0R(0), HD\,L11-0R(0) and HD\,L12-0R(0) lines, with priors on $b$ and fixed $z$ from the H$_2$ analysis. The spectrum at the expected positions of the HD absorption lines is shown in Fig.~\ref{fig:J1146}.
\subsubsection{J\,1236$+$0010}
We do not detect HD absorption lines at the position of the H$_2$ component of this DLA ($z=3.03292$, $\log N(\rm H_2)=19.76\pm0.01$, $b=2.3^{+0.2}_{-0.2}$\,km\,s$^{-1}$). To fit the HD lines we fixed $z$ and used the H$_2$ Doppler parameter as a prior. Using the HD\,L0-0R(0), HD\,L3-0R(0), HD\,L4-0R(0), HD\,L5-0R(0), HD\,W0-0R(0), HD\,L11-0R(0) and HD\,L14-0R(0) lines (see Fig.~\ref{fig:J1236}), we placed a rather loose constraint on the HD column density, $\log N(\rm HD) \lesssim 16.1$, since the lines are found to be in the intermediate saturation regime.
\subsubsection{J\,1513$+$0352}
The extremely strong DLA at $z = 2.463598$ towards J\,1513$+$0352 was found in the SDSS database by \citet{Noterdaeme2014}. A detailed analysis of the system by \citet{Ranjan2018} using the X-shooter spectrum revealed a very high H$_2$ column density, $\log N({\rm H_2}) = 21.31\pm0.01$ (the highest value reported to date at high $z$).
We detected the HD L0-0R0, HD L5-0R0 and HD L7-0R0 absorption lines in this system. However, because the H$_2$ lines are damped, they do not constrain the Doppler parameters. We therefore instead used the value obtained from the associated \ion{C}{I} as a prior for HD and obtained $\log N({\rm HD}) = 17.42^{+0.64}_{-1.09}$. This makes it the DLA with one of the highest HD column densities as well. However, since the absorption lines are in the saturated regime and the resolution is moderate, the uncertainty on $N(\rm HD)$ remains quite large.
The fit to the HD absorption lines is shown in Fig.~\ref{fig:J1513}.
\subsubsection{J\,2232$+$1242}
We do not detect HD absorption lines in the H$_2$-bearing DLA ($z = 2.2279378$, $\log N({\rm H_2}) = 18.56\pm0.02$, \citealt{Ranjan2020}) towards J\,2232$+$1242. Using the redshift and a prior on the Doppler parameter from the H$_2$ fit, we obtain an upper limit on the HD column density of $\log N(\rm HD) < 13.8$ (see Fig.~\ref{fig:J2232}).
\subsubsection{J\,2347$-$0051}
We detect HD absorption lines at the position of H$_2$ ($z=2.58797$, $\log N(\rm H_2)=19.44\pm0.01$, $b=6.2^{+0.2}_{-0.2}$\,km\,s$^{-1}$). Using HD\,L3-0R(0), HD\,L5-0R(0), HD\,L7-0R(0), HD\,L13-0R(0) and HD\,L15-0R(0) lines (see Fig.~\ref{fig:J2347}), we measured HD column density to be $\log N({\rm HD})=14.33^{+0.18}_{-0.16}$ (to fit HD lines we fixed $z$ and used prior on $b$ from H$_2$ analysis).
\subsection{KECK/HIRES and VLT/UVES data:}
\label{sect:high}
\subsubsection{HE\,0027$-$1836}
The extremely strong DLA system at $z = 2.4018258$ has been studied by \citet{Noterdaeme2007, Rahmani2013}. H$_2$ was identified in this DLA with a column density $\log N({\rm H_2}) = 17.43$. Searching for HD absorption lines at the redshift of the H$_2$ absorption lines, we obtained an upper limit on the HD column density of $\log N(\rm HD) < 13.6$ (fixing $z$ and using the Doppler parameter from the H$_2$ analysis as a prior), since higher HD column densities would be inconsistent with the L2-0R0 and L3-0R0 lines in the spectrum (see Fig.~\ref{fig:HE0027_HD}).
\subsubsection{J\,0812$+$3208}
The spectrum towards J\,0812$+$3208 features two DLAs, at $z=2.626491$ and $z=2.06779$ \citep{Prochaska2003}. \citet{Jorgenson2010} detected absorption lines from \ion{C}{I} fine-structure levels in both of them. The associated HD/H$_2$ absorption lines at $z=2.626491$ were studied in detail by several authors \citep{Jorgenson2009, Balashev2010, Tumlinson2010}; however, no significant attention has been paid to the system at $z=2.067$. Knowing that \ion{C}{i} is an excellent tracer of H$_2$ in the ISM \citep{Noterdaeme2018}, we searched for H$_2$ and HD molecules in this system as well.
We used the Keck/HIRES spectrum, whose reduction is detailed in \cite{Balashev2010}. We detected H$_2$ absorption lines from the $J\le4$ rotational levels, which we fitted using a one-component model, with tied redshifts and Doppler parameters for all rotational levels. Indeed, the H$_2$ lines are located at the blue end of the spectrum, with only one to two unblended H$_2$ lines covered for each rotational level. The fit results are given in Table~\ref{table:J0812} and the line profiles are shown in Fig.~\ref{fig:J0812}. Using the relative population of the $J=1$ and $J=0$ levels, we found the excitation temperature to be $T_{01}=67^{+4}_{-3}$\,K.
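The conversion from the relative level populations to $T_{01}$ follows directly from the Boltzmann distribution, $N_1/N_0 = (g_1/g_0)\exp(-E_{01}/kT_{01})$. A short numerical check (the energy separation $E_{01}/k \simeq 170.5$\,K and the statistical weights are standard H$_2$ values):

```python
import math

E01_K = 170.5          # energy of H2 J=1 above J=0, in Kelvin
g0, g1 = 1, 9          # statistical weights (2J+1)(2I+1): para J=0, ortho J=1

def T01(logN0, logN1):
    """Excitation temperature from log10 column densities of J=0 and J=1."""
    ratio = 10.0 ** (logN1 - logN0)          # N(J=1)/N(J=0)
    return E01_K / math.log((g1 / g0) / ratio)

# a population ratio N1/N0 ~ 0.71 corresponds to T01 ~ 67 K
print(round(T01(19.00, 19.00 + math.log10(0.71)), 1))  # prints 67.1
```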
Unfortunately, only two HD lines (L0-0R0 and L1-0R0) were covered in this spectrum, and only the weakest one, L0-0R0, was unblended (see Fig.~\ref{fig:J0812}). Thus, we estimated only an upper limit on the HD column density, fixing the redshift and Doppler parameter from H$_2$, and obtained $\log N(\rm HD)<14.4$.
\subsubsection{J\,0816$+$1446}
The multicomponent $\rm H_2$-bearing DLA system towards J\,0816$+$1446 was identified by \citet{Guimaraes2012}. This system has a relatively high redshift, and its lines are hence significantly blended with Ly$\alpha$ forest lines. \citet{Guimaraes2012} reported H$_2$ in two components, the one at $z=3.28742$ having a high enough H$_2$ column density, $\log N(\rm H_2) = 18.66\pm0.27$, to be searched for $\rm HD$. We refitted the H$_2$ absorption lines at $z=3.28742$ with three subcomponents, since this provides a better fit, and measured a total $\log N(\rm H_2) = 18.51\pm0.04$, in agreement with \citet[][]{Guimaraes2012}. Unfortunately, all $\rm HD$ lines are blended, and therefore, using fixed $z$ and Doppler parameters from the H$_2$ analysis, we were only able to obtain an upper limit on the HD column density of $\log N (\rm HD) \lesssim 15$ from the L4-0 R(0) line (fit results are presented in Table~\ref{tab:J0816} and Fig.~\ref{fig:J0816_HD}).
\subsubsection{J\,1311$+$2225}
This multicomponent $\rm H_2$-bearing DLA system was selected through \ion{C}{i} by \citet{Ledoux2015}. \citet{Noterdaeme2018} reported $\log N({\rm H_2}) = 19.69\pm 0.01$ in this system using a single-component model, but noted that four components can be distinguished in the H$_2$ lines.
We refitted the $\rm H_2$ and \ion{C}{I} lines in this system using a four-component model. First, we fit the \ion{C}{I} absorption lines from the three fine-structure levels, tying the Doppler parameters for each component. Then we performed a four-component fit to the H$_2$ lines, where the initial guess for the components was based on the \ion{C}{I} result.
For $\rm H_2$, we tied the Doppler parameters only between the $J=0$ and $J=1$ levels, while the Doppler parameters of the other rotational levels were allowed to vary independently. However, since the components are significantly blended among themselves and the data are quite noisy, we added two penalty functions to the likelihood. The first one artificially suppresses solutions where the Doppler parameter of some $J$ level would be lower than that of the $J-1$ level. This is well motivated both physically and observationally, since an increase of the Doppler parameter for the higher $\rm H_2$ rotational levels has been established in many $\rm H_2$ absorption systems \citep[see e.g.][]{Lacour2005, Noterdaeme2007, Balashev2009}. The other penalty keeps a reasonable $\rm H_2$ excitation diagram: we penalized models with $T_{J-1,J} > T_{J,J+1}$\footnote{where $T_{J,J+1}$ is the excitation temperature between the $J$ and $J+1$ levels}. This is also reasonably motivated by both observations and modelling \citep[see e.g.][]{Klimenko2020}.
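Such penalties can be added to the log-likelihood as soft terms; the sketch below uses a quadratic form with an arbitrary strength, which is our illustrative assumption rather than the exact functions used in the fit.

```python
import numpy as np

def penalized_lnlike(lnlike_chi2, b, T, scale=10.0):
    """b[j]: Doppler parameter of rotational level J=j (j = 0..Jmax);
    T[j]: excitation temperature T_{J=j, J=j+1}.
    Penalize b decreasing with J, and T_{J-1,J} > T_{J,J+1}."""
    pen = 0.0
    db = np.diff(b)                          # should be >= 0
    pen += scale * np.sum(np.minimum(db, 0.0) ** 2)
    dT = np.diff(T)                          # should be >= 0
    pen += scale * np.sum(np.minimum(dT, 0.0) ** 2)
    return lnlike_chi2 - pen

# monotonic b and T incur no penalty; inverted orderings are suppressed
print(penalized_lnlike(-100.0, b=[3.0, 4.0, 5.0], T=[70.0, 120.0]))  # -100.0
print(penalized_lnlike(-100.0, b=[5.0, 4.0, 3.0], T=[70.0, 120.0]))  # -120.0
```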
We thus obtain a total H$_2$ column density of $\log N({\rm H_2}) = 19.59\pm 0.01$, which is slightly lower than the value of $19.69\pm 0.01$ reported previously \citep{Noterdaeme2018}.
The fitting results are shown in Table~\ref{tab:J1311_results} and \ion{C}{I} and H$_2$ profiles in Figs.~\ref{fig:J1311_CI}, \ref{fig:J1311_H2_low}, \ref{fig:J1311_H2_j2}, \ref{fig:J1311_H2_j3}, \ref{fig:J1311_H2_j45}.
We also estimated the metallicity in this system. Unfortunately, very few of the metal lines usually used to derive metallicities are covered by this spectrum, and almost all covered lines are blended. We therefore used the \ion{Zn}{II}\,2062 line. We fitted this line assuming four components at the positions of the \ion{C}{I} components, and obtained a total \ion{Zn}{II} column density of $12.84^{+0.09}_{-0.11}$, corresponding to a metallicity of $-0.34^{+0.13}_{-0.14}$ relative to solar. The fit to the \ion{Zn}{II} absorption line is shown in Fig.~\ref{fig:J1311_ZnII}.
We again used a four-component model to analyse HD, with components associated with the \ion{C}{I} components. We found that component 3 of HD is shifted with respect to the \ion{C}{I} lines. However, component 3 of C\,{\sc i}\ has a rather large Doppler parameter, indicating velocity structure within this component that we cannot resolve due to the low quality of the spectrum and mutual blending with the other components. For HD, we therefore did not use the H$_2$ and \ion{C}{I} priors on the redshifts and Doppler parameters (except for the weak component 1, where only an upper limit on the HD column density could be placed). After the MCMC procedure, we found HD to be detected in components 2, 3, and 4, with redshifts that agree well with \ion{C}{I} within the uncertainties (see Table~\ref{tab:J1311_results}).
The fit to the HD lines is shown in Fig.~\ref{fig:J1311_HD}, and the HD column densities are reported in Table~\ref{tab:J1311_results}.
\subsubsection{J\,2140$-$0321}
H$_2$ absorption lines at $z = 2.339$ were previously found and analysed by \citet{Noterdaeme2015, Ranjan2020}, and the H$_2$ column density was found to be quite large, $\log N(\rm H_2) = 20.13$. To fit the HD absorption lines, we used the X-shooter and UVES spectra together. However, the UVES spectrum is very noisy, while the X-shooter spectrum has too low a resolution for the HD analysis. We were therefore only able to place an upper limit on the HD column density, $\log N(\rm HD) < 14.6$, using the priors on the Doppler parameters and redshifts obtained from the H$_2$ analysis \citep{Noterdaeme2015}; see Fig.~\ref{fig:J2140_HD}.
\subsubsection{J\,2340$-$0053}
\ion{C}{I} and H$_2$ absorption lines in the DLA at $z\approx2.055$ towards J\,2340$-$0053 were first reported by \citet{Jorgenson2010}. These authors found \ion{C}{i} in nine components, while they fitted H$_2$ using a six-component model. This spectrum was recently reanalysed by \citet{Rawlins2018} with seven components for both \ion{C}{I} and H$_2$, whose redshifts were found to be consistent with each other. HD absorption lines associated with the H$_2$ were later independently detected by \citet{Kosenko2018} and \citet{Rawlins2018}. In this paper, we present a detailed reanalysis of the HD, H$_2$ and \ion{C}{I} absorption lines.
Using the reduced 1D spectrum of J\,2340$-$0053 from the KODIAQ database \citep{OMeara2017}, we refitted the \ion{C}{I}, H$_2$ and HD absorption lines with a seven-component model, using the same methodology as in the previous section for J\,1311$+$2225. We fitted the \ion{C}{I} lines first, taking into account the partial coverage of the background emission-line region by the \ion{C}{I} line at $\sim$1560\,\AA\ reported by \citet{Bergeron2017}. We fitted the covering factors as independent parameters following the methodology of \citet{Balashev2011}, and found that a fit with three independent covering factors for the three main components performs better than one with a single covering factor for all components.
We then used the \ion{C}{I} fit as a first guess for the redshifts of the $\rm H_2$ lines. Unfortunately, the three central components are significantly blended with each other in almost all $\rm H_2$ absorption lines from the $J=0$, 1, 2 and 3 rotational levels. We therefore used the redshifts determined from the \ion{C}{I} fit as priors and, as for J\,1311$+$2225, applied penalty functions during the $\rm H_2$ analysis to enforce physically reasonable constraints.
We obtained a total H$_2$ column density of $\log N({\rm H_2}) = 18.57\pm 0.02$, which is higher than reported by \citet[][$\log N({\rm H_2}) = 17.99\pm0.05$]{Rawlins2018}.
The difference is partly due to the fact that
\citet{Rawlins2018} tied all H$_2$ Doppler parameters for $J > 0$ to H$_2$ $J = 0$, while we tied only H$_2$ $J = 1$ and allowed increasing $b$-values for other levels.
The fitting results are shown in Table~\ref{tab:J2340_results} and \ion{C}{I} and H$_2$ profiles in Figs~\ref{fig:J2340_CI}, \ref{fig:J2340_H2_J01}, \ref{fig:J2340_H2_J23}, \ref{fig:J2340_H2_J45}.
We fitted the HD absorption lines at the positions of these components using the priors on the Doppler parameters from the fit of the $J=0$ and $J=1$ rotational levels of $\rm H_2$. The exact $b$-values, however, have little effect on the results, since the $\rm HD$ absorption lines are optically thin.
The obtained total HD column density is $\log N({\rm HD}) = 14.11\pm 0.06$, which is slightly lower than found by \citet{Rawlins2018} ($\log N({\rm HD}) = 14.28\pm0.08$).
The fit to the HD lines is reported in Table~\ref{tab:J2340_results} and shown in Fig.~\ref{fig:J2340_HD}.
\begin{table*}
\centering
\caption{Results from the analysis of HD lines.}
\label{tab:fit_results}
\begin{tabular}{lccccc}
\hline
Quasar &$z$ & $b$ (km\,s$^{-1}$) & $\log N(\rm HD)^a$& $\log N(\rm H_2)$ & $N({\rm HD})/2N({\rm H}_2)$\\
\hline
\multicolumn{6}{c}{X-shooter data:}\\
J\,0136$+$0440 & 2.779430 & $7.7^{+2.4}_{-1.9}$ & $< 14.5$ & $18.64^{+0.06}_{-0.08}$ & $< 3.6 \times 10^{-5}$ \\
J\,0858$+$1749 & 2.625241 & $7.9^{+0.4}_{-0.4}$ & $14.87^{+0.06}_{-0.09}$ & $19.72^{+0.01}_{-0.02}$ & $\left(7.1^{+1.1}_{-1.4}\right)\times 10^{-6}$ \\
J\,0906$+$0548 & 2.569180 & $6.8^{+0.1}_{-0.1}$ & $< 14.7$ & $18.87^{+0.02}_{-0.02}$ & $< 3.4\times 10^{-5}$ \\
J\,0917+0154$(b)$ & $2.10586$ & $5.2^{+1.1}_{-1.8}$ & $<12$ & $17.96^{+0.82}_{-0.16}$ & $<5.5\times10^{-7}$ \\
& $2.10624$ & $6.4^{+1.5}_{-2.4}$ & $<15.9$ & $18.4^{+1.0}_{-0.3}$ & $<1.6\times10^{-3}$ \\
& $2.106812$ & $4.7^{+1.1}_{-1.3}$ & $<18.1$ & $20.09^{+0.07}_{-0.08}$ & $<5.1\times10^{-3}$ \\
J\,0946$+$1216 & 2.606406 & $9.8^{+0.8}_{-0.3}$ & $<15.2$ & $19.96^{+0.01}_{-0.02}$ & $< 9.0\times 10^{-6}$ \\
J\,1143+1420 & 2.3228054 & $2.2^{+2.0}_{-0.6}$ & $< 15$ & $18.3^{+0.1}_{-0.1}$ & $< 2.5\times10^{-4}$ \\
J\,1146$+$0743 & 2.839459 & $7.6^{+0.1}_{-0.4}$ & $< 14.5$ & $17.94^{+0.11}_{-0.13}$ & $< 1.8\times 10^{-4}$ \\
& 2.841629 & $11.4^{+0.5}_{-0.7}$ & $< 14.4$ & $18.76^{+0.01}_{-0.01}$ & $< 2.2\times 10^{-5}$ \\
J\,1236$+$0010 & 3.03292 & $2.3^{+0.2}_{-0.2}$ & $< 16.1$ & $19.76^{+0.01}_{-0.01}$ & $< 1.1\times 10^{-4}$ \\
J\,1513$+$0352 & 2.463598 & $3.9^{+0.3}_{-0.3}$ & $17.42^{+0.64}_{-1.09}$ & $21.31^{+0.01}_{-0.01}$ & $\left(6.4^{+2.1}_{-5.9}\right)\times 10^{-5}$ \\
J\,2232+1242 & 2.2279378 & $8.1^{+1.1}_{-1.2}$ & $<13.8$ & $18.56^{+0.02}_{-0.02}$ & $<8.7\times10^{-4}$ \\
J\,2347$+$0051 & 2.587971 & $6.2^{+0.2}_{-0.2}$ & $14.33^{+0.18}_{-0.16}$ & $19.44^{+0.01}_{-0.01}$ & $\left(3.9^{+2.0}_{-1.2}\right)\times 10^{-6}$ \\
\hline
\multicolumn{6}{c}{High-resolution data:}\\
HE\,0027$-$1836 & 2.4018258 & $1.2^{+0.1}_{-0.2}$ & $< 13.6$ & $17.43^{+0.02}_{-0.02}$ & $<7.4\times10^{-5}$ \\
J\,0812$+$3208 & $2.066780(^{+1}_{-1})$ & $4.4^{+0.1}_{-0.1}$ & $<14.4$ & $19.26^{+0.02}_{-0.01}$ & $< 7.4\times 10^{-6}$ \\
J\,0816$+$1446$(b)$ & $3.287252(^{+3}_{-2})$ & $0.6^{+0.1}_{-0.1}$ & $<14.9$ & $16.97^{+0.09}_{-0.10}$ & $<4.3\times10^{-3}$ \\
& $3.287399(^{+2}_{-3})$ & $1.5^{+0.1}_{-0.1}$ & $<14$ & $18.43^{+0.04}_{-0.03}$ & $<1.9\times10^{-5}$ \\
& $3.287515(^{+2}_{-3})$ & $1.1^{+0.1}_{-0.1}$ & $<14.2$ & $17.60^{+0.10}_{-0.10}$ & $<2.0\times10^{-4}$ \\
J\,1311+2225 & $3.091410(^{+21}_{-14})$ & $8.0^{+4.6}_{-5.4}$ & $<12.8$ & $17.87^{+0.37}_{-0.33}$ & $<4.4\times 10^{-6}$ \\
& $3.0915397(^{+66}_{-77})$ & $5.4^{+0.8}_{-0.8}$ & $14.82^{+0.08}_{-0.08}$ & $19.52^{+0.02}_{-0.02}$ & $\left(1.0^{+0.3}_{-0.2}\right)\times 10^{-5}$ \\
& $3.091714(^{+28}_{-48})$ & $\lesssim 2.8$ & $14.30^{+0.37}_{-0.31}$ & $18.25^{+0.22}_{-0.39}$ & $
\left(5.6^{+13.7}_{-3.2}\right)\times 10^{-5}$ \\
& $3.091871(^{+11}_{-26})$ & $4.0^{+1.6}_{-1.2}$ & $14.27^{+0.10}_{-0.13}$ & $18.57^{+0.05}_{-0.09}$ & $\left( 2.5^{+0.9}_{-0.7}\right)\times 10^{-6}$ \\
& Total: & & $15.02^{+0.11}_{-0.07}$ & $19.59^{+0.01}_{-0.01}$ & $\left(1.3^{+0.4}_{-0.2}\right)\times 10^{-5}$ \\
J\,2140$-$0321 & $2.33996(^{+3}_{-3})$ & $4.5^{+0.9}_{-0.7}$ & $<14.6$ & $20.13^{+0.07}_{-0.07}$ & $<1.5\times 10^{-6}$ \\
J\,2340$-$0053$^{b}$ & $2.0541703(^{+6}_{-4})$ & $2.5^{+0.1}_{-0.1}$ & $<13.5$ & $15.99^{+0.04}_{-0.04}$ & $<1.7\times 10^{-3}$ \\
& $2.0542913(^{+4}_{-9})$ & $1.7^{+0.1}_{-0.2}$ & $<12.7$ & $15.24^{+0.04}_{-0.03}$ & $<1.4\times 10^{-3}$\\
& $2.054528(^{+3}_{-3})$ & $3.0^{+0.1}_{-0.2}$ & $<13.8$ & $17.11^{+0.12}_{-0.14}$ & $<2.2\times 10^{-4}$\\
& $2.054610(^{+1}_{-1})$ & $1.0^{+0.1}_{-0.3}$ & $13.60^{+0.15}_{-0.14}$ & $18.27^{+0.06}_{-0.06}$ & $\left(1.1^{+0.5}_{-0.3}\right)\times 10^{-5}$\\
& $2.054723(^{+3}_{-3})$ & $3.1^{+0.1}_{-0.1}$ & $13.84^{+0.05}_{-0.05}$ & $18.14^{+0.04}_{-0.04}$ & $\left(2.5^{+0.4}_{-0.3}\right)\times 10^{-5}$\\
& $2.0549952(^{+5}_{-4})$ & $3.8^{+0.1}_{-0.1}$ & $<12.6$ & $16.43^{+0.03}_{-0.03}$ & $<7.1\times 10^{-5}$\\
& $2.0551398(^{+6}_{-4})$ & $1.8^{+0.1}_{-0.1}$ & $13.29^{+0.15}_{-0.21}$ & $17.43^{+0.04}_{-0.05}$ & $ \left(3.6^{+1.6}_{-0.3}\right)\times 10^{-5}$ \\
& Total: & & $14.11^{+0.06}_{-0.06}$ & $18.57^{+0.02}_{-0.02}$ & $\left(1.7^{+0.3}_{-0.2}\right)\times 10^{-5}$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item $(a)$ The point and interval estimates were obtained from a 1D marginalized posterior distribution function, and correspond to its maximum and 0.683 (1$\sigma$) confidence interval, respectively. In case of tentative detection, the upper limits are constrained from the 1$\sigma$ one-sided confidence interval.
\item $(b)$ These systems were re-fitted to obtain consistent results for HD, H$_2$, and \ion{C}{i} (see text).
\end{tablenotes}
\end{table*}
\section{Results}
\label{sec:results}
We summarize our new measurements of HD (and H$_2$) column densities and relative abundances $N({\rm HD})/2N({\rm H}_2)$ in Table~\ref{tab:fit_results}. In total, we report four new detections of HD molecules in high-redshift DLAs (sometimes in several components) and place upper limits for another twelve.
Fig.~\ref{fig:HD_H2} compares the HD and H$_2$ column densities in the Galaxy \citep{Snow2008} and at high redshift (new measurements and values from Table~\ref{tab:known_qso}).
We also compare the data to the primordial D/H isotopic ratio derived from updated Big Bang Nucleosynthesis (BBN) calculations \citep{Pitrou2018} and $\Omega_{\rm b} h^2$ from \citet{Planck2018}.
One can see that in the Galaxy the molecular ratios are well below the primordial isotopic ratio, while the distant measurements do not show such a tendency: at a given $\log N(\rm H_2)$, they indicate a systematically higher $\rm HD/H_2$ relative abundance than locally, closer to the BBN value.
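For reference, the tabulated $N({\rm HD})/2N({\rm H_2})$ ratios follow directly from the logarithmic column densities; a minimal sketch (the function name and the symmetrized quadrature error propagation are our own simplification of the asymmetric intervals in the table) is:

```python
import math

def hd_to_h2_ratio(log_n_hd, log_n_h2, err_hd=0.0, err_h2=0.0):
    """N(HD) / 2 N(H2) from log10 column densities.

    The (symmetrized) log uncertainties are propagated in quadrature,
    a simplification of the asymmetric intervals quoted in the table.
    """
    log_ratio = log_n_hd - log_n_h2 - math.log10(2.0)
    ratio = 10.0 ** log_ratio
    err = ratio * math.log(10.0) * math.hypot(err_hd, err_h2)
    return ratio, err
```

For the system towards J\,0858$+$1749 ($\log N({\rm HD})=14.87$, $\log N({\rm H_2})=19.72$) this gives $\approx 7.1\times10^{-6}$, consistent with Table~\ref{tab:fit_results}.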
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/HD_H2_data_obs4.pdf}
\caption{Relative abundance of HD and H$_2$ molecules. Green circles, red squares, and yellow triangles correspond to known HD-bearing systems at high redshift (for references, see text), new HD detections at high redshift and upper limits on HD column densities (filled and empty squares; this work), and measurements in the Galaxy \citep{Snow2008}, respectively. The solid blue line shows the D/H isotopic ratio estimated using standard BBN calculations and $\Omega_{\rm b}h^2$ from \citet{Planck2018}.
}
\label{fig:HD_H2}
\end{figure*}
Several processes can in principle affect the $\rm HD/H_2$ relative abundance, such as fractionation and astration of deuterium. The chemical fractionation of D (the process through which D efficiently replaces hydrogen in complex molecules, such as D$_2$, HDO, D$_2$O, NH$_2$D, NHD$_2$, ND$_3$, H$_2$D$^+$, DCO$^+$, etc.) should play a minor role, since complex molecules mainly reside in the cold dense medium with $n_{\rm H} \gtrsim 10^{5}$\,cm$^{-3}$ and $T\lesssim 25$\,K
\citep[see e.g.][and references therein]{Kim2020}, which is not the case here as we probe more diffuse clouds.
Measurements at high $z$ tend to probe lower metallicities than locally
and hence represent gas that has been less processed in stars. Such gas should therefore be less affected by astration of deuterium
than measurements in the Galaxy (i.e. $\sim$10 Gyr later). However, \citet{Dvorkin2016} showed that D/H is never reduced to less than 1/3 of its primordial ratio, i.e. astration cannot explain the observed discrepancy.
On the other hand, a low metallicity affects the $\rm HD$ abundance through the chemical pathway \citep{Liszt2015,Balashev2020}. Indeed, as the metallicity decreases, both the dust abundance and the electron fraction (which in the diffuse ISM mainly comes from carbon) decrease.
This results in a drop of the radiative and grain-surface recombination rates of $\rm H^+$, and hence the ionization fractions of H and D increase (see the set of reactions leading to HD, \ref{eq:HD_equations}). This in turn results in an increase of the $\rm HD$ formation rate through the reaction:
\begin{equation}
\label{H2+D+}
{\rm H_2} + \rm{D^{+}} \longrightarrow \rm{HD} + \rm{H^{+}}.
\end{equation}
The enhanced $\rm HD$ formation rate consequently increases the $\rm HD$ abundance relative to $\rm H_2$. Interestingly, in certain physical conditions, this may lead to a D/HD transition occurring earlier (lower penetration depth) in ISM clouds than the H/H$_2$ transition \citep{Balashev2020}.
The evident observational consequence of this is that $ N({\rm HD})/2 N({\rm H_2}) > {\rm D/H}$, while the opposite case was generally assumed \citep[e.g.][]{LePetit2002}, since naively $\rm HD$ is always significantly less self-shielded in the medium than $\rm H_2$.
In conclusion, the typically lower metallicities at high $z$ can in principle explain the systematic difference in relative $\rm HD/H_2$ abundance between high-$z$ and Milky-Way measurements \citep[see also][]{Liszt2015}.
\section{Physical conditions}
\label{sec:phys_cond}
The relative $\rm HD/H_2$ abundance depends not only on the metallicity, but also on the physical conditions in the medium -- the number density, the UV flux, and the cosmic-ray ionization rate \citep{LePetit2002, Cirkovic2006, Liszt2015}. To describe this dependence, we used the recently published semi-analytic description of the dependence of the HD/H$_2$ ratio on these parameters \citep{Balashev2020}.
This method includes solving the HD balance equation between formation and destruction processes in a plane-parallel, steady-state cloud and permits the determination of how $N({\rm HD})$ -- as a function of $N({\rm H_2})$ -- depends on the physical properties in the cloud, namely cosmic-ray ionization rate per hydrogen atom (CRIR, $\zeta$), UV field intensity (relative to Draine field, \citealt{Draine1978}, $\chi$), number density ($n = n_{\rm \ion{H}{I}} + 2n_{\rm H_2}$), and metallicity ($Z$). We assumed that the D/H isotopic ratio is $2.5\times10^{-5}$ for all systems, i.e., we neglected a possible astration of D, which is typically much smaller \citep{Dvorkin2016} than the uncertainties of this method (see below).
To constrain the distributions of the physical parameters from the measured $N(\rm HD)$ and $N(\rm H_2)$, we followed a Bayesian approach using affine-invariant Markov Chain Monte Carlo (MCMC) sampling \citep{Foreman2013}. Because we do not have access to the total hydrogen $(N(\text{H\,{\sc i}})+N({\rm H_2}))$ in individual $\rm HD$-bearing components, the metallicity for each cloud
was set to the overall metallicity in the corresponding DLA, as provided in Tables~\ref{tab:qso} and \ref{tab:known_qso}.
For the intensity of the UV field as well as the number density, we used priors that have been estimated from the analysis of the relative population of H$_2$ rotational and \ion{C}{I} fine-structure levels \citep{Balashev2019, Klimenko2020}.
This allows us to significantly reduce the constrained probability distribution function of CRIR, for which we used a flat prior on $\log\zeta$.
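Schematically, the sampling proceeds as follows. We actually used the affine-invariant sampler of \citet{Foreman2013}; the random-walk Metropolis sketch below is a simplified stand-in with a toy forward model, and all numerical values (prior widths, model coefficients, step sizes) are illustrative assumptions rather than the calibrated dependence of \citet{Balashev2020}:

```python
import math
import random

# Illustrative priors and forward model -- NOT the calibrated
# semi-analytic HD/H2 dependence used in the paper.
def log_prior(zeta_log, chi_log, n_log):
    # Flat prior on log(zeta) over a broad range; Gaussian priors on
    # log(chi) and log(n) from the H2 / CI excitation analysis (toy widths).
    if not -19.0 < zeta_log < -14.0:
        return -math.inf
    return (-0.5 * ((chi_log - 0.1) / 0.2) ** 2
            - 0.5 * ((n_log - 1.8) / 0.1) ** 2)

def log_likelihood(theta, obs, err):
    zeta_log, chi_log, n_log = theta
    # Toy linearized model: log[N(HD)/2N(H2)] grows with zeta and
    # decreases with chi and n (all coefficients are assumptions).
    model = -4.0 + 0.5 * (zeta_log + 17.0) - 0.3 * chi_log - 0.2 * n_log
    return -0.5 * ((model - obs) / err) ** 2

def metropolis(obs, err, n_steps=20000, seed=1):
    """Random-walk Metropolis sampler over (log zeta, log chi, log n)."""
    rng = random.Random(seed)
    theta = [-17.0, 0.1, 1.8]           # start inside the priors
    lp = log_prior(*theta) + log_likelihood(theta, obs, err)
    chain = []
    for _ in range(n_steps):
        prop = [t + rng.gauss(0.0, s)
                for t, s in zip(theta, (0.3, 0.1, 0.05))]
        lp_new = log_prior(*prop) + log_likelihood(prop, obs, err)
        # Standard Metropolis acceptance rule.
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            theta, lp = prop, lp_new
        chain.append(list(theta))
    return chain
```

Marginalizing the resulting chain over $\chi$ and $n$ yields the posterior on $\log\zeta$, which is the quantity reported in Table~\ref{tab:phys_params}.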
An example of the constrained 1D and 2D-posterior probability distribution functions of the parameters for the system towards J\,0858$+$1749 is given in Fig.~\ref{fig:J0858_MCMC}.
The derived physical conditions for the sample are summarised in Table~\ref{tab:phys_params},
and plots of the marginalized posterior distribution functions for each component are shown in Figures~\ref{fig: MCMC_results1} -- \ref{fig: MCMC_results5}.
We do not report results for J\,1513$+$0352 or J\,1311$+$2225 (component 3), for which we obtained only very loose constraints on $\zeta$ owing to the large uncertainties on the HD column densities.
Note that the constraints on the number density, $n$, and UV flux, $\chi$, typically match the priors used.
\begin{table}
\caption{Constraints on physical conditions.}
\label{tab:phys_params}
\centering
\begin{tabular}{lcccc}
\hline
Quasar & $\log \zeta$ & $\log\chi$ & $\log n$ & Ref.$^{\dagger}$ \\
\hline
J\,0000$+$0048 & $\gtrsim -16.3$ & $0.0^{+0.3}_{-0.3}$ & $1.2^{+0.5}_{-0.4}$ & (2) \\
Q\,0528$-$2505 & $-14.9^{+0.2}_{-0.1}$ & $1.1^{+0.1}_{-0.1}$ & $2.4^{+0.1}_{-0.1}$ & (3) \\
J\,0812$+$3208, c1 & $-16.6^{+1.4}_{-0.5}$ & $-0.1^{+0.2}_{-0.1}$ & $2.4^{+0.2}_{-0.2}$ & (2) \\
J\,0812$+$3208, c2 & $\lesssim -19.2$ & $-0.8^{+0.2}_{-0.2}$ & $0.8^{+0.3}_{-0.3}$ & (2) \\
J\,0843$+$0221 & $-16.5^{+0.9}_{-1.1}$ & $2.0^{+0.1}_{-0.1}$ & $1.9^{+0.1}_{-0.1}$ & (2) \\
J\,0858$+$1749 & $-17.3^{+0.1}_{-0.1}$ & $0.1^{+0.2}_{-0.2}$ & $1.8^{+0.1}_{-0.1}$ & (1) \\
J\,1232$+$0815 & $-18.3^{+0.3}_{-0.3}$ & $-0.4^{+0.2}_{-0.2}$ & $1.6^{+0.1}_{-0.1}$ & (2) \\
J\,1237$+$0647 & $-14.8^{+0.2}_{-0.2}$ & $1.1^{+0.1}_{-0.1}$ & $1.3^{+0.1}_{-0.1}$ & (2) \\
J\,1311$+$2225, c2 & $-16.2^{+0.1}_{-0.1}$ & $1.1^{+0.1}_{-0.1}$ & $1.7^{+0.2}_{-0.2}$ & (4) \\
J\,1311$+$2225, c3 & $-$ & $1.0^{+0.1}_{-0.1}$ & $1.9^{+0.1}_{-0.1}$ & (4) \\
J\,1311$+$2225, c4 & $-15.1^{+0.2}_{-0.3}$ & $0.6^{+0.2}_{-0.2}$ & $2.1^{+0.3}_{-0.2}$ & (4) \\
J\,1439$+$1118 & $-15.4^{+0.3}_{-0.2}$ & $0.8^{+0.2}_{-0.2}$ & $0.9^{+0.2}_{-0.2}$ & (2) \\
J\,1513$+$0352 & $-$ & $0.6^{+0.3}_{-0.2}$ & $1.9^{+0.1}_{-0.2}$ & (2) \\
J\,2100$-$0641 & $-17.2^{+0.3}_{-0.2}$ & $-0.3^{+0.3}_{-0.3}$ & $1.4^{+0.3}_{-0.3}$ & (2) \\
J\,2340$-$0053, c4 & $-16.4^{+0.7}_{-0.7}$ & $-0.1^{+0.2}_{-0.3}$ & $0.6^{+0.3}_{-0.4}$ & (4) \\
J\,2340$-$0053, c5 & $-14.8^{+0.2}_{-0.2}$ & $0.6^{+0.1}_{-0.1}$ & $1.2^{+0.1}_{-0.1}$ & (4) \\
J\,2340$-$0053, c7 & $-15.4^{+0.8}_{-1.0}$ & $-0.2^{+0.2}_{-0.2}$ & $0.8^{+0.4}_{-0.4}$ & (4) \\
J\,2347$+$0051 & $-17.6^{+0.6}_{-0.5}$ & $-0.4^{+0.4}_{-0.4}$ & $2.8^{+0.1}_{-0.1}$ & (1) \\
\hline
\end{tabular}
\begin{tablenotes}
\item $\dagger$ References used to obtain priors on $\chi$ and $n$: (1) \citet{Balashev2019}, (2) \citet{Klimenko2020}, (3) \citet{Balashev2020b}, (4) this work.
\end{tablenotes}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/MCMC/J0858_MCMC2.pdf}
\caption{Posterior probability functions for CRIR ($\zeta$), UV field intensity ($\chi$), and number density ($n$) obtained from HD/H$_2$ fitting for the system at $z = 2.625241$ towards J\,0858$+$1749. The diagonal panels show 1D marginalized posterior function, non-diagonal show 2D posterior functions, where the dark- and light-blue regions correspond to 1$\sigma$ and 2$\sigma$ confidence levels, respectively.
\label{fig:J0858_MCMC}
}
\end{figure}
\section{Discussion}
\label{sec:discussion}
We find the CRIR to vary significantly, from $\zeta \sim 10^{-18}$ to $10^{-15}$\,s$^{-1}$, possibly reflecting the wide range of environments probed by our sample.
Indeed, DLA systems are selected owing to their absorption cross-section and likely probe
the overall
galaxy population, with a high fraction of low-mass galaxies at high redshift \citep[e.g.,][]{Cen2012}, in which the
star-formation and cosmic-ray ionization rates are expected to vary significantly.
Even though the $\rm HD/H_2$ absorption systems in our sample do not necessarily probe the immediate environments of star formation, as we will show below, the measured high CRIR values correlate with the relatively high UV fluxes that
reach up to 10 times the Draine field.
We find that the range of CRIR estimates is in line with other recent measurements
both at high redshift \citep{Indriolo2018, Muller2016, Shaw2016} and in nearby galaxies \citep{vanderTak2016, Gonzalez_Alfonso2013, Gonzalez_Alfonso2018}, which also show quite large dispersion. This dispersion can be partly due to the use of various methods, or connected to a real physical dispersion of the CRIR. Indeed, measurements in the Galaxy \citep[for a review see][and references therein]{Padovani2020} and in the lensed system at $z\sim 0.89$ towards PKS\,1830$-$211 \citep{Muller2016} show that this parameter can vary significantly between different sightlines even inside a given galaxy, mostly depending on the proximity to the CR accelerator. \citet{LePetit2016} also present evidence of a CRIR enhancement in the center of the Galaxy relative to the disk. Finally, we note that comparing previous data with our measurements is likely not straightforward, since different methods have been used, which probe various environments. Indeed, the aforementioned and most recent constraints on the CRIR in local and high-$z$ galaxies have been based on oxygen-bearing species ($\rm OH^+$ and $\rm H_2O^+$). Since these have been analysed in quite luminous starburst galaxies with roughly solar metallicity and high star-formation rates, they may sample rather high CRIR values compared to the overall galaxy population.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/cr_H2_galaxy_data2.pdf}
\caption{Estimated CRIR as a function of H$_2$ column density. Here, red squares are values obtained in this work, smaller squares are values obtained for other galaxies (blue - \citet{Muller2016}; violet - \citet{Shaw2016}; pink - \citet{Indriolo2018}), triangles are values measured in the Galaxy (yellow - \citet{Indriolo2007}; light green are protostellar envelopes \citep[for references, see table 6 from][]{Padovani2009}; blue - \citet{Caselli1998}; cyan - \citet{Shaw2008}; brown - \citet{Maret2007}; violet - \citet{Indriolo2012b}; dark green - \citet{Indriolo2015}).
\label{fig:cr_H2_data}
}
\end{figure*}
Figure~\ref{fig:cr_H2_data} compares our measurements with literature values in the [$\zeta$, $\log N(\rm H_2)$] plane.
An attenuation of the cosmic-ray ionization rate with increasing column density is theoretically expected \citep{Padovani2009}.
However, we do not see strong evidence for a correlation between $\zeta$ and $N(\rm H_2)$ in our sample, probably because of the large dispersion (an unweighted Pearson test gives a correlation coefficient $r = -0.49$, with a p-value of 0.08).
In addition, we probe mostly diffuse clouds with low cloud depths (except J\,0843$+$0221, which will be discussed later), which may be insufficient to attenuate the cosmic-ray flux. Additionally, the observed clouds should have quite large column densities of associated H\,{\sc i} ($N(\rm H\,I)\gtrsim 10^{20}\,\rm cm^{-2}$), which are hard to constrain observationally but are also able to attenuate the CR flux, and may therefore introduce an additional uncertainty into our calculations.
Previous measurements at high redshift and in the Galaxy show that in the case of a denser medium (e.g., dense cores, blue triangles, \citealt{Caselli1998}, and protostellar envelopes, light green triangles; for references see \citealt{Padovani2009}), the cosmic-ray ionization rates tend to be slightly lower.
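The correlation statistics quoted in this section are plain unweighted Pearson coefficients; for completeness, a self-contained sketch (equivalent to \texttt{scipy.stats.pearsonr} without the p-value computation) is:

```python
def pearson_r(x, y):
    """Unweighted Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Covariance and variances about the sample means.
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

The p-values quoted alongside $r$ come from the standard $t$-statistic for the null hypothesis of zero correlation.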
\begin{figure*}
\begin{minipage}{0.33\textwidth}
\center{\includegraphics[width=1\linewidth]{figures/cr_uv_Z.pdf}}
\end{minipage}
\hfill
\begin{minipage}{0.33\textwidth}
\center{\includegraphics[width=1\linewidth]{figures/cr_n_Z.pdf}}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\center{\includegraphics[width=\linewidth]{figures/cr_Z_uv.pdf}}
\end{minipage}
\caption{Estimated cosmic-ray ionization rate, $\zeta$, as a function of UV field strength (left panel), number density (middle panel), and metallicity (right panel), using the $\rm HD/H_2$ relative abundance measured in DLAs at high redshift. The points are color-coded by metallicity (left and middle panels) and UV field strength (right panel), with color bars provided within each panel.}
\label{fig:cr_uv_n}
\end{figure*}
In Fig.~\ref{fig:cr_uv_n}, we investigate the dependence of $\zeta$ on the intensity of the UV field, the number density, and the metallicity of the medium. The CRIR is found to correlate strongly with the UV field intensity, while it does not correlate with the number density and only slightly correlates with the metallicity.
Removing a possible outlier at $\log \chi\sim 2$ (corresponding to J\,0843$+$0221) and the lower and upper limits from J\,0000$+$0048 and J\,0812$+$3208 (component 2), we find a Pearson correlation coefficient between $\zeta$ and $\chi$ of $r=0.75$ (with a p-value of 0.002 for rejecting the null hypothesis that $\zeta$ and $\chi$ do not correlate).
The outlier in this plot (J\,0843$+$0221) may be explained either by its very high H$_2$ column density, implying a suppression of the CRIR inside the cloud, or by its exceptionally low metallicity.
Indeed, in our formalism,
we assume CRIR, $\zeta$, to be constant throughout the cloud. However, it is expected that CRIR can be attenuated in the cloud at column densities $N(\rm H) \gtrsim 10^{20}\,\rm cm^{-2}$ \citep[see, e.g.,][]{Silsbee2019}.
Therefore, if the CRIR is attenuated inside the cloud, then the derived value of $\zeta$ is lower than the incident value.
That means that in principle, to draw accurate physical conclusions, cosmic-ray propagation effects at high column densities should be taken into account properly.
This would also require knowledge of the magnetic-field configuration in DLAs, which is not well probed by available observations.
Since the relative $\rm HD/H_2$ abundance depends in opposite ways on
$\zeta$ and $\chi$ (i.e., $N({\rm HD})/N({\rm H_2})$ increases when $\zeta$ increases but also when $\chi$ decreases), the $\zeta$-$\chi$ correlation could be artificially introduced by issues in the measurements themselves.
However, the posterior distributions for individual systems (e.g., Fig.~\ref{fig:J0858_MCMC}) indicate that the measurements of $\zeta$ may correlate more strongly with $n$ than with $\chi$ (if at all).
We also note that the HD/H$_2$ ratio depends on the number density and the metallicity (with a similar sensitivity to variations of $n$ as to $\chi$, and an even higher sensitivity to the metallicity; see \citealt{Balashev2020}); it is therefore not evident why we should see a strong correlation between $\zeta$ and $\chi$, but a lack of correlation between $\zeta$ and $n$. This motivates us to assume that the $\zeta$-$\chi$ correlation has a real physical origin.
Indeed, we expect cosmic rays and UV radiation to share a common star-formation origin.
Furthermore, one can see that the slope of the $\log \zeta - \log \chi$ correlation is close to 2, i.e., the CRIR increases quadratically with the strength of the UV field. This may also have a reasonable explanation: the low-energy cosmic rays ($\lesssim 100$\,MeV, which mostly determine the ionization rate) may have complex propagation behaviour, related to diffusion in the ISM magnetic fields. Taking into account the energy losses \citep[see the loss function for cosmic rays in][]{Padovani2018}, this may result in a local enhancement of the $\zeta / \chi$ ratio near the production sites and hence a super-linear dependence of $\zeta$ on $\chi$, since UV photons escape much more easily from the star-forming regions.
\begin{figure}
\center{\includegraphics[width=1.0\linewidth]{figures/sqrt_cr_uv_H2.pdf}}
\caption{$\sqrt{\zeta}/\chi$ as a function of $\rm H_2$ column density.
The points are color-coded by metallicity using the color bar shown at the bottom.
\label{fig:sqrt_cr_uv_H2}
}
\end{figure}
Assuming that the $\zeta \propto \chi^2$ dependence is real, we plot $\sqrt{\zeta}/\chi$ as a function of $\log N(\rm H_2)$ in Fig.~\ref{fig:sqrt_cr_uv_H2}. One can see that for the main bulk of the systems, with $\log N(\rm H_2)$ in the range 18 -- 20, the dispersion is indeed significantly reduced, to within 1~dex, in comparison with the 3-dex dispersion of $\zeta$ in Figure~\ref{fig:cr_H2_data} (at the same time, we have not found a significant correlation of $\sqrt{\zeta}/\chi$ with either $Z$ or $n$).
As already discussed, the single outlier value of $\sqrt{\zeta}/\chi$ with the highest $\log N(\rm H_2)$ (corresponding to J\,0843$+$0221) can be related to cosmic-ray propagation effects at high column densities, or to its exceptionally low metallicity. In turn, at the lower column-density end, the absorption systems probe very diffuse gas, where the $\rm H/H_2$ and $\rm D/HD$ transition model that we used should be applied with caution. The most important issue, in our opinion, is that the $\rm HD/H_2$ ratio primarily depends on the hydrogen ionization fraction of the medium, while the CRIR was derived assuming that the ionization state of the diffuse ISM is mainly determined by the CRIR and by recombination on dust grains \citep{Balashev2020}. In a very diffuse medium, one can expect the hydrogen ionization fraction to be higher owing to mixing with ionized and/or warm neutral media, the latter being mostly atomic and hence having higher ionization fractions. All this can effectively mimic the increase of the $\sqrt{\zeta}/\chi$ ratio at lower $\log N(\rm H_2)$ values that one can notice in Fig.~\ref{fig:sqrt_cr_uv_H2}. Additionally, the recombination rate coefficient can have a non-linear dependence on the metallicity (in our model, we assume it is linear), since it strongly depends on the properties of the dust, which we cannot strictly constrain as a function of metallicity. Indeed, a possible correlation of $\zeta$ with $Z$ (see the right panel of Fig.~\ref{fig:cr_uv_n}; excluding J\,0843$+$0221 as a possible outlier, we obtain a correlation coefficient of 0.67 with a p-value of 0.012)
can be caused by this non-linear behaviour. We therefore caution against using a simple homogeneous model to estimate the CRIR at low $\rm H_2$ column densities. Alternatively, one can propose a physical explanation for the correlation of $\zeta$ with $Z$: higher-metallicity systems probe more massive galaxies \citep[following the well-known mass--metallicity relations; see, e.g.,][]{Sanders2015}, in which star formation is on average expected to be higher than in low-mass galaxies, so that the cosmic-ray flux (and therefore the CRIR) is expected to be enhanced.
\section{Conclusion}
\label{sec:conclusion}
We have presented new measurements of HD molecules in high-$z$ absorption systems found in quasar spectra. We looked for HD in all known strong H$_2$-bearing systems and we detected HD molecules in four DLAs and placed upper limits on the HD column density for another twelve DLAs.
With this study, we have thus significantly increased the sample of HD-bearing DLAs. We find that HD/H$_2$ relative abundances show a large dispersion around the D/H isotopic ratio.
This, together with previously-known inputs from the modelling of ISM chemistry, indicates that HD/H$_2$ ratios cannot be used to constrain the primordial D/H value.
In turn, observed HD/H$_2$ ratios can be used to estimate the gas physical conditions, in particular the cosmic-ray ionization rate (CRIR; \citealt{Balashev2020}). We find that the CRIR varies from a few $10^{-18}\,{\rm s^{-1}}$ to a few $10^{-15}\,{\rm s^{-1}}$ in our sample of high-redshift absorbers, which likely reflects the
wide range of environments and physical conditions probed by DLAs. This range and dispersion are also in line with previous measurements obtained with various methods in the Galaxy as well as in other galaxies.
We find that the CRIR is highly correlated with the UV field intensity in our sample, while it does not correlate with number density and correlates only slightly with metallicity.
These correlations suggest a physical connection between the sources of cosmic rays and those of UV radiation.
Moreover, we find a quadratic dependence of $\zeta$ on $\chi$ in our sample, which is probably due to transport effects of low-energy cosmic rays.
We caution, however, that these correlations may be artificial because of the dependence of $N(\rm HD)/N(\rm H_2)$ on a combination of $\zeta$, $\chi$, and $Z$.
Additionally, most of the methods currently used to determine the CRIR involve a detailed chemical modeling of the
regions related to the diffuse and translucent ISM. As the dominant species in these regions, which determines the results of the chemical network, H$_2$ can be subject to strong systematics related to time-dependent chemistry, since the formation timescale of H$_2$ can be relatively long compared with cloud lifetimes, and steady-state models may not be appropriate \citep[e.g.,][]{Balashev2010}.
\section*{Acknowledgements}
This work was supported by RSF grant 18-12-00301.
We acknowledge support from the French {\sl Centre National de la Recherche Scientifique} through the Russian-French collaborative research program
``The diffuse interstellar medium of high redshift galaxies'' and from the
French {\sl Agence Nationale de la Recherche} under ANR grant 17-CE31-0011-01, project ``HIH2'' (PI: Noterdaeme).
\section*{Data availability}
The results of this paper are based on open data retrieved from the ESO and Keck telescope archives. These data can be shared upon reasonable request to the authors.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:intro}
The present work uses a Deep Reinforcement Learning (RL) approach to planning the operation of a multi-echelon supply chain with uncertain seasonal demands and lead times.
We consider the case in which the decisions of the whole supply chain are based on ultimate customer demands, and so, there is a central decision-maker and the stages collaborate to minimize total costs.
The supply chain considered is a four-echelon chain composed of two suppliers, two manufacturers (or factories), two wholesalers, and two retailers.
Suppliers produce and provide raw materials that are processed by manufacturers to generate finished products.
Products are distributed by manufacturers to wholesalers, and wholesalers, in turn, send products to retailers.
Retailers are responsible for meeting uncertain seasonal customer demands.
Every node of the chain has a capacitated local stock; suppliers and manufacturers store raw material, while wholesalers and retailers store finished products.
There are stochastic delays (lead times) to produce raw material at suppliers and to transport material from one node to another.
There are also maximum capacities regarding production in the suppliers and processing in the factories.
The objective is to operate the entire chain, within a given planning horizon, to meet customer demands and minimize total operating costs.
Costs are associated with the production and processing of raw materials and with the stock and transport of raw materials and products.
There is also a penalization cost when customer demand is not met.
As customer demands and lead times are uncertain, it is not trivial to define the best policy that can meet the seasonal customer demands and, at the same time, minimize the total operating cost.
Figure \ref{fig:supplychain} illustrates the supply chain scenario addressed, and in Section \ref{subsec:problem_definition}, we present a detailed problem definition.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Supply Chain addressed: there are four echelons (suppliers, factories, wholesalers, and retailers) with two nodes per echelon.
All nodes have local capacitated stocks, and suppliers and factories are also capacitated.
There are stochastic lead times to produce raw material at the suppliers and to transport material from one node to another.
There are uncertain seasonal customer demands to be met by the retailers.}
\label{fig:supplychain}
\end{figure*}
The problem addressed can be classified as a multi-period production planning and distribution problem under uncertainty in the context of Operations Research (OR).
Uncertainty in the parameters of a model is very common in real-world OR problems, and usually, it is compensated by safety margins and flexibility buffers; but this generates unused excess capacities and stocks \citep{seelenmeyer2020}.
Machine Learning (ML) is an alternative approach to address this issue and has been used to solve problems in many fields. Recent advances, especially with the use of deep neural networks, have leveraged and extended its use.
RL is a sub-area of ML designed to solve sequential decision-making problems under uncertainty.
In this context, the problem can be formulated as a Markov Decision Process (MDP), and as the resulting model cannot be solved numerically due to the high dimension of the state space \citep{Laumanns2017}, Deep RL is an appropriate tool to deal with the problem.
The present work uses a Deep RL approach, namely PPO2\footnote{Earlier works used the term PPO2 to reference the latest version of the algorithm, designed to run in parallel using GPU environments, as is the case of our work. But, recently, the term PPO has been more common in the literature regardless of the version of the algorithm.} (Proximal Policy Optimization), to solve a multi-echelon supply chain problem.
PPO2 was chosen because it achieves high performance in many RL tasks with high-dimensional action spaces \citep{OpenAIBaselinesPPO}.
Some works in the literature use Deep RL on related problems, but they usually deal with smaller supply chain networks, with two-echelon or serial supply chains \citep{Oroojlooyjadid2019,Kemmer2018,Hutse2019,Gijsbrechts2019,Peng2019}.
Considering serial supply chains, the dimensionality of the action space is close to the number of echelons.
But regarding non-serial supply chains, the dimensionality is higher since each node needs to send material to more than one node of the next echelon.
A serial four-echelon supply chain, for instance, has three transport links (supplier-manufacturer, manufacturer-wholesaler, wholesaler-retailer),
while a non-serial four-echelon supply chain with two nodes per echelon has 12 possible transport links.
Therefore, considering the size of the supply chain network used in this work, it is a challenge to solve the presented problem using RL approaches due to the dimensionality of the action space.
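The link counts above can be verified with a quick sketch (the function name is ours; only the echelon and node counts are taken from the text):

```python
# Number of transport links between consecutive echelons.
# In a non-serial chain, every node ships to every node of the next
# echelon, so each pair of consecutive echelons contributes
# nodes_per_echelon**2 links.
def transport_links(nodes_per_echelon, n_echelons):
    return (n_echelons - 1) * nodes_per_echelon ** 2

print(transport_links(1, 4))  # serial four-echelon chain: 3 links
print(transport_links(2, 4))  # two nodes per echelon: 12 links
```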
\cite{Perez2021} also consider a non-serial four-echelon supply chain, but they do not consider capacitated stocks, seasonal demands, or stochastic lead times.
According to \cite{morales2020grokking}, both Deep RL and OR study decision-making under uncertainty, but problems in the OR field usually have much larger action spaces than those generally treated by Deep RL approaches.
Therefore, the use of a Deep RL technique on an OR problem whose action space is continuous and has many dimensions is a contribution of the present research.
Besides that, to the best of our knowledge, this is the first work that handles the problem with stochastic lead times using a Deep RL approach.
In Section \ref{subsec:relatedworks}, we present the related works, classifying them by the type of approach (production planning or order-based) and giving a more detailed explanation about the contributions of the present work.
In previous work \citep{AlvesICCL}, we have used PPO2 to solve the problem considering constant lead times and uncertain regular (nonseasonal) demands in a case study scenario.
In this paper, we have extended our work to consider uncertain seasonal demands, stochastic lead times, and processing capacities.
We have also deepened the experimental analysis, considering now 17 different scenarios to better assess PPO2 suitability to solve the problem.
The MDP formulation and Non-linear Programming (NLP) model from our previous work were updated to consider uncertain seasonal demands, stochastic lead times, and processing capacities.
As the non-linearity of the NLP model comes from the stochastic parameters (demands and lead times), the problem can also be seen as a Linear Program with uncertain parameters.
Therefore, we solve a version of the problem considering forecast (expected) demands and average lead times (a deterministic LP problem).
This solution is encoded in an LP-based agent used as a baseline.
In the experiments, after tuning the hyperparameters of PPO2, several training runs with different random seeds are executed for each scenario.
Studied scenarios consider regular and seasonal demands (with different levels of uncertainty), constant and uncertain lead times, and different stock costs.
The results show that PPO2 can achieve good results in all proposed scenarios.
The remainder of this article is structured as follows.
Section \ref{sec:DeepRL} presents key aspects of Deep RL and the PPO2 algorithm.
Section \ref{sec:problem} presents the problem definition, related works, and problem modeling (MDP formulation and NLP model).
The methodology is presented in Section \ref{sec:methodology}, including decisions on how to apply PPO2 to solve the problem, the LP-based agent used as a baseline, and the experimental setup. Experimental results are reported and discussed in Section \ref{sec:experiments}.
A summary of the results and proposed future research directions are finally given in Section \ref{sec:conclusions}.
\section{Deep Reinforcement Learning}
\label{sec:DeepRL}
The concepts presented in this section are mainly based on \cite{sutton2018reinforcement} and \cite{morales2020grokking}.
Deep RL agents can learn to solve sequential decision-making problems under uncertainty formulated as MDPs solely through experience.
The learning process occurs through trial and error.
There are no human-provided labels, and there is no need to collect or design the collection of data.
Many of the Deep RL techniques are based on an iterative process between the agent and the environment.
At each cycle, the agent observes the environment, takes an action, and receives a reward and a new state (or observation); this tuple of data is called an experience.
The purpose of the agent is to maximize the cumulative reward.
In the case of an episodic task, the idea is to maximize the total reward until the end of the horizon.
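This interaction cycle and episodic return can be sketched with a generic loop (hypothetical, Gym-style interfaces; the stub classes below only make the sketch runnable and are not from the paper):

```python
# Generic agent-environment interaction cycle for an episodic task.
class StubEnv:
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return 0.0, 1.0, self.t >= 5   # new state, reward, done flag

class StubAgent:
    def act(self, state):
        return 0.0
    def observe(self, state, reward):
        pass                            # store the experience for learning

def run_episode(env, agent, horizon):
    state = env.reset()
    total = 0.0
    for _ in range(horizon):
        action = agent.act(state)               # agent observes and acts
        state, reward, done = env.step(action)  # reward and new state
        agent.observe(state, reward)            # one experience per cycle
        total += reward                         # cumulative (episodic) reward
        if done:
            break
    return total

print(run_episode(StubEnv(), StubAgent(), horizon=24))
```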
The Deep RL agent needs to deal with sequential and evaluative feedback.
Sequential because the action taken in a time step may have delayed consequences.
Evaluative, as the reward is not supervised and, therefore, the agent needs to explore the search space.
The appropriate balance between the gathering of information with the exploitation of current information is known as the exploration vs. exploitation trade-off.
Another important feature of the Deep RL agents is that feedback is sampled.
The reward function is not known by the agent and the state and action spaces are usually large (or even infinite), so the agent needs to generalize from sampled feedback.
The \emph{deep} term in Deep RL refers to the use of artificial neural networks (ANN) with multiple hidden layers.
There are other ways to approximate functions but ANNs often have better performance.
RL is different from Supervised Learning because, in Supervised Learning, there is a label that specifies the correct action the system should take in a given situation, while in RL the reward is feedback for the agent but does not tell it what the correct action would be.
RL is also not Unsupervised Learning, since it tries to maximize a reward signal instead of trying to find a hidden structure.
Many of the RL techniques are based on an iterative process that alternates between policy evaluation and policy improvement (a pattern also called Generalized Policy Iteration).
The policy evaluation phase calculates the values of a given policy, solving what is called the prediction problem, while policy improvement uses estimates of the current policy to find a new (better) policy.
Alternating between policy evaluation and policy improvement solves the control problem, that is, progressively generates better policies towards optimality.
RL techniques can be based on values or policies.
In the first case, they learn action (or state) values and use those values to choose a new action.
If based on policies it means that they learn a parameterized policy that allows them to choose actions without the need to consult value estimates.
The basic idea of tabular value-based methods is to use a value function that represents the expected return if the agent follows the current policy $\pi$ from the current state $s$, after taking an action $a$.
With this approach, we can identify the best action to be taken in each state and, during the learning process, update these values to continuously improve the policy.
This kind of approach uses exhaustive feedback, meaning that the agent needs to have access to all possible samples.
But many problems have high-dimensional state and action spaces (or they could be continuous spaces), and, in such cases, it is not possible to deal with a table for the value function (state or action-values).
The problem is not only with the time and storage constraints, but mainly because many states will probably never be visited during the learning process.
Thus, it is necessary to use methods that deal with sampled feedback and that can generalize from similar states.
One possible approach is to use function approximation to represent the value function instead of tables.
In this approach, when the weights of the function are updated for a state-action pair, the update also impacts the values for other states-action pairs, enabling the desired generalization.
Many of the approximated methods follow the same basic iterative mechanism but use a function approximator, such as a neural network (e.g., DQN \citep{mnih2013playing} is an approximated approach based on Q-Learning).
But this kind of approach is very limited regarding continuous action spaces, because it needs to compute the maximum value over the actions.
Another approach to dealing with high-dimensional, and especially continuous, action spaces is that of policy-based algorithms.
These techniques try to find the best policy directly, instead of learning the value function to derive the best policy.
They have the advantage of being able to learn stochastic policies, and thus exploration is already part of the learned function.
A drawback of policy-based methods is that they can be less sample efficient, since it is harder to use off-policy strategies (i.e., it is difficult to reuse a batch of experiences if it was not generated by the policy being learned).
Policy-gradient methods are a type of policy-based algorithms that solve an optimization problem using the gradient of the performance function of a parameterized policy.
As they follow the gradient with respect to stochastic policies, the actions change more smoothly, which leads to better convergence properties than value-based methods.
Some of the policy-gradient methods, called actor-critic methods, approximate not only the policy but also the value function.
The actor learns the policy, and the critic learns the value function to evaluate the policy learned by the actor.
In this approach, the value function is used as a baseline and can reduce the variance of the policy gradient objective, and thus often accelerate the entire learning process.
The most powerful actor-critic methods use deep neural networks for both the actor and the critic, but it is not easy to obtain good results with these techniques.
In general, they are very sensitive to hyperparameters and sample inefficient.
Proximal Policy Optimization (PPO) was proposed by \cite{schulman2017proximal} to find a balance between implementation, parameterization, and sample complexity (and PPO2 is the latest version, designed to run in parallel using GPU environments).
PPO has similar underlying architecture as previous actor-critic algorithms but innovates with two main contributions.
The first one is a surrogate objective that enables the use of multiple gradient steps on the same mini-batch of experiences.
The second one is the limitation of step size updates.
The goal is to update the policy with a new one that is not so different from the current one.
This has already been proposed in the TRPO method \citep{schulman2017trust}, but while TRPO uses a constrained quadratic objective function (and, thus, it is necessary to calculate second-order derivatives; being hard to parameterize), PPO uses what the authors call a clipped objective function that needs only first-order derivatives; and, at the same time, keeps the sample efficiency and reliable performance of TRPO.
This conservative approach to policy updates prevents performance collapse and enables the reuse of mini-batches of experiences.
Thus, the method is more sample efficient and has lower variance, reaching better performance on many problems.
Before presenting the PPO algorithm in detail, it is interesting to have a big picture of how policy-based Deep RL methods that learn stochastic policies can handle a problem with continuous action spaces with several dimensions.
Figure~\ref{fig:pg_mechanism} presents the schematic idea of a simple policy-based method.
The learned policy $\pi(\theta)$ is parameterized by a (deep) ANN.
The state $s$ is represented by a vector of continuous values, and ANN's input layer has one node per state value.
The ANN's output layer consists of one node per each action dimension and provides mean values $\mu_{\theta}(s)$ for the actions.
Besides the ANN, the agent has a vector $\sigma_{\theta}$ with standard deviation values for each action value.
The action $a(s)$, returned by the agent, is given by $a(s) = \mu_{\theta}(s) + \sigma_{\theta} \odot z$, where $z \sim \mathcal{N}(0, 1)$ and $\odot$ represents the elementwise product of two vectors \citep{SpinningUp2018}.
Regarding the vector $\sigma_{\theta}$, it can be used to control the exploration of the algorithm.
At the beginning of the training, the vector's values are greater allowing more exploration, and throughout the learning process, the values are slowly decreased to better exploit the agent knowledge.
Another possible approach is to have a separated ANN to learn the standard deviation values.
In this case, the vector is given by $\sigma_{\theta}(s)$, as it depends on the states.
The ANN's weights ($\theta$) can be updated using Stochastic Gradient methods, using the reward received by the agent.
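The mechanism just described can be sketched as follows (a minimal sketch, not the paper's implementation: a single linear layer stands in for the deep ANN, and the dimensions are arbitrary):

```python
import numpy as np

# Sketch of Gaussian action sampling for a continuous action space:
# a(s) = mu_theta(s) + sigma_theta (elementwise-) * z, z ~ N(0, 1).
rng = np.random.default_rng(0)

state_dim, action_dim = 6, 3
W = rng.normal(scale=0.1, size=(action_dim, state_dim))  # stands in for theta
log_sigma = np.zeros(action_dim)   # state-independent log std-dev vector

def act(s):
    mu = W @ s                     # mean action values, mu_theta(s)
    sigma = np.exp(log_sigma)      # standard deviations (exploration control)
    z = rng.standard_normal(action_dim)
    return mu + sigma * z          # one continuous value per action dimension

a = act(np.ones(state_dim))
print(a.shape)
```

Decreasing `log_sigma` over training reduces exploration, as described above.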
PPO is more complex than this schematic idea, as illustrated in Figure~\ref{fig:ppo}.
First of all, the algorithm uses several workers that collect a bunch of trajectories (experiences) with the current policy and groups them in a batch.
For $k$ epochs, this batch is randomly split into mini-batches, that, in turn, are used to update the weights of the actor and critic's ANNs.
After the update, the process is repeated using the new policy.
These steps are executed until a stopping criterion is reached.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{
Outline of a basic policy-based Deep RL method that learns stochastic policies for problems with continuous action spaces.
The policy is parameterized by an ANN that receives the state values in the input layer, and outputs mean values for each action value.
The returned action values are sampled from a Gaussian distribution using such mean values and standard deviations from a vector used to control the exploration of the algorithm.
}
\label{fig:pg_mechanism}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig3.pdf}
\caption{
PPO algorithm's outline.
On each step, agent workers collect trajectories using the current policy.
These data are grouped in a batch of experiences.
The batch is split into mini-batches that are used to update the actor and critic's ANNs, and this is repeated $k$ times.
The process is repeated with the updated policy, and this is done until the end of the training.
}
\label{fig:ppo}
\end{figure*}
To present the algorithm in more detail, the PPO's pseudo-code is shown in Algorithm \ref{alg:PPO}.
In line 1 policy parameters, $\theta$, and value-function parameters, $\phi$, are initialized (randomly, e.g.).
The for-loop initialized in line 2 refers to the total number of time steps the algorithm will be running and, thus, is a parameter to be defined for the problem.
The for-loop shown in lines 3-5 is responsible for collecting the buffer of experiences (trajectories).
Each actor (potentially in parallel) runs the current policy for $\tau$ time steps and collects a set of experiences (line 4), then, in line 5 the advantage function is calculated for each experience.
PPO uses a truncated version of generalized advantage estimation (GAE), given by $\hat{A}_t = \delta_t + (\gamma \lambda)\delta_{t+1} + ... + (\gamma \lambda)^{\tau-t+1}\delta_{\tau-1}$, where $t$ is the time step in range $[0,\tau]$; $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$; $V$ is the value-function; and $\gamma$ and $\lambda$ are parameters of the algorithm.
The advantage function indicates how much better it is to take a given action instead of following the current policy, i.e., the advantage of choosing that action over the default one.
The buffer of experiences collected by all actors is joined in a batch in line 6.
Lines 7-11 perform multiple gradient steps with the collected experiences.
The batch of experiences is randomly split in mini-batches (line 8) and each mini-batch of experiences is used to update ANN weights for both policy (line 10) and value-function (line 11).
This process of randomly splitting the buffer and updating ANNs is repeated $K$ times.
\begin{algorithm}
\DontPrintSemicolon
Initialize $\theta$ and $\phi$ (policy and value-function parameters, respectively)\;
\For{i=0,1,2,...}{
\For{actor=1,2,...,N} {
Run policy $\pi(\theta)$ for $\tau$ time steps and collect the trajectories\;
Calculate advantage estimates $\hat{A}_1,...,\hat{A}_\tau$\ based
on value-function $V_{\phi}$\;
}
Form a $batch$, of size $N\tau$, with collected trajectories and advantages\;
\For{k=1,2,...,K} {
Shuffle $batch$ and split into $minibatches$\;
\ForEach{$minibatch$}{
Update the policy by maximizing the objective $\theta_{k+1} = \underset{\theta}{\argmax} \underset{s,a \sim \pi_{\theta_k}}{{\mathrm E}}\left[ L(s,a,\theta_k, \theta)\right]$ via stochastic gradient ascent\;
Update $\phi$ fitting value function by regression on mean-squared error via gradient descent\;
}
}
}
\caption{{\sc Proximal Policy Optimization (PPO)}}
\label{alg:PPO}
\end{algorithm}
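The advantage estimation of line 5 (the truncated GAE formula given above) can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the reward and value numbers are arbitrary:

```python
import numpy as np

# Truncated GAE: delta_t = r_t + gamma*V(s_{t+1}) - V(s_t),
# A_t = delta_t + (gamma*lambda)*delta_{t+1} + ... over the trajectory.
def gae(rewards, values, gamma=0.99, lam=0.95):
    # `values` has one extra entry, V(s_tau), to bootstrap the last delta.
    tau = len(rewards)
    deltas = [rewards[t] + gamma * values[t + 1] - values[t]
              for t in range(tau)]
    advantages = np.zeros(tau)
    running = 0.0
    for t in reversed(range(tau)):   # backward pass accumulates the sum
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

adv = gae(rewards=[1.0, 1.0, 1.0], values=[0.5, 0.5, 0.5, 0.5])
print(adv)
```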
The objective function $L$ used to update the policy parameters (line 10) is presented in Equation \ref{eq:PPO_obj_function}.
\begin{equation} \label{eq:PPO_obj_function}
L(\theta) = L^{CLIP}_t(\theta) - c_1 L^{VF}_t(\theta) + c_2 S[\pi_{\theta}](s_t),
\end{equation}
where $L^{CLIP}$ is given by Equation \ref{eq:PPO_Lclip}, $L^{VF}_t$ is the value-function loss, $S$ is an entropy bonus to assure enough exploration, and $c_1$ and $c_2$ are parameters of the algorithm.
Regarding $L^{CLIP}$ function\footnote{$clip(x, min, max)$ gives $x$ for $min \leq x \leq max$, $min$ for $x<min$ and $max$ for $x>max$.}: $\pi_{\theta}$ and $\pi_{\theta_k}$ are the new and current policies, respectively; and $\epsilon$ is a parameter which indicates how far away the new policy is allowed to deviate from the current one \citep{SpinningUp2018}.
\begin{equation} \label{eq:PPO_Lclip}
L^{CLIP}(\theta) = min\Big(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\hat{A}_t, clip\big(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)},1-\epsilon,1+\epsilon\big)\hat{A}_t\Big)
\end{equation}
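A minimal sketch of evaluating $L^{CLIP}$ on a mini-batch follows (illustrative only; the ratio and advantage values are arbitrary):

```python
import numpy as np

# Clipped surrogate objective: for each sample, take the minimum of the
# unclipped and the clipped ratio-times-advantage, then average.
def l_clip(ratio, advantage, eps=0.2):
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratios = np.array([0.9, 1.0, 1.5])   # pi_new(a|s) / pi_old(a|s) per sample
advs = np.array([1.0, -1.0, 2.0])    # advantage estimates per sample
print(l_clip(ratios, advs))
```

The clipping keeps the ratio within $[1-\epsilon, 1+\epsilon]$, so large policy updates gain no extra objective value.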
Besides the parameters presented, the number and size of the ANN's hidden layers and the size of the update step (learning rate) used in the gradient methods are other parameters of PPO.
As mentioned by \cite{henderson2019deep}, there are implementation details that affect PPO performance and that are not fully presented by \cite{schulman2017proximal}.
In Section \ref{sec:experiments}, the values used for all parameters and the implementation version of the algorithm are presented.
\section{The Problem}
\label{sec:problem}
This section is organized as follows.
Problem definition is presented in Section \ref{subsec:problem_definition}, and related works are presented in Section \ref{subsec:relatedworks}.
MDP formulation and NLP model are presented in sections \ref{subsec:mdp} and \ref{subsec:nlp}, respectively.
\subsection{Problem Definition}
\label{subsec:problem_definition}
In this section, we intend to formalize the supply chain problem we are considering in this work.
Related works that apply RL techniques to multi-echelon supply chains are usually inspired by the so-called beer distribution game \citep{Sterman1989}.
This game was proposed to study the bullwhip effect by analyzing the ordering behavior of individuals with only local information in a four-echelon linear supply chain.
In this context, each node of the chain can be viewed as an independent actor that needs to attend to demands from its direct successor in the supply chain and to decide how much to buy (to order) from its predecessor.
Regarding the beer game example, there is a supplier, a factory, a wholesaler, and a retailer.
The retailer attends final customer orders and needs to place orders to the wholesaler.
The wholesaler, in turn, attends the orders from the retailers and places orders to the factories, and so on.
The supplier (first-echelon) obtains material from an external source.
Therefore, in this case, which we call an order-based approach, the decisions flow upstream along the chain, since the order of a node is placed to the preceding echelon.
Another possible approach is to consider the decisions of the whole supply chain based on ultimate customer demands, as recommended by \cite{Lee1997} to counteract the bullwhip effect.
In this context, there is a single agent, a central decision-maker that controls all the chain operations, and the problem can be seen as a multiperiod production planning problem \citep{pinedo2009planning,Stadtler2015}.
In this setting, the decisions in each time step are: how much raw material to produce in each supplier, how much material to transport from a node to its successors, and how much material to store in each node's local stock.
Therefore, the decisions to be taken flow downstream along the chain, as the transport of material is decided from each node to the subsequent echelon's nodes.
In this work, we consider the second case, i.e., a multiperiod production planning problem with a single agent.
A supply chain can operate with backlog or lost sales approaches.
In case of backlog, a client demand in a time step can be met later, while with the lost sales approach an unattended client demand is discarded.
We consider a lost sales approach in this work, and a penalization cost is incurred when a client demand is not attended.
Another important definition is that we consider the case of continuous manufacturing (process) industries \citep{pinedo2009planning}, so all the quantities of materials are continuous values.
Nevertheless, we believe the methodology used here could be adapted for discrete industries, and suggestions on how to do this are given in Section \ref{subsec:applying_ppo2}.
We present now the dynamics of the supply chain we study in this work.
All scenarios experimented consider a four-echelon supply chain with two nodes per echelon.
At the beginning of the planning horizon (time step $t=0$), there is an initial amount of material (raw material or product) stored on each node's local stock, raw materials are being produced by the suppliers (that it will become available on the next time steps), and raw materials and products are being transported from each node to its successors in the next echelon.
There are also customer demands for each retailer that will need to be attended at the next time step ($t=1$).
The steps presented below are repeated until the end of the planning horizon:
\begin{enumerate}
\item The central decision-maker (the agent) decides the amount of material to be produced on each supplier, and the amount of material to be transported from each node to each of its successors.
The amount of material to be kept in stock is indirectly defined by the remaining material in each node.
Retailers are not controlled since they attend to customer demands whenever possible.
\item A new time step is considered (now $t=t+1$).
\item Material flow:
\begin{itemize}
\item Raw materials that were being produced in each supplier and that are now available, due to expired lead times, are stored in the supplier's stock.
\item Raw materials and products in transport, with expired lead times, are delivered in each node and stored in its stock.
\item Possible excess of materials (beyond stock's capacity) is discarded and penalization costs are incurred.
\end{itemize}
\item Retailers attend to customer demand using material from their stocks. If there is not enough material, penalization cost is considered for each unit of the missing product.
\item Agent's decisions are followed:
\begin{itemize}
\item Production of raw material is triggered in each supplier. The lead time of production is realized, i.e., it is defined when the raw material will be available in the supplier's stock.
\item For each node (except retailers) the amount of material to be transported to each of its successors is removed from stock and shipped to the corresponding successor.
The lead time of transport is realized, i.e., it is defined when the material will be available in the successor's stock.
In the case of factories, raw materials are processed into finished products before the shipment.
\end{itemize}
\item Uncertain (potentially seasonal) customer demands of each retailer for the next time step are realized.
\end{enumerate}
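As an illustration of steps 3 and 4 above, the following sketch simulates material arrival and demand attendance for a single retailer; the capacity and penalty values are hypothetical, not taken from the experiments:

```python
# Lost-sales dynamics for one retailer in one time step:
# deliveries enter a capacitated stock, excess is discarded and
# penalized, then demand is attended from stock.
STOCK_CAP = 100.0       # hypothetical stock capacity
UNMET_PENALTY = 10.0    # hypothetical cost per unit of unmet demand
EXCESS_PENALTY = 5.0    # hypothetical cost per unit of discarded material

def retailer_step(stock, arriving, demand):
    stock += arriving                      # deliveries with expired lead time
    cost = 0.0
    if stock > STOCK_CAP:                  # excess beyond capacity is lost
        cost += (stock - STOCK_CAP) * EXCESS_PENALTY
        stock = STOCK_CAP
    met = min(stock, demand)               # attend demand whenever possible
    cost += (demand - met) * UNMET_PENALTY # lost sales are penalized
    stock -= met
    return stock, cost

stock, cost = retailer_step(stock=20.0, arriving=90.0, demand=30.0)
print(stock, cost)
```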
It is important to highlight that the demands are uncertain, and in each time step ($t$) the agent only sees the realization of the demand for the next time step ($t+1$).
As the retailers are not controlled by the agent, the agent cannot take any advantage of knowing only the demands of the next time step.
Another point is that, as the lead times for producing raw material and transporting material are uncertain, the agent needs to make such decisions without knowing exactly when the materials will be available (or delivered).
The lead time values are realized after the decisions have been made, and only in the next state, the agent will know when the material will arrive.
Furthermore, as it will be presented in the MDP formulation (Section \ref{subsec:mdp}), the agent only knows the exact amount of material that will arrive (or be delivered) in the next time step.
The quantities of material arriving in the time steps after the next one are added together into a single value.
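The aggregation rule above can be sketched as follows. This is an illustrative helper (names and data layout are not from the paper, which uses a full simulation): the agent observes the exact quantity arriving at $t+1$ and a single summed value for everything arriving later.

```python
def summarize_in_transit(arrivals, current_t):
    """Summarize in-transit material the way the agent observes it:
    the exact amount arriving at t+1, plus one aggregated value for
    everything arriving at t+2 or later.

    `arrivals` maps arrival time step -> quantity of material.
    """
    next_step = arrivals.get(current_t + 1, 0.0)
    later = sum(q for step, q in arrivals.items() if step > current_t + 1)
    return next_step, later

# Example: at t=5, 40 units arrive at t=6, 30 at t=7, and 10 at t=9.
nxt, rest = summarize_in_transit({6: 40.0, 7: 30.0, 9: 10.0}, current_t=5)
# nxt == 40.0, rest == 40.0
```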
Regarding solution methods, there are different approaches depending on how to handle uncertain parameters like demands and lead times.
In practice, it is common to use demand forecasts and average lead times and, in this case, the problem becomes deterministic and can be solved by LP models \citep{Stadtler2015}.
Another approach is to solve the problem taking into account the uncertainty of the demands and lead times by using methods that can handle stochasticity.
In this case, the problem is non-linear and can be solved with techniques like Stochastic Programming or Deep RL.
We use a Deep RL approach to solve the problem and an agent based on an LP model solution as a baseline.
There is also the option to handle the problem using single-agent or multi-agent approaches.
Multi-agent approaches are more common when the problem is modeled considering that each node of the chain is an independent actor.
In our approach, the problem is solved with a single agent that takes several different decisions at the same time.
\subsection{Related Works}
\label{subsec:relatedworks}
Several works use RL for supply chain operation problems, but many of them are based on tabular RL techniques such as Q-Learning \citep{Giannoccaro2002,Chaharsooghi2008,Mortazavi2015}.
As tabular techniques cannot properly handle problems like the one addressed in this work, with continuous state and action spaces, we focus here on the most recent works that use Deep RL approaches to solve similar problems.
Firstly, we present works that deal with the problem in a production planning approach, as we do in this paper.
Then we present related works that deal with order-based approaches.
\cite{Kemmer2018} use Approximate SARSA and three versions of Vanilla Policy Gradient (VPG, also called REINFORCE) on a two-echelon supply chain.
The scenario consists of a factory and one to three warehouses with increasing demands, no lead times, and a horizon of 24 time steps.
The state is composed of the stock levels and the demands of the last two time steps; the actions refer to the factory production and product transportation (but the action space is reduced to only 3 production and transportation levels);
and the rewards are the profit, considering operating costs and backlogs.
($r$-$Q$)-policy, a minimum stock approach, is used as a baseline.
All agents are better than baseline on the scenario with only one warehouse, and two versions of VPG improve over the baseline in the scenario with 3 warehouses.
The work is extended by \cite{Hutse2019}, including deterministic (non-zero) lead times, two product types, continuous action spaces, and four types of stochastic demand scenarios.
The author uses a DQN (Deep Q-Network) for discrete actions and DDPG (Deep Deterministic Policy Gradient) for continuous actions.
The state is composed of the stock, production, transport and the last $x$ (a parameter) demands, for each node-product combination.
The actions are, for each product, how much to produce and to send for each retailer (using aggregated levels in the discrete case and limiting the maximum action values in the continuous case).
The rewards are the profit, considering operating costs (including stock-outs).
The baseline is the ($r$-$Q$)-policy and the agents are better than the baseline in all scenarios (1 factory, 2 or 3 retailers, and 1 or 2 products).
\cite{Peng2019} use VPG in a capacitated supply chain with one factory warehouse and three retailers (with balanced and unbalanced costs), regular and seasonal stochastic demands, and constant lead times.
The state is composed of the stock levels and the last two demands, and the actions are how much to produce and to send to each retailer.
As the actions are state-dependent, they use two mechanisms to treat the inherent difficulty of using this approach with neural network outputs.
The rewards are the profit, considering operating costs and penalization by not satisfying demand.
The Deep RL agent achieves better results than baseline, ($r$-$Q$)-policy, in all experimented scenarios.
We now present related works that handle the problem using an order-based approach.
\cite{Gijsbrechts2019} propose a proof of concept by using Deep RL on three different problems: dual-sourcing or dual-mode, lost sales, and multi-echelon inventory models.
In the multi-echelon setup, demands are uncertain and regular, and lead times are deterministic.
States are represented by the stock levels and orders of the warehouses and retailers, while the actions are the orders from each node (aggregated by state-dependent base-stock levels).
They apply A3C (Asynchronous Advantage Actor-Critic) on two different scenarios with one warehouse and ten retailers, considering stochastic demands.
The A3C agent performs better than a base-stock policy used as a baseline.
\cite{Oroojlooyjadid2019} uses a customized DQN to solve the MIT Beer Game, a four-echelon linear supply chain, considering deterministic and stochastic demands, and deterministic lead times.
The author treats the problem as decentralized, with multi-cooperative agents.
Each agent only knows the local information, and to avoid competition, there is an engineered mechanism to provide feedback to each agent at the end of an episode.
In the experiments, only one agent uses DQN and the others follow a base-stock heuristic.
The state is composed of the stock levels, demands, arriving orders, and arriving products, from the past $m$ (a parameter) time steps.
The actions refer to how much more or less to order than the received order, and the used intervals are $[-2,2]$ and $[-8,8]$.
The rewards are the stock plus backlog costs.
Experiments show that using DQN for one node achieves better results than using the base-stock policy for all nodes.
\cite{Hachaichi2020} use PPO and DDPG to solve an inventory replenishment problem in a two-echelon supply chain.
There is one distribution center and three stores, with local capacitated stocks.
Supply is unlimited, customer demands are nonseasonal (and lower than stock's store capacities), lead times are constant, and a planning horizon of 52 time steps is considered.
States are composed of stock levels, material in transport, and customer demands from the past $m$ (a parameter) time steps.
Actions are represented by the orders from the distribution centers and stores.
The objective is to maximize profit and, so, rewards are sales minus stock and order costs.
Experiments with a fixed scenario show that DDPG results are unstable and PPO achieves a 6.4\% gap from a baseline where all observed demands are satisfied.
\cite{Geevers2020} uses PPO in three problem cases, considering linear, divergent, and two-echelon supply chains.
Considering the last scenario, an industrial case study, stocks are capacitated and supply is unlimited.
The problem is solved considering one type of product, constant lead times, uncertain (nonseasonal) demands, and a planning horizon of 50 time steps.
States are composed of total stock, total backorders, the stock levels, and, for each pair of nodes, the backorders, and material in transport;
and actions are represented by the order quantity for every stock point.
The objective is to minimize the total holding and backorder costs.
In the experiments, the PPO agent achieves results with a large variance.
Considering 10 training runs, some runs are better than the base stock baseline while others yield poor results.
The author argues that the adopted method is unstable and, therefore, not yet fit for use in practice.
\cite{Perez2021} use PPO to solve an inventory management problem in a make-to-order four-echelon supply chain.
Nodes have local stocks (without capacity limits), production is limited, there is a single product and a planning horizon of 30 time steps is considered.
There is one retailer to attend to uncertain (nonseasonal) demands, and the lead times are heterogeneous (without uncertainty).
States are composed of the demand, the stock levels, and the material in transport, and the actions are represented by the reorders quantities.
The rewards are the profit calculated for each node of the chain.
They experiment with a case study scenario considering backlogging and lost sales options, and compare PPO with four LP models (deterministic and multi-stage stochastic, considering rolling or shrinking horizon).
All LP models are better than PPO when backlogging is considered.
In the lost sales scenario, PPO is only better than one of the models (deterministic LP with rolling horizon).
The authors argue that the PPO solution has a more balanced load and it could potentially have greater resilience to disruptions.
Table \ref{tab:rel_works} shows a comparison of related works and our approaches.
We have grouped the works by type of approach (production planning or order-based).
For each work, we present the supply chain configuration (nodes per echelon, products, and planning horizon), the type of demand uncertainty (regular or seasonal), whether lead times are deterministic or stochastic, whether the state and action spaces are continuous or discrete (and their number of dimensions), and the Deep RL technique and baseline used.
As can be seen in the table, to the best of our knowledge, this paper and our previous work are the first to deal with the problem using a production planning approach in a supply chain with more than two echelons and with stochastic lead times.
As we consider more than one node per echelon, the state and action spaces have more dimensions than in similar works, and, therefore, the problem is larger and harder to solve.
Among papers with order-based approaches, most related works deal with two-echelon supply chains, with constant lead times and smaller action spaces.
Like our work, \cite{Perez2021} use PPO and LP-based baseline in a four-echelon supply chain.
However, they handle the problem with an order-based approach and do not consider capacitated stocks, seasonal demands, or stochastic lead times; moreover, their state and action spaces are discrete, and their planning horizon is smaller.
Moreover, their experimental methodology lacks important steps such as hyperparameter tuning, training runs with different seeds, and rewards normalization \citep{henderson2019deep,stable-baselines3}, while all these steps are considered in the present paper.
\cite{Geevers2020} has also used PPO and considered a continuous action space with many dimensions, but uses regular demands and deterministic lead times in a two-echelon supply chain, and reports unstable results.
Our main contributions to the literature, considering the best of our knowledge, can be summarized as follows.
\begin{enumerate}
\item The present and our previous works are the first ones to use Deep RL to handle the problem with a production planning approach in a supply chain with more than two echelons.
\item This is the first work to use a Deep RL method to solve the problem with stochastic lead times (even considering works with order-based approaches).
\item This work and \cite{Perez2021} are the only ones that deal with a four-echelon non-serial supply chain using Deep RL.
This leads to a problem with more state and action space dimensions, which is therefore harder to solve.
Unlike \cite{Perez2021}, however, we consider a production planning approach with seasonal demands, stochastic lead times, capacitated stocks, continuous state and action spaces, and a larger planning horizon.
\item Finally, we have conducted a robust experimental methodology, achieving good results with PPO2 considering continuous action space with more dimensions than related works.
\end{enumerate}
\begin{table}
\caption{Comparison with related works.
The works are grouped by type of approach: production planning, or order-based.
For chain configuration: $Config.$ means the number of nodes per echelon, $P$ the number of products, and $H$ the planning horizon.
Column $Dem$ indicates if demands are regular or seasonal; column $Lt$ shows if lead times are deterministic or stochastic.
In columns \textit{States} and \textit{Actions}, $D(M)$ or $C(M)$ indicates a discrete or continuous $M$-dimensional space.
\textit{RL Alg.} column presents the RL technique used, and the last column shows the baseline.
The values refer to the most complex experimented scenarios of each work.}
\label{tab:rel_works}
\addtolength{\tabcolsep}{-3pt}
\begin{tabular}{llllllllll}
\hline\noalign{\smallskip}
\textbf{Authors} & \multicolumn{3}{c}{\textbf{Chain}} & \textbf{Dem} & \textbf{Lt} & \textbf{States} & \textbf{Actions} & \textbf{RL Alg.} & \textbf{Baseline} \\
& Config. & P & H & & & & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{4}{l}{\textit{\quad Production planning approaches}} \\
\noalign{\smallskip}
\cite{Kemmer2018} & 1-3 & 1 & 24 & S & Det & D (10) & D (4) & VPG & $(r,Q)$ \\
\cite{Hutse2019} & 1-3 & 2 & 52 & S & Det & D (30) & C (4) & DQN, DDPG & $(r,Q)$ \\
\cite{Peng2019} & 1-3 & 1 & 25 & S & Det & C (12) & C (4) & VPG & $(r,Q)$ \\
\cite{AlvesICCL} & 2-2-2-2 & 1 & 360 & R & Det & C (27) & C (14) & PPO2 & LP-based \\
\textbf{This work} & 2-2-2-2 & 1 & 360 & S & Sto & C (27) & C (14) & PPO2 & LP-based \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{4}{l}{\textit{\quad Order-based approaches}} \\
\noalign{\smallskip}
\cite{Gijsbrechts2019} & 1-10 & 1 & \textit{cont.} & R & Det & D (35) & D (2) & A3C & base stock \\
\cite{Oroojlooyjadid2019} & 1-1-1-1 & 1 & \textit{fixed} & R & Det & D (50) & D (1) & cust. DQN & base stock \\
\cite{Hachaichi2020} & 1-3 & 1 & 52 & R & Det & C (48) & C (4) & PPO, DDPG & \textit{custom} \\
\cite{Geevers2020} & 4-5 & 1 & 50 & R & Det & C (48) & C (9) & PPO & base stock \\
\cite{Perez2021} & 2-3-2-1 & 1 & 30 & R & Det & D (68) & D (11) & PPO & LP-based \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{MDP Formulation}
\label{subsec:mdp}
Modeling a complex sequential decision-making problem as an MDP is one of the most important tasks in solving it with RL techniques.
Formulating an MDP means defining the states, actions, rewards, and environment's dynamics (or transition function) to be used to solve the problem.
The formulation presented here is an extension of our previous work \citep{AlvesICCL} to include uncertain seasonal demands and lead times, and processing capacities.
\subsubsection{State Space}
A state for a given time step $t$ is a 27-dimensional continuous vector with the following values (an example of a state is presented in Figure~\ref{fig:mdp_state}).
\begin{itemize}
\item The current stock level of each node.
\item For each supplier:
\begin{itemize}
\item the amount of raw material being produced and that will be available in the next time step ($t+1$);
\item the sum of the amount of raw material being produced and that will be available in the time steps after the next one.\footnotemark
\end{itemize}
\item For each other node:
\begin{itemize}
\item the amount of material in transport that will arrive in the node in the next time step (it is the sum of material sent by the two nodes from the predecessor echelon);
\item the sum of the amount of material in transport that will arrive in the time steps after the next one.\footnotemark[\value{footnote}]
\end{itemize}
\item The final customer demands of each retailer for the next time step ($t+1$).
\item The number of remaining time steps until the end of the episode.
\end{itemize}
\footnotetext{The idea of using a summarized value for material available after the next time step is to avoid increasing state space with information that is not so precise, since with stochastic lead times these values are more likely to change on the next time steps.
}
In order to obtain good results with Deep RL algorithms like PPO2, it is important to normalize the state values \citep{stable-baselines3}.
In Section~\ref{subsec:applying_ppo2} we present how the state values normalization is done in our experiments.
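As a concrete illustration, the 27-dimensional state above (8 stock levels, 2 values for each of the 2 suppliers, 2 values for each of the 6 remaining nodes, 2 retailer demands, and the remaining time steps) could be assembled as follows. This is only a sketch; variable names are hypothetical and normalization is omitted.

```python
import numpy as np

def build_state(stocks, supplier_info, transit_info, demands, steps_left):
    """Assemble the 27-dimensional state vector.

    stocks        : 8 stock levels (one per node)
    supplier_info : for each of the 2 suppliers, a pair
                    (raw material available at t+1, sum available after t+1)
    transit_info  : for each of the 6 non-supplier nodes, a pair
                    (material arriving at t+1, sum arriving after t+1)
    demands       : next-step demand of each of the 2 retailers
    steps_left    : remaining time steps until the end of the episode
    """
    state = list(stocks)
    for nxt, later in supplier_info:
        state += [nxt, later]
    for nxt, later in transit_info:
        state += [nxt, later]
    state += list(demands) + [steps_left]
    assert len(state) == 27  # 8 + 2*2 + 6*2 + 2 + 1
    return np.array(state, dtype=np.float32)
```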
\begin{figure*}
\includegraphics[width=\textwidth]{Fig4.pdf}
\caption{Example of a state for a given time step $t$.}
\label{fig:mdp_state}
\end{figure*}
\subsubsection{Action space}
An action is a 14-dimensional continuous vector with the following values (an example of an action is presented in Figure~\ref{fig:mdp_action}).
\begin{itemize}
\item The amount of material to produce in each supplier.
\item For each node (except retailers).
\begin{itemize}
\item The amount of material to deliver to each of the two nodes of the next echelon.
\end{itemize}
\end{itemize}
As mentioned in the problem definition, the amount of material to be kept in stock is indirectly defined by the remaining material in each node, and retailers are not controlled since they attend to customer demands whenever possible.
One important practical aspect of defining the action representation is how to handle possibly infeasible actions.
Regarding the production of material in each supplier, infeasible values are easily avoided by limiting the action to the supplier's capacity.
For material to be transported from one node to another, however, the situation is more complex, since the node's stock level changes throughout the simulation.
In Section~\ref{subsec:applying_ppo2} we present how we deal with this challenge and generate only feasible actions in our experiments. In the same section, we present how we handle action values normalization since it is important to obtain good results with Deep RL algorithms like PPO2 \citep{stable-baselines3}.
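One common way to guarantee feasibility is to interpret each transport action as a fraction of the node's current stock and rescale when the total would exceed it. The sketch below shows this idea for a single supplier-like node with two successors; it is an illustrative scheme under assumed names, not necessarily the exact mechanism used in our experiments.

```python
import numpy as np

def to_feasible_action(raw, stock, prod_capacity, transport_cap):
    """Map a normalized action in [0, 1] to feasible quantities.

    raw[0]   : production action for the node (fraction of capacity)
    raw[1:3] : fractions of the node's stock to ship to each of its
               two successors; shipments are rescaled so their total
               never exceeds the current stock level.
    """
    production = raw[0] * prod_capacity        # never exceeds capacity
    ship = raw[1:3] * stock                    # tentative shipments
    total = ship.sum()
    if total > stock:                          # rescale to stay feasible
        ship *= stock / total
    ship = np.minimum(ship, transport_cap)     # respect transport limits
    return production, ship
```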
\begin{figure*}
\includegraphics[width=0.8\textwidth]{Fig5.pdf}
\caption{Example of an action: decisions are regarding production of raw materials in the suppliers and transportation between nodes (stock levels are indirectly defined and retailers are not controlled).}
\label{fig:mdp_action}
\end{figure*}
\subsubsection{Environment's dynamics}
As we use a model-free RL approach, a simulation of the supply chain is used to represent the environment’s dynamics.
Almost all supply chain operations are simulated deterministically, that is, any amount of material defined by the action values (to be supplied or transported) is executed exactly by the simulation, as the actions always represent feasible quantities.
The non-deterministic behavior of the simulation is due to uncertain customer demands and lead times.
Customer demands and lead times are sampled in each time step from a statistical distribution and are realized as presented in the problem definition (Section \ref{subsec:problem_definition}).
There is also a treatment for excess material in stocks: a node can receive more material than it can store, since the material arriving from different nodes is added to its current stock level.
The simulation considers that all arriving material must pass through the stock, even if it is not kept for the next time step.
The excess of material is discarded and a related penalization cost is incurred.
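The discard rule above amounts to a simple clipping of the stock level with an associated penalty; a minimal sketch (function and parameter names are illustrative):

```python
def receive_material(stock, arriving, capacity, discard_cost):
    """Add arriving material to a node's stock, discarding any excess
    beyond the stock capacity and charging the penalization cost."""
    total = stock + arriving
    excess = max(0.0, total - capacity)
    new_stock = total - excess
    penalty = discard_cost * excess
    return new_stock, penalty

# Example: a node with 1500 units in stock and capacity 1600
# receives 300 units, so 200 units are discarded.
s, pen = receive_material(1500.0, 300.0, 1600.0, 10.0)
# s == 1600.0, pen == 2000.0
```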
\subsubsection{Rewards}
The design of the rewards is crucial for the success of an RL algorithm.
For many problems, the best way to define the reward is not obvious: it is often easy to give feedback at the end of an episode (success or failure), but difficult to give feedback at each simulation step.
In the proposed supply chain problem, however, as it is a cost minimization problem, it is natural to use the negative of the total operating cost as the reward\footnote{We have also experimented with the inverse of the operating costs multiplied by a constant as the reward, but the PPO2 algorithm could not learn with this approach.}.
Therefore, the reward is the negative of the sum of all costs incurred at a time step (production, transportation, manufacturing, stocks, and penalizations for material discarded due to stock capacities and for unmet customer demands).
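In code, this reward is a one-liner; the cost categories below mirror the list above (the dictionary layout is a hypothetical convenience, not the paper's implementation):

```python
def step_reward(costs):
    """Reward for one time step: the negative of the sum of all
    incurred operating costs (production, transport, processing,
    stock, discard, and unmet-demand penalties)."""
    return -sum(costs.values())

r = step_reward({"production": 1200.0, "transport": 480.0,
                 "processing": 900.0, "stock": 350.0,
                 "discard": 0.0, "unmet_demand": 216.0})
# r == -3146.0
```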
\subsection{Non-Linear Programming Model}
\label{subsec:nlp}
In this section, an NLP model of the problem is presented.
The model is an extension of our previous work \citep{AlvesICCL} to include uncertain seasonal demands and lead times, and processing capacities.
The model is nonlinear only because the demand and lead time parameters are stochastic.
If we consider forecast values for demands and average values for lead times the model becomes an LP model.
The sets of the model are presented in Table \ref{tab:nlp_sets}, where $q$ is the number of nodes of the chain, $h$ is the planning horizon (episode length) and $l^{max}$ is the maximum possible lead time value.
Time step $0$ refers to the initial state of the chain.
Variables of the model are presented in Table \ref{tab:nlp_variables} and the parameters in Table \ref{tab:nlp_param}.
Note that $p_{ijn}$, $f_{n}$, and $t_{ijnm}$ are binary, and all other parameters are integers.
$f_{n}$ is $1$ for all factories and $0$ for the other nodes.
$p_{ijn}$ is used to define which nodes are suppliers and also to map the lead times.
There is one stochastic lead time value for each supplier on each time step, so that $p_{ijn}=1$ only if $n$ is a supplier and the lead time realization to produce material on that supplier at time step $i-j$ is $j$, otherwise $p_{ijn}$ is zero.
Similarly, $t_{ijnm}$ defines which node pairs have transport links and also maps the lead times.
\begin{table}
\caption{Sets of the NLP model}
\label{tab:nlp_sets}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\textbf{Set} & \textbf{Description} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(N = \{1,...,q\}\) & set of the supply chain nodes \\
\(I = \{0,1,...,h+l^{max}\}\) & set of time steps (or periods) \\
\(J = \{1,...,l^{max}\}\) & set of possible lead times \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Variables of the NLP model}
\label{tab:nlp_variables}
\addtolength{\tabcolsep}{-2pt}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\textbf{Variable} & \textbf{Description} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$S_{in}$ & the stock level of node $n$ on time step $i$ \\
$T_{ijnm}$ & amount of material sent from node $n$ at time step $i-j$ to arrive on node $m$ on time step $i$ \\
$P_{ijn}$ & amount of raw material produced by a supplier $n$ on time step $i-j$ to be available on time step $i$ \\
$F_{in}$ & amount of raw material processed by a factory $n$ on time step $i$ \\
$D^e_{in}$ & excess of material discarded for exceeding the stock capacity of node $n$ on time step $i$ \\
$D^d_{in}$ & amount of missing products to meet customer demand by a retailer $n$ in time step $i$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Parameters of the NLP model}
\label{tab:nlp_param}
\addtolength{\tabcolsep}{-2pt}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\textbf{Par.} & \textbf{Description} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$q$ & number of nodes in the chain \\
$h$ & planning horizon (episode length) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(c_n^s\) & cost of stocking one unit of material at node \(n\) \\
\(c_n^p\) & cost of producing one unit of raw material at node \(n\) \\
\(c^f_n\) & cost of processing one unit of raw material at node \(n\) \\
\(c^t\) & cost of sending one unit of material from one node to another \\
\(c^e\) & cost of one unit of material discarded by exceeding stock capacity \\
\(c^d \) & the cost incurred by unmet demand (for each unit of product) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(b^s_n\) & the stock capacity of the node \(n\) \\
\(b^p_n\) & production capacity of the supplier \(n\) \\
\(b^f_n\) & processing capacity of the node \(n\) \\
\(b^t_n\) & the maximum amount of material that can be sent from node \(n\) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(r_n\) & processing ratio at node \(n\) \\
\(l^{max}\) & maximum lead time (for production in the suppliers and transport) \\
\(l^{avg}\) & average lead time (for production in the suppliers and transport) \\
\(f_{n} \) & indicate if it is possible to process raw material on node \(n\) \\
\(p_{ijn}\) & indicate if it is possible to produce raw material on node \(n\) at time step \(i-j\) to be available at time step \(i\) \\
\(t_{ijnm}\) & indicate if it is possible to send material from node \(n\) at time step \(i-j\) to arrive on node \(m\) at time step \(i\) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(s_{n}\) & initial stock level on node \(n\) \\
\(p_{in}\) & initial amount of material produced by supplier \(n\) that will be available on time step \(i\) \tiny{(defined for $i \leq l^{avg}$)} \\
\(t_{inm}\) & initial amount of material sent by node \(n\) to node \(m\) that will be available on time step \(i\) \tiny{
(defined for $i \leq l^{avg}$)} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\(d_{in}\) & stochastic customer demand to be met by node \(n\) on time step \(i\) \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The objective of the NLP model is to minimize the total operating cost, and the objective function\footnote{Regarding results presented in Section \ref{sec:experiments}, the costs related to initial materials (stocks, supplied, and transport) are not considered in the objective function.} is given by Equation \ref{eq:nlp_obj_fun}.
\begin{equation} \label{eq:nlp_obj_fun}
min \quad \sum_{i \in I} \sum_{n \in N}\Big( c^s_n S_{in} + c^f_n F_{in} + c^e D^e_{in} + c^d D^d_{in} \Big) + \sum_{i \in I} \sum_{j \in J} \sum_{n \in N}\Big(c^p_n P_{ijn} + \sum_{m \in N} c^t T_{ijnm} \Big)
\end{equation}
The constraints are defined as follows.
Constraints \ref{eq:stock_const} control the stock, transport of material, and demands.
Capacities are handled by constraints \ref{eq:supcap_const}, \ref{eq:proccap_const}, \ref{eq:trancap_const}, and \ref{eq:stocap_const}.
Constraints \ref{eq:proc_const} are used to calculate the amount of raw material processed at factories.
And, finally, constraints \ref{eq:init_sup_const}, \ref{eq:init_tran_const}, and \ref{eq:init_sto_const} are used to take into account the initial supplied, transported, and stocked materials.
\begin{equation} \label{eq:stock_const}
\begin{split}
S_{in} = S_{(i-1)n}+ \sum_{j \in J} P_{ijn} + \sum_{j \in J} \sum_{m \in N} T_{ijmn} - D^e_{in} - r_n \Big(\sum_{j \in J} \sum_{m \in N} T_{(i+j)jnm}\Big) - d_{in} + D^d_{in} \\ \forall \quad i \in \{1,...,h\}, n \in N
\end{split}
\end{equation}
\begin{equation} \label{eq:supcap_const}
0 \leq P_{ijn} \leq p_{ijn} b^p_n \quad \forall \quad i \in I, j \in J, n \in N
\end{equation}
\begin{equation} \label{eq:proccap_const}
0 \leq F_{in} \leq b^f_n \quad \forall \quad i \in I, n \in N
\end{equation}
\begin{equation} \label{eq:trancap_const}
0 \leq T_{ijnm} \leq t_{ijnm} b^t_n \quad \forall \quad i \in I, j \in J, n \in N, m \in N
\end{equation}
\begin{equation} \label{eq:stocap_const}
0 \leq S_{(i-1)n} + \sum_{j \in J} P_{ijn} + \sum_{j \in J} \sum_{m \in N} T_{ijmn} - D^e_{in} \leq b^s_n \quad \forall \quad i \in \{1,...,h\}, n \in N
\end{equation}
\begin{equation} \label{eq:proc_const}
F_{in} = f_n r_n \Big(\sum_{j \in J} \sum_{m \in N} T_{(i+j)jnm}\Big) \quad \forall \quad i \in \{1,...,h\}, n \in N
\end{equation}
\begin{equation} \label{eq:init_sup_const}
P_{iin} = p_{in} \quad \forall \quad i \in \{1,...,l^{avg}\}, n \in N
\end{equation}
\begin{equation} \label{eq:init_tran_const}
T_{iinm} = t_{inm} \quad \forall \quad i \in \{1,...,l^{avg}\}, n \in N, m \in N
\end{equation}
\begin{equation} \label{eq:init_sto_const}
S_{0n} = s_{n} \quad \forall \quad n \in N
\end{equation}
\section{Methodology}
\label{sec:methodology}
In this section, we describe the methodology used to solve the supply chain problem with uncertain seasonal demands and lead times.
In Section~\ref{subsec:scenarios}, we present the 17 scenarios we have used in the experiments to evaluate the suitability of the PPO2 algorithm to solve the problem.
We present how we have applied the algorithm in Section~\ref{subsec:applying_ppo2}.
The main objective is to present the normalization of state and action values we have used to obtain better results with the method.
In Section~\ref{subsec:lpagent}, we describe how the LP agent is built and used as a baseline in the experiments.
In Section~\ref{subsec:lower_bounds}, we present how we use the LP model with perfect information to calculate lower bounds for each experiment.
Finally, the three-phase experimental methodology (hyperparameter tuning, training, and evaluation) is presented in Section~\ref{subsec:exp_meth}.
\subsection{Experimental Scenarios}
\label{subsec:scenarios}
We have considered several scenarios to assess the suitability of the PPO2 algorithm to solve the proposed problem.
The parameters that are common to all scenarios are presented in Table~\ref{tab:scenarios_params}, following the notation from Section~\ref{subsec:nlp}.
Chain configuration and costs are the same used by \cite{AlvesICCL}, and all costs are applied by a unit of raw material or product.
Initial values and capacities were defined according to the range of demand values.
\begin{table}
\caption{Parameters common to all scenarios }
\label{tab:scenarios_params}
\addtolength{\tabcolsep}{-2pt}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{llll}
\hline\noalign{\smallskip}
\textbf{Group} & \textbf{Param.} & \textbf{Value} & \textbf{Details} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Chain & q & 8 & 2 suppliers, 2 factories, 2 wholesalers, and 2 retailers \\
& \(f_{n}\) & 1 & for factories (0 for the other nodes) \\
& \(r_n\) & 3 & processing ratio for factories (1 for the other nodes) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Horizon & h & 360 & episode length \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Costs & \(c_n^s\) & 1 & stock costs for all nodes \\
& \(c_n^p\) & 6,4 & production cost for each supplier, respectively \\
& \(c^f_n\) & 12,10 & processing cost for each factory, respectively \\
& \(c^t\) & 2 & transport cost on the whole chain \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Pen. costs & \(c^e\) & 10 & cost of material discarded by exceeding stock capacity \\
& \(c^d\) & 216 & cost incurred by unmet demand \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Capacities & \(b^p_n\) & 600, 840 & production capacity for each supplier, respectively \\
& \(b^f_n\) & 840, 960 & processing capacity for each factory, respectively \\
& \(b^s_n\) & 6400, 7200 & stock capacity for each factory, respectively \\
& \(b^s_n\) & 1600, 1800 & stock capacity for each pair of other nodes at the same echelon \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Initial values & \(s_{n}\) & 800 & stock level for all nodes \\
& \(p_{in}\) & 600,840 & material to be available on time steps $1,...,l^{avg}$ on each supplier \\
& \(t_{inm}\) & 600,840 & material to arrive on time steps $1,...,l^{avg}$ for each factory \\
& \(t_{inm}\) & 240,240 & idem, but for each wholesaler or retailer \\
\noalign{\smallskip}\hline
\end{tabular}}
\end{center}
\end{table}
We have designed the scenarios to have variety in terms of demand types (seasonal and regular), demand uncertainty, and lead times (stochastic and constant).
In scenarios with seasonal demands, the demand values of each retailer are generated using sinusoidal and perturbation functions.
The sinusoidal function $S$, presented in Equation~\ref{eq:sin_demands}, generates data with seasonal behavior, where $min$ and $max$ represent the minimum and maximum curve values, $z$ is the number of function's peaks, and $t$ is the related time step.
The value of the sinusoidal function is added to a perturbation function $P$ (defined for each scenario) parameterized by an uncertainty level $p$, as shown in Equation \ref{eq:demands}.
In this equation, $min^{sin}=100$ and $max^{sin}=300$ are the minimum and maximum values for the sinusoidal function, and $min=0$ and $max=400$ are the minimum and maximum possible demand values.
If we remove the perturbation term from the equation, we have deterministic seasonal demands, that could be seen as forecast demand values.
In the case of scenarios with non-seasonal (regular) demands, the demand values are generated by $d^{avg}+P(p)$, where $d^{avg}=200$.
\begin{equation} \label{eq:sin_demands}
S(min,max,z,t) = min + \frac{max-min}{2}\Big[1 + \sin{\Big(\frac{2 \pi z t}{h}\Big)}\Big]
\end{equation}
\begin{equation} \label{eq:demands}
D = clip\Big(S(min^{sin}, max^{sin}, z, t) + P(p), min, max\Big)
\end{equation}
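As an illustration, the demand generation above can be sketched in Python. The function names are ours, and the number of peaks $z$ is a hypothetical placeholder (we assume $h=360$, the episode length used later in the state example):

```python
import numpy as np

def sinusoidal_demand(d_min, d_max, z, t, h):
    """Seasonal component S: z peaks over the planning horizon h."""
    return d_min + (d_max - d_min) / 2.0 * (1.0 + np.sin(2.0 * np.pi * z * t / h))

def seasonal_demand(t, p, h=360, z=4, rng=None):
    """Demand D: seasonal curve (min_sin=100, max_sin=300) plus a Gaussian
    perturbation with standard deviation p, clipped to the range [0, 400]."""
    rng = rng or np.random.default_rng()
    s = sinusoidal_demand(100, 300, z, t, h)
    perturbation = rng.normal(0.0, p) if p > 0 else 0.0
    return float(np.clip(s + perturbation, 0, 400))
```

With $p=0$ the perturbation vanishes and the function returns the deterministic (forecast) seasonal demand.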
Regarding scenarios with stochastic lead times, the lead time values of each node are sampled from a Poisson distribution, given by $min(Poisson(l^{avg}-1)+1,l^{max})$, where $l^{avg}=2$ is the average lead time, and $l^{max}=4$ the maximum lead time.
We have used this construction to avoid zero lead times and to keep the lead times in the interval $[1,4]$.
In scenarios with constant lead times, we have used the average value $l^{avg}=2$.
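The shifted-and-clipped Poisson sampling can be sketched as follows (an illustrative implementation; the function name is ours):

```python
import numpy as np

def sample_lead_time(l_avg=2, l_max=4, rng=None):
    """Stochastic lead time: min(Poisson(l_avg - 1) + 1, l_max).
    Shifting the Poisson sample by one avoids zero lead times, and the
    clipping keeps all values inside the interval [1, l_max]."""
    rng = rng or np.random.default_rng()
    return int(min(rng.poisson(l_avg - 1) + 1, l_max))
```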
Table \ref{tab:scenarios_param} shows the proposed experimental scenarios, presenting their differences.
Scenarios of set $A$ were designed to verify the behavior of the PPO2 agent considering stochastic lead times and different levels of uncertainty for seasonal demands.
Set $B$ is similar to the first group but considering constant lead times.
In both sets, the uncertainty of the demand values (perturbation) is given by a Gaussian (Normal) distribution, with mean zero and standard deviation $p$.
The values of $p$, from 0 to 60, were chosen to represent different uncertainty levels, from no uncertainty to higher levels.
The $p$ value was limited to 60 to ensure that demand values, although uncertain, would remain seasonal.
Figure \ref{fig:demands} shows examples of demands for scenarios \textit{N20} and \textit{N60}.
Black solid lines are the values of the sinusoidal function, without the perturbation.
Gray dashed lines represent the standard deviation of the perturbation ($\mathcal{N}(0,20)$ and $\mathcal{N}(0,60)$, respectively).
Blue dots show examples of demand values for one retailer.
Table \ref{tab:scenarios_param} also shows the sets $C$ and $D$, that contain scenarios with regular demands, considering stochastic and constant lead times, respectively.
Perturbation functions can be given by Gaussian or Uniform distributions.
In the case of scenarios in which demands are generated by Uniform distribution, the demand values are given by $200+\mathcal{U}([-200,200])$, which is the same as saying that they are uniformly sampled from $[0,400]$ interval.
Thus, we have designed the scenarios of sets $C$ and $D$, considering an increasing level of demand uncertainty.
Finally, a variation in stock costs is evaluated with the scenario of set $E$.
\begin{table}
\caption{Experimental scenarios and their differences. Scenario \textit{N20stc} is equal to \textit{N20} except that the stock costs are [1,2,1,2,5,6,5,6].}
\label{tab:scenarios_param}
\addtolength{\tabcolsep}{-2pt}
\begin{tabular}{llcccccc}
\hline\noalign{\smallskip}
\textbf{Set} & \textbf{Scenario} & \phantom{abc} & \textbf{Seasonal Demands} & \textbf{Pert. Function} & $p$ & \phantom{abc} & \textbf{Stochastic Lead times} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \textbf{N0} & & \checkmark & \textit{none} & & & \checkmark \\
A & \textbf{N20} & & \checkmark & $\mathcal{N}$ & 20 & & \checkmark \\
& \textbf{N40} & & \checkmark & $\mathcal{N}$ & 40 & & \checkmark \\
& \textbf{N60} & & \checkmark & $\mathcal{N}$ & 60 & & \checkmark \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \textbf{N0cl} & & \checkmark & \textit{none} & & & \\
B & \textbf{N20cl} & & \checkmark & $\mathcal{N}$ & 20 & & \\
& \textbf{N40cl} & & \checkmark & $\mathcal{N}$ & 40 & & \\
& \textbf{N60cl} & & \checkmark & $\mathcal{N}$ & 60 & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \textbf{rN0} & & & \textit{none} & & & \checkmark \\
C & \textbf{rN50} & & & $\mathcal{N}$ & 50 & & \checkmark \\
& \textbf{rN100} & & & $\mathcal{N}$ & 100 & & \checkmark \\
& \textbf{rU200} & & & $\mathcal{U}$ & [-200,200] & & \checkmark \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \textbf{rN0cl} & & & \textit{none} & & & \\
D & \textbf{rN50cl} & & & $\mathcal{N}$ & 50 & & \\
& \textbf{rN100cl} & & & $\mathcal{N}$ & 100 & & \\
& \textbf{rU200cl} & & & $\mathcal{U}$ & [-200,200] & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
E & \textbf{N20stc$^*$} & & \checkmark & $\mathcal{N}$ & 20 & & \checkmark \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig6.pdf}
\caption{Example of demands for scenarios \textit{N20} and \textit{N60}: solid black lines are the sinusoidal function (representing expected values), dashed gray lines represent the standard deviation of the perturbation ($\mathcal{N}(0,20)$ and $\mathcal{N}(0,60)$, respectively), and blue dots show an instance of demands for one retailer.}
\label{fig:demands}
\end{figure*}
\subsection{Applying PPO2}
\label{subsec:applying_ppo2}
We have chosen the PPO2 algorithm to solve the problem because it achieves high performance in problems with high-dimensional continuous action spaces.
The first step to applying the algorithm is to implement the simulation of the supply chain operation (the environment).
One possible approach would be to implement the environment following exactly the MDP formulation presented in Section \ref{subsec:mdp}.
But PPO2, and other Deep RL algorithms, usually obtain better results if state and action values are normalized in $[-1,1]$ interval \citep{stable-baselines3}.
In fact, in preliminary experiments, we have tried to apply PPO2 without considering state normalization, and also using automatic state normalization (considering running averages), but the following proposed methodology has obtained better results.
The state values are divided by a maximum limit and then scaled from the $[0,1]$ to the $[-1,1]$ interval.
Following the notation used for the NLP model, let $b^s_n$ be the node's stock capacity, $b^p_n$ the supplier's capacity, $d^{max}$ the maximum possible demand value, $l^{max}$ the maximum possible lead time value, and $h$ the planning horizon.
The maximum limits used in the state normalization are:
\begin{itemize}
\item $b^s_n$ for the current stock level of each node.
\item For each supplier:
\begin{itemize}
\item $b^p_n$ for raw material being produced and that will be available in the next time step.
\item $b^p_n*(l^{max}-1)$ for the sum of raw material being produced and that will be available in the time steps after the next one.
\end{itemize}
\item For each other node:
\begin{itemize}
\item $b^s_n+b^s_m$ for material in transport arriving at the node in the next time step, delivered by the predecessor nodes $n$ and $m$.
\item $(b^s_n+b^s_m)*(l^{max}-1)$ for the sum of material in transport arriving at the node in the time steps after the next one, delivered by the predecessor nodes $n$ and $m$.
\end{itemize}
\item $d^{max}$ for customer demands of each retailer.
\item $h$ for the number of remaining time steps until the end of the episode.
\end{itemize}
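Given these limits, the normalization itself is a simple affine map; a minimal sketch (the function name is ours):

```python
def normalize_state_value(value, max_value):
    """Divide by the maximum limit (giving a value in [0, 1]),
    then rescale linearly to the [-1, 1] interval."""
    n01 = value / max_value
    return 2.0 * n01 - 1.0
```

The values shown in Table \ref{tab:state_norm} can be reproduced with this function.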
Table \ref{tab:state_norm} presents the normalization for some of the values of the example state presented in Figure~\ref{fig:mdp_state}.
In this example we consider $b^s_n=500$, $b^p_n=400$, $l^{max}=4$, and $h=360$.
The first column indicates the type of state value, the second column shows the related time step, and the third column the related node.
Column \textit{Value} shows the actual value in the supply chain simulation, column \textit{Max} the maximum possible value, column \textit{N[0,1]} shows the value divided by the maximum value, and, finally, the last column shows the value scaled for $[-1,1]$ interval.
The last column values are used as the input for the PPO2 algorithm.
\begin{table}
\caption{Partial example of a state normalization for some of the values presented in Figure~\ref{fig:mdp_state}.
The actual values from the supply chain simulation are presented in column \textit{Value} and the values used as input for the PPO2 algorithm are presented in the last column.
}
\label{tab:state_norm}
\begin{tabular}{lllrrrr}
\hline\noalign{\smallskip}
\textbf{Type} & \textbf{Time step} & \textbf{Node} & \textbf{Value} & \textbf{Max.} & \textbf{N[0,1]} & \textbf{S[-1,1]} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Stock & $t$ & Supplier1 & 400 & 500 & 0.800 & 0.600 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Raw material & $t+1$ & Supplier2 & 330 & 400 & 0.825 & 0.650 \\
being produced & after $t+1$ & Supplier2 & 105 & 1200 & 0.088 & -0.825 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Material & $t+1$ & Factory1 & 280 & 1000 & 0.280 & -0.440 \\
in transport & after $t+1$ & Factory1 & 420 & 3000 & 0.140 & -0.720 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Customer demands & $t+1$ & Retailer1 & 138 & 400 & 0.345 & -0.310 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Remaining time steps & $t$ & & 330 & 360 & 0.916 & 0.833 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Regarding action values, the output of the PPO2 algorithm is a vector whose values are in the $[-1,1]$ interval.
These values are scaled to the $[0,1]$ interval and then converted into the actual values used in the supply chain simulation.
The conversion from the $[0,1]$ interval to the actual values is done as follows.
Regarding decisions on how much to produce on each supplier, an action value is multiplied by the supplier's capacity ($b^p_n$).
About the decisions related to how much to deliver, let $a_{nm}$ and $a_{no}$ be the action values representing the amount of material to be delivered from a node $n$ to its successor nodes $m$ and $o$.
If the node $n$ is not a factory, the action values are first multiplied by the node's current stock level $S_{in}$, so that we get $a'_{nm} = a_{nm}S_{in}$ and $a'_{no} = a_{no}S_{in}$.
In the case of a factory, a treatment is necessary to ensure that the processing capacity of the factory will be respected.
For this, the action values are first multiplied by the minimum between the factory's current stock level $S_{in}$ and its processing capacity $b^f_n$, so that we get $a'_{nm} = a_{nm}min(S_{in},b^f_n)$ and $a'_{no} = a_{no}min(S_{in},b^f_n)$.
The calculated values, $a'_{nm}$ and $a'_{no}$, are used to define the minimum and maximum cuts in material in stock at the node $n$, given by $c^{min}=min(a'_{nm},a'_{no})$ and $c^{max}=max(a'_{nm},a'_{no})$, respectively.
The value $c^{min}$ indicates the amount of material to be delivered to the successor node $k$ whose scaled value is the smaller one, i.e., such that $a'_{nk}=c^{min}$.
For the other node, the amount of material is given by $c^{max}-c^{min}$.
The remaining material $S_{in} - c^{max}$ is kept in the stock of the node $n$.
With this approach, all possible output values generated by the PPO2 represent feasible action values in the supply chain simulation.
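The conversion above can be sketched in Python (function names are ours; amounts are kept continuous, as in the simulation):

```python
def scale_action(a):
    """PPO2 output in [-1, 1] -> [0, 1]."""
    return (a + 1.0) / 2.0

def production_amount(a, supplier_capacity):
    """Decision on how much raw material to produce at a supplier."""
    return scale_action(a) * supplier_capacity

def delivery_amounts(a_m, a_o, stock, processing_capacity=None):
    """Split a node's stock between two successors using the 'cuts' scheme.
    For a factory, the processing capacity also bounds the usable stock.
    Returns (amount to successor m, amount to successor o, amount kept)."""
    usable = stock if processing_capacity is None else min(stock, processing_capacity)
    a1 = scale_action(a_m) * usable
    a2 = scale_action(a_o) * usable
    c_min, c_max = min(a1, a2), max(a1, a2)
    kept = stock - c_max
    # The successor with the smaller scaled value receives c_min;
    # the other one receives the difference c_max - c_min.
    if a1 <= a2:
        return c_min, c_max - c_min, kept
    return c_max - c_min, c_min, kept
```

By construction, the two delivered amounts plus the kept amount never exceed the available stock, so every PPO2 output is feasible.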
To illustrate how the action values are handled, let's use some of the decisions of the action example presented in Figure~\ref{fig:mdp_action}, considering they are taken after observing the state example presented in Figure~\ref{fig:mdp_state}.
In the example, the decision regarding how much to produce in Supplier1 is $210$.
This value would come from an output value $a=0.050$ from PPO2, which would be scaled to the $[0,1]$ interval as $a'=\frac{a+1}{2}=0.525$.
Finally, the decision value would be calculated as $a' b^p_n = 0.525 * 400 = 210$.
Figure~\ref{fig:action_deliver} shows how to handle the delivery decisions from node Factory1.
The output values from PPO2 would be $0.492$ and $-0.864$ for Wholesaler1 and Wholesaler2, respectively.
These values would be scaled to the $[0,1]$ interval as $0.746$ and $0.068$, and then multiplied by the stock level of Factory1 ($S_{in}=15+280=295$), achieving $220$ and $20$, respectively.
The minimum value $c^{min}=20$ indicates that the amount of material to be delivered to Wholesaler2 is $20$.
For the Wholesaler1 the amount is $c^{max}-c^{min}=220-20=200$ units.
The remaining material, $S_{in} - c^{max} = 295-220=75$, would be kept in stock.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Fig7.pdf}
\caption{Example of the action values regarding how much material to be delivered from Factory1 to the wholesalers.
The output values from PPO2 are scaled to $[0,1]$ interval, then multiplied by Factory1's stock level.
The resulting values are then sorted and viewed as cuts in Factory1's stock.}
\label{fig:action_deliver}
\end{figure*}
We have used preliminary exploratory experiments to take some other decisions regarding the way we have used PPO2.
First, we have decided to use automatic reward normalization (considering running averages).
As mentioned about state and action spaces, it is also advisable to normalize the rewards \citep{stable-baselines3}, and our preliminary experiments confirmed that this approach obtains better results with PPO2.
We have also experimented with giving feedback to the agent only at the end of the episode, i.e., considering costs equal to zero at every time step except the last one, in which the reward is the total accumulated cost.
The idea is that we intend to minimize the total operation costs, and it does not matter how the costs are allocated over the planning horizon.
But the results were not better with this approach, so we kept the rewards as the costs incurred in each time step.
The proposed methodology could be adapted to work with discrete state and action spaces or unlimited capacities.
In the case of discrete values, one possible approach would be to keep the states and actions as continuous values from the agent's point of view but rounding the action values before using them in the supply chain simulation.
Another possible approach would be to use discrete state and action spaces, as PPO2 can work in this setting as well, but we believe that the first approach would obtain better results.
In the case of unlimited capacities, one could use simulation to find upper bounds for the state and action values that would make sense with the customer demands' range.
Then the upper bounds could be used to normalize the values in a similar way we have done in our methodology.
\subsection{The Baseline: LP Agent}
\label{subsec:lpagent}
We have decided to use an LP-based agent as a baseline in our experiments.
Although related works usually use heuristics as baselines, like $(r,Q)$ or base stock, we believe that they are better suited for order-based approaches, or when the chain is linear.
In our case, with a non-serial supply chain, it is not so easy to adapt such heuristics, since it would be tricky to define how to combine the decisions of the two nodes on each echelon.
In fact, the only related work that deals with non-serial supply chains \citep{Perez2021} also uses an LP-based baseline.
Furthermore, as we handle the problem with a production planning approach, we believe that an LP-based baseline using forecast demands and average lead times is a more practical approach.
As mentioned in Section~\ref{subsec:nlp}, the presented NLP model becomes an LP model if we consider forecast demand values and average lead times.
In the experiments, seasonal demands are generated from sinusoidal and perturbation functions, as presented in Section~\ref{subsec:scenarios}.
So we can consider the sinusoidal function without the perturbation as a forecast value for the demands.
In scenarios with regular demands, the forecast value can be defined as the average demand.
Thus, we can solve the LP model considering such forecast demand values and average lead times with an LP solver.
The LP model's solution is then used to encode the LP agent employed as a baseline in the experiments.
The LP agent is built from the decision variables $P_{ijn}$ and $T_{ijnm}$ of the model (production at the suppliers and transport of material, respectively).
The other decision variables (stock, material processed, excess of material, and missing products) do not need to be encoded in the agent, since they will be handled by the simulation of the supply chain.
The values of the used decision variables are normalized and scaled to $[-1,1]$ interval.
Thus, the LP agent can interact with the environment in the same way as the PPO2 agent does.
\subsection{Lower Bounds: Perfect Information LP}
\label{subsec:lower_bounds}
The LP model can also be used to find lower bounds for each experimental scenario, by solving the problem after the fact with perfect information.
If we solve the model using the true demand and lead times values, as if they were known in advance, we obtain the lower bounds for the total operating costs.
The model solution is optimal considering the realization of the demand and lead time values, but it has lower total costs than the optimal value of the original problem with stochastic demands and lead times.
Nevertheless, it can be viewed as an oracle for benchmark purposes.
\subsection{Experimental Methodology}
\label{subsec:exp_meth}
The experimental methodology consists of three parts: hyperparameter tuning, training, and evaluation, as illustrated in Figure~\ref{fig:methodology}.
\begin{figure*}
\includegraphics[width=0.6\textwidth]{Fig8.pdf}
\caption{The experimental methodology consists of three phases: tuning of the PPO2 hyperparameters, training using the best parameter values found, and evaluation of the results using the best PPO2 models.}
\label{fig:methodology}
\end{figure*}
The first part, hyperparameter tuning, is essential to obtain good results with Deep RL algorithms and is presented in the top part of Figure~\ref{fig:methodology}.
The proposed methodology uses 100 different combinations (trials) of hyperparameter values.
In the first 20 attempts, the values are randomly chosen from predefined intervals.
The remaining 80 attempts use the TPE algorithm \citep{bergstra2011algorithms} to choose the parameter values.
For each combination of hyperparameter values, i.e., each attempt, the agent is trained for 3.6 million time steps (equivalent to 10 thousand episodes), with evaluations of the model on every 50 episodes (18 thousand time steps).
Each evaluation step consists of 5 episodes, and the average of the accumulated rewards is used to define the quality of the model.
The best model found in all evaluations is considered as the result of the attempt.
A pruning by median mechanism is used in the last 80 attempts.
Unpromising attempts are early-stopped using the median stopping rule, that is, an attempt is pruned if its intermediate result is worse than the median of the results of the previous trials.
Finally, the values of the parameters used in the attempt with the best results are chosen for the experiments.
The hyperparameter tuning is done considering one scenario, and the resulting parameter values are used for all experimented scenarios.
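The median stopping rule can be sketched as follows (an illustrative implementation with a name of our choosing; in practice the pruning was handled by the tuning library, and we assume here that higher values, i.e., larger average rewards, are better):

```python
import statistics

def should_prune(intermediate_value, previous_trials_values):
    """Median stopping rule: prune an attempt if its intermediate result is
    worse than the median of the completed trials' results."""
    if not previous_trials_values:
        return False  # nothing to compare against yet
    return intermediate_value < statistics.median(previous_trials_values)
```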
After selecting the PPO2 hyperparameter values, the second part of the methodology refers to training the PPO2 agent (middle part of Figure~\ref{fig:methodology}).
It is important to repeat the training considering different random seeds to ensure the robustness of the results achieved by RL algorithms \citep{henderson2019deep}.
Therefore we train the PPO2 agent five times, with different predefined random seeds, for each experimental scenario.
Each training consists of running the algorithm for 7.2 million time steps\footnote{In preliminary experiments, we have tried training runs with 3.6 million time steps but, for several scenarios, the model was still being improved until the end of the training. So, we have decided to run for 7.2 million time steps, and we have verified that the model stopped being improved before the end of the training.}
(or 20 thousand episodes), evaluating the model on every 50 episodes (or 18 thousand time steps), considering 10 episodes on each evaluation step.
The best model found on each training, i.e., for each random seed, is used to evaluate the results.
Finally, the evaluation of the results is the third part of the experimental methodology and is presented in the bottom part of Figure~\ref{fig:methodology}.
The evaluation consists of simulating 100 episodes of the environment for each PPO2 model found in the training process.
The 100 episodes are generated using 10 different predefined random seeds for the environment.
With this approach, a total of 500 evaluation episodes is executed for each scenario, and the resulting metric is the average and standard deviation of the accumulated rewards over all these episodes.
The LP agent, presented in Section \ref{subsec:lpagent}, is used as a baseline to be compared to PPO2.
The agent is built for each scenario from the solution of the LP model.
The model is solved considering average lead times and forecast (average) demands.
As presented, forecast demands mean generating the demands without the perturbation term.
To evaluate the LP agent, we have used the same 100 episodes of the environment used with PPO2, i.e., the same sequence of demands and lead times for each episode.
Therefore, we compare the results obtained by PPO2 and the baseline, considering the same conditions.
\section{Experiments and Results}
\label{sec:experiments}
The experimentation was conducted using Python 3.6.10 on a computer with a 2.9 GHz x 6 processor, 32 GB of RAM, a 6 GB GPU, and Ubuntu Linux 20.04.
The supply chain simulation was implemented in Python following the OpenAI Gym standard \citep{OpenAIGym} and the LP model was solved using CPLEX 12.10 via Python interface.
The PPO2 version of the Stable Baselines 3 (SB3) library \citep{stable-baselines3} was used in the experiments, and the RL Baselines3 Zoo library\footnote{RL Baselines3 Zoo library is a wrapper to use Optuna library \citep{akiba2019optuna} with SB3.} \citep{rl-zoo3} was used for the hyperparameter tuning.
We started the experiments with Stable Baselines 2 \citep{stable-baselines}, but SB3 has achieved better results in preliminary experiments.
Although we have verified that default hyperparameter values were the main reason for the difference between the results of the two versions of the library, we have chosen SB3 as its authors state that PPO2 implementation in this version is closer to the original one.
The remainder of this section is organized as follows.
In Section \ref{subsec:tuning}, we present the hyperparameter tuning phase.
Section \ref{subsec:eval_results} presents the main results for all experimented scenarios.
Section \ref{subsec:stocks_seasonality} shows that, in the scenarios with seasonal demands, the PPO2 agent can build stocks in advance, with seasonality.
Next, in Section \ref{subsec:cost_types}, the analysis of the results is detailed to verify the performance of the PPO2 agent regarding each type of cost.
The learning curves of the PPO2 are discussed in Section \ref{subsec:learning_curve}.
Finally, in Section~\ref{subsec:summary}, we present a summary of the results.
\subsection{Hyperparameter Tuning}
\label{subsec:tuning}
Following the proposed methodology (Section \ref{subsec:exp_meth}), we start with hyperparameter tuning using scenario \textit{N20}.
Table \ref{tab:param_tuning} shows the best values found for the PPO2 hyperparameters.
It is presented for each hyperparameter: the type of sampling (categorical, uniform, or log uniform), the predefined possible values (or interval), the best value found at the end of the tuning process, and the description of the hyperparameter.
Regarding the parameters with fixed values, we have experimented with four actors, and the \texttt{ent\_coef} parameter was fixed to zero since it is intended to be used with discrete action spaces \citep{stable-baselines3}.
The predefined possible values were chosen from preliminary tuning experiments.
We have started with the default values of RL Baselines3 Zoo and then reduced the options of some parameters to the values that achieved the best results.
The code of RL Baselines 3 Zoo library was modified to use the chosen predefined possible values and to fix the values of the first attempt\footnote{The first of the 100 trials was fixed to use the SB3 default hyperparameter values for PPO2, as the library documentation states that they are optimized for continuous problems \citep{stable-baselines3}.}.
Regarding optimizer, we have used the Adam method for gradient optimization \citep{kingma2017adam}.
The training of all proposed scenarios, the second step of the methodology, was done using the best hyperparameter values found in the tuning process.
\begin{table}
\caption{Best PPO2 parameter values found in hyperparameter tuning; column S indicates the sampling options, which are C: categorical, U: sampled from the interval in the linear domain, L: sampled from the interval in the log domain, and -: fixed}
\label{tab:param_tuning}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
\textbf{Parameter} & \textbf{S.} & \textbf{Possible values} & \textbf{Best value} & \textbf{Description} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\texttt{n\_steps} & C & $2^{5}, 2^{6}, ..., 2^{11}$ & 1,024 & $\tau$ parameter in Algorithm \ref{alg:PPO} \\
\texttt{n\_epochs} & C & [3, 5, 10, 20] & 20 & $K$ parameter in Algorithm \ref{alg:PPO} \\
\texttt{batch\_size} & C & [64, 128, 256, 512] & 64 & Mini-batch size in Algorithm \ref{alg:PPO} \\
\texttt{vf\_coef} & U & [0, 1] & 0.88331 & $c_1$ in Equation \ref{eq:PPO_obj_function} \\
\texttt{clip\_range} & C & [0.1, 0.2, 0.3] & 0.2 & $\epsilon$ in Equation \ref{eq:PPO_Lclip} \\
\texttt{gae\_lambda} & C & [0.9, 0.92, 0.95, 0.98, 1.0] & 0.95 & $\lambda$ used to calculate GAE \\
\texttt{gamma} & C & [0.95, 0.98, 0.99, 0.995, 0.999, 0.9999] & 0.999 & $\gamma$ used to calculate GAE \\
\texttt{net\_arch} & C & [(64,64), (128,128), (256,256)] & (64,64) & Units of ANN's hidden layers \\
\texttt{lr\_schedule} & C & [constant, linear] & constant & Learning rate schedule \\
\texttt{learning\_rate} & L & [0.00001, 0.01] & 0.0001 & Gradient method's step size \\
\texttt{activation\_fn} & C & [ReLU, TanH] & TanH & ANN's activation function \\
\texttt{max\_grad\_norm} & C & [0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 5] & 0.5 & To clip normalized gradients \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\texttt{n\_actors} & - & 4 & 4 & $N$ in Algorithm \ref{alg:PPO} \\
\texttt{ent\_coef} & - & 0 & 0 & $c_2$ in Equation \ref{eq:PPO_obj_function} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}}
\end{center}
\end{table}
\subsection{Main Results}
\label{subsec:eval_results}
Table \ref{tab:scenarios_results} summarizes the results of the experiments by comparing PPO2 and LP agents for all scenarios (detailed results are available in Online Resource 1).
The first column shows the set of scenarios, the second column indicates whether demands are seasonal or regular, and the third column whether lead times are constant or stochastic.
The fourth column shows the scenario's name, the fifth and sixth columns present the lower bounds for the total operating costs (i.e., optimal solution if demands and lead times were known in advance).
The following four columns present the average and the standard deviation of the total operating costs for the LP and PPO2 agents.
The last two columns show the gain of PPO2 over the LP agent (difference and percentage, respectively).
\begin{table}
\caption{Results for all considered scenarios: each set of scenarios indicates whether demands are seasonal or regular and whether lead times are constant or stochastic.
The table presents, for each scenario, the average and standard deviation of the total operating costs regarding the lower bounds, LP-agent (the baseline), and PPO2 agent.
The last two columns present the gain of PPO2 over LP (difference and percentage, respectively).
The numbers in the scenarios' names indicate the perturbation of the demand.}
\label{tab:scenarios_results}
\addtolength{\tabcolsep}{-3pt}
\begin{tabular}{lcclrrrrrrrr}
\hline\noalign{\smallskip}
\textbf{Set} & \textbf{Seas.} & \textbf{Stoch.} & \textbf{Scenario} & \multicolumn{2}{c}{\textbf{Lower Bound}} & \multicolumn{2}{c}{\textbf{LP agent}} & \multicolumn{2}{c}{\textbf{PPO2 agent}} & \multicolumn{2}{c}{\textbf{Gain}} \\
& \textbf{Dem.} & \textbf{Lead T.} & & \textit{Avg} & $\sigma$ & \textit{Avg} & $\sigma$ & \textit{Avg} & $\sigma$ & value & \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \checkmark & \checkmark & \textbf{N0} & 8,004 k & 27 k & 10,298 k & 195 k & 9,147 k & 125 k & 1,151 k & 11.2 \% \\
\textbf{A} & \checkmark & \checkmark & \textbf{N20} & 8,005 k & 49 k & 10,316 k & 207 k & 9,252 k & 157 k & 1,065 k & 10.3 \% \\
& \checkmark & \checkmark & \textbf{N40} & 8,008 k & 88 k & 10,393 k & 237 k & 9,492 k & 196 k & 901 k & 8.7 \% \\
& \checkmark & \checkmark & \textbf{N60} & 8,010 k & 128 k & 10,503 k & 276 k & 9,737 k & 206 k & 766 k & 7.3 \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \checkmark & & \textbf{N0cl} & 7,941 k & 0 & 7,941 k & 0 & 8,017 k & 7 k & -76 k & -1.0 \% \\
\textbf{B} & \checkmark & & \textbf{N20cl} & 7,944 k & 42 k & 8,226 k & 61 k & 8,231 k & 80 k & -6 k & -0.1 \% \\
& \checkmark & & \textbf{N40cl} & 7,951 k & 84 k & 8,501 k & 118 k & 8,478 k & 162 k & 22 k & 0.3 \% \\
& \checkmark & & \textbf{N60cl} & 7,958 k & 124 k & 8,740 k & 171 k & 8,720 k & 201 k & 20 k & 0.2 \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& & \checkmark & \textbf{rN0} & 7,806 k & 8 k & 9,405 k & 142 k & 8,565 k & 49 k & 840 k & 8.9 \% \\
\textbf{C} & & \checkmark & \textbf{rN50} & 7,804 k & 91 k & 9,557 k & 257 k & 8,811 k & 124 k & 746 k & 7.8 \% \\
& & \checkmark & \textbf{rN100} & 7,808 k & 174 k & 9,941 k & 388 k & 9,104 k & 235 k & 837 k & 8.4 \% \\
& & \checkmark & \textbf{rU200} & 7,817 k & 262 k & 10,143 k & 486 k & 9,219 k & 303 k & 924 k & 9.1 \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& & & \textbf{rN0cl} & 7,652 k & 0 & 7,652 k & 0 k & 7,778 k & 3 k & -126 k & -1.6 \% \\
\textbf{D} & & & \textbf{rN50cl} & 7,647 k & 89 k & 8,283 k & 130 k & 8,098 k & 93 k & 185 k & 2.2 \% \\
& & & \textbf{rN100cl} & 7,666 k & 173 k & 8,747 k & 240 k & 8,402 k & 180 k & 345 k & 3.9 \% \\
& & & \textbf{rU200cl} & 7,714 k & 256 k & 8,985 k & 308 k & 8,565 k & 198 k & 420 k & 4.7 \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{E} & \checkmark & \checkmark & \textbf{N20stc} & 8,706 k & 101 k & 12,685 k & 249 k & 11,673 k & 243 k & 1,012 k & 8.0 \% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Scenarios of set $A$ have stochastic lead times and seasonal demands.
As can be seen in Table \ref{tab:scenarios_results}, the PPO2 agent's gain over the LP agent is between 7.3\% and 11.2\%.
Regarding scenarios of set $B$, they have constant lead times and seasonal demands.
In this case, the PPO2 agent performs very close to the LP agent, with a difference between -1.0\% and 0.3\%.
It is interesting to notice that in scenario \textit{N0cl}, which has no uncertainty, the LP agent result is, in fact, an optimal value.
Therefore, in this scenario, the PPO2 agent achieves a 1.0\% optimality gap.
PPO2 agent performs better than the baseline in scenarios of set $C$, with regular demands and stochastic lead times, by 7.8\% to 9.1\%.
It is also better in scenarios of set $D$, which have regular demands but with constant lead times, by 2.2\% to 4.7\% (except in scenario \textit{rN0cl}, which has no uncertainty, and therefore LP agent achieves an optimal value).
Finally, PPO2 is also better in the scenario of set $E$, which has different stock costs.
This scenario is similar to \textit{N20}, so demands are seasonal and lead times stochastic, and PPO2 gain is 8.0\%.
Figure \ref{fig:results_CI} shows 95\% confidence intervals of average results obtained by PPO2 and LP agents.
We have used bootstrapped sampling \citep{efron1994introduction}, with 10 k iterations and the pivotal method, to generate statistically relevant confidence intervals, as suggested by \cite{henderson2019deep}.
We find that PPO2 has small confidence bounds from the bootstrap, showing that the mean value is representative of the performance of the algorithm.
The PPO2 confidence intervals are smaller than the LP agent intervals, but this is expected since we have more data for PPO2 (we evaluate the same 100 episodes for both agents, but for PPO2 this is done for each of the five resulting models).
Regarding the distance between the PPO2 and LP intervals, we can see that only in scenarios of set B (constant lead times and seasonal demand) the confidence intervals of both agents overlap.
In the scenarios of the other sets, the difference is significant, especially in scenarios of sets A and C.
Considering set D (except scenario \textit{rN0cl} that has no uncertainty), although the difference between the agents is quite small (2.2 to 4.7\%), the confidence intervals do not overlap, so we can say that PPO2 is statistically better than LP.
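For reference, the pivotal (basic) bootstrap interval used above can be sketched in a few lines. This is a minimal sketch, not the actual experimental pipeline; the sample values below are hypothetical stand-ins for the per-episode costs.

```python
import numpy as np

def pivotal_bootstrap_ci(sample, n_boot=10_000, alpha=0.05, seed=0):
    """Pivotal (basic) bootstrap confidence interval for the mean of `sample`."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    theta_hat = sample.mean()
    # resample with replacement and recompute the statistic n_boot times
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    boot_means = sample[idx].mean(axis=1)
    lo_q, hi_q = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    # the pivotal interval reflects the bootstrap quantiles around theta_hat
    return 2 * theta_hat - hi_q, 2 * theta_hat - lo_q

# hypothetical per-episode results (in millions) for one agent in one scenario
costs = np.random.default_rng(1).normal(loc=-9.3, scale=0.4, size=100)
lo, hi = pivotal_bootstrap_ci(costs)
print(f"mean = {costs.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Two intervals computed this way are then compared by checking whether they overlap, as in Figure \ref{fig:results_CI}.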
\begin{figure*}
\includegraphics[width=\textwidth]{Fig9.pdf}
\caption{95\% confidence intervals for average results of PPO2 and LP agents, obtained via bootstrapped sampling (with 10 k iterations and pivotal method).
The intervals overlap only in scenarios of set B.
In the scenarios of the other sets, the difference between the results achieved by PPO2 and LP are significant.
}
\label{fig:results_CI}
\end{figure*}
If we look at the results focusing on constant vs. stochastic lead times, we can see that PPO2 is a good tool to use with stochastic lead times.
The technique is better than the baseline in all 9 scenarios with uncertain lead times (between 7.3\% and 11.2\%), regardless of whether demands are seasonal or not.
PPO2 is also better considering constant lead times if demands are regular with higher uncertainty (2.2\% to 4.7\%).
Considering constant lead times and seasonal demands, PPO2 achieves roughly the same level of performance as the baseline (between -1.0\% and 0.3\%).
Now, focusing on demands, we can see that the costs (and variance) of the PPO2 agent grow with the level of uncertainty, as would be expected.
In scenarios with regular demands or constant lead times, the difference between PPO2 and LP agents, in general, also grows with the uncertainty of the demands.
In scenarios with seasonal demands and stochastic lead times, the difference between the two agents decreases with the uncertainty of the demands.
\subsection{Stocks with Seasonality}
\label{subsec:stocks_seasonality}
In this research, we use the PPO2 algorithm to solve the addressed problem considering seasonal demands.
Thus, it is interesting to evaluate whether the agent can build stocks with seasonality.
Figure \ref{fig:n20_by_step} shows different types of costs over the planning horizon regarding scenario \textit{N20}.
The values are the average of all evaluated episodes and they refer to the sum for all nodes of the chain.
The top graph shows production at suppliers, stock, and transport costs, and the bottom graph shows unmet demand costs.
The sum of the demands for both retailers is also included in the bottom graph for reference.
We can see that stocks are built following the demand sinusoidal pattern with a small shift, showing that the PPO2 agent starts to build stocks before demands start rising.
The graphs also show that unmet demands occur when stocks reach their lowest levels.
It can be noted that production at suppliers and transport of material also follow the sinusoidal pattern of the demands.
Another observation is that PPO2 was able to decrease production at the end of the episode, as production at suppliers, stock, and transport costs decay at the final time steps.
Figure \ref{fig:n20cl_by_step} shows the same type of data for scenario \textit{N20cl} and the behavior is similar to that observed for scenario \textit{N20}.
Comparing both scenarios, we can see that, to manage the uncertainty of the lead times, the level of stocks in scenario \textit{N20} is higher than in scenario \textit{N20cl}.
Another difference is that the variance of the curves is bigger in the scenario with stochastic lead times.
Finally, PPO2 can better meet customer demands in scenario \textit{N20cl}, with constant lead times.
Other scenarios with seasonal demands also have similar behaviors, but they are not reported here for the sake of space.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig10.pdf}
\caption{The average amount of material by type and by time step considering all evaluated episodes for scenario \textit{N20}.
Shaded areas denote $\pm 1$ standard deviation, and values refer to the sum for all nodes of the chain.
Stocks follow the sinusoidal pattern of the demands.}
\label{fig:n20_by_step}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig11.pdf}
\caption{The average amount of material by type and by time step considering all evaluated episodes for scenario \textit{N20cl}. Shaded areas denote $\pm 1$ standard deviation. Behaviour is similar to that observed for scenario \textit{N20} in Figure \ref{fig:n20_by_step}, but with a lower level of stocks and lower variance.}
\label{fig:n20cl_by_step}
\end{figure*}
\subsection{Types of Cost}
\label{subsec:cost_types}
Another valuable analysis is to investigate which types of costs are responsible for the difference between the agents.
Let us evaluate this in the scenarios with seasonal demands of sets A and B, whose final results are presented in Figure \ref{fig:seasonal_totalcosts}.
As mentioned, PPO2 is better than the LP agent in scenarios with stochastic lead times and has roughly the same level of performance with constant lead times.
Figure \ref{fig:seasonal_costsbytype} shows the composition of the final costs for each scenario, considering each type of cost.
The values are the average of all evaluated episodes.
It is important to notice that the vertical axis of each graph has a different range.
First, considering stochastic lead times, we can see that the main reason why PPO2 has lower costs is that it is more efficient in meeting customer demands.
PPO2 can better meet demands for two reasons: by producing more material (leading to bigger operations costs related to production at suppliers, processing, and transport of material); and by having less material discarded due to excess material in stocks (in other words, by better respecting the stock capacities).
We can also see that, in the case of the PPO2 agent, the level of stock grows with demand perturbation.
Now, let us analyze the scenarios with constant lead times.
As demand uncertainty grows, both agents lose more customer demands, but PPO2 becomes more efficient than the LP agent (PPO2 is worse only in the scenario with no demand perturbation, in which the LP agent achieves an optimal value).
However, to better meet demands, the PPO2 agent needs to build bigger stocks, while other operations costs are pretty similar.
In real-world scenarios, with uncertain seasonal demands and constant lead times, PPO2 would be a more viable option if the level of uncertainty for the demands is high, or if it is difficult to model the problem (or even to get accurate values for its parameters).
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Fig12.pdf}
\caption{Results for scenarios of sets $A$ and $B$ (seasonal demands);
\textit{sl} means stochastic lead times and \textit{cl} constant lead times.
The horizontal axis refers to the level of demand perturbation, and the vertical axis refers to the total operating costs.
Shaded areas denote $\pm 1$ standard deviation.
LP agent is represented by dashed lines and triangular markers while PPO2 by solid lines and circular markers.
PPO2 is pretty close to LP agent considering constant lead times and is better with stochastic lead times.}
\label{fig:seasonal_totalcosts}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig13.pdf}
\caption{Costs by type for scenarios of sets $A$ and $B$ (seasonal demands);
\textit{sl} means stochastic lead times and \textit{cl} constant lead times.
The horizontal axis refers to the level of demand perturbation, and the vertical axis refers to the total operating costs by type.
LP agent is represented by dashed lines and triangular markers while PPO2 by solid lines and circular markers.}
\label{fig:seasonal_costsbytype}
\end{figure*}
The same type of analysis, regarding types of cost, was also done for scenarios of sets $C$ and $D$, considering regular demands.
Figure \ref{fig:regular_totalcosts} shows the final results, i.e., the total operating costs for each scenario, comparing PPO2 and LP agents.
PPO2 is better than the baseline in almost all scenarios, and the greater the uncertainty, the bigger the difference.
The LP agent is better only in scenario \textit{rN0cl}, in which there is no uncertainty and, therefore, the LP agent achieves an optimal value.
The comparison regarding the type of cost is presented in Figure \ref{fig:regular_costsbytype}.
The PPO2 agent better meets customer demands in all scenarios and has lower levels of stock penalties (except in scenario \textit{rN0cl}).
The PPO2 agent achieves better final results by operating a higher amount of material.
Regarding the stock, the PPO2 agent keeps less material than the baseline in scenarios with stochastic lead times and has similar levels when considering constant lead times.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Fig14.pdf}
\caption{Results for scenarios of sets $C$ and $D$ (regular demands);
\textit{sl} means stochastic lead times and \textit{cl} constant lead times.
The horizontal axis is the level of demand perturbation, and the vertical axis is the total operating costs.
Shaded areas denote $\pm 1$ standard deviation.
LP agent is represented by dashed lines and triangular markers while PPO2 by solid lines and circular markers.
PPO2 is better than LP in all scenarios (except \textit{rN0cl}, where there is no uncertainty and the LP agent achieves an optimal value), and the difference is higher with stochastic lead times.}
\label{fig:regular_totalcosts}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig15.pdf}
\caption{Costs by type for scenarios of sets $C$ and $D$ (regular demands);
\textit{sl} means stochastic lead times and \textit{cl} constant lead times.
The horizontal axis is the level of demand perturbation, and the vertical axis is the total operating costs by type.
LP agent is represented by dashed lines and triangular markers while PPO2 by solid lines and circular markers.}
\label{fig:regular_costsbytype}
\end{figure*}
\subsection{Learning Curves}
\label{subsec:learning_curve}
The learning curves of PPO2 agent training, for scenario \textit{N20}, are shown in Figure \ref{fig:learning_curves}.
The vertical axis refers to the total cumulative rewards by episode, and the horizontal axis refers to the number of time steps during training.
The left graph shows the actual learning curve for each one of the five training runs.
As the PPO2 agent learns a stochastic policy, the learning curve is a lower bound of the performance of the algorithm \citep{stable-baselines3}.
So, to better evaluate such a metric, the right graph shows the mean values related to the evaluations carried out during training.
As mentioned in the proposed experimental methodology (Section \ref{subsec:exp_meth}), the model is evaluated every 50 episodes, or 18 thousand time steps, considering 10 episodes on each evaluation step.
So, the values shown in the right graph refer to the mean of those 10 episodes for each evaluation step.
We can see that, at the beginning of the training, cumulative rewards are between -20 and -16 million, i.e., the total operating costs of the initial solutions are on the order of 16 to 20 million.
After an initial period of exploration, the results start to improve quickly, and the cumulative rewards reach -10 million after around 1 million time steps of training.
After this point, the improvements are slower and the values tend to converge after 4 million time steps.
As presented, the final solution for this scenario is around -9.3 million.
These curves show that the PPO2 agent was able to learn how to operate the supply chain starting from an essentially random policy.
They also show that the learning process stabilizes before the end of the training.
Learning curves for other scenarios follow roughly the same pattern and are not presented here for the sake of space.
Concerning the execution time of the algorithm, each training run took, on average, less than 220 minutes.
It is important to note that, after the RL model has been trained, its application has a very short execution time.
Given the current state of the supply chain, the trained neural network immediately outputs the decisions for the next time step.
Thus, even though training may take a substantial amount of time, applying the model is fast, which is a considerable advantage in possible real-time decision-making scenarios.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig16.pdf}
\caption{Learning curves for scenario \textit{N20}: the left graph shows the actual learning curves regarding the five training runs; the right one shows the mean value of the evaluations carried out during the same training runs.}
\label{fig:learning_curves}
\end{figure*}
\subsection{Results Summary}
\label{subsec:summary}
The experiments have shown that PPO2 can be a good tool to use in scenarios with stochastic lead times, regardless of whether demands are seasonal or not.
In these scenarios, PPO2 was better than baseline (LP agent) mainly by better meeting the uncertain demands.
These results were achieved by operating a higher amount of material while better respecting the stock capacities.
The algorithm can also be useful in scenarios with constant lead times and non-seasonal demands, especially with higher demand uncertainty.
Regarding scenarios with constant lead times and seasonal demands, the PPO2 algorithm and the baseline (LP agent) have achieved similar results.
In such situations, PPO2 would be a more viable option if it is difficult to model the problem or to get accurate values for its parameters.
Considering all scenarios, the greater the uncertainty, the bigger the costs of the solutions found by the algorithm, as would be expected.
We have also verified that PPO2 can build stock with seasonality.
The results have shown that the agent starts to build stocks before demands start rising and that higher stock levels are used when the lead times are stochastic.
Finally, the learning curves have shown that PPO2 was able to learn how to operate the supply chain and that the learning process stabilizes before the end of the training.
\subsubsection{Managerial Implications}
In this work, we have addressed the supply chain problem in a production planning approach, i.e., the decisions of the whole supply chain are based on ultimate customer demands, as recommended by \cite{Lee1997} to counteract the bullwhip effect.
There is a single agent that controls all the chain operations as a central decision-maker.
The results of the experiments have shown that, in this context, the PPO2 algorithm can be a good practical choice, especially if the lead times are stochastic, or the demands have higher uncertainty.
In scenarios with lower demand uncertainty (and constant lead times), strategies based on forecasts or average values can easily handle the problem, but the bigger the uncertainty, the more difficult it is to get good results with this type of approach.
The PPO2 algorithm is a policy-based algorithm that approximates the policy using ANNs.
Thus, it can handle the problem without the need to aggregate state or action values and can therefore better explore the solution space.
The results have shown that the algorithm can build stocks with seasonality, minimizing bullwhip effect issues.
Another characteristic of the PPO2 algorithm is that the final model (the solution) can be improved by continuing the agent's training.
This can be interesting to adapt the model after a change in the practical scenario, e.g., a new distribution of the demands or lead times, or a modification in some capacity, etc.
Finally, as a model-free Deep RL method, the proposed solution method only needs the simulation of the supply chain.
This can be an advantage in scenarios in which it is difficult to get a precise model of the supply chain operation or to get accurate values for its parameters.
\section{Conclusions}
\label{sec:conclusions}
Decision-making under uncertainty has a strong practical appeal in logistics management due to the inherent complexities of the involved processes.
Artificial Intelligence applications in supply chain planning problems can be a way to improve logistics management and have been increasingly explored in the literature.
In the present work, we have used a Deep RL approach (PPO2) to solve a production planning and product distribution problem in a multi-echelon supply chain with uncertain seasonal demands and lead times.
We have explored 17 different scenarios in a supply chain with four echelons and two nodes per echelon considering a planning horizon of 360 time steps.
On each time step, the RL agent needs to decide how much raw material to produce in the first echelon nodes and the amount of material to be sent from each node to the nodes of the next echelon (stock levels are indirectly defined).
The goal is to meet uncertain customer demands in the last echelon nodes while minimizing all incurred costs (operation costs, such as production at suppliers, stock, transport, processing; and penalization costs: if demand is not met, or a stock capacity is exceeded).
We have built upon our previous work \citep{AlvesICCL} adding uncertain seasonal demands, stochastic lead times, and manufacturers' capacities.
The formalizations of the problem, an MDP formulation and an NLP model, have been extended to account for the changes in the problem.
To the best of our knowledge, the present and our previous works are the first ones to use Deep RL to handle the problem with a production planning approach in a supply chain with more than two echelons.
Therefore, the problem solved here has higher-dimensional state and action spaces and is harder to solve than those of related works.
Another contribution is that we are the first to solve the problem with stochastic lead times using a Deep RL method, even if we consider related works that handle the problem in an order-based approach.
We have followed a robust experimental methodology to verify the quality and suitability of the PPO2 algorithm on the proposed problem.
Firstly, we have conducted a hyperparameter tuning process to choose the best values for the algorithm's hyperparameters.
Next, we have used such values to solve the problem in different scenarios, considering multiple training runs with different random seeds.
Finally, the results have been evaluated considering 100 episodes for each trained model.
We have compared the achieved results with an LP agent baseline, built from the solution of an LP model, considering forecast demands and average lead times.
The PPO2 agent is better than the baseline in all scenarios with stochastic lead times (7.3-11.2\%), regardless of whether demands are seasonal or not.
In scenarios with constant lead times, the PPO2 agent is better when uncertain demands are non-seasonal (2.2-4.7\%).
If uncertain demands are seasonal and lead times are constant, PPO2 and LP have roughly the same performance.
The experimental results indicate that PPO2 is a competitive and suitable tool for the addressed problem and that the greater the uncertainty of the scenario, the greater the viability of this type of approach.
A detailed analysis regarding seasonal stock building and types of cost is also presented and discussed (Sections \ref{subsec:stocks_seasonality} and \ref{subsec:cost_types}).
In real-world OR problems, uncertainties in the parameters of a planning model are very common.
As model-free approaches, Deep RL techniques can be useful in such situations, potentially avoiding excess capacities and stocks.
Another advantage would be in real-time problems, in which a previously trained Deep RL model can be executed very quickly.
In future works, we intend to compare PPO2 with other Deep RL algorithms to verify which is the most appropriate RL technique for the proposed problem.
Another possible path is to use stochastic programming approaches to solve the NLP model and compare it with those Deep RL algorithms.
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\bibliographystyle{spbasic}
\section{Introduction}
The theory of quantum phase transitions \cite{sachdev2001quantum,Vojta2003}
provides a framework from which the low-temperature behavior of many
condensed-matter systems can be understood.
The quantum critical
point separates an insulating gapped phase and a gapless conducting phase.
Of particular importance are
magnetic insulators \cite{Zapf2014,Giamarchi2008}, for which the quantum critical regime
can be experimentally accessed through an applied magnetic field. In these systems,
the gapped phases are associated with magnetization plateaus in the magnetization curves.
In one dimension, magnetization plateaus can be
understood as a topological effect through the
Oshikawa, Yamanaka, and Affleck (OYA) argument \cite{PhysRevLett.78.1984}, which generalizes
the Lieb-Schultz-Mattis theorem \cite{Lieb1961}.
The OYA argument asserts that a magnetization plateau is possible only if $(S_u-m_u)=\text{integer}$, where
$m_u$ is the ground-state magnetization and $S_u$ is the sum of the spins in a unit period of the ground state.
If the ground state does not present spontaneous translation symmetry breaking, $S_u$ is equal to the
fully polarized magnetization per unit cell, while $m_u$ is the magnetization per unit cell of the system.
The OYA argument was further extended \cite{Oshikawa1999} to models in higher dimensions and to charge degrees of freedom.
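As a worked example (anticipating the alternating $(s=1/2,\,S=1)$ chain studied below, for which $S_u=s+S=3/2$ per unit cell), the OYA condition selects exactly the two plateaus discussed in Sec. \ref{sec:model-qmc}:

```latex
% OYA condition: a plateau at magnetization m_u requires (S_u - m_u) = integer.
\begin{align*}
 m_u &= S-s = \tfrac{1}{2}: & S_u-m_u &= \tfrac{3}{2}-\tfrac{1}{2}=1
    &&\text{(ferrimagnetic plateau allowed)},\\
 m_u &= s+S = \tfrac{3}{2}: & S_u-m_u &= 0
    &&\text{(fully polarized plateau allowed)},\\
 m_u &= 1: & S_u-m_u &= \tfrac{1}{2}
    &&\text{(not an integer: no plateau without translational symmetry breaking)}.
\end{align*}
```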
Since the gap to a magnon excitation closes at the endpoints of magnetization plateaus, these endpoints are quantum critical points.
In three-dimensional systems, this transition is
in the same universality class of the Bose-Einstein condensation \cite{Giamarchi2008,PhysRevB.43.3215}
and was studied in a variety of magnetic insulators \cite{Giamarchi2008,Zapf2014,Paduan-Filho2012}.
In the magnetic system, the magnetization and the magnetic field play the role of the boson density and of the chemical potential, respectively,
of the bosonic model.
In one dimension the mapping to a hard-core boson model or a spinless fermion system \cite{PhysRevB.43.3215}
implies a square-root singularity in the magnetization curve: $m\sim \sqrt{|B-B_c|}$ as $B\rightarrow B_c$;
and, if three-dimensional couplings are present, the condensate can be stabilized at temperatures below
that of the three-dimensional ordering \cite{PhysRevB.43.3215}.
Exactly at the quantum critical field, the magnons have a classical dispersion relation,
$\omega\sim q^2$, where $q$ is the lattice wave-vector. In one dimension, this quantum critical field separates a gapped phase
from a gapless Luttinger liquid (LL)
phase \cite{giamarchi2003quantum,GIAMARCHI2012}, with excitations showing a linear dispersion relation, $\omega\sim q$.
The predictions of the Luttinger liquid theory in magnetic insulators with a magnetic field, including the quantum critical regime,
were investigated in many materials \cite{Ward2017,e247202,PhysRevB.83.054407}.
For finite temperatures and $B\approx B_c$, the quantum critical regime is observed, and the crossover
line \cite{PhysRevLett.99.057205} to the LL regime is given by $T(B)\sim a|B-B_c|$,
with a universal, model-independent, coefficient $a$.
One-dimensional ferrimagnets \cite{JBCS2008,PhysRevB.29.5144} show spontaneous
magnetization at $T=0$, as expected from the Lieb and Mattis theorem \cite{Lieb.Mattis},
and a gap in the excitation spectrum is responsible for a magnetization plateau in
their magnetization curves at the ground-state magnetization value.
In zero field, the critical properties in the vicinity of the thermal critical point at $T=0$ were studied
in the isotropic \cite{PhysRevLett.78.4853,*PhysRevB.59.14384,AlcarazandMa}
and anisotropic cases \cite{AlcarazandMa}.
Interesting physics emerges through the introduction of factors that destabilize
the ferrimagnetic state, such as
doping \cite{PhysRevLett.74.1851,RenePRB2006,PhysRevB.59.7973,Rojas2012,Lopes2014,Montenegro-Filho2014,Kobayashi2016} or
geometric frustration \cite{Hida1994,Takano1996,RenePRB2008,Ivanovart10,Shimokawa2011,Furuya2014,Amiri2015,StreckaPRB2017,Hida2017}.
The spin-wave theory \cite{Noriki2017} of ferrimagnetic chains
\cite{PatiJPCM1997,*PhysRevB.55.8894,
Brehmer1997,PhysRevB.57.R14008,PhysRevB.57.13610,Maisinger1998,JCPM.10.11033.1998,Ivanov2000,PhysRevB.69.06,Noriki2017}
was developed from the classical ferrimagnetic ground state,
considering free and interacting magnons, with emphasis on zero-field properties.
The magnetization curves of these systems under an applied magnetic field were
discussed mainly through numerical methods \cite{PatiJPCM1997,*PhysRevB.55.8894,Maisinger1998,Gu2006,PhysRevB.80.014413,Gong2010,ReneJPCM2011,Strecka2017,StreckaActa2017}.
In this work, we investigate the spin-wave theory of ferrimagnetic alternating chains at
low temperatures and in the presence of a magnetic field. We compare some results with
quantum Monte Carlo (QMC) data, obtained using the stochastic series expansion method code from the Algorithms and Libraries for
Physics Simulations (ALPS) project \cite{Bauer2011}, with $1\times 10^6$ Monte Carlo steps.
We consider spin-wave excitations from the ferrimagnetic and fully polarized classical states.
In the ferrimagnetic case, we consider interacting spin-waves, while in the
fully polarized case, only free spin-waves are discussed. Considering the whole range of magnetization values,
from zero to saturation, the two approaches present similar deviations from
the QMC data. We further develop the theory built from the ferromagnetic (fully polarized) ground state and obtain the
crossover lines bounding the plateau and LL regimes. In particular, we show that susceptibility and magnetization
data can be used to identify a crossover between two LL regimes: one built from excitations of the ferrimagnetic
state, and the other from excitations of the fully polarized one.
This paper is organized as follows. In Sec. \ref{sec:model-qmc} we present the Hamiltonian model and discuss
the magnetization curves from QMC calculations. In Sec. \ref{sec:sw-theory}
the spin-wave theories from the ferrimagnetic (FRI) and fully polarized (FP) classical states are discussed,
in particular the methodology used to obtain the respective
magnetization curves at finite temperature, and their results
are compared with QMC data. In Sec. \ref{sec:ll-regime},
we study LL and plateau regimes at finite temperature through the free spin-wave (FSW) theory from
the FP vacuum (FSW-FPv). Finally, in Sec. \ref{sec:summary-pd} we summarize our results
and sketch the $T$-$B$ phase diagram from the FSW-FPv theory of the alternating (1/2,1) spin chain.
\section{Model Hamiltonian and QMC magnetization curves}
\label{sec:model-qmc}
An alternating spin ($s$, $S$) chain has two kinds of spin, $S$ and $s$, alternating on a ring with antiferromagnetic superexchange coupling $J$
between nearest neighbors, and described by the Hamiltonian
\begin{equation}
\mathcal{H}=J\sum_{j=1}^N\Big(\mathbf{s}_{j}\cdot\mathbf{S}_{j} + \mathbf{s}_{j}\cdot \mathbf{S}_{j+1}\Big)-B\sum_{j=1}^N(S_{j}^{z} + s_{j}^{z}),
\label{HeisFerri}
\end{equation}
where $B$ is the magnetic field and $N$ denotes the number of unit cells. We assume $S>s$ and consider equal
$g$-factors for all spins, defining $g\mu_B=1$, where
$\mu_B$ is the Bohr magneton. The magnetization per unit cell is given by
\begin{equation}
m = \frac{1}{N}\sum_{j=1}^{N}(S_{j}^{z} + s_{j}^{z}).
\end{equation}
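As an illustrative numerical check of Eq. (\ref{HeisFerri}) (a sketch, not part of the QMC methodology used in this work), the Hamiltonian can be diagonalized exactly for a tiny ring: below, $N=2$ unit cells of the $(s=1/2,\,S=1)$ chain, with a small field to select the highest-$S^z$ member of the ground-state multiplet, verifying the Lieb-Mattis value of the total ground-state magnetization, $N(S-s)=1$.

```python
import numpy as np

def spin_ops(s):
    """Sz, S+, S- matrices for spin quantum number s, basis |s,m>, m = s..-s."""
    dim = int(round(2 * s)) + 1
    mvals = np.array([s - k for k in range(dim)])
    Sz = np.diag(mvals)
    Sp = np.zeros((dim, dim))
    for k in range(1, dim):          # S+|s,m> = sqrt(s(s+1) - m(m+1)) |s,m+1>
        mm = mvals[k]
        Sp[k - 1, k] = np.sqrt(s * (s + 1) - mm * (mm + 1))
    return Sz, Sp, Sp.T

def site_op(op, site, dims):
    """Embed a single-site operator in the full tensor-product space."""
    full = np.eye(1)
    for i, d in enumerate(dims):
        full = np.kron(full, op if i == site else np.eye(d))
    return full

# ring with N = 2 unit cells of the (s = 1/2, S = 1) chain: sites s1 S1 s2 S2
spins = [0.5, 1.0, 0.5, 1.0]
dims = [int(round(2 * s)) + 1 for s in spins]
ops = [spin_ops(s) for s in spins]
J, B = 1.0, 1e-4        # tiny B picks the Sz = +N(S-s) member of the multiplet

def bond(i, j):
    """Heisenberg exchange s_i . S_j written with raising/lowering operators."""
    Szi, Spi, Smi = (site_op(o, i, dims) for o in ops[i])
    Szj, Spj, Smj = (site_op(o, j, dims) for o in ops[j])
    return Szi @ Szj + 0.5 * (Spi @ Smj + Smi @ Spj)

Sz_tot = sum(site_op(ops[k][0], k, dims) for k in range(len(spins)))
H = J * (bond(0, 1) + bond(1, 2) + bond(2, 3) + bond(3, 0)) - B * Sz_tot

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]
print("ground-state <Sz_total> =", round(float(gs @ Sz_tot @ gs), 6))  # N(S-s) = 1
```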
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig1.eps}
\caption{(color online). Magnetization plateaus at finite temperature, Luttinger liquid phase and crossovers: Quantum Monte Carlo (QMC) data.
Magnetization per cell $m$ and the susceptibility $\chi=\partial m/\partial B$ as a function of magnetic field $B$ for an
alternating ($s=1/2$, $S=1$) chain with $N=256$ unit cells and the indicated values of temperature $T$.
The critical endpoints of the ferrimagnetic (FRI) and the fully polarized (FP) plateaus are $B_{c, FRI}=1.76J$ and
$B_{c,FP}=3J$, respectively. The presence of the FRI and FP plateaus, and the
region dominated by Luttinger liquid (LL) regime is a common feature
for all values of $s$ and $S$, with $S>s$.
As $T\rightarrow 0$, $\chi\rightarrow\infty$ at the critical values of $B$; for $T\gtrsim0$, local maxima
in the $\chi$ curves mark the crossover from the LL regime to the quantum critical regime. The local minimum
in the $\chi$ curve (dashed line) between $B_{c,FRI}$ and $B_{c,FP}$ separates the LL regime into two regions: one
with excitations from the FRI state, $\text{LL}_1$; the other with excitations from the FP state, $\text{LL}_2$.
}
\label{fig:mb0-01}
\end{center}
\end{figure}
In Fig. \ref{fig:mb0-01} we show QMC results for $m(B)$ for the (1/2, 1) chain in the low-$T$ regime.
At $T=0$, $m(B)$ presents two magnetization plateaus: the ferrimagnetic (FRI), at $m_{FRI}=(S-s)$, and the fully polarized (FP) one,
at $m_{FP}=s+S$. In particular, at $T=0$, $m=m_{FRI}$ for $B=0$, with a gapless Goldstone mode. There are quantum phase transitions at the endpoints of the plateaus: $B=B_{c,FRI}$ and
$B=B_{c,FP}$, respectively; which have the values $B_{c,FRI}=1.76J$ and $B_{c,FP}=3.00J$ for the ($1/2$, $1$) chain.
At the critical fields, there is a transition
from a gapped plateau phase to a gapless Luttinger liquid (LL) phase,
as $B\rightarrow B_{c,FRI}$ from magnetic fields $B<B_{c,FRI}$, or $B\rightarrow B_{c,FP}$ from magnetic fields $B>B_{c,FP}$.
In the LL phase, the excitations have
a linear dispersion relation, $\omega\sim q$, and present critical (power-law) transverse spin correlations.
Exactly at the critical fields, the excitations have a classical dispersion relation $\omega\sim q^2$ and
in the highly dilute limit can be represented by a hard-core boson model or a spinless fermion model.
Hence, the magnetization has a square-root behavior $m\sim\sqrt{|B-B_c|}$ and a diverging susceptibility
$\chi=\partial m/\partial B\sim 1/\sqrt{|B-B_c|}$ as $B\rightarrow B_{c}$.
For finite $T$, with $T\rightarrow0$, the magnetization $m=0$ at $B=0$, since the system is one-dimensional.
Gapped magnetic excitations are thermally activated and the plateau widths shrink.
The susceptibility shows local maxima, with distinct amplitudes,
at $B\approx B_{c,FRI}$ and $B\approx B_{c,FP}$ marking the crossover between the LL regime, where the excitations
have a linear behavior, $\omega\sim q$, to the quantum critical regime, for which $\omega\sim q^2$.
We can define the local minimum in the $\chi$ curve, at $B\equiv B_i$,
as a crossover between the region where the excitations are predominantly from the FRI state, denoted by LL$_1$ in Fig. \ref{fig:mb0-01},
and that where the excitations are predominantly from the FP state, denoted by LL$_2$ in Fig. \ref{fig:mb0-01}. In particular,
for $B\approx B_i$, the magnetization curve retains its value and shape most robustly as the temperature increases, showing
that the LL phase is most robust for $B\approx B_i$.
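The crossover-detection procedure just described (locating the two local maxima of $\chi$ near the critical fields and the local minimum at $B_i$ between them) can be sketched numerically. The magnetization curve below is a synthetic stand-in with square-root onsets at the critical fields of the $(1/2,\,1)$ chain, not QMC data, and the Gaussian smoothing width plays the role of a small temperature:

```python
import numpy as np

Bc_fri, Bc_fp = 1.76, 3.0                 # critical fields of the (1/2,1) chain
B = np.linspace(1.0, 3.8, 1401)
x = np.clip((B - Bc_fri) / (Bc_fp - Bc_fri), 0.0, 1.0)
m = 0.5 + (2 / np.pi) * np.arcsin(np.sqrt(x))   # m ~ sqrt(|B - Bc|) at both ends

# mimic thermal rounding at T > 0 by smoothing m(B) with a narrow Gaussian
k = np.exp(-0.5 * np.linspace(-3, 3, 61) ** 2)
k /= k.sum()
m_T = np.convolve(np.pad(m, 30, mode="edge"), k, mode="same")[30:-30]

chi = np.gradient(m_T, B)                 # susceptibility chi = dm/dB
peaks = np.where((chi[1:-1] > chi[:-2]) & (chi[1:-1] > chi[2:]))[0] + 1
inside = (B > Bc_fri + 0.2) & (B < Bc_fp - 0.2)
B_i = B[inside][np.argmin(chi[inside])]   # local minimum: LL1/LL2 crossover

print("chi maxima near B =", np.round(B[peaks], 2))
print("LL1/LL2 crossover field B_i =", round(float(B_i), 2))
```

The two $\chi$ maxima land close to $B_{c,FRI}$ and $B_{c,FP}$, and the minimum between them identifies $B_i$; with real finite-$T$ data, the same extrema search would be applied to the measured $m(B)$.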
\section{Spin-wave Theory}
\label{sec:sw-theory}
The ferrimagnetic arrangement of classical spins is a natural choice of vacuum to study quantum ferrimagnets through free
spin-wave (FSW) theory \cite{PatiJPCM1997,*PhysRevB.55.8894}, if we want to study excitations from the quantum ground state. Two types of magnon excitations are obtained, one ferromagnetic, which decreases the ground state spin
by one unit, and the other antiferromagnetic, increasing the ground state spin by one unit. In particular, the antiferromagnetic
excitation has a finite gap $\Delta$, which implies the expected magnetization plateau at $m=S-s$ and $T=0$.
However, in this linear approximation quantum fluctuations are underestimated, giving poor
results for the value of the antiferromagnetic gap
and other quantities, such as the average spin per site.
When one-dimensional ferromagnets are studied through the linear spin-wave theory at finite temperatures, a diverging zero-field magnetization
is obtained for any value of $T$\cite{PhysRev.58.1098, PhysRev.86.694, PhysRev.87.568}.
Takahashi \cite{Prog.Theor.Phys.Supp.87.233, PhysRevLett.58.168} modified the theory by imposing a constraint
on the zero-field magnetization and an effective chemical potential in the thermal boson distribution.
This so-called modified spin-wave theory describes very well the low-temperature thermodynamics of one-dimensional
ferromagnets, and was further successfully adapted to other systems, including ferrimagnetic chains \cite{PhysRevB.57.R14008}.
In the case of ferrimagnets, the introduction of the magnetization constraint in the bosonic distribution, with the
linear spin-wave dispersion relations gives an excellent description of the low-$T$ behavior.
The description of the intermediate-$T$ regime can be improved by changing the
constraint \cite{Noriki2017}.
In this Section, we discuss interacting spin-wave theory using a ferrimagnetic vacuum (ISW-FRIv) for $B\neq0$ and $T\neq0$,
with the modified spin-wave approach (Takahashi's constraint);
and free spin-wave theory from a fully polarized vacuum (FSW-FPv), also for $B\neq0$ and $T\neq0$.
\subsection{Spin-wave theory - ferrimagnetic vacuum}
\label{Section_FRIv}
\begin{figure}[!htb]
\includegraphics[width=0.47\textwidth]{fig2.eps}
\caption{(color online). Interacting spin-wave (ISW) magnon branches from the classical ferrimagnetic vacuum (FRIv) - calculating the thermodynamic properties.
(a) The classical ferrimagnetic vacuum of the ($s$,$S$) chain.
(b) Magnon dispersion relations
for the ($s=1/2$, $S=1$) chain with $B=0$.
There are ferromagnetic and antiferromagnetic
magnons, carrying spin $\Delta S^z=-1$ and $\Delta S^z=1$, respectively. The values of the critical fields
are $B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}=1.68J$ and
$B^{(\textsc{ISW-FRIv})}_{c,\textsl{FP}}=2.74J$.
To calculate the thermodynamic functions, the antiferromagnetic (ferromagnetic) magnons occupy
their respective bands following the Fermi (Bose) distribution function.
An effective chemical potential $\mu$ is introduced in the Bose distribution to prevent particle
condensation at the $k=0$ mode for $B=0$ and $T\rightarrow0$.
(c) For each value of $T$, we use a value of $\mu$ such that $m=0$ for $B=0$.
The inset shows that $\mu(T\rightarrow0)\rightarrow0$ as $T\rightarrow0$.
In this limit, both bands are empty and $m=(S-s) =1/2$, the FRI magnetization.
}
\label{fig:swfrit}
\end{figure}
The Holstein-Primakoff spin-wave theory is developed from the classical ground state illustrated in Fig. \ref{fig:swfrit}(a),
which has the energy $E^{\text{\tiny\textsc{(FRIv)}}}_{class} = -2JNsS -B\big(S-s\big)N$.
The bosonic operators $a_j$ ($a^\dagger_j$) and $b_j$ ($b^\dagger_j$), associated with the $A$ and $B$ sites, respectively,
have the following relation with the spin operators (Holstein-Primakoff transformation):
\begin{eqnarray}
S^+_j &=&\sqrt{2S}\Big(1-\frac{a^\dagger_j a_j}{2S}\Big)^{1/2}a_j\text{, and }S^z_j=S-a^\dagger_j a_j;\\
s^+_j &=&b^{\dagger}_j\sqrt{2s}\Big(1-\frac{b^\dagger_j b_j}{2s}\Big)^{1/2}\text{, and }s^z_j=b^\dagger_j b_j-s.
\label{eq:hp-relations}
\end{eqnarray}
Putting the Hamiltonian (\ref{HeisFerri}) in terms of these bosonic operators, expanding
to quadratic order, Fourier transforming and making the following Bogoliubov transformation \cite{PatiJPCM1997,*PhysRevB.55.8894}:
\begin{equation*}
a_{k} = \alpha_{k}\cosh\theta_{k} - \beta_{k}^\dag\sinh\theta_{k} ,
\label{TransfBogol}
\end{equation*}
\begin{equation}
b_{k} = \beta_{k}\cosh\theta_{k} - \alpha_{k}^\dag\sinh\theta_{k},
\end{equation}
\begin{equation}
\tanh2\theta_k = 2\frac{\sqrt{sS}}{s+S}\cos \Big(\frac{k}{2}\Big),
\label{TanBogol}
\end{equation}
where $k$ is the lattice wave-vector, the
non-interacting spin-wave Hamiltonian is given by
\begin{equation}
\mathcal{H}^{\text{\tiny\textsc{(FSW-FRIv)}}} = E_0+\sum_{k}\Big[\omega^{\text{\tiny\textsc{(FRIv)}}}_{k,-}\alpha_{k}^\dag \alpha_{k} + \omega^{\text{\tiny\textsc{(FRIv)}}}_{k,+}\beta_{k}^\dag \beta_{k}\Big].
\label{HeisFerri0Diag}
\end{equation}
The magnon branches obtained are:
\begin{equation}
\omega^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma} = \sigma J\big(S-s\big) -\sigma B + J\omega^{\text{\tiny\textsc{(FRIv)}}}_{k},
\label{RelDispFerriMaisMenos}
\end{equation}
with $\sigma=\pm$, and
\begin{equation}
\omega^{\text{\tiny\textsc{(FRIv)}}}_{k} = \sqrt{\big(S-s\big)^2 + 4sS{\sin}^2 \Big(\frac{k}{2}\Big)},
\label{omegaK}
\end{equation}
while the ground-state energy is
\begin{equation}
E_0 = J\sum_{k}\Big[\omega^{\text{\tiny\textsc{(FRIv)}}}_{k} - \big(S+s\big)\Big].
\end{equation}
The $\omega^{\text{\tiny\textsc{(FRIv)}}}_{k,-}$ modes carry a spin $\Delta S^z=-1$, having a ferromagnetic spin-wave nature,
and is gapless for $B=0$; while $\omega^{\text{\tiny\textsc{(FRIv)}}}_{k,+}$ modes carry a spin $\Delta S^z=+1$, having an
antiferromagnetic spin-wave nature and
has a gap $\Delta=2J(S-s)$ at $B=0$. For the ($s=1/2$, $S=1$) chain \cite{PatiJPCM1997,*PhysRevB.55.8894}, for example,
$\Delta=J$,
although the exact value is $1.76J$; while $\langle S^z_a\rangle=0.695$ and
$\langle S^z_b\rangle=-0.195$ at $T=0$, with the exact values \cite{PatiJPCM1997,*PhysRevB.55.8894}:
$\langle S^z_a\rangle=0.792$ and $\langle S^z_b\rangle=-0.292$.
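These free spin-wave expressions are easy to check numerically. The short Python sketch below (our own illustration, not part of the original derivation; it assumes $J=1$ and the ($s=1/2$, $S=1$) chain) evaluates the branches of Eq. (\ref{RelDispFerriMaisMenos}) and verifies that the ferromagnetic mode is gapless and quadratic at small $k$, while the antiferromagnetic gap is $\Delta=2J(S-s)$ and closes at $B=\Delta$:

```python
import math

s, S, J = 0.5, 1.0, 1.0  # (s=1/2, S=1) chain, J = 1 (illustrative units)

def omega_k(k):
    # Eq. (omegaK): omega_k = sqrt((S-s)^2 + 4 s S sin^2(k/2))
    return math.sqrt((S - s)**2 + 4*s*S*math.sin(k/2)**2)

def omega(k, sigma, B):
    # FSW-FRIv branches: sigma = +1 (antiferromagnetic), -1 (ferromagnetic)
    return sigma*J*(S - s) - sigma*B + J*omega_k(k)

# ferromagnetic branch: gapless at B = 0, quadratic near k = 0,
# with coefficient s S / (2 (S - s)) = 1/2
assert abs(omega(0.0, -1, 0.0)) < 1e-12
assert abs(omega(1e-3, -1, 0.0)/1e-6 - s*S/(2*(S - s))) < 1e-3

# antiferromagnetic branch: gap Delta = 2 J (S - s) at B = 0, closing at B = Delta
delta = 2*J*(S - s)
assert abs(omega(0.0, +1, 0.0) - delta) < 1e-12
assert abs(omega(0.0, +1, delta)) < 1e-12
```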
The dispersion relations can be improved if interactions between magnons are considered.
The corrected dispersion relations described in Ref. \cite{JCPM.10.11033.1998}, shown in Fig. \ref{fig:swfrit}(b), are:
\begin{equation}
\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma} = {\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma} - J\delta{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma},
\label{RelDispMelhorada}
\end{equation}
where
\begin{equation*}
\delta \omega^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma} = 2\Gamma_{1}\frac{(S + s)}{\omega^{\text{\tiny\textsc{(FRIv)}}}_{k}}\sin^{2}(k/2) - \frac{\Gamma_2}{\sqrt{sS}}\Big[\omega^{\text{\tiny\textsc{(FRIv)}}}_k +\sigma (S - s)\Big],
\label{RelacaoDispersaoMagAntiFerroFerroMelhoradas}
\end{equation*}
with
\begin{eqnarray}
\Gamma_1 &=& \frac{1}{N}\sum_{k}\sinh^{2} \theta_{k},\text{ and}\\
\Gamma_2 &=& \frac{1}{N}\sum_{k}\cos(k/2)\sinh \theta_{k}\cosh \theta_{k}.
\end{eqnarray}
Up to $\mathcal{O}(S^0)$, the Hamiltonian is
\begin{equation}
\mathcal{H}^{\text{\tiny\textsc{(ISW-FRIv)}}}= E_g + \sum_{k}\big(\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,-} \alpha^{\dag}_{k}\alpha_{k} + \tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,+}\beta^{\dag}_{k}\beta_{k}\big),
\label{HamiltonianoFinal}
\end{equation}
where
\begin{equation}
E_g = E_{class} + E_0 + E_1,
\label{GScorrigido}
\end{equation}
with
\begin{equation}
E_1 = -2JN\Big[\Gamma_{1}^{2} + \Gamma_{2}^{2} - \Big(\sqrt{S/s} + \sqrt{s/S}\Big)\Gamma_1\Gamma_2 \Big].
\end{equation}
At $T=0$, the magnetization as a function of $B$, shown in Fig. \ref{fig:mb0-01} for the ($s=1/2$, $S=1$) chain, can be understood from these
ferromagnetic ($\Delta S^z=-1$) and antiferromagnetic ($\Delta S^z=+1$) magnon modes.
For $B=0$ the two bands are empty and the magnetization is the ferrimagnetic one. Increasing the magnetic field,
the ferromagnetic band acquires a gap which increases linearly with $B$, while the gap to the antiferromagnetic band
decreases linearly with $B$. Notice, in particular, that the ferromagnetic band is empty for all values
of $B$. At $B=B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}/2=\Delta/2$, the $k=0$ mode of the antiferromagnetic
band becomes the lowest-energy excitation, and at $B=B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}=\Delta$
the gap to this mode closes. The value of $B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}$ is
\begin{equation}
B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}} =\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{0,+}= 2(S-s)\left(1 + \frac{1}{\sqrt{sS}}\Gamma_2\right)J.
\end{equation}
In particular, for the ($s=1/2$, $S=1$) chain, with $\Gamma_1=0.305$ and $\Gamma_2=0.478$,
$B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}=1.68J$, which is very close to the exact value ($1.76J$).
The magnetization for $B>\Delta$ is obtained by considering the antiferromagnetic magnons as
hard-core bosons \cite{PhysRevB.43.3215}, or spinless fermions. The magnetization increases with $B$ as the antiferromagnetic band is filled, and
saturates when the Fermi level reaches the band limit, at $k=\pi$. The saturation field is
\begin{equation}
B^{(\textsc{ISW-FRIv})}_{c,\textsl{FP}}=\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{\pi,+} = 2\left(S - \Gamma_1+\sqrt{\frac{S}{s}}\Gamma_2\right)J,
\end{equation}
which for the ($s=1/2$, $S=1$) chain is $B^{(\textsc{ISW-FRIv})}_{c,\textsl{FP}}=2.74J$, departing from the exact value $3J$, but much better
than the free spin-wave result: $2J$.
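The numbers quoted above follow from the two Brillouin-zone averages $\Gamma_1$ and $\Gamma_2$. As an illustrative cross-check (our sketch, not part of the original derivation; $J=1$), the code below evaluates them by direct summation over a fine $k$ grid and recovers the critical fields and the $T=0$ sublattice magnetizations quoted for the ($s=1/2$, $S=1$) chain:

```python
import math

s, S, J = 0.5, 1.0, 1.0
N = 200000  # k-points on a midpoint grid over the Brillouin zone

G1 = G2 = 0.0
for j in range(N):
    k = -math.pi + (j + 0.5)*2*math.pi/N
    t = 2*math.sqrt(s*S)*math.cos(k/2)/(S + s)  # tanh(2 theta_k), Eq. (TanBogol)
    c2 = 1.0/math.sqrt(1.0 - t*t)               # cosh(2 theta_k)
    G1 += 0.5*(c2 - 1.0)/N                      # sinh^2(theta_k)
    G2 += math.cos(k/2)*0.5*t*c2/N              # sinh(theta_k) cosh(theta_k)

B_FRI = 2*(S - s)*(1 + G2/math.sqrt(s*S))*J    # end of the FRI plateau, ~1.68J
B_FP = 2*(S - G1 + math.sqrt(S/s)*G2)*J        # saturation field, ~2.74J
Sz_a, Sz_b = S - G1, -s + G1                   # T = 0 sublattice moments
```

The same $\Gamma_1$ that corrects the critical fields also gives the free spin-wave sublattice moments quoted earlier, $\langle S^z_a\rangle=S-\Gamma_1\approx0.695$ and $\langle S^z_b\rangle=-s+\Gamma_1\approx-0.195$.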
\subsubsection{Thermodynamics}
For $T>0$, ferromagnetic and antiferromagnetic modes are occupied according to the Bose-Einstein ($n^{\text{\tiny\textsc{(FRIv)}}}_{k,-}$) and
Fermi-Dirac ($n^{\text{\tiny\textsc{(FRIv)}}}_{k,+}$) distributions, respectively, as indicated in Fig. \ref{fig:swfrit}(a). The magnetization, for example, is given by
\begin{equation}
m(T,B)= (S - s)+\frac{1}{N}\sum_{k}(n^{\text{\tiny\textsc{(FRIv)}}}_{k,+}-n^{\text{\tiny\textsc{(FRIv)}}}_{k,-}).
\label{RestricMagnet}
\end{equation}
We notice, however, that with $T>0$ and $B=0$ the ferromagnetic band will be thermally activated and $m\rightarrow-\infty$ as $T$ increases.
This problem arises, also, in one-dimensional ferromagnetic
chains, and was overcome by Takahashi \cite{Prog.Theor.Phys.Supp.87.233, PhysRevB.40.2494}, in the low-$T$ regime, through the introduction of
an effective chemical potential $\mu$ in the bosonic distribution, and a constraint $m(B=0,T)=0$.
A similar strategy was applied to one-dimensional ferrimagnetic systems \cite{PhysRevB.57.R14008}
and good results were also obtained in the low-$T$ regime.
The intermediate-$T$ regime, where the minimum in the $T\chi$ curve of the ferrimagnets \cite{PhysRevB.29.5144} is observed,
can be more accurately described if other constraints are used
\cite{PhysRevB.69.06,JCPM.10.11033.1998,Noriki2017}.
Here, for $B=0$, we use the simplest constraint
\begin{equation}
m(T,B=0)=0,
\end{equation}
since we are interested in the low-$T$ regime, with
\begin{eqnarray}
n^{\text{\tiny\textsc{(FRIv)}}}_{k,-} &=& \frac{1}{e^{\beta [\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,-} - \mu]} -1},\label{NumMagnons1}\\
n^{\text{\tiny\textsc{(FRIv)}}}_{k,+} &=& \frac{1}{e^{\beta\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,+}} +1}.
\label{NumMagnons2}
\end{eqnarray}
In Fig. \ref{fig:swfrit}(c), we present $m(T,B=0)$ as a function of $\mu$ for the indicated values of $T$. As discussed, $m\rightarrow-\infty$ as $\mu\rightarrow0$
and the value of $\mu$ for which the constraint $m(T,B=0)=0$ is satisfied, monotonically decreases with $T$, in this low-$T$ regime.
A finite $\mu$ implies an effective gap for the ferromagnetic band, with an exponential thermal activation of their magnons.
In particular, notice that $\mu(T\rightarrow0)=0$, as expected. To calculate the thermodynamic functions for $B\neq0$,
we consider the distributions in Eqs. (\ref{NumMagnons1}) and (\ref{NumMagnons2}) and
use the same value of $\mu$ found in the case $B=0$: $\mu(B,T)=\mu(B=0,T)$, for any value of $B$.
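A minimal numerical implementation of this constraint is sketched below. For brevity it uses the free branches $\omega^{\text{\tiny\textsc{(FRIv)}}}_{k,\sigma}$ of Eq. (\ref{RelDispFerriMaisMenos}) rather than the interacting ones, and solves $m(T,B=0)=0$ for $\mu$ by bisection; the grid size and temperatures are illustrative choices, with $J=k_B=1$:

```python
import math

s, S, J = 0.5, 1.0, 1.0
N = 2000
ks = [-math.pi + (j + 0.5)*2*math.pi/N for j in range(N)]

def w(k, sigma, B):
    # free branches, Eq. (RelDispFerriMaisMenos)
    wk = math.sqrt((S - s)**2 + 4*s*S*math.sin(k/2)**2)
    return sigma*J*(S - s) - sigma*B + J*wk

def m(T, B, mu):
    # Eq. (RestricMagnet): Fermi statistics for sigma=+, Bose (with mu) for sigma=-
    tot = S - s
    for k in ks:
        tot += 1.0/(math.exp(w(k, +1, B)/T) + 1.0)/N
        tot -= 1.0/(math.exp((w(k, -1, B) - mu)/T) - 1.0)/N
    return tot

def solve_mu(T):
    # Takahashi constraint m(T, B=0) = 0, solved for mu < 0 by bisection
    lo, hi = -5.0, -1e-12
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if m(T, 0.0, mid) > 0.0:
            lo = mid   # too few bosons: raise mu
        else:
            hi = mid
    return 0.5*(lo + hi)

# mu < 0, |mu| grows with T, and mu -> 0 as T -> 0
mus = [solve_mu(T) for T in (0.02, 0.05, 0.10)]
```

With the constraint enforced, the magnetization stays at the plateau value $m=S-s$ for $0<B<2(S-s)J$ at low $T$, as in the text.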
The magnetization as a function of $B$ for $T\neq0$, shown in Fig. \ref{fig:mb0-01}, can be qualitatively understood from
this theory. For $B=0$, the magnetization $m=0$, due to the constraint. As $B$ increases, in the region $0<B<B_{c,FRI}/2$, the
gap to the ferromagnetic band increases, but this band is thermally activated and the magnetization decreases from the $m=S-s$ value.
This effect can also be seen from Fig. \ref{fig:swfrit}(c). If we move the Zeeman term, $+B$, from the ferromagnetic dispersion relation
to the chemical potential, $\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,-}\rightarrow\tilde{\omega}^{\text{\tiny\textsc{(FRIv)}}}_{k,-}-B$ and $-\mu\rightarrow-(\mu-B)$, in Eq. (\ref{NumMagnons1}), the magnetization value is the one shown in Fig. \ref{fig:swfrit}(c) for
$\mu$ lower than that of $B=0$, and $m=0$. From Fig. \ref{fig:swfrit}(c), we see that increasing $B$ (decreasing $\mu$) from $B=0$ [from $\mu(B=0,T)$],
the magnetization rises exponentially to the ferrimagnetic value.
For $B=B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}/2$, the lower energy band is the antiferromagnetic ($\Delta S^z=+1$ magnons) fermionic band.
This band is thermally activated for $[B^{(\textsc{ISW-FRIv})}_{c,\textsl{FRI}}/2]<B<B_{c,FRI}$, and
the magnetization is higher than $S-s$. The magnetization increases through the filling of this band, according
to the Fermi distribution, up to the saturation value $m=s+S$, which is exponentially reached.
\subsection{Spin-wave theory - fully polarized vacuum}
\label{Section_FPv}
\begin{figure}[!htb]
\includegraphics[width=0.48\textwidth]{fig3.eps}
\caption{(color online). Free spin-wave magnon branches from the classical ferromagnetic vacuum - calculating the thermodynamic properties.
(a) The classical fully polarized vacuum of the ($s$,$S$) chain.
(b) Free spin-wave (FSW) results for the magnon energies relative to the
fully polarized vacuum (FPv) for $T\neq0$ and $B=0$ for the ($s=1/2$, $S=1$) chain.
In this case, both branches are ferromagnetic with magnons carrying a spin $\Delta S^z=-1$.
To calculate the thermodynamic functions, the lower (higher) magnon band is filled following the Fermi (Bose) distribution function.
An effective chemical potential $\mu$ is introduced in the Bose distribution to prevent particle
condensation at the $k=\pi$ mode for $B=0$ and $T\rightarrow0$.
The critical fields are $B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}=2.00J$ and $B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}=3.00J$.
(c) The chemical potential $\mu$ is chosen such that $m=0$ for $B=0$.
The inset shows that $\mu(T\rightarrow0)\rightarrow-1$ as $T\rightarrow0$. In this limit
only the lower energy band is occupied, implying that
$m\rightarrow (S-s) =1/2$, the ferrimagnetic magnetization, as $T\rightarrow0$ and $B\rightarrow0$.}
\label{fig:swfpt}
\end{figure}
In this section, we study the free spin wave theory from a fully polarized vacuum,
illustrated in Fig. \ref{fig:swfpt}(a).
We show that this theory provides a good description of the low-$T$ physics,
and is quantitatively much better than the free spin wave description from the ferrimagnetic
vacuum. The critical saturation field has an exact value, while the critical
field at the end of the ferrimagnetic plateau is $B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}=2J$.
The Holstein-Primakoff transformation in this case is
\begin{eqnarray}
S^+_j &=&\sqrt{2S}\Big(1-\frac{a^\dagger_j a_j}{2S}\Big)^{1/2}a_j\text{, and }S^z_j=S-a^\dagger_j a_j;\\
s^+_j &=&\sqrt{2s}\Big(1-\frac{b^\dagger_j b_j}{2s}\Big)^{1/2}b_j\text{, and }s^z_j=s-b^\dagger_j b_j,
\label{eq:hp-relations-fp}
\end{eqnarray}
with the two bosons lowering the site magnetization by one unit. To quadratic order in these bosonic operators, the Hamiltonian of the system, Eq. (\ref{HeisFerri}),
is
\begin{eqnarray}
\mathcal{H}^{\text{\tiny\textsc{(FSW-FPv)}}} & = & E^{\text{\tiny\textsc{(FPv)}}}_{class}+J\sum_{j}\Bigg\{-s\Big(a_{j}^\dag a_{j} + a_{j+1}^\dag a_{j+1}\Big) \nonumber \\
& - & 2Sb_{j}^\dag b_{j}+\sqrt{sS}\Bigg[\Big(a_{j} + a_{j+1}\Big)b_{j}^\dag + \Big(a_{j}^\dag + a_{j+1}^\dag \Big)b_{j}\Bigg]\Bigg\} \nonumber \\
& + & B\sum_j\Big(a_{j}^\dag a_{j} + b_{j}^\dag b_{j}\Big),
\label{eq:h-fsw-fp}
\end{eqnarray}
with $E^{\text{\tiny\textsc{(FPv)}}}_{class} = 2JNsS -B\big(S+s\big)N$. Fourier transforming the bosonic operators and using the Bogoliubov transformation
\begin{eqnarray}
a_{k}^\dag &=& \alpha_{k}^\dag\cos\theta_{k} - \beta_{k}^\dag\sin\theta_{k};\\
b_{k}^\dag &=& \beta_{k}^\dag\cos\theta_{k} + \alpha_{k}^\dag\sin\theta_{k},
\end{eqnarray}
with
\begin{equation}
\tan2\theta_k = 2\frac{\sqrt{sS}}{S-s}\cos \Big(\frac{k}{2}\Big),
\end{equation}
the Hamiltonian in Eq. (\ref{eq:h-fsw-fp}) is written as
\begin{equation}
\mathcal{H}^{\text{\tiny\textsc{(FSW-FPv)}}}= E^{\text{\tiny\textsc{(FPv)}}}_{class} + \sum_{k}\Big[\omega^{\text{\tiny\textsc{(FPv)}}}_{k,1}\alpha_{k}^\dag \alpha_{k} + \omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}\beta_{k}^\dag \beta_{k}\Big],
\label{HamiltonianoGeralLivre}
\end{equation}
where the dispersion relations \cite{Maisinger1998} $\omega^{\text{\tiny\textsc{(FPv)}}}_{k,\eta}$ are
\begin{eqnarray}
\omega^{\text{\tiny\textsc{(FPv)}}}_{k,\eta} &=& (-1)^{\eta+1}\sqrt{\big(S-s\big)^2 + 4sS{\cos}^2 \Big(\frac{k}{2}\Big)}\nonumber\\
& &-\big(S + s\big)+B,
\label{eta0}
\end{eqnarray}
with $\eta=0\text{ or }1$.
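The band edges of Eq. (\ref{eta0}) already fix the critical fields quoted in the caption of Fig. \ref{fig:swfpt}: the $\eta=0$ band is empty once its minimum, at $k=0$, reaches zero energy, and completely filled once its maximum, at $k=\pi$, does. A short numerical check (our sketch, with $J=1$):

```python
import math

s, S, J = 0.5, 1.0, 1.0

def w_fp(k, eta, B):
    # Eq. (eta0): (-1)^(eta+1) sqrt((S-s)^2 + 4 s S cos^2(k/2)) - (S+s) + B
    root = math.sqrt((S - s)**2 + 4*s*S*math.cos(k/2)**2)
    return (-1)**(eta + 1)*root - (S + s) + B

# eta=0 band empty (fully polarized state) once its minimum, at k=0,
# reaches zero: B_c,FP = 2(S+s)J = 3J
assert abs(w_fp(0.0, 0, 2*(S + s)*J)) < 1e-12

# eta=0 band completely filled (FRI plateau) once its maximum, at k=pi,
# reaches zero: B_c,FRI = 2SJ = 2J
assert abs(w_fp(math.pi, 0, 2*S*J)) < 1e-12

# gap of 2(S-s)J between the eta=0 and eta=1 bands at k=pi
assert abs(w_fp(math.pi, 1, 0.0) - w_fp(math.pi, 0, 0.0) - 2*(S - s)*J) < 1e-12
```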
To discuss the $T=0$ magnetization curve implied by these spin-wave modes, we present in Fig. \ref{fig:swfpt}(b) the dispersion relations
$\omega^{\text{\tiny\textsc{(FPv)}}}_{k,\eta}$ for the ($s=1/2$, $S=1$) chain
and $B=B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}=2J(s+S)=3J$.
At $B=B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}=B_{c,FP}$,
both bands are empty, and the magnetization is the fully polarized one.
Decreasing $B$, the $\eta=0$ band is filled according to Fermi-Dirac statistics, and
the magnetization decreases. The critical field at the end point of the ferrimagnetic
plateau is obtained making $\omega^{\text{\tiny\textsc{(FPv)}}}_{\pi,0}=0$, which implies
$B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}=2SJ$, equal to $2J$ for the ($s=1/2$, $S=1$) chain.
At this value of $B$, the $\eta=0$
band is totally filled and $m=(s+S)-1$, giving $1/2$ for the ($s=1/2$, $S=1$) chain.
There is a gap of $2(S-s)J$ between the $\eta=0$ and $\eta=1$ bands, at $k=\pi$;
hence, the bosonic $\eta=1$ band should start to be filled at $B=B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}-2(S-s)J$,
and the theory does not qualitatively reproduce the $T\rightarrow0$ magnetization curve.
This problem is overcome by considering the finite temperature theory,
with Takahashi's constraint and effective chemical potential. For finite $T$, the
magnetization is given by
\begin{equation}
m(T,B)= (S + s)-\frac{1}{N}\sum_{k}[n^{\text{\tiny\textsc{(FPv)}}}_{k,0}+n^{\text{\tiny\textsc{(FPv)}}}_{k,1}],
\label{contraint-fp}
\end{equation}
where
\begin{eqnarray}
n^{\text{\tiny\textsc{(FPv)}}}_{k,0} &=& \frac{1}{e^{\beta \omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}} +1},\\
n^{\text{\tiny\textsc{(FPv)}}}_{k,1} &=& \frac{1}{e^{\beta[\omega^{\text{\tiny\textsc{(FPv)}}}_{k,1}-\mu]} -1}.
\label{eq:n-fpv}
\end{eqnarray}
The constraint, which is applied at $B=0$, is
\begin{equation}
m(T,B=0)=0.
\end{equation}
In Fig. \ref{fig:swfpt}(c) we present the magnetization as a function of the effective chemical potential $\mu$ for
the indicated values of temperature. We note that $m\rightarrow-\infty$ as the temperature increases,
similarly to the spin-wave theory with the ferrimagnetic vacuum. However, in this case $\mu\rightarrow-1$
as $T\rightarrow0$, as shown in the inset of Fig. \ref{fig:swfpt}(c). Hence, a finite chemical potential $\mu=-1$
associated with the bosonic $\eta=1$ band must be considered in the $T=0$ theory.
With this chemical potential, the $\eta=1$ band stays empty at $T=0$ for any value of $B$.
The thermodynamic functions are calculated using Eq. (\ref{eq:n-fpv}), with $\mu(T,B)=\mu(T,B=0)$.
For finite $T$, the fermionic $\eta=0$ band is completely filled and the occupation of the $\eta=1$ band
is such that $m=0$. Considering the low-$T$ regime, as $B$ increases, the energy of the two bands rises, lowering the total occupation of
the $\eta=1$ band, since $\omega^{\text{\tiny\textsc{(FPv)}}}_{k,1}-\mu$ linearly increases with $B$ for any $k$,
and $m$ increases. The magnetization exponentially reaches its value at the ferrimagnetic plateau, $m=S-s$, as
$B$ increases, since $n^{\text{\tiny\textsc{(FPv)}}}_{k,1}\rightarrow0$ for any $k$ and the $\eta=0$ band is completely
filled. For $[B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}/2]<B<B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}$, with
$[B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}/2]$ related to the point $B=B_{c,FRI}/2$ in Fig. \ref{fig:mb0-01}, the occupation
of the $\eta=0$ band decreases from the $T=0$ case: $n^{\text{\tiny\textsc{(FPv)}}}_{k,0}=1$ for any $k$, and
the magnetization is higher than $S-s$.
The magnetization increases with $B$, and exponentially reaches the fully polarized value at
$B>B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}$, since magnons at the $\eta=0$ band are thermally excited.
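The finite-$T$ recipe above can be condensed into a few lines of code. The sketch below (our illustration, with arbitrary grid and temperature choices and $J=k_B=1$) solves the constraint for $\mu$ at $B=0$, finds $\mu$ just below $-2s=-1$ at low $T$, and reproduces the two plateau values of $m$:

```python
import math

s, S, J = 0.5, 1.0, 1.0
N = 2000
ks = [-math.pi + (j + 0.5)*2*math.pi/N for j in range(N)]

def w_fp(k, eta, B):
    # Eq. (eta0)
    root = math.sqrt((S - s)**2 + 4*s*S*math.cos(k/2)**2)
    return (-1)**(eta + 1)*root - (S + s) + B

def m(T, B, mu):
    # Eq. (contraint-fp): m = (S+s) - (1/N) sum_k (n_0 + n_1)
    tot = S + s
    for k in ks:
        tot -= 1.0/(math.exp(w_fp(k, 0, B)/T) + 1.0)/N          # Fermi, eta=0
        tot -= 1.0/(math.exp((w_fp(k, 1, B) - mu)/T) - 1.0)/N   # Bose, eta=1
    return tot

def solve_mu(T):
    # bisection for m(T, B=0, mu) = 0, with mu below the eta=1 band bottom -2s
    lo, hi = -6.0, -2*s - 1e-12
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if m(T, 0.0, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

T = 0.05
mu = solve_mu(T)   # slightly below -1; mu -> -1 as T -> 0
```

At this temperature the magnetization is exponentially close to $S-s$ on the FRI plateau and to $S+s$ above saturation, as described in the text.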
\begin{figure}[!htb]
\includegraphics[width=0.36\textwidth]{fig4.eps}
\caption{(color online). Comparison between results from quantum Monte Carlo (QMC) method, $N=256$ unit cells, and the two spin-wave approaches for
the magnetization per cell $m$ and the susceptibility $\chi$:
($s=1/2$, $S=1$) chain at temperature $T=0.02(J/k_B)$. Results from the interacting spin-wave theory from a ferrimagnetic vacuum (ISW-FRIv)
and free spin-wave theory from a ferromagnetic vacuum (FSW-FPv) compare well with
QMC for $B\lesssim B_{c,FRI}$ and $B\gtrsim B_{c,FP}$. The maximum in $\chi$ related to $B_{c,FRI}$ ($B_{c,FP}$)
is better localized, compared to QMC, through the ISW-FRIv (FSW-FPv) approach.}
\label{fig:comp}
\end{figure}
\subsection{Comparison between QMC data and the two spin-wave approaches}
In Fig. \ref{fig:comp} we present magnetization and susceptibility $\chi=\partial m/\partial B$ as a function
of $B$ from ISW-FRIv and FSW-FPv theories along with QMC data, at $T=0.1J$. Since the ISW-FRIv gives a better
result for $B_{c,FRI}$, this theory is better in the vicinity of this critical field. Otherwise, the
FSW-FPv approach is better in the vicinity of $B_{c,FP}$. Further, the amplitudes of the two peaks
in $\chi(B)$, which mark the crossover to the LL regime, are lower than the ones
given by QMC. The difference
between the amplitudes of the spin-wave approaches and the QMC data is related
to limitations of the spin-wave theories.
Despite this, the description given by both spin-wave theories is qualitatively excellent, and
quantitatively very acceptable in the low-$T$ regime.
Below we calculate the $T$ vs $B$ phase diagram in the low-$T$ regime from the FSW-FPv theory. We
study the crossover lines between the LL regimes and the quantum critical regimes, as well as the crossover
lines between the plateau regimes and the quantum critical regimes.
We use the FSW-FPv approach since it has essentially the same precision as
the ISW-FRIv theory over the range of $B$ from 0 to the saturation field; also, the critical point
$B_{c,FP}$ is exact in the FSW-FPv theory.
\section{Luttinger liquid regime}
\label{sec:ll-regime}
In the LL phase, the dispersion relation can be approximated by $\pm v_F|k-k_F|$, where $v_F$ is the Fermi velocity.
Further, in this regime the magnetization has the
form \cite{PhysRevLett.99.057205}:
\begin{equation}
m=m(T=0)-\frac{\pi}{6v_F^2}\frac{\partial v_F}{\partial B}(k_BT)^2+O(T^3).
\label{eq:m-cft}
\end{equation}
In our case, the Fermi velocity along the $\eta=0$ band is $v_F=[\partial\omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}/\partial k]_{k=k_F}$,
with $k_F$ calculated from $\omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}\vert_{k=k_F}=0$.
In Fig. \ref{fig:mtfp}(a) we present $v_F$ as a function of $B$ for the (1/2,1) chain.
Near the critical fields, $|\partial v_F/\partial B|$ is large and $v_F$ is small.
For a fixed $B\gtrsim B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}$, as shown in Fig. \ref{fig:mtfp}(b),
the magnetization presents a fast decay from the $T=0$ value as $T$ increases.
Also, for $B\lesssim B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}$, as shown in Fig. \ref{fig:mtfp}(c),
$m$ increases from $m(0)$. In both cases, the curvature of the $m(T\rightarrow0)$ curve increases as
$B$ gets closer to the critical fields.
The crossover temperature $T(B)$ of the LL regime at a fixed $B$ is
defined as the point at which $m(T)$ departs from the quadratic behavior
in Eq. (\ref{eq:m-cft}).
So, $T(B)$ is taken to be at the minima ($B\gtrsim B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}$) and
maxima ($B\lesssim B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FP}$) of the $m(T)$ curve \cite{PhysRevLett.99.057205}.
In particular, as $B\rightarrow B_c$ the crossover line separates the LL regime and the
quantum critical regime, for which the excitations have a quadratic dispersion relation.
In this case, a universal, model independent,
straight line $k_BT(B)=a|B-B_c|$, with $a=0.76238$,
can be derived \cite{PhysRevLett.99.057205}.
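This universal slope can be reproduced with a short calculation (our sketch, not the derivation of Ref. \cite{PhysRevLett.99.057205}): near $B_c$ the occupied band reduces to free fermions with dispersion $k^2-(B-B_c)$, in units where $k_B=1$, and the extremum of the fermion density as a function of $T$ at fixed $h=B-B_c$ locates the crossover; the prefactor of the quadratic dispersion drops out of the ratio $T/h$:

```python
import math

def density(T, h, M=2000, kmax=8.0):
    # free-fermion density n(T,h) = (1/2pi) * integral dk [e^{(k^2-h)/T}+1]^{-1},
    # evaluated by Simpson's rule on [-kmax, kmax]; h = B - B_c > 0
    step = 2*kmax/M
    tot = 0.0
    for j in range(M + 1):
        k = -kmax + j*step
        wgt = 1.0 if j in (0, M) else (4.0 if j % 2 == 1 else 2.0)
        tot += wgt/(math.exp(min((k*k - h)/T, 700.0)) + 1.0)
    return tot*step/3.0/(2*math.pi)

h = 1.0
Ts = [0.60 + 0.002*i for i in range(151)]         # scan T in [0.60, 0.90]
ns = [density(T, h) for T in Ts]
i_min = min(range(len(ns)), key=lambda i: ns[i])  # extremum of n(T) at fixed h
a = Ts[i_min]/h                                   # ~ 0.76238
```

The density first decreases with $T$ (Sommerfeld correction) and then grows as $\sqrt{T}$, so the extremum is a minimum, and its location reproduces the quoted constant.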
\begin{figure}[!htb]
\includegraphics*[width=0.35\textwidth]{fig5a.eps}
\includegraphics*[width=0.48\textwidth]{fig5bcd.eps}
\caption{(color online). Results from the free spin-wave approach with the fully polarized vacuum (FSW-FPv).
(a) Fermi velocity $v_F$ as a function of the
magnetic field $B$ and [(b), (c) and (d)]
magnetization curves $m(T)$. (a) $\partial v_F/\partial B\rightarrow+\infty$ and $v_F\rightarrow0$ as $B\rightarrow\bcfsw{FRI}=2.00J$,
while $\partial v_F/\partial B\rightarrow-\infty$ and $v_F\rightarrow0$ as $B\rightarrow\bcfsw{FP}=3.00J$.
As shown in the inset, for $B=B_i\approx2.366J$,
$\partial v_F/\partial B=0$ and the susceptibility $\chi(B)$ has a minimum at this value of $B$.
(b) $m(T)$ for the indicated values of $B$ in the vicinity of the critical field $\bcfsw{FRI}$.
(c) $m(T)$ for values of $B$ in the vicinity of the critical field $\bcfsw{FP}$.
(d) $m(T)$ for $B=B_i$. The $m(T)$ curves to order $O(T^2)$, Eq. (\ref{eq:m-cft}),
are shown as dashed lines in (b) and (c) for the corresponding values of $B$, arrows indicate
local extreme points in $m(T)$, which are used as a criterion to identify the LL regime. The inset in (d) shows that
the minimum in $m(T)$ is associated with the local minimum in $\chi(B)$, which is found between the two critical fields.
}
\label{fig:mtfp}
\end{figure}
\begin{figure}[!htb]
\includegraphics*[width=0.44\textwidth]{fig6.eps}
\caption{(color online). Magnetization per cell $m(T)$ with fixed $B$: calculating the crossover lines bounding the Luttinger liquid regime.
Quantum Monte Carlo (QMC) results for the magnetization curves $m(T)$ and
the crossover lines for a system with $N=128$. (a) $m(T)$ for
values of $B$ in the vicinity of the critical field $B_{c,FRI}=1.76J$.
(b) $m(T)$ for values of $B$ in the vicinity of the critical field $B_{c,FP}=3.00J$.
(c) $m(T)$ for a value of $B$ such that $\partial \chi/\partial B\approx0$ at $T=0$ and inside the Luttinger liquid phase,
dashed line in Fig. \ref{fig:mb0-01}. (d) Local extreme points of $m(T)$ curves from QMC and free spin-wave from the
fully polarized vacuum (FSW-FPv). In the case of the FSW-FPv local minima, we shift $B$ by $B_{c,FRI}-\bcfsw{FRI}\approx0.24J$.
The exact crossover straight lines as $T\rightarrow 0$, extended in the figure for better visualization: $a|B-B_{c,FRI}|$ and $a|B-B_{c,FP}|$,
with $a=0.76238$, are also shown. The error bars are defined as half the temperature step ($\Delta T=0.008$) used
to calculate $m(T)$.}
\label{fig:mtQMC}
\end{figure}
In the inset of Fig. \ref{fig:mtfp}(a), we show that the minimum in the $\chi(B)=\partial m/\partial B$ curve
is found at $B=B_i$, a value of $B$ at which $|\partial v_F/\partial B|=0$.
This value of $B$ marks a crossover from the regime
where excitations are predominantly from the FRI critical state to the
regime where they come from the FP critical state.
At $B=B_i$, the Fermi wave-vector is at the inflection point of the dispersion curve
($d^2 \omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}/dk^2=0$), since
\begin{equation}
\frac{\partial v_F}{\partial B}=\left[ \frac{d^2\omega^{\text{\tiny\textsc{(FPv)}}}_{k,0}}{dk^2}\right ]_{k=k_F}\left (\frac{\partial k_F}{\partial B} \right),
\end{equation}
and $k_F$ increases monotonically with $B$ between the critical fields.
If the value of $k$ at the inflection point is $k_i$, we can calculate
$B_i$ from the equation $\omega^{\text{\tiny\textsc{(FPv)}}}_{k_i,0}=0$.
For the (1/2,1) chain, for example, $B_i=2.366J$ and is indicated in Fig. \ref{fig:mtfp}(a).
At $B=B_i$, $\partial v_F/\partial B=0$ and the quadratic term
in Eq. (\ref{eq:m-cft}) is absent, so the LL region most stable against $T$ is
found for $B\approx B_i$. Since the crossover temperatures $T(B)\rightarrow 0$
near the critical fields, the $T(B)$ line has an \textit{asymmetric dome-like} profile, which
is a consequence of the $v_F$ curve, shown in Fig. \ref{fig:mtfp}(a) for the case of the (1/2,1) chain,
and is also observed in other quantum magnets \cite{Zapf2014}.
A minimum in the $m(T)$ curve is
also observed for $B=B_i$, due to the $O(T^3)$ term in Eq. (\ref{eq:m-cft}), as shown in Fig. \ref{fig:mtfp}(d).
In this case, however, this extreme point is associated with the
minimum in the $\chi(B)$ curve, at $B=B_i$, as shown in the inset of
Fig. \ref{fig:mtfp}(d).
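The numbers above are straightforward to reproduce from the $\eta=0$ band alone. The sketch below (our illustration, with $J=1$) computes $k_F$ and $v_F$ in closed form for the (1/2,1) chain and locates $B_i$ as the maximum of $v_F(B)$, where $\partial v_F/\partial B=0$:

```python
import math

s, S, J = 0.5, 1.0, 1.0

def vF(B):
    # Fermi point of the eta=0 band: sqrt((S-s)^2 + 4 s S cos^2(kF/2)) = B - (S+s)
    c2 = ((B - (S + s))**2 - (S - s)**2)/(4*s*S)   # cos^2(kF/2)
    kF = 2*math.acos(math.sqrt(c2))
    # vF = [d omega_{k,0}/dk]_{k=kF} = s S sin(kF) / (B - (S+s))
    return s*S*math.sin(kF)/(B - (S + s))

Bs = [2.001 + 0.001*i for i in range(998)]   # B between B_c,FRI and B_c,FP
vals = [vF(B) for B in Bs]
B_i = Bs[max(range(len(vals)), key=lambda i: vals[i])]
# B_i ~ 2.366J; analytically cos(k_i) = -1/2 and
# B_i = (S+s) + sqrt((S-s)^2 + s S) = 1.5 + sqrt(3)/2
```

The Fermi velocity vanishes at both critical fields and reaches its maximum, $v_F=sS/\sqrt{(S-s)^2+sS}\cdot\sin k_i/\ldots=1/2$ for this chain, exactly at $B_i$, reproducing the asymmetric dome of the crossover line.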
In Fig. \ref{fig:mtQMC} we show $m(T)$ curves for the (1/2,1) chain calculated with QMC method
to discuss the qualitative
agreement between these almost exact results and the conclusions from the spin-wave theory.
In Figs. \ref{fig:mtQMC}(a) and (b), we show the minimum (maximum) in the $m(T)$ curve for
$B\gtrsim B_{c,FRI}=1.76J$ ($B\lesssim B_{c,FP}=3J$). In Fig. \ref{fig:mtQMC}(c), we calculate
$m(T)$ for a value of $B$ in the vicinity of the minimum in the $\chi(B)$ curve, $B=B_i$.
Using the data in Fig. \ref{fig:mb0-01}, it is located at $B_i=(2.27\pm0.07)J$, and is indicated as a
dashed line in that figure. As shown in Fig. \ref{fig:mtQMC}(c), the $m(T\rightarrow0)$ curve is also
flat, as in Fig. \ref{fig:mtfp}(d), for $B=2.25J$. The minimum in the $m(T)$ curve appears at
$T\approx0.1J$. As can be observed in the $T=0.1J$ susceptibility curve in Fig. \ref{fig:mb0-01}, it
is also associated with the minimum in the $\chi(B)$ curve, at $B\approx B_i$.
In Fig. \ref{fig:mtQMC}(d), we compare the position of the local extreme points in the $m(T)$ curves
from QMC and FSW-FPv methods. The values of $B$ at the minima of $m(T)$ were translated by
$B_{c,FRI}-\bcfsw{FRI}\approx0.24J$. The lines for the maxima in $m(T)$ from both
methods are in very good agreement since the FSW-FPv is almost exact for $T\rightarrow0$, due
to the low density of excited magnons in this temperature regime. In contrast, the minima
from both methods do not compare well, except for $T\rightarrow0$, which is dominated by the critical point.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig7.eps}
\caption{(color online). Specific heat from the free spin-wave theory from a fully polarized vacuum (FSW-FPv)
for $T\rightarrow0$. In the Luttinger liquid (LL) regime, $C\sim T$ as $T\rightarrow0$, and $C/T$ is approximately constant
for $B\approx B_i=2.366J$. The inset shows this linear behavior of $C$ at $B=B_i$.
The crossover from the $T=0$ insulating
plateau regime to the gapless quantum critical regime, at local maxima, are indicated by arrows.}
\label{fig:ct}
\end{center}
\end{figure}
We determine the crossover lines between the LL and plateau regimes through specific heat data, $C(B)$.
In Fig. \ref{fig:ct} we present FSW-FPv results for $C(B)$
in the low-$T$ regime. In the LL phase, at $T=0$, the specific heat $C\sim T$ as $T\rightarrow0$,
and $C/T$ is approximately constant in the LL regime, as shown in Fig. \ref{fig:ct}.
The LL regime is most robust in the range of $B$ near $B=B_i$, and we present in the inset of Fig. \ref{fig:ct}
the linear behavior of $C$ as a function of $T$.
For $B\lesssim\bcfsw{FRI}$ or $B\gtrsim\bcfsw{FP}$, the excitations are exponentially activated
and the crossover to the quantum critical regime is marked by a local maximum in $C(B)$.
The points of these crossover lines, $T_{\text{plateau}}(B)\sim|B-B_c|$, are indicated by arrows in
Fig. \ref{fig:ct}. The quantum critical regime is bounded by this crossover line and that of the
LL regime, whose points appear as a second local maximum near $\bcfsw{FRI}$ and $\bcfsw{FP}$
in Fig. \ref{fig:ct}.
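The linear-in-$T$ specific heat at $B=B_i$ is the standard gapless (Luttinger liquid) result. As a sketch, for a band linearized around its Fermi points, $\varepsilon(k)\approx\hbar v|k-k_F|$ (a generic low-$T$ expansion, not specific to the dispersions of this model), the free-fermion specific heat per unit length is
\[
\frac{C}{L}\simeq\frac{\pi}{3}\frac{k_B^2T}{\hbar v},\qquad T\rightarrow0,
\]
so $C/T$ saturates to a constant set by the Fermi velocity $v$ of the relevant magnon band.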
\section{Summary and discussions}
\label{sec:summary-pd}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.47\textwidth]{fig8.eps}
\caption{(color online). Spin-wave $T - B$ phase diagram of the ($s=1/2$, $S=1$) chain from the FPv.
The quantum critical points $B^{(\textit{\tiny\textsc{FSW, FPv}})}_{c,FRI}=2.00J$ and
$\bcfsw{FP}=3.00J$ bound the FRI and FP plateau regions, respectively.
Increasing temperature, the plateau width decreases and the lines $k_BT=|B-\bcfsw{FRI}|$ and $k_BT=|B-\bcfsw{FP}|$ limit the
plateau regions for $B\lesssim B_c$ [ferrimagnetic (FRI) plateau] and $B\gtrsim B_c$ [fully polarized (FP) plateau]. The LL regime has crossover lines given by $a|B-\bcfsw{FRI}|$ and $a|B-\bcfsw{FP}|$,
with $a=0.76238$, for $B\rightarrow B_c$, as indicated by local maxima of the susceptibility
$\chi(B)=\frac{\partial m}{\partial B}$, $\chi(B)_{max}$. Between these local maxima, there is a local minimum [$\chi(B)_{min}$]
separating the regions under the influence of the $\bcfsw{FRI}$ critical point and that of the $\bcfsw{FP}$ one.}
\label{fig:pd}
\end{center}
\end{figure}
We have calculated the critical properties of alternating ferrimagnetic chains in the presence of a magnetic field from two
spin-wave theories. Considering the level of approximation, we determine the better low-energy description of the excitations
by comparing the results with quantum Monte Carlo data. These ferrimagnetic chains present
two magnetization ($m$) plateaus, the ferrimagnetic (FRI) plateau, for which $m=S-s$ and the fully polarized (FP) one, at $m=s+S$.
The first spin-wave theory is an interacting spin-wave (ISW) approach with the FRI classical vacuum, ISW-FRIv. The second methodology
is a free spin-wave (FSW) calculation from the FP state, FSW-FPv. In both cases, two bands are obtained. To calculate the finite temperature ($T$)
properties of the system, one of the bands is considered as a bosonic band, with an effective chemical potential to
prevent boson condensation at $B=0$; while the other is considered as a hard-core boson band, with a fermionic one-particle
thermal distribution. Near the endpoint of the FRI plateau, the ISW-FRIv theory is a better option; while
the FSW-FPv is exact for $T\rightarrow0$
near the endpoint of the FP plateau. Since we are interested in describing the whole $T$ vs. $B$ phase diagram of
the system, we deepen the study on the FSW-FPv, calculating the finite $T$ crossover lines bounding the plateau and the Luttinger liquid (LL) regimes.
In Fig. \ref{fig:pd} we summarize our results in a $T$ vs. $B$ phase diagram, and show specific heat data $C/T$ as a function of
$B$ and $T$. In the FRI and FP plateau regions the excitations are gapped, and $(C/T)\rightarrow0$ as $T\rightarrow0$. The
gaps close at the quantum critical (QC) fields $\bcfsw{FRI}=2J$ and $\bcfsw{FP}=3J$, and local maxima appear in the values of
$C/T$ for a fixed $T$. These local maxima indicate the crossover between the plateau and the QC regimes, and
between the QC and LL regimes. As $T\rightarrow0$, the crossover line between the plateau and the QC regimes (P-QC line) is
a straight line $k_BT(B)=|B-B_c|$,
for $B_c=\bcfsw{FRI}$ and $B_c=\bcfsw{FP}$; while a straight line $a|B-B_c|$, with a model-independent constant $a=0.76238$,
marks the crossover between LL and QC regimes (LL-QC lines). The LL-QC line which contains the critical point $B=\bcfsw{FRI}$ $[B=\bcfsw{FP}]$
was also calculated from local minima (local maxima) in
the $m(T)$ curves: $m(T)_{min}$ $[m(T)_{max}]$. The LL-QC lines were also calculated from local maxima in the susceptibility curve
$\chi(B)$ at fixed $T$: $\chi_{max}(B)$.
The Luttinger liquid regime can
be divided into two regions, separated by the minimum in the $\chi(B)$ curve with a fixed temperature, $\chi_{min}(B)$.
The value of the magnetic field at which this minimum occurs at $T=0$, $B_i$, is at
the inflection point of the magnon band and changes little with $T$. The line $m(T)_{min}$ as a function of $B$
meets the line $\chi_{min}(B)$ for $B\approx B_i$. Finally, the LL regime has an asymmetric dome-like profile which is associated with
the Fermi velocity profile as a function of $B$ at the relevant magnon band, as observed in other quantum magnets \cite{Zapf2014}.
We acknowledge financial support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES),
Conselho Nacional de Desenvolvimento Cientifico e Tecnológico (CNPq), and Fundação de Amparo à Ciência e Tecnologia de Pernambuco (FACEPE),
Brazilian agencies, including the PRONEX Program of FACEPE/CNPq.
\section{Solution}
The model we select is the simple linear regression with l2 penalty. The formula used for prediction is:
$$Y = \tau \cdot X$$
$\tau$ is a 3 by 4 matrix, whose last entry is 1. We try to minimize the following loss function:
$$f(\tau) = \sum_{i=0}^{N} \|\tau \cdot X_i - Y_i\|^2 + \lambda\|\tau\|^2_F$$
When $\lambda\rightarrow0$, this is just a simple least squares problem. When $\lambda\rightarrow\infty$, the entries of the matrix $\tau$ approach 0. A closed-form solution that minimizes the loss function can be obtained by setting the first derivative of $f(\tau)$ to zero. The solution for $\tau$ is:
$$\tau = Y \cdot X^\top \left(X \cdot X^\top + \lambda I\right)^{-1}$$
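A minimal numerical sketch of this closed-form ridge solution (assuming samples are stored as columns of $X$ and $Y$; the function and variable names are illustrative, not from the original work):

```python
import numpy as np

def ridge_closed_form(X, Y, lam):
    """Closed-form ridge solution tau = Y X^T (X X^T + lam I)^{-1}.

    X: (d, N) array with samples as columns; Y: (m, N) targets;
    lam: L2 penalty strength.
    """
    d = X.shape[0]
    # X X^T + lam I is symmetric positive definite for lam > 0,
    # so the inverse always exists.
    A = X @ X.T + lam * np.eye(d)
    return Y @ X.T @ np.linalg.inv(A)   # shape (m, d)
```

Setting the gradient $2(\tau X - Y)X^\top + 2\lambda\tau$ to zero is exactly what this solution satisfies, which gives a direct numerical check of the derivation.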
\end{document}
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the CVPR 2020 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for CVPR 2020.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the CVPR 2020 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
Please direct any questions to the production editor in charge of these
proceedings at the IEEE Computer Society Press:
\url{https://www.computer.org/about/contact}.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Our report has three parts. First, we analyzed and quantified the deviation of mainstream media in reporting fatal police shooting news. Second, we used FP-growth to mine frequent patterns, clustered hotspots of fatal police shootings, and brought in multiple attributes (social economics, demographics, political tendency, education, gun ownership rate, police training hours, etc.) to reveal connections under the iceberg. Third, we built regression models, with numeric variables selected by correlation analysis, to predict fatal police shooting rates at the state level, and classification models, with categorical variables selected by Chi-square testing, to predict the race of fatal police shooting victims. The main datasets we chose for our analysis include:
1. Washington Post Fatal Police Shooting Dataset (WP data) \cite{2020WashingtonPost}: covers fatal police shootings from 01-01-2015 to 12-02-2020.
2. KilledByPolice (KBP): fatal police shootings reported on the KilledByPolice website \cite{2020KBP} from 01-01-2015 to 11-04-2020.
\section{Related work}
Several studies have utilized local crime data to explain racial disparities and differences in fatal police shootings. Mentch (2020) \cite{mentch2020racial} implemented resampling procedures to take factors like local arrest demography and law enforcement density into account. He found substantially less racial disparity after accounting for local arrest demographics. On the contrary, Ross (2015) \cite{ross2015multi} built a multi-level Bayesian model to investigate the extent of racial bias in recent shootings of civilians by police. He concluded that the racial discrimination observed in police shootings is not explainable by local-level race-specific crime rates. Noticeably, Mentch and Ross reached contradictory conclusions, but they inspired us to use data mining and machine learning techniques that incorporate more factors than crime data alone to better understand fatal police shootings in the US.
\section{Methodology}
We defined \textbf{reporting deviation rate} and \textbf{total absolute reporting deviation rate} to evaluate the media's reporting bias.
In WP dataset analysis, we used \textbf{FP-growth} and \textbf{word cloud} to reveal the frequent patterns and \textbf{DBSCAN clustering} to find fatal shooting hotspots. We also implemented \textbf{correlation analysis} to analyze correlation between multiple numeric attributes and fatal police shooting rate and tested the significance of their correlations. We used \textbf{T-test/ANOVA} to measure the significance of fatal police shooting rate by categorical attributes.
In fatal police shooting rate prediction, we used results of correlation analysis to select numeric predictors. We constructed a series of regression models, including \textbf{Kstar}, \textbf{K-Nearest-Neighbor}, \textbf{Random Forest}, and \textbf{Linear Regression}, to predict state level's fatal police shooting rate.
We measured their performance by \textbf{ten-fold cross validation} scores. In victims’ race prediction, we used \textbf{Chi-square testing} to do \textbf{variables selection}. We built a series of classification models, including \textbf{Gradient Boosting Machine}, \textbf{Multi-class Classifier}, \textbf{Logistic Regression}, and \textbf{Naïve Bayes Classifier}, to predict the race of fatal police shooting victims. We measured their performance by \textbf{stratified five-fold cross validation} scores.
\section{Media reporting analysis}
Since 2015, The Washington Post (WP) has maintained a database cataloging every fatal shooting nationwide by a police officer in the line of duty. Fewer than 1,000 people have been killed by police every year. The rate at which African American people are killed is disproportionately higher than that of any other race (we use Black or B to distinguish from Asian or A in the following). Figure-1 shows the number of people killed by police shootings per year nationwide. Figure-2 shows the average proportion of each race killed by police shootings, from WP’s website.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F1.png}
\caption{Number of people killed by police shooting by year till 02/12/2020}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.25]{F2.png}
\caption{The average proportion rate of each race killed by police shooting, \cite{2020WP}}
\end{figure}
Admittedly, there is no doubt that the rate for Black people is higher than for any other group if we compare it with the population proportion. However, once we add the proportion of violent incident offenders \cite{CrimeRatebyRace} for each racial group, we see that the ratios match each other accordingly. See Figure-3 below.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F3.png}
\caption{Percent of violent incidents of offenders (3-year average) VS. 5-year average population proportion VS. Fatal police shooting victims by race}
\end{figure}
We collected 2472 police shooting victims with known reporting media and race from 2016 onward from KBP. Our null hypothesis is that the news reported by each medium for each race should follow the distribution of the cases that actually happened. We use the racial proportion of victims from the WP Data as the ground truth. We selected media with over 100 fatal police shooting news reports, which include one conservative outlet, FOX (318), and three liberal outlets: ABC (244), CBS (227), and NBC (135). The media’s political inclination is shown in Figure-4: Political bias of selected media. We excluded media with fewer than 100 reports since most of them are local media whose news may be influenced by local demographics. Figure-5 shows the comparison results: all four media deviate from the truth in different ways. In general, Black victims were over-reported by all the media.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F4.png}
\caption{Political bias of selected media}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.25]{F5.png}
\caption{Media reporting proportion of police shooting by different race}
\end{figure}
To examine the differences in deviation between the four media, we defined the following measurements and calculation methods:
1. Reporting deviation rate of media B regarding race A = $R(B, A)$ = reported proportion of race A by media B – real proportion of race A in WP Data.
If $R(B, A) < 0$, media B underreports race A victims.
Else $R(B, A) > 0$, media B overreports race A victims.
2. Total absolute reporting deviation rate of media B $= \sum_{i=1}^{N} |R(B, A_i)|$, where $A_i$ is the $i$-th race and $N$ is the number of races.
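The two deviation measures defined above can be sketched in code (the proportions in the usage note are hypothetical, for illustration only):

```python
def reporting_deviation(reported, truth):
    """Per-race reporting deviation R(B, A) and total absolute deviation.

    reported: dict mapping race -> proportion reported by media B.
    truth:    dict mapping race -> ground-truth proportion (WP Data).
    Returns (per-race deviation dict, total absolute deviation rate).
    Negative deviation = underreporting; positive = overreporting.
    """
    dev = {race: reported.get(race, 0.0) - p for race, p in truth.items()}
    total = sum(abs(d) for d in dev.values())
    return dev, total
```

For example, with hypothetical proportions `reported = {"White": 0.30, "Black": 0.45, "Hispanic": 0.25}` and `truth = {"White": 0.45, "Black": 0.25, "Hispanic": 0.30}`, the function reports White underreported by 15 points, Black overreported by 20, and a total absolute deviation of 40%.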
We then get Figure-6: four media reporting deviations. FOX has the smallest deviation rate from the WP Data: only –3.3\% for White, +5.6\% for Black, -1.5\% for Hispanic, -0.6\% for Asian, –0.6\% for Native American, and +0.4\% for Other. Nevertheless, ABC, CBS, and NBC have larger reporting deviations; they underreported White victims by about 10\% while overreporting Hispanic and African American victims. Specifically, NBC underreported the White victims' proportion by 17.4\% and overreported the Black victims' proportion by 15.0\%; it even reported more Black victims (41.5\%) than White victims (33.3\%). ABC overreported the Hispanic victims' proportion by 13.4\%, reporting Hispanic victims (32.0\%) at the same level as White victims (36.1\%). Figure-7 shows the four media's total absolute deviation rates.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F6.png}
\caption{Four major media reporting deviation rate}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.25]{F7.png}
\caption{Four major media total absolute deviation rate }
\end{figure}
In terms of total absolute reporting proportion error, NBC has the largest reporting deviation rate (39.8\%), followed by ABC (35.3\%) and CBS (28.9\%), while FOX has the smallest (12.0\%), as shown above.
\section{WP fatal police shooting dataset insight}
In this part, we use FP-growth and word cloud to reveal the frequent pattern behind the WP dataset. We use location data from the WP dataset to cluster police shooting incidents and find shooting hotspots. We also tried multi-attributes such as social economics, demographics, political tendency, education, gun ownership rate, police training hours, etc., to verify the possible reason for the police shooting.
\subsection{Frequent Pattern Mining}
From the frequent pattern mining, we can profile a typical victim shot by police: \textbf{a “man” (96\%) “without mental illness” (77\%) uses a “gun” (57\%) to “attack” (65\%) police, then gets “shot” (95\%) by police who do not wear a “body camera” (88\%)}; see Figure-8 and Figure-9 below. \textbf{“California,” “Texas,” “Florida”} are the top three states where shootings happened most frequently in total number; see Figure-10.
Therefore, our subsequent analysis considers gun ownership rate, crime rate, Marijuana legality, and governor’s party at the state level. The frequent patterns are mined with FP-growth [HPY00], with the minimum support threshold set at 50\% of the total transactions of the WP dataset.
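The support threshold that drives FP-growth can be illustrated with a brute-force counter over small itemsets (a didactic sketch only; FP-growth itself builds a prefix tree to avoid this enumeration, and the transactions below are hypothetical):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count support for 1- and 2-itemsets and keep those at or above
    min_support (a fraction of the number of transactions)."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))          # dedupe and fix an ordering
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}
```

With a 50% threshold, as in our analysis, only attribute combinations present in at least half of the incident records survive, which is what produces the compact "typical victim" profile above.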
We also apply DBSCAN \cite{khan2014dbscan} to the longitude and latitude of fatal police shooting locations to identify hotspot clusters. Setting the parameters eps=0.5 and min\_samples=50, we find the dense areas of fatal police shootings; see Figure-11 below. We discover that the Los Angeles and Atlanta metropolitan areas have two of the largest hotspots. Generally, all the fatal police shooting hotspots are in the country's most populous cities.
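The clustering step can be sketched with a minimal pure-Python DBSCAN (an $O(n^2)$ didactic version; real runs on thousands of coordinates would use an indexed implementation such as scikit-learn's `DBSCAN`):

```python
import math

def dbscan(points, eps, min_samples):
    """Minimal DBSCAN over 2-D points; returns one label per point,
    with -1 meaning noise. Clusters are numbered 0, 1, ..."""
    n = len(points)

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_samples:
            labels[i] = -1            # provisionally noise; may become border
            continue
        cluster += 1                  # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reached from a core -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_samples:
                seeds.extend(jn)      # j is core too: keep expanding
    return labels
```

For geographic data, eps=0.5 in raw degrees (as in our run) corresponds to a few tens of kilometers, which is why the clusters found line up with metropolitan areas.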
\begin{figure}[h!]
\includegraphics[scale=0.25]{F8.png}
\caption{Word cloud of police shooting}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{F9.png}
\caption{Frequent pattern of police shooting}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.25]{F10.png}
\caption{Yearly average Fatal Police shooting per 1m by State}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.27]{F11.png}
\caption{Fatal police shooting hotspots distribution}
\end{figure}
\subsection{Correlated variables analysis}
\subsubsection{Quantitative Variable analysis }
To avoid the population size distorting the analysis, we normalized the counts to the yearly average number of fatal police shootings per one million people (the \textbf{fatal police shooting rate}). We use this density-like value for the analysis afterwards. Figure-12 shows how many people, on average, were shot by police every year. \textbf{New Mexico} and \textbf{Alaska}, which have relatively small populations, become the top states. The color gets darker from east to west, except for large-population states such as California and Washington. Doesn't it look like the U.S. history of territory expansion?
\begin{figure}[h!]
\includegraphics[scale=0.25]{F12.png}
\caption{Yearly average Fatal Police shooting per 1m by State}
\end{figure}
It appears that the longer ago a state joined the U.S., the lower its fatal police shooting rate. The correlation coefficient between the fatal police shooting rate and the U.S. history of territory expansion is 68\%. Our interpretation is that the excessive violence used by U.S. police when handling violent criminals may be rooted in the westward expansion; see Figure-13.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F13.png}
\caption{US history of territory expansion}
\end{figure}
The correlation coefficient between the gun ownership rate \cite{2020Gun} and the fatal police shooting rate is 64\%. 57\% of victims held guns (not including other weapons), and 65\% of victims chose to attack police. This gun-holding rate is double the average gun ownership rate in the country, which is 30\% according to Pew’s report \cite{2020PewResearch}; see Figure-14.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F14.png}
\caption{Gun ownership rate by state}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.25]{F15.png}
\caption{Police Basic Training hours by state}
\end{figure}
The third most correlated variable is the land area \cite{US_states_area} (59\%), followed by the violent crime rate \cite{StateCrimeRate} (48\%), the poverty rate (37\%), and the unemployment rate (29\%), see Figure-16. Surprisingly, police basic training hours correlate negatively with the fatal police shooting rate, see Figure-15. Although TrainingReform \cite{2020TrainingReform} appeals for more police training hours, the current data show the opposite result. This may suggest reviewing and improving the training itself rather than merely demanding more hours.
\begin{figure}[h!]
\includegraphics[scale=0.25]{F16.png}
\caption{Correlation table}
\end{figure}
We also tested the significance of each correlation coefficient to confirm the associations; all are significant with relatively small p-values, see Table-1.
$$t = r\sqrt{\frac{n-2}{1-r^2}}, \quad \alpha = 0.01$$
\begin{table}[h!]
\includegraphics[scale=0.3]{T1.png}
\caption{Correlation coefficient test }
\end{table}
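As a sketch (not the authors' code), the test statistic above can be computed in plain Python; the values of $r$ and $n$ below are illustrative stand-ins for a reported correlation over the 50 states:

```python
import math

def corr_t(r, n):
    """t-statistic for testing H0: no correlation, given sample correlation r and size n."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical check: r = 0.68 (shooting rate vs. state-joined year), n = 50 states.
# The two-sided critical value at alpha = 0.01 with df = 48 is roughly 2.68,
# so a t-statistic well above that indicates a significant association.
t = corr_t(0.68, 50)
```

Comparing $t$ against the $t_{n-2}$ critical value at $\alpha=0.01$ reproduces the decision reported in Table-1.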
\subsubsection{Categorical variables analysis}
In this part, we tested whether the fatal police shooting rate differs by the state governor's party \cite{US_Governors} and by marijuana legality \cite{2020Marijuana}. In both cases we failed to reject the null hypothesis, so we conclude that these factors make no difference to the fatal police shooting rate.
\begin{figure}[h!]
\includegraphics[scale=0.35]{F17.png}
\caption{Boxplot of fatal police shooting in Republican and Democrat states}
\end{figure}
T-test:
$H_0$: $\mu_{GOP} = \mu_{Dems}$
$H_A$: $\mu_{GOP} \neq \mu_{Dems}$, i.e. the average fatal police shooting rates differ between Republican- and Democrat-governed states.
Test result:
Since $P_{value} = 0.3254 > 0.05$, we fail to reject the null hypothesis: the average fatal police shooting rates are equal between Republican- and Democrat-governed states.
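A minimal sketch of such a two-sample test (Welch's t-statistic), using hypothetical per-state rates rather than the actual WP-derived values:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical per-state rates (per 1M); the actual values come from the WP data.
gop = [3.1, 4.2, 2.8, 5.0, 3.6]
dems = [2.9, 3.8, 3.3, 4.1, 2.7]
t = welch_t(gop, dems)
```

The p-value is then read off the $t$ distribution with the Welch-Satterthwaite degrees of freedom.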
\begin{figure}[h!]
\includegraphics[scale=0.35]{F18.png}
\caption{Boxplot of fatal police shooting among different marijuana legality states}
\end{figure}
One-way ANOVA:
$H_0$: $\mu_{FL} = \mu_{MML} = \mu_{FI}$
$H_A$: at least one of the average rates differs from the others.
Test result:
$F = 0.6492$, $P_{value} = 0.527$; we fail to reject the null hypothesis: the average fatal police shooting rates are equal among states with different marijuana legality.
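The one-way $F$-statistic can be sketched from its definition (between-group over within-group mean squares); the three groups below are hypothetical rates for fully-legal, medical-only, and fully-illegal states:

```python
import statistics

def one_way_f(groups):
    """F-statistic for a one-way ANOVA over a list of samples."""
    all_x = [x for g in groups for x in g]
    grand = statistics.mean(all_x)
    k, n = len(groups), len(all_x)
    # between-group and within-group sums of squares
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical rates; an F near 0 (far below the critical value) means no group effect.
f = one_way_f([[3.1, 4.0, 2.5], [3.3, 2.9, 4.2], [3.0, 3.8, 2.6]])
```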
\section{Fatal police shooting rate and victims race prediction}
In this part, we used the insights drawn from the WP data and the multi-attribute correlation analysis to build predictive models. We constructed a series of regression models to predict the fatal police shooting rate at the state level and a series of classification models to predict the race of fatal police shooting victims.
\subsection{Fatal police shooting rate prediction on state level}
According to the above correlation analysis, we chose \textbf{the violent crime rate}, \textbf{land area}, \textbf{gun ownership rate}, and \textbf{state\_joined\_year} as predictors, based on their high correlation coefficients with the fatal police shooting rate. We acquired more data points by treating each state in each year from 2015 to 2019 as a separate observation.
In the Weka machine learning software, we tried all available models and chose the three best-performing ones based on ten-fold cross-validation. The best is KStar \cite{cleary1995k}: it achieved a 28.04\% relative absolute error under cross-validation and explained 88.53\% of the variance, followed by K-Nearest-Neighbor regression and Random Forest. All three models performed much better than the baseline linear regression model, see Table-2.
\begin{table}[h!]
\includegraphics[scale=0.22]{T2.png}
\caption{Ten-fold cross validation results}
\end{table}
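KStar is Weka-specific, but the nearest-neighbour idea behind the runner-up model can be sketched in plain Python; the scaled feature vectors and targets below are hypothetical, not the actual state data:

```python
def knn_predict(train_X, train_y, x, k=3):
    """Predict by averaging the targets of the k nearest training points (squared Euclidean distance)."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return sum(train_y[i] for i in order[:k]) / k

# Hypothetical features per state-year: (violent crime rate, land area,
# gun ownership rate, state-joined year), all scaled; targets are rates per 1M.
X = [[0.2, 0.1, 0.3, 0.9], [0.8, 0.7, 0.6, 0.2], [0.3, 0.2, 0.4, 0.8]]
y = [2.5, 6.1, 3.0]
pred = knn_predict(X, y, [0.25, 0.15, 0.35, 0.85], k=2)
```

In the actual experiments, such a learner is evaluated by ten-fold cross-validation as in Table-2.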
Figure-19 displays the cross-validation prediction error of each data point in the KStar model (each data point represents the fatal police shooting rate of one state in one year). The X-axis is the actual police shooting rate, the Y-axis the predicted one; a larger cross indicates a higher error.
\begin{figure}[h!]
\includegraphics[scale=0.24]{F19.png}
\caption{Predicted fatal police shooting rate vs. Real fatal police shooting rate}
\end{figure}
The prediction model tells us that the causes of fatal police shootings could be complex: the rate is related to the state joined year, the state land area, the gun ownership rate, and the violent crime rate. This suggests understanding the problem from multiple dimensions.
\subsection{Predict victims’ race in fatal police shooting}
This prediction intends to test whether there is racial discrimination in fatal police shootings. The null hypothesis is that a model cannot predict the victim's race (no racial discrimination); the alternative hypothesis is that a model can predict the victim's race (racial discrimination). We used WP data from 01/01/2015 to 02/12/2020 and excluded records missing the race information, leaving 4518 records in total. Since ``age'' is the only numeric variable, we applied the chi-square test to select predictors from the remaining variables.
\subsubsection{Chi-square testing}
$$\chi^2 = \sum_i\frac{(O_i - E_i)^2}{E_i}, \quad \alpha = 0.05$$
where $\chi^2$ = chi squared, $O_i$ = observed value, $E_i$ = expected value
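The statistic above can be computed directly from a contingency table of observed counts; this sketch uses hypothetical 2$\times$2 body\_camera-by-race counts, not the actual WP figures:

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2D contingency table of observed counts."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    # expected count for cell (i, j) is row[i] * col[j] / n
    return sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(len(row)) for j in range(len(col)))

# Hypothetical counts; for a 2x2 table (df = 1) the 0.05 critical value is about 3.84.
stat = chi_square([[120, 380], [200, 300]])
```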
\begin{table}[h!]
\includegraphics[scale=0.2]{T3.png}
\caption{The chi\_square contingency table for body\_ camera}
\end{table}
\begin{table}[h!]
\includegraphics[scale=0.2]{T4.png}
\caption{Chi-square testing for categorical variables}
\end{table}
After applying the chi-square test to the above categorical variables, we find that threat\_level, signs\_of\_mental\_illness, armed, flee, body\_camera, and gender are not independent of race at the 0.05 significance level, see Table 4. On the other hand, manner\_of\_death and is\_geocoding\_exact are independent of race at the 0.05 significance level. For city and state, the degrees of freedom (DF) are too large to apply the chi-square test. Finally, we chose \textbf{armed}, \textbf{age}, \textbf{gender}, \textbf{signs\_of\_mental\_illness}, \textbf{threat\_level}, \textbf{flee}, and \textbf{body\_camera} as predictors, and city and state as back-up predictors for the racial classification model.
\subsubsection{Classification model}
In the Weka machine learning software and the Python AutoML package, we tried all available models and chose the top three best-performing ones based on stratified five-fold cross-validation, see Table-5 below.
\begin{table}[h!]
\includegraphics[scale=0.28]{T5.png}
\caption{Stratified cross validation results}
\end{table}
We find that adding the city and state attributes boosts model performance. The Gradient Boosting Machine \cite{friedman2001greedy} performs best, with 0.589 precision and 0.611 recall, only slightly better than predicting all victims to be white (about 50\% precision and recall). The GBM algorithm also indicates the importance of the attributes selected for prediction: \textbf{city}, \textbf{state}, \textbf{armed}, and \textbf{age} play essential roles in racial prediction, see Figure-20 below. Since even the best-performing model cannot predict victims' race well, we fail to reject the null hypothesis, suggesting no evidence of racial discrimination in the fatal police shootings observed in the WP data.
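For reference, the per-class precision and recall used to compare against the all-white baseline can be sketched as follows; the toy labels are hypothetical:

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one class, treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

# A baseline predicting every victim to be White gets precision equal to the
# White fraction of the labels (here 0.5) and recall 1.0 on the White class.
true_labels = ["W", "B", "W", "H", "W", "B"]
baseline = ["W"] * len(true_labels)
prec, rec = precision_recall(true_labels, baseline, "W")
```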
\section{Conclusion}
In conclusion, first, we found that mainstream media report fatal police shootings disproportionately by race, which may instigate hostile sentiment between the police and the public; we suggest that mainstream media report all such news realistically. Second, we found that the police shooting rate depends on many variables. The top four significant attributes were \textbf{state joined year}, \textbf{state land area}, \textbf{gun ownership rate}, and \textbf{violent crime rate}. Choosing these four attributes as predictors, our best-performing regression model could predict the fatal police shooting rate with a correlation coefficient of about 88.53\%. Admittedly, we cannot find all the influencing factors, which indicates that fatal police shooting is a \textbf{complex multi-dimensional} problem. We also found that two variables advocated by CNBC (police basic training hours, and the number of months police can work before basic training) correlate negatively and weakly with the fatal police shooting rate. Third, based on the WP dataset, we depicted a typical scenario of a police shooting and marked the hotspots across the country. Last, our three best-performing models show no significant evidence that racial discrimination occurred in the fatal police shootings recorded in the WP dataset.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:intro}
The Painlev\'e equations, which were discovered by Painlev\'e~\cite{P} and Gambier~\cite{Gm} in the early twentieth century,
are non-linear second order ordinary differential equations that define new special functions.
They are closely related to the classical special functions (such as the Gauss hypergeometric function, the Bessel function, and so on)
and elliptic functions.
Originally, the Painlev\'e equations were classified into six equations.
We often denote them as $P_{\mathrm{I}},\ldots,P_{\mathrm{VI}}$.
However, from the viewpoint of the theory of initial value spaces (\cite{O1}),
it is natural to divide the third Painlev\'e equation into three types according to the values of its parameters.
We denote them as $P_{\mathrm{III}(D_6)}, P_{\mathrm{III}(D_7)}$, and $P_{\mathrm{III}(D_8)}$.
The so-called third Painlev\'e equation is then $P_{\mathrm{III}(D_6)}$.
We thus consider that there are eight types of Painlev\'e equations~(\cite{Sak1}).
As is well known, the Painlev\'e equations can be written in Hamiltonian form (\cite{Ok, OKSO}).
Here we give the eight Hamiltonians:
{\allowdisplaybreaks
\begin{align*}
& t(t-1)H_{\rm VI}\left({\alpha , \beta\atop\gamma, \delta
};t;q,p\right)=\;q(q-1)(q-t)p^2\\
& \hspace{13em}+\{ \delta q(q-1)-(2\alpha +\beta +\gamma +\delta )q(q-t)+\gamma
(q-1)(q-t)\} p\\
& \hspace{13em}+\alpha (\alpha +\beta )(q-t),\\
& tH_{\rm V}\left({\alpha , \beta \atop \gamma };t;q,p\right)=\;p(p+t)q(q-1)
+\beta pq+\gamma p-(\alpha +\gamma )tq,\\
& H_{\rm IV}\left(\alpha , \beta;t;q,p\right)=\;
pq(p-q-t)+\beta p+\alpha q,\quad
tH_{\mathrm{III}(D_6)}\left(\alpha , \beta ;t;q,p\right)=\;
p^2q^2-(q^2-\beta q-t)p-\alpha q,\\
& tH_{\mathrm{III}(D_7)}\left(\alpha;t;q,p\right)=\;
p^2q^2+\alpha qp+tp+q,\quad
tH_{\mathrm{III}(D_8)}\left(t;q,p\right)=\;
p^2q^2+qp-q-\frac{t}{q}
\\
& H_{\rm II}\left(\alpha;t;q,p\right)=\;
p^2-(q^2+t)p-\alpha q,\quad \hspace{3em}
H_{\rm I}\left(t;q,p\right)=\;
p^2-q^3-tq,
\end{align*}
}
where $H_{\mathrm{J}}$ is the Hamiltonian corresponding to $P_{\mathrm{J}}$.
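For completeness, each Hamiltonian $H_{\mathrm{J}}$ determines $P_{\mathrm{J}}$ through the canonical Hamiltonian system
```latex
\frac{dq}{dt}=\frac{\partial H_{\mathrm{J}}}{\partial p},\qquad
\frac{dp}{dt}=-\frac{\partial H_{\mathrm{J}}}{\partial q},
```
and eliminating $p$ from this system recovers the second order equation for $q$.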
An important aspect of the Painlev\'e equations for us is their relation to linear differential equations;
that is, \textit{isomonodromic deformations} of linear differential equations.
The first example was given by R. Fuchs~\cite{F} in the case of the sixth Painlev\'e equation.
An isomonodromic deformation is a deformation of a linear differential equation that does not change its ``monodromy data'', see \cite{JMU} for details.
We only give here a brief description of isomonodromic deformations.
Throughout this series of papers, linear differential equations are written in the form of first order systems.
Consider a system of first order linear differential equations:
\begin{equation}\label{eq:linear_system}
\frac{dY}{dx}=
A(x,u)Y
\end{equation}
where $A(x,u)$ is a matrix whose entries are rational functions in $x$ and depend on some parameters $u=(u_1,\ldots,u_n)$.
The isomonodromic deformation of (\ref{eq:linear_system}) is equivalent to the existence of matrices $B_i(x,u) \, (i=1,\ldots,n)$, rational in $x$,
such that
\begin{align*}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=A(x,u)Y \\
\frac{\partial Y}{\partial u_i}&=B_i(x,u)Y \quad (i=1,\ldots,n)
\end{aligned}
\right.
\end{align*}
are completely integrable.
Then the isomonodromic deformation equations of (\ref{eq:linear_system}) are written as
\begin{align*}
&\frac{\partial A(x,u)}{\partial u_i}-\frac{\partial B_i(x,u)}{\partial x}+[A(x,u),B_i(x,u)]=O,\\
&\frac{\partial B_i(x,u)}{\partial u_j}-\frac{\partial B_j(x,u)}{\partial u_i}+[B_i(x,u),B_j(x,u)]=O.
\end{align*}
Hereafter we use the term {\it Painlev\'e-type equations} instead of isomonodromic deformation equations.
For example,
suppose that the system (\ref{eq:linear_system}) has the following form
\begin{equation}\label{eq:Fuchs}
\frac{dY}{dx}=
\sum_{i=1}^n \frac{A_i}{x-u_i}Y \quad (A_i \in M_m(\mathbb{C}))
\end{equation}
with some generic conditions.
Then $B_i$'s are known to be given by $B_i(x,u)=-\frac{A_i}{x-u_i}$~(\cite{Sc}).
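Substituting $B_i(x,u)=-\frac{A_i}{x-u_i}$ into the compatibility conditions above yields the classical Schlesinger equations
```latex
\frac{\partial A_i}{\partial u_j}=\frac{[A_i,A_j]}{u_i-u_j}\quad(j\neq i),\qquad
\frac{\partial A_i}{\partial u_i}=-\sum_{j\neq i}\frac{[A_i,A_j]}{u_i-u_j},
```
which govern the isomonodromic deformation of the Fuchsian system (\ref{eq:Fuchs}).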
Here we review the classification of the Painlev\'e equations in terms of isomonodromic deformations.
Among the Painlev\'e equations, the sixth Painlev\'e equation is the ``source equation'' in the sense that
all the other Painlev\'e equations can be obtained from $P_{\mathrm{VI}}$ through degenerations.
First, we look at the degeneration scheme of Painlev\'e equations, based on the original classification
which classifies the Painlev\'e equations into six types.
\vspace{3mm}
\begin{xy}
{(3,0) *{\begin{tabular}{|c|}
\hline
1+1+1+1\\
\hline
$H_{\rm VI}$\\
\hline
\end{tabular}
}},
{(40,0) *{\begin{tabular}{|c|}
\hline
2+1+1\\
\hline
$H_{\rm V}$\\
\hline
\end{tabular}
}},
{\ar (15,0);(31,0)},
{\ar (49,0);(67,10)},
{\ar (49,0);(67,-10)},
{(77,10) *{\begin{tabular}{|c|}
\hline
2+2\\
\hline
$H_{\mathrm{III}(D_6)}$\\
\hline
\end{tabular}}},
{\ar (86,10);(104,0)},
{\ar (86,-10);(104,0)},
{(77,-10) *{\begin{tabular}{|c|}
\hline
3+1\\
\hline
$H_{\rm IV}$\\
\hline
\end{tabular}}},
{(110,0) *{\begin{tabular}{|c|}
\hline
4\\
\hline
$H_{\rm II}$\\
\hline
\end{tabular}}},
{\ar (116,0);(133,0)},
{(140,0) *{\begin{tabular}{|c|}
\hline
7/2\\
\hline
$H_{\mathrm{I}}$\\
\hline
\end{tabular}}},
\end{xy}
\vspace{3mm}
The number in the upper half of each box is the ``singularity pattern'',
which expresses the Poincar\'e ranks of the singularities of the associated linear system.
More specifically, each number separated by $+$ equals the Poincar\'e rank of the corresponding singular point plus one (see Section~\ref{sec:HTL}).
In particular, 1+1+1+1 means that corresponding linear system has four regular singular points (thus it is a Fuchsian system).
However, in the above scheme, the third Painlev\'e equations of type $D_7^{(1)}, D_8^{(1)}$ are absent.
Moreover, in view of associated linear systems,
the degeneration $H_{\mathrm{II}} \to H_{\mathrm{I}}$ has a different meaning from the other degenerations.
Namely, the other degenerations correspond to confluences of singular points of associated linear systems,
while the degeneration $H_{\mathrm{II}} \to H_{\mathrm{I}}$ corresponds to the degeneration of the Jordan canonical form
of the leading coefficient of (or, more precisely, the degeneration of the HTL canonical form at) the irregular singular point.
Considering the other possible degenerations of HTL canonical forms and confluences of singular points,
one can obtain the following ``complete'' degeneration scheme~\cite{OO}:
\vspace{3mm}
\begin{xy}
{(3,0) *{\begin{tabular}{|c|}
\hline
1+1+1+1\\
\hline
$H_{\mathrm{VI}}$\\
\hline
\end{tabular}
}},
{(35,0) *{\begin{tabular}{|c|}
\hline
2+1+1\\
\hline
$H_{\mathrm{V}}$\\
\hline
\end{tabular}
}},
{\ar (14,0);(26,0)},
{\ar (44,0);(57,14)},
{\ar (44,0);(57,0)},
{\ar (44,0);(57,-14)},
{(67,15) *{\begin{tabular}{|c|}
\hline
2+2\\
\hline
$H_{\mathrm{III}(D_6)}$\\
\hline
\end{tabular}}},
{(67,0) *{\begin{tabular}{|c|}
\hline
$\frac{3}{2}+1+1$\\
\hline
$H_{\mathrm{III}(D_6)}$\\
\hline
\end{tabular}}},
{\ar (78,14);(92,14)},
{\ar (78,14);(92,1)},
{\ar (78,0);(92,13)},
{\ar (78,0);(92,-13)},
{\ar (78,-14);(92,-1)},
{\ar (78,-14);(92,-14)},
{(67,-15) *{\begin{tabular}{|c|}
\hline
3+1\\
\hline
$H_{\mathrm{IV}}$\\
\hline
\end{tabular}}},
{(102,15) *{\begin{tabular}{|c|}
\hline
$2+\frac{3}{2}$\\
\hline
$H_{\mathrm{III}(D_7)}$\\
\hline
\end{tabular}}},
{(102,0) *{\begin{tabular}{|c|}
\hline
4\\
\hline
$H_{\mathrm{II}}$\\
\hline
\end{tabular}}},
{\ar (112,14);(125,1)},
{\ar (112,14);(125,14)},
{\ar (112,0);(125,0)},
{\ar (112,-14);(125,0)},
{(102,-15) *{\begin{tabular}{|c|}
\hline
$\frac{5}{2}+1$\\
\hline
$H_{\mathrm{II}}$\\
\hline
\end{tabular}}},
{(135,15) *{\begin{tabular}{|c|}
\hline
$\frac{3}{2}+\frac{3}{2}$\\
\hline
$H_{\mathrm{III}(D_8)}$\\
\hline
\end{tabular}}},
{(135,0) *{\begin{tabular}{|c|}
\hline
$\frac{7}{2}$\\
\hline
$H_{\mathrm{I}}$\\
\hline
\end{tabular}}},
\end{xy}
\vspace{3mm}
\begin{rem}
In the case of the Painlev\'e equations,
since (standard) corresponding linear systems are rank 2,
the degenerations of HTL canonical forms are caused by the degenerations of Jordan canonical forms.
However, in general, a degeneration of an HTL canonical form does not always correspond to a degeneration of a Jordan canonical form
(this will be shown in the forthcoming papers~\cite{K2, K3}).
\qed
\end{rem}
Recently, there have been many generalizations of the Painlev\'e equations in the literature.
What is important here is that they can be written as compatibility conditions of linear differential equations.
Thus we can say that they describe isomonodromic deformations of some linear differential equations.
The purpose of the present series of papers is to classify those with four-dimensional phase spaces using the associated linear systems.
More specifically,
we construct the degeneration scheme starting from suitable Fuchsian systems
so that we obtain the classification of four-dimensional Painlev\'e-type equations.
What is the ``source equation'' of the degeneration scheme of four-dimensional Painlev\'e-type equations?
The recent development of the theory of Fuchsian systems (\cite{Katz, Os, HF}) enables us to
answer the question.
That is, there are four Fuchsian systems (and thus four corresponding Painlev\'e-type equations~\cite{Sak2}) which should be placed
at the starting points of the degeneration scheme of four-dimensional Painlev\'e-type equations:
one admits a two-dimensional deformation and corresponds to the Garnier system~\cite{G},
the remaining three correspond to the so-called Fuji-Suzuki system~\cite{FS1, Ts} (abbreviated to FS-system),
the Sasano system~\cite{Ss}, and the sixth matrix Painlev\'e system~\cite{B2, K}.
In \cite{KNS}, the authors constructed the degeneration scheme starting with the above four Fuchsian systems
by considering confluences of singular points.
As the result, they obtained the degeneration scheme of four-dimensional Painlev\'e-type equations
associated with unramified linear equations.
Note that the degeneration scheme of the Garnier system in two variables had already been obtained by Kimura~\cite{Ki}.
However, the confluence of singularities of linear equations is not sufficient to produce all the four-dimensional Painlev\'e-type equations.
The aim of this series of papers is, by considering the degeneration of HTL canonical forms,
to obtain the ``complete'' degeneration scheme of four-dimensional Painlev\'e-type equations.
In the present paper, we focus on degenerations of HTL canonical forms starting from the Fuchsian system
corresponding to what we call the sixth matrix Painlev\'e system.
This paper is organized as follows.
In Section~\ref{sec:HTL}, we explain HTL canonical forms at singular points of linear systems
and demonstrate the degeneration of an HTL form.
In Section~\ref{sec:Lax_pairs}, we present the Hamiltonians
associated with ramified linear systems together with their Lax pairs.
Section~\ref{sec:Laplace} is devoted to describing the correspondences given by the Laplace transform.
We give in advance the degeneration scheme of the matrix Painlev\'e systems.
\vspace{2mm}
{\scriptsize
\begin{xy}
{(0,0) *{\begin{tabular}{|c|}
\hline
1+1+1+1\\
\hline
$22,22,22,211$\\
$H^{\mathrm{Mat}}_{\mathrm{VI}}$\\
\hline
\end{tabular}
}},
{\ar (12,0);(23,2)},
{\ar (12,0);(23,-2)},
{(35,0) *{\begin{tabular}{|c|}
\hline
2+1+1\\
\hline
$(2)(2),22,211$\\
$(2)(11),22,22$\\
$H^{\mathrm{Mat}}_{\mathrm{V}}$\\
\hline
\end{tabular}
}},
{\ar (47,2);(58,22)},
{\ar (47,2);(58,17)},
{\ar (47,2);(58,-19)},
{\ar (47,2);(59,2)},
{\ar (47,-2);(58,17)},
{\ar (47,-2);(59,-2)},
{\ar (47,-2);(58,-19)},
{(70,20) *{\begin{tabular}{|c|}
\hline
3+1\\
\hline
$((2))((2)),211$\\
$((2))((11)),22$\\
$H^{\mathrm{Mat}}_{\mathrm{IV}}$\\
\hline
\end{tabular}}},
{\ar (82,22);(93,0)},
{\ar (82,22);(95,22)},
{\ar (82,18);(93,0)},
{\ar (82,18);(95,18)},
{(70,0) *{\begin{tabular}{|c|}
\hline
$\frac{3}{2}+1+1$\\
\hline
$(2)_2,22,211$\\
$(11)_2,22,22$\\
$H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}$\\
\hline
\end{tabular}}},
{\ar (81,2);(95,22)},
{\ar (81,2);(95,18)},
{\ar (81,2);(95,-18)},
{\ar (81,-2);(95,18)},
{\ar (81,-2);(95,-22)},
{(70,-20) *{\begin{tabular}{|c|}
\hline
2+2\\
\hline
$(2)(2),(2)(11)$\\
$H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}$\\
\hline
\end{tabular}}},
{\ar (82,-20);(93,0)},
{\ar (82,-20);(95,-18)},
{\ar (82,-20);(95,-22)},
{(105,20) *{\begin{tabular}{|c|}
\hline
$\frac52+1$ \\
\hline
$(((2)))_2, 211$\\
$(((11)))_2, 22$\\
$H^{\mathrm{Mat}}_{\mathrm{II}}$\\
\hline
\end{tabular}}},
{\ar (115,22);(128,0)},
{\ar (115,17);(128,0)},
{(105,0) *{\begin{tabular}{|c|}
\hline
4\\
\hline
$(((2)))(((11)))$\\
$H^{\mathrm{Mat}}_{\mathrm{II}}$\\
\hline
\end{tabular}}},
{\ar (117,0);(128,0)},
{(105,-20) *{\begin{tabular}{|c|}
\hline
$\frac32+2$\\
\hline
$(2)_2, (2)(11)$\\
$(11)_2, (2)(2)$\\
$H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}$\\
\hline
\end{tabular}}},
{\ar (116,-18);(129,-20)},
{\ar (116,-18);(128,0)},
{\ar (116,-22);(129,-20)},
{\ar (116,-22);(128,0)},
{(139,0) *{\begin{tabular}{|c|}
\hline
$\frac72$ \\
\hline
$(((((11)))))_2$\\
$H^{\mathrm{Mat}}_{\mathrm{I}}$\\
\hline
\end{tabular}}},
{(139,-20) *{\begin{tabular}{|c|}
\hline
$\frac32+\frac32$ \\
\hline
$(2)_2, (11)_2$\\
$H^{\mathrm{Mat}}_{\mathrm{III}(D_8)}$\\
\hline
\end{tabular}}}
\end{xy}
}
\vspace{2mm}
Degeneration schemes of the Sasano systems, the FS systems, and the Garnier systems will appear in
our forthcoming papers~\cite{K2,K3}.
\bigskip
\noindent
\textbf{Acknowledgements}
\noindent
The author wishes to thank Professors Hidetaka Sakai and Akane Nakamura
for their invaluable advice and comments.
The author also thanks Professor Kazuki Hiroe.
The notation of spectral types for ramified singularities in this series of papers follows his suggestion.
\section{HTL canonical forms and their degeneration}\label{sec:HTL}
As mentioned earlier, to classify the four-dimensional Painlev\'e-type equations,
we attach a linear system to each Painlev\'e-type equation.
Hence we would like to introduce simple notations for linear systems;
that is, Riemann schemes and their abbreviated symbols, spectral types.
The Riemann scheme and the spectral type of a linear system will be defined through the collection of ``canonical forms'' of
the linear system.
Here the canonical form is determined at each singular point.
We call the canonical form the HTL canonical form (or HTL form) of the system.
\subsection{HTL canonical forms}\label{sec:HTL_forms}
Here we recall the HTL canonical form of a linear system at a singular point.
Note that for a linear system
\[
\frac{dY}{dx}=A(x)Y,
\]
a transformation of the dependent variable $Y=P(x)Z$ by an invertible matrix $P(x)$ yields the
following system:
\[
\frac{dZ}{dx}=\left( P(x)^{-1}A(x)P(x)-P(x)^{-1}\frac{dP(x)}{dx} \right)Z.
\]
We write the coefficient matrix $P^{-1}AP-P^{-1}\frac{dP}{dx}$ as $A^P(x)$.
This kind of transformation $A(x) \mapsto A^P(x)$ is called a gauge transformation.
We often express the above transformation of the dependent variable as $Y \to PY$
(i.e. we write the new dependent variable as $Y$ again).
Linear systems that we treat in this series of papers are those with rational function coefficients.
Such a system is generally written as follows:
\begin{equation}
\frac{dY}{dx}=
\left(\sum_{\nu=1}^n\sum_{k=0}^{r_{\nu}}\frac{A_{\nu}^{(k)}}{(x-u_{\nu})^{k+1}}
+\sum_{k=1}^{r_{\infty}}A_{\infty}^{(k)}x^{k-1}
\right)Y, \quad
A_j^{(k)} \in M_m(\mathbb{C}).
\end{equation}
This system has singular points at $x=u_1,\ldots,u_n$, and $\infty$.
Choosing one of the singular points and taking the local coordinates $z=x-u_\nu$ or $z=1/x$
according to the choice,
we write the system at $z=0$ as follows:
\begin{equation}\label{eq:Laurent}
\frac{dY}{dz}=\left(
\frac{A_0}{z^{r+1}}+\frac{A_1}{z^{r}}+\cdots+A_{r+1}+A_{r+2} z+\cdots
\right)Y.
\end{equation}
We denote the field of formal Laurent series in $z$ by $\mathbb{C}(\!(z)\!)$,
and the field of Puiseux series $\cup_{p > 0}\mathbb{C}(\!(z^{\frac1p})\!)$ by $\mathcal{K}_z$.
We can convert (\ref{eq:Laurent}) to the HTL canonical form.
Here the definition of the HTL canonical form is as follows.
\begin{df}
An element in $M_m(\mathcal{K}_z)$ of the form
\begin{equation}
\frac{D_0}{z^{l_0}}+\frac{D_1}{z^{l_1}}+\cdots+\frac{D_{s-1}}{z^{l_{s-1}}}+\frac{\Theta}{z^{l_s}}
\end{equation}
where
\begin{itemize}
\item $l_0, \ldots, l_{s}\ (l_0 > l_1 > \cdots > l_{s-1}>l_s = 1)$ are rational numbers,
\item $D_0, \ldots, D_{s-1}$ are diagonal matrices,
\item $\Theta$ is a (not necessarily diagonal) Jordan matrix which commutes with all $D_j$'s,
\end{itemize}
is called an {\it HTL canonical form} or {\it HTL form} for short.
\qed
\end{df}
The following theorem is fundamental for us.
\begin{thm}[Hukuhara \cite{Huk}, Turrittin \cite{Tur}, Levelt \cite{Lev}]
For any
\begin{equation}\label{eq:Laurent2}
A(z)=\frac{A_0}{z^{r+1}}+\frac{A_1}{z^{r}}+\cdots+A_{r+1}+A_{r+2} z+\cdots, \quad
A_j \in M_m(\mathbb{C}),
\end{equation}
there exists a matrix $P(z) \in \mathrm{GL}_m(\mathcal{K}_z)$ such that $A^P(z)$ is an HTL form
\begin{equation}\label{eq:HTLform}
A^P(z)=
\frac{D_0}{z^{l_0}}+\frac{D_1}{z^{l_1}}+\cdots+\frac{D_{s-1}}{z^{l_{s-1}}}+\frac{\Theta}{z^{l_s}}.
\end{equation}
Here $l_0, \ldots, l_{s}$ are uniquely determined only by the original system (\ref{eq:Laurent2}).
If the following
\begin{equation}
\frac{\tilde{D}_0}{z^{l_0}}+\frac{\tilde{D}_1}{z^{l_1}}+\cdots+\frac{\tilde{D}_{s-1}}{z^{l_{s-1}}}
+\frac{\tilde{\Theta}}{z^{l_s}}
\end{equation}
is another HTL canonical form corresponding to the same system~(\ref{eq:Laurent2}),
there exists a constant matrix $g \in \mathrm{GL}_m(\mathbb{C})$ and a natural number $k \in \mathbb{Z}_{\ge 1}$ such that
\begin{equation}
\tilde{D}_j=g^{-1}D_j g, \quad \exp(2\pi i k \tilde{\Theta})=g^{-1}\exp(2\pi i k \Theta)g
\end{equation}
hold.
\end{thm}
We call (\ref{eq:HTLform}) the HTL canonical form (or HTL form) of (\ref{eq:Laurent2}).
The number $l_0-1$ is called the {\it Poincar\'e rank} of the singular point.
When there is a rational number $l_j$ that is not an integer, the singular point is called a {\it ramified} irregular singular point.
A linear system is said to be of \textit{ramified type} if it has a ramified irregular singular point.
When we express the Poincar\'e ranks of the singular points of a given system,
we attach the number ``Poincar\'e rank +1'' to each singular point and
connect them with ``+''.
We call it the {\it singularity pattern} of the system.
In this series of papers, the residue matrix $\Theta$ of an HTL form is always diagonal (see Section~\ref{sec:shearing}).
In the next subsection, we define Riemann schemes and spectral types for such cases.
\subsection{Riemann schemes and spectral types}
\subsubsection{Unramified singularities and refining sequences of partitions}\label{sec:RSP}
The Riemann scheme of a linear system is defined to be the table of HTL forms of the system at all singular points.
Here we introduce a special notation for HTL forms.
For an $m \times m$ HTL form
\begin{equation}\label{eq:HTL_form}
\frac{D_0}{z^{l_0}}+\frac{D_1}{z^{l_1}}+\cdots+\frac{D_{s-1}}{z^{l_{s-1}}}+\frac{\Theta}{z},
\end{equation}
let $d \in \mathbb{Z}_{>0}$ be the minimum element of
$\{ k \in \mathbb{Z}_{>0} \,|\, k l_j \in \mathbb{Z} \ (j=0,\ldots,s-1) \}$.
Then the canonical form (\ref{eq:HTL_form}) is rewritten in the following form
\begin{equation}
\frac{T_{0}}{z^{\frac{b}{d}+1}}+\frac{T_{1}}{z^{\frac{b-1}{d}+1}}+\cdots+\frac{T_{b-1}}{z^{\frac{1}{d}+1}}+\frac{\Theta}{z}
\end{equation}
where some of $T_j$'s may be the zero matrix.
Then, by writing the diagonal entries of $T_j$'s and $\Theta$ as $t^j_k$ and $\theta_k\ (k=1, \ldots, m)$
respectively, we can express the canonical form as follows:
\begin{equation}
\begin{array}{c}
x=u_i \ \left( \frac{1}{d} \right) \\
\overbrace{\begin{array}{ccccc}
t^0_1 & t^1_1 & \ldots & t^{b-1}_1 & \theta_1\\
\vdots & \vdots & & \vdots & \vdots \\
t^0_m & t^1_m & \ldots & t^{b-1}_m & \theta_m
\end{array}}
\end{array}.
\end{equation}
When the position of a singular point under consideration is $\infty$, then of course we write ``$x=\infty$''.
The symbol $(\frac1d)$ is omitted when $d=1$.
The table of the above expressions of HTL forms for all singularities of a linear system is called the {\it Riemann scheme} of the system.
It is shown in \cite{KNS} that the feature of an HTL form of a linear system at an \textit{unramified} singular point
is well described by a refining sequence of partitions, defined below.
\begin{df}\label{def:rsp}
Let $\lambda=\lambda_1\ldots \lambda_p$ and $\mu=\mu_1\ldots \mu_q$ be partitions of a natural number $m$:
$\lambda_1+\cdots+\lambda_p=\mu_1+\cdots+\mu_q=m$.
Here we assume that $\lambda_i$'s and $\mu_i$'s are not necessarily arranged in descending or ascending order.
If there exists a disjoint decomposition $\{ 1,2,\ldots , p\}=I_1\coprod \cdots \coprod I_q$ of the index set of $\lambda$ such that $\mu_k=\sum_{j\in I_k}\lambda_j$ holds,
then we call $\lambda$ a {\it refinement} of $\mu$.
Let $[p_0,\ldots,p_r]$ be an $(r+1)$-tuple of partitions of $m$.
When $p_{i+1}$ is a refinement of $p_i$ for all $i\ (i=0,\ldots,r-1)$, we call $[p_0,\ldots,p_r]$ a {\it refining sequence of partitions} or {\it RSP} for short.
\qed
\end{df}
The appearance of the RSP is a consequence of the following proposition.
\begin{prop}{(block diagonalization)}\label{thm:block_diag}
For any formal Laurent series
\begin{equation}\label{eq:irreg_sys}
A(z)=\frac{1}{z^{r+1}}\left( A_0+A_1z+\cdots \right)
\end{equation}
where the eigenvalues of $A_0$ are $\lambda_1,\ldots,\lambda_n$ and their multiplicities $m_1,\ldots,m_n$ respectively,
we can choose a formal power series $P(z)$ so that the gauge transformation by $P(z)$ changes {\rm (\ref{eq:irreg_sys})} to
the following form:
\begin{align}
&A^P(z)
=
\begin{pmatrix}
B_1 & & \\
& \ddots & \\
& & B_n
\end{pmatrix}, \quad
B_k=\frac{1}{z^{r+1}}\left( B^k_0+B^k_1z+\cdots \right)
\end{align}
where $B^k_0=\lambda_k I_{m_k}+N_k$ and $N_k$ is nilpotent.
\end{prop}
Therefore the system corresponding to (\ref{eq:irreg_sys}) is formally reduced to the following $n$ systems:
\begin{equation}\label{eq:dec_system}
\frac{dZ_k}{dz}=B_k Z_k \quad (k=1,\ldots,n).
\end{equation}
To compute the HTL form for a given linear system, we first decompose the linear system according to
Proposition~\ref{thm:block_diag}.
If the leading matrices $B^1_0, \ldots, B^n_0$ are semisimple (i.e. if they are scalar matrices),
we focus on the next matrices $B^1_1, \ldots, B^n_1$.
If they are all semisimple, then (\ref{eq:dec_system}) again decomposes along the eigenvalues of each $B^k_1\ (k=1,\ldots,n)$.
In this way, the RSP structure arises.
\begin{eg}
Consider the following HTL form:
{\small
\begin{align*}
\begin{pmatrix}
t^0_1 & & & \\
& t^0_1 & & \\
& & t^0_2 & \\
& & & t^0_2
\end{pmatrix}
\frac{1}{z^3}+
\begin{pmatrix}
t^1_1 & & & \\
& t^1_1 & & \\
& & t^1_2 & \\
& & & t^1_3
\end{pmatrix}
\frac{1}{z^2}+
\begin{pmatrix}
\theta_1 & & & \\
& \theta_2 & & \\
& & \theta_3 & \\
& & & \theta_4
\end{pmatrix}
\frac{1}{z}.
\end{align*}
}
To this HTL form we attach the RSP $[22,211,1111]$, which is the spectral type of this HTL form.
The spectral type is also represented as $((11))((1)(1))$. We use the latter notation
(for details, see~\cite{KNS}).
\qed
\end{eg}
\subsubsection{Spectral types of ramified singularities}\label{sec:shearing}
As we have seen, the spectral type of an \textit{unramified} singularity is an RSP.
If there appear non-semisimple matrices in the course of the above successive block diagonalizations,
we need to perform the so-called shearing transformation.
Then the singularity is in general ramified.
The spectral type of a ramified singularity consists of ``copies'' of RSPs.
Here we briefly explain the shearing transformation.
According to Proposition~\ref{thm:block_diag}, it suffices to consider the reduction to its HTL form of the following system
\begin{equation}\label{eq:nilpotent_leading}
\frac{dY}{dz}=\frac{1}{z^{r+1}}\left( A_0+A_1z+\cdots+A_r z^r+\cdots \right)Y
\end{equation}
where $A_0$ is a Jordan matrix with only one eigenvalue.
By means of a scalar gauge transformation, we can shift $A_0$ by a scalar matrix.
Thus, without loss of generality, we can assume that $A_0$ is nilpotent.
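Explicitly, if the unique eigenvalue of $A_0$ is $\lambda$, then (assuming $r\ge 1$) the scalar gauge transformation $Y=e^{-\lambda/(r z^{r})}W$ yields
\[
\frac{dW}{dz}
=\left( A(z)-\frac{\lambda}{z^{r+1}}I \right)W
=\frac{1}{z^{r+1}}\left( (A_0-\lambda I)+A_1z+\cdots \right)W,
\]
whose leading matrix $A_0-\lambda I$ is nilpotent.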
The shearing transformation is a gauge transformation by a diagonal matrix, which is typically of the form
\begin{equation}\label{eq:shear_mat_general}
\mathrm{diag}(1, z^s, \ldots, z^{(m-1)s})
\end{equation}
where $s$ is a positive rational number.
By applying a transformation of this kind (more than once in general) with a suitable value of $s$ to (\ref{eq:nilpotent_leading}),
we can make the leading matrix diagonalizable.
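To see why this works, recall that a gauge transformation $W=SY$ changes the coefficient matrix as
\[
A^{S}(z)=S A(z) S^{-1}+\frac{dS}{dz}S^{-1}.
\]
For $S=\mathrm{diag}(1,z^{s},\ldots,z^{(m-1)s})$ we have $\frac{dS}{dz}S^{-1}=\frac{s}{z}\,\mathrm{diag}(0,1,\ldots,m-1)$, while the conjugation multiplies the $(i,j)$ entry of $A(z)$ by $z^{(i-j)s}$. Hence the entries below the diagonal acquire higher powers of $z$ and those above the diagonal lower ones; for a suitable $s$ this mixes the nilpotent leading matrix with coefficients of higher-order terms and can produce a diagonalizable leading matrix.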
Here we illustrate with an example how the shearing transformation works.
For the general case, see for example \cite{BV, Huk, Wa}.
\begin{eg}
Consider the following system:
\begin{equation}\label{eq:eg_shear}
\frac{dY}{dx}=\left( \frac{A_0^{(-1)}}{x^2}+\frac{A_0^{(0)}}{x}+A_{\infty} \right)Y.
\end{equation}
Here
$A_0^{(-1)}$, $A_0^{(0)}$, and $A_{\infty}$ are given as follows:
\begin{align*}
A_0^{(-1)}&=t
\begin{pmatrix}
O_2 \\
I_2
\end{pmatrix}
\begin{pmatrix}
P & I_2
\end{pmatrix},\quad
A_0^{(0)}=
\begin{pmatrix}
QP & Q \\
I_2 & -PQ+\theta^0
\end{pmatrix},\quad
A_{\infty}=
\begin{pmatrix}
O & I_2 \\
O & O
\end{pmatrix}
\end{align*}
where the matrices $Q$, $P$, and $\Theta$ are given by
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2q_2-\theta^0-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
Hereafter, we often abbreviate a scalar matrix $k I$ to $k$.
This system has an irregular singular point at $x=\infty$.
By changing the independent variable as $z=1/x$,
we rewrite the system (\ref{eq:eg_shear}) as follows:
\begin{align*}
\frac{dY}{dz}&=-
\left(
\frac{A_\infty}{z^2}+\frac{A_0^{(0)}}{z}+A_0^{(-1)}
\right)Y\\
&=:A(z)Y.
\end{align*}
In this case, a shearing matrix can be chosen as follows (though its form differs from (\ref{eq:shear_mat_general})):
\begin{equation}\label{eq:shear_mat}
S=
\begin{pmatrix}
I_2 & O \\
O & z^{1/2} I_2
\end{pmatrix}.
\end{equation}
Then we have
\begin{align*}
A^S(z)=
\begin{pmatrix}
O & -I \\
-I & O
\end{pmatrix}
\frac{1}{z^{3/2}}+
\begin{pmatrix}
-QP & O \\
O & PQ-\theta^0-1/2
\end{pmatrix}
\frac{1}{z}+
\begin{pmatrix}
O & -Q \\
-tP & O
\end{pmatrix}
\frac{1}{z^{1/2}}+
\begin{pmatrix}
O & O \\
O & -tI
\end{pmatrix}.
\end{align*}
Note that the leading term has semisimple coefficients.
In fact, let $G$ be
\begin{equation*}
G=
\begin{pmatrix}
-I & I \\
I & I
\end{pmatrix},
\end{equation*}
then we have
\begin{equation*}
A^{SG}(z)=
\begin{pmatrix}
I & O \\
O & -I
\end{pmatrix}
\frac{1}{z^{3/2}}+
\begin{pmatrix}
\Theta/2-1/4 & \frac{QP+PQ}{2}-\frac{\theta^0}{2}-\frac14 \\
\frac{QP+PQ}{2}-\frac{\theta^0}{2}-\frac14 & \Theta/2-1/4
\end{pmatrix}
\frac{1}{z}+\cdots.
\end{equation*}
Further, applying the gauge transformation by $z^{-1/4}$, we can eliminate $\frac{-\frac{1}{4}I_4}{z}$.
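Here we use the fact that the scalar gauge transformation $Y=z^{c}W$ gives
\[
\frac{dW}{dz}=\left( A(z)-\frac{c}{z}I \right)W,
\]
so taking $c=-1/4$ cancels the term $-\frac{1}{4z}I_4$.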
Hence the HTL form is
\begin{equation*}
\begin{pmatrix}
I & O \\
O & -I
\end{pmatrix}
\frac{1}{z^{3/2}}+
\begin{pmatrix}
\Theta/2 & O \\
O & \Theta/2
\end{pmatrix}
\frac{1}{z}.
\end{equation*}
This can be seen as the direct sum of $\frac{I_2}{z^{3/2}}+\frac{\Theta/2}{z}$ (whose spectral type is (11)) and its ``copy'' made by the action $z^{1/2} \mapsto -z^{1/2}$.
We write this spectral type as $(11)_2$.
Together with the HTL form at $x=0$ (which is easily computed),
we write the spectral type of the system (\ref{eq:eg_shear}) as $(11)_2,(2)(2)$.
The Riemann scheme is given in Section~\ref{sec:(11)_2,(2)(2)}.
\qed
\end{eg}
\begin{rem}
The matrix~(\ref{eq:shear_mat}) is applicable to the other ramified systems in this paper.
\qed
\end{rem}
Here
we describe the structure of the HTL form (at a ramified irregular singularity) in a more general setting.
Let
\begin{equation}\label{eq:direct_summand}
T=\frac{T_{0}}{z^{\frac{b}{d}+1}}+\frac{T_{1}}{z^{\frac{b-1}{d}+1}}+\cdots+\frac{T_{b-1}}{z^{\frac{1}{d}+1}}+\frac{\Theta}{z}
\end{equation}
be the $d$-reduced (i.e. $0 \le \Re(\theta) < 1/d$ holds for any eigenvalue $\theta$ of $\Theta$) HTL form of $A(z) \in M_m(\mathbb{C}(\!(z)\!))$.
We set
\[
T_{\mathrm{irr}}=T-\frac{\Theta}{z}.
\]
Let $\Sigma \subset \mathbb{C}(\!(z^{1/d})\!)$ be the set of the eigenvalues of $T_{\mathrm{irr}} \in \mathrm{End}(V)$ where
$V=\mathbb{C}(\!(z^{1/d})\!) \otimes_{\mathbb{C}} \mathbb{C}^m$.
Clearly we have $V=\bigoplus_{p \in \Sigma} V(p)$
where $V(p)$ is the eigenspace of $T_{\mathrm{irr}}$ corresponding to $p \in \Sigma$.
Let $C_d$ be the (multiplicative) cyclic group: $C_d=\{ {\zeta_d}^j \,|\, j=0,\ldots, d-1\}$ where $\zeta_d=e^{2\pi i/d}$.
Note that $g \in C_d$ acts on $\mathbb{C}(\!(z^{1/d})\!)$ as
\[
f=\sum_j f_j (z^{1/d})^j \mapsto g \cdot f=\sum_j f_j g^{-j}(z^{1/d})^j.
\]
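For instance, when $d=2$ the nontrivial element $g=\zeta_2=-1$ fixes integer powers of $z$ and acts as
\[
z^{1/2} \mapsto -z^{1/2},\qquad
a\,z^{-1/2} \mapsto -a\,z^{-1/2}.
\]
This is precisely the action that produced the ``copy'' in the example of Section~\ref{sec:shearing}.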
Then the following holds.
\begin{prop}[\cite{BV}]
The set $\Sigma$ is stable under the action of $C_d$,
and there exists a representation $\rho : C_d \to \mathrm{GL}(V)$
such that
\[
\rho(g)\Theta=\Theta \rho(g),\quad
\rho(g)(V(p))=V(g\cdot p) \quad
(g \in C_d, \, p \in \Sigma).
\]
\end{prop}
We note that $\Sigma$ decomposes into disjoint subsets:
\begin{equation*}
\Sigma=\Sigma_{d_1} \sqcup \cdots \sqcup \Sigma_{d_r}
\end{equation*}
where
\begin{equation*}
p \in \Sigma_{d_j} \stackrel{\mathrm{def}}{\iff}
\min\{ k \in \mathbb{Z}_{\ge 1} \, | \, {\zeta_d}^k \cdot p=p \}=d_j.
\end{equation*}
We set
\[
V_{d_j}=\bigoplus_{p \in \Sigma_{d_j}} V(p).
\]
Then, as seen in the example above,
$T|_{V_{d_j}}$ can be represented by the RSP $S^j$ and its $d_j-1$ copies generated by the $C_d$-action on it.
We denote the collection of $S^j$ and its $d_j-1$ copies by $S^j_{d_j}$.
If $d_j$ equals 1, we simply write $S^j_{d_j}$ as $S^j$.
In this way, the HTL form (\ref{eq:direct_summand}) can be expressed as $S^1_{d_1} \ldots S^r_{d_r}$,
which we call the \textit{spectral type of the singular point}.
The tuple of the spectral types of all singular points is called the \textit{spectral type of the equation}.
\begin{eg}
We provide more examples.
The spectral types of
\begin{equation*}
{\small
\begin{matrix}
x=0 \ \left( \frac12 \right) \\
\overbrace{
\begin{array}{cc}
a & \alpha\\
a & \alpha \\
-a & \alpha \\
-a & \alpha
\end{array}}
\end{matrix},\quad
\begin{matrix}
x=0 \ \left( \frac12 \right) \\
\overbrace{
\begin{array}{cc}
a & \alpha\\
-a & \alpha \\
b & \beta \\
-b & \beta
\end{array}}
\end{matrix},\quad
\begin{matrix}
x=0 \ \left( \frac12 \right) \\
\overbrace{
\begin{array}{cc}
a & \alpha\\
-a & \alpha \\
0 & \beta \\
0 & \gamma
\end{array}}
\end{matrix},\quad
\begin{matrix}
x=0 \ \left( \frac13 \right) \\
\overbrace{
\begin{array}{cc}
a & \alpha \\
\omega a & \alpha\\
\omega^2 a & \alpha \\
0 & \beta
\end{array}}
\end{matrix},\quad
\begin{matrix}
x=0 \ \left( \frac14 \right) \\
\overbrace{
\begin{array}{cc}
a & \alpha \\
\sqrt{-1}a & \alpha \\
-a & \alpha \\
-\sqrt{-1}a & \alpha
\end{array}}
\end{matrix}
}
\end{equation*}
(where $\omega$ is a primitive cube root of unity)
are $(2)_2$, $(1)_2(1)_2$, $(1)_2 11$, $(1)_3 1$, and $(1)_4$ respectively.
\qed
\end{eg}
\subsection{Degeneration of HTL forms}
As was mentioned in Section~1,
linear differential equations admit two kinds of degeneration;
namely, confluence of singular points and degeneration of HTL forms.
In this subsection, we illustrate the degeneration of an HTL form
which is caused by the degeneration of a Jordan canonical form.
\begin{rem}
The degenerations of HTL forms treated in this paper
correspond to degenerations of Jordan canonical forms.
However this is not true in general, see \cite{K2, K3}.
\qed
\end{rem}
First, we give an example of the degeneration of a Jordan canonical form. The following matrix
\begin{equation}\label{eq:eg_mat}
\begin{pmatrix}
\eta & 1 \\
0 & 0
\end{pmatrix}
\end{equation}
is diagonalizable provided that $\eta \neq 0$, and
when $\eta=0$ the matrix is nilpotent.
In other words, the semisimple matrix (\ref{eq:eg_mat}) degenerates to a nilpotent matrix
when $\eta$ tends to 0.
A matrix $g$ that diagonalizes the matrix (\ref{eq:eg_mat}) is of the form
\begin{equation}\label{eq:diag_gauge}
g=
\begin{pmatrix}
g_1 & -g_2/\eta \\
0 & g_2
\end{pmatrix}
\quad (g_1, g_2 \in \mathbb{C}^\times),
\end{equation}
that is,
\[
\begin{pmatrix}
\eta & 1 \\
0 & 0
\end{pmatrix}=
g
\begin{pmatrix}
\eta & 0 \\
0 & 0
\end{pmatrix}
g^{-1}
\]
holds.
Although the freedom in $g_1$ and $g_2$ is redundant in this case,
we retain both for later use.
Now we consider the following system:
\begin{equation}
\frac{d Y}{d x}=
\left(
\frac{A_0^{(-1)}}{x^2}+\frac{A_0^{(0)}}{x}+A_\infty
\right)Y.
\end{equation}
Here
$A_0^{(-1)}$, $A_0^{(0)}$, and $A_\infty$ are given as follows:
\begin{align*}
A_0^{(-1)}&=
\begin{pmatrix}
I_2 \\
P
\end{pmatrix}
\begin{pmatrix}
t(1-P) & tI_2
\end{pmatrix},\quad
A_0^{(0)}=
\begin{pmatrix}
-\theta^\infty_1I_2 & -Q \\
-Z & -\Theta
\end{pmatrix},\quad
A_\infty=
\begin{pmatrix}
-I_2 & O \\
O & O
\end{pmatrix},\\
Z&=(QP+\theta^0+2\theta^{\infty}_1)P-(QP+\theta^0+\theta^{\infty}_1),\quad
\Theta=
\begin{pmatrix}
\theta^\infty_2 & \\
& \theta^\infty_3
\end{pmatrix}.
\end{align*}
Note that $Q$ and $P$ are assumed to satisfy the relation $[P, Q]=\theta^0+\theta^\infty_1+\Theta$.
The Riemann scheme of the system is given by
\[
\left(
\begin{array}{cc}
x=0 & x=\infty \\
\overbrace{\begin{array}{cc}
0 & 0 \\
0 & 0 \\
t & \theta^0 \\
t & \theta^0
\end{array}}
&
\overbrace{\begin{array}{cc}
1 & \theta^\infty_1 \\
1 & \theta^\infty_1 \\
0 & \theta^\infty_2 \\
0 & \theta^\infty_3
\end{array}}
\end{array}
\right)
\]
and thus its spectral type is $(2)(2),(2)(11)$~(\cite{KNS}).
By changing the independent variable as $x=\varepsilon \tilde{x}$, we have
\begin{equation}
\frac{d Y}{d \tilde{x}}=
\left(
\frac{\varepsilon^{-1}A_0^{(-1)}}{\tilde{x}^2}+\frac{A_0^{(0)}}{\tilde{x}}+\varepsilon A_{\infty}
\right)Y.
\end{equation}
In a similar manner to (\ref{eq:diag_gauge}), we put
\begin{equation*}
G=
\begin{pmatrix}
G_1 & \varepsilon^{-1}G_2 \\
O & G_2
\end{pmatrix}
\end{equation*}
where $G_1$, $G_2$ are $2\times 2$ matrices.
The arbitrariness in $G_1$ and $G_2$ will be used to eliminate negative powers of $\varepsilon$ and to
simplify the system that results from the limit $\varepsilon \to 0$.
Using $G$ we have
\begin{equation}\label{eq:oshima-form}
G(\varepsilon A_\infty)G^{-1}=
\begin{pmatrix}
-\varepsilon I & I \\
O & O
\end{pmatrix},
\end{equation}
which tends to a nilpotent matrix as $\varepsilon \to 0$.
\begin{rem}
The matrices (\ref{eq:eg_mat}) and (the right-hand side of) (\ref{eq:oshima-form}) are
normal forms of matrices introduced by Oshima \cite{Os2005}.
\end{rem}
By changing the dependent variable as $Y=G^{-1}\tilde{Y}$,
we have
\begin{equation}
\frac{d \tilde{Y}}{d \tilde{x}}=G
\left(
\frac{\varepsilon^{-1}A_0^{(-1)}}{\tilde{x}^2}+\frac{A_0^{(0)}}{\tilde{x}}+\varepsilon A_{\infty}
\right)G^{-1}\tilde{Y}.
\end{equation}
Here we introduce the following transformations:
\begin{equation}
\begin{array}{l}
\theta^\infty_1=\varepsilon^{-1},\ \theta^\infty_2=\tilde{\theta}^\infty_2-\varepsilon^{-1},\
\theta^\infty_3=\tilde{\theta}^\infty_3-\varepsilon^{-1},\ t=\varepsilon\tilde{t},\\
Q=\varepsilon \tilde{Q}-(\theta^0\varepsilon+1)\tilde{P}^{-1},\
P=\varepsilon^{-1}\tilde{P}.
\end{array}
\end{equation}
Moreover, we choose $G_1=-\varepsilon P$, $G_2=I$.
Then, by a direct calculation, we have
\begin{align*}
G(\varepsilon^{-1}A_0^{(-1)})G^{-1}&=
\tilde{t}
\begin{pmatrix}
O \\
I
\end{pmatrix}
\begin{pmatrix}
\tilde{P} & I
\end{pmatrix}
+O(\varepsilon), \\
GA_0^{(0)}G^{-1}&=
\begin{pmatrix}
\tilde{Q}\tilde{P} & \tilde{Q} \\
I & -\tilde{P}\tilde{Q}+\theta^0
\end{pmatrix}
+O(\varepsilon).
\end{align*}
Taking the limit $\varepsilon \to 0$, we obtain a system of linear differential equations
which has a ramified singularity at $x=\infty$.
In fact, this is the system (\ref{eq:eg_shear}), which we have treated in Section~\ref{sec:shearing}.
This is an explicit description of the degeneration of an HTL form.
In terms of spectral types, it is expressed as $(2)(11), (2)(2) \to (11)_2, (2)(2)$.
\section{Lax pairs of degenerate matrix Painlev\'e systems}\label{sec:Lax_pairs}
The sixth matrix Painlev\'e system is derived from the isomonodromic deformation of the following Fuchsian system:
\begin{equation}\label{eq:Fuchs_mp}
\frac{dY}{dx}=
\left(
\frac{A_0}{x}+\frac{A_1}{x-1}+\frac{A_t}{x-t}
\right)Y
\end{equation}
where $A_0$, $A_1$, and $A_t$ are $4 \times 4$ matrices satisfying the following conditions
\begin{equation}\label{eq:eigen_condition}
A_0 \sim
\begin{pmatrix}
O_2 & O_2 \\
O_2 & \theta^0 I_2
\end{pmatrix},\
A_1 \sim
\begin{pmatrix}
O_2 & O_2 \\
O_2 & \theta^1 I_2
\end{pmatrix},\
A_t \sim
\begin{pmatrix}
O_2 & O_2 \\
O_2 & \theta^t I_2
\end{pmatrix},
\end{equation}
and
\begin{equation}\label{residue_infty}
A_\infty:=-(A_0+A_1+A_t)=
\mathrm{diag}(\theta^{\infty}_1,\theta^{\infty}_1,\theta^{\infty}_2,\theta^{\infty}_3).
\end{equation}
Thus the spectral type of the Fuchsian system (\ref{eq:Fuchs_mp}) is $22,22,22,211$.
Taking the trace of (\ref{residue_infty}), we have the Fuchs relation
\[
2(\theta^0+\theta^1+\theta^t+\theta^\infty_1)
+\theta^\infty_2+\theta^\infty_3=0.
\]
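Indeed, the conditions (\ref{eq:eigen_condition}) give $\mathrm{tr}\,A_{\xi}=2\theta^{\xi}$ for $\xi=0,1,t$, whereas (\ref{residue_infty}) gives $\mathrm{tr}\,A_{\infty}=2\theta^{\infty}_1+\theta^{\infty}_2+\theta^{\infty}_3$, so that
\[
2\theta^{\infty}_1+\theta^{\infty}_2+\theta^{\infty}_3
=-\mathrm{tr}(A_0+A_1+A_t)
=-2(\theta^{0}+\theta^{1}+\theta^{t}).
\]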
The explicit parametrization of (\ref{eq:Fuchs_mp}) is as follows~(\cite{K, KNS}):
\begin{equation}
\begin{split}
A_{\xi}&=
(U \oplus I_2)^{-1}X^{-1}\hat{A}_{\xi}X(U \oplus I_2)
\quad(\xi=0,1,t),\\
\hat{A}_0&=
\begin{pmatrix}
I_2 \\
O
\end{pmatrix}
\begin{pmatrix}
\theta^0I_2 & \frac{1}{t}Q-I_2
\end{pmatrix},\quad
\hat{A}_1=
\begin{pmatrix}
I_2 \\
PQ-\Theta
\end{pmatrix}
\begin{pmatrix}
\theta^1I_2-PQ+\Theta & I_2
\end{pmatrix},\\
\hat{A}_t&=
\begin{pmatrix}
I_2 \\
tP
\end{pmatrix}
\begin{pmatrix}
\theta^tI_2+QP & -\frac{1}{t}Q
\end{pmatrix}, \quad U \in \mathrm{GL}(2),
\end{split}
\end{equation}
where the matrices $Q$, $P$, and $\Theta$ are given by
\begin{align}\label{eq:canonical_var}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2u\\
(p_2 q_2-\theta-\theta^\infty_1-\theta^\infty_2)/u & p_1/2
\end{pmatrix},
\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align}
Here we have set $\theta=\theta^0+\theta^1+\theta^t$.
Note that $P$ and $Q$ satisfy $[P, Q]=(\theta+\theta^\infty_1)I_2+\Theta$.
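This can be checked by a direct computation: setting $\kappa=\theta+\theta^{\infty}_1+\theta^{\infty}_2$, the matrices (\ref{eq:canonical_var}) satisfy
\[
[P,Q]=
\begin{pmatrix}
\kappa & 0 \\
0 & -\kappa
\end{pmatrix},
\]
and the Fuchs relation yields $-\kappa=\theta+\theta^{\infty}_1+\theta^{\infty}_3$, whence $[P,Q]=(\theta+\theta^{\infty}_1)I_2+\Theta$.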
The matrix $X$ is given by
$X=
\begin{pmatrix}
I_2 & O \\
Z & I_2
\end{pmatrix}$
where
\begin{align*}
Z&=(\theta^{\infty}_1-\Theta)^{-1}
[-\theta^1(QP+\theta+\theta^{\infty}_1)
+(QP+\theta+\theta^{\infty}_1)^2
-t(PQ+\theta^t)P].
\end{align*}
Note that the gauge parametrization is slightly different from that in \cite{KNS}, see also Appendix~\ref{sec:appendix_data} of the present paper.
As mentioned in Section~\ref{sec:intro},
the isomonodromic deformation equation of (\ref{eq:Fuchs_mp}) is equivalent to the compatibility condition of the following Lax pair:
\begin{equation}\label{eq:Lax_mp6}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
\frac{A_0}{x}+\frac{A_1}{x-1}+\frac{A_t}{x-t}
\right)Y,\\
\frac{\partial Y}{\partial t}&=-\frac{A_t}{x-t}Y.
\end{aligned}
\right.
\end{equation}
Moreover, the compatibility condition of the Lax pair (\ref{eq:Lax_mp6}) is equivalent to
{\small
\begin{align}
t(t-1)\frac{dQ}{dt}&=(Q-t)PQ(Q-1)+Q(Q-1)P(Q-t) \nonumber \\
&\quad+(\theta^0+1)Q(Q-1)+(\theta+2\theta^\infty_1-1)Q(Q-t)+\theta^t(Q-1)(Q-t), \label{eq:MP6-q}\\
t(t-1)\frac{dP}{dt}&=-(Q-1)P(Q-t)P-P(Q-t)PQ-PQ(Q-1)P \nonumber \\
&\quad-\left[(\theta^0+1)\{P(Q-1)+QP\}+(\theta+2\theta^\infty_1-1)\{P(Q-t)+QP\}+\theta^t\{P(Q-t)+(Q-1)P\}\right] \nonumber \\
&\quad-(\theta+\theta^\infty_1)(\theta^0+\theta^t+\theta^\infty_1), \label{eq:MP6-p}\\
t(t-1)\frac{dU}{dt}&=\left\{-\theta^1Q+(Q-t)(PQ+QP)+2(\theta+\theta^\infty_1)Q-\theta^t t\right\}U.
\end{align}
}
Equations (\ref{eq:MP6-q}) and (\ref{eq:MP6-p}) can be written in the following form:
\begin{align}
\frac{dq_i}{dt}&=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{VI}}}{\partial p_i},\quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{VI}}}{\partial q_i}\quad (i=1,2),\\
\frac{du}{dt}&=-2q_1(q_1-1)(q_1-t)p_2
+\{q_1(q_1-1)+q_1(q_1-t)+(q_1-1)(q_1-t)-q_2\}p_1\nonumber\\
&\quad+(2p_2q_2-\theta^1-\theta^t-2\theta^\infty_2)q_1
+(2p_2q_2-\theta^0-2\theta^1-\theta^t-2\theta^\infty_1-2\theta^\infty_2+1)(q_1-1)\nonumber\\
&\quad+(2p_2q_2+\theta^0+\theta^1+2\theta^t+2\theta^\infty_1-1)(q_1-t)
\end{align}
where the Hamiltonian is given by
\begin{multline}
t(t-1)H^{\mathrm{Mat}}_{\mathrm{VI}}\left({-\theta^0-\theta^t-\theta^\infty_1,-\theta^1,\theta^t \atop
\theta^0+1,\theta+\theta^\infty_1+\theta^\infty_2};t;
{q_1, p_1\atop q_2, p_2}\right)\\=
\mathrm{tr}\Big[Q(Q-1)(Q-t)P^2+
\{(\theta^0+1 -(\theta+\theta^\infty_1+\Theta))Q(Q-1)+\theta^t(Q-1)(Q-t)
\\
+(\theta+2\theta^\infty_1-1)Q(Q-t)\}P
+(\theta+\theta^\infty_1)(\theta^0+\theta^t+\theta^\infty_1)Q\Big].
\end{multline}
We call (\ref{eq:MP6-q}) and (\ref{eq:MP6-p}) the non-abelian description of the sixth matrix Painlev\'e system
(see Appendix~\ref{sec:non-abel}).
From~(\ref{eq:canonical_var}), we can write the symplectic form as $\mathrm{tr}(dP \wedge dQ)$.
In this section, we list the linear systems of ramified type which are degenerated from~(\ref{eq:Fuchs_mp})
together with their deformations (i.e. systems of $t$-direction) and associated Hamiltonians.
We list in advance the Hamiltonians of the matrix Painlev\'e systems:
{\allowdisplaybreaks
\begin{align}
&t(t-1)H^{\mathrm{Mat}}_{\mathrm{VI}}
\left({\alpha, \beta, \gamma \atop \delta, \zeta}; t; Q,P\right) \nonumber \\
&=\mathrm{tr}[Q(Q-1)(Q-t)P^2 \nonumber \\
&\quad+\{(\delta-\zeta K)Q(Q-1)-(2\alpha+\beta+\gamma+\delta)Q(Q-t)
+\gamma(Q-1)(Q-t)\}P \nonumber \\
&\quad+\alpha(\alpha+\beta)Q],\\
&tH^{\mathrm{Mat}}_{\mathrm{V}}
\left({\alpha, \beta \atop \gamma, \zeta}; t; Q,P\right)=
\mathrm{tr}[P(P+t)Q(Q-1)+\beta PQ+\gamma P-(\alpha+\gamma) tQ],\\
&H^{\mathrm{Mat}}_{\mathrm{IV}}(\alpha, \beta, \zeta;t;Q,P)=\mathrm{tr}[PQ(P-Q-t)+\beta P+\alpha Q],\\
&tH^{\mathrm{Mat}}_{\mathrm{III}(D_6)}(\alpha, \beta, \zeta;t;Q,P)=\mathrm{tr}[P^2Q^2-(Q^2-\beta Q-t)P-\alpha Q],\\
&tH^{\mathrm{Mat}}_{\mathrm{III}(D_7)}(\alpha, \zeta;t;Q,P)=\mathrm{tr}[P^2Q^2+\alpha PQ+tP+Q],\\
&tH^{\mathrm{Mat}}_{\mathrm{III}(D_8)}(\zeta;t;Q,P)=\mathrm{tr}[P^2Q^2+PQ-Q-tQ^{-1}],\\
&H^{\mathrm{Mat}}_{\mathrm{II}}(\alpha, \zeta;t;Q,P)=\mathrm{tr}[P^2-(Q^2+t)P-\alpha Q],\\
&H^{\mathrm{Mat}}_{\mathrm{I}}(\zeta;t;Q,P)=\mathrm{tr}[P^2-Q^3-tQ].
\end{align}
}
Here the parameter $\zeta$
is included through the relation $[P, Q]=\zeta K$, where $K=\mathrm{diag}(1,-1)$.
\subsection{Singularity pattern $\frac32+1+1$}
\subsubsection{Spectral type $(2)_2, 22, 211$}
The Riemann scheme is given by
\[
\left(
\begin{array}{ccc}
x=0 & x=1\ \left( \frac12 \right) & x=\infty \\
\begin{array}{c} 0 \\ 0 \\ \theta^0 \\ \theta^0 \end{array} &
\overbrace{\begin{array}{cc}
\sqrt{t} & 0\\
\sqrt{t} & 0\\
-\sqrt{t} & 0\\
-\sqrt{t} & 0
\end{array}}&
\begin{array}{c} \theta^{\infty}_1 \\ \theta^{\infty}_1 \\ \theta^{\infty}_2 \\ \theta^{\infty}_3 \end{array}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta^0+2\theta_1^\infty+\theta_2^\infty+\theta_3^\infty=0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(2)_2,22,211}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
\frac{A_0^{(0)}}{x}+\frac{A_1^{(1)}}{(x-1)^2}+\frac{A_1^{(0)}}{x-1}
\right)Y ,\\
\frac{\partial Y}{\partial t}&=\left(\frac{-\frac{1}{t}A_1^{(1)}}{x-1}\right)Y .
\end{aligned}
\right.
\end{equation}
Here
$A_0^{(0)}$, $A_1^{(1)}$, and $A_1^{(0)}$ are given as follows:
\begin{align*}
A_{\xi}^{(k)}&=
(U \oplus I_2)^{-1}\hat{A}_{\xi}^{(k)}(U \oplus I_2),\\
\hat{A}_1^{(1)}&=
G_1
\begin{pmatrix}
O & -tI \\
O & O
\end{pmatrix}
G_1^{-1}
=
\begin{pmatrix}
I_2 \\
-\frac{1}{t}Z
\end{pmatrix}
\begin{pmatrix}
-Z & -tI
\end{pmatrix},\\
\hat{A}_1^{(0)}&=
G_1
\begin{pmatrix}
PQ-\theta^\infty_1 & tP\\
I_2 & -PQ+\theta^\infty_1
\end{pmatrix}
G_1^{-1},\\
\hat{A}_0^{(0)}&=
\begin{pmatrix}
P \\
-\frac{1}{t}(ZP+QP+\theta^0)
\end{pmatrix}
\begin{pmatrix}
-Z-Q & -tI
\end{pmatrix},\\
Z&=(\theta^{\infty}_1-\Theta)^{-1}
( -QPQ-\theta^0 Q-t ),\quad
G_1=
\begin{pmatrix}
I_2 & O_2 \\
-\frac{1}{t}Z & I_2
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u \\
-q_2/u & q_1
\end{pmatrix},\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2 q_2-\theta^0-\theta^\infty_1-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(2)_2,22,211}) is equivalent to
{\small
\begin{align}
t\frac{dQ}{dt}&=2QPQ-Q^2-(2\theta^\infty_1-1)Q+t, \label{eq:(2)_2,22,211q}\\
t\frac{dP}{dt}&=-2PQP+PQ+QP+(2\theta^\infty_1-1)P+\theta^0, \label{eq:(2)_2,22,211p}\\
t\frac{dU}{dt}U^{-1}&=-2PQ+2\theta^\infty_1.
\end{align}
}
Equations (\ref{eq:(2)_2,22,211q}) and (\ref{eq:(2)_2,22,211p}) can be written in the following form:
\begin{align}
&\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}}{\partial p_i},\quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}}{\partial q_i}\quad (i=1,2), \\
&t\frac{1}{u}\frac{du}{dt}=-2q_1(q_1p_2+1)+2(p_1q_1+p_2q_2)-2(\theta^0+2\theta^\infty_1+\theta^\infty_2)+1
\end{align}
where the Hamiltonian is given by
\begin{align}
&tH^{\mathrm{Mat}}_{\mathrm{III}(D_6)}\left({\theta^0, -2\theta^\infty_1+1,
\theta^0+\theta^\infty_1+\theta^\infty_2};t;{q_1, p_1 \atop q_2, p_2}\right)\\
&=\mathrm{tr}\Big[ P^2Q^2-\{Q^2+(2\theta^\infty_1-1)Q-t\}P-\theta^0Q \Big].\nonumber
\end{align}
\subsubsection{Spectral type $(11)_2, 22, 22$}
The Riemann scheme is given by
\[
\left(
\begin{array}{ccc}
x=0 & x=1 & x=\infty \ \left( \frac12 \right)\\
\begin{array}{c} 0 \\ 0 \\ \theta^0 \\ \theta^0 \end{array} &
\begin{array}{c} 0 \\ 0 \\ \theta^1 \\ \theta^1 \end{array} &
\overbrace{\begin{array}{cc}
\sqrt{t} & \theta^\infty_2/2 \\
\sqrt{t} & \theta^\infty_3/2 \\
-\sqrt{t} & \theta^\infty_2/2 \\
-\sqrt{t} & \theta^\infty_3/2
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta^0 +2\theta^1+\theta_2^\infty+\theta_3^\infty =0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(11)_2,22,22}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(A_\infty+\frac{A_0}{x}+\frac{A_1}{x-1}
\right)Y, \\
\frac{\partial Y}{\partial t}&=(N x+B_1)Y ,
\end{aligned}
\right.
\end{equation}
where
\begin{align*}
A_\infty&=tN,\quad
N=
\begin{pmatrix}
O_2 & I_2 \\
O_2 & O_2
\end{pmatrix},\quad
A_0=
\begin{pmatrix}
O_2 \\
I_2
\end{pmatrix}
\begin{pmatrix}
I_2-P & \theta^0 I_2
\end{pmatrix},\quad
A_1=
\begin{pmatrix}
QP+\theta^1\\
P
\end{pmatrix}
\begin{pmatrix}
I_2 & -Q
\end{pmatrix}.
\end{align*}
Furthermore the matrix $B_1$ is given by
\begin{align*}
B_1&=\frac1t
\begin{pmatrix}
PQ-\theta^0 & O_2\\
I_2 & -QP-\theta^1
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u \\
-q_2/u & q_1
\end{pmatrix},\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2 q_2-\theta^0-\theta^1-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(11)_2,22,22}) is equivalent to
\begin{align}
t\frac{dQ}{dt}&=PQ^2+Q^2P-Q^2-(\theta^0-\theta^1)Q+t, \label{eq:(11)_2,22,22q}\\
t\frac{dP}{dt}&=-P^2Q-QP^2+PQ+QP+(\theta^0-\theta^1)P+\theta^1. \label{eq:(11)_2,22,22p}
\end{align}
At first sight these differ from the expressions in Appendix~\ref{sec:non-abel}.
However, using the relation $[P, Q]=\theta^0+\theta^1+\Theta$ and a gauge transformation
\[
Q=G^{-1}\tilde{Q}G, \quad P=G^{-1}\tilde{P}G, \quad G=\mathrm{diag}(t^{-\theta^\infty_2}, t^{-\theta^\infty_3}),
\]
we can see that (\ref{eq:(11)_2,22,22q}) and (\ref{eq:(11)_2,22,22p}) are essentially the same as (\ref{eq:non_abel_III(D6)}) in Appendix~\ref{sec:non-abel}.
Equations (\ref{eq:(11)_2,22,22q}), (\ref{eq:(11)_2,22,22p}) can be written in the following form:
\begin{align}
&\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}}{\partial p_i},\quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}}{\partial q_i}\quad (i=1,2),\\
&t\frac{1}{u}\frac{du}{dt}=2(p_1q_1+p_2q_2)-2q_1(q_1p_2+1)-\theta^0+\theta^1.
\end{align}
The Hamiltonian is given by
\begin{equation}
tH^{\mathrm{Mat}}_{\mathrm{III}(D_6)}\left({\theta^1,\theta^1-\theta^0,\theta^0+\theta^1+\theta^\infty_2};t;
{q_1, p_1 \atop q_2, p_2}\right)=
\mathrm{tr}\Big[ P^2Q^2-(Q^2+(\theta^0-\theta^1)Q-t)P-\theta^1Q \Big].
\end{equation}
\subsection{Singularity pattern $\frac52+1$}
\subsubsection{Spectral type $(((2)))_2, 211$}\label{sec:(((2)))_2,211}
The Riemann scheme is given by
\[
\left(
\begin{array}{cc}
x=0 \ \left( \frac12 \right) & x=\infty \\
\overbrace{\begin{array}{cccc}
1& 0 & -t/2 & 0 \\
1& 0 & -t/2 & 0 \\
-1& 0 & t/2 & 0 \\
-1& 0 & t/2 & 0
\end{array}}
& \begin{array}{c} \theta^{\infty}_1 \\ \theta^{\infty}_1 \\ \theta^{\infty}_2 \\ \theta^{\infty}_3 \end{array}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta_1^\infty+\theta_2^\infty+\theta_3^\infty=0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(((2)))_2,211}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
\frac{A_{2}}{x^3}+\frac{A_{1}}{x^2}+\frac{A_{0}}{x}
\right)Y ,\\
\frac{\partial Y}{\partial t}&=\frac{A_{2}}{x}Y .
\end{aligned}
\right.
\end{equation}
The matrices $A_0$, $A_1$, and $A_2$ are given as follows:
\begin{align*}
A_{k}&=
(U \oplus I_2)^{-1}\hat{A}_{k}(U \oplus I_2),\\
\hat{A}_{2}&=
G_0
\begin{pmatrix}
O & I_2 \\
O & O
\end{pmatrix}
G_0^{-1},\quad
\hat{A}_{1}=
G_0
\begin{pmatrix}
Q & -P \\
I & -Q
\end{pmatrix}
G_0^{-1},\quad
\hat{A}_{0}=-
\begin{pmatrix}
\theta^{\infty}_1I_2 & O \\
O & \Theta
\end{pmatrix},\\
G_0&=
\begin{pmatrix}
I & O \\
Z & I
\end{pmatrix},
\quad
Z=(\theta^{\infty}_1-\Theta)^{-1}(P-Q^2-t).
\end{align*}
Here, $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2q_2-\theta^\infty_1-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(((2)))_2,211}) is
\begin{align}
\frac{dQ}{dt}&=2P-Q^2-t, \quad
\frac{dP}{dt}=PQ+QP-2\theta^\infty_1+1, \label{eq:(((2)))_2,211} \\
\frac{dU}{dt}U^{-1}&=2Q.
\end{align}
Equations (\ref{eq:(((2)))_2,211}) are written in the following form:
\begin{align}
\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial p_i},\quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial q_i}\quad (i=1,2),\quad
\frac{1}{u}\frac{du}{dt}=-2(q_1+p_2),
\end{align}
where the Hamiltonian is given by
\begin{equation}
H^{\mathrm{Mat}}_{\mathrm{II}}\left(-2\theta^\infty_1+1, \theta^\infty_1+\theta^\infty_2 ;t;{q_1, p_1\atop q_2, p_2}\right)
=\mathrm{tr}\Big[ P^2-(Q^2+t)P+(2\theta^\infty_1-1)Q \Big].
\end{equation}
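As a consistency check, the right-hand sides of (\ref{eq:(((2)))_2,211}) are recovered from the trace gradients of $H^{\mathrm{Mat}}_{\mathrm{II}}$ (with the convention $\frac{\partial}{\partial X}\mathrm{tr}(XA)=A$):
\[
\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial P}=2P-Q^2-t,\qquad
-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial Q}=QP+PQ-2\theta^\infty_1+1 .
\]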
\subsubsection{Spectral type $(((11)))_2, 22$}\label{sec:(((11)))_2,22}
The Riemann scheme is given by
\[
\left(
\begin{array}{cc}
x=0 & x=\infty \ \left( \frac12 \right) \\
\begin{array}{c} 0 \\ 0 \\ \theta^0 \\ \theta^0\end{array}
&\overbrace{\begin{array}{cccc}
1 & 0 & -t/2 & \theta^\infty_2/2 \\
1 & 0 & -t/2 & \theta^\infty_3/2 \\
-1& 0 & t/2 & \theta^\infty_2/2 \\
-1& 0 & t/2 & \theta^\infty_3/2
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta^0 +\theta_2^\infty +\theta_3^\infty =0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(((11)))_2,22}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
A_0x+A_1+\frac{A_2}{x}
\right)Y ,\\
\frac{\partial Y}{\partial t}&=(-A_0x+B_1)Y ,
\end{aligned}
\right.
\end{equation}
where
\begin{align*}
A_0&=
\begin{pmatrix}
O_2 & I_2\\
O_2 & O_2
\end{pmatrix},\quad
A_1=
\begin{pmatrix}
O_2 & P-tI_2\\
I_2 & O_2
\end{pmatrix},\quad
A_2=
\begin{pmatrix}
-Q \\
I_2
\end{pmatrix}
\begin{pmatrix}
-P & -PQ+\theta^0I_2
\end{pmatrix},\\
B_1&=
\begin{pmatrix}
O_2 & -2P+tI_2\\
-I_2 & O_2
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u \\
-q_2/u & q_1
\end{pmatrix},\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u \\
(p_2q_2-\theta^0-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^\infty_2 & \\
& \theta^\infty_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(((11)))_2,22}) is equivalent to
\begin{align}
\frac{dQ}{dt}=2P-Q^2-t, \quad \frac{dP}{dt}=PQ+QP-\theta^0,
\end{align}
and these are written in the following form:
\begin{align}
\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial p_i}, \quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{II}}}{\partial q_i}, \quad
\frac{1}{u}\frac{du}{dt}=-2(q_1+p_2).
\end{align}
Here the Hamiltonian is given by
\begin{equation}
H^{\mathrm{Mat}}_{\mathrm{II}}\left(-\theta^0, \theta^0+\theta^\infty_2 ;t;{q_1, p_1\atop q_2, p_2}\right)
=\mathrm{tr}\Big[ P^2-(Q^2+t)P+\theta^0 Q \Big].
\end{equation}
\subsection{Singularity pattern $\frac32+2$}
\subsubsection{Spectral type $(2)_2, (2)(11)$}
The Riemann scheme is given by
\[
\left(
\begin{array}{cc}
x=0 \ \left( \frac12 \right)& x=\infty \\
\overbrace{\begin{array}{cc}
\sqrt{-t} & 0 \\
\sqrt{-t} & 0 \\
-\sqrt{-t} & 0 \\
-\sqrt{-t} & 0
\end{array}}
&
\overbrace{\begin{array}{cc}
0 & \theta^\infty_1 \\
0 & \theta^\infty_1 \\
-1 & \theta^\infty_2 \\
-1 & \theta^\infty_3
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta_1^\infty+\theta_2^\infty+\theta_3^\infty=0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(2)_2,(2)(11)}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
\frac{A_0}{x^2}+\frac{A_1}{x}+A_2
\right)Y ,\\
\frac{\partial Y}{\partial t}&=-\frac{1}{x}\left(\frac{A_0}{t}\right)Y,
\end{aligned}
\right.
\end{equation}
where
$A_0$, $A_1$, and $A_2$ are given as follows:
\begin{align*}
A_{\xi}&=
(U \oplus I_2)^{-1}\hat{A}_{\xi}(U \oplus I_2),\\
\hat{A}_0&=t
\begin{pmatrix}
I_2 \\
P
\end{pmatrix}
\begin{pmatrix}
-P & I_2
\end{pmatrix},\quad
\hat{A}_1=
\begin{pmatrix}
-\theta^\infty_1I_2 & -Q \\
-Z & -\Theta
\end{pmatrix},\quad
\hat{A}_2=
\begin{pmatrix}
O & O \\
O & I_2
\end{pmatrix},\\
Z&=(QP+2\theta^{\infty}_1)P+I.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2q_2-\theta^\infty_1-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(2)_2,(2)(11)}) is
\begin{align}
&t\frac{dQ}{dt}=2QPQ+2\theta^\infty_1Q+t, \quad
t\frac{dP}{dt}=-2PQP-2\theta^\infty_1P-1, \label{eq:(2)_2,(2)(11)qp}\\
&t\frac{dU}{dt}U^{-1}=2(QP+\theta^\infty_1).
\end{align}
Equations~(\ref{eq:(2)_2,(2)(11)qp}) are written in the following form:
\begin{align}
&\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}}{\partial p_i},\quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}}{\partial q_i}\quad (i=1,2),\\
&t\frac{1}{u}\frac{du}{dt}=2(p_1q_1+p_2q_2-p_2q_1^2-\theta^\infty_2).
\end{align}
The Hamiltonian is given by
\begin{align}
&tH^{\mathrm{Mat}}_{{\mathrm{III}}(D_7)}
\left({2\theta^\infty_1,\theta^\infty_1+\theta^\infty_2};t;
{q_1,p_1\atop q_2,p_2}\right)
=\mathrm{tr}\Big[ P^2Q^2+2\theta^\infty_1PQ+tP+Q \Big].
\end{align}
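The equivalence between the matrix system (\ref{eq:(2)_2,(2)(11)qp}), together with the equation for $u$, and the scalar Hamiltonian system can be confirmed by direct computation. The following SymPy sketch (a verification aid only, not part of the derivation; \texttt{th1}, \texttt{th2} stand for $\theta^\infty_1$, $\theta^\infty_2$) carries this out symbolically:

```python
import sympy as sp

t, u = sp.symbols('t u', nonzero=True)
q1, q2, p1, p2, th1, th2 = sp.symbols('q1 q2 p1 p2 th1 th2')

# parametrization of Q, P given above
Q = sp.Matrix([[q1, u], [-q2/u, q1]])
P = sp.Matrix([[p1/2, -p2*u], [(p2*q2 - th1 - th2)/u, p1/2]])

# t*H = tr[ P^2 Q^2 + 2 th1 P Q + t P + Q ]
tH = (P**2*Q**2 + 2*th1*P*Q + t*P + Q).trace()

# Hamiltonian flow of the scalar variables plus the stated equation for u:
# dq_i/dt = dH/dp_i, dp_i/dt = -dH/dq_i, t u'/u = 2(p1 q1 + p2 q2 - p2 q1^2 - th2)
flow = {
    q1: sp.diff(tH, p1)/t, q2: sp.diff(tH, p2)/t,
    p1: -sp.diff(tH, q1)/t, p2: -sp.diff(tH, q2)/t,
    u: 2*u*(p1*q1 + p2*q2 - p2*q1**2 - th2)/t,
}

def ddt(M):
    # entrywise time derivative induced by the scalar flow
    return M.applyfunc(lambda e: sum(sp.diff(e, v)*f for v, f in flow.items()))

# compatibility conditions: t Q' = 2QPQ + 2 th1 Q + t,  t P' = -2PQP - 2 th1 P - 1
I2 = sp.eye(2)
assert (t*ddt(Q) - (2*Q*P*Q + 2*th1*Q + t*I2)).applyfunc(sp.simplify) == sp.zeros(2, 2)
assert (t*ddt(P) - (-2*P*Q*P - 2*th1*P - I2)).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

Both differences simplify to the zero matrix, so the Hamiltonian flow of $H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}$ reproduces the matrix compatibility condition entrywise.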
\subsubsection{Spectral type $(11)_2, (2)(2)$}\label{sec:(11)_2,(2)(2)}
The Riemann scheme is given by
\[
\left(
\begin{array}{cc}
x=0 & x=\infty \ \left( \frac12 \right)\\
\overbrace{\begin{array}{cc}
0 & 0 \\
0 & 0 \\
t & \theta^0 \\
t & \theta^0
\end{array}}
&
\overbrace{\begin{array}{cc}
1 & \theta^\infty_2/2 \\
1 & \theta^\infty_3/2 \\
-1 & \theta^\infty_2/2 \\
-1 & \theta^\infty_3/2
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$2\theta^0+\theta_2^\infty+\theta_3^\infty=0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(11)_2,(2)(2)}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=\left( \frac{A_0}{x^2}+\frac{A_1}{x}+A_2 \right)Y, \\
\frac{\partial Y}{\partial t}&=\left(B_0+\frac{B_1}{x}\right)Y,
\end{aligned}
\right.
\end{equation}
where $A_0$, $A_1$, $A_2$, $B_0$, and $B_1$ are given as follows:
\begin{align*}
A_0&=t
\begin{pmatrix}
O_2 \\
I_2
\end{pmatrix}
\begin{pmatrix}
P & I_2
\end{pmatrix},\quad
A_1=
\begin{pmatrix}
QP & Q \\
I_2 & -PQ+\theta^0
\end{pmatrix},\quad
A_2=
\begin{pmatrix}
O & I \\
O & O
\end{pmatrix},\\
B_0&=
-\frac1t
\begin{pmatrix}
O & Q \\
O & O
\end{pmatrix},\quad
B_1=
-\begin{pmatrix}
O_2 \\
I_2
\end{pmatrix}
\begin{pmatrix}
P & I_2
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2q_2-\theta^0-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(11)_2,(2)(2)}) is
\begin{align}
t\frac{dQ}{dt}=2QPQ-\theta^0 Q+t,\quad t\frac{dP}{dt}=-2PQP+\theta^0 P-1,
\end{align}
and these are written in the following form:
\begin{align}
\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}}{\partial p_i}, \quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}}{\partial q_i}, \quad
t\frac{1}{u}\frac{du}{dt}=2p_1q_1+2p_2q_2-2p_2q_1^2-3\theta^0-2\theta^\infty_2.
\end{align}
Here the Hamiltonian is given by
\begin{equation}
tH^{\mathrm{Mat}}_{\mathrm{III}(D_7)}\left(-\theta^0, \theta^0+\theta^\infty_2 ;t;{q_1, p_1\atop q_2, p_2}\right)
=\mathrm{tr}\Big[ P^2Q^2-\theta^0 PQ+tP+Q \Big].
\end{equation}
\subsection{Singularity pattern $\frac72$}
\subsubsection{Spectral type $(((((11)))))_2$}
The Riemann scheme is given by
\[
\left(
\begin{array}{c}
x=\infty \ \left( \frac12 \right) \\
\overbrace{\begin{array}{cccccc}
1 & 0 & 0 & 0 & t/2 & \theta^\infty_2/2\\
1 & 0 & 0 & 0 & t/2 & \theta^\infty_3/2\\
-1 & 0 & 0 & 0 & -t/2 & \theta^\infty_2/2\\
-1 & 0 & 0 & 0 & -t/2 & \theta^\infty_3/2
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$\theta_2^\infty+\theta_3^\infty =0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax_(((((11)))))_2}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=
\left(
A_0x^2+A_1x+A_2
\right)Y ,\\
\frac{\partial Y}{\partial t}&=(A_0x+B_1)Y ,
\end{aligned}
\right.
\end{equation}
where
\begin{align*}
A_0&=
\begin{pmatrix}
O & I \\
O & O
\end{pmatrix},\quad
A_1=
\begin{pmatrix}
O & Q \\
I & O
\end{pmatrix},\quad
A_2=
\begin{pmatrix}
-P & Q^2+t \\
-Q & P
\end{pmatrix},\\
B_1&=
\begin{pmatrix}
O & 2Q \\
I & O
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u \\
-q_2/u & q_1
\end{pmatrix},\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u \\
(p_2q_2-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^\infty_2 & \\
& \theta^\infty_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax_(((((11)))))_2}) is
\begin{equation}
\frac{dQ}{dt}=2P, \quad \frac{dP}{dt}=3Q^2+t,
\end{equation}
and they are written in the following form:
\begin{align}
\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{I}}}{\partial p_i}, \quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{I}}}{\partial q_i}, \quad
\frac{1}{u}\frac{du}{dt}=-2p_2.
\end{align}
The Hamiltonian is given by
\begin{equation}
H^{\mathrm{Mat}}_{\mathrm{I}}\left({\theta^\infty_2};t;{q_1,p_1\atop q_2,p_2}\right)=\mathrm{tr}\Big[ P^2-Q^3-tQ \Big].
\end{equation}
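The equivalence of the compatibility condition with the Hamiltonian system for $H^{\mathrm{Mat}}_{\mathrm{I}}$ can be confirmed by direct computation. The following SymPy sketch (a verification aid only; \texttt{th2} stands for $\theta^\infty_2$) checks that the scalar flow, together with the stated equation for $u$, reproduces $dQ/dt=2P$ and $dP/dt=3Q^2+t$:

```python
import sympy as sp

t, u = sp.symbols('t u', nonzero=True)
q1, q2, p1, p2, th2 = sp.symbols('q1 q2 p1 p2 th2')

# parametrization of Q, P given above
Q = sp.Matrix([[q1, u], [-q2/u, q1]])
P = sp.Matrix([[p1/2, -p2*u], [(p2*q2 - th2)/u, p1/2]])

# H = tr[ P^2 - Q^3 - t Q ]
H = (P**2 - Q**3 - t*Q).trace()

flow = {
    q1: sp.diff(H, p1), q2: sp.diff(H, p2),
    p1: -sp.diff(H, q1), p2: -sp.diff(H, q2),
    u: -2*p2*u,  # (1/u) du/dt = -2 p2
}

def ddt(M):
    # entrywise time derivative induced by the scalar flow
    return M.applyfunc(lambda e: sum(sp.diff(e, v)*f for v, f in flow.items()))

# compatibility conditions: dQ/dt = 2P,  dP/dt = 3Q^2 + t
assert (ddt(Q) - 2*P).applyfunc(sp.simplify) == sp.zeros(2, 2)
assert (ddt(P) - (3*Q**2 + t*sp.eye(2))).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

In this case the trace Hamiltonian reduces to the explicit scalar expression $H=p_1^2/2-2p_2^2q_2+2\theta^\infty_2 p_2-2q_1^3+6q_1q_2-2tq_1$, which is a matrix analogue of the Painlev\'e I Hamiltonian.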
\subsection{Singularity pattern $\frac32+\frac32$}
\subsubsection{Spectral type $(2)_2, (11)_2$}
The Riemann scheme is given by
\[
\left(
\begin{array}{cc}
x=0 \ \left( \frac12 \right) & x=\infty \ \left( \frac12 \right)\\
\overbrace{\begin{array}{cc}
\sqrt{t} & 0 \\
\sqrt{t} & 0 \\
-\sqrt{t} & 0 \\
-\sqrt{t} & 0
\end{array}}
&
\overbrace{\begin{array}{cc}
1 & \theta^\infty_2/2 \\
1 & \theta^\infty_3/2 \\
-1 & \theta^\infty_2/2 \\
-1 & \theta^\infty_3/2
\end{array}}
\end{array}
\right) ,
\]
and the Fuchs-Hukuhara relation is written as
$\theta_2^\infty+\theta_3^\infty=0$.
The Lax pair is expressed as
\begin{equation}\label{eq:Lax(2)_2,(11)_2}
\left\{
\begin{aligned}
\frac{\partial Y}{\partial x}&=\left( \frac{A_0}{x^2}+\frac{A_1}{x}+A_2 \right)Y, \\
\frac{\partial Y}{\partial t}&=\left(B_0+\frac{B_1}{x}\right)Y,
\end{aligned}
\right.
\end{equation}
where $A_0$, $A_1$, $A_2$, $B_0$, and $B_1$ are given as follows:
\begin{align*}
A_0&=
\begin{pmatrix}
O_2 & O_2 \\
-tQ^{-1} & O_2
\end{pmatrix},\quad
A_1=
\begin{pmatrix}
QP & -Q \\
I_2 & -PQ-I_2
\end{pmatrix},\quad
A_2=
\begin{pmatrix}
O & I \\
O & O
\end{pmatrix},\\
B_0&=
\frac1t
\begin{pmatrix}
O & Q \\
O & O
\end{pmatrix},\quad
B_1=
\begin{pmatrix}
O_2 & O_2 \\
Q^{-1} & O_2
\end{pmatrix}.
\end{align*}
Here $Q$, $P$, and $\Theta$ are
\begin{align*}
Q=
\begin{pmatrix}
q_1 & u\\
-q_2/u & q_1
\end{pmatrix},
\quad
P=
\begin{pmatrix}
p_1/2 & -p_2 u\\
(p_2q_2-\theta^\infty_2)/u & p_1/2
\end{pmatrix},\quad
\Theta=
\begin{pmatrix}
\theta^{\infty}_2 & \\
& \theta^{\infty}_3
\end{pmatrix}.
\end{align*}
The compatibility condition of (\ref{eq:Lax(2)_2,(11)_2}) is
\begin{align}
t\frac{dQ}{dt}=2QPQ+Q,\quad t\frac{dP}{dt}=-2PQP-P+I-tQ^{-2},
\end{align}
and these are written in the following form:
\begin{align}
\frac{dq_i}{dt}=\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_8)}}{\partial p_i}, \quad
\frac{dp_i}{dt}=-\frac{\partial H^{\mathrm{Mat}}_{\mathrm{III}(D_8)}}{\partial q_i}, \quad
t\frac{1}{u}\frac{du}{dt}=2p_1q_1+2p_2q_2-2p_2q_1^2-2\theta^\infty_2+1.
\end{align}
Here the Hamiltonian is given by
\begin{equation}
tH^{\mathrm{Mat}}_{\mathrm{III}(D_8)}\left( \theta^\infty_2 ;t;{q_1, p_1\atop q_2, p_2}\right)
=\mathrm{tr}\Big[ P^2Q^2+PQ-Q-tQ^{-1} \Big].
\end{equation}
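The equivalence of the compatibility condition with the Hamiltonian system for $H^{\mathrm{Mat}}_{\mathrm{III}(D_8)}$, including the $-tQ^{-2}$ term, can likewise be confirmed by direct computation. The following SymPy sketch (a verification aid only; \texttt{th2} stands for $\theta^\infty_2$) does so:

```python
import sympy as sp

t, u = sp.symbols('t u', nonzero=True)
q1, q2, p1, p2, th2 = sp.symbols('q1 q2 p1 p2 th2')

# parametrization of Q, P given above; det Q = q1^2 + q2
Q = sp.Matrix([[q1, u], [-q2/u, q1]])
P = sp.Matrix([[p1/2, -p2*u], [(p2*q2 - th2)/u, p1/2]])
Qinv = Q.inv()

# t*H = tr[ P^2 Q^2 + P Q - Q - t Q^{-1} ]
tH = (P**2*Q**2 + P*Q - Q - t*Qinv).trace()

flow = {
    q1: sp.diff(tH, p1)/t, q2: sp.diff(tH, p2)/t,
    p1: -sp.diff(tH, q1)/t, p2: -sp.diff(tH, q2)/t,
    u: u*(2*p1*q1 + 2*p2*q2 - 2*p2*q1**2 - 2*th2 + 1)/t,
}

def ddt(M):
    # entrywise time derivative induced by the scalar flow
    return M.applyfunc(lambda e: sum(sp.diff(e, v)*f for v, f in flow.items()))

# compatibility conditions: t Q' = 2QPQ + Q,  t P' = -2PQP - P + I - t Q^{-2}
I2 = sp.eye(2)
assert (t*ddt(Q) - (2*Q*P*Q + Q)).applyfunc(sp.simplify) == sp.zeros(2, 2)
assert (t*ddt(P) - (-2*P*Q*P - P + I2 - t*Qinv**2)).applyfunc(sp.simplify) == sp.zeros(2, 2)
```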
\section{Laplace transform}\label{sec:Laplace}
In this section, we describe correspondences of linear systems through the Laplace transform.
So far, the configuration of the singular points of a linear system has not been important.
However, when we consider the Laplace transform, the singular point $x=\infty$ is distinguished from
the other singular points.
Hence, only in this section, we indicate the spectral type corresponding to $x=\infty$ with $\infty$ over the spectral type.
There are five Hamiltonians in the degeneration scheme of the matrix Painlev\'e systems in Section~\ref{sec:intro} which have more than one
associated linear system; that is, $H^{\mathrm{Mat}}_{\mathrm{V}}$, $H^{\mathrm{Mat}}_{\mathrm{IV}}$,
$H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}$, $H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}$,
and $H^{\mathrm{Mat}}_{\mathrm{II}}$.
The correspondences concerning $H^{\mathrm{Mat}}_{\mathrm{V}}$ and $H^{\mathrm{Mat}}_{\mathrm{IV}}$ are given in \cite{KNS}.
Here we see the correspondences concerning $H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}$, $H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}$,
and $H^{\mathrm{Mat}}_{\mathrm{II}}$.
In the case of linear systems of the following form:
\begin{equation}
\frac{d}{dx}Y=\left[ B\left(xI_l-T\right)^{-1}C+S\right] Y,
\end{equation}
where $B$ is an $m \times l$ matrix and $C$ is an $l \times m$ matrix,
the correspondence is very symmetric~(\cite{B2, Hrd}).
To see this, we rewrite this equation as
\begin{equation}
\begin{pmatrix}
\frac{d}{dx}-S & -B \\
-C & xI_l-T
\end{pmatrix}
\begin{pmatrix}
Y \\
Z
\end{pmatrix}=0.
\end{equation}
Applying the Laplace transform $(x, d/dx)\mapsto (-d/d\xi , \xi)$, we have
\begin{equation}
\begin{pmatrix}
\xi -S & -B \\
-C & -\frac{d}{d\xi}-T
\end{pmatrix}
\begin{pmatrix}
\hat{Y} \\
\hat{Z}
\end{pmatrix}=0.
\end{equation}
Here we have expressed the transformed dependent variables using $\hat{~}$.
If we regard this equation as the equation for $\hat{Z}$, it reads
\begin{equation}
\frac{d}{d\xi}\hat{Z}=-\left[ C\left(\xi I_m-S\right)^{-1}B+T\right]\hat{Z}.
\end{equation}
Through this procedure, we see the following correspondences of spectral types:
\begin{align*}
H^{\mathrm{Mat}}_{\mathrm{III}(D_6)}&:\ \stackrel{\infty}{(2)}_2, 22, 211 \leftrightarrow (2)(2), \stackrel{\infty}{(2)(11)}, \quad
\stackrel{\infty}{(11)}_2, 22, 22 \leftrightarrow (2)(11), \stackrel{\infty}{(2)(2)}, \\
H^{\mathrm{Mat}}_{\mathrm{III}(D_7)}&:\ \stackrel{\infty}{(2)}_2,(2)(11) \leftrightarrow (2)(2), \stackrel{\infty}{(11)}_2.
\end{align*}
The correspondences concerning $H^{\mathrm{Mat}}_{\mathrm{II}}$ are rather complicated.
As an example, let us look at the system $(((11)))_2, 22$ (Section~\ref{sec:(((11)))_2,22}):
\begin{equation}
\frac{d}{dx}Y=\left( \frac{BC}{x}+A_1+A_0 x \right) Y,
\end{equation}
where
\begin{equation*}
B=
\begin{pmatrix}
-Q \\
I
\end{pmatrix}, \quad
C=
\begin{pmatrix}
-P & -PQ+\theta^0
\end{pmatrix}.
\end{equation*}
Note that this is rewritten into the following form:
\begin{equation}
\begin{pmatrix}
\frac{d}{dx}-A_1-A_0 x & -B \\
-C & xI_2
\end{pmatrix}
\begin{pmatrix}
Y \\
Z
\end{pmatrix}=0.
\end{equation}
Applying the Laplace transform, we have
\begin{equation}\label{eq:Laplace(((11)))_2,22}
\left\{
\begin{aligned}
A_0 \hat{Y}'&=-(\xi-A_1)\hat{Y}+B\hat{Z}, \\
\hat{Z}'&=-C\hat{Y}.
\end{aligned}
\right.
\end{equation}
Now we partition the dependent variable $\hat{Y}$ as
\begin{equation*}
\hat{Y}=
\begin{pmatrix}
Y_1 \\
Y_2
\end{pmatrix}
\end{equation*}
where $Y_1$ and $Y_2$ are $2 \times 1$ matrices.
By using (\ref{eq:Laplace(((11)))_2,22}) we can eliminate $Y_1$.
Then the system satisfied by $\begin{pmatrix} Y_2 \\ \hat{Z} \end{pmatrix}$ is
\begin{align*}
\frac{d}{d\xi}
\begin{pmatrix} Y_2 \\ \hat{Z} \end{pmatrix}=
\left[
\xi^2
\begin{pmatrix}
-I & O \\
O & O
\end{pmatrix}
+\xi
\begin{pmatrix}
O & I \\
P & O
\end{pmatrix}
+
\begin{pmatrix}
P-t & -Q \\
PQ-\theta^0 & -P
\end{pmatrix}
\right]
\begin{pmatrix} Y_2 \\ \hat{Z} \end{pmatrix}.
\end{align*}
This system has spectral type $(((2)))(((11)))$.
As to the system $(((2)))_2, 211$ (Section \ref{sec:(((2)))_2,211}), we have to change the independent and dependent variables as
$x \to 1/x,\ Y \to x^{\theta^\infty_1} Y$, so that the Riemann scheme is of the form
\[
\left(
\begin{array}{cc}
x=0 & x=\infty \ \left( \frac12 \right) \\
\begin{array}{c} 0 \\ 0 \\ \theta^{\infty}_2-\theta^\infty_1 \\ \theta^{\infty}_3-\theta^\infty_1 \end{array} &
\overbrace{\begin{array}{cccc}
1& 0 & -t/2 & \theta^\infty_1 \\
1& 0 & -t/2 & \theta^\infty_1 \\
-1& 0 & t/2 & \theta^\infty_1 \\
-1& 0 & t/2 & \theta^\infty_1
\end{array}}
\end{array}
\right).
\]
Then, in the same way as above, we can see the correspondence $(((2)))_2, 211 \leftrightarrow (((2)))(((11)))$.
As a result, we obtain the following correspondences:
\begin{align*}
H^{\mathrm{Mat}}_{\mathrm{II}}:\
\stackrel{\infty}{(((11)))}_2, 22 \leftrightarrow (((2)))(((11))), \quad
\stackrel{\infty}{(((2)))}_2, 211 \leftrightarrow (((2)))(((11))).
\end{align*}
\section{Introduction}
Our aim of this paper is twofold. We first establish an iteration
theory of the Maslov-type index associated with a Lagrangian
subspace of $(\mathbb{R}^{2n},\omega_0)$ for symplectic paths
starting from identity. The Bott-type iteration formulas and some
abstract precise iteration formulas are obtained here. Then as the
application of this theory, we consider the brake orbit problem on a
fixed energy hypersurface of the autonomous Hamiltonian systems. The
multiplicity results are obtained in this paper.
\subsection {Main results for the brake orbit problem}
Let $V\in C^2({\bf R}^n, {\bf R})$ and $h>0$ such that ${\Omega}\equiv \{q\in
{\bf R}^n|V(q)<h\}$ is nonempty, bounded, open and connected. Consider
the following fixed energy problem of the second order autonomous
Hamiltonian system
\begin{eqnarray} && \ddot{q}(t)+V'(q(t))=0, \quad {\rm for}\;q(t)\in{\Omega}, \label{1.1}\\
&& \frac{1}{2}|\dot{q}(t)|^2+V(q(t))= h, \qquad\forall t\in{\bf R}, \label{1.2}\\
&& \dot{q}(0)=\dot{q}(\frac{\tau}{2})=0, \label{1.3}\\
&& q(\frac{\tau}{2}+t)=q(\frac{\tau}{2}-t),\qquad q(t+\tau)=q(t),
\quad \forall t\in{\bf R}. \label{1.4}
\end{eqnarray}
A solution $(\tau,q)$ of (\ref{1.1})-(\ref{1.4}) is called a {\it
brake orbit$\;$} in ${\Omega}$. We call two brake orbits $q_1$ and
$q_2:{\bf R}\to{\bf R}^n$ {\it geometrically distinct} if $q_1({\bf R})\neq
q_2({\bf R})$.
We denote by $\mathcal{O}({\Omega})$ and $\td{\mathcal{O}}({\Omega})$ the sets
of all brake orbits and geometrically distinct brake orbits in ${\Omega}$
respectively.
Let $J=\left(\begin{array}{cc}0&-I\\ I&0\end{array}\right)$ and
$N=\left(\begin{array}{cc}-I&0\\0&I\end{array}\right)$ with $I$
being the identity in ${\bf R}^n$. Suppose that $H\in
C^2({\bf R}^{2n}\setminus\{0\},{\bf R})\cap C^1({\bf R}^{2n},{\bf R})$ satisfying \begin{equation}
H(Nx)=H(x),\qquad \forall\, x\in{\bf R}^{2n}.\label{1.5}\end{equation}
We consider the following fixed energy problem
\begin{eqnarray}
\dot{x}(t) &=& JH'(x(t)), \label{1.6}\\
H(x(t)) &=& h, \label{1.7}\\
x(-t) &=& Nx(t), \label{1.8}\\
x(\tau+t) &=& x(t),\; \forall\,t\in{\bf R}. \label{1.9} \end{eqnarray}
A solution $(\tau,x)$ of (\ref{1.6})-(\ref{1.9}) is also called a
{\it brake orbit} on ${\Sigma}:=\{y\in{\bf R}^{2n}\,|\, H(y)=h\}$.
\noindent{\bf Remark 1.1.} It is well known that via \begin{equation} H(p,q) =
{1\over 2}|p|^2 + V(q), \label{1.10}\end{equation} $x=(p,q)$ and $p=\dot q$, the
elements in $\mathcal{O}(\{V<h\})$ and the solutions of
(\ref{1.6})-(\ref{1.9}) are in one-to-one correspondence.
In a more general setting, let ${\Sigma}$ be a $C^2$ compact hypersurface
in ${\bf R}^{2n}$ bounding a compact set $C$ with nonempty interior.
Suppose ${\Sigma}$ has non-vanishing Gaussian curvature and satisfies
the reversible condition $N({\Sigma}-x_0)={\Sigma}-x_0:=\{x-x_0|x\in {\Sigma}\}$
for some $x_0\in C$. Without loss of generality, we may assume
$x_0=0$. We denote the set of all such hypersurfaces in ${\bf R}^{2n}$ by
$\mathcal{H}_b(2n)$. For $x\in {\Sigma}$, let $N_{\Sigma}(x)$ be the unit
outward
normal vector at $x\in {\Sigma}$. Note that here by the reversible
condition there holds $N_{\Sigma}(Nx)=NN_{\Sigma}(x)$. We consider the
dynamics problem of finding $\tau>0$ and an absolutely continuous
curve $x:[0,\tau]\to {\bf R}^{2n}$ such that
\begin{eqnarray} \dot{x}(t)&=&JN_{\Sigma}(x(t)), \qquad x(t)\in {\Sigma},\label{1.11}\\
x(-t) &=& Nx(t), \qquad x(\tau+t) = x(t),\qquad {\rm
for\;\; all}\;\; t\in {\bf R}.\label{1.12}\end{eqnarray}
A solution $(\tau,x)$ of the problem (\ref{1.11})-(\ref{1.12}) is
a special closed characteristic on ${\Sigma}$; here we still call it a
brake orbit on ${\Sigma}$.
We also call two brake orbits $(\tau_1, x_1)$ and $(\tau_2,x_2)$
{\it geometrically distinct} if $x_1({\bf R})\ne x_2({\bf R})$, otherwise we
say they are equivalent. Any two equivalent brake orbits are
geometrically the same. We denote by ${\mathcal{J}}_b({\Sigma})$ the set
of all brake orbits on ${\Sigma}$, by $[(\tau,x)]$ the equivalent class
of $(\tau,x)\in {\mathcal{J}}_b({\Sigma})$ in this equivalent relation
and by $\td{\mathcal{J}}_b({\Sigma})$ the set of $[(\tau,x)]$ for all
$(\tau,x)\in {\mathcal{J}}_b({\Sigma})$. From now on, in the notation
$[(\tau,x)]$ we always assume $x$ has minimal period $\tau$. We also
denote by $\tilde {\mathcal {J}}({\Sigma})$ the set of all geometrically
distinct closed characteristics on ${\Sigma}$.
\noindent{\bf Remark 1.2.} Similar to the closed characteristic
case, $^{\#}\td{\mathcal{J}}_b({\Sigma})$ does not depend on the choice of
the Hamiltonian function $H$ satisfying (\ref{1.5}) and the
conditions that $H^{-1}({\lambda})={\Sigma}$ for some ${\lambda}\in{\bf R}$ and $H'(x)\neq
0$ for all $x\in {\Sigma}$.
Let $(\tau,x)$ be a solution of (\ref{1.6})-(\ref{1.9}). We consider
the boundary value problem of the linearized Hamiltonian system \begin{eqnarray}
&&\dot{y}(t) = JH''(x(t))y(t), \label{1.13}\\
&&y(t+\tau)=y(t), \quad y(-t)=Ny(t), \qquad \forall t\in{\bf R}.
\label{1.14} \end{eqnarray}
Denote by ${\gamma}_x(t)$ the fundamental solution of the system
(\ref{1.13}), i.e., ${\gamma}_x(t)$ is the solution of the following
problem
\begin{eqnarray}
\dot{{\gamma}_x}(t) &=& JH''(x(t)){\gamma}_x(t), \label{1.15}\\
{\gamma}_x(0) &=& I_{2n}. \label{1.16} \end{eqnarray}
We call ${\gamma}_x\in C([0,\tau/2],{\rm Sp}(2n))$ the {\it
associated symplectic path} of $(\tau, x)$.
The eigenvalues of ${\gamma}_x(\tau)$ are called {\it Floquet
multipliers} of $(\tau,x)$. By Proposition I.6.13 of Ekeland's book
\cite{Ek}, the Floquet multipliers of $(\tau,x)\in
\mathcal{J}_b({\Sigma})$ do not depend on the particular choice of the
Hamiltonian function $H$ satisfying conditions in Remark 1.2.
\noindent{\bf Definition 1.1.} {\it A brake orbit $(\tau,x)\in
{\mathcal{J}}_b({\Sigma})$ is called nondegenerate if 1 is its double
Floquet multiplier.}
Let $B^n_1(0)$ denote the open unit ball in ${\bf R}^n$ centered at the
origin $0$. In \cite{Se1} of 1948, H. Seifert proved
$\td{\mathcal{O}}({\Omega})\neq \emptyset$ provided $V'\neq 0$ on
$\partial {\Omega}$, $V$ is analytic and ${\Omega}$ is homeomorphic to
$B^n_1(0)$. Then he proposed his famous conjecture: {\it
$^{\#}\tilde{\mathcal{O}}({\Omega})\geq n$ under the same conditions}.
After 1948, many studies have been carried out for the brake orbit
problem. S. Bolotin first proved in \cite{Bol} of 1978 (see also \cite{BolZ})
the existence of brake orbits in the general setting.
K. Hayashi in \cite {Ha1}, H. Gluck and W. Ziller
in \cite{GZ1}, and V. Benci in \cite {Be1} in 1983-1984
proved $^{\#}\td{\mathcal{O}}({\Omega})\geq 1$ if $V$ is
$C^1$, $\bar{{\Omega}}=\{V\leq h\}$ is compact, and $V'(q)\neq 0$ for all
$q\in \partial{{\Omega}}$. In 1987, P. Rabinowitz in \cite{Ra1} proved
that if $H$ satisfies (\ref{1.5}), ${\Sigma}\equiv H^{-1}(h)$ is
star-shaped, and $x\cdot H'(x)\neq 0$ for all $x\in {\Sigma}$, then
$^{\#}\td{\mathcal{J}}_b({\Sigma})\geq 1$. In 1987, V. Benci and F.
Giannoni gave a different proof of the existence of one brake orbit
in \cite{BG}.
In 1989, A. Szulkin in \cite{Sz} proved that
$^{\#}\td{{\cal J}}_b(H^{-1}(h))\geq n$, if $H$ satisfies conditions in
\cite{Ra1} of Rabinowitz and the energy hypersurface $H^{-1}(h)$ is
$\sqrt{2}$-pinched. E. van Groesen in \cite{Gro} of 1985 and A.
Ambrosetti, V. Benci, Y. Long in \cite{ABL1} of 1993 also proved
$^{\#}\td{\mathcal{O}}({\Omega})\geq n$ under different pinching
conditions.
Note that the above mentioned results on the existence of multiple
brake orbits are based on certain pinching conditions. Without
pinching condition, in \cite{LZZ} Y. Long, C. Zhu and the second
author of this paper proved the following result:
{\it For $n\ge 2$, suppose $H$ satisfies
(H1) (smoothness) $H\in C^2({\bf R}^{2n}\setminus\{0\},{\bf R})\cap
C^1({\bf R}^{2n},{\bf R})$,
(H2) (reversibility) $H(Ny)=H(y)$ for all $y\in{\bf R}^{2n}$,
(H3) (convexity) $H''(y)$ is positive definite for all
$y\in{\bf R}^{2n}\setminus\{0\}$,
(H4) (symmetry) $H(-y)=H(y)$ for all $y\in{\bf R}^{2n}$.
\noindent Then for any given $h>\min \{ H(y)|\; y\in {\bf R}^{2n}\}$ and
${\Sigma}=H^{-1}(h)$, there holds } $$^{\#}\td{{\cal J}}_b({\Sigma})\ge 2.$$
As a consequence they also proved that: {\it For $n\geq 2$, suppose
$V(0)=0$, $V(q)\geq 0$, $V(-q)=V(q)$ and $V''(q)$ is positive
definite for all $q\in {\bf R}^n\setminus\{0\}$. Then for ${\Omega}\equiv
\{q\in{\bf R}^n|V(q)<h\}$ with $h>0$, there holds}
$$^{\#}\td{\mathcal{O}}({\Omega})\ge 2.$$
\noindent{\bf Definition 1.2.} {\it We denote
$$\begin{array}{l}\mathcal{H}_b^{c}(2n)=\{{\Sigma}\in \mathcal{H}_{b}(2n)\,|\,{\Sigma} \mbox{ is strictly convex}\},\\
\mathcal{H}_b^{s,c}(2n)=\{{\Sigma}\in \mathcal{H}_{b}^c(2n)\,|\,-{\Sigma}={\Sigma}\}.\end{array}$$}
\noindent{\bf Definition 1.3.} {\it For
${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$, a brake orbit $(\tau,x)$ on ${\Sigma}$
is called symmetric if
$x({\bf R})=-x({\bf R})$. Similarly, for a $C^2$ convex symmetric bounded
domain $\Omega\subset {\bf R}^n$, a brake orbit $(\tau,q)\in \mathcal
{O}(\Omega)$ is called symmetric if $q({\bf R})=-q({\bf R})$.}
Note that a brake orbit $(\tau,x)\in \mathcal {J}_b({\Sigma})$ with
minimal period $\tau$
is symmetric if
$x(t+\tau/2)=-x(t)$ for $t\in {\bf R}$; similarly, a brake orbit $(\tau,q)\in
\mathcal {O}(\Omega)$ with minimal period $\tau$ is symmetric if
$q(t+\tau/2)=-q(t)$ for $t\in{\bf R}$.
In this paper, we denote by ${\bf N}$, ${\bf Z}$, ${\bf Q}$ and ${\bf R}$ the sets of
positive integers, integers, rational numbers and real numbers
respectively. We denote by $\langle \cdot,\cdot\rangle$ the standard
inner product in ${\bf R}^n$ or ${\bf R}^{2n}$, by $(\cdot,\cdot)$ the inner
product of corresponding Hilbert space. For any $a\in {\bf R}$, we denote
$E(a)=\inf\{k\in {\bf Z}|k\ge a\}$ and $[a]=\sup\{k\in {\bf Z}|k\le a\}$.
The following are the main results for brake orbit problem of this
paper.
\noindent{\bf Theorem 1.1.} {\it For any
${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$, we have }
$$^{\#}\td{{\cal J}}_b({\Sigma})\ge \left[\frac{n}{2}\right]+1 .$$
\noindent{\bf Corollary 1.1.} {\it Suppose $V(0)=0$, $V(q)\geq 0$,
$V(-q)=V(q)$ and $V''(q)$ is positive definite for all $q\in
{\bf R}^n\setminus\{0\}$. Then for any given $h>0$ and ${\Omega}\equiv
\{q\in{\bf R}^n|V(q)<h\}$, we
have}
$$^{\#}\td{\mathcal{O}}({\Omega})\ge \left[\frac{n}{2}\right]+1.$$
\noindent{\bf Theorem 1.2.} {\it For any
${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$, suppose that all brake orbits on
${\Sigma}$ are nondegenerate. Then we have
$$^{\#}\td{{\cal J}}_b({\Sigma})\ge n+\mathfrak{A}({{\Sigma}}),$$
where $2\mathfrak{A}(\Sigma)$ is the number of geometrically
distinct asymmetric brake orbits on ${\Sigma}$.}
As a direct consequence of Theorem 1.2, for ${\Sigma}\in
\mathcal{H}_b^{s,c}(2n)$, if $^{\#}\td{{\cal J}}_b({\Sigma})=n$ and all brake
orbits on ${\Sigma}$ are nondegenerate, then all $[(\tau,x)]\in
\td{\mathcal {J}}_b({\Sigma})$ are symmetric. Moreover, we have the
following result.
\noindent{\bf Corollary 1.2.} {\it For ${\Sigma}\in
\mathcal{H}_b^{s,c}(2n)$, suppose $^{\#}\tilde {\mathcal
{J}}({\Sigma})=n$ and all closed characteristics on ${\Sigma}$ are
nondegenerate. Then all the $n$ closed characteristics are symmetric
brake orbits up to a suitable translation of time.}
\noindent{\bf Remark 1.3.} We note that $^{\#}\tilde {\mathcal
{J}}({\Sigma})=n$ implies $^{\#}\tilde {\mathcal {J}}_b({\Sigma})\le n$, and
Theorem 1.2 implies $^{\#}\tilde {\mathcal {J}}_b({\Sigma})\ge n$. So we
have $^{\#}\tilde {\mathcal {J}}_b({\Sigma})=n$. Thus Corollary 1.2
follows from Theorem 1.2. Motivated by Corollary 1.2, we tend to
believe that if ${\Sigma}\in\mathcal {H}_b^c$ and $^\#\tilde{\mathcal
{J}}({\Sigma}) <+\infty$, then all of them are brake orbits up to a
suitable translation of time. Furthermore, if ${\Sigma}\in\mathcal
{H}_b^{s,c}$ and $^\#\tilde{\mathcal {J}}({\Sigma})<+\infty$, then we
believe that all of them are symmetric brake orbits up to a suitable
translation of time.
\noindent{\bf Corollary 1.3.} {\it Under the same conditions of
Corollary 1.1 and the condition that all brake orbits in ${\Omega}$ are
nondegenerate, we have
$$^{\#}\td{\mathcal{O}}({\Omega})\ge n+\mathfrak{A}(\Omega),$$
where $2\mathfrak{A}(\Omega)$ is the number of geometrically
distinct asymmetric brake orbits in ${\Omega}$. Moreover, if the second
order system (\ref{1.1})-(\ref{1.2}) possesses exactly $n$
geometrically distinct periodic solutions in ${\Omega}$ and all periodic
solutions in ${\Omega}$ are nondegenerate, then all of them are symmetric
brake orbits. }
A typical example of ${\Sigma}\in \mathcal{H}_b^{s,c}(2n)$ is the
ellipsoid $\mathcal {E}_n(r)$ defined as follows. Let
$r=(r_1,\cdots,r_n)$ with $r_j>0$ for $1\le j\le n$. Define
$$\mathcal {E}_n(r)=\left\{x=(x_1,\cdots,x_n, y_1,\cdots,y_n)\in{\bf R}^{2n}\;
\left|\;\sum_{k=1}^n\frac{x_k^2+y_k^2}{r_k^2}=1\right.\right\}.$$
If $r_j/r_k\notin {\bf Q}$ whenever $j\ne k$, from \cite{Ek} one can see
that there are precisely $n$ geometrically distinct symmetric brake orbits on $\mathcal
{E}_n(r)$ and all of them are nondegenerate.
Since the appearance of \cite{HWZ},
Hofer, among others, has popularized in many talks the following
conjecture: {\it For $n\ge 2$, $^{\#}\tilde {\mathcal {J}}({\Sigma})$ is
either $n$ or $+\infty$ for any $C^2$ compact convex hypersurface
${\Sigma}$ in ${\bf R}^{2n}$. } Motivated by the above conjecture and the
Seifert conjecture, we tend to believe the following statement.
\noindent{\bf Conjecture 1.1.} {\it For any integer $n\ge 2$, there
holds
\begin{eqnarray} \left\{^\#\td{\mathcal{J}}_b({\Sigma})|{\Sigma}\in
\mathcal{H}_b^{c}(2n)\right\}=\{n, \;+\infty\}.\nonumber\end{eqnarray}}
For ${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$, Theorem 1.1 supports Conjecture
1.1 for the case $n=2$ and Theorem 1.2 supports Conjecture 1.1 for
the nondegenerate case. However, without the symmetry assumption of
${\Sigma}$, the estimate $^\#\td{\mathcal{J}}_b({\Sigma})\ge 2$ has not been
proved yet. It seems that there are no effective methods so far to
prove Conjecture 1.1 completely.
\subsection{Iteration formulas for Maslov-type index theory associated with a Lagrangian subspace}
We observe that the problem (\ref{1.6})-(\ref{1.9}) can be
transformed to the following problem \begin{eqnarray}
&&\dot{x}(t) = JH'(x(t)), \nonumber\\
&&H(x(t))= h, \nonumber\\
&&x(0)\in L_0,\;\; x(\tau/2)\in L_0, \nonumber
\end{eqnarray}
where $L_0=\{0\}\times {\bf R}^n\subset{\bf R}^{2n}$.
An index theory suitable for the study of this problem was
developed in \cite{Liu2} for any Lagrangian subspace $L$. In order
to prove Theorems 1.1-1.2, we need to establish an iteration theory
for this so called $L$-index theory.
We consider a linear Hamiltonian system
\begin{equation}\dot
x(t)=JB(t)x(t),\label{1.17}\end{equation}
with $B\in C([0,1],
\mathcal{L}_s({\bf R}^{2n}))$, where $\mathcal {L}({\bf R}^{2n})$ denotes the
set of $2n\times 2n$ real matrices and $\mathcal{L}_s({\bf R}^{2n})$
denotes its subset of symmetric ones. It is well known that the
fundamental solution $\gamma_B$ of (\ref{1.17}) is a symplectic path
starting from the identity $I_{2n}$ in the symplectic group
$${\rm Sp}(2n)=\{M\in \mathcal{L}({\bf R}^{2n})| M^TJM=J\}, $$ i.e., $\gamma_B\in \mathcal{P}(2n)$ with
$$\mathcal{P}_{\tau}(2n)=\{\gamma\in C([0,\tau],{\rm Sp}(2n))| \gamma(0)=I_{2n}\}, \;{\rm and}\;\mathcal{P}(2n)=\mathcal{P}_1(2n).$$
We denote the nondegenerate subset of $\mathcal{P}(2n)$ by
$$\mathcal{P}^*(2n)=\{\gamma\in \mathcal{P}(2n)| {\rm det}({\gamma}(1)-I_{2n})\ne 0\}.$$
In the study of periodic solutions of Hamiltonian systems, the
Maslov-type index pair $(i(\gamma),\nu(\gamma))$ of $\gamma$ was
introduced by C. Conley and E. Zehnder in \cite{CoZ} for
$\gamma\in\mathcal{ P}^*(2n)$ with $n\ge 2$, by Y. Long and E.
Zehnder in \cite{LZe} for $\gamma\in\mathcal {P}^*(2)$, by Long in
\cite{Long4} and C. Viterbo in \cite{V} for $\gamma\in\mathcal
{P}(2n)$. In \cite{Long0}, Long introduced the $\omega$-index which
is an index function $(i_{\omega}(\gamma),\nu_{\omega}(\gamma))\in
{\bf Z}\times \{0,1,\cdots,2n\}$ for $\omega\in {\bf U}:=\{z\in{\bf C}|\,|z|=1\}$.
In many problems related to nonlinear Hamiltonian systems, it is
necessary to study iterations of periodic solutions. In order to
distinguish two geometrically distinct periodic solutions, one way
is to study the Maslov-type indices of the iteration paths
of the fundamental solutions of the corresponding linearized
Hamiltonian systems. For ${\gamma}\in\mathcal{P}(2n)$, we define $\;\tilde \gamma(t)=\gamma(t-j)\gamma(1)^j$, $j\le t\le
j+1$, $j\in{\bf N}$, and the $k$-times iteration path of ${\gamma}$ by $\gamma^k=\td{\gamma}|_{[0,k]}$, $\forall\, k\in {\bf N}$.
In the paper \cite{Long0} of Long, the following result was proved
\begin{equation} i(\gamma^k)=\sum_{\omega^k=1}i_{\omega}(\gamma),
\;\;\nu(\gamma^k)=\sum_{\omega^k=1}\nu_{\omega}(\gamma). \label{1.18}\end{equation}
From this result, various iteration index formulas were obtained and
were
used to study the multiplicity and stability problems related
to the nonlinear Hamiltonian systems. We refer to the book of Long
\cite{Long1} and the references therein for these topics.
In \cite{LZZ}, Y. Long, C. Zhu and the second author of this paper
studied the multiple solutions of the brake orbit problem on a
convex hypersurface, there they introduced indices
$(\mu_1(\gamma),\nu_1({\gamma}))$ and $(\mu_2(\gamma),\nu_2({\gamma}))$ for
symplectic path $\gamma$. Recently, the first author of this
paper in \cite{Liu2} introduced an index theory associated with a
Lagrangian subspace for symplectic paths. For a symplectic path
$\gamma\in \mathcal{P}(2n)$ and a Lagrangian subspace $L$, the
$L$-index assigns to $\gamma$ a pair of integers
$(i_L(\gamma), \nu_L(\gamma))\in {\bf Z}\times \{0,1,\cdots, n\}$. This
index theory is suitable for studying the Lagrangian boundary value
problems ($L$-solution, for short) related to nonlinear Hamiltonian
systems. In \cite{Liu0} the first author of this paper applied this
index theory to study the $L$-solutions of some asymptotically
linear Hamiltonian systems. The indices $\mu_1(\gamma)$ and
$\mu_2(\gamma)$ are essentially special cases of the $L$-index
$i_L(\gamma)$ for Lagrangian subspaces $L_0=\{0\}\times {\bf R}^n$ and
$L_1={\bf R}^n\times \{0\}$ respectively up to a constant $n$.
In order to study the
brake orbit problem, it is necessary to study the iterations of
brake orbits. One way to do this is to study the
$L_0$-index of the iteration path $\gamma^k$ of the fundamental solution
$\gamma$ of the linear system (\ref{1.17}) for any $k\in {\bf N}$. In
this case, the $L_0$-iteration path $\gamma^k$ of $\gamma$ is
different from that of the general periodic case mentioned above.
Its definition is given in (\ref{4.3}) and (\ref{4.4}) below.
In 1956, Bott in \cite{Bott} established the famous iteration Morse
index formulas for closed geodesics on Riemannian manifolds. For
convex Hamiltonian systems, Ekeland developed similar Bott-type
iteration index formulas for the Ekeland index (cf. \cite{Ek}). In 1999,
Long in the paper \cite{Long0} established the Bott-type iteration
formulas (\ref{1.18}) for Maslov-type index. In this paper, we
establish the following Bott-type iteration formulas for the
$L_0$-index (see Theorem 4.1 below).
\newpage
\noindent{\bf Theorem 1.3.} {\it Suppose $\gamma\in\mathcal
{P}_{\tau}(2n)$, for the iteration symplectic paths $\gamma^k$
defined in (\ref{4.3})-(\ref{uvw}) below, when $k$ is odd, there
hold \begin{equation}
i_{L_0}(\gamma^{k})=i_{L_0}(\gamma^1)+\sum_{i=1}^\frac{k-1}{2}i_{\omega_{k}^{2i}}(\gamma^2),\;
\nu_{L_0}(\gamma^{k})=\nu_{L_0}(\gamma^1)+\sum_{i=1}^\frac{k-1}{2}\nu_{\omega_{k}^{2i}}(\gamma^2),
\label{1.19}\end{equation}
when $k$ is even, there hold
\begin{equation}
i_{L_0}(\gamma^{k})=i_{L_0}(\gamma^1)+i^{L_0}_{\sqrt{-1}}(\gamma^1)+\sum_{i=1}^{\frac{k}{2}-1}i_{\omega_{k}^{2i}}(\gamma^2),\;
\nu_{L_0}(\gamma^{k})=\nu_{L_0}(\gamma^1)+\nu^{L_0}_{\sqrt{-1}}(\gamma^1)+\sum_{i=1}^{\frac{k}{2}-1}\nu_{\omega_{k}^{2i}}(\gamma^2),
\label{1.20}\end{equation} where $\omega_k=e^{\pi\sqrt{-1}/k}$ and
$(i_{\omega}(\gamma),\;\nu_{\omega}(\gamma))$ is the $\omega$ index
pair of the symplectic path $\gamma$ introduced in \cite{Long0}, and
the index pair
$(i^{L_0}_{\sqrt{-1}}(\gamma^1),\nu^{L_0}_{\sqrt{-1}}(\gamma^1))$ is
defined in Section 3.}
\noindent{\bf Remark 1.4. $\;\;$(i).} Note that the types of
iteration formulas of Ekeland and (\ref{1.18}) of Long are the same
as that of Bott while the type of our Bott-type iteration formulas
in Theorem 1.3 is somewhat different from theirs. In fact, their
proofs depend on the fact that the natural decomposition of the
Sobolev space under the corresponding quadratic form is
orthogonal, while the natural decomposition in our case is no longer
orthogonal under the corresponding quadratic form. The index pair
$(i^{L_0}_{\sqrt{-1}}(\gamma^1),\nu^{L_0}_{\sqrt{-1}}(\gamma^1))$
established in this paper is an index theory associated with two
Lagrangian subspaces.
{\bf(ii).} In \cite{LZZ}, by using $\hat{\mu}_1(x)>1$ for any brake
orbit in convex Hamiltonian systems, and the dual variational method,
the authors proved the existence of two geometrically distinct brake
orbits on ${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$, where $\hat{\mu}_1(x)$
is the mean $\mu_1$-index of $x$ defined in \cite{LZZ}. Based on the
Bott-type iteration formulas in Theorem 1.3, we can deal with the
brake orbit problem more precisely to obtain the existence of more
geometrically distinct brake orbits on
${\Sigma}\in\mathcal{H}_b^{s,c}(2n)$.
From the Bott-type formulas in Theorem 1.3, we prove the abstract
precise iteration index formula of $i_{L_0}$ in Section 5 below.
\noindent{\bf Theorem 1.4.} {\it Let ${\gamma}\in
\mathcal{P}_{\tau}(2n)$, let ${\gamma}^k$ be defined by
(\ref{4.3})-(\ref{uvw}) below, and set $M={\gamma}^2(2\tau)$.
Then for every $k\in 2{\bf N}-1$, there holds
\begin{eqnarray}
i_{L_0}(\gamma^k)= i_{L_0}(\gamma^1)+\frac
{k-1}{2}(i(\gamma^2)+S^+_M(1)-C(M))
+\sum_{\theta\in(0,2\pi)}E\left(\frac{k\theta}{2\pi}\right)S_M^-(e^{\sqrt{-1}\theta})-C(M),\label{1.21}\end{eqnarray}
where $C(M)$ is defined by
$$C(M)=\displaystyle\sum_{\theta\in(0,2\pi)}S^-_M(e^{\sqrt{-1}\theta})$$
and
$$S^{\pm}_M(\omega)=\lim_{\varepsilon\to 0^+}i_{\omega\exp(\pm
\sqrt{-1}\varepsilon)}(\gamma^2)-i_{\omega}(\gamma^2)$$ is the
splitting number of the symplectic matrix $M$ at $\omega$ for
$\omega\in {\bf U}$. (cf. \cite{Long0}, \cite{Long1}).
For every $k\in 2{\bf N}$, there holds
\begin{eqnarray} i_{L_0}(\gamma^k)&=& i_{L_0}(\gamma^2)+\left(\frac
k2-1\right)\left(i(\gamma^2)+S^+_M(1)-C(M)\right)\nonumber\\&&-C(M)-\displaystyle\sum_{\theta\in(\pi,2\pi)}S^-_M(e^{\sqrt{-1}\theta})
+\sum_{\theta\in(0,2\pi)}E\left(\frac{k\theta}{2\pi}\right)S_M^-(e^{\sqrt{-1}\theta}).\label{1.23}\end{eqnarray}}
Using the iteration formulas in Theorems 1.3-1.4, we establish the
common index jump theorem of the $i_{L_0}$-index for a finite
collection of symplectic paths starting from identity with positive
mean $i_{L_0}$-indices. In the following of this paper, we write
$(i_{L_0}(\gamma,k),\nu_{L_0}(\gamma,k))=(i_{L_0}(\gamma^k),\nu_{L_0}(\gamma^k))$
for any symplectic path $\gamma\in \mathcal {P}_{
{\tau}}(2n)$ and $k\in {\bf N}$.
\noindent{\bf Theorem 1.5.} {\it Let ${\gamma}_j\in \mathcal {P}_{
{\tau_j}}(2n)$ for $j=1,\cdots,q$. Let
$M_j={\gamma}_j(2\tau_j)$ for $j=1,\cdots,q$. Suppose
\begin{equation} \hat{i}_{L_0}({\gamma}_j)>0, \quad
j=1,\cdots,q.\label{6.12}\end{equation}
Then there exist infinitely many $(R, m_1, m_2,\cdots,m_q)\in {\bf N}^{q+1}$ such that
(i) $\nu_{L_0}({\gamma}_j, 2m_j\pm 1)=\nu_{L_0}({\gamma}_j)$,
(ii) $i_{L_0}({\gamma}_j, 2m_j-1)+\nu_{L_0}({\gamma}_j,2m_j-1)=R-(i_{L_1}({\gamma}_j)+n+S_{M_j}^+(1)-\nu_{L_0}({\gamma}_j))$,
(iii) $i_{L_0}({\gamma}_j,2m_j+1)=R+i_{L_0}({\gamma}_j)$.}
\subsection{Sketch of the proofs of Theorems 1.1-1.2}
For the reader's convenience, we briefly sketch the proofs of Theorems
1.1 and 1.2.
Fix a hypersurface ${\Sigma}\in \mathcal{H}_b^{s,c}(2n)$ and suppose
$^\#\td{\mathcal{J}}_b({\Sigma})<+\infty$. We will carry out the proof of
Theorem 1.1 in Section 7 below in the following three steps.
\noindent{\it Step 1.} Using the Clarke dual variational method, as
in \cite{LZZ}, the brake orbit problem is transformed to a fixed
energy problem of Hamiltonian systems whose Hamiltonian function is
defined by $H_{\Sigma}(x)=j_{\Sigma}^2(x)$ for any $x\in {\bf R}^{2n}$ in terms of
the gauge function $j_{\Sigma}(x)$ of ${\Sigma}$. By results in \cite{LZZ},
brake orbits in $\mathcal{J}_b({\Sigma},2)$ (which is defined in Section
6 after (\ref{7.7})) correspond to critical points of
$\Phi_{\Sigma}=\Phi|_{M_{\Sigma}}$ where $M_{\Sigma}$ and $\Phi$ are defined by
(\ref{7.10}) and (\ref{7.11}) in Section 6 below. Then in Section 6
we obtain the injection map $\phi: {\bf N}+K\to \mathcal
{V}_{\infty,b}({\Sigma},2)\times {\bf N}$, where $K$ is a nonnegative integer
and the infinitely variationally visible subset
$\mathcal{V}_{\infty,b}({\Sigma},2)$ of $\td{\mathcal{J}}_b({\Sigma},2)$ is defined in Section
6, such that
(i) For any $k\in {\bf N}+K$, $[(\tau,x)]\in
\mathcal{V}_{\infty,b}({\Sigma},2)$ and $m\in {\bf N}$ satisfying
$\phi(k)=([(\tau,x)],m)$, there holds
\begin{equation} i_{L_0}(x^m)\le k-1\le i_{L_0}(x^m)+\nu_{L_0}(x^m)-1,\label{1.25}\end{equation}
where $x$ has minimal period $\tau$, and $x^m$ is the $m$-times
iteration of $x$ for $m\in {\bf N}$. We remind that we have written
$i_{L_0}(x)=i_{L_0}(\gamma_x)$ for a brake orbit $(\tau,x)$ with
associated symplectic path $\gamma_x$.
(ii) For any $k_j\in {\bf N}+K$, $k_1<k_2$, $(\tau_j,x_j)\in
\mathcal{J}_b({\Sigma},2)$ satisfying $\phi(k_j)=([(\tau_j,x_j)],m_j)$ with
$j=1,2$ and $[(\tau_1,x_1)]=[(\tau_2,x_2)]$, there holds
$$m_1<m_2.$$
\noindent{\it Step 2.} Any symmetric
$(\tau,x)\in\mathcal{J}_b({\Sigma},2) $ with minimal period $\tau$
satisfies
\begin{equation}
x(t+\frac{\tau}{2})=-x(t),\qquad \forall t\in {\bf R},\label{1.26}\end{equation} any
asymmetric $(\tau,x)\in \mathcal{J}_b({\Sigma},2)$ satisfies
\begin{equation} (i_{L_0}(x^m),\nu_{L_0}(x^m))=(i_{L_0}((-x)^m),\nu_{L_0}((-x)^m)),\quad
\forall m\in {\bf N}.\label{1.27}\end{equation}
Denote the numbers of symmetric and asymmetric
elements in $\td{\mathcal{J}}_b({\Sigma},2)$ by $p$ and $2q$, respectively.
We can
write
$$\td{\mathcal{J}}_b({\Sigma},2)=\{[(\tau_j,x_j)]|j=1,2,\cdots,p\}\cup \{[(\tau_k,x_k)],[(\tau_k,-x_k)]|k=p+1,p+2,\cdots,p+q\},$$
where $\tau_j$ is the minimal period of $x_j$ for
$j=1,2,\cdots,p+q$.
Applying Theorem 1.5 to the associated symplectic paths of
$$(\tau_1,x_1),(\tau_2,x_2),\cdots,(\tau_{p+q},x_{p+q}),(2\tau_{p+1},x^2_{{p+1}}),(2\tau_{p+2},x^2_{{p+2}}),\cdots,(2\tau_{p+q},x^2_{{p+q}})$$
we obtain an integer $R$ large enough and the iteration times
$m_1,m_2,\cdots,m_{p+q},m_{p+q},m_{p+q+1},\cdots,m_{p+2q}$ such that
the precise information on the $(\mu_1,\nu_1)$-indices of
$(\tau_j,x_j)$'s are given in (\ref{8.49})-(\ref{8.56}).
By the injection map $\phi$ and Step 2,
without loss of generality, we can further set
\begin{equation} \phi(R-s+1)=([(\tau_{k(s)},x_{k(s)})],m(s))\quad {\rm
for}\; s=1,2,\cdots,\left[\frac{n}{2}\right]+1,\label{1.28}\end{equation}
where $m(s)$ is the iteration time of $(\tau_{k(s)},x_{k(s)})$.
\noindent{\it Step 3.} Let \begin{equation} S_1=\left\{\left.s\in
\{1,2,\cdots,\left[\frac{n}{2}\right]+1\}\right|k(s)\le
p\right\},\quad
S_2=\left\{1,2,\cdots,\left[\frac{n}{2}\right]+1\right\}\setminus{S_1}.\label{1.29}\end{equation}
In Section 7 we will show that
\begin{equation} ^\#S_1\le p\quad {\rm and }\quad ^\#S_2\le 2q.\label{1.30}\end{equation}
In fact, (\ref{1.30}) implies Theorem 1.1.
To prove the first estimate in (\ref{1.30}),
in Section 7 below we prove the following result.
\noindent{\bf
Lemma 1.1.} {\it Let $(\tau ,x)\in \mathcal{J}_b({\Sigma},2)$ be
symmetric in the sense that $x(t+\frac{\tau}{2})=-x(t)$ for all
$t\in {\bf R}$ and ${\gamma}$ be the associated symplectic path of $(\tau,x)$.
Set $M={\gamma}(\frac{\tau}{2})$. Then there is a continuous symplectic
path
\begin{equation}
\Psi(s)=P(s) M P(s)^{-1}, \quad s\in [0,1]\label{8.1}\end{equation}
such that
\begin{equation} \Psi(0)=M,\qquad \Psi(1)=(-I_2)\diamond \td{M},\;\;\;\; \td{M}\in {\rm Sp}(2n-2),\label{8.2}\end{equation}
\begin{equation} \nu_1(\Psi(s))=\nu_1(M), \quad \nu_2(\Psi(s))=\nu_2(M),
\quad \forall \;s\in [0,1],\label{8.3}\end{equation}
where
$P(s)=\left(\begin{array}{cc}\psi(s)^{-1}&0\\0&\psi(s)^T\end{array}\right)$
and $\psi$ is a continuous $n\times n$ matrix path with
${\rm det}\psi(s)>0$ for all $s\in [0,1]$.}
In other words, the
symplectic path $\gamma|_{[0,\tau/2]}$ is $L_j$-homotopic to a
symplectic path $\gamma^*$ with $\gamma^*(\tau/2)=(-I_2)\diamond
\td{M}$ for $j=0,1$ (see Definition 2.6 below for the notion of
$L$-homotopy). This observation is essential in the proof of the
estimate
\begin{equation}|(i_{L_0}({\gamma})+\nu_{L_0}({\gamma}))-(i_{L_1}({\gamma})+\nu_{L_1}({\gamma}))|\le
n-1\label{abc}\end{equation}
in Lemma 7.1 for ${\gamma}$ being the associated symplectic path of the symmetric
$(\tau,x)\in \mathcal{J}_b({\Sigma},2)$ in the sense that $x(t+\frac{\tau}{2})=-x(t)$ for all
$t\in {\bf R}$.
We note that in the estimate of the
Maslov-type index $i(\gamma)$, the basic normal form theory usually
plays an important role, as in \cite{LZ}. For the
$i_{L}$-index theory, however, the index pairs
$(i_{L_0}({\gamma}),\nu_{L_0}({\gamma}))$ and $(i_{L_1}({\gamma}),\nu_{L_1}({\gamma}))$
are invariant only under the symplectic transformation
$P(s)$ defined in Lemma 1.1, so the basic normal form theory cannot be
applied directly.
\noindent{\bf
Lemma 1.2.} {\it Let $(\tau ,x)\in \mathcal{J}_b({\Sigma},2)$ be
symmetric in the sense that $x(t+\frac{\tau}{2})=-x(t)$ for all
$t\in {\bf R}$ and ${\gamma}$ be the associated symplectic path of $(\tau,x)$.
Then we have the estimate \begin{equation}
i_{L_1}({\gamma})+S_{\gamma(\tau)}^+(1)-\nu_{L_0}({\gamma})\ge
\frac{1-n}{2}.\label{abcd}\end{equation}}
{\bf Proof.} We set $\mathcal
{A}=i_{L_1}({\gamma})+S_{\gamma(\tau)}^+(1)-\nu_{L_0}({\gamma})$, and dually
$\mathcal {B}=i_{L_0}({\gamma})+S_{\gamma(\tau)}^+(1)-\nu_{L_1}({\gamma})$.
From (\ref{abc}), we have $|\mathcal{A}-\mathcal{B}|\le n-1$. It is
easy to see from Lemma 4.1 of \cite{LLZ} that
$\mathcal{A}+\mathcal{B}\ge 0$. So we have
$$\mathcal {A}\ge \frac{1-n}{2}.$$
\hfill\vrule height0.18cm width0.14cm $\,$
Combining the index estimate (\ref{abcd}) and Lemma
7.3 below, we show that $m(s)=2m_{k(s)}$ for any $s\in S_1$. Then by
the injectivity of $\phi$ we obtain an injection map from $S_1$ to
$\{[(\tau_j,x_j)]|1\le j\le p\}$ and hence $^\#S_1\le p$.
Note that $i({\gamma})=i_{{\omega}}({\gamma})$ for ${\omega}=1$, so one can estimate
$i({\gamma})+2S^+_{\gamma(\tau)}(1)-\nu(\gamma)$ as in Lemma 4.1 of
\cite{LLZ} and $\rho_n({\Sigma})$ as in \cite{LZ} by using the splitting
number theory. However, since the relation between the splitting number
theory and the $i_{L}$-index theory is not clear, we have to
estimate $\mathcal{A}$ indirectly by the above method.
To prove the second estimate of (\ref{1.30}),
using the precise index information in (\ref{8.49})-(\ref{8.56}) and
Lemmas 7.2-7.3 we can conclude that $m(s)$ is either $2m_{k(s)}$ or
$2m_{k(s)}-1$ for $s\in S_2$. Then by the injectivity of $\phi$ we
can define a map from $S_2$ to ${\Gamma}\equiv \{[(\tau_j,x_j)]|p+1\le
j\le p+q\}$ such that any element in ${\Gamma}$ is the image of at most
two elements in $S_2$. This yields that $^\#S_2\le 2q$.
We now briefly sketch the proof of Theorem 1.2.
Suppose $^\#\td{\mathcal{J}}_b({\Sigma})<+\infty$. We set \begin{equation}
\td{\mathcal{J}}_b({\Sigma},2)=\{[(\tau_j,x_j)]|j=1,2,\cdots,p\}\cup
\{[(\tau_k,x_k)],[(\tau_k,-x_k)]|k=p+1,p+2,\cdots,p+q\},\label{1.31}\end{equation}
where we have set $q=\mathfrak{A}({\Sigma})$, and $\tau_j$ is the minimal
period of $x_j$ for $j=1,2,\cdots,p+q$.
Set $r=p+q$. Applying Theorem 1.5 to the associated symplectic paths
of $(\tau_1,x_1),\cdots,(\tau_r,x_r)$, we obtain an integer $R$
large enough and the iteration times $m_1,\cdots,m_r$ such that the
$i_{L_0}$-indices of iterations of $(\tau_j,x_j)$'s are given in
(\ref{9.1})-(\ref{9.3}).
Similar to (\ref{1.28}) we can set \begin{equation}
\phi(R-s+1)=([(\tau_{k(s)},x_{k(s)})],m(s)) \quad{\rm for}\;
s=1,2,\cdots,n,\label{1.32}\end{equation} where $m(s)$ is the iteration time of
$(\tau_{k(s)},x_{k(s)})$. Then by Lemma 7.3,
(\ref{9.1})-(\ref{9.3}), and the fact that $x_j^m$ is nondegenerate for $1\le
j\le r$ and $m\in{\bf N}$, we prove that $m(s)=2m_{k(s)}$. Then by the
injectivity of $\phi$ we have
$$^\#\td{\mathcal{J}}_{b}({\Sigma})={}^\#\td{\mathcal{J}}_{b}({\Sigma},2)=p+2q=r+q\ge
n+q=n+\mathfrak{A}({\Sigma}).$$
This paper is organized as follows. In
Section 2, we briefly introduce the $L$-index theory associated with
Lagrangian subspace $L$ for symplectic paths and give upper bound
estimates for $|i_{L_0}-i_{L_1}|$ and
$|(i_{L_0}+\nu_{L_0})-(i_{L_1}+\nu_{L_1})|$. In Section 3, we
introduce an ${\omega}$-index theory for symplectic paths associated with
a Lagrangian subspace. Then in Section 4 we establish the Bott-type
iteration formulas of the Maslov-type indices $i_{L_0}$ and
$i_{L_1}$. Based on these Bott-type iteration formulas we prove
Theorems 1.4 and 1.5 in Section 5. In Section 6, we obtain the
injection map $\phi$ which is also basic in the proofs of Theorems
1.1 and 1.2. Based on these results in Sections 5 and 6, we prove
Theorem 1.1 in Section 7, and we finally prove Theorem 1.2 in
Section 8.
\setcounter{equation}{0}
\section {Maslov type $L$-index theory associated with a Lagrangian subspace for symplectic paths
}
In this section, we give a brief introduction to the Maslov type $L$-index theory.
We refer to the papers \cite{Liu2} and \cite{Liu0} for the
details.
Let $({\bf R}^{2n}, \omega_0)$ be the standard linear symplectic space with $\omega_0=\sum_{j=1}^n dx_j\wedge dy_j$.
A Lagrangian subspace $L$ of $({\bf R}^{2n}, \omega_0)$ is an $n$ dimensional subspace
satisfying $\omega_0|_L=0$. The set of all Lagrangian subspaces in $({\bf R}^{2n}, \omega_0)$ is denoted by $\Lambda(n)$.
For a symplectic path $\gamma\in \mathcal{P}(2n)$, we write it in the following
form
\begin{equation}\gamma(t)=\left(\begin{array}{cc}S(t)&V(t)\\ T(t)&U(t)\end{array}\right),\label{2.1}\end{equation}
where $S(t), T(t), V(t), U(t)$ are $n\times n$ matrices.
The $n$ vectors coming from the columns of the matrix $\left(\begin{array}{c}
V(t)\\ U(t)\end{array}\right)$ are linearly independent and they span a
Lagrangian subspace path of $({\bf R}^{2n}, \omega_0)$.
For ${L_{0}}=\{0\}\times {\bf R}^n\in \Lambda(n)$,
we define the following two subsets of ${\rm Sp}(2n)$ by
$${\rm Sp}(2n)_{L_{0}}^*=\{M\in{\rm Sp}(2n)|\, {\rm det} V\neq 0\},$$
$${\rm Sp}(2n)_{L_{0}}^0=\{M\in{\rm Sp}(2n)|\, {\rm det} V= 0\},$$
for $M=\left(\begin{array}{cc}S & V\\ T & U\end{array}\right)$.
The space ${\rm Sp}(2n)$ is path connected, and the set of $n\times n$
non-degenerate matrices has two path-connected components, consisting of matrices with positive and
negative determinants respectively.
We denote by
$${\rm Sp}(2n)_{L_{0}}^{\pm}=\{M\in{\rm Sp}(2n)|\, \pm{\rm det} V>0 \}, $$
$$\mathcal{P}(2n)_{L_{0}}^*=\{\gamma\in \mathcal{P}(2n)|\, \gamma(1)\in{\rm Sp}(2n)_{L_{0}}^*\},$$
$$ \mathcal{P}(2n)_{L_{0}}^0=\{\gamma\in \mathcal{P}(2n)|\, \gamma(1)\in{\rm Sp}(2n)_{L_{0}}^0\}. $$
\noindent{\bf Definition 2.1.}(\cite{Liu2}) {\it We define the ${L_{0}}$-nullity of any
symplectic path $\gamma\in \mathcal{P}(2n)$ by
\begin{equation}\nu_{L_{0}}(\gamma)=\dim\ker V(1)
\label{2.2}\end{equation}
with the $n\times n$ matrix function $V(t)$ defined in
(\ref{2.1}).}
We note that the complex matrix
$U(t)\pm\sqrt{-1}V(t)$ is invertible. We define a complex matrix function by
\begin{equation}\mathcal{Q}(t)=[U(t)-\sqrt{-1}V(t)] [U(t)+\sqrt{-1}V(t)]^{-1}. \label{2.3}\end{equation}
The matrix $\mathcal {Q}(t)$ is unitary for
any $t\in [0,1]$. We denote by
$$M_+= \left(\begin{array}{cc} 0 & I_n\\ -I_n & 0\end{array}\right), \;\; M_-=\left(\begin{array}{cc} 0 & J_n\\
-J_n & 0\end{array}\right), \;\;J_n={\rm diag}(-1,1,\cdots,1). $$
It is clear that $M_{\pm}\in{\rm Sp}(2n)_{L_{0}}^{\pm}$.
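As a quick sanity check of these definitions, one can verify numerically that $M_\pm$ are symplectic and that their upper-right $n\times n$ blocks have determinant $\pm 1$, so that indeed $M_\pm\in{\rm Sp}(2n)_{L_0}^\pm$. The following sketch (our own illustration for $n=2$, not part of the paper) does this check:

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
Jn = np.diag([-1.0] + [1.0] * (n - 1))          # J_n = diag(-1, 1, ..., 1)

M_plus = np.block([[Z, I], [-I, Z]])            # upper-right block V = I_n
M_minus = np.block([[Z, Jn], [-Jn, Z]])         # upper-right block V = J_n

# Standard symplectic form J on R^{2n}
J = np.block([[Z, I], [-I, Z]])

def is_symplectic(M):
    # M is symplectic iff M^T J M = J
    return np.allclose(M.T @ J @ M, J)

assert is_symplectic(M_plus) and is_symplectic(M_minus)
# det V = +1 for M_+ and det V = -1 for M_-, as required for Sp(2n)_{L_0}^{\pm}
print(np.linalg.det(M_plus[:n, n:]), np.linalg.det(M_minus[:n, n:]))  # → 1.0 -1.0
```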
For a path $\gamma\in \mathcal{P}(2n)_{L_{0}}^*$, we define a symplectic path by
\begin{equation}\tilde {\gamma}(t)=\left\{\begin{array}{lr} I\cos \frac{(1-2t)\pi}{2}+J\sin\frac{(1-2t)\pi}{2}, \;\;& t\in [0,1/2],\\
\gamma(2t-1),\; & t\in [1/2,1]\end{array}\right. \label{2.4}\end{equation}
and choose a symplectic path $\beta(t)$ in ${\rm Sp}(2n)_{L_{0}}^*$ starting from
$\gamma(1)$ and ending at $M_+$ or $M_-$ according to $\gamma(1)\in{\rm Sp}(2n)_{L_{0}}^+$
or $\gamma(1)\in{\rm Sp}(2n)_{L_{0}}^-$, respectively. We now define a joint path by
\begin{equation}\bar{\gamma}(t)=\beta*\tilde {\gamma}:=\left\{\begin{array}{lr} \tilde {\gamma}(2t), \;\;& t\in [0,1/2],\\
\beta(2t-1),\;\; & t\in [1/2,1].\end{array}\right. \label{2.5}\end{equation}
By the definition, we see that the symplectic path $\bar{\gamma}$
starts from $-M_+$ and ends at either $M_+$ or $M_-$.
As above, we define
\begin{equation}\bar {\mathcal{Q}}(t)=[\bar {U}(t)-\sqrt{-1}\bar {V}(t)] [\bar {U}(t)+\sqrt{-1}\bar {V}(t)]^{-1}
\label{2.6}\end{equation}
for $\bar {\gamma}(t)=\left(\begin{array}{cc} \bar {S}(t) & \bar {V}(t)\\\bar {T}(t) & \bar
{U}(t)\end{array}\right)$. We can choose a continuous function $\bar
{\Delta}(t)$ on $[0,1]$ such that
\begin{equation}{\rm det} \bar {\mathcal {Q}}(t)=e^{2\sqrt{-1}\bar{\Delta}(t)}. \label{ 2.7}\end{equation}
By the above arguments, we see that the number $\frac{1}{\pi}(\bar
{\Delta}(1)-\bar{\Delta}(0))\in {\bf Z}$ and it does not depend on
the choice of the function $\bar{\Delta}(t)$.
\noindent{\bf Definition 2.2.}(\cite{Liu2}) {\it For a symplectic
path $\gamma\in \mathcal{P}(2n)_{L_{0}}^*$, we define the ${L_{0}}$-index of
$\gamma$ by \begin{equation} i_{L_{0}}(\gamma)=\frac{1}{\pi}(\bar
{\Delta}(1)-\bar{\Delta}(0)). \label {2.8}\end{equation}}
\noindent{\bf Definition 2.3.}(\cite{Liu2}) {\it For a symplectic path $\gamma\in
\mathcal{P}(2n)_{L_{0}}^0$, we define the ${L_{0}}$-index of $\gamma$ by \begin{equation}
i_{L_{0}}(\gamma)=\inf\{i_{L_{0}}(\gamma^*)|\,\gamma^*\in
\mathcal{P}(2n)_{L_{0}}^*,\; \gamma^*\;{\rm is\;sufficiently\;close\;to}
\,\gamma \}. \label {2.9}\end{equation}}
$\quad$ In the general situation, let
$L\in \Lambda(n)$. It is well known that $\Lambda(n)=U(n)/O(n)$,
which means that for any linear subspace $L\in \Lambda(n)$,
there is an orthogonal symplectic matrix $P=\left(\begin{array}{cc} A & -B\\
B & A\end{array}\right)$ with $A\pm \sqrt {-1}B\in U(n)$ such that
$PL_0=L$. We define the conjugated symplectic path $\gamma_c\in
\mathcal{P}(2n)$ of $\gamma$ by $\gamma_c(t)=P^{-1}\gamma(t)P$.
\noindent{\bf Definition 2.4.}(\cite{Liu2}) {\it We define the
$L$-nullity of any
symplectic path $\gamma\in \mathcal{P}(2n)$ by
\begin{equation}\nu_L(\gamma)=\dim\ker V_c(1), \label{2.10}\end{equation}
the $n\times n$ matrix function $V_c(t)$ is defined in (\ref{2.1}) with the symplectic path
$\gamma$ replaced by $\gamma_c$, i.e.,
\begin{equation}\gamma_c(t)=\left(\begin{array}{cc} S_c(t) & V_c(t)\\ T_c(t) & U_c(t)\end{array}\right). \label {2.11}\end{equation}
}
\noindent{\bf Definition 2.5.}(\cite{Liu2}) {\it For a symplectic path $\gamma\in
\mathcal{P}(2n)$, we define the ${L}$-index of $\gamma$ by \begin{equation}
i_{L}(\gamma)=i_{L_0}(\gamma_c). \label {2.12}\end{equation} } We define a Hilbert
space $E^1=E^1_{L_0}=W^{1/2,2}_{L_0}([0,1],{\bf R}^{2n})$ with $L_0$
boundary conditions by \begin{eqnarray} E^1_{L_0}=\left\{x\in
L^2([0,1],{\bf R}^{2n})|
x(t)=\sum_{j\in{\bf Z}}{\rm exp}(j\pi tJ) \left(\begin{array}{c} 0\\
a_j\end{array}\right), a_j\in {\bf R}^n,
\;\|x\|^2:=\sum_{j\in{\bf Z}}(1+|j|)|a_j|^2<\infty\right\}.\nonumber\end{eqnarray}
For any Lagrangian subspace $L\in \Lambda(n)$, suppose $P\in
{\rm Sp}(2n)\cap O(2n)$ such that $L=PL_0$. Then we define
$E^1_L=PE^1_{L_0}$. We define two operators on $E^1_L$ by \begin{equation}
(Ax,y)=\int^1_0\<-J\dot x,y\>\,dt,\;\; (Bx,y)=\int^1_0\<B(t)
x,y\>\,dt,\;\;\forall\; x,\,y\in E^1_{L},\label{2.13}\end{equation} where
$(\cdot,\cdot)$ is the inner product in $E^1_L$ induced from
$E^1_{L_0}$.
By the Floquet theory we have
$$\nu_{L}(\gamma_B)=\dim\ker(A-B). $$
We denote by $E^{L_0}_m=\left\{z\in E^1_{L_0}\left|\,
z(t)=\displaystyle\sum_{k=-m}^m-J{\rm exp}(k\pi tJ)a_k\right.\right\}$
the finite dimensional truncation of $E^1_{L_0}$, and
$E^L_m=PE^{L_0}_m$.
Let $P_m:\,E^1_L\to E^L_m$ be the orthogonal projection for
$m\in{\bf N}$. Then $\Gamma=\{P_m|\;m\in{\bf N}\}$ is a Galerkin approximation
scheme with respect to $A$ defined in (\ref{2.13}), i.e., there
hold
$$P_m\to I\;{\rm strongly\;as}\;m\to \infty $$
and
$$P_mA=AP_m. $$
For $d>0$, we denote by $m^*_d(\cdot)$ for $*=+, 0, -$ the dimension
of the total eigenspace corresponding to the eigenvalues $\lambda$
belonging to $[d,+\infty), (-d,d)$ and $(-\infty, -d]$ respectively,
and denote by $m^*(\cdot)$ for $*=+,0,-$ the dimension of the
total eigenspace corresponding to the eigenvalues $\lambda$
belonging to $(0,+\infty), \{0\}$ and $(-\infty, 0)$ respectively.
For any self-adjoint operator $T$, we denote $T^{\sharp}=(T|_{Im
T})^{-1}$ and $P_mTP_m=(P_mTP_m)|_{E^L_m}$.
If $\gamma_B\in \mathcal {P}(2n)$ is the fundamental solution of the
system (\ref{1.17}), we write $i_L(B)=i_L(\gamma_B)$ and
$\nu_L(B)=\nu_L(\gamma_B)$.
The following Galerkin approximation result will be used in this
paper.
\noindent{\bf Proposition 2.1.} (Theorem 2.1 of \cite{Liu0}) {\it
For any $B\in C([0,1], \mathcal{L}_s({\bf R}^{2n}))$ with the $L$-index
pair $(i_L(B),\nu_L(B))$ and any constant $0<d\le \frac
14\|(A-B)^{\sharp}\|^{-1}$, there exists $m_0>0$ such that for $m\ge
m_0$, we have
\begin{eqnarray} && m^+_d(P_m(A-B)P_m)=mn-i_L(B)-\nu_L(B),\nonumber\\
&& m^-_d(P_m(A-B)P_m)=mn+i_L(B)+n,\label{2.14}\\&&
m^0_d(P_m(A-B)P_m)=\nu_L(B).\nonumber\end{eqnarray}
}
$\quad$ The Galerkin approximation formula for the Maslov-type
index theory associated with periodic boundary conditions was proved in
\cite{FQ} by Fei and Qiu.
\noindent{\bf Remark 2.1.} Note that $mn=m^-_d(P_mAP_m)$, so we
have
$m^-_d(P_m(A-B)P_m)-mn=I(A,A-B)$, where
$I(A,A-B)$ is defined in
Definition 3.1 below. So we have
\begin{equation} I(A,A-B)=i_L(B)+n.\label{c1}\end{equation}
\noindent{\bf Definition 2.6.} (\cite{Liu2}) {\it For two paths
$\gamma_0,\;\gamma_1\in
\mathcal{P}(2n)$, we say that they are $L$-homotopic and denoted by
$\gamma_0\sim_L\gamma_1$, if there is a map $\delta:[0,1]\to
\mathcal{P}(2n)$ such that $\delta(j)=\gamma_j$ for $j=0,1$, and
$\nu_L(\delta(s))$ is constant for $s\in [0,1]$.
}
For any two $2k_i\times 2k_i$ matrices
of square block form, $M_i=\left(\begin{array}{cc} A_i & B_i\\
C_i & D_i \end{array}\right)$ with $i=1, 2$,
the $\diamond$-product of
$M_1$ and $M_2$ is defined to be the $2(k_1+k_2)\times 2(k_1+k_2)$
matrix
$$ M_1\diamond M_2=\left(\begin{array}{cccc} A_1 & 0 & B_1 & 0\\
0 & A_2 & 0 & B_2\\
C_1 & 0 & D_1 & 0\\
0 & C_2 & 0 & D_2 \end{array}\right). $$
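The $\diamond$-product interleaves the blocks rather than taking a plain direct sum, precisely so that the result is again symplectic. The following sketch (with arbitrarily chosen matrices; the helper name `diamond` is ours) implements the displayed block formula and checks that the $\diamond$-product of two matrices in ${\rm Sp}(2)$ lies in ${\rm Sp}(4)$:

```python
import numpy as np

def diamond(M1, M2):
    """Diamond product of a 2k1 x 2k1 and a 2k2 x 2k2 block matrix."""
    k1, k2 = M1.shape[0] // 2, M2.shape[0] // 2
    A1, B1, C1, D1 = M1[:k1, :k1], M1[:k1, k1:], M1[k1:, :k1], M1[k1:, k1:]
    A2, B2, C2, D2 = M2[:k2, :k2], M2[:k2, k2:], M2[k2:, :k2], M2[k2:, k2:]
    Z = lambda r, c: np.zeros((r, c))
    return np.block([
        [A1, Z(k1, k2), B1, Z(k1, k2)],
        [Z(k2, k1), A2, Z(k2, k1), B2],
        [C1, Z(k1, k2), D1, Z(k1, k2)],
        [Z(k2, k1), C2, Z(k2, k1), D2],
    ])

# Two matrices in Sp(2) (2x2 real matrices with determinant 1)
M1 = np.array([[1.0, 1.0], [0.0, 1.0]])
M2 = np.array([[2.0, 0.0], [0.0, 0.5]])

N = diamond(M1, M2)
J4 = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
assert np.allclose(N.T @ J4 @ N, J4)   # M1 diamond M2 lies in Sp(4)
```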
\noindent{\bf Theorem 2.1.}(\cite{Liu2}) {\it If $\gamma_0\sim_L\gamma_1$, there hold
$$i_L(\gamma_0)=i_L(\gamma_1),\;\nu_L(\gamma_0)=\nu_L(\gamma_1).$$
}
\noindent{\bf Theorem 2.2.}(\cite{Liu2}) {\it If $\gamma=\gamma_1\diamond \gamma_2\in
\mathcal{P}(2n)$, and correspondingly $L=L'\oplus L''$, then
$$i_L(\gamma)=i_{L'}(\gamma_1)+i_{L''}(\gamma_2),\;\nu_L(\gamma)=\nu_{L'}(\gamma_1)+\nu_{L''}(\gamma_2).$$
}
\noindent{\bf Theorem 2.3.} {\it Let $L_0=\{0\}\times {\bf R}^n$ and
$L_1={\bf R}^n\times \{0\}$. Then for any $\gamma\in \mathcal{P}(2n)$, \begin{equation}
|i_{L_0}(\gamma)-i_{L_1}(\gamma)|\le
n,\;|i_{L_0}(\gamma)+\nu_{L_0}(\gamma)-i_{L_1}(\gamma)-\nu_{L_1}(\gamma)|\le
n.\label{2.14''}\end{equation} Moreover, the left hand sides of the above two
inequalities depend only on the end matrix $\gamma(1)$, in
particular, if $\gamma(1)\in O(2n)\cap Sp(2n)$, there holds \begin{equation}
i_{L_0}(\gamma)=i_{L_1}(\gamma).\label{2.14'}\end{equation} }
\noindent {\bf
Proof.} We only need to prove the first inequality in
(\ref{2.14''})
\begin{equation}|i_{L_0}(\gamma)-i_{L_1}(\gamma)|\le n.\label{2.15}\end{equation}
For the
second inequality in (\ref{2.14''}), we can choose a symplectic path
$\gamma_1$ such that
$$i_{L_0}(\gamma)+\nu_{L_0}(\gamma)=i_{L_0}(\gamma_1),\;i_{L_1}(\gamma)+\nu_{L_1}(\gamma)=i_{L_1}(\gamma_1).$$
Then by (\ref{2.15}) we have
\begin{eqnarray} |i_{L_0}(\gamma_1)-i_{L_1}(\gamma_1)|\le n\nonumber\end{eqnarray}
which yields the second inequality of (\ref{2.14''}).
Note that (\ref{2.15}) holds from Theorem 3.3 of \cite{LZZ} and
Proposition 5.1 below. Here we give another proof directly from the
definitions of $i_{L_0}$ and $i_{L_1}$.
We write $\bar\gamma(t)$ in (\ref{2.5}) in its polar decomposition
form $\bar\gamma(t)=\bar O(t)\bar P(t)$, $\bar O(t)\in O(2n)\cap
Sp(2n)$, and $\bar P(t)$ is a positive definite matrix function. By
(4.1) of \cite{Liu2} we have
$$\bar \Delta(t)=\bar \Delta_{\bar O}(t)+\bar \Delta_{\bar P}(t).$$
Since $\bar P(0)=\bar P(1)=I_{2n}$ and the set of positive definite
symplectic matrices is contractible, we have
$$\bar \Delta_{\bar P}(1)-\bar \Delta_{\bar P}(0)=0,$$
so $$\bar \Delta(1)-\bar \Delta(0)=\bar \Delta_{\bar O}(1)-\bar
\Delta_{\bar O}(0).$$ On the other hand,
$\gamma_c(t)=J^{-1}\gamma(t)J=O(t)(J^{-1}P(t)J)$. We also write
$\bar \gamma_c=\bar O_c \bar P_c$. So by the definitions of $\bar
{\gamma}_c$ and $\bar {\gamma}$ we have $\bar O_c(t)=\bar O(t)$ for $t\in
[0,\frac{1}{2}]$ in (\ref{2.5}). Then (\ref{2.15}) follows from the
fact that the only difference between $\bar O_c$ and $\bar O$ is
that $\td {\gamma}_c(1)$ and $\td {\gamma}(1)$ in (\ref{2.4}) may be
connected to different matrices $M_+$ or $M_-$ by $\beta_c$ and
$\beta$ in (\ref{2.5}) respectively. The statement that the left
hand sides of the two inequalities in (\ref{2.14''}) depend only on
the end matrix $\gamma(1)$ is a consequence of Corollary 4.1 of
\cite{Liu2}. For the proof of (\ref{2.14'}), suppose $\gamma(1)\in
O(2n)\cap Sp(2n)$. Then we can take $\gamma(t)\in O(2n)\cap Sp(2n)$ since
the number on the left side of inequality (\ref{2.15}) depends only
on $\gamma(1)$. For $\gamma(t)\in O(2n)\cap Sp(2n)$, we have
$\gamma_c(t)=J^{-1}\gamma(t)J=\gamma(t)$. Thus we have
$i_{L_0}(\gamma)=i_{L_1}(\gamma)$. \hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\bf Theorem 2.4.} (Lemma 5.1 of \cite{Liu2}) {\it If
$\gamma\in \mathcal{P}(2n)$ is the fundamental solution of
$$\dot x(t)=JB(t)x(t)$$ with symmetric matrix function
$B(t)=\left(\begin{array}{cc} b_{11}(t) & b_{12}(t)\\b_{21}(t) & b_{22}(t)
\end{array}\right)$ satisfying $b_{22}(t)>0$ for any $t\in {\bf R}$, then there holds
$$i_{L_0}(\gamma)=\sum_{0<s<1}\nu_{L_0}(\gamma_s),\;\gamma_s(t)=\gamma(st).$$
Similarly, if $b_{11}(t)>0$ for any $t\in {\bf R}$, there holds
$$i_{L_1}(\gamma)=\sum_{0<s<1}\nu_{L_1}(\gamma_s),\;\gamma_s(t)=\gamma(st).$$ }
\setcounter{equation}{0}
\section{$\omega$-index theory associated with
a Lagrangian subspace for symplectic paths}
Let $E$ be a separable Hilbert space, and let $Q=A-B: E\to E$ be a
bounded self-adjoint linear operator with $B:E\to E$ a
compact self-adjoint operator. Suppose that $N=\ker Q$ satisfies $\dim
N<+\infty$ and that $Q|_{N^{\bot}}$ is invertible. Let $P:E\to N$ be the
orthogonal projection. We denote $d=\frac 14
\|(Q|_{N^{\bot}})^{-1}\|^{-1}$. Suppose
$\Gamma=\{P_k|k=1,2,\cdots\}$ is the Galerkin approximation sequence
of $A$ with
\quad(1) $E_k:=P_kE$ is finite dimensional for all $k\in{\bf N}$,
\quad(2) $P_k\to I$ strongly as $k\to +\infty$,
\quad(3) $P_kA=AP_k$.
For a self-adjoint operator $T$, we denote by $M^{*}(T)$ the
eigenspaces of $T$ with eigenvalues belonging to $(0,+\infty)$,
$\{0\}$ and $(-\infty, 0)$ with $*=+,0$ and $*=-$, respectively. We
denote by $m^*(T)=\dim M^*(T)$. Similarly, we denote by $M_d^{*}(T)$
the $d$-eigenspaces of $T$ with eigenvalues belonging to
$(d,+\infty)$, $(-d,d)$ and $(-\infty, -d)$ with $*=+,0$ and $*=-$,
respectively. We denote by $m_d^*(T)=\dim M_d^*(T)$.
\noindent{\bf Lemma 3.1.} {\it There exists $m_0\in {\bf N}$ such that
for all $m\ge m_0$, there hold}
\begin{equation} m^-(P_m(Q+P)P_m)=m_d^-(P_m(Q+P)P_m) \label {3.1}\end{equation} and \begin{equation}
m^-(P_m(Q+P)P_m)=m_d^-(P_mQP_m). \label{3.2}\end{equation}
\noindent{\bf Proof.} The proof of
(\ref{3.1}) is essentially the same as that of Theorem 2.1 of
\cite{Fei}; we note that $\dim\ker(Q+P)=0$.
By considering the operators $Q+sP$ and $Q-sP$ for small $s>0$, for
example $s<\min \{1, d/2\}$, there exists $m_1\in{\bf N}$ such that
\begin{equation}
m^-_{d}(P_mQP_m)\le m^-(P_m(Q+sP)P_m), \;\forall \, m\ge m_1 \label
{3.3}\end{equation} and \begin{equation} m^-_{d}(P_mQP_m)\ge
m^-(P_m(Q-sP)P_m)-m^0_{d}(P_mQP_m),\; \forall \, m\ge m_1.
\label{3.4}\end{equation}
In fact, the claim (\ref{3.3}) follows from
$$P_m(Q+sP)P_m=P_mQP_m+sP_mPP_m $$
and for $x\in M^-_{d}(P_mQP_m)$,
$$(P_m(Q+sP)P_mx,x)\le -d\|x\|^2+s\|x\|^2\le -\frac{d}{2}\|x\|^2. $$
The claim (\ref{3.4}) follows from that for $x\in
M^-(P_m(Q-sP)P_m)$,
$$(P_mQP_mx,x)\le s(P_mPP_mx,x)< d\|x\|^2. $$
By the Floquet theory, for $m\ge m_1$ we have
$m^0_{d}(P_mQP_m)=\dim N=\dim {\rm Im}(P_mPP_m)$, and by
${\rm Im}(P_mPP_m)\subseteq M^0_{d}(P_mQP_m)$ we have ${\rm Im}(P_mPP_m)=
M^0_{d}(P_mQP_m)$. It is easy to see that $M^0_{d}(P_mQP_m)\subseteq
M^+_{d}(P_m(Q+sP)P_m)$. By using
$$P_m(Q-sP)P_m=P_m(Q+sP)P_m-2sP_mPP_m $$
we have \begin{equation} m^-(P_m(Q-sP)P_m)\ge m^-(P_m(Q+sP)P_m)+m^0_{d}(P_mQP_m),
\;\forall\,m\ge m_1. \label {3.5}\end{equation} Now (\ref{3.2}) follows from
(\ref{3.3})-(\ref{3.5}). \hfill\vrule height0.18cm width0.14cm $\,$
Alternatively, since $M^-(Q+P)=M^-(Q)$, the two operators $Q+P$ and $Q$ have the
same negative spectrum; moreover, $P_m(Q+P)P_m\to Q+P$ and
$P_mQP_m\to Q$ strongly, so one can also prove (\ref{3.2}) by the spectral
decomposition theory.
The following result was proved in \cite{CLL}.
\noindent{\bf Lemma 3.2.} {\it Let $B$ be a linear symmetric
compact
operator, and let $P:E\to \ker A$ be the orthogonal projection. Suppose that
$A-B$ has a bounded inverse. Then the difference of the Morse
indices
$$m^-(P_m(A-B)P_m)-m^-(P_m(A+P)P_m) $$
eventually becomes a constant independent of $m$, where $A:E\to E$ is a
bounded self-adjoint operator with a finite dimensional kernel,
and the restriction $A|_{(\ker A)^\bot}$ is invertible, and
$\Gamma=\{P_k\}$ is a Galerkin approximation sequence with respect to $A$.
}
By Lemmas 3.1 and 3.2, we have the following result.
\noindent{\bf Lemma 3.3.} {\it Let $B$ be a linear symmetric compact
operator. Then the difference of the $d$-Morse
indices
\begin{equation} m_d^-(P_m(A-B)P_m)-m_d^-(P_mAP_m) \label {3.6}\end{equation}
eventually becomes a constant independent of $m$, where $d>0$ is determined by the operators $A$ and $A-B$.
Moreover $m^0_d(P_m(A-B)P_m)$ eventually becomes a constant independent of
$m$ and for large $m$, there holds
\begin{equation} m_d^0(P_m(A-B)P_m)=m^0(A-B). \label{3.7}\end{equation} }
\noindent{\bf Proof.}
We only need to prove (\ref{3.7}). It is easy to show that there is
a constant $m_1>0$ such that for $m\ge m_1$
$$\dim P_m\ker (A-B)=\dim\ker(A-B). $$
Since $B$ is compact, there is $m_2\ge m_1$ such that for $m\ge m_2$
$$\|(I-P_m)B\|\le 2d. $$
Take $m\ge m_2$ and let $E_m=P_m\ker(A-B)\oplus Y_m$; then
$Y_m\subseteq {\rm Im} (A-B)$. For $y\in Y_m$ we have
$$y=(A-B)^{\sharp} (A-B)y=(A-B)^{\sharp}(P_m(A-B)P_my+(P_m-I)By).$$
It implies
$$\|P_m(A-B)P_my\|\ge 2d\|y\|, \;\forall y\in Y_m. $$
Thus we have \begin{equation} m_d^0(P_m(A-B)P_m)\le m^0(A-B).\label{3.8}\end{equation}
On the other hand,
for $x\in P_m\ker(A-B)$, there exists $y\in \ker(A-B)$, such that
$x=P_my$. Since $P_m\to I$ strongly, there exists $m_3\ge m_2$ such
that for $m\ge m_3$
$$\|I-P_m\|<\frac 12,\;\;\|P_m(A-B)(I-P_m)\|\le \frac d2. $$
So we have
$$\|P_m(A-B)P_m x\|=\|P_m(A-B)(I-P_m) y\|\le \frac d2\|y\|<d\|x\|. $$
It implies that \begin{equation} m_d^0(P_m(A-B)P_m)\ge m^0(A-B).\label{3.9}\end{equation}
(\ref{3.7}) holds from (\ref{3.8}) and (\ref{3.9}). \hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\bf Definition 3.1.} {\it For the self-adjoint Fredholm
operator $A$ with a Galerkin approximation sequence $\Gamma$ and the
self-adjoint compact operator $B$ on Hilbert space $E$, we define
the relative index by \begin{equation} I(A,A-B)=
m_d^-(P_m(A-B)P_m)-m_d^-(P_mAP_m),\;\;\;\; m\ge m^*, \label {3.10}\end{equation}
where
$m^*>0$ is a constant large enough such that the difference in
(\ref{3.6}) becomes a constant independent of $m\ge m^*$. }
The spectral flow for a parameter family of linear self-adjoint
Fredholm operators was introduced by Atiyah, Patodi and Singer in
\cite{APS}. The following result shows that the relative index in
Definition 3.1 is a spectral flow.
\noindent{\bf Lemma 3.4.} {\it For the
operators $A$ and $B$ in Definition 3.1, there holds
\begin{equation} I(A,A-B)=-{\rm sf}\{A-sB,\,0\le s\le 1\}, \label
{3.11}\end{equation}
where ${\rm sf}(A-sB,\,0\le s\le 1)$ is the spectral flow of the operator family $A-sB$, $s\in[0,1]$ (cf.
\cite{ZL}).}
\noindent{\bf Proof.} For simplicity, we set $I_{\rm
sf}(A,A-B)=-{\rm sf}\{A-sB,\,0\le s\le 1\}$ which is exactly the
relative Morse index defined in \cite{ZL}. By the Galerkin
approximation formula in Theorem 3.1 of \cite{ZL},
\begin{eqnarray} I_{\rm
sf}(A,A-B)=I_{\rm sf}(P_mAP_m,\,P_m(A-B)P_m)\label{a1}\end{eqnarray} if
$\ker(A)=\ker(A-B)=0$.
By (2.17) of \cite{ZL}, we have \begin{eqnarray} I_{\rm
sf}(P_mAP_m,\,P_m(A-B)P_m)&=&m^-(P_m(A-B)P_m)-m^-(P_mAP_m)\nonumber\\
&=&m_d^-(P_m(A-B)P_m)-m_d^-(P_mAP_m)\nonumber\\&=&I(A,A-B)\label{a3}\end{eqnarray}
for
$d>0$ small enough. Hence (\ref{3.11}) holds in the nondegenerate
case. In general, if $\ker(A)\ne 0$ or $\ker(A-B)\ne 0$, we can
choose $d>0$ small enough such that $\ker(A+d {\rm Id})=\ker(A-B+d
{\rm Id})=0$, here ${\rm Id}:\; E\to E$ is the identity operator. By
(2.14) of \cite{ZL} we have
\begin{eqnarray} I_{\rm
sf}(A,A-B)&=&I_{\rm sf}(A,A+d {\rm Id})+ I_{\rm sf}(A+d {\rm
Id},A-B+d {\rm Id})+I_{\rm sf}(A-B+d
{\rm Id},A-B)\nonumber\\
&=&I_{\rm sf}(A+d {\rm Id},A-B+d {\rm Id})=I(A+d {\rm Id},A-B+d {\rm Id})\nonumber\\
&=&m^-(P_m(A-B+d {\rm Id})P_m)-m^-(P_m(A+d
{\rm Id})P_m)\nonumber\\
&=&m_d^-(P_m(A-B)P_m)-m_d^-(P_mAP_m)= I(A,A-B).\label{a2}
\end{eqnarray}
In the second equality of (\ref{a2}) we note that $I_{\rm sf}(A,A+d
{\rm Id})=I_{\rm sf}(A-B+d {\rm Id},A-B)=0$ for $d>0$ small enough
since the spectrum of $A$ is discrete and $B$ is a compact operator,
in the third and the fourth equalities of (\ref{a2}) we have applied
(\ref{a3}). \hfill\vrule height0.18cm width0.14cm $\,$
A similar way to define the relative index of two
operators appeared in \cite{CLL}. A different approach to the
relative index theory appeared in \cite{Fei}.
For $\omega=e^{\sqrt{-1}\theta}$ with $\theta\in{\bf R}$, we define a
Hilbert space $E^{\omega}=E^{\omega}_{L_0}$ consisting of those
$x(t)$ in $L^2([0,1], {\bf C}^{2n})$ such that $e^{-\theta t J}x(t)$ has
the Fourier expansion
$$e^{-\theta t J}x(t)=\sum_{j\in {\bf Z}}e^{j\pi tJ}\left(\begin{array}{cc} 0\\a_j\end{array}\right),\;a_j\in {\bf C}^n $$
with
$$\|x\|^2:=\sum_{j\in{\bf Z}}(1+|j|)|a_j|^2<\infty. $$
For $x\in E^{\omega}$, we can write \begin{eqnarray} x(t)&=&e^{\theta
tJ}\sum_{j\in{\bf Z}}e^{j\pi tJ}\left(\begin{array}{cc} 0\\a_j\end{array}\right)
=\sum_{j\in{\bf Z}}e^{(\theta+j\pi)tJ}\left(\begin{array}{cc} 0\\a_j\end{array}\right) \nonumber\\
&=&\sum_{j\in{\bf Z}}e^{(\theta+j\pi)t\sqrt{-1}}\left(\begin{array}{cc} \sqrt{-1}a_j/2\\a_j/2\end{array}\right)+
e^{-(\theta+j\pi)t\sqrt{-1}}\left(\begin{array}{cc}
-\sqrt{-1}a_j/2\\a_j/2\end{array}\right).\label{4.7}\end{eqnarray}
So we can write \begin{equation} x(t)=\xi(t)+N\xi(-t), \;
\xi(t)=\sum_{j\in{\bf Z}}e^{(\theta+j\pi)t\sqrt{-1}} \left(\begin{array}{cc}
\sqrt{-1}a_j/2\\a_j/2\end{array}\right). \label {3.12}\end{equation} For
$\omega=e^{\sqrt{-1}\theta}$, $\theta\in [0,\pi)$, we define two
self-adjoint operators $A^{\omega}, B^{\omega}\in \mathcal
{L}(E^{\omega})$ by \begin{eqnarray} (A^{\omega}x,y)=\int^1_0\<-J\dot
x(t),y(t)\>dt,\;\;(B^{\omega}x,y)=\int^1_0\<B(t)x(t),y(t)\>dt
\nonumber\end{eqnarray}
on $E^{\omega}$. Then $B^{\omega}$ is also compact.
\noindent{\bf Definition 3.2.} {\it We define the index function
$$i_{\omega}^{L_0}(B)=I(A^{\omega}, \;\;A^{\omega}-B^{\omega}),\;\;\nu_{\omega}^{L_0}(B)=m^0(A^{\omega}-B^{\omega}),\;
\forall\,\omega=e^{\sqrt{-1}\theta},\;\;\theta\in (0,\pi). $$
}
By the Floquet theory,
we have that $\ker(A^{\omega}-B^{\omega})$ is isomorphic to the
solution space of the following linear Hamiltonian system
$$\dot x(t)=JB(t)x(t) $$
satisfying the following boundary condition
$$x(0)\in L_0, \;\;x(1)\in e^{\theta J}L_0.$$
If $m^0(A^{\omega}-B^{\omega})>0$, there holds $$\gamma(1)L_0\cap e^{\theta J}L_0\neq
\{0\}$$
which is equivalent to $$\omega^2=e^{2\theta\sqrt{-1}}\in
\sigma\left([U(1)-\sqrt{-1}V(1)][U(1)+\sqrt{-1}V(1)]^{-1}\right).$$
This claim follows from the fact that if $\gamma(1)L_0\cap e^{\theta J}L_0\neq
\{0\}$, there exist $a, b\in {\bf C}^n\setminus \{0\}$ such that
$$[U(1)+\sqrt{-1}V(1)]a=\omega^{-1}b,\;\;[U(1)-\sqrt{-1}V(1)]a=\omega b. $$
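Indeed, eliminating $a$ from the last two equalities gives
$$[U(1)-\sqrt{-1}V(1)][U(1)+\sqrt{-1}V(1)]^{-1}b=\omega^{2}b, $$
so $\omega^2$ is an eigenvalue of $[U(1)-\sqrt{-1}V(1)][U(1)+\sqrt{-1}V(1)]^{-1}$ with eigenvector $b$.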
So we have \begin{equation}\nu_{\omega}^{L_0}(B)= \dim (\gamma(1)L_0\cap
e^{\theta J}L_0),\;\; \forall\,
\omega=e^{\sqrt{-1}\theta},\;\theta\in (0,\pi). \label{ 3.14}\end{equation}
\noindent{\bf Lemma 3.5.} {\it The index function
$i_{\omega}^{L_0}(B)$ is locally constant. If
$\omega_0=e^{\sqrt{-1}\theta_0}$ with $\theta_0\in (0,\pi)$ is a point of
discontinuity of $i_{\omega}^{L_0}(B)$, then
$\nu_{\omega_0}^{L_0}(B)>0$ and so $\dim (\gamma(1)L_0\cap
e^{\theta_0 J}L_0)>0$. Moreover there hold
\begin{eqnarray} && |i_{\omega_0+}^{L_0}(B)-i_{\omega_0-}^{L_0}(B)|\le
\nu_{\omega_0}^{L_0}(B),\;\qquad
|i_{\omega_0+}^{L_0}(B)-i_{\omega_0}^{L_0}(B)|\le
\nu_{\omega_0}^{L_0}(B),\nonumber\\&&
|i_{\omega_0-}^{L_0}(B)-i_{\omega_0}^{L_0}(B)|\le
\nu_{\omega_0}^{L_0}(B),\;\qquad\;\;
|i_{L_0}(B)+n-i_{1+}^{L_0}(B)|\le \nu_{L_0}(B), \label {3.15}\end{eqnarray}
where $i_{\omega_0+}^{L_0}(B)$, $i_{\omega_0-}^{L_0}(B)$ are the
limits on the right and left respectively of the index function $i_{\omega}^{L_0}(B)$
at $\omega_0=e^{\sqrt{-1}\theta_0}$ as a function of $\theta$.}
\noindent{\bf Proof.} For $x(t)=e^{\theta tJ}u(t),
u(t)=\displaystyle\sum_{j\in{\bf Z}}e^{j\pi tJ}\left(\begin{array}{cc}
0\\a_j\end{array}\right)$, we have
$$((A^{\omega}-B^{\omega})x,x)=\int^1_{0}\<-J\dot u(t),u(t)\>dt+
\int^1_{0}\<(\theta-e^{-\theta tJ}B(t)e^{\theta tJ})u(t),u(t)\>dt=:(q_{\omega}u,u). $$
Since $\dim (\gamma(1)L_0\cap
e^{\theta J}L_0)>0$ for at most finitely many (at most $n$) points $\theta\in
(0,\pi)$, if $\theta_0\in (0,\pi)$ satisfies
$\nu_{\omega_0}^{L_0}(B)=0$, then $\nu_{\omega}^{L_0}(B)=0$ for
$\omega=e^{\sqrt{-1}\theta}$, $\theta\in
(\theta_0-\delta,\theta_0+\delta)$ with $\delta>0$ small enough. By
using the notations of Lemma 3.3, we have
$$ (P_m^{\omega}(A^{\omega}-B^{\omega})P_m^{\omega}x,x)=(P_mq_{\omega}P_mu,u).$$
By Lemma 3.3, we have
$$m^0_d(P_m^{\omega}(A^{\omega}-B^{\omega})P_m^{\omega})=m^0(A^{\omega}-B^{\omega})=\nu_{\omega}^{L_0}(B)=0. $$
So by the continuity of the eigenvalue of a continuous family of
operators we have that
$$m^-_d(P_m^{\omega}(A^{\omega}-B^{\omega})P_m^{\omega}) $$
must be constant for $\omega=e^{\sqrt{-1}\theta}$, $\theta\in
(\theta_0-\delta,\theta_0+\delta)$. Since
$m^-_d(P_m^{\omega}A^{\omega}P_m^{\omega})$ is constant for
$\omega=e^{\sqrt{-1}\theta}$, $\theta\in
(\theta_0-\delta,\theta_0+\delta)$, we have $i_{\omega}^{L_0}(B)$ is
constant for $\omega=e^{\sqrt{-1}\theta}$, $\theta\in
(\theta_0-\delta,\theta_0+\delta)$.
The results in (\ref{3.15}) now follow from some standard arguments.
\hfill\vrule height0.18cm width0.14cm $\,$
By (\ref{c1}), Definition 3.2 and Lemma 3.5, we see that for
any $\omega_0=e^{\sqrt{-1}\theta_0},\;\theta_0\in (0,\pi)$, there
holds \begin{equation} i^{L_0}_{\omega_0}(B)\ge
i_{L_0}(B)+n-\sum_{\omega=e^{\sqrt{-1}\theta},\; 0\le \theta\le
\theta_0}\nu^{L_0}_{\omega}(B). \label {3.16}\end{equation}
We note that
\begin{equation} \sum_{\omega=e^{\sqrt{-1}\theta},\;
0\le \theta\le \theta_0}\nu^{L_0}_{\omega}(B)\le n. \label{3.17}\end{equation}
So we have
\begin{equation} i_{L_0}(B)\le i^{L_0}_{\omega_0}(B)\le
i_{L_0}(B)+n. \label {3.18}\end{equation}
\setcounter{equation}{0}
\section{ Bott-type index formula for $L$-index}
In this section, we establish the Bott-type iteration formula for
the $L_j$-index theory with $j=0,1$. Without loss of generality, we
assume $\tau=1$. Suppose the continuous symplectic path $\gamma:
[0,1]\to {\rm Sp}(2n)$
is the fundamental solution of the following linear Hamiltonian
system
\begin{equation} \dot z(t)=J B(t)z(t),\quad t\in {\bf R} \label{4.1}\end{equation}
with $B(t)$ satisfying $B(t+2)=B(t)$ and $B(1+t)N=NB(1-t)$ for $t\in {\bf R}$. This implies
$B(t)N=NB(-t)$ for $t\in {\bf R}$. By the unique existence theorem of
the linear differential equations, we get
\begin{equation}\gamma(1+t)=N\gamma(1-t)\gamma(1)^{-1}N\gamma(1), \gamma(2+t)=\gamma(t)\gamma(2). \label{4.2}\end{equation}
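Indeed, setting $\delta(t)=N\gamma(1-t)\gamma(1)^{-1}N\gamma(1)$, we have
$\delta(0)=\gamma(1)$ since $N^2=I_{2n}$, and, using $NJ=-JN$ together with $B(1+t)N=NB(1-t)$,
$$\dot\delta(t)=-NJB(1-t)\gamma(1-t)\gamma(1)^{-1}N\gamma(1)
=JNB(1-t)\gamma(1-t)\gamma(1)^{-1}N\gamma(1)=JB(1+t)\delta(t), $$
so $\delta(t)=\gamma(1+t)$ by the uniqueness of solutions; the second identity in
(\ref{4.2}) follows in the same way from the periodicity $B(t+2)=B(t)$.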
For $j\in {\bf N}$, we define the $j$-times iteration path ${\gamma}^j:[0,j]\to {\rm Sp}(2n)$ of $\gamma$ by
$$\gamma^1(t)=\gamma(t), \;t\in [0,1], $$
$$\gamma^2(t)=\left\{\begin{array}{l} \gamma(t), \;t\in [0,1],\\
N\gamma(2-t)\gamma(1)^{-1}N\gamma(1), \;t\in [1,2], \end{array}\right.$$
and in general, for $k\in{\bf N}$, we define
\begin{eqnarray}\gamma^{2k-1}(t)=\left\{\begin{array}{l} \gamma(t), \;t\in [0,1],\\
N\gamma(2-t)\gamma(1)^{-1}N\gamma(1), \;t\in [1,2],\\\cdots\cdots\\
N\gamma(2k-2-t)\gamma(1)^{-1}N\gamma(1)\gamma(2)^{k-2}, \;t\in [2k-3,2k-2],\\
\gamma(t-2k+2)\gamma(2)^{k-1}, \;t\in [2k-2,2k-1],\end{array}\right.
\label{4.3}\end{eqnarray}
\begin{eqnarray}\gamma^{2k}(t)=\left\{\begin{array}{l}\gamma(t), \;t\in [0,1],\\
N\gamma(2-t)\gamma(1)^{-1}N\gamma(1), \;t\in [1,2],\\\cdots\cdots\\
\gamma(t-2k+2)\gamma(2)^{k-1}, \;t\in [2k-2,2k-1], \\
N\gamma(2k-t)\gamma(1)^{-1}N\gamma(1)\gamma(2)^{k-1}, \;t\in [2k-1,2k].\end{array}\right.\label{4.4}\end{eqnarray}
For $\gamma\in \mathcal {P}_{\tau}(2n)$, we define \begin{equation}\gamma^k(\tau
t)=\td{\gamma}^k(t)\;{\rm with}\; \td{\gamma}(t)=\gamma(\tau
t)\label{uvw}.\end{equation}
For the $L_0$-index of the iteration path $\gamma^k$, we have
the following Bott-type formulas.
\noindent{\bf Theorem 4.1.} {\it Suppose $\omega_k=e^{\pi
\sqrt{-1}/k}$. For odd $k$ we have
\begin{eqnarray} && i_{L_0}(\gamma^k)=i_{L_0}(\gamma^1)+\sum_{i=1}^{(k-1)/2}i_{\omega_k^{2i}}(\gamma^2),\nonumber\\
&&\nu_{L_0}(\gamma^k)=\nu_{L_0}(\gamma^1)+\sum_{i=1}^{(k-1)/2}\nu_{\omega_k^{2i}}(\gamma^2),\nonumber\end{eqnarray}
and for even $k$, we have \begin{eqnarray} &&
i_{L_0}(\gamma^k)=i_{L_0}(\gamma^1)+i^{L_0}_{\omega_k^{k/2}}(\gamma^1)+
\sum_{i=1}^{k/2-1}i_{\omega_k^{2i}}(\gamma^2),\;
\nonumber\\
&&
\nu_{L_0}(\gamma^k)=\nu_{L_0}(\gamma^1)+\nu^{L_0}_{\omega_k^{k/2}}(\gamma^1)+
\sum_{i=1}^{k/2-1}\nu_{\omega_k^{2i}}(\gamma^2). \nonumber\end{eqnarray}}
We note
that $\omega_k^{k/2}=e^{\pi\sqrt{-1}/2}=\sqrt{-1}$.
Before proving Theorem 4.1, we give some notations and definitions.
We define the Hilbert space \begin{eqnarray} E^k_{L_0}=\left\{x\in L^{2}([0,k],
{\bf C}^{2n})\,|\, x(t)=\sum_{j\in{\bf Z}}e^{jt\pi J/k}\left(\begin{array}{cc}
0\\a_j\end{array}\right),
\;a_j\in{\bf C}^{n},\;\|x\|^2:=\sum_{j\in{\bf Z}}(1+|j|)|a_j|^2<\infty\right\},
\nonumber\end{eqnarray}
where we still denote by $L_0=\{0\}\times{\bf C}^n\subset {\bf C}^{2n}$ the Lagrangian
subspace of the complex symplectic space
$({\bf C}^{2n},\omega_0)$.
For $x\in E^k_{L_0}$, we can write
\begin{eqnarray} x(t)&=&\sum_{j\in{\bf Z}}e^{jt\pi J/k}\left(\begin{array}{cc} 0\\a_j\end{array}\right)
=\sum_{j\in {\bf Z}}\left(\begin{array}{cc}
-\sin(jt\pi/k)a_j\\\cos(jt\pi/k)a_j\end{array}\right)\nonumber\\&=&\sum_{j\in{\bf Z}}\left\{
e^{j\pi t\sqrt{-1}/k}\left(\begin{array}{cc}
\sqrt{-1}a_j/2\\a_j/2\end{array}\right)+e^{-j\pi t\sqrt{-1}/k}\left(\begin{array}{cc}
-\sqrt{-1}a_j/2\\a_j/2\end{array}\right)\right\}.\label{4.5}\end{eqnarray}
On $E^k_{L_0}$ we define two self-adjoint operators and a quadratic form by
\begin{equation} (A_kx,\,y)=\int_0^k\<-J\dot{x}(t),\,y(t)\>dt,\quad
(B_kx,\,y)=\int_0^k\<B(t)x(t),y(t)\>dt, \label{4.5'}\end{equation}
\begin{equation} Q^k_{L_0}(x,y)=((A_k-B_k) x,y),\label{4.6}\end{equation}
where in this section $\<\cdot,\cdot\>$ is the standard Hermitian inner product
in ${\bf C}^{2n}$.
\noindent{\bf Lemma 4.1.} {\it $E_{L_0}^k$ has the following natural
decomposition
\begin{equation} E^k_{L_0}=\bigoplus_{l=0}^{k-1}E_{L_0}^{\omega_k^l}, \label{4.8}\end{equation}
here we have extended the domain of functions in
$E_{L_0}^{\omega_k^l}$ from $[0,1]$ to $[0,k]$ in the obvious
way, i.e.,
$$E_{L_0}^{\omega_k^l}=\left\{x\in E^k_{L_0}\,|\,x(t)=e^{l\pi
tJ/k}\sum_{j\in{\bf Z}}e^{j\pi tJ}\left(\begin{array}{cc} 0\\a_j\end{array}\right)\right\}.$$
}
\noindent{\bf Proof.} Any element $x\in E^k_{L_0}$ can be written
as
\begin{eqnarray} x(t)&=&\sum_{j\in{\bf Z}}\left\{
e^{j\pi t\sqrt{-1}/k}\left(\begin{array}{cc}
\sqrt{-1}a_j/2\\a_j/2\end{array}\right)+e^{-j\pi t\sqrt{-1}/k}\left(\begin{array}{cc}
-\sqrt{-1}a_j/2\\a_j/2\end{array}\right)\right\}\nonumber\\
&=&\sum_{l=0}^{k-1}\sum_{j\equiv l \,({\rm mod}\, k)}\left\{ e^{j\pi
t\sqrt{-1}/k}\left(\begin{array}{cc} \sqrt{-1}a_j/2\\a_j/2\end{array}\right)+e^{-j\pi
t\sqrt{-1}/k}\left(\begin{array}{cc}
-\sqrt{-1}a_j/2\\a_j/2\end{array}\right)\right\}\nonumber\\
&=&\sum_{l=0}^{k-1}\sum_{j\in{\bf Z}}\left\{e^{l\pi t\sqrt{-1}/k} e^{j\pi
t\sqrt{-1}}\left(\begin{array}{cc} \sqrt{-1}b_j/2\\b_j/2\end{array}\right)+e^{-l\pi
t\sqrt{-1}/k} e^{-j\pi t\sqrt{-1}}\left(\begin{array}{cc}
-\sqrt{-1}b_j/2\\b_j/2\end{array}\right)\right\}\nonumber\\&:=&
\xi_x(t)+N\xi_x(-t),\;\xi_x(t)=\sum_{l=0}^{k-1}\sum_{j\in{\bf Z}}e^{l\pi
t\sqrt{-1}/k} e^{j\pi t\sqrt{-1}}\left(\begin{array}{cc}
\sqrt{-1}b_j/2\\b_j/2\end{array}\right),\label{4.9}\end{eqnarray}
where $b_j=a_{jk+l}$. Setting $\omega_k=e^{\pi \sqrt{-1}/k}$ and comparing (\ref{4.7}) and (\ref{4.9}), we
obtain (\ref{4.8}).
\hfill\vrule height0.18cm width0.14cm $\,$
Note that the natural decomposition (\ref{4.8}) is not orthogonal
under the quadratic form $Q_{L_0}^k$ defined in (\ref{4.6}). So
the iteration formulas in Theorem 4.1 are of a type somewhat
different from the original Bott formulas in \cite{Bott} for the
Morse index theory of closed geodesics, from (\ref{1.21}) of the
Maslov-type index theory for periodic solutions of Hamiltonian
systems, and from the Bott-type formulas in \cite{Ek}. This is also the
main difficulty in the proof of Theorem 4.1. However, after
recombining the terms in the decomposition in Lemma 4.1, we can
obtain an orthogonal decomposition under the quadratic form
$Q_{L_0}^k$.
For $1\le l<\frac {k}{2}$ and $l\in {\bf N}$, we set
$$E_{L_0}^{\omega_k,l}=E_{L_0}^{\omega_k^l}\oplus E_{L_0}^{\omega_k^{k-l}}.$$
So for odd $k$, we decompose $E_{L_0}^k$ as
$$E_{L_0}^k=E_{L_0}^1\oplus\bigoplus_{l=1}^{(k-1)/2}E_{L_0}^{\omega_k,l}, \eqno(C_{odd})$$
for even $k$, we decompose $E_{L_0}^k$ as
$$E_{L_0}^k=E_{L_0}^1\oplus E_{L_0}^{\omega_k^{k/2}}
\oplus\bigoplus_{l=1}^{\frac k2-1}E_{L_0}^{\omega_k,l}. \eqno(C_{even})$$
\noindent{\bf Lemma 4.2.} {\it The above two decompositions
($C_{odd}$) and ($C_{even}$) are orthogonal under the quadratic
form $Q_{L_0}^k$ for $k$ odd and even, respectively. Moreover, for
$x\in E_{L_0}^{\omega_k^i}$ and $y\in
E_{L_0}^{\omega_k^j}$, $i,j\in{\bf Z}\cap[0,k-1]$, we have
\begin{eqnarray} &&(B_k
x,y)=\int^k_0 \<B(t)
x(t),y(t)\>\,dt=0, \;\;{\rm if}\; i\neq j,\;i+j\ne k,\label{4.10}\\
&& (B_k x,y)=\int^k_0 \<B(t) x(t),y(t)\>\,dt\nonumber\\
&&\quad\quad\quad\;\;\,=k\int^1_0 \<B(t)
x(t),y(t)\>\,dt=k(B^{{\omega}_k^i}x,y), \;\;{\rm if}\; i=j=
0,\frac {k}{2}, \label{4.11}\\
&& (B_k x,y)=\int^k_0 \<B(t)
x(t),y(t)\>\,dt\nonumber\\
&&=k\left(\int^1_0 \<B(t) \xi_x(t),\xi_y(t)\>\,dt+\int^1_0 \<B(t)
N\xi_x(-t),N\xi_y(-t)\>\,dt\right), \;{\rm if}\; i=j\neq 0,\frac
{k}{2}, \label{4.12}\end{eqnarray} \begin{eqnarray} && (B_kx,y)=k\left(\int^1_0 \<B(t)
N\xi_x(-t),\xi_y(t)\>\,dt\right.\nonumber\\
&&\qquad\qquad\;\;\quad\quad\left.+\int^1_0 \<B(t)
\xi_x(t),N\xi_y(-t)\>\,dt\right),\;\;{\rm if}\; i\ne j,\; i+j=k,\label{4.13}\\
&& (A_k x,y)=\int^k_0 \<-J\dot x(t),y(t)\>\,dt=0, \;\;{\rm if}\; i\neq j,\label{4.13'}\\
&& (A_k x,y)=\int^k_0 \<-J\dot x(t),y(t)\>\,dt=k\int^1_0 \<-J\dot
x(t),y(t)\>\,dt=k(A^{{\omega}_k^i}x,y), \;\;{\rm if}\; i=j, \label{4.14}\end{eqnarray}
where the operators $A^{\omega}$, $B^{\omega}$ are defined
in Section 3.}
\noindent{\bf Proof.} We first prove the formulas
(\ref{4.10})-(\ref{4.14}). It is easy to see that we only need to
prove them in the case
\begin{eqnarray} &&
x(t)=e^{it\pi\sqrt{-1}/k}e^{pt\pi
\sqrt{-1}}\alpha_p+e^{-it\pi\sqrt{-1}/k}e^{-pt\pi
\sqrt{-1}}N\alpha_p,\nonumber\\
&& y(t)=e^{jt\pi\sqrt{-1}/k}e^{mt\pi
\sqrt{-1}}\alpha_m+e^{-jt\pi\sqrt{-1}/k}e^{-mt\pi
\sqrt{-1}}N\alpha_m,\nonumber\\&& \alpha_s=\left(\begin{array}{cc} \sqrt{-1}
a_s\\a_s\end{array}\right),\nonumber\end{eqnarray} for any integers $p$ and $m$.
In this case,
\begin{eqnarray} (B_kx,y)&=&\int^k_0\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&\,&\;+\int^k_0\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&\,&\;+\int^k_0\<B(t)N\alpha_p,\;e^{(j+i)t\pi\sqrt{-1}/k}e^{(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&\,&\;+\int^k_0\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&=&\sum_{s=1}^k\int^s_{s-1}\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&\,&\;+\sum_{s=1}^k\int^s_{s-1}\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m
\>\,dt\nonumber\\
&\,&\;+\sum_{s=1}^k\int^s_{s-1}\<B(t)N\alpha_p,\;e^{(j+i)t\pi\sqrt{-1}/k}e^{(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&\,&\;+\sum_{s=1}^k\int^s_{s-1}\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&:=&I_1+I_2+I_3+I_4.\nonumber\end{eqnarray}
By using the relations $B(1+t)N=NB(1-t)$
and $B(t)N=NB(-t)$, we have
\begin{eqnarray} && \int^{s+1}_{s}\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&\int^{s}_{s-1}\<B(1+t)\alpha_p,\;e^{(j-i)(1+t)\pi\sqrt{-1}/k}e^{(m-p)(1+t)\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&\int^{s}_{s-1}\<NB(1-t)N\alpha_p,\;e^{(j-i)(1+t)\pi\sqrt{-1}/k}e^{(m-p)(1+t)\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&\int^{s}_{s-1}\<B(t-1)\alpha_p,\;e^{(j-i)(1+t)\pi\sqrt{-1}/k}e^{(m-p)(1+t)\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&\int^{s-1}_{s-2}\<B(t)\alpha_p,\;e^{(j-i)(2+t)\pi\sqrt{-1}/k}e^{(m-p)(2+t)\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&e^{2(i-j)\pi\sqrt{-1}/k}\int^{s-1}_{s-2}\<B(t)\alpha_p,\;
e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt.\nonumber\end{eqnarray}
Similarly, we have
\begin{eqnarray} &&\int^{s+1}_{s}\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&=&e^{2(j+i)\pi\sqrt{-1}/k}\int^{s-1}_{s-2}
\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\\
&&\int^{s+1}_{s}\<B(t)N\alpha_p,\;e^{(j+i)t\pi\sqrt{-1}/k}e^{(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&e^{-2(j+i)\pi\sqrt{-1}/k}\int^{s-1}_{s-2}
\<B(t)N\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt.\nonumber\\
&&\int^{s+1}_{s}\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&=&e^{2(j-i)\pi\sqrt{-1}/k}\int^{s-1}_{s-2}
\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\\
&&\int^{2}_{1}\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&e^{2(i-j)\pi\sqrt{-1}/k}\int^{1}_{0}
\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\\
&&\int^{2}_{1}\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&=&e^{2(j+i)\pi\sqrt{-1}/k}\int^{1}_{0}
\<B(t)N\alpha_p,\;e^{(j+i)t\pi\sqrt{-1}/k}e^{(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt.\nonumber\\
&&\int^{2}_{1}\<B(t)N\alpha_p,\;e^{(j+i)t\pi\sqrt{-1}/k}e^{(m+p)t\pi\sqrt{-1}}\alpha_m\>\,dt\nonumber\\
&=&e^{-2(j+i)\pi\sqrt{-1}/k}\int^{1}_{0}
\<B(t)\alpha_p,\;e^{-(j+i)t\pi\sqrt{-1}/k}e^{-(m+p)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\\
&&\int^{2}_{1}\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt\nonumber\\
&=&e^{2(j-i)\pi\sqrt{-1}/k}\int^{1}_{0}
\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt.\nonumber\end{eqnarray}
From these observations, we find that
$$\;I_2+I_3=0, \;{\rm if }\;i+j\neq 0,k$$
and
$$I_1+I_4=0, \;{\rm if }\;i\neq j$$
which yield (\ref{4.10}).
In fact, set $\mu=e^{2(i-j)\pi \sqrt{-1}/k}$; then
$\mu^k=1$. For $k=2q$ with $q\in {\bf N}$, we have
\begin{eqnarray}
I_1&=&(1+\mu+\cdots+\mu^{q-1})\int^1_0\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}
e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt
\nonumber\\&&+(\mu+\cdots+\mu^q)\int^1_0
\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\end{eqnarray}
\begin{eqnarray}
I_4&=&(\mu^{-1}+\cdots+\mu^{-q})\int^1_0\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt
\nonumber\\&&+(1+\mu^{-1}+\cdots+\mu^{-q+1})\int^1_0
\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt.\nonumber\end{eqnarray}
Noting
$$\mu^{-1}+\cdots+\mu^{-q}+1+\mu+\cdots+\mu^{q-1}=\frac{\mu^{-q}(1-\mu^{2q})}{1-\mu}=0$$
and
$$\mu+\cdots+\mu^q+1+\mu^{-1}+\cdots+\mu^{-q+1}=\frac{\mu^{-q+1}(1-\mu^{2q})}{1-\mu}=0,$$
we have $I_1+I_4=0$ provided $i-j\neq 0$. For $k=2q-1$ with $q\in
{\bf N}$, in a similar way we also have $I_1+I_4=0$ provided $i-j\neq
0$. That $I_2+I_3=0$ provided $i+j\neq 0,k$ is proved in the same
way.
For the case $i=j=0$ and the case $i=j=\frac{k}{2}$ if $k$ is even,
from the above observation we have
$$\int^k_0\<B(t)x(t),\;y(t)\>dt=k\int^1_0 \<B(t)x(t),\;y(t)\>dt$$
which yields (\ref{4.11}).
For the cases $i=j\neq 0,\frac{k}{2}$, we have $I_2+I_3=0$ and
\begin{eqnarray} (B_kx,y)&=&I_1+I_4\nonumber\\
&=&k\left(\int^1_0\<B(t)\alpha_p,\;e^{(j-i)t\pi\sqrt{-1}/k}e^{(m-p)t\pi\sqrt{-1}}\alpha_m\>\,dt\right.\nonumber\\
&&\;\;\;\;\;+\left.\int^1_0\<B(t)N\alpha_p,\;e^{(i-j)t\pi\sqrt{-1}/k}e^{(p-m)t\pi\sqrt{-1}}N\alpha_m\>\,dt\right)\nonumber\\
&=&k\left(\int^1_0\<B(t)\xi_x(t),\;\xi_y(t)\>\,dt+\int^1_0\<B(t)N\xi_x(-t),\;N\xi_y(-t)\>\,dt\right),\label{4.15}\end{eqnarray}
where for $x,y\in E^{\omega_k^i}_{L_0}$, $\xi_x$ and $\xi_y$ are
defined as in (\ref{4.9}). So (\ref{4.12}) follows from
(\ref{4.15}). The claim (\ref{4.13}) is proved in the same way. By
direct computation we have (\ref{4.13'}) and (\ref{4.14}); moreover
$$(A_kx,y)=k\left(\int^1_0\<-J\frac {d}{dt}\xi_x(t),\;\xi_y(t)\>\,dt+\int^1_0\<-J\frac{d}{dt}N\xi_x(-t),
\;N\xi_y(-t)\>\,dt\right),\; {\rm if }\; i=j.$$ The orthogonality
statement in Lemma 4.2 follows from (\ref{4.10}) and (\ref{4.13'}).
\hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\bf Proof of Theorem 4.1.} Let $1\le l<\frac{k}{2}$, $l\in
{\bf N}$.
For $x\in
E_{L_0}^{\omega_k^{l}}$,
$$ x(t)=\sum_{j\in{\bf Z}}e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}\left(\begin{array}{cc} \sqrt{-1}\alpha_j\\\alpha_j\end{array}\right)+
e^{-l\pi\sqrt{-1}t/k}e^{-j\pi\sqrt{-1}t}\left(\begin{array}{cc}
-\sqrt{-1}\alpha_j\\\alpha_j\end{array}\right).$$
For $y\in E_{L_0}^{\omega_k^{k-l}}$,
$$ y(t)=\sum_{j\in{\bf Z}}e^{-l\pi\sqrt{-1}t/k}e^{-j\pi\sqrt{-1}t}\left(\begin{array}{cc} \sqrt{-1}\beta_j\\\beta_j\end{array}\right)+
e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}\left(\begin{array}{cc}
-\sqrt{-1}\beta_j\\\beta_j\end{array}\right).$$
Thus for $z=x+y\in E_{L_0}^{\omega_k,l}$ with $x\in
E_{L_0}^{\omega_k^{l}}$ and $y\in E_{L_0}^{\omega_k^{k-l}}$,
\begin{eqnarray} z(t)&=&
\sum_{j\in{\bf Z}}e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}\left(\begin{array}{cc}
\sqrt{-1}\alpha_j\\\alpha_j\end{array}\right)+
e^{-l\pi\sqrt{-1}t/k}e^{-j\pi\sqrt{-1}t}\left(\begin{array}{cc}
-\sqrt{-1}\alpha_j\\\alpha_j\end{array}\right)\nonumber\\
&\;&+e^{-l\pi\sqrt{-1}t/k}e^{-j\pi\sqrt{-1}t}\left(\begin{array}{cc}
\sqrt{-1}\beta_j\\\beta_j\end{array}\right)+
e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}\left(\begin{array}{cc}
-\sqrt{-1}\beta_j\\\beta_j\end{array}\right)\nonumber\\
& =&\xi_x(t)+N\xi_x(-t)+\xi_y(-t)+N\xi_y(t).\nonumber\end{eqnarray}
So for $z=x+y\in E_{L_0}^{\omega_k,l}$ with $x\in
E_{L_0}^{\omega_k^{l}}$ and $y\in E_{L_0}^{\omega_k^{k-l}}$, we have
\begin{eqnarray}
(B_kz,z)&=&(B_kx,x)+(B_ky,y)+(B_kx,y)+(B_ky,x)\nonumber\\
&=&k\left(\int_0^1\<B(t)\xi_x(t),\;\xi_x(t)\>dt+\int_0^1\<B(t)\xi_x(t),\;N\xi_y(t)\>dt+\right.\nonumber\\
&&+\int_0^1\<B(t)N\xi_x(-t),\;N\xi_x(-t)\>dt+\int_0^1\<B(t)N\xi_x(-t),\;\xi_y(-t)\>dt+\nonumber\\
&&+\int_0^1\<B(t)\xi_y(-t),\;\xi_y(-t)\>dt+\int_0^1\<B(t)\xi_y(-t),\;N\xi_x(-t)\>dt+\nonumber\\
&&+\left.\int_0^1\<B(t)N\xi_y(t),\;N\xi_y(t)\>dt+\int_0^1\<B(t)N\xi_y(t),\;\xi_x(t)\>dt\right)\nonumber\\
&=& k\int^1_{-1}\<B(t)(\xi_x(t)+N\xi_y(t)),\; \xi_x(t)+N\xi_y(t)\>dt\nonumber\\
&=& k\int^2_{0}\<B(t)(\xi_x(t)+N\xi_y(t)),\; \xi_x(t)+N\xi_y(t)\>dt,\nonumber\end{eqnarray}
where in the second equality we have used (\ref{4.12}) and
(\ref{4.13}), and the last equality holds since $B(t+2)=B(t)$ and
$u(t+2)=e^{2l\pi\sqrt{-1}/k}u(t)$ for $u(t)=\xi_x(t)+N\xi_y(t)$, so that the
integrand is $2$-periodic.
We note that
\begin{eqnarray} u(t)&=&\xi_x(t)+N\xi_y(t)=\displaystyle\sum_{j\in{\bf Z}}e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}
\left(\begin{array}{cc}
\sqrt{-1}(\alpha_j-\beta_j)\\(\alpha_j+\beta_j)\end{array}\right)\nonumber\\&=&\sum_{j\in{\bf Z}}e^{l\pi\sqrt{-1}t/k}e^{j\pi\sqrt{-1}t}u_j,\;\;
u_j\in {\bf C}^{2n}.\nonumber\end{eqnarray} We set
$$E_{\omega_k^{2l} }=\left\{u\in L^{2}([0,2],{\bf C}^{2n})\,|\,u(t)=e^{l\pi\sqrt{-1}t/k}
\sum_{j\in{\bf Z}}e^{j\pi\sqrt{-1}t}u_j, \;\|u\|^2:=\sum_{j\in{\bf Z}}(1+|j|)|u_j|^2<+\infty\right\}. $$
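Note that, since $\omega_k^{2l}=e^{2l\pi\sqrt{-1}/k}$, every $u\in E_{\omega_k^{2l}}$
satisfies the quasi-periodicity condition $u(t+2)=\omega_k^{2l}u(t)$, which is the
boundary condition underlying the $\omega_k^{2l}$-index of the $2$-periodic path $\gamma^2$.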
We define self-adjoint operators on $E_{\omega_k^{2l} }$ by
$$(A_{\omega_k^{2l} }u,v)=\int^2_0\<-J\dot u(t),\;v(t)\>dt,\;(B_{\omega_k^{2l} }u,v)=\int^2_0\<B(t) u(t),
\;v(t)\>dt$$
and a quadratic form
$$Q_{\omega_k^{2l} }(u)=((A_{\omega_k^{2l} }-B_{\omega_k^{2l} })u,u), \;u\in E_{\omega_k^{2l} }. $$
Here $Q_{\omega }$ is just the quadratic form $f_{\omega}$ defined
on p.~133 of
\cite{Long1}. In order to complete the proof of Theorem 4.1, we
need the following result.
\noindent{\bf Lemma 4.3.} {\it For a symmetric 2-periodic matrix
function $B$ and ${\omega}\in {\bf U}\setminus\{1\}$, there hold
\begin{eqnarray}
&&I(A_{{\omega }},A_{\omega }-B_{\omega })=i_{\omega
}({\gamma}^2),\label{b1}\\
&&m^0(A_{\omega}-B_{\omega })=\nu_{\omega }({\gamma}^2).\label{b2}\end{eqnarray}}
\noindent{\bf Proof.} In fact, (\ref{b1}) follows directly from
Definition 2.3 and Corollary 2.1 of \cite{LZ1} together with Lemma 3.4,
while (\ref{b2}) follows from the Floquet theory. We note also that
(\ref{b1}) is the eventual form of the Galerkin approximation
formula. We can also prove it step by step as in the proof of Theorem
3.1 of \cite{Liu0} by using the saddle point reduction formula in
Theorem 6.1.1 of \cite{Long1}. \hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\it Continuation of the proof of Theorem 4.1.} By Lemma 4.3, we
have
\begin{eqnarray} I(A_{{\omega_k^{2l} }},A_{\omega_k^{2l} }-B_{\omega_k^{2l} })=i_{\omega_k^{2l} }({\gamma}^2),\;\;
m^0(A_{\omega_k^{2l} }-B_{\omega_k^{2l} })=\nu_{\omega_k^{2l} }({\gamma}^2),\;\,1\le l< \frac{k}{2},\;\,l\in {\bf N}.\label{4.16}\end{eqnarray}
By Definition 3.2, we have
\begin{eqnarray} I(A^{\sqrt{-1}},A^{\sqrt{-1}}-B^{\sqrt{-1}})=i_{\sqrt{-1}}^{L_0}({\gamma}),\;\;\;\;m^0(A^{\sqrt{-1}}-B^{\sqrt{-1}})=
\nu_{\sqrt{-1}}^{L_0}({\gamma}).\label{4.17}\end{eqnarray}
By (\ref{c1}) we have
\begin{equation} I(A^1,A^1-B^1)=i_{L_0}({\gamma})+n,\;\;\;\;m^0(A^1-B^1)=\nu_{L_0}({\gamma}),\label{4.18}\end{equation}
and \begin{equation}
I(A_k,A_k-B_k)=i_{L_0}({\gamma}^k)+n,\;\;\;\;m^0(A_k-B_k)=\nu_{L_0}({\gamma}^k)
.\label{4.19}\end{equation} By (\ref{4.11}), (\ref{4.14}), Lemma 3.3, Definition
3.1 and Lemma 4.2, for odd $k$, sum the first equality in
(\ref{4.16}) for $l=1,2,\cdots,\frac{k-1}{2}$ and the first equality
of (\ref{4.18}) correspondingly. By comparing with the first
equality of (\ref{4.19}) we have
\begin{equation}
i_{L_0}({\gamma}^k)=i_{L_0}({\gamma})+\sum_{l=1}^{\frac{k-1}{2}}i_{{\omega}_k^{2l}}({\gamma}^2),\label{4.20}\end{equation}
and for even $k$ we sum the first equality in (\ref{4.16}) over
$l=1,2,\cdots,\frac{k}{2}-1$ together with the first equalities of
(\ref{4.17})-(\ref{4.18}); comparing with the
first equality of (\ref{4.19}) we have
\begin{equation}
i_{L_0}({\gamma}^k)=i_{L_0}({\gamma})+i_{\sqrt{-1}}^{L_0}({\gamma})+\sum_{l=1}^{\frac{k}{2}-1}i_{{\omega}_k^{2l}}({\gamma}^2).\label{4.21}\end{equation}
Similarly we have \begin{eqnarray}
&&\nu_{L_0}({\gamma}^k)=\nu_{L_0}({\gamma})+\sum_{l=1}^{\frac{k-1}{2}}\nu_{{\omega}_k^{2l}}({\gamma}^2),\quad {\rm if \; k\; is\; odd},\label{4.22}\\
&&\nu_{L_0}({\gamma}^k)=\nu_{L_0}({\gamma})+\nu_{\sqrt{-1}}^{L_0}({\gamma})+\sum_{l=1}^{\frac{k}{2}-1}\nu_{{\omega}_k^{2l}}({\gamma}^2),\quad
{\rm if \; k\; is\; even}.\label{4.23}\end{eqnarray}
Then Theorem 4.1 follows from
(\ref{4.20})-(\ref{4.23}) and the fact that ${\omega}_k^{k/2}=\sqrt{-1}$.
\hfill\vrule height0.18cm width0.14cm $\,$
From the formulas in Theorem 4.1, we note that
$$i_{L_0}(\gamma^2)=i_{L_0}(\gamma^1)+i^{L_0}_{\sqrt{-1}}(\gamma^1),\;\;
\nu_{L_0}(\gamma^2)=\nu_{L_0}(\gamma^1)+\nu^{L_0}_{\sqrt{-1}}(\gamma^1).$$
This implies (\ref{1.20}).
\noindent{\bf Definition 4.1.} {\it The mean $L_0$-index of $\gamma$
is defined
by
$$\hat {i}_{L_0}(\gamma)=\lim_{k\to +\infty}\frac{i_{L_0}(\gamma^k)}{k}.
$$}
By the definitions of $\hat{i}_{L_0}({\gamma})$ and $\hat{i}({\gamma}^2)$ (cf.
\cite{Long1}), the following result is obvious.
\noindent{\bf Proposition 4.1.} {\it The mean $L_0$-index of $\gamma$ is
well defined, and
\begin{eqnarray}\hat {i}_{L_0}(\gamma)=\frac
{1}{2\pi}\int^{\pi}_0i_B(e^{\sqrt{-1}\theta})d\theta=\frac{\hat
{i}(\gamma^2)}{2}, \label{4.24}\end{eqnarray} }
where we have written
$i_B(\omega)=i_{\omega}(B)=i_{\omega}(\gamma_B)$.
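Indeed, for odd $k$, Theorem 4.1 gives
$$\frac{i_{L_0}(\gamma^k)}{k}=\frac{i_{L_0}(\gamma^1)}{k}+\frac 1k\sum_{i=1}^{(k-1)/2}i_B(\omega_k^{2i}), $$
and since the points $\omega_k^{2i}=e^{2i\pi\sqrt{-1}/k}$, $1\le i\le \frac{k-1}{2}$,
subdivide the upper semi-circle with gap $\frac{2\pi}{k}$ and $i_B$ is a step
function with finitely many jumps, the sum above is a Riemann sum: as
$k\to+\infty$ the right hand side converges to
$\frac{1}{2\pi}\int^{\pi}_0 i_B(e^{\sqrt{-1}\theta})d\theta$, which gives the first
equality in (\ref{4.24}).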
For $L_1={\bf R}^n\times\{0\}$, we have the $L_1$-index theory
established in \cite{Liu2}. As in Definition 3.2, for
$\omega=e^{\theta\sqrt{-1}},\;\theta\in(0,\pi)$, we define
$$E^{\omega}_{L_1}=\left\{x\in
L^2([0,1],{\bf C}^{2n})\,|\,x(t)=e^{\theta t
J}\displaystyle\sum_{j\in{\bf Z}}e^{j\pi tJ}\left(\begin{array}{cc} a_j\\0\end{array}\right),
\;a_j\in{\bf C}^n,\;
\|x\|^2:=\sum_{j\in{\bf Z}}(1+|j|)|a_j|^2<+\infty\right\}.$$
In
$E^{\omega}_{L_1}$ we define two operators $A^{\omega}_{L_1}$ and
$B^{\omega}_{L_1}$ in the same way as the operators $A^{\omega}$ and
$B^{\omega}$ in Section 3, but with
domain $E^{\omega}_{L_1}$. We define
$$i^{L_1}_{\omega}(B)=I(A^{\omega}_{L_1},A^{\omega}_{L_1}-B^{\omega}_{L_1}),\;\nu^{L_1}_\omega(B)=m^0(A^{\omega}_{L_1}-B^{\omega}_{L_1}). $$
\noindent{\bf Theorem 4.2.} {\it Suppose $\omega_k=e^{\pi
\sqrt{-1}/k}$. For odd $k$ we have \begin{eqnarray} &&
i_{L_1}(\gamma^k)=i_{L_1}(\gamma^1)+\sum_{i=1}^{\frac{k-1}{2}}i_{\omega_k^{2i}}(\gamma^2),\nonumber\\
&&\nu_{L_1}(\gamma^k)=\nu_{L_1}(\gamma^1)+\sum_{i=1}^{\frac{k-1}{2}}\nu_{\omega_k^{2i}}(\gamma^2).\end{eqnarray}
For even $k$, we have
\begin{eqnarray} &&
i_{L_1}(\gamma^k)=i_{L_1}(\gamma^1)+i^{L_1}_{\omega_k^{k/2}}(\gamma^1)+
\sum_{i=1}^{k/2-1}i_{\omega_k^{2i}}(\gamma^2),\;
\nonumber\\
&&
\nu_{L_1}(\gamma^k)=\nu_{L_1}(\gamma^1)+\nu^{L_1}_{\omega_k^{k/2}}(\gamma^1)+
\sum_{i=1}^{k/2-1}\nu_{\omega_k^{2i}}(\gamma^2). \nonumber\end{eqnarray} }
\noindent{\bf Proof.} The proof is almost the same as that of
Theorem 4.1. The only difference is that the matrix $N$
should be replaced by $N_1=-N$. \hfill\vrule height0.18cm width0.14cm $\,$
It is easy to see that
$i(\gamma^2)=i_{L_0}(\gamma^1)+i_{L_1}(\gamma^1)+n$; see Proposition
C of \cite{LZZ} for a proof. We recall that
$\mu_1(\gamma)=i_{L_0}(\gamma)+n$ and
$\mu_2(\gamma)=i_{L_1}(\gamma)+n$ (see (\ref{6.9}) below). So by the
Bott-type formula (see \cite{Long0}) for the $\omega$-index of
$\gamma^2$ at $\omega=-1$, we have
$$i_{-1}(\gamma^2)=i^{L_0}_{\sqrt{-1}}(\gamma^1)+i^{L_1}_{\sqrt{-1}}(\gamma^1), $$
$$\nu_{-1}(\gamma^2)=\nu^{L_0}_{\sqrt{-1}}(\gamma^1)+\nu^{L_1}_{\sqrt{-1}}(\gamma^1). $$
We now give a direct proof of this result.
\noindent{\bf Proposition 4.2.} {\it There hold \begin{eqnarray}
&& i(\gamma^2)=i_{L_0}(\gamma^1)+i_{L_1}(\gamma^1)+n, \label{4.25}\\
&& \nu_1(\gamma^2)=\nu_{L_0}(\gamma^1)+\nu_{L_1}(\gamma^1), \label{4.26}\\
&& i_{-1}(\gamma^2)=i^{L_0}_{\sqrt{-1}}(\gamma^1)+i^{L_1}_{\sqrt{-1}}(\gamma^1),\label{4.27}\\
&&
\nu_{-1}(\gamma^2)=\nu^{L_0}_{\sqrt{-1}}(\gamma^1)+\nu^{L_1}_{\sqrt{-1}}(\gamma^1).
\label{4.28}\end{eqnarray} } \noindent{\bf Proof.} Set $E_1=W^{1/2,2}(S^1,
{\bf C}^{2n})$ with $S^1={\bf R}/(2{\bf Z})$. We note that $E_{\omega}=e^{J\theta
t}E_1$ for $\omega=e^{2\theta\sqrt{-1}}$. For any $z\in E_1$, we
have
$$z(t)=\sum_{j\in{\bf Z}}e^{jt\pi J}c_j=\sum_{j\in{\bf Z}}e^{jt\pi J}\left(\begin{array}{cc} 0\\a_j\end{array}\right)
+\sum_{j\in{\bf Z}}e^{jt\pi J}\left(\begin{array}{cc} b_j\\0\end{array}\right), \;c_j\in
{\bf C}^{2n},\;a_j,\;b_j\in {\bf C}^{n}.$$
So we have $E_{\omega}=E_{L_0}^{\omega}\oplus E_{L_1}^{\omega}$.
For $x\in E_{L_0}^{\omega}$ and $y\in E_{L_1}^{\omega}$, we can
write
\begin{eqnarray} x(t)&=&e^{J\theta t} \sum_{j\in{\bf Z}}e^{jt\pi J}\left(\begin{array}{cc} 0\\a_j\end{array}\right):=e^{J\theta t}x_0(t),\nonumber\\
y(t)&=&e^{J\theta t} \sum_{j\in{\bf Z}}e^{jt\pi J}\left(\begin{array}{cc} b_j\\0\end{array}\right):=e^{J\theta
t}y_0(t).\nonumber\end{eqnarray}
By setting $\tilde {B}(t)=e^{-J\theta t}B(t)e^{J\theta t}$, we get
$$\int^2_0\<B(t)x(t),y(t)\>dt=\int^2_0\<\tilde {B}(t)x_0(t),y_0(t)\>dt. $$
In the cases of $\theta=0,\frac{\pi}{2}$, we have $\tilde {B}(t+2)=\tilde
{B}(t)$ and $\tilde B(1+t)=N\tilde B(1-t)N$. As in (\ref{3.12}), we write
$x_0(t)=\xi(t)+N\xi(-t)$ and $y_0(t)=\eta(t)-N\eta(-t)$ with
$$\xi(t)=\sum_{j\in {\bf Z}} e^{j\pi t\sqrt{-1}}\left(\begin{array}{cc} \sqrt{-1}a_j\\a_j\end{array}\right),\;
\eta(t)=\sum_{j\in {\bf Z}} e^{j\pi t\sqrt{-1}}\left(\begin{array}{cc} b_j\\-\sqrt{-1}b_j\end{array}\right).$$
\begin{eqnarray} &&\int^2_1\<\tilde B(t)x_0(t),\;y_0(t)\>dt=\int^2_1\<\tilde B(t)(\xi(t)+N\xi(-t)),\;\eta(t)-N\eta(-t)\>dt\nonumber\\
&&=
\sum_{j,l\in{\bf Z}}\int^1_0\left\<\tilde B(1+t)\left(e^{j\pi(t+1)\sqrt{-1}}\left(\begin{array}{cc}
\sqrt{-1}a_j\\a_j\end{array}\right)+
e^{-j\pi(t+1)\sqrt{-1}}\left(\begin{array}{cc} -\sqrt{-1}a_j\\a_j\end{array}\right)\right)\right.,\;\nonumber\\
&& \quad\quad\quad\quad\quad\quad
\left. e^{l\pi(t+1)\sqrt{-1}}\left(\begin{array}{cc} b_l\\-\sqrt{-1}b_l
\end{array}\right)+e^{-l\pi(t+1)\sqrt{-1}}\left(\begin{array}{cc} b_l\\\sqrt{-1}b_l\end{array}\right)\right\>dt\nonumber\\
&&=
\sum_{j,l\in{\bf Z}}(-1)^{j+l}\int^1_0\<N\tilde B(1-t)N(\xi(t)+N\xi(-t)),\;\eta(t)-N\eta(-t)\>dt\nonumber\\
&&=
\sum_{j,l\in{\bf Z}}(-1)^{j+l}\int^1_0\<N\tilde B(t)N(\xi(1-t)+N\xi(t-1)),\;\eta(1-t)-N\eta(t-1)\>dt\nonumber\\
&&=
\sum_{j,l\in{\bf Z}}(-1)^{2(j+l)}\int^1_0\<\tilde B(t)(N\xi(-t)+\xi(t)),\;-\eta(t)+N\eta(-t)\>dt\nonumber\\
&&=
-\int^1_0\<\tilde B(t)(\xi(t)+N\xi(-t)),\;\eta(t)-N\eta(-t)\>dt=-\int^1_0\<\tilde
B(t)x_0(t),\;y_0(t)\>dt.\nonumber\end{eqnarray}
This implies that
\begin{equation}\int^2_0\<\tilde B(t)x_0(t),\;y_0(t)\>dt=0. \label{4a}\end{equation}
It is easy to see that
\begin{equation}\int^2_0\<-J\dot x(t),\;y(t)\>dt=0.\label{4b} \end{equation}
By
defining
$$Q_{\omega}(x,y)=\int^2_0\<-J\dot x(t),\;y(t)\>dt-\int^2_0\<B(t)x(t),\;y(t)\>dt, \;x,\;y\in E_{\omega}, $$
(\ref{4a}) and (\ref{4b}) imply that the decomposition
$E_{\omega}=E_{L_0}^{\omega}\oplus E_{L_1}^{\omega}$ is
$Q_{\omega}$-orthogonal in the cases $\theta=0, \frac{\pi}{2}$. So
we get the formulas (\ref{4.25})-(\ref{4.28}) by an argument similar to
that in the proof of Theorem 4.1. \hfill\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0}
\section {Proof of Theorems 1.4 and 1.5}
\noindent{\bf Proof of Theorem 1.4.} By the definition of the
splitting number, we have
$$i_{\omega_0}(\gamma^2)=i(\gamma^2)+\sum_{0\le \theta<\theta_0}S^+_M(e^{\sqrt{-1}\theta})-
\sum_{0< \theta\le \theta_0}S^-_M(e^{\sqrt{-1}\theta}), $$ where
$\omega_0=e^{\sqrt{-1}\theta_0}$. So for $k\in 2{\bf N}-1$, setting
$m=\frac{k-1}{2}$, we have
\begin{eqnarray} && \sum_{i=1}^m i_{\omega_k^{2i}}(\gamma^2)=m
i(\gamma^2)+\sum_{i=1}^m\left(\sum_{0\le \theta<\frac{2i\pi}{k}}S^+_M(e^{\sqrt{-1}\theta})-
\sum_{0< \theta\le
\frac{2i\pi}{k}}S^-_M(e^{\sqrt{-1}\theta})\right)\nonumber\\
&&=m(i(\gamma^2)+S^+_M(1))+\sum_{\theta\in(0,\pi)}\left(\sum_{\frac{k\theta}{2\pi}< i\le m}
S^+_M(e^{\sqrt{-1}\theta})-
\sum_{\frac{k\theta}{2\pi}\le i\le
m}S^-_M(e^{\sqrt{-1}\theta})\right)\nonumber\end{eqnarray}
\begin{eqnarray}&&=m(i(\gamma^2)+S^+_M(1))+\sum_{\theta\in(0,\pi)}\left(\left(m
-\left[\frac{k\theta}{2\pi}\right]\right)S^+_M(e^{\sqrt{-1}\theta})-\left[m+1
-\frac{k\theta}{2\pi}\right]S^-_M(e^{\sqrt{-1}\theta})\right)\nonumber\\
&&=m(i(\gamma^2)+S^+_M(1))\nonumber\\
&&\quad +\sum_{\theta\in(0,\pi)}\left(\left(m
-\left[\frac{k\theta}{2\pi}\right]\right)S^-_M(e^{\sqrt{-1}(2\pi-\theta)})
-\left(m+1
-E\left(\frac{k\theta}{2\pi}\right)\right)S^-_M(e^{\sqrt{-1}\theta})\right)\nonumber\\
&&=m(i(\gamma^2)+S^+_M(1))+\sum_{\theta\in(\pi,2\pi)}\left(m
-\left[\frac{k(2\pi-\theta)}{2\pi}\right]\right)S^-_M(e^{\sqrt{-1}\theta})\nonumber\\
&&\quad-\sum_{\theta\in(0,\pi)}\left(m+1
-E\left(\frac{k\theta}{2\pi}\right)\right)S^-_M(e^{\sqrt{-1}\theta})\nonumber\\
&&= m(i(\gamma^2)+S^+_M(1))+\sum_{\theta\in(0,\pi)\cup
(\pi,2\pi)}\left(-(m+1)+E\left(\frac{k\theta}{2\pi}\right)\right)S^-_M(e^{\sqrt{-1}\theta})\nonumber\\
&&=m(i(\gamma^2)+S^+_M(1))-(m+1)C(M)+\sum_{\theta\in
(0,2\pi)}E\left(\frac{k\theta}{2\pi}\right)S^-_M(e^{\sqrt{-1}\theta})\nonumber\\
&&=m(i(\gamma^2)+S^+_M(1)-C(M))
+\sum_{\theta\in(0,2\pi)}E\left(\frac{k\theta}{2\pi}\right)S_M^-(e^{\sqrt{-1}\theta})-C(M),\nonumber
\end{eqnarray}
where in the fourth equality and sixth equality we have used the
facts that
$$S_M^+(e^{\sqrt{-1}\theta})=S_M^-(e^{\sqrt{-1}(2\pi-\theta)}),$$
$k=2m+1$ and $E(a)+[b]=a+b$ if $a,\; b\in {\bf R}$ and $a+b\in {\bf Z}$,
especially $E(-a)+[a]=0$ for any $a\in {\bf R}$. By using Theorem 4.1 and
$m=\frac {k-1}{2}$ we get (\ref{1.21}).
Similarly we obtain (\ref{1.23}).
\hfill\vrule height0.18cm width0.14cm $\,$
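For the reader's convenience we illustrate the elementary identity used above (a routine check; here we assume, as is standard, that $[a]=\max\{k\in{\bf Z}\,|\,k\le a\}$ and $E(a)=\min\{k\in{\bf Z}\,|\,k\ge a\}$). Write $a=m+f$ with $m\in{\bf Z}$ and $f\in[0,1)$; if $a+b\in{\bf Z}$, then $b=l-f$ for some $l\in{\bf Z}$, and
$$E(a)+[b]=\left\{\begin{array}{ll} m+l, & f=0,\\ (m+1)+(l-1), & f\in(0,1),\end{array}\right.$$
so in both cases $E(a)+[b]=m+l=a+b$. In particular, taking $b=-a$ gives $E(-a)+[a]=0$; for instance $E(-\frac{7}{3})+[\frac{7}{3}]=-2+2=0$.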
\noindent{\bf Corollary 5.1.} {\it For mean $L_0$-index, there holds
$$\hat{i}_{L_0}(\gamma)=\frac{1}{2}\hat{i}({\gamma}^2)
=\frac 12
(i(\gamma^2)+S^+_M(1)-C(M))+\sum_{\theta\in(0,2\pi)}\frac{\theta}{2\pi}S^-_M(e^{\sqrt{-1}\theta}).
$$}
\noindent{\bf Proof.} The above equality follows from Theorem 5.1
and the definition of the mean $L_0$-index
$$\hat {i}_{L_0}(\gamma)=\lim_{k\to \infty}\frac{i_{L_0}(\gamma^k)}{k}.$$
\hfill\vrule height0.18cm width0.14cm $\,$
In \cite{LZ} the following
common index jump theorem of symplectic paths was proved.
\noindent{\bf Proposition 5.1.}(Theorem 4.3 in \cite{LZ}) {\it Let
${\gamma}_j\in \mathcal {P}_{\tau_j}(2n)$ for $j=1,\cdots,q$ be a finite
collection of
symplectic paths. Extend ${\gamma}_j$ to $[0,+\infty)$
by ${\gamma}_j(t+\tau_j)={\gamma}_j(t){\gamma}_j(\tau_j)$ and let
$M_j={\gamma}_j(\tau_j)$, for $j=1,\cdots,q$ and $t>0$.
Suppose \begin{eqnarray} \hat{i}({\gamma}_j)>0, \quad
j=1,\cdots,q.\nonumber\end{eqnarray}
Then there exist infinitely many $(R, m_1, m_2,\cdots,m_q)\in {\bf N}^{q+1}$ such that
(i) $\nu({\gamma}_j, 2m_j\pm 1)=\nu({\gamma}_j)$,
(ii) $i({\gamma}_j, 2m_j-1)+\nu({\gamma}_j,
2m_j-1)=2R-(i({\gamma}_j)+2S_{M_j}^+(1)-\nu({\gamma}_j))$,
(iii)$i({\gamma}_j,2m_j+1)=2R+i({\gamma}_j)$,
\noindent where we have set $i({\gamma}_j, n_j)=i({\gamma}_j, [0,
n_j\tau_j])$, $\nu({\gamma}_j, n_j)=\nu({\gamma}_j, [0, n_j\tau_j])$ for
$n_j\in{\bf N}$.}
\noindent{\bf Proof of Theorem 1.5.} We divide our proof in three
steps.
\noindent{\it Step 1.} Application of Proposition 5.1.
By (\ref{6.11}) and (\ref{6.12}), we have
\begin{equation} \hat{i}({\gamma}_j^2)=2\hat{i}_{L_0}({\gamma}_j)>0.\label{6.13}\end{equation}
So we have
\begin{equation} \hat{i}({\gamma}_j^2)>0, \quad
j=1,\cdots,q,\label{6.14}\end{equation}
where ${\gamma}_j^2$ is the 2-times iteration of ${\gamma}_j$ defined by
(\ref{4.4}).
Hence the symplectic paths ${\gamma}_j^2$, $j=1,2,\cdots,q$, satisfy the condition of
Proposition 5.1,
so there exist infinitely many $(R, m_1, m_2,\cdots,m_q)\in {\bf N}^{q+1}$
such that
\begin{eqnarray} \nu({\gamma}_j^2, 2m_j\pm 1)&=&\nu({\gamma}_j^2),\label{6.15}\\
i({\gamma}_j^2, 2m_j-1)+\nu({\gamma}_j^2,
2 m_j-1)
&=&2R-(i({\gamma}_j^2)+2S_{M_j}^+(1)-\nu({\gamma}_j^2)),\label{6.16}\\
i({\gamma}_j^2,2m_j+1)&=&2R+i({\gamma}_j^2).\label{6.17}\end{eqnarray}
\noindent{\it Step 2.} Verification of (i).
By Theorems 4.1 and 4.2, we have
\begin{eqnarray}
\nu_{L_0}({\gamma}_j,2m_j\pm 1)=\nu_{L_0}({\gamma}_j)+\frac{\nu({\gamma}_j^2,2m_j\pm 1)-\nu({\gamma}^2_j)}{2},\label{6.20}\\
\nu_{L_1}({\gamma}_j,2m_j\pm 1)=\nu_{L_1}({\gamma}_j)+\frac{\nu({\gamma}_j^2,2m_j\pm 1)-\nu({\gamma}^2_j)}{2}. \label{6.20'}\end{eqnarray}
Hence (i) follows from (\ref{6.15}) and (\ref{6.20}).
\noindent{\it Step 3.} Verifications of (ii) and (iii).
By Theorems 4.1 and 4.2, we have
\begin{eqnarray}
&& i_{L_0}({\gamma}^m)-i_{L_1}({\gamma}^m)=i_{L_0}({\gamma})-i_{L_1}({\gamma}),
\quad \forall m\in 2{\bf N}-1,\label{6.21}\\
&&i_{L_0}({\gamma}^m)-i_{L_1}({\gamma}^m)=i_{L_0}({\gamma}^2)-i_{L_1}({\gamma}^2), \quad
\forall m\in 2{\bf N}.\label{6.22}\end{eqnarray}
By (\ref{6.10}), (\ref{6.9}) and (\ref{6.21}) we have \begin{equation}
2i_{L_0}({\gamma}_j,2m_j\pm 1)=i({\gamma}_j^2,2m_j\pm
1)-n+i_{L_0}({\gamma}_j)-i_{L_1}({\gamma}_j).\label{6.23}\end{equation}
By (\ref{6.15}),
(\ref{6.16}) and (\ref{6.23}) we have
\begin{equation}
2i_{L_0}({\gamma}_j,2m_j-1)=2R-(i({\gamma}_j^2)-2S_{M_j}^+(1)+n-i_{L_0}({\gamma}_j)+i_{L_1}({\gamma}_j)).\label{6.24}\end{equation}
So by (\ref{6.10}) we have
\begin{equation}
i_{L_0}({\gamma}_j,2m_j-1)=R-(i_{L_1}({\gamma}_j)+n+S_{M_j}^+(1)).\label{6.25}\end{equation}
Together with (i), this yields (ii).
By (\ref{6.17}) and (\ref{6.23}) we have
\begin{equation}
2i_{L_0}({\gamma}_j,2m_j+1)=2R+i({\gamma}_j^2)-n+i_{L_0}({\gamma}_j)-i_{L_1}({\gamma}_j).\label{6.26}\end{equation}
By (\ref{6.10}) and (\ref{6.26}) we have \begin{equation}
i_{L_0}({\gamma}_j,2m_j+1)=R+i_{L_0}({\gamma}_j).\label{6.27}\end{equation} Hence (iii)
holds and the proof of Theorem 1.5 is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\bf Remark 5.1.} From (\ref{6.12}) and (iii) of
Theorem 1.5, it is easy to see that for any $\mathcal {R}>0$, among
the infinitely many vectors $(R,m_1,m_2,\cdots,m_q)\in {\bf N}^{q+1}$ in
Theorem 1.5, there exists one vector such that its first component
$R$ satisfies $R>\mathcal {R}$.
\setcounter{equation}{0}
\section{Variational set up}
In this section, we briefly recall the variational set up and some
corresponding results proved in \cite{LZZ}. Based on these results
we obtain an injection map in Lemma 6.3 below which is basic in
the proofs of Theorems 1.1 and 1.2.
For ${\Sigma}\in \mathcal{H}_b^{s,c}(2n)$, let $j_{\Sigma}: {\bf R}^{2n} \rightarrow[0,+\infty)$ be the
gauge function of ${\Sigma}$ defined by
\begin{equation}
j_{{\Sigma}}(0)=0,\quad {\rm and} \quad j_{\Sigma}(x)=\inf\{\lambda >0\mid
\frac{x}{\lambda}\in C\}, \quad \forall x \in
{\bf R}^{2n}\setminus\{0\},\label{7.1}
\end{equation}
where $C$ is the domain enclosed by ${\Sigma}$.
Define
\begin{equation} H_\alpha(x)=(j_{\Sigma}(x))^\alpha,\;\alpha>1,\quad
H_{\Sigma}(x)=H_2(x),\; \forall x \in
{\bf R}^{2n}.\label{7.2}
\end{equation}
Then $H_{\Sigma} \in C^2 ({\bf R}^{2n}\backslash \{0\},{\bf R})\cap
C^{1,1}({\bf R}^{2n},{\bf R})$. Its Fenchel conjugate (cf.\cite{EH},\cite{Ek}) is the function
$H_{\Sigma}^*$ defined by
\begin{equation}
H_{\Sigma}^*(y)=\max\{(x\cdot y
-H_{\Sigma}(x))|\, x\in {\bf R}^{2n}\}.\label{7.3}\end{equation}
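As a simple illustration of (\ref{7.2}) and (\ref{7.3}) (an example added for orientation only, not used in the proofs below): if ${\Sigma}$ is the round sphere of radius $r$ in ${\bf R}^{2n}$, then $j_{\Sigma}(x)=\frac{|x|}{r}$ and $H_{\Sigma}(x)=\frac{|x|^2}{r^2}$, and the maximum in (\ref{7.3}) is attained at $x=\frac{r^2}{2}y$, so that
$$H_{\Sigma}^*(y)=\max_{x\in{\bf R}^{2n}}\left(x\cdot y-\frac{|x|^2}{r^2}\right)
=\frac{r^2}{2}|y|^2-\frac{r^2}{4}|y|^2=\frac{r^2}{4}|y|^2.$$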
We consider the following fixed energy problem
\begin{eqnarray}
\dot{x}(t) &=& JH_{\Sigma}'(x(t)), \label{7.4}\\ H_{\Sigma}(x(t)) &=& 1, \label{7.5}\\
x(-t) &=& Nx(t), \label{7.6}\\ x(\tau+t) &=& x(t),\quad \forall\,
t\in{\bf R}. \label{7.7} \end{eqnarray}
Denote by
$\mathcal{J}_b({\Sigma},2)$ (i.e., $\mathcal{J}_b({\Sigma},\alpha)$ with $\alpha=2$ in
(\ref{7.2})) the set of all solutions $(\tau,x)$ of problem
(\ref{7.4})-(\ref{7.7}) and by $\tilde{\mathcal{J}}_b({\Sigma},2)$ the
set of all geometrically distinct solutions of
(\ref{7.4})-(\ref{7.7}). By Remark 1.2 and the discussion in \cite{LZZ},
the elements of $\mathcal{J}_b({\Sigma})$ and $\mathcal{J}_b({\Sigma},2)$ are in
one-to-one correspondence. So we have
$^\#\td{\mathcal{J}}_b({\Sigma})={}^\#\td{\mathcal{J}}_b({\Sigma},2)$.
For $S^1={\bf R} / {\bf Z}$, as in \cite{LZZ} we define the Hilbert space $E$
by
\begin{eqnarray} E = \left\{ x\in W^{1,2}(S^1,{\bf R}^{2n})\left| x(-t)=Nx(t),\quad {\rm for\; all} \;
t\in {\bf R} \;\;{\rm and} \;\; \int_0^1x(t)dt=0 \right.\right\}. \label{7.8} \end{eqnarray}
The inner product on $E$ is given by
\begin{equation}
(x,y)=\int_0^1 \< \dot{x}(t), \dot{y}(t) \> dt.\label{7.9}
\end{equation}
The $C^{1,1}$ Hilbert manifold $M_{\Sigma} \subset E$ associated to ${\Sigma}$ is
defined by
\begin{equation} M_{\Sigma}=\left\{ x\in E \left| \int_0^1H^*_{\Sigma}(-J\dot{x}(t))dt=1\; {\rm and} \;
\int_0^1\< J\dot{x}(t), x(t)\>dt <0\right.\right\}. \label{7.10}\end{equation}
Let ${\bf Z}_2=\{-id, id\}$ be the usual ${\bf Z}_2$ group. We define the ${\bf Z}_2$-action on $E$ by
$$-id(x)=-x,\quad id(x)=x, \qquad \forall x\in E.$$
Since $H^*_{\Sigma}$ is even, $M_{\Sigma}$ is symmetric with respect to the origin, i.e., ${\bf Z}_2$
invariant. Moreover, $M_{\Sigma}$ is a paracompact ${\bf Z}_2$-space. We
define
\begin{equation}
\Phi(x)=\frac{1}{2}\int_0^1\< J\dot{x}(t), x(t)\>dt, \label{7.11}
\end{equation}
then $\Phi$ is a ${\bf Z}_2$ invariant function and $\Phi\in C^\infty (E,{\bf R})$. We denote by $\Phi_{\Sigma}$
the restriction of $\Phi$ to $M_{\Sigma}$; we note that $\Phi$ and
$\Phi_{{\Sigma}}$ here
are the functionals $A$ and $A_{{\Sigma}}$ in \cite{LZZ} respectively.
Suppose $z\in M_{\Sigma}$ is a critical point of $\Phi_{\Sigma}$. By Lemma 7.1
of \cite{LZZ} there is a $c_1(z)\in \{0\}\times {\bf R}^n$ such that
$x(z)(t)=|\Phi_{\Sigma}(z)|^{-1}(z(|\Phi_{\Sigma}(z)|t)+c_1(z))$ is a
$\tau$-periodic solution of the fixed energy problem
(\ref{1.11})-(\ref{1.12}), i.e., $(\tau ,x)\in \mathcal{J}_b({\Sigma},2)$
with $\tau=|\Phi_{\Sigma}(z)|^{-1}$.
Following the ideas of Ekeland and Hofer in \cite{EH}, Long, Zhu
and the second author of this paper in \cite{LZZ} proved the
following result (see Corollary 7.10 of \cite{LZZ}).
\noindent{\bf Lemma 6.1.} {\it If
$^\#\td{\mathcal{J}}_b({\Sigma})<+\infty$, then for each $k\in {\bf N}$, there
exists a critical point $z_k\in M_{\Sigma}$ of $\Phi_{\Sigma}$ such that the
sequence $\{\Phi_{{\Sigma}}(z_k)\}$ increases strictly to zero as $k$
goes to $+\infty$ and there holds
$$m^{-}(z_k)\le k-1\le m^-(z_k)+m^0(z_k),$$
where $m^{-}(z_k)$ and $m^0(z_k)$ are Morse index and nullity of the
formal Hessian $Q_{z_k}$ of $\Phi_{\Sigma}$ at $z_k$ defined by (7.36) of
\cite{LZZ} as follows:
\begin{equation} Q_{z_k}(h)=\frac {1}{2}\int_0^1\langle
J\dot{h}(t),h(t)\rangle dt-\frac {1}{2}\Phi(z_k)\int_0^1\langle
(H^{*}_{\Sigma})''(-J\dot {z}_k(t))J\dot{h}(t),J\dot{h}(t)\rangle dt,\;\;h\in
T_{z_k}M_{\Sigma}.\label{7.14}\end{equation}}
We remind that $L_0=\{0\}\times{\bf R}^n$ and $L_1={\bf R}^n\times\{0\}\subset
{\bf R}^{2n}$. The following two Maslov-type indices are defined in
\cite{LZZ}.
\noindent{\bf Definition 6.1.} {\it For
$M=\left(\begin{array}{cc}A&B\\C&D\end{array}\right)\in {\rm Sp}(2n)$, we
define
\begin{equation} \nu_1(M)=\dim \ker B,\quad
{\rm and}\quad
\nu_2(M)=\dim \ker
C.\label{6.1}\end{equation}
For $\Psi\in C([a,b],{\rm Sp}(2n))$, we define
\begin{equation} \nu_1(\Psi)=\nu_1 (\Psi(b)),\quad
\quad
\nu_2(\Psi)=\nu_2 (\Psi(b))\label{6.2}\end{equation}
and
\begin{equation} \mu_1(\Psi,[a,b])=i_{{CLM}_{{\bf R}^{2n}}}(L_0, \Psi L_0,
[a,b]),\quad \mu_2(\Psi,[a,b])=i_{{CLM}_{{\bf R}^{2n}}}(L_1, \Psi L_1,
[a,b]),\label{6.3}\end{equation}
where the Maslov index $i_{{CLM}_{{\bf R}^{2n}}}$ for Lagrangian subspace
paths is defined in \cite{CLM}. We will omit the interval $[a,b]$ in
the index notations when there is no confusion.}
By Proposition
C of \cite{LZZ}, we have
\begin{eqnarray} \mu_1({\gamma})+\mu_2({\gamma})=i({\gamma}^2)+n,\quad
\nu_1({\gamma})+\nu_2({\gamma})=\nu({\gamma}^2),\label{6.10}\end{eqnarray} where ${\gamma}^2$ is the
2-times iteration of ${\gamma}$ defined by (\ref{4.4}).
For convenience in the further proofs of Theorems 1.1 and 1.2 in
this paper, we firstly give a relationship between the Maslov-type
indices $\mu_1$, $\mu_2$ and $i_{L_0}$, $i_{L_1}$.
\noindent{\bf Proposition
6.1.} {\it For any ${\gamma}\in \mathcal{P}_\tau(2n)$, there hold \begin{eqnarray}
&&\nu_1({\gamma})=\nu_{L_0}({\gamma}),\quad
\nu_2({\gamma})=\nu_{L_1}({\gamma}),\label{6.8}\\
&&\mu_1({\gamma})=i_{L_0}({\gamma})+n, \quad
\mu_2({\gamma})=i_{L_1}({\gamma})+n.\label{6.9}\end{eqnarray}
}
From (\ref{4.24}) and (\ref{6.10})-(\ref{6.9}), we have \begin{eqnarray}
\hat{\mu}_1({\gamma})=\hat{\mu}_2({\gamma})=\hat{i}_{L_0}({\gamma})=\hat{i}_{L_1}({\gamma})=\frac{1}{2}\hat{i}({\gamma}^2),\label{6.11}\end{eqnarray}
where $\hat{\mu}_j({\gamma})$ is the $\mu_j$-mean index for $j=1,2$
defined in \cite{LZZ}.
\noindent{\bf Proof.} (\ref{6.8}) follows from the definitions of
$\nu_{L_0}$ and $\nu_{L_1}$ in Definitions 2.1 and 2.4 and the
definitions of $\nu_1$ and $\nu_2$ in Definition 6.1.
(\ref{6.9}) follows from (\ref{c1}) and Theorem 2.4 of
\cite{Zhang2}. We note that for $x,y\in W_1$, there hold
$$(Ax,y)=2(A^1x,y),\;\;(Bx,y)=2(B^1x,y),$$
where $W_1$, $A,\;B$ were defined in \cite{Zhang2} before Theorem
2.4.
\hfill\vrule height0.18cm width0.14cm $\,$
By Proposition 5.1,
Lemma 8.3 of \cite{LZZ} and Lemma 6.1, we have the following
result which is also basic in the proof of Theorems 1.1 and 1.2.
\noindent{\bf Lemma 6.2.} {\it If
$^\#\tilde{\mathcal{J}}_b({\Sigma})<+\infty$, there is a sequence
$\{c_k\}_{k\in {\bf N}}$, such that \begin{eqnarray}
-\infty<c_1<c_2<\cdots<c_k<c_{k+1}<\cdots<0,\label{7.15}\\
c_k\rightarrow 0\quad {\rm as}\;k\rightarrow +\infty.\label{7.16}\end{eqnarray}
For any $k\in {\bf N}$, there exists a brake orbit $(\tau, x)\in
\mathcal{J}_b({\Sigma},2)$ with $\tau$ being the minimal period of $x$
and $m\in {\bf N}$ satisfying $m\tau=(-c_k)^{-1}$ such that
for
\begin{equation}
z(x)(t)=(m\tau)^{-1}x(m\tau t)-\frac{1}{(m\tau)^2}\int_0^{m\tau}
x(s)ds, \quad t\in S^1 , \label{7.17}\end{equation}
$z(x)\in M_{\Sigma}$ is a critical point of $\Phi_{\Sigma}$ with
$\Phi_{\Sigma}(z(x))=c_k$ and \begin{equation} i_{L_0}(x,m)\le k-1\le
i_{L_0}(x,m)+\nu_{L_0}(x,m)-1,\label{z}\label{7.18}\end{equation}
where we set $(i_{L_0}(x,m),\nu_{L_0}(x,m))=(i_{L_0}({\gamma}_x,m),\nu_{L_0}({\gamma}_x,m))$, with ${\gamma}_x$ the associated
symplectic path of $(\tau,x)$. }
\noindent{\bf
Definition 6.2.} {\it We call $(\tau,x)\in\mathcal{J}_b({\Sigma},2)$ with
minimal period $\tau$ {\it infinitely variational visible} if there
are infinitely many $m\in {\bf N}$ such that $(\tau,x)$ and $m$ satisfy
conclusions in Lemma 6.2. We denote by $\mathcal
{V}_{\infty,b}({\Sigma},2)$ the subset of $\td{\mathcal{J}}_b({\Sigma},2)$
consisting of $[(\tau,x)]$ in which there is an infinitely
variational visible representative.}
As in \cite{LZ}, we have the following injective map lemma.
\noindent {\bf Lemma 6.3.} {\it Suppose
$^\#\tilde{\mathcal{J}}_b({\Sigma})<+\infty$. Then there exist an integer
$K\ge 0$ and an injection map $\phi: {\bf N}+K\rightarrow
\mathcal{V}_{\infty,b}({\Sigma},2)\times {\bf N}$ such that
(i) For any $k\in {\bf N}+K$, $[(\tau,x)]\in
\mathcal{V}_{\infty,b}({\Sigma},2)$ and $m\in {\bf N}$ satisfying
$\phi(k)=([(\tau,x)],m)$, there holds
$$i_{L_0}(x,m)\le k-1\le i_{L_0}(x,m)+\nu_{L_0}(x,m)-1,$$
where $x$ has minimal period $\tau$.
(ii) For any $k_j\in {\bf N}+K$, $k_1<k_2$, $(\tau_j,x_j)\in \mathcal
{J}_b({\Sigma},2)$ satisfying $\phi(k_j)=([(\tau_j,x_j)],m_j)$ with
$j=1,2$ and $[(\tau_1,x_1)]=[(\tau_2,x_2)]$, there holds
$$m_1<m_2.$$}
\noindent{\bf Proof.} Since $^\#\tilde{\mathcal{J}}_b({\Sigma})<+\infty$,
there is an integer $K\ge 0$ such that all critical values $c_{k+K}$
with $k\in {\bf N}$ come from iterations of elements in
$\mathcal{V}_{\infty,b}({\Sigma},2)$. Together with Lemma 6.2, for each
$k\in {\bf N}$, there is a $(\tau,x)\in \mathcal{J}_{b}({\Sigma},2)$ with
minimal period $\tau$ and $m\in {\bf N}$ such that (\ref{7.17}) and
(\ref{7.18}) hold for $k+K$ instead of $k$. So we define a map
$\phi:{\bf N}+K\rightarrow \mathcal {V}_{\infty,b}({\Sigma},2)\times {\bf N}$ by
$\phi(k+K)=([(\tau,x)],m)$.
For any $k_1<k_2\in {\bf N}$, suppose $\phi(k_j)=([(\tau_j,x_j)],m_j)$ for
$j=1,2$. Write $[(\tau_1,x_1)]=[(\tau_2,x_2)]=[(\tau,x)]$ with
$\tau$ being the minimal period of $x$, then by Lemma 6.2 we have
\begin{equation} m_j\tau=(-c_{k_j+K})^{-1},\quad j=1,2.\label{7.19}\end{equation}
Since $k_1<k_2$ and $c_k$ increases strictly to 0 as $k\rightarrow
+\infty$, we have
\begin{equation} m_1<m_2.\label{7.20}\end{equation}
So the map $\phi$ is injective and (ii) is proved.
The proof of Lemma 6.3 is complete.
\hfill\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0}
\section{Proof of Theorem 1.1}
We first prove Lemma 1.1.
\noindent{\bf Proof of Lemma 1.1.} We set
${\gamma}(\frac{\tau}{2})=\left(\begin{array}{cc}A&B\\C&D\end{array}\right)$
in square block form.
Since $(\tau ,x)\in
\mathcal{J}_b({\Sigma},2)$, we have
\begin{equation} \dot{x}(t)=JH'_{\Sigma}(x(t)),\quad t\in {\bf R}.\label{8.4}\end{equation}
By the definition of $H_{\Sigma}$ in (\ref{7.2}),
$H_{\Sigma}$ is 2-homogeneous and $H'_{\Sigma}$ is 1-homogeneous. So we have
\begin{equation} \dot{x}(t)=JH_{\Sigma}''(x(t))x(t),\quad t\in {\bf R}.\label{8.5}\end{equation}
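Indeed, (\ref{8.5}) follows from (\ref{8.4}) by Euler's identity for homogeneous functions: since $H'_{\Sigma}$ is positively $1$-homogeneous, differentiating $H'_{\Sigma}({\lambda} x)={\lambda} H'_{\Sigma}(x)$ with respect to ${\lambda}$ at ${\lambda}=1$ yields
$$H_{\Sigma}''(x)x=H_{\Sigma}'(x),\qquad x\in{\bf R}^{2n}\setminus\{0\}.$$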
Differentiating (\ref{8.4}) we obtain
\begin{equation} \ddot{x}(t)=JH_{\Sigma}''(x(t))\dot{x}(t),\quad t\in {\bf R}.\label{8.6}\end{equation}
Since ${\gamma}$ is the associated symplectic path of $(\tau,x)$,
${\gamma}(t)$ is the solution of the problem
\begin{eqnarray}
\dot{{\gamma}}(t) &=& JH_{\Sigma}''(x(t)){\gamma}(t), \label{8.7}\\
{\gamma}(0) &=& I_{2n}. \label{8.8} \end{eqnarray}
So we have
\begin{equation} x(t)={\gamma}(t)x(0),\quad \dot{x}(t)={\gamma}(t)\dot{x}(0), \qquad t\in
{\bf R}.\label{8.9}\end{equation}
Denote by $x(t)=(p(t),q(t))\in {\bf R}^n\times{\bf R}^n$. Since
\begin{equation} x(-t)=Nx(t),\quad x(t+\tau)=x(t),\qquad t\in {\bf R}, \label{8.10}\end{equation}
we have
\begin{eqnarray} p(0)=0=p(\frac{\tau}{2}), \;q(0)\neq 0,\label{8.11}\\
\dot{p}(0)\neq 0,\; \dot{q}(0)=0=\dot{q}(\frac{\tau}{2}).\label{8.12}\end{eqnarray}
Since $(\tau,x)$ is symmetric, by (\ref{8.9}) we have
\begin{eqnarray}
\left(\begin{array}{c}0\\-q(0)\end{array}\right)&=&\left(\begin{array}{c}0\\q(\frac{\tau}{2})\end{array}\right)=
\left(\begin{array}{c}p(\frac{\tau}{2})\\q(\frac{\tau}{2})\end{array}\right)=
\left(\begin{array}{cc}A&B\\C&D\end{array}\right)\left(\begin{array}{c}p(0)\\q(0)\end{array}\right)\nonumber\\
&=&\left(\begin{array}{cc}A&B\\C&D\end{array}\right)\left(\begin{array}{c}0\\q(0)\end{array}\right)
=\left(\begin{array}{c}Bq(0)\\Dq(0)\end{array}\right),\\
\left(\begin{array}{c}-\dot{p}(0)\\0\end{array}\right)&=&\left(\begin{array}{c}\dot{p}(\frac{\tau}{2})\\0\end{array}\right)=
\left(\begin{array}{c}\dot{p}(\frac{\tau}{2})\\\dot{q}(\frac{\tau}{2})\end{array}\right)=
\left(\begin{array}{cc}A&B\\C&D\end{array}\right)\left(\begin{array}{c}\dot{p}(0)\\
\dot{q}(0)\end{array}\right)\nonumber\\
&=&\left(\begin{array}{cc}A&B\\C&D\end{array}\right)\left(\begin{array}{c}\dot{p}(0)\\0\end{array}\right)
=\left(\begin{array}{c}A\dot{p}(0)\\C\dot{p}(0)\end{array}\right). \end{eqnarray}
So we have
\begin{eqnarray} &&B q(0)=0,\quad C\dot{p}(0)=0,\label{8.15}\\
&&D q(0)=-q(0),\quad A\dot{p}(0)=-\dot{p}(0). \label{8.16}\end{eqnarray}
Since
\begin{equation} \langle Jx(0),\dot{x}(0)\rangle= \langle
Jx(0),JH'_{\Sigma}(x(0))\rangle=\langle
x(0),H'_{\Sigma}(x(0))\rangle=2H_{\Sigma}(x(0))=2,\label{8.17}\end{equation}
where we have used the fact that $(\tau,x)\in \mathcal{J}_b({\Sigma},2)$
and $H_{\Sigma}$ is 2-homogeneous, we have
\begin{equation} \langle q(0), \dot{p}(0)\rangle=-\langle
Jx(0),\dot{x}(0)\rangle=-2.\label{8.18}\end{equation}
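Here the first equality in (\ref{8.18}) can be checked directly: with the standard convention $J=\left(\begin{array}{cc}0&-I_n\\I_n&0\end{array}\right)$ and $x=(p,q)$, using $p(0)=0$ and $\dot{q}(0)=0$ from (\ref{8.11}) and (\ref{8.12}), we have
$$\langle Jx(0),\dot{x}(0)\rangle
=-\langle q(0),\dot{p}(0)\rangle+\langle p(0),\dot{q}(0)\rangle
=-\langle q(0),\dot{p}(0)\rangle.$$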
Denote by $\xi=-\frac{1}{\sqrt{2}}\dot{p}(0)$ and $\eta=\frac{1}{\sqrt{2}}q(0)$.
We have
\begin{equation} \xi^T\eta=1, \label{8.19}\end{equation}
and
\begin{eqnarray} &&B\eta=0,\quad C\xi=0,\label{8.20}\\
&&D\eta=-\eta,\quad A\xi=-\xi, \label{8.21}\end{eqnarray}
where we denote by $\xi^T$ the transpose of $\xi$.
\noindent{\it Claim.} There exist two $n\times (n-1)$ matrices $F$
and $G$ such that ${\rm det}(\xi F)>0$ and the matrix $\left(\begin{array}{cc} (\xi
F)
&0\\0&(\eta G)\end{array}\right)\in {\rm Sp}(2n)$, where $(\xi
F)$ and $(\eta G)$ are $n\times n$ matrices whose first columns are
$\xi$ and $\eta$, and the other $n-1$ columns are the matrices $F$ and $G$ respectively.
\noindent{\it Proof of the claim.} We divide the proof into two
cases.
\noindent {\it Case 1.} $\xi={\lambda} \eta$ for some ${\lambda}\in
{\bf R}\setminus\{0\}$. Denote by ${\rm span} \{e_2,e_3,\cdots,e_n\}$ the
orthogonal complement of ${\rm span}\{\xi\}$ in ${\bf R}^n$ in the standard
inner product sense, where $e_2, e_3,\cdots,e_n$ are unit and mutually
orthogonal. Define the $n\times(n-1)$ matrix
$\td{F}=(e_2\;e_3\;\cdots\;e_n)$ whose columns are
$e_2,e_3,\cdots,e_n$. If ${\rm det} (\xi \tilde{F})>0$, we define
$F=G=(e_2\;e_3\;\cdots\;e_n)$. Otherwise we define
$F=G=\left((-e_2)\;e_3\;e_4\;\cdots\;e_n\right)$. By direct
computation we always have ${\rm det}(\xi F)>0$ and the matrix
$\left(\begin{array}{cc}(\xi F)
&0\\0&(\eta G)\end{array}\right)\in {\rm Sp}(2n)$.
\noindent {\it Case 2.} $\xi\neq {\lambda} \eta$ for all ${\lambda}\in
{\bf R}\setminus\{0\}$, i.e., $\dim {\rm span}\{\xi,\eta\}=2$. Denote by
${\rm span} \{e_3,\cdots,e_n\}$ the orthogonal complement of
${\rm span}\{\xi,\eta\}$ in ${\bf R}^n$ in the standard inner product sense,
where $ e_3,\cdots,e_n$ are unit and mutually orthogonal. Denote by $
{\rm span}\{\xi,\eta\}={\rm span}\{e_1,e_2\}$ where $e_1$ and $e_2$ are unit
and orthogonal and ${\lambda} e_1=\xi$ for some ${\lambda}\in {\bf R}$. Since
$\xi^T\eta=1$ we have $\eta={\lambda}^{-1}e_1+re_2$ for some $r\in
{\bf R}\setminus\{0\}$. Then we define the matrix $\td{F}=(({\lambda}
e_1-r^{-1}e_2)\; e_3\;\cdots\;e_n)$ whose columns are $ {\lambda}
e_1-r^{-1}e_2,\;e_3,\cdots,e_n$. If ${\rm det}(\xi\,\td{F})>0$, we define
$F=(({\lambda} e_1-r^{-1}e_2)\; e_3\;e_4\;\cdots\;e_n)$ and $G=(
(-re_2)\; e_3\;e_4\;\cdots\;e_n)$. Otherwise we define $F=(({\lambda}
e_1-r^{-1}e_2)\; e_3\;\cdots\;(-e_n))$ and $G=(-re_2\;
e_3\;e_4\;\cdots\;(-e_n))$. By direct computation we always have
${\rm det}(\xi F)>0$ and the matrix $\left(\begin{array}{cc}(\xi F)
&0\\0&(\eta G)\end{array}\right)\in {\rm Sp}(2n)$.
By the discussion in Cases 1 and 2, the claim is proved.
By this claim, there exist two $n\times (n-1)$ matrices $F$
and $G$ such that ${\rm det}(\xi F)>0$ and the matrix $\left(\begin{array}{cc}(\xi
F)
&0\\0&(\eta G)\end{array}\right)\in {\rm Sp}(2n).$ So we have
\begin{equation} (\eta G)=((\xi F)^T)^{-1}.\label{8.22}\end{equation}
Applying (\ref{8.20})-(\ref{8.22}), by direct
computation we have
\begin{eqnarray} && \left(\begin{array}{cc}(\eta G)^T
&0\\0&(\xi F)^T\end{array}\right)\left(\begin{array}{cc}A
&B\\C&D\end{array}\right)\left(\begin{array}{cc}(\xi F)
&0\\0&(\eta G)\end{array}\right)\nonumber\\
&=& \left(\begin{array}{cccc}-1
&\eta^TAF&0&\eta^TBG\\0&G^TAF&0&G^TBG\\0&\xi^TCF&-1&\xi^TDG\\
0&F^TCF&0&F^TDG\end{array}\right).\label{8.23}\end{eqnarray}
Since the above matrix is still a symplectic matrix, by Lemma 1.1.2
of \cite{Long1}, we have that both $\left(\begin{array}{cc}-1
&0\\(\eta^TAF)^T&(AF)^TG\end{array}\right)\left(\begin{array}{cc}0
&\xi^TCF\\0&F^TCF\end{array}\right)$ and
$ \left(\begin{array}{cc}0
&0\\(\eta^TBG)^T&G^TB^TG
\end{array}\right)\left(\begin{array}{cc}-1&\xi^TDG
\\ 0&F^TDG\end{array}\right)$
are symmetric and
\begin{eqnarray} \left(\begin{array}{cc}-1
&0\\(\eta^TAF)^T&(AF)^TG\end{array}\right)\left(\begin{array}{cc}-1
&\xi^TDG\\0&F^TDG\end{array}\right)
-\left(\begin{array}{cc}0
&0\\(\xi^T(CF))^T&(CF)^TF\end{array}\right)\left(\begin{array}{cc}0
&\eta^TBG\\0&G^TBG\end{array}\right)=I_n.\nonumber\end{eqnarray}
So by the above three facts and direct computation we have \begin{eqnarray}
\eta^TAF=0, \quad \eta^TBG=0,\quad
\xi^TCF=0,\quad \xi^TDG=0.\label{8.25}\end{eqnarray}
Set $\td{M}=\left(\begin{array}{cc}G^TAF
&G^TBG\\F^TCF&F^TDG\end{array}\right)$. By
(\ref{8.23}) and (\ref{8.25}), there hold
$\td{M}\in {\rm Sp}(2n-2)$ and
\begin{eqnarray} \left(\begin{array}{cc}(\eta G)^T
&0\\0&(\xi F)^T\end{array}\right)\left(\begin{array}{cc}A
&B\\C&D\end{array}\right)\left(\begin{array}{cc}(\xi F)
&0\\0&(\eta G)\end{array}\right)
= (-I_2)\diamond \tilde{M}.\label{8.26}\end{eqnarray}
Since ${\rm det}(\xi F)>0$, there is a continuous matrix path $\psi(s)$,
$s\in [0,1]$, joining $(\xi F)$ and $I_n$ such that $\psi(0)=I_n$,
$\psi(1)=(\xi F)$ and ${\rm det}(\psi(s))>0$ for all $s\in [0,1]$.
For $s\in [0,1]$, we define \begin{equation}
\Psi(s)=\left(\begin{array}{cc}\psi(s)^{-1}&0\\0&\psi(s)^T\end{array}\right)
\left(\begin{array}{cc}A&B\\C&D\end{array}\right)
\left(\begin{array}{cc}\psi(s)&0\\0&(\psi(s)^T)^{-1}\end{array}\right).\label{8.27}\end{equation}
Then by (\ref{8.22}) and (\ref{8.26}), $\Psi$ satisfies the
conclusions in Lemma 1.1 and the proof is complete. \hfill\vrule height0.18cm width0.14cm $\,$
In order to prove Theorem 1.1, we need the following three results.
\noindent{\bf Lemma 7.1.} {\it For any symmetric
$(\tau ,x)\in \mathcal{J}_b({\Sigma},2)$, denote by ${\gamma}$ the symplectic
path associated to $(\tau,x)$. We have
\begin{equation}
\left|\left(i_{L_0}({\gamma})+\nu_{L_0}({\gamma})\right)-
\left(i_{L_1}({\gamma})+\nu_{L_1}({\gamma})\right)\right|\le
n-1.\label{8.28}\end{equation}}
\noindent{\bf Proof.}
By Lemma 1.1 there exist a symplectic path
${\gamma}^*\in \mathcal{P}_\frac{\tau}{2}(2n)$ and $\td{M}\in {\rm Sp}(2n-2)$
such that \begin{equation} {\gamma}\; \sim_{L_j}\; {{\gamma}^*} \qquad {\rm for}\quad
j=0,\;1,\label{8.29}\end{equation}
\begin{equation} {{\gamma}^*}(\frac{\tau}{2})=(-I_2)\diamond
\td{M}.\label{8.30}\end{equation}
So by Theorem 2.1, we have
\begin{eqnarray}
&&\left|\left(i_{L_0}({\gamma})+\nu_{L_0}({\gamma})\right)-
\left(i_{L_1}({\gamma})+\nu_{L_1}({\gamma})\right)\right|\nonumber\\
&=&\left|\left(i_{L_0}({\gamma}^*)+\nu_{L_0}({\gamma}^*)\right)-
\left(i_{L_1}({\gamma}^*)+\nu_{L_1}({\gamma}^*)\right)\right|.
\label{8.31}\end{eqnarray}
We choose a special symplectic
path $\td{{\gamma}}={\gamma}_1\diamond {\gamma}_2\in
\mathcal{P}_{\frac{\tau}{2}}(2n)$, where ${\gamma}_1\in
\mathcal{P}_{\frac{\tau}{2}}(2)$, ${\gamma}_1({\frac{\tau}{2}})=-I_2$ and
${\gamma}_2\in \mathcal{P}_{\frac{\tau}{2}}(2n-2)$,
${\gamma}_2({\frac{\tau}{2}})=\td{M}$.
By Theorems 2.2 and 2.3, we have
\begin{eqnarray} &&\left|\left(i_{L_0}({\gamma}^*)+\nu_{L_0}({\gamma}^*)\right)-
\left(i_{L_1}({\gamma}^*)+\nu_{L_1}({\gamma}^*)\right)\right|\nonumber\\
&=&\left|\left(i_{L_0}(\td{{\gamma}})+\nu_{L_0}(\td{{\gamma}})\right)-
\left(i_{L_1}(\td{{\gamma}})+\nu_{L_1}(\td{{\gamma}})\right)\right|\nonumber\\
&=&|\left(i_{L_0}({\gamma}_1)+\nu_{L_0}({\gamma}_1)\right)-
\left(i_{L_1}({\gamma}_1)+\nu_{L_1}({\gamma}_1)\right)\nonumber\\
&&\;+\left(i_{L_0}({\gamma}_2)+\nu_{L_0}({\gamma}_2)\right)-
\left(i_{L_1}({\gamma}_2)+\nu_{L_1}({\gamma}_2)\right)|.\label{8.32}\end{eqnarray}
Since $-I_2\in O(2)\cap {\rm Sp}(2)$, by Theorem 2.3 again we have \begin{eqnarray}
&&\left(i_{L_0}({\gamma}_1)+\nu_{L_0}({\gamma}_1)\right)-
\left(i_{L_1}({\gamma}_1)+\nu_{L_1}({\gamma}_1)\right)=0,\label{8.33}\\
&&|\left(i_{L_0}({\gamma}_2)+\nu_{L_0}({\gamma}_2)\right)-
\left(i_{L_1}({\gamma}_2)+\nu_{L_1}({\gamma}_2)\right)|\le
n-1.\label{8.34} \end{eqnarray} By (\ref{8.32})-(\ref{8.34}), we have
\begin{eqnarray}
\left|\left(i_{L_0}({\gamma}^*)+\nu_{L_0}({\gamma}^*)\right)-
\left(i_{L_1}({\gamma}^*)+\nu_{L_1}({\gamma}^*)\right)\right|
\le n-1,\nonumber\end{eqnarray}
together with (\ref{8.31}), it implies Lemma 7.1.
\hfill\vrule height0.18cm width0.14cm $\,$
Note that we can also prove Lemma 7.1 by Lemma 1.1, Proposition 6.1
and a computation of the H\"{o}rmander index, similarly to the proof
of Theorem 3.3 of \cite{LZZ}.
\noindent{\bf Lemma 7.2.} {\it Let ${\gamma}\in {\cal P}_\tau(2n)$ be extended
to $[0,+\infty)$ by ${\gamma}(\tau+t)={\gamma}(t){\gamma}(\tau)$ for all $t>0$.
Suppose ${\gamma}(\tau)=M=P^{-1}(I_2\diamond \td{M})P$ with $\td{M}\in
{\rm Sp}(2n-2)$ and $i({\gamma})\ge n$. Then we have
\begin{equation} i({\gamma},2)+2S_{M^2}^+(1)-\nu({\gamma},2)\ge n+2.\label{8.35}\end{equation}}
{\bf Proof.}
The proof is similar to that of Lemma 4.1 in \cite{LLZ} (also
Lemma 15.6.3 of \cite{Long1}). We write it down briefly.
By (19) and (20) of the proof of Lemma 3 on p.349-350 in \cite{Long1}, we have
\begin{eqnarray} && i({\gamma},2)+2S_{M^2}^+(1)-\nu({\gamma},2)\nonumber\\
&=& 2i({\gamma})+2S_M^+(1)+\sum_{\theta\in
(0,\pi)}S_M^+(e^{\sqrt{-1}\theta})\nonumber\\
&&-\left(\sum_{\theta\in
(0,\pi)}S_M^-(e^{\sqrt{-1}\theta})+(\nu(M)-S_M^-(1))+(\nu_{-1}(M)-S_M^-(-1))\right)\nonumber\\
&\ge& 2n+2S_M^+(1)-n\nonumber\\
&=&n+2S_M^+(1)\nonumber\\
&\ge& n+2,\label{8.36}\end{eqnarray}
where in the last inequality we have used ${\gamma}(\tau)=M=P^{-1}(I_2\diamond
\tilde{M})P$ and the fact $S_{I_2}^+(1)=1$.
\hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\bf Lemma 7.3.} {\it For any $(\tau,x)\in
\mathcal{J}_b({\Sigma},2)$ and $m\in {\bf N}$, we have
\begin{eqnarray} i_{L_0}(x,m+1)-i_{L_0}(x,m)&\ge& 1,\label{8.37}\\
i_{L_0}(x,m+1)+\nu_{L_0}(x,m+1)-1&\ge&
i_{L_0}(x,m+1)>i_{L_0}(x,m)+\nu_{L_0}(x,m)-1.\label{8.38}\end{eqnarray}}
{\bf Proof.} Let ${\gamma}$ be the associated symplectic path of
$(\tau,x)$ and we extend ${\gamma}$ to $[0,+\infty)$ by
$\gamma|_{[0,\frac{k\tau}{2}]}=\gamma^k$ with $\gamma^k$ defined in
(\ref{uvw}) for any $k\in{\bf N}$. By (\ref{8.5}) and (\ref{8.9}), for
any $m\in {\bf N}$ we have
\begin{equation} \nu_{L_0}(x,m)\ge 1.\label{8.39}\end{equation}
Since $H_{\Sigma}$ is strictly convex, $H_{\Sigma}''(x(t))$ is positive for
all $t\in {\bf R}$. So by Theorem 5.1 and Lemma 5.1 of \cite{Liu2}(see
Theorem 2.4 in Section 2), we have
\begin{eqnarray} i_{L_0}(x,m+1)&=&
\sum_{0< t<\frac{(m+1)\tau}{2}}\nu_{L_0}({\gamma}(t))\nonumber\\&\ge&
\sum_{0<
t\le\frac{m\tau}{2}}\nu_{L_0}({\gamma}(t))\nonumber\\&=&\sum_{0<
t<\frac{m\tau}{2}}\nu_{L_0}({\gamma}(t))+\nu_{L_0}({\gamma}(\frac{m\tau}{2}))\nonumber\\&=&
i_{L_0}(x,m)+\nu_{L_0}(x,m)\nonumber\\&>&i_{L_0}(x,m)+\nu_{L_0}(x,m)-1.\label{8.40}\end{eqnarray}
Thus we get (\ref{8.37}) and (\ref{8.38}) from (\ref{8.39})
and (\ref{8.40}). This proves Lemma 7.3.
\hfill\vrule height0.18cm width0.14cm $\,$
$\,$
\noindent{\bf Proof of Theorem 1.1.}
It suffices to consider the case
$^\#\tilde{\mathcal{J}}_b({\Sigma})<+\infty$. Since $-{\Sigma}={\Sigma}$, for
$(\tau,x) \in \mathcal{J}_b({\Sigma},2)$ we have
\begin{eqnarray} &&H_{\Sigma}(x)=H_{\Sigma}(-x),\label{8.41}\\
&&H_{\Sigma}'(x)=- H_{\Sigma}'(-x),\label{8.42}\\
&&H_{\Sigma}''(x)= H_{\Sigma}''(-x).\label{8.43}\end{eqnarray}
So $(\tau,-x)\in \mathcal{J}_b({\Sigma},2)$. By (\ref{8.43}) and the
definition of ${\gamma}_x$ we have that
\begin{equation} {\gamma}_x={\gamma}_{-x}.\label{8.44}\end{equation}
So we have
\begin{eqnarray} &&(i_{L_0}(x,m),\nu_{L_0}(x,m))=(i_{L_0}(-x,m),\nu_{L_0}(-x,m)),\nonumber\\
&&(i_{L_1}(x,m),\nu_{L_1}(x,m))=(i_{L_1}(-x,m),\nu_{L_1}(-x,m)),\quad \forall m\in
{\bf N}.\label{8.45}\end{eqnarray}
So we can write
\begin{equation} \td{\mathcal {J}}_b({\Sigma},2)=\{[(\tau_j,x_j)]|
j=1,\cdots,p\}\cup\{[(\tau_k,x_k)],[(\tau_k,-x_k)]|k=p+1,\cdots,p+q\}\label{8.46}\end{equation}
with $x_j({\bf R})=-x_j({\bf R})$ for $j=1,\cdots,p$ and $x_k({\bf R})\neq
-x_k({\bf R})$ for $k=p+1,\cdots,p+q$. Here we recall that $(\tau_j,x_j)$
has minimal period $\tau_j$ for $j=1,\cdots,p+q$ and
$x_j(\frac{\tau_j}{2}+t)=-x_j(t), \;t\in{\bf R}$ for $j=1,\cdots,p$.
By Lemma 6.3 we have an integer $K\ge 0$ and an injection map
$\phi: {\bf N}+K\to \mathcal
{V}_{\infty,b}({\Sigma},2)\times {\bf N}$. By (\ref{8.45}), $(\tau_k,x_k)$ and
$(\tau_k,-x_k)$ have the same $(i_{L_0},\nu_{L_0})$-indices.
So by Lemma 6.3,
without loss of generality, we can further require that
\begin{equation} {\rm Im} (\phi)\subseteq \{[(\tau_k,x_k)]|k=1,2,\cdots,p+q\}\times
{\bf N}.\label{8.47}\end{equation}
By the strict convexity of $H_{\Sigma}$ and (\ref{6.11}), we have
\begin{equation} \hat{i}_{L_0}(x_k)>0,\quad k=1,2,\cdots,p+q.\label{8.48}\end{equation}
Applying Theorem 1.5 and Remark 5.1 to the following associated
symplectic paths
$${\gamma}_1,\;\cdots,\;{\gamma}_{p+q},\; {\gamma}_{p+q+1},\;\cdots,\;{\gamma}_{p+2q}$$
of
$(\tau_1,x_1),\;\cdots,\;(\tau_{p+q},x_{p+q}),\;(2\tau_{p+1},x_{p+1}^2),\;\cdots,\;
(2\tau_{p+q},x_{p+q}^2)$ respectively,
there exists a vector $(R,m_1,\cdots,m_{p+2q})\in {\bf N}^{p+2q+1}$ such
that $R>K+n$ and
\begin{eqnarray} &&i_{L_0}(x_k, 2m_k+1)=R+i_{L_0}(x_k),\label{8.49}\\
&& i_{L_0}(x_k,2m_k-1)+\nu_{L_0}(x_k,2m_k-1)\nonumber\\
&=&R-(i_{L_1}(x_k)+n+S_{M_k}^+(1)-\nu_{L_0}(x_k)),\label{8.50}\end{eqnarray}
for $k=1,\cdots,p+q,$ $M_k={\gamma}_k(\tau_k)$, and
\begin{eqnarray} &&i_{L_0}(x_k, 4m_k+2)=R+i_{L_0}(x_k,2),\label{8.51}\\
&&i_{L_0}(x_k,4m_k-2)+\nu_{L_0}(x_k,4m_k-2)\nonumber\\
&=&R-(i_{L_1}(x_k,2)+n+S_{M_k}^+(1)-\nu_{L_0}(x_k,2)),\label{8.52}\end{eqnarray}
for $k=p+q+1,\cdots,p+2q$ and $M_k={\gamma}_k(2\tau_k)={\gamma}_k(\tau_k)^2$.
By Proposition 5.1 and the proof of Theorem 1.5, we also have \begin{eqnarray}
i(x_k,
2m_k+1)&=&2R+i(x_k),\label{8.53}\\
i(x_k,2m_k-1)+\nu(x_k,2m_k-1)
&=&2R-(i(x_k)+2S_{M_k}^+(1)-\nu(x_k)),\label{8.54}\end{eqnarray}
for $k=1,\cdots,p+q,$ $M_k={\gamma}_k(\tau_k)$, and
\begin{eqnarray} i(x_k, 4m_k+2)&=&2R+i(x_k,2),\label{8.55}\\
i(x_k,4m_k-2)+\nu(x_k,4m_k-2)
&=&2R-(i(x_k,2)+2S_{M_k}^+(1)-\nu(x_k,2)),\label{8.56}\end{eqnarray}
for $k=p+q+1,\cdots,p+2q$ and $M_k={\gamma}_k(2\tau_k)$.
From (\ref{8.47}), we can set
\begin{eqnarray} \phi(R-(s-1))=([(\tau_{k(s)}, x_{k(s)})],m(s)),\qquad
\forall s\in S:=\left\{1,2,\cdots,\left[\frac{n}{2}\right]+1\right\},\label{8.57}\end{eqnarray}
where $k(s)\in \{1,2,\cdots,p+q\}$ and $m(s)\in {\bf N}$.
We continue our proof to study the symmetric and asymmetric orbits
separately. Let \begin{equation} S_1=\{s\in S|k(s)\le p\},\qquad S_2=S\setminus
S_1.\label{8.58}\end{equation}
We shall prove that
$^\#S_1\le p$ and $^\#S_2\le 2q$, together with the definitions of
$S_1$ and $S_2$, these yield Theorem 1.1.
\noindent{\it Claim 1.} $^\#S_1\le p$.
\noindent {\it Proof of Claim 1.} By the definition of $S_1$,
$([(\tau_{k(s)},
x_{k(s)})],m(s))$ is symmetric when $k(s)\le p$. We further prove
that $m(s)=2m_{k(s)}$ for $s\in S_1$.
In fact, by the definition of $\phi$ and Lemma 6.3, for all $s=1,2,\cdots,\left[\frac{n}{2}\right]+1$ we have
\begin{eqnarray} i_{L_0}(x_{k(s)},m(s))&\le & (R-(s-1))-1=R-s \nonumber\\
&\le &
i_{L_0}(x_{k(s)},m(s))+\nu_{L_0}(x_{k(s)},m(s))-1.\label{8.59}\end{eqnarray}
By the strict convexity of $H_{\Sigma}$, from Theorem 2.4, we have $i_{L_0}(x_{k(s)})\ge 0$, so there holds
\begin{eqnarray} i_{L_0}(x_{k(s)},m(s))\le R-s< R\le R+i_{L_0}(x_{k(s)})=i_{L_0}(x_{k(s)},2m_{k(s)}+1),\label{8.60}\end{eqnarray}
for every $s=1,2,\cdots,\left[\frac{n}{2}\right]+1$, where we have used
(\ref{8.49}) in the last equality. Note that the proofs of (\ref{8.59}) and
(\ref{8.60}) do not depend on the condition $s\in S_1$.
By Lemma 1.2, we have
\begin{equation}
i_{L_1}(x_k)+S_{M_k}^+(1)-\nu_{L_0}(x_k)\ge \frac{1-n}{2},\quad
\forall k=1,\cdots,p.\label{8.64}\end{equation}
Also for $1\le
s\le \left[\frac{n}{2}\right]+1$, we have
\begin{equation} -\frac{n+3}{2}<-(1+\frac{n}{2})\le -(\left[\frac{n}{2}\right]+1)\le
-s.\label{8.65}\end{equation}
Hence by (\ref{8.59}), (\ref{8.64}) and (\ref{8.65}), if $k(s)\le p$
we have
\begin{eqnarray}
&&i_{L_0}(x_{k(s)},2m_{k(s)}-1)+\nu_{L_0}(x_{k(s)},2m_{k(s)}-1)-1\nonumber\\
&=&
R-(i_{L_1}(x_{k(s)})+n+S_{M_{k(s)}}^+(1)-\nu_{L_0}(x_{k(s)}))-1\nonumber\\
&\le&R-\frac{1-n}{2}-1-n=R-\frac{n+3}{2}<R-s\nonumber\\
&\le&
i_{L_0}(x_{k(s)},m(s))+\nu_{L_0}(x_{k(s)},m(s))-1.\label{8.66}\end{eqnarray}
Thus
by (\ref{8.60}) and (\ref{8.66}) and Lemma 7.3 we have
\begin{equation} 2m_{k(s)}-1< m(s)<2m_{k(s)}+1.\label{8.67}\end{equation}
Hence
\begin{equation} m(s)=2m_{k(s)}.\label{8.68}\end{equation}
So we have
\begin{equation} \phi(R-s+1)=([(\tau_{k(s)},x_{k(s)})],2m_{k(s)}),\qquad \forall
s\in S_1.\label{8.69}\end{equation}
Then by the injectivity of $\phi$, it induces another injection map
\begin{equation} \phi_1:S_1\rightarrow \{1,\cdots,p\}, \;s\mapsto k(s).\label{8.70}\end{equation}
Therefore $^\#S_1\le p$. Claim 1 is proved.
\noindent{\it Claim 2.} $^\#S_2\le 2q$.
\noindent{\it Proof of Claim 2.} By the formulas
(\ref{8.53})-(\ref{8.56}), and (59) of \cite{LLZ} (also Claim 4 on
p. 352 of \cite{Long1}), we have \begin{equation} m_k=2m_{k+q}\quad {\rm for}\;\;
k=p+1,p+2,\cdots,p+q.\label{8.71}\end{equation}
We set $\mathcal {A}_k=i_{L_1}(x_k,2)+S_{M_k}^+(1)-\nu_{L_0}(x_k,2)$
and $\mathcal {B}_k=i_{L_0}(x_k,2)+S_{M_k}^+(1)-\nu_{L_1}(x_k,2)$,
$p+1\le k\le p+q$, where $M_k={\gamma}_k(2\tau_k)={\gamma}_k(\tau_k)^2$. By
(\ref{6.10}), we have
\begin{equation}
\mathcal {A}_k+\mathcal
{B}_k=i(x_k,2)+2S_{M_k}^+(1)-\nu(x_k,2)-n,\;\;\;p+1\le k\le p+q
.\label{8.72}\end{equation} By a discussion similar to the proof of Lemma 1.1, for
any $p+1\le k\le p+q$ there exist $P_k\in {\rm Sp}(2n)$ and $\td{M}_k\in
{\rm Sp}(2n-2)$ such that \begin{equation} {\gamma}_k(\tau_k)=P_k^{-1}(I_2\diamond
\td{M}_k)P_k.\label{8.73}\end{equation}
Hence by Lemma 7.2 and (\ref{8.72}), we
have \begin{equation} \mathcal {A}_k+\mathcal {B}_k\ge n+2-n=2.\label{8.74}\end{equation} By
Theorem 2.3, there holds
\begin{eqnarray}
|\mathcal {A}_k-\mathcal
{B}_k|&=&|(i_{L_0}(x_k,2)+\nu_{L_0}(x_k,2))-(i_{L_1}(x_k,2)+\nu_{L_1}(x_k,2))|\le
n.\label{8.75}\end{eqnarray}
So by (\ref{8.74}) and (\ref{8.75}) we have \begin{equation}
\mathcal {A}_k\ge \frac{1}{2}((\mathcal {A}_k+\mathcal
{B}_k)-|\mathcal {A}_k-\mathcal {B}_k|)\ge \frac{2-n}{2},\quad
p+1\le k\le p+q.\label{8.76}\end{equation}
By
(\ref{8.52}), (\ref{8.59}), (\ref{8.65}), (\ref{8.71}) and
(\ref{8.76}), for $p+1\le k(s)\le p+q$ we have
\begin{eqnarray} &&i_{L_0}(x_{k(s)},2m_{k(s)}-2)+\nu_{L_0}(x_{k(s)},2m_{k(s)}-2)-1\nonumber\\
&=&i_{L_0}(x_{k(s)},4m_{k(s)+q}-2)+\nu_{L_0}(x_{k(s)},4m_{k(s)+q}-2)-1\nonumber\\
&=&R-(i_{L_1}(x_{k(s)},2)+n+S_{M_{k(s)}}^+(1)-\nu_{L_0}(x_{k(s)},2))-1\nonumber\\
&=&R-\mathcal{A}_{k(s)}-1-n\nonumber\\
&\le&R- \frac{2-n}{2}-1-n\nonumber\\
&=& R-(2+\frac{n}{2})\nonumber\\
&<& R-s\nonumber\\
&\le&
i_{L_0}(x_{k(s)},m(s))+\nu_{L_0}(x_{k(s)},m(s))-1.\label{8.77}\end{eqnarray}
Thus by (\ref{8.60}), (\ref{8.77}) and Lemma 7.3, we have
\begin{equation} 2m_{k(s)}-2<m(s)<2m_{k(s)}+1,\qquad p<k(s)\le p+q.\label{8.78}\end{equation}
So \begin{equation} m(s)\in \{2m_{k(s)}-1,2m_{k(s)}\}, \qquad {\rm
for}\;\;p<k(s)\le p+q.\label{8.79}\end{equation}
Especially this yields that for any $s_0$ and $s\in
S_2$, if $k(s)=k(s_0)$, then
\begin{equation} m(s)\in
\{2m_{k(s)}-1,2m_{k(s)}\}=\{2m_{k(s_0)}-1,2m_{k(s_0)}\}.\label{8.80}\end{equation}
Thus by the injectivity of the map $\phi$ from Lemma 6.3, we have
\begin{equation} ^\#\{s\in S_2|k(s)=k(s_0)\}\le 2.\label{8.81}\end{equation} This yields Claim
2.
By Claim 1 and Claim 2, we have
\begin{eqnarray}
^\#\td{\mathcal{J}}_b({\Sigma})=^\#\td{\mathcal{J}}_b({\Sigma},2)=p+2q\ge
^\#S_1+^\#S_2
=\left[\frac{n}{2}\right]+1.\label{8.82}\end{eqnarray}
The proof of Theorem 1.1 is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0}
\section {Proof of Theorem 1.2.}
{\bf Proof of Theorem 1.2.} We prove Theorem 1.2 in three steps.
\noindent{\it Step 1.} Applying Theorem 1.5.
If
$^\#\tilde{\mathcal{J}}_b({\Sigma})<+\infty$, we write
\begin{eqnarray} \td{\mathcal {J}}_b({\Sigma},2)=\{[(\tau_j,x_j)]|
j=1,\cdots,p\}\cup\{[(\tau_k,x_k)],[(\tau_k,-x_k)]|k=p+1,\cdots,p+q\},
\nonumber\end{eqnarray} where $(\tau_j,x_j)$ is symmetric with minimal period
$\tau_j$ for $j=1,\cdots,p$, and $(\tau_k,x_k)$ is asymmetric with
minimal period $\tau_k$ for $k=p+1,\cdots,p+q$. For simplicity we
have set $q=\mathfrak{A}({\Sigma})$ with $\mathfrak{A}({\Sigma})$ defined in
Theorem 1.2.
By Lemma 6.3, there exist $0\le K\in {\bf Z}$ and an injection map $\phi: {\bf N}+K\to \mathcal
{V}_{\infty,b}({\Sigma},2)\times {\bf N}$ such that (i) and (ii) in Lemma 6.3
hold. By the same reason for (\ref{8.47}), we can require that
\begin{equation} {\rm Im} (\phi)\subseteq \{[(\tau_k,x_k)]|k=1,2,\cdots,p+q\}\times
{\bf N}.\label{8.47'}\end{equation}
Set $r=p+q$.
By (\ref{8.48}) we have ${\hat{i}}_{L_0}(x_j)>0$ for $j=1,\cdots,r$. Applying Theorem
1.5 and Remark 5.1 to the collection of symplectic paths ${\gamma}_1,\,{\gamma}_2,\,\cdots,\,{\gamma}_r$,
there exists a vector $(R, m_1, m_2,\cdots,m_r)\in {\bf N}^{r+1}$
such that $R>K+n$ and
\begin{eqnarray} &&\nu_{L_0}({\gamma}_j, 2m_j\pm 1)=\nu_{L_0}({\gamma}_j),\label{9.1}\\
&& i_{L_0}({\gamma}_j, 2m_j-1)+\nu_{L_0}({\gamma}_j,2m_j-1)=R-(i_{L_1}({\gamma}_j)+n+S_{M_j}^+(1)-\nu_{L_0}({\gamma}_j)),\label{9.2}\\
&&i_{L_0}({\gamma}_j,2m_j+1)=R+i_{L_0}({\gamma}_j),\label{9.3}\end{eqnarray} where ${\gamma}_j$
is the associated symplectic path of
$(\tau_j,x_j)$ and $M_j={\gamma}_j(\tau_j)$, $1\le j\le r$.
\noindent{\it Step 2.} We prove that \begin{eqnarray} K_1:=\min
\{i_{L_1}({\gamma}_j)+S_{M_j}^+(1)-\nu_{L_0}({\gamma}_j)|j=1,\cdots,r\}\ge 0.\label{9.4}\end{eqnarray}
By the strict convexity of $H_{\Sigma}$, Theorem 2.4 yields
\begin{equation} i_{L_1}({\gamma}_j)\ge 0.\label{9.5}\end{equation}
By the
nondegenerate assumption in Theorem 1.2 we have $\nu_{L_0}({\gamma}_j, m)=1$ for
$ 1\le j\le r,\;m\in {\bf N}$. By a discussion similar to that of Lemma 1.1,
there exist $P_j\in {\rm Sp}(2n)$ and $\tilde {M}_j\in {\rm Sp}(2n-2)$ such
that
$$M_j=P_j^{-1}(I_2\diamond \tilde{M}_j)P_j.$$
So we have
\begin{equation} S_{M_j}^+(1)=S_{I_2\diamond \tilde{M}_j}^+(1)=S_{I_2}^+(1)
+S_{\td{M}_j}^+(1)\ge S_{I_2}^+(1)=1.\label{9.6}\end{equation}
Thus (\ref{9.5}) and (\ref{9.6}) yield
$$K_1\ge 0.$$
\noindent{\it Step 3.} Complete the proof of Theorem 1.2.
By (\ref{8.47'}), we set
$\phi(R-(s-1))=([(\tau_j(s),x_{j(s)})],m(s))$ with $j(s)\in
\{1,\cdots,r\}$ and $m(s)\in {\bf N}$ for $s=1,\cdots,n$. By Lemma 6.2 we
have
$$i_{L_0}(x_{j(s)},m(s))\le R-(s-1)-1=R-s\le i_{L_0}(x_{j(s)},m(s))+\nu_{L_0}(x_{j(s)},m(s))-1.$$
By (\ref{9.2}) and (\ref{9.4}) for $s=1,\cdots,n$,
\begin{eqnarray} &&i_{L_0}(x_{j(s)},2m_{j(s)}-1)+\nu_{L_0}(x_{j(s)},2m_{j(s)}-1)-1\le
R-K_1-1-n<R-n\nonumber\\
&&\le R-s\le i_{L_0}(x_{j(s)},m(s))+\nu_{L_0}(x_{j(s)},m(s))-1.\nonumber\end{eqnarray}
By (\ref{8.38}), we have
$$2m_{j(s)}-1<m(s), \quad s=1,\cdots,n.$$
For $s=1,\cdots,n$, there holds
$$i_{L_0}(x_{j(s)}, m(s))\le R-s<R\le
i_{L_0}(x_{j(s)},2m_{j(s)}+1),$$
then by (\ref{8.38}), we have
$$m(s)<2m_{j(s)}+1, \quad s=1,\cdots,n.$$
Thus
\begin{eqnarray} m(s)=2m_{j(s)},\quad \quad s=1,\cdots,n.\label{9.7}\end{eqnarray}
By (ii) of Lemma 6.3 again, if $s_1\neq s_2$, we have $m(s_1)\neq
m(s_2)$. By (\ref{9.7}) we have $j(s_1)\neq j(s_2)$. So the $j(s)$'s are
mutually distinct for $s=1,\cdots,n$. Since $j(s)\in
\{1,2,\cdots,r\}$, we have
$$r\ge n.$$
Hence \begin{equation}
^\#\tilde{\mathcal{J}}_b({\Sigma})=^\#\tilde{\mathcal{J}}_b({\Sigma},2)=
p+2q=r+q\ge n+q=n+\mathfrak{A}({\Sigma}).\label{9.8}\end{equation} The proof of Theorem
1.2 is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\noindent{\it Acknowledgments.} The authors thank Professor Y. Long
for stimulating and very useful discussions, and for encouraging us
to study Maslov-type index theory and its iteration theory. They
also thank Professor C. Zhu for his valuable suggestions.
\bibliographystyle{abbrv}
\section{Introduction}
O$_2$ is a magnetic molecule with spin--1, originating from two unpaired electrons in antibonding $\pi^*$ orbitals.
The magnetism of the three structural phases of solid O$_2$ was studied long ago \cite{Hemert, Uyeda}.
The magnetic exchange interaction between O$_2$ molecules depends strongly on their distance as well as their relative displacement
\cite{Hemert, Uyeda}.
O$_2$ molecules adsorbed in porous metal complexes have been investigated by magnetic susceptibility
and high--magnetic--field magnetization measurements \cite{kobayashi, hori}.
The O$_2$ array is confined in the nano--channel region and exhibits a meta--magnetic transition under high magnetic field,
which is considered to be due to a configurational transition of the O$_2$ dimer.
Another candidate for an O$_2$--molecule--based magnet is achieved by encapsulation of O$_2$ molecules
in single--walled carbon nanotubes.
From the magnetic susceptibility and high--magnetic--field magnetization, a spin--1 Haldane state was proposed \cite{hagiwara}.
Since then, the magnetism of the O$_2$ molecule has attracted much attention.
Magnetism of the alkali superoxides, AO$_2$ (A = Na, K, Rb and Cs), originates from the unpaired $\pi$--electron on the O$_2^-$ molecular anion,
where one electron transfers from the alkali metal to O$_2$, making a spin--$\frac{1}{2}$ state.
The transferred electron has the freedom to select which orbital ($\pi^*_x$ or $\pi^*_y$) to occupy, and the orbital ordering would be expected to realize
three--dimensional magnetic exchange pathways.
At room temperature, it is proposed that KO$_2$, RbO$_2$ and CsO$_2$ have the same tetragonal crystal structure (I4/mmm) while
NaO$_2$ has a cubic crystal structure (Fm$\bar3$m).
Recently, CsO$_2$ has attracted considerable attention because of its one--dimensional (1D) antiferromagnetic (AF) nature \cite{Blake}.
Below the structural phase transition temperature $T_{\rm S} = 70$ K, the magnetic susceptibility follows a well--known Bonner--Fisher curve.
It is proposed that the structural phase transition is accompanied with the $\pi$--orbital ordering of O$_2$ molecules, which leads to a zig--zag like 1D chain along the $b$--axis.
If this is the case, the 1D AF super--exchange pathway might be via the Cs atom.
NMR experiment shown a power--law decay in inverse spin--lattice relaxation rate at low temperature,
suggesting that the ground state of CsO$_2$ is a Tomonaga--Luttinger liquid (TLL) state \cite{Klanjsek}.
At lower temperatures, it was proposed that CsO$_2$ showed an AF transition at $T_{{\rm N}} = 9.6$ K \cite{Blake, Hesse}, but
the low temperature magnetism of CsO$_2$ have not yet been clarified.
KO$_2$ and RbO$_2$ also showed AF transitions at $T_{\rm{N}} = 7$ and 15 K, respectively \cite{Hesse}.
In spite of having the same crystal structure as CsO$_2$ at room temperature,
their magnetic susceptibilities show Curie--Weiss behavior from room temperature down to $T_{\rm{N}}$ and do not show 1D behavior.
As the low--temperature structures of KO$_2$ and RbO$_2$ have not been settled, the magnetic exchange pathways have not been determined yet.
Moreover, in NaO$_2$, both the low--temperature structure and the magnetic ground state are still under debate.
Therefore, clarifying how the structure changes at low temperature is an important subject for
understanding the correlation between the orbital ordering and the magnetism.
Accordingly, the alkali superoxides are fascinating candidates for molecular--based low--dimensional magnets
and may have a strong coupling between spin and orbital degrees of freedom.
In this Letter, we show the low--temperature magnetism of CsO$_2$, in particular the high--magnetic--field magnetization, for the first time.
We will present the full temperature dependence of magnetization around $T_{\rm N}$ in order to discuss the magnetic phase diagram.
To discuss the dimensionality of CsO$_2$, the high--field magnetization will be compared with the theoretical calculation.
We synthesized CsO$_2$ powder using a liquid ammonia method.
Alkali metal was placed in a glass tube in an Ar--filled glove box (O$_2$ and H$_2$O $< 0.1$ ppm) that was then dynamically pumped down to 10$^{-2}$ Pa.
The glass tube was cooled by liquid N$_2$ in order to condense the NH$_3$.
After the glass tube was filled with liquid NH$_3$ (typically, $\sim 10$ ml), O$_2$ gas was put in at a constant pressure of $\sim 0.1$ MPa.
The solution was kept at $-40 ^\circ$C.
The reaction was recognized as complete when the solution became colorless and the product precipitated.
Then, we removed the liquid NH$_3$ by dynamically pumping the glass tube, and obtained CsO$_2$ powder.
The color of CsO$_2$ powder is dark yellow.
The X--ray powder diffraction (XRPD) patterns of the samples were measured with synchrotron radiation at BL--8A of KEK--PF (wavelength $\lambda = 0.99917$ \AA).
Rietveld refinement was performed to obtain the structural parameters using the GSAS II package \cite{GSAS}.
The final weighted $R$--factor, $R_{\rm wp}$, for the room--temperature structure converged to 4.36 \%, indicating a good fit to the experimental data.
The magnetization, $M$, was measured using a SQUID magnetometer (MPMS--R2 and MPMS3, Quantum Design Co. Ltd.)
in the temperature region $> 2$ K.
High magnetic field magnetization was measured in pulsed magnetic fields up to 60 T below 1.3 K.
Because the CsO$_2$ sample is very sensitive to the atmosphere, the samples must be handled in the Ar--filled glove box.
Figure \ref{XRDandM} (a) shows the XRPD pattern of CsO$_2$ at room temperature.
All peaks can be indexed by the tetragonal symmetry (I4/mmm).
A very small amount of impurity was confirmed to be CsOH$\cdot$H$_2$O, whose fraction was estimated to be less than 3 \%
from the XRPD analysis.
This impurity phase should not influence the intrinsic magnetic property of CsO$_2$ because it has no unpaired electrons.
The inset of figure \ref{XRDandM} (a) depicts the schematic figure of the crystal structure for the room temperature phase of CsO$_2$.
The lattice parameters of CsO$_2$ are estimated to be $a = 4.469$ \AA\ and $c = 7.324$ \AA, which is consistent with the literature \cite{Blake}.
The O--O distance is estimated to be $1.11$ \AA, which may be effectively reduced by the libration of the O$_2$ molecule.
\begin{figure}
\begin{center}
\includegraphics[clip, width= 0.4 \textwidth]{fig1new3.eps}
\caption{
(a) Room--temperature X--ray powder diffraction pattern collected from CsO$_2$ using $\lambda = 0.99917$ \AA.
Observed data (red crosses), the calculated pattern (blue line), the candidate peak positions (light green bar for CsO$_2$
and pink bar for CsOH$\cdot$H$_2$O) and the difference between the observed and the calculated data (light blue line) are shown.
The inset depicts the room--temperature structure of CsO$_2$, where Cs and O atoms are shown by the green and the red spheres, respectively.
(b) Temperature dependence of inverse magnetic susceptibility, $\chi^{-1}$, of CsO$_2$ (black dots) with Curie--Weiss fits (pink and orange dotted lines).
The fitted parameters are described in the text.
(c) Temperature dependence of $\chi$ (black dots) below $T_{\rm S}$ with a Bonner--Fisher fit (light green line).
The anomaly at $T_{\rm{N}} = 9.6$ K corresponds to the AF ordering.
}
\label{XRDandM}
\end{center}
\end{figure}
Figure \ref{XRDandM} (b) shows the temperature dependence of inverse magnetic susceptibility, $\chi ^{-1}$, of CsO$_2$ under a field cooling condition.
The applied magnetic field is 0.1 T.
The absence of a Curie--tail at low temperatures indicates the high quality of this sample.
In the paramagnetic region,
from room temperature to $T_1 \sim 150$ K, the $\chi$ follows a Curie--Weiss law with Weiss temperature, $\theta = -10.1$ K,
and effective magnetic moment, $\mu_{\rm eff} =1.95 \mu_{\rm B}$ while from $T_1$ to $T_{\rm S} \sim 70$ K, $\theta = -30.0$ K and $\mu_{\rm eff} = 2.05 \mu_{\rm B}$,
where $\mu_{\rm B}$ is Bohr magneton.
$T_{\rm S}$ corresponds to the reported structural phase transition temperature \cite{Blake}.
The $\mu_{\rm eff}$ is slightly larger than the value expected from spin--$\frac{1}{2}$, suggesting the orbital contribution.
These values of $\theta$ and $\mu_{\rm eff}$ are consistent with the previous results \cite{Blake, Zumsteg}.
We note that the change in $\chi$ around $T_1$ is reproduced in a different sample batch.
Accordingly, the AF interaction among the spins on O$_2$ molecules becomes more dominant below $T_1$.
In the inset of figure \ref{XRDandM} (b),
the (200) Bragg reflection in the high--temperature tetragonal phase splits into two peaks with (200) and (020) indices below 150 K,
suggesting a tetragonal--to--orthorhombic structural change around 150 K.
Thus, the structural phase transition at $T_1$ should be related to the enhancement of the AF interactions.
We will discuss the structural study in detail in a separate paper.
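As an illustrative aside (not part of the original analysis), the effective $g$--factor implied by the observed $\mu_{\rm eff}$ can be checked from the textbook relation $\mu_{\rm eff} = g\sqrt{S(S+1)}\,\mu_{\rm B}$ with $S=\frac{1}{2}$; the short script below is only a numerical sketch using the values quoted above:

```python
import math

def g_factor(mu_eff, S=0.5):
    """Effective g-factor from mu_eff (in Bohr magnetons), mu_eff = g*sqrt(S(S+1))."""
    return mu_eff / math.sqrt(S * (S + 1))

# The spin-only value for S = 1/2, g = 2 is sqrt(3) ~ 1.73 mu_B;
# the observed 1.95-2.05 mu_B therefore implies g ~ 2.25-2.37,
# consistent with an orbital contribution of the O2- pi* electron.
for mu in (1.95, 2.05):
    print(f"mu_eff = {mu} mu_B -> g = {g_factor(mu):.2f}")
```

This supports the statement that $\mu_{\rm eff}$ is slightly larger than the spin--only value, suggesting an orbital contribution.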
Below $T_{\rm S}$, the $\chi$ shows a broad maximum around 28 K, indicating a low--dimensional character of the spin system.
As shown in the figure \ref{XRDandM} (c), the $\chi$ below $T_{\rm S}$ can be fitted by the well--known Bonner--Fisher formula \cite{BF}.
When we write the antiferromagnetic Heisenberg Hamiltonian as
$\displaystyle {\mathcal{H} = -J_{\rm{1D}} \sum_i \bm{S_i}\cdot \bm{S_{i+1}} } $,
the 1D AF interaction, $J_{\rm{1D}}/k_{\rm{B}}$, is estimated to be $42.8$ K,
where $k_{\rm{B}}$ is a Boltzmann constant.
The obtained $J_{\rm{1D}}/k_{\rm{B}}$ is consistent with the previous results \cite{Blake, Klanjsek}.
We could not observe a shift of the temperature at which $\chi$ shows its maximum under magnetic fields up to 7 T.
Such a shift is evidenced in the TLL phase of the spin--$\frac{1}{2}$ AF Cu(C$_4$H$_4$N$_2$)(NO$_3$)$_2$ \cite{kono}.
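For reference, the Bonner--Fisher curve can be evaluated with the standard polynomial approximation of Estes {\it et al.}; the coefficients below are quoted from the literature (they are not from the original fit) and the sketch simply verifies that the broad maximum sits near $k_{\rm B}T/J \approx 0.64$, i.e. $\approx 27$ K for $J_{\rm 1D}/k_{\rm B} = 42.8$ K, close to the observed 28 K:

```python
def bonner_fisher_chi(t):
    """Reduced susceptibility chi*|J|/(N g^2 muB^2) of the S = 1/2 uniform AF
    Heisenberg chain versus reduced temperature t = kB*T/|J|
    (Estes et al. polynomial approximation to the Bonner-Fisher result)."""
    x = 1.0 / t
    num = 0.25 + 0.074975 * x + 0.075235 * x**2
    den = 1.0 + 0.9931 * x + 0.172135 * x**2 + 0.757825 * x**3
    return num / (t * den)

# Locate the broad maximum on a fine grid of reduced temperatures.
ts = [0.30 + 0.001 * i for i in range(900)]          # t in [0.30, 1.20)
t_max = max(ts, key=bonner_fisher_chi)
print(f"maximum at kB*T/J ~ {t_max:.3f}")            # ~0.64
print(f"T_max ~ {t_max * 42.8:.1f} K for J/kB = 42.8 K")
```

The maximum position and height agree with the well--known Bonner--Fisher values ($k_{\rm B}T_{\rm max}/|J| \simeq 0.641$, $\chi_{\rm max}|J|/Ng^2\mu_{\rm B}^2 \simeq 0.147$).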
\begin{figure}
\begin{center}
\includegraphics[width=0.4 \textwidth]{fig2new.eps}
\caption{
(a) Temperature dependence of $M/H$ for CsO$_2$ at several magnetic fields.
(b) Isothermal derivative magnetization data, d$M$/d$H$, as a function of magnetic field below $T_{\rm{N}}$.
The data at 1.3 K and 4.2 K were obtained by the pulsed magnetic field experiments while the others were obtained by differentiating the MPMS data.
The arrows point at the anomalies in the d$M$/d$H$ plots for different temperatures, indicating the spin--flop transition fields.
The data are shifted for clarity.
(c) A possible phase diagram of CsO$_2$, where P, AF--1 and AF--2 represent a paramagnetic, an antiferromagnetic--1 and an antiferromagnetic--2 phase, respectively.
The red circles are obtained from the temperature dependence of magnetization (MT) while
the blue squares are obtained from the magnetic field dependence of magnetization (MH).
}
\label{SF}
\end{center}
\end{figure}
The anomaly at $T_{\rm{N}} = 9.6$ K corresponds to the AF ordering \cite{Blake}.
Figure \ref{SF} (a) shows the temperature dependence of $M/H$ at several magnetic fields.
When the magnetic field was increased above 3 T, the $M/H$ increased at low temperatures, suggesting the existence of a spin--flop transition around 3 T.
Figure \ref{SF} (b) shows the isothermal derivative magnetization data, d$M$/d$H$, as a function of magnetic field around $T_{\rm N}$.
The anomalies around 3 T (arrows in the figure \ref{SF} (b)) are clearly observed and shift to higher fields with increasing temperature.
In pulsed magnetization experiments, the magnetization curves also indicate the magnetic phase transition around 2.49 T.
We summarize a possible phase diagram of CsO$_2$ under the magnetic field in the figure \ref{SF}(c).
This magnetic phase diagram is typical of an antiferromagnet with an easy--axis anisotropy, and the phase transition from the AF--1 to the AF--2 phase
should correspond to the spin--flop transition.
We note that no hysteresis in the magnetization curve was observed.
Figure \ref{HM} (a) shows the magnetization and its derivative curves for CsO$_2$ as a function of magnetic field up to
60 T at 1.3 K.
Below $T_{\rm{N}}$, a remarkable up--turn curvature in the magnetization around the saturation field of $\sim 60$ T is found,
suggesting the low--dimensional nature of this spin system.
The saturated magnetization is also estimated to be $\sim 1 \mu_{\rm B}$, which corresponds to the spin--$\frac{1}{2}$.
This is consistent with the magnetic susceptibility experiments.
As shown in figure \ref{HM} (b),
the fit with the Bethe--ansatz curve \cite{Griffiths} gives the saturation field $H_{\rm S} = 50$ T and $J_{\rm 1D}/k_{\rm B } = 38.6 $ K.
With these parameters, the low--field magnetization can be reproduced by the exact curve,
but in the high--field region, especially around $H_{\rm S}$, the magnetization seems to be inconsistent with the calculation.
On the other hand, if we use the $J_{\rm 1D}/k_{\rm B} = 42.8 $ K estimated from the Bonner--Fisher fit in fig. \ref{XRDandM} (c),
the calculated magnetization does not reproduce the experiments as a whole.
We may introduce a thermal effect and/or a higher dimensionality of the spin system,
i.e., an interchain coupling, to settle this inconsistency.
The magnetization curve for temperature $T/J_{\rm 1D} = 0.1$ with $J_{\rm 1D}/k_{\rm B} = 38.6$ K in fig. \ref{HM} (b), which was calculated by a finite temperature
DMRG \cite{okunishi}, implies better agreement around the saturation field, but does not reproduce the experiments entirely.
Thus, the inconsistency around the saturation field in CsO$_2$ may be caused by the interchain couplings.
\begin{figure}
\begin{center}
\includegraphics[width=0.4 \textwidth]{fig3new3.eps}
\caption{
(a) High magnetic field magnetization, $M$, curve (red circles) and its derivative, d$M$/d$H$, curve (blue line) for CsO$_{2}$ at 1.3 K.
In the vertical axis, $M=1$ corresponds to $1 \mu_{\rm B}$ per O$_2$ molecule, indicating spin--$\frac{1}{2}$.
(b)
The dashed lines are Bethe--ansatz results for $T = 0$ with $J_{\rm 1D}/k_{\rm B} =38.6$ K (blue dashed line) and $J_{\rm 1D}/k_{\rm B}=42.8$
K (green dashed line).
The black line shows the theoretical calculation with $T/J_{\rm 1D} = 0.1$ and $J_{\rm 1D}/k_{\rm B}=38.6$ K.
}
\label{HM}
\end{center}
\end{figure}
Using the molecular field approximation, we estimate the magnetic exchange interaction and the anisotropy in CsO$_2$.
The spin--flop field, $H_{\rm SF}$, can be written in terms of the exchange field, $H_{\rm E}$, and the anisotropy field, $H_{\rm A}$, as follows:
\begin{equation}
H_{\rm SF} = \sqrt{2 H_{\rm E} H_{\rm A}}
\end{equation}
The $H_{\rm E}$ can be written as $H_{\rm E} = z J S / \mathit{g} \mu_{\rm B}$, where $z$ is the number of nearest--neighbor spins.
If we assume $z = 2$ for the orbital ordered structure of CsO$_2$, which should corresponds to a 1D spin chain,
the magnetic exchange interaction $J_{\rm 1D}/k_{\rm B} = 38.6 $ K gives the $H_{\rm E}$ of 250 kOe.
From the figure \ref{SF}, the $H_{\rm SF}$ is estimated to be 2.49 T at 2 K.
Using these values, $H_{\rm A}$ is obtained to be 1.24 kOe.
The $H_{\rm A}$ is almost comparable to the dipolar anisotropy field calculated in $\alpha$--phase of solid O$_2$ \cite{Uyeda}.
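The molecular--field estimate above can be verified numerically; this minimal sketch (illustrative only, using the values quoted in the text, $H_{\rm E}=250$ kOe $=25$ T and $H_{\rm SF}=2.49$ T) inverts Eq. (1) for $H_{\rm A}$:

```python
def anisotropy_field(H_SF, H_E):
    """Invert the molecular-field relation H_SF = sqrt(2 * H_E * H_A)."""
    return H_SF**2 / (2.0 * H_E)

H_SF = 2.49   # spin-flop field in T (from Fig. 2 at 2 K)
H_E = 25.0    # exchange field in T (= 250 kOe, from H_E = z*J*S/(g*muB))
H_A = anisotropy_field(H_SF, H_E)
print(f"H_A = {H_A * 10:.2f} kOe")   # ~1.24 kOe (1 T = 10 kOe)
```

The result reproduces the quoted anisotropy field of 1.24 kOe.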
Knaflic {\it et al.} found a disappearance of electron paramagnetic resonance (EPR) signals in the vicinity of $T_{\rm N}$ and
observed an antiferromagnetic resonance (AFMR) around zero--magnetic field
below $T_{\rm N}$ using X--band frequency \cite{Knaflic}.
We also observed the disappearance of EPR signals in the vicinity of $T_{\rm N}$, implying a bulk long--range magnetic ordering.
The muon spin relaxation ($\mu$SR) experiments also show a clear muon--spin precession even in zero field below $T_{\rm N}$ \cite{muSR},
proving that the ground state of CsO$_2$ under the magnetic field
is not the field--induced ordered state claimed by the authors of \cite{Knaflic}.
Although we tried to measure the AFMR using the X--band frequency, no corresponding AFMR signal was found down to 5 K.
The AFMR relation strongly depends on the anisotropy as well as the magnetic field.
We speculate that our measured frequency is too high compared with the zero--field excitation energy to observe the AFMR signal.
Moreover, the observed AFMR field may not be simply explained by a general AFMR relation with a uniaxial anisotropy.
As the EPR field at the X--band frequency is about 0.3 T, which is one order of magnitude smaller than the $H_{\rm SF}$,
the AFMR signal at the X--band frequency may be observed at a magnetic field
around the $H_{\rm SF}$, which is higher than the observed field, if we assume only the uniaxial anisotropy.
Therefore, other anisotropy may be taken into consideration to explain the observation of AFMR around zero magnetic field at the X--band frequency.
In fact, in the $\alpha$--phase of solid O$_2$, an orthorhombic magnetic anisotropy was estimated from the AFMR, where
it was suggested that one contribution originates from the O$_2$ molecule itself and the other from the dipolar interaction \cite{Uyeda}.
The magnetic exchange interactions have been calculated in the orbital--ordered KO$_2$,
in which both the crystal field from the cations and the Coulomb interactions are thought to be dominant \cite{Kim, Solovyev}.
Kim {\it et al.} suggested that a coherent ferro--orbital ordering of the O$_2$ molecules is important to realize the experimentally observed AF structure \cite{Smith}.
In CsO$_2$, it has been proposed from the XRPD and Raman scattering experiments that the low--temperature structure
has an $a \times 2b \times 2c$ periodicity, which may be accompanied by a coherent tilting of the O$_2$ molecular axes \cite{Blake}.
This would lead to ferro--orbital ordering of the O$_2$ molecules along the [100] direction and antiferro--orbital ordering
along the [010] and [111] directions.
If we consider the magnetic super--exchange interaction via the Cs$^+$ ions between the nearest neighbor (NN) O$_2$ molecules
within the framework of the Kanamori--Goodenough rule \cite{KG},
it may be expected that a ferromagnetic interaction is dominant along the [100] direction,
while an antiferromagnetic interaction dominates along the [010] and [111] directions.
The [010] direction has been proposed as the 1D axis.
If we refer to the exchange interactions calculated for KO$_2$,
the interchain magnetic exchange interactions between the NN O$_2$ molecules may be strong enough to induce the long--range magnetic ordering in CsO$_2$.
However, as already mentioned, CsO$_2$ shows another structural phase change around $T_1 = 150$ K.
Thus, to understand the magnetic exchange interactions between the O$_2$ molecules, knowledge of the exact orbital--ordered structure at low temperature is indispensable.
In conclusion, we have synthesized high--quality CsO$_2$ and investigated its magnetic properties.
The obtained magnetic phase diagram is similar to that of an antiferromagnet with an easy--axis anisotropy.
The high--field magnetization of CsO$_2$ was measured up to 60 Tesla for the first time and exhibited a remarkable upturn curvature around the saturation field,
implying that CsO$_2$ is a candidate quasi--1D spin--$\frac{1}{2}$ antiferromagnet.
On the other hand, the theoretical calculations at $T=0$ and $T/J_{\rm{1D}}=0.1$ could not wholly reproduce the experiment.
To settle these inconsistencies, further experiments, including a determination of the low--temperature structure, are highly desirable.
We would like to thank Prof. T. C. Kobayashi for valuable discussions on magnetism of CsO$_2$.
The X--ray diffraction patterns were measured in research projects (2017G636) of KEK--PF.
This research is partly supported by KAKENHI grants from Japan Society for the Promotion of Science (15H03529).
\section{Introduction}\label{introduction}
The increasingly precise observations of the gravitational wave signals emitted by merging compact objects
provide unprecedented opportunities to test general relativity (GR) and the nature of black holes and
neutron stars~\cite{PhysRevLett.116.061102,PhysRevLett.116.221101,PhysRevLett.116.241103,PhysRevLett.118.221101,Abbott_2017,PhysRevLett.119.141101,PhysRevLett.119.161101,Abbott_2020,LIGOScientific:2020stg}.
Among the predictions of GR (and also other gravitational theories) is the existence of quasi-normal modes (QNMs),
which describe the characteristic spacetime oscillations of perturbed compact objects.
These (damped) oscillations can be observed after the violent merger of two compact objects.
In this phase, the so-called ringdown, the final remnant forming from the coalescence
settles into an equilibrium stationary configuration by radiating in QNMs.
If the final object is a black hole, the no-hair theorem of GR~\cite{Carter:1971zc,Robinson:1975bv} states that
it must be described by the Kerr geometry~\cite{Kerr:1963ud}, which is
fully characterized by the mass $M$ and spin $J$, with the
latter satisfying the ``Kerr bound'' $|J|\leq M^2$ (in the units $G=c=1$ that we utilize throughout this paper)
to avoid the presence of naked singularities.
While the no-hair theorem holds in GR,
gravitational theories modifying and/or extending it generally yield different black hole spacetimes~\cite{berti_review},
and also different equations for the gravitational perturbations over the background geometry~\cite{barausse_sotiriou,berti_review}.
One way to test GR is therefore to verify that the observed QNMs from
the remnant of a binary black hole merger match those of a Kerr black hole.
This is commonly referred to as ``black hole spectroscopy''~\cite{Berti_2009}, and may become feasible
with future ground or space-based gravitational wave detectors~\cite{Berti:2016lat,Shi:2019hqa}, or even with current data~\cite{Ghosh:2017gfp,Brito_2018,Carullo:2018sfu,Isi:2019aib,PhysRevLett.116.221101}.
From a practical point of view, there are different approaches to the problem.
The first one is ``top-down'', and consists of choosing a specific theory of gravity, finding black hole solutions,
deriving the gravitational perturbation equations, and computing the QNM spectrum.
While this allows one to make precise predictions based on a specific theory, it requires several non-trivial steps, and generally only provides insight into
the very theory under investigation.
A second, ``bottom-up'' approach aims to be as theory agnostic as possible, and assumes a parametrized working ansatz for the black hole
background metric as a starting point.
One can then compute observables
that depend on the background metric alone, e.g. motion of small bodies
with weak internal gravity, which are relevant for instance for
extreme mass-ratio inspirals~\cite{Glampedakis:2005hs} and which follow geodesics of the background metric
in gravitational theories that satisfy the weak equivalence principle~\cite{Will:2018bme,Barausse:2016eii}.
Note however that the computation of the gravitational QNM spectrum cannot be performed easily in this second approach, because of the lack of field equations.
A possibility would be to parametrize the field equations as well. For instance, Refs.~\cite{PhysRevD.100.044040,glampedakis} consider
scalar tensor theories, which are the simplest extension of GR. These theories include
an extra scalar graviton polarization, which Refs.~\cite{PhysRevD.100.044040,glampedakis} couple (via free parameters) to the tensor gravitons of GR.
Refs.~\cite{PhysRevD.100.044040,glampedakis} then study the QNMs of this coupled scalar and tensor system over generic spherical and axisymmetric backgrounds,
in the eikonal limit (see also Refs.~\cite{Carson:2020iik,Carson:2020ter} for similar attempts). While this formalism is very general, in this work we will follow a simpler approach. In more detail,
we will look at the axial sector of the gravitational QNMs of a spherically symmetric and static parametrized black hole metric. The reason
for focusing on the axial sector is that at linear order, the scalar perturbations cannot mix with
the axial gravitational perturbations, because of parity. As a result, the equation for the
axial gravitational QNMs in generic scalar tensor theories is expected to be the same as the Regge-Wheeler equation of GR \cite{PhysRev.108.1063} (at least in the eikonal limit) although on a background differing from Schwarzschild and with a modified potential.
Note that it may be possible to extend this approach to the polar sector too, by resorting to
an effective field theory treatment such as that of Ref.~\cite{Franciolini:2018uyq}. An analysis of scalar perturbations on modified black-hole metrics obtained within effective field theories can be found in Ref.~\cite{Cano:2020cao}.
The main goal of this paper is then to address the question of
how many QNM observations (and with what precision)
one would need to reconstruct a parametrized black hole metric.
This problem was qualitatively tackled in Ref.~\cite{paper8} by using
a scalar QNM toy model with the metric proposed by Rezzolla and Zhidenko (RZ)~\cite{PhysRevD.90.084009} in the spherical and static limit (and
later generalized to the axisymmetric case by Ref.~\cite{Konoplya:2016jvv}).
By studying how much RZ QNMs differ from Schwarzschild QNMs when multiple RZ parameters are non-zero, it was possible to investigate a subset of the RZ parameter space in terms of a \textit{direct problem}.
From the reported results one should expect that the general \textit{inverse problem} is non-trivial, because certain RZ parameter combinations could lead to very similar QNMs, even when they are known with high accuracy.
A different and more general way to approach the inverse QNM problem of different types of non-rotating compact objects has been reported in \cite{paper2,paper5,paper6}.
These works focus on reconstructing the perturbation potential directly from the QNM spectrum by inverting generalized Bohr--Sommerfeld rules, but without direct access to the underlying metric.
\par
Here, we improve on the work presented in Ref.~\cite{paper8} by computing axial gravitational QNMs with a
higher order WKB method~\cite{PhysRevD.68.024018}, for the spherical and static parametrized RZ metric.
The RZ metric has proven to be a useful and economic approximation to exact black hole spacetimes in alternative theories of
gravity \cite{konoplya2020general}, and has been used in different type of applications, e.g., for
black hole shadows \cite{Younsi:2016azx,Mizuno:2018lxz} and for gravitational wave and X-ray tests of the Kerr spacetime \cite{Cardenas_Avendano_2020}.
Focusing on the fundamental $l=2$ and $l=3$ modes and their overtones, which are expected to dominate the ringdown of black holes
resulting from binary mergers,
we construct a Bayesian pipeline for estimating the
parameters of the RZ metric, given a set of QNM observations
from existing or future gravitational wave detectors.
This work is structured as follows. In Sec.~\ref{methods} we outline all our methods and explain the general framework.
This setup is applied to different scenarios in Sec.~\ref{applications}. We discuss our findings in Sec.~\ref{discussion},
before we present our conclusions in Sec.~\ref{conclusions}.
\section{Methods}\label{methods}
In this section we introduce the building blocks that define the framework of this work.
We start with an overview of the RZ metric in Sec.~\ref{RZ-metric} and discuss the equations for its perturbations in Sec.~\ref{perturbation equations}.
The computation of QNMs is described in Sec.~\ref{QNM}.
The different combinations of RZ parameters and the subsets of the QNM spectrum that we consider in this work are introduced in Sec.~\ref{def_spectra}.
A discussion of the range of validity of the RZ parameter space is presented in Sec.~\ref{param_space}.
In Sec.~\ref{noise} we discuss the precision of QNM measurements that can be expected with various
experimental setups, and use that information in the Markov chain Monte Carlo (MCMC) framework that is introduced in Sec.~\ref{pymc3}.
\subsection{The RZ Metric}\label{RZ-metric}
The RZ parametrized metric was introduced to model spherically symmetric black holes beyond GR, in a theory agnostic way.
We summarize its most important properties in the following, but refer to the original publication \cite{PhysRevD.90.084009} for full details.
The RZ metric is given by
\begin{align}
\text{d}s^2 = -N^2(r) \text{d}t^2 + \frac{B^2(r)}{N^2(r)} \text{d}r^2 + r^2 \text{d} \Omega^2,
\end{align}
with $\text{d}\Omega^2 = \text{d}\theta^2 + \sin^2 \theta \text{d}\phi^2$ and two functions $N(r)$ and $B(r)$, which describe the details of the spacetime.
For further convenience, let us use the location of the event horizon, $r_0$, to define the dimensionless coordinate
\begin{align}
x \equiv 1-\frac{r_0}{r},
\end{align}
which ranges from $x=0$ at the event horizon to $x=1$ at spatial infinity.
Another function $A$ is introduced via
\begin{align}
N^2 = x A(x),
\end{align}
with $A(x)>0$ for $0 \leq x \leq 1$.
The two functions $A(x)$ and $B(x)$ are given by
\begin{align}
A(x) &= 1 - \varepsilon(1-x) + (a_0 -\varepsilon)(1-x)^2 + \tilde{A}(x)(1-x)^3,
\\
B(x) &= 1+ b_0(1-x) + \tilde{B}(x)(1-x)^2,
\end{align}
where
$\tilde{A}(x)$ and $\tilde{B}(x)$ describe
deviations from the Schwarzschild limit.
They are introduced as continued fraction expansions
\begin{align}
\tilde{A}(x) &= \frac{a_1}{1+\frac{a_2x}{1+\frac{a_3x}{1+\dots}}},
\\
\tilde{B}(x) &= \frac{b_1}{1+\frac{b_2x}{1+\frac{b_3x}{1+\dots}}}.
\end{align}
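As an illustration, the metric functions above can be evaluated numerically as follows (a minimal sketch; the function and variable names are ours, and the continued fractions are truncated at the highest supplied coefficient):

```python
def continued_fraction(x, coeffs):
    """Evaluate c1 / (1 + c2 x / (1 + c3 x / ...)), truncated at the
    last supplied coefficient; returns 0 for an empty coefficient list."""
    if not coeffs:
        return 0.0
    acc = 0.0
    for c in reversed(coeffs[1:]):
        acc = c * x / (1.0 + acc)
    return coeffs[0] / (1.0 + acc)

def rz_metric(r, r0, eps, a0=0.0, b0=0.0, a=(), b=()):
    """Return (N^2, B) of the RZ metric at radius r, with horizon r0."""
    x = 1.0 - r0 / r
    A = (1.0 - eps * (1.0 - x) + (a0 - eps) * (1.0 - x)**2
         + continued_fraction(x, list(a)) * (1.0 - x)**3)
    B = 1.0 + b0 * (1.0 - x) + continued_fraction(x, list(b)) * (1.0 - x)**2
    return x * A, B

# Schwarzschild limit (all deformation parameters zero, r0 = 2M):
print(rz_metric(4.0, 2.0, 0.0))  # -> (0.5, 1.0), i.e. N^2 = 1 - 2M/r, B = 1
```

Truncating the continued fractions at a finite order mirrors the hierarchical structure of the expansion: setting the higher coefficients to zero leaves the lower ones unchanged.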
In the original work~\cite{PhysRevD.90.084009}, it was further shown how knowledge from
solar system tests of the parametrized post-Newtonian (PPN) metric~\cite{Will:2018bme}
constrains the parameters $a_0$ and $b_0$ to very small values.
Indeed, solar system tests imply
\begin{align}
\varepsilon &= - \left(1-\frac{2M}{r_0} \right),\label{varepsilon}
\\
a_0&=\frac{(\beta-\gamma)(1+\varepsilon)^2}{2},
\\
b_0&= \frac{(\gamma-1)(1+\varepsilon)}{2},
\end{align}
where the deviations of the PPN parameters $\beta$ and $\gamma$ from their GR value of unity are constrained to be of order $ \sim 10^{-4}$~\cite{Will:2018bme}. Therefore, in
particular, one has $|a_0|, |b_0| \sim 10^{-4}$.
Nevertheless, we stress that there is no reason to expect PPN bounds to hold for black hole spacetimes
in theories of gravity that modify or extend GR, since Birkhoff's theorem generally does not hold in these
theories. Examples of theories that reproduce the 1PN metric of GR around stars, but which deviate
from GR at 1PN order in black hole spacetimes include scalar tensor theories. The latter
can present screening mechanisms (e.g. chameleon~\cite{chameleon}, K-mouflage~\cite{kmouflage}, symmetron~\cite{symmetron}, etc.) protecting
local physics from unwanted scalar effects around stars (therefore passing solar system tests),
while still allowing for the existence of scalar charges and 1PN scalar effects in vacuum (see
e.g. \cite{silva,herdeiro,doneva,dima} for such scalarized black holes).
For this reason, in most of this paper we will {\it not} impose PPN bounds on the RZ metric. However, to
allow for comparison with earlier works \cite{paper8,konoplya2020general,Cardenas_Avendano_2020}, we also present some results for
the case in which $a_0=b_0=0$, corresponding to a RZ metric
matching the Schwarzschild one at 1PN order.
\subsection{Perturbation Equations}\label{perturbation equations}
In general, the equations governing the evolution of
linear gravitational perturbations over a black hole background
depend on the gravitational theory under consideration. In GR,
the gravitational field only has two (tensor) polarizations, whose
properties and spectrum are encoded in the Regge-Wheeler~\cite{PhysRev.108.1063} and Zerilli~\cite{PhysRevD.2.2141} equations
(respectively for odd and even metric perturbations on Schwarzschild)
and in the Teukolsky equation~\cite{teuk} (for Kerr perturbations).
In theories extending GR (see e.g. Ref~\cite{berti_review} for a review), not only
can the background black hole spacetime differ from Schwarzschild/Kerr
(as modeled in Sec.~\ref{RZ-metric}), but even if the background
is the same as in GR (as may happen in specific theories~\cite{psaltis}),
new polarizations will generally be present and will alter the form
of the perturbation equations~\cite{barausse_sotiriou}.
Some insight on the form of the perturbation equations when one moves beyond GR
can nevertheless be gained by noting that additional modes (beyond the
spin-2 tensor gravitons of GR) will typically be coupled weakly to gravitational wave interferometers
if the gravitational theory under scrutiny obeys experimental bounds on the equivalence principle
(cf. e.g.~\cite{DS,env_effects,STcollapse}). One may therefore safely focus on
the tensor polarizations, whose coupling to detectors is strongest.
In principle, non-tensor polarizations may couple with the tensor degrees of freedom, e.g. appearing as sources
for the equations governing the latter, but this is not a fundamental obstacle to
computing QNMs (see e.g. Refs.~\cite{glampedakis,mcmanus}). Note also that odd parity perturbations will generally be unaffected
by these couplings, at least in scalar-tensor theories respecting parity (e.g.
Fierz-Jordan-Brans-Dicke-like theories; dilatonic Gauss-Bonnet; Horndeski and beyond Horndeski theories; degenerate
higher order scalar tensor theories, khronometric theory/Ho\v rava gravity, etc).
This is because scalar perturbations have even parity, and therefore cannot mix with
the odd parity sector of the tensor perturbations at linear order.\footnote{Odd tensor modes can in principle mix with pseudoscalar degrees of
freedom (coupled to the Pontryagin density~\cite{DCS,marco_DCS}) or vector modes (e.g. Einstein-\AE ther theory~\cite{ae_theory}). Note
however that while Einstein-\AE ther theory is classically and quantum mechanically stable, theories
with pseudoscalars generically present ghosts, unless they are treated as effective field theories~\cite{marco_DCS}.}
To first approximation, we may therefore be tempted to model the equation for linear gravitational perturbations in the odd sector by the Regge-Wheeler equation of GR, but over a generic RZ background metric. This
generalized Regge-Wheeler equation can be obtained by first writing the spacetime metric as
$g_{\mu\nu}=g^{\rm RZ}_{\mu\nu}+\delta h_{\mu\nu}+{\cal O}(\delta)^2$, with $\delta$ a perturbative
book-keeping parameter and $h_{\mu\nu}$ the metric perturbation. Discarding the ${\cal O}(\delta)^0$ terms of the Einstein equations,
the linear ${\cal O}(\delta)$ terms $\delta R_{\mu\nu}=0$ yield~\cite{hughes}
\begin{equation}\label{eqH}
-\frac12\Box h_{\mu\nu}-\frac12 \nabla_\nu\nabla_\mu h+\nabla_\alpha\nabla_{(\mu} h^\alpha_{\nu)}=0\,,
\end{equation}
with $h=h^\mu_\mu$, $\Box=g_{\rm RZ}^{\mu\nu} \nabla_\mu \nabla_\nu$ and $\nabla$ the covariant derivative defined with the
background connection.
Assuming then that the metric perturbation has odd parity and adopting the Regge-Wheeler gauge, i.e.
\begin{widetext}
\begin{align}
h_{\mu\nu}=&\sum_{lm}h_{\mu\nu,lm} Y^{\ell m}e^{-i\omega t}\,,\\
h_{\mu\nu,lm}=&
\begin{pmatrix}
0&0&-h_0(r)\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}&h_0(r)\sin\theta\frac{\partial}{\partial\theta}\\
0&0&-h_1(r)\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}&h_1(r)\sin\theta\frac{\partial}{\partial\theta}\\
-h_0(r)\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}&-h_1(r)\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}&0&0\\
h_0(r)\sin\theta\frac{\partial}{\partial\theta}&h_1(r)\sin\theta\frac{\partial}{\partial\theta}&0&0
\end{pmatrix}\,,
\end{align}
\end{widetext}
with $Y^{\ell m}$ the spherical harmonics,
the same algebraic manipulations that in GR lead to the Regge-Wheeler equation yield
\begin{align}\label{wave-equation}
\frac{\text{d}^2}{\text{d}{r^{*}}^2}Z + \left[\omega^2 - V_l(r) \right] Z = 0\,,
\end{align}
where $Z=N^2 h_1/(r B)$, $\omega$ is the (complex) gravitational wave frequency, the tortoise coordinate $r^{*}$ is related to the areal radius by
\begin{align}
\frac{\text{d} r^{*}}{\text{d}r} = \frac{B(r)}{N^2(r)}\,,
\end{align}
and the potential reads
\begin{equation}\label{eqV}
V_l(r) = \frac{l(l+1)}{r^2}N^2(r) - \frac{3}{r} \frac{\text{d}}{\text{d}r^{*}} \frac{N^2(r)}{B(r)}.
\end{equation}
As can be explicitly verified, the potential reduces to the Regge-Wheeler potential
of GR in the limit in which the RZ metric reduces to Schwarzschild.
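This limit can be checked numerically. The sketch below (our own notation; the metric functions are passed as callables) evaluates the potential with a finite-difference tortoise derivative and compares it to the Regge-Wheeler potential in the Schwarzschild limit $N^2 = 1-2M/r$, $B=1$:

```python
def axial_potential(r, l, N2, B, K=3.0, h=1e-6):
    """V_l(r) = l(l+1) N^2/r^2 - (K/r) d/dr*(N^2/B), with
    d/dr* = (N^2/B) d/dr evaluated by a central finite difference."""
    g = lambda rr: N2(rr) / B(rr)
    dg_dr = (g(r + h) - g(r - h)) / (2.0 * h)
    return l * (l + 1) * N2(r) / r**2 - (K / r) * g(r) * dg_dr

# Schwarzschild limit: recover the Regge-Wheeler potential for K = 3
M, r, l = 1.0, 4.0, 2
N2 = lambda rr: 1.0 - 2.0 * M / rr
B = lambda rr: 1.0
V_num = axial_potential(r, l, N2, B, K=3.0)
V_RW = (1.0 - 2.0 * M / r) * (l * (l + 1) / r**2 - 6.0 * M / r**3)
print(V_num, V_RW)  # both ~0.140625 at r = 4M
```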
Note that in the geometric-optics limit $l\to\infty$,
Eq.~\eqref{eqH} for gravitational perturbations
must reduce to the (null) geodesics equation,
if gravitational waves are to move at the speed
of light (as verified experimentally to within relative errors
of $\sim 10^{-15}$~\cite{GW170817} and as expected if the weak equivalence principle
is to hold). Indeed, this can be seen by
noting that Eq.~\eqref{eqH} becomes $\Box h^{\mu\nu}+2 R^{\mu\alpha\nu\beta}h_{\alpha\beta}=0$ in the Lorenz gauge $h=\nabla_\nu h^{\nu\mu}=0$ (which can
be chosen on any curved vacuum background without loss of generality~\cite{hughes}). One can then insert the ansatz ${h}_{\mu\nu}\approx A_{\mu\nu} \exp(i S)$ into this equation, keeping only the dominant terms in the limit of large frequencies and wavenumbers ($\partial_\mu S\to\infty$).
This yields the Hamilton-Jacobi equation for massless particles, $g_{\rm RZ}^{\mu\nu} \partial_\mu S \partial_\nu S=0$,
which can be converted explicitly into the null geodesics equation by taking its derivative (see e.g. Sec. 7.8 of Ref.~\cite{defelice} for details).
The fact that gravitational perturbations follow null wavefronts in the geometric optics limit $l\to\infty$
has important implications for the potential \eqref{eqV}, which
should necessarily reduce to that of null geodesics in that limit. Indeed,
one can easily verify that $V_l(r)\approx \frac{l(l+1)}{r^2}N^2(r)$
for $l\to\infty$, while null geodesics of the RZ metric satisfy
\begin{align}
&\frac{N^4}{E^2} \left(\frac{\text{d}{r^{*}}}{\text{d}{\lambda}}\right)^2+V_r=0\,,\\
&V_r=-1+\frac{b^2 N^2}{r^2}\,,
\end{align}
where $\lambda$ is an affine parameter, $b=L/E$ (with $E$ and $L$ respectively the conserved energy and angular
momentum of the orbit) is the impact parameter, and where we have assumed a reference frame where the orbit is
equatorial. (This latter assumption is non-restrictive since we are in spherical symmetry.) As can be seen, the effective
potential for the radial motion of null geodesics, $V_r$, matches $V_l\approx {l(l+1)}N^2(r)/r^2$
in the limit $l\sim b\to \infty$. In particular,
this implies that in the geometric optics limit $l\sim b\to \infty$, the
peak of the effective potential for gravitational perturbations asymptotes to that of the (unstable) circular photon orbit.
This correspondence in turn implies that at lowest order in the WKB expansion (i.e. in the geometric optics limit),
the real parts of the QNM frequencies are multiples of the orbital frequency of the circular photon orbit, while
their imaginary parts are related to the Lyapunov exponents of null geodesics near the circular photon orbit (and thus to the curvature of the effective potential
$V_r$ near its peak)~\cite{wkb1,wkb2}.
This can be intuitively interpreted by thinking of QNMs as generated at the circular photon orbit, and slowly leaking outwards
(since the circular photon orbit is unstable to radial perturbations).
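As a concrete check of this correspondence (a sketch in the Schwarzschild limit of the RZ metric; the grid range and resolution are arbitrary), the peak of the eikonal potential indeed sits at the photon sphere $r=3M$, and its height gives the photon orbital frequency $\Omega_c = 1/(3\sqrt{3}M)$:

```python
import numpy as np

M = 1.0
r = np.linspace(2.1, 10.0, 100_000)
V_eik = (1.0 - 2.0 * M / r) / r**2   # eikonal potential, l(l+1) factored out

r_peak = r[np.argmax(V_eik)]         # location of the circular photon orbit
Omega_c = np.sqrt(V_eik.max())       # photon orbital frequency; Re(omega) ~ l * Omega_c
print(r_peak, Omega_c)               # -> ~3.0, ~0.19245 = 1/(3 sqrt(3))
```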
Two consequences can be drawn from this correspondence between geodesics and gravitational perturbations. To begin with,
we can conclude that the first term in the effective potential \eqref{eqV} is more robust than the second. Indeed, the first term will
be present in any gravitational theory in which gravitational waves satisfy the equivalence principle and
travel at the speed of light, as required to high precision by experiments. The second term in Eq. \eqref{eqV}
is instead less robust, and may depend on the details of the gravitational theory under scrutiny.
To check the robustness of our results, we therefore consider also an alternative
phenomenological potential
\begin{equation}\label{eqV2}
V_l(r) = \frac{l(l+1)}{r^2}N^2(r) - \frac{K}{r} \frac{\text{d}}{\text{d}r^{*}} \frac{N^2(r)}{B(r)}\,.
\end{equation}
Note that $K=3$ corresponds to Eq. \eqref{eqV}, while $K=-1$ would correspond to
a scalar field satisfying the wave equation $\Box \phi=0$ on the RZ metric.
However, for generic scalar tensor theories respecting parity, $K$ will be a function of radius, determined by the background metric.\footnote{An example of a theory with $K$ function of $r$ is khronometric theory~\cite{prepFranchini}. In Ref.~\cite{Cardoso:2019mqo} a theory agnostic approach is presented in which the perturbation equations are parametrized.} For simplicity, in what follows we present results for $K=3$ (to be interpreted as a toy model for a situation where the theory of gravity is fixed and thus the equation for the perturbation is known), and for unknown (but constant) $K$. This latter case is a toy model for a situation in which the gravitational theory is unknown. Note that our method allows in principle for a generic function $K(r)$, which we can parametrize by its value and derivatives at the peak of the potential.
One possible future approach to connect parametrized black hole space-times with gravitational field equations has been proposed recently in Ref.~\cite{Suvorov:2020bvk} by building a gravitational theory around the space-time itself. However, connecting this with the Bayesian analysis conducted in this work seems non-trivial, because the underlying theory depends on the background space-time, which itself is varied throughout the analysis.
Furthermore, again in the light of the null geodesics/gravitational waves correspondence, let us note that it would make sense to combine the bounds on the RZ metric from QNM measurements with those coming from observations of the shadow of M87$^*$ by the Event Horizon Telescope (EHT)~\cite{eht,eht2}. We will address this in a forthcoming publication in Ref.~\cite{shadow_paper}.
\subsection{Quasi-Normal Modes}\label{QNM}
Starting from the general form of the effective potential in Eq. \eqref{eqV2}, we now want to compute the corresponding spectrum of QNM frequencies $\omega_n$.
To do so we assume the standard black hole boundary conditions that describe purely outgoing waves at spatial infinity and purely ingoing waves at the horizon.
The QNMs can then be computed by choosing among the many different techniques that have been reported in the literature over several decades.
Detailed information can be found in Refs.~\cite{Kokkotas:1999bd,Nollert_1999,Berti_2009}, which are classical reviews of the field.
\par
While the list of methods is long, not all are equally well suited for our application.
The rather general form of the RZ metric, which has in principle arbitrary many parameters, as well as the computational cost of Bayesian parameter estimation techniques, require an easily adaptable and fast method.
One such suitable technique is based on the Wentzel-Kramers-Brillouin (WKB) method, which can be used to find approximate solutions to certain types of differential equations \cite{1978amms.book.....B}.
\par
In the specific context of black hole QNMs \cite{1985ApJ...291L..33S,1987PhRvD..35.3621I,PhysRevD.35.3632,PhysRevD.37.3378,PhysRevD.41.374,Kokkotas_1991,PhysRevD.68.024018}, the method is well known for providing an approximate solution for the QNM spectrum $\omega_n$.
The method relies only on the knowledge of the Taylor expansion of the effective potential around its maximum and is known to different orders in WKB theory, e.g., the sixth order approximation has been derived in Ref.~\cite{PhysRevD.68.024018}
\begin{align}\label{wkb-expansion}
\frac{i Q_0}{\sqrt{2 Q^{\prime \prime}_0}} - \Lambda_2 - \Lambda_3 - \Lambda_4 - \Lambda_5 - \Lambda_6 = n + \frac{1}{2},
\end{align}
where $Q(r^{*}) \equiv \omega_n^2 - V_l(r^{*})$ is evaluated at the maximum and primes are derivatives with respect to the tortoise coordinate.
The full expressions of the terms $\Lambda_i$ are rather lengthy, but can be found in the original publication.
They include higher order derivatives of the potential evaluated at the maximum, as well as the overtone number $n$ under consideration.
There is no explicit dependence on $l$ in Eq.~\eqref{wkb-expansion}, because $l$ enters through $V_l(r)$ itself.
Note that the order of the included derivatives increases by two for every WKB order $i$.
\par
In this work we follow an approach that allows for a general number of RZ parameters, and we therefore compute the derivatives numerically with finite differences.
Since the derivatives are taken with respect to the tortoise coordinate, one either has to compute the inverse transformation numerically, or compute the derivatives in terms of $r$ or $x$ and then apply the chain rule iteratively.
Because both options can become problematic in terms of precision and computational time for higher order derivatives, especially when the RZ metric has many free parameters, we stop after $\Lambda_2$.
We have verified that the results are very similar to those obtained when $\Lambda_3$ is included as well.
Because the QNMs used for the parameter estimation are computed with the same method as those we treat as the observed data,
we circumvent the problem that the WKB method is only approximate.
\par
When compared with full numerical results, those of the WKB method are expected to be valid for QNMs with $n < l$, but are less precise and eventually fail for $n \gg l$ (see Ref.~\cite{PhysRevD.68.024018} for a tabulated comparison).
The subsets of QNMs that we consider in this work fall within the valid range $n < l$ of the method.
Note that another advantage of the WKB method is that one can choose among different orders allowing one to adjust precision and computational cost, which is especially important for a Bayesian analysis.
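To make the procedure concrete, the following toy sketch applies the lowest-order truncation of Eq.~\eqref{wkb-expansion}, $\omega_n^2 = V_0 - i(n+1/2)\sqrt{-2V_0''}$ (primes denoting tortoise derivatives at the peak), to the Schwarzschild Regge-Wheeler potential; the numbers are crude at this order, as expected:

```python
import numpy as np

# Lowest-order WKB estimate, omega^2 = V0 - i (n + 1/2) sqrt(-2 V0''),
# applied to the Schwarzschild Regge-Wheeler potential (M = 1, l = 2, n = 0).
M, l, n = 1.0, 2, 0
f = lambda r: 1.0 - 2.0 * M / r
V = lambda r: f(r) * (l * (l + 1) / r**2 - 6.0 * M / r**3)

r = np.linspace(2.5, 6.0, 200_001)   # locate the potential peak on a fine grid
r0 = r[np.argmax(V(r))]

h = 1e-4                              # tortoise derivative: d/dr* = f d/dr,
d2V_dr2 = (V(r0 + h) - 2.0 * V(r0) + V(r0 - h)) / h**2
d2V_drstar2 = f(r0)**2 * d2V_dr2      # since dV/dr = 0 at the peak

omega = np.sqrt(V(r0) - 1j * (n + 0.5) * np.sqrt(-2.0 * d2V_drstar2))
print(omega)  # ~0.399 - 0.088i; exact: 0.3737 - 0.0890i (low-order WKB is crude)
```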
\subsection{Sets of Models and QNMs}\label{def_spectra}
The most general form of the RZ metric has infinitely many free parameters, which obviously cannot be handled in a numerical approach.
Therefore, we study different realizations of the RZ metric, in which only a fixed number of free parameters is considered.
The parameters are not all equally important, as a result of the hierarchical structure of the continued fraction representation.
Besides the parameters of the RZ metric, we recall that we have introduced
a parameter $K$ in the potential given by Eq.~\eqref{eqV2}.
\par
In the following, we will consider constraints on several \textit{models}, which
differ by the parameters that we allow to vary. In more detail, we consider the following models:
\begin{align}
\text{model}_{1} &\equiv \{M, \varepsilon\}, \\
\text{model}_{2} &\equiv \{M, \varepsilon, a_0, b_0\}, \\
\text{model}_{3} &\equiv \{M, \varepsilon, a_1, b_1\}, \\
\text{model}_{K1} &\equiv \{M, \varepsilon, K\}, \\
\text{model}_{K2} &\equiv \{M, \varepsilon, a_0, b_0, K\}.
\end{align}
\par
While the QNM spectrum for each model will in general contain infinitely many modes, any real gravitational wave experiment can only observe a finite subset of them, see Refs.~\cite{London_2014,Brito_2018,Carullo:2018sfu,PhysRevLett.116.221101,Isi:2019aib,Giesler:2019uxc,Cook:2020otn,Forteza:2020hbw} for recent works on this aspect.
The amplitudes with which QNMs are excited
depend on initial conditions of the black hole perturbations, or on the parameters of the progenitor binary
for QNMs produced after a black hole merger.
The modes that we use in this work correspond to the \textit{typical} QNMs that are excited in the ringdown of binary black hole mergers of comparable mass, see Refs.~\cite{London_2014,Brito_2018,Carullo:2018sfu,PhysRevLett.116.221101,Isi:2019aib,Giesler:2019uxc,Cook:2020otn,Forteza:2020hbw}.
We consider in particular the Schwarzschild fundamental mode $n=0$ and the first overtone $n=1$, for $l=2$ and $l=3$.
Since whether all four of these modes or only a subset of them can be observed depends on the source signal-to-noise ratio
and on the gravitational wave detector, we consider two cases (``spectra''): one in which
only the $l=2$ modes are detected, and one in which all four modes are observed. In more detail, we define
\begin{align}
\text{spectrum}_{1} &\equiv \{l=[2], n=[0,1] \}, \\
\text{spectrum}_{2} &\equiv \{l=[2, 3], n=[0,1] \}.
\end{align}
\par
The errors with which we assume that these modes can be measured will be discussed in Sec.~\ref{noise}.
\subsection{Remarks on the RZ Parameter Space}\label{param_space}
While the accuracy of the RZ metric parametrization to describe exact black hole solutions has been studied in several works (e.g. Refs.~\cite{PhysRevD.90.084009,Konoplya:2019fpy,Konoplya:2019goy,konoplya2020general}), using a multi parameter approach for the inverse QNM problem has not been done yet.
Some single-parameter tests using non-QNM data can be found in Ref.~\cite{eht2}, which uses the EHT shadow, and in Ref.~\cite{Cardenas_Avendano_2020}, which uses X-ray data and early-inspiral gravitational wave information.
In the following we elaborate on two different aspects that one should be aware of when using parametrized metrics for an inverse problem.
\par
The first and more fundamental one is the question of which RZ parameter combinations actually describe black holes.
This is non-trivial to assess if multiple parameters are allowed to vary simultaneously, which could in principle lead to unphysical artifacts.
As a simple example consider the special case $M=1$ and only $\varepsilon$ as a free parameter.
The requirement that the RZ metric must represent a black hole bounds $-1 < \varepsilon \leq 1/2$~\cite{PC}.
Similarly, when
$\{\varepsilon,a_0,b_0\}$ or $\{\varepsilon,a_1,b_1\}$ are allowed to vary, with the other parameters set to zero,
this constraint becomes $-1 < \varepsilon \leq (1+a_0)/2$ or $-1 < \varepsilon \leq (1+a_1)/2$, respectively~\cite{PC}.
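These bounds are simple enough to encode in a short validity check. The following sketch is ours (the helper names are hypothetical and $M=1$ is assumed), not part of any released code:

```python
def epsilon_bounds(a=0.0):
    """Allowed range of the RZ parameter epsilon for M = 1, when the only
    other nonzero parameter is a_0 or a_1 (passed as `a`).
    Setting a = 0 recovers the single-parameter bound -1 < eps <= 1/2."""
    return -1.0, (1.0 + a) / 2.0

def is_black_hole(eps, a=0.0):
    """True if the RZ metric with these parameters still describes
    a black hole, i.e. -1 < eps <= (1 + a) / 2."""
    lo, hi = epsilon_bounds(a)
    return lo < eps <= hi
```

The Schwarzschild limit `is_black_hole(0.0)` is of course allowed, while e.g. $\varepsilon = 0.6$ only becomes admissible once $a \geq 0.2$.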
\par
The second aspect is related to our choice of using the higher order WKB method.
It is a priori not clear which combinations of RZ parameters lead only to small deformations of the Regge-Wheeler potential, and which instead produce
large, qualitative differences, e.g. regions where the potential is negative.
The latter case would call into question the validity of the higher order WKB method, and might lead to bound states.
To be sure that the method is justified, one has to quantify the \textit{allowed regions} of the parameter space, which can in principle be used as priors for the Bayesian parameter estimation that we will undertake below.
\par
We attempted to tackle this issue by sampling the parameter space by brute force,
checking at each point whether the potential becomes negative somewhere.
However, the number of total computations $N_\text{total}$ needed for a single choice of $l$ scales as
\begin{align}
N_\text{total} \propto N_\text{res-pot} \times \left( {N_\text{res-param}} \right)^{D},
\end{align}
where $N_\text{res-pot}$ and $N_\text{res-param}$ are the numbers of sampling points for
the potential and for each of the $D$ parameters. The problem thus quickly becomes computationally unmanageable.
In practice, however, one already knows that the parameters describing the Schwarzschild limit are allowed, and
can start by sampling the parameter space around this limit, then progressively moving away from it.
Since the mass of the final black hole remnant is expected to be within 5--10\% of
the mass of the progenitor binary\footnote{Note that in
GR, if the masses of the progenitor black holes are known, the remnant's mass is also known~\cite{morozova}. However, here
we obviously cannot rely on GR, since our goal is to test it. Nevertheless, since 5--10\% is the typical mass loss due to
gravitational wave emission in GR~\cite{morozova}, it seems reasonable to assume that the final mass will be known to within at least that error, also beyond GR.}, in the following we consider $M$ to be within $[0.9, 1.1]$ and vary different RZ parameters.
From our numerical analysis it seems that the $l=2$ potentials are more prone to becoming negative.
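The brute-force scan can be sketched as follows. At $\varepsilon = 0$ the toy potential below reduces to the $l=2$ Regge-Wheeler potential of Schwarzschild, but the $\varepsilon/r^3$ deformation is an ad hoc stand-in of ours, not the actual RZ potential; the grids illustrate the $N_\text{res-pot} \times (N_\text{res-param})^D$ cost:

```python
import itertools

def potential(r, params):
    """Toy stand-in for the l = 2 potential: Regge-Wheeler at eps = 0,
    plus an ad hoc eps / r^3 deformation that can drive it negative."""
    M, eps = params
    return (1.0 - 2.0 * M / r) * (6.0 / r**2 - 6.0 * M / r**3) + eps / r**3

def allowed_region(param_grids, radii):
    """Brute-force scan: keep only the parameter combinations whose
    potential is non-negative at every sampled radius."""
    return [p for p in itertools.product(*param_grids)
            if all(potential(r, p) >= 0.0 for r in radii)]

masses = [0.90, 0.95, 1.00, 1.05, 1.10]      # N_res-param = 5 per parameter
epsilons = [-0.5, -0.25, 0.0, 0.25, 0.5]     # here D = 2 parameters
radii = [2.2 + 0.2 * i for i in range(50)]   # N_res-pot = 50
allowed = allowed_region([masses, epsilons], radii)
```

This small scan costs $50 \times 5^2 = 1250$ potential evaluations; at the same resolution with $D=5$ parameters it would already exceed $10^5$, illustrating the scaling above.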
\par
For the simplest model$_1$, we verified that $\varepsilon$ within $[-0.5, 0.5]$ does not lead to negative potentials.
For model$_2$ we find that ranges of $[-0.15, 0.15]$ for $\{\varepsilon, a_0, b_0\}$ are fine, but extending them further starts becoming problematic (i.e.
some combinations become invalid).
In Fig.~\ref{potential_region_Mea0b0_positive-negative} we show a sample of the potentials where the RZ parameters $\{\varepsilon, a_0, b_0\}$ are within $[-0.3, 0.3]$, and include a few of such invalid combinations.
Note that most of the potentials are positive everywhere, but some become negative close to the horizon.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{{param_space_potentials_Mea0b0_0.30}.png}
\caption{Potentials for $M$ in the range $[0.9, 1.1]$ and for the RZ parameters $\{\varepsilon, a_0, b_0\}$ in the range $[-0.3, 0.3]$. \label{potential_region_Mea0b0_positive-negative}}
\end{figure}
\par
This observation might seem troublesome for parameter estimation, unless priors are chosen such that
negative potentials are excluded from the start.
However, in practice, negative potentials are not only rare, but also tend to
produce large deviations from GR in the QNMs computed with our WKB approximation. As a result,
even if parameters producing negative potentials are drawn during sampling, they will almost always be rejected.
For the cases presented in this work, we explicitly show that the sampled parameter combinations do not include negative potentials or other possible large deviations in Sec.~\ref{applications}.
Also note that even if a certain set of parameters produces large deviations of the potential near the horizon, the early time evolution of perturbations would still contain modes similar to the QNMs computed with a WKB approximation at the potential's peak, while the true QNM spectrum, which might be different, only appears at later times.
This was first studied for ultra compact stars in Ref.~\cite{1996gr.qc.....3024K} and further pursued in Refs.~\cite{PhysRevD.60.024004,2000PhRvD..62j7504F}. Nowadays this phenomenon, related to the leakage of trapped $w$-modes, is known as ``echoes'' and applies to exotic compact objects~\cite{Barausse:2014tra,Cardoso:2016rao}, some types of wormholes~\cite{Bueno:2017hyj,paper5}, and various modifications on the horizon scale, as well as phenomenological models of quantum black holes~\cite{2017PhRvD..96h2004A,Maggio:2017ivp,Nakano:2017fvh,Barcelo2017,Wang:2019rcf,Oshita:2019sat,Cardoso:2019apo,paper9}.
\subsection{Treatment of Noise}\label{noise}
While one can obtain the full QNM spectrum exactly (up to modeling errors due to the WKB approximation
and to numerical errors), real observations will always present a certain degree of uncertainty.
The details of the latter will depend on the specific source parameters, as well as on the properties of the detector and on the data analysis technique
used to extract the signal and estimate its parameters~\cite{Berti:2016lat}.
Since this is a major problem in itself, we have adopted here a simplified approach.
First, we treated observational errors in the reconstructed spectrum by adding Gaussian noise to our theoretically computed QNM frequencies and decay times.
This produces an intrinsic variation in the reconstructed parameters, because every noise realization is unique; the effect is especially pronounced given
the relatively small number of modes that we use as data.
To account for this realization-dependent bias, one would have to repeat each MCMC analysis for many different noise realizations.
While such an analysis is beyond the scope of this work, we have explored several realizations to make sure that our parameter reconstruction works correctly.
However, for the rest of this work we will adopt the noiseless limit, i.e. we inject the exact QNMs as input data for the parameter estimation.
As for the variance of our Gaussian noise, we consider two possibilities.
To mimic the error on the measured QNMs that would be achieved with
Advanced LIGO and Virgo at design sensitivity and events similar to GW150914, we assume that
QNMs are known within $1\sigma$ relative uncertainties of about $10\,\%$: see e.g. Fig.~5 of Ref.~\cite{PhysRevLett.116.221101}, and Figs.~2 and 4 of Ref.~\cite{Isi:2019aib} for uncertainties on the measured QNMs with O1 data; Refs.~\cite{Brito_2018,Carullo:2018sfu,Giesler:2019uxc,Cook:2020otn,Forteza:2020hbw} for
reports on the simultaneous extraction of several QNMs from numerical relativity simulations; and
Refs.~\cite{Yang:2017zxs,Maselli:2017kvl} for the possibility of stacking several modes together to enhance tests of the no-hair theorem. Furthermore, we consider $1\sigma$ relative uncertainties of $1\,\%$ to mimic next generation detectors like the Einstein Telescope~\cite{ET} or LISA~\cite{lisa}, or especially loud events~\cite{Berti:2016lat}.
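The noise injection described above amounts to the following minimal sketch (the function name is ours; the mode value is the standard Schwarzschild $l=2$, $n=0$ frequency for $M=1$, quoted to four digits):

```python
import random

def add_relative_gaussian_noise(qnms, rel_sigma, seed=0):
    """Perturb the real and imaginary part of each QNM independently with
    zero-mean Gaussian noise whose standard deviation is the fraction
    `rel_sigma` of the corresponding true value."""
    rng = random.Random(seed)
    return [complex(w.real * (1.0 + rng.gauss(0.0, rel_sigma)),
                    w.imag * (1.0 + rng.gauss(0.0, rel_sigma)))
            for w in qnms]

# Schwarzschild l = 2, n = 0 mode for M = 1 (approximate value), smeared
# with the "advanced-detector" 10% relative errors.
true_modes = [complex(0.3737, -0.0890)]
observed = add_relative_gaussian_noise(true_modes, rel_sigma=0.10)
```

Setting `rel_sigma = 0.0` recovers the noiseless limit that we adopt for the rest of this work.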
\subsection{Markov chain Monte Carlo}\label{pymc3}
Our Bayesian parameter estimation pipeline relies on Markov chain Monte Carlo (MCMC) techniques.
This class of methods allows for sampling the posterior distribution of the parameters of a model that is used to describe a given set of data.
Since a detailed introduction to Bayesian analysis and MCMC methods is beyond the scope of this work, we only summarize here the key aspects of our framework and refer the interested reader to Ref.~\cite{10.5555/971143} for a comprehensive introduction.
\par
To perform the MCMC analysis we use the
Metropolis-Hastings sampler of the Python-based probabilistic programming framework \textsc{PyMC3}~\cite{pymc3},
which we couple (via a custom theano function) to an external \textsc{C++} code computing the potentials. To enhance the computational performance, we initialize 6 chains that are computed in parallel (12 for model$_3$ and model$_{K2}$). Furthermore, we set 10k tuning steps in the \textsc{PyMC3} subroutine to optimize the sampling; these are discarded from the analysis. Depending on the specific model, we additionally remove at least the first 10k steps in each chain for burn-in. Each chain contains at least 100k steps for the simple models and up to 2000k for the most complex one.
Depending on the choice of the model, the total number of steps and the provided QNMs, one analysis typically takes between several minutes and a few hours.
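Stripped of the \textsc{PyMC3} machinery, the random-walk Metropolis step at the heart of such an analysis can be illustrated with a few self-contained lines; a toy one-dimensional target stands in for the full QNM likelihood, and all names are ours:

```python
import math
import random

def metropolis(log_target, x0, n_steps, step_size, burn_in, seed=0):
    """Random-walk Metropolis sampler. The first `burn_in` samples are
    discarded, mirroring the tuning/burn-in removal described above."""
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    chain = []
    for i in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        logp_prop = log_target(proposal)
        # Accept with probability min(1, exp(logp_prop - logp)).
        if rng.random() < math.exp(min(0.0, logp_prop - logp)):
            x, logp = proposal, logp_prop
        if i >= burn_in:
            chain.append(x)
    return chain

# Toy target: a unit-width Gaussian posterior centered on the injected M = 1.
chain = metropolis(lambda m: -0.5 * (m - 1.0) ** 2, 0.0, 20000, 0.5, 2000)
```

The retained samples then approximate the posterior, whose mean should recover the injected value.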
\par
Our likelihood follows from our simplistic assumption that the measured QNMs are affected
by Gaussian errors. In more detail, we write the likelihood as a product
of Gaussians for the real and imaginary parts of the QNMs that we assume
are measured, centered on the true values of the modes and with
standard deviation corresponding, as discussed above, to $10\,\%$ or $1\,\%$
of the true values.
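Explicitly, this likelihood can be rendered in a few lines of Python (a sketch of ours, with illustrative names; here the width of each Gaussian is tied to the measured value, which coincides with the injected one in the noiseless limit):

```python
import math

def log_likelihood(measured, predicted, rel_sigma):
    """Product of independent Gaussians for the real and imaginary parts
    of each QNM, written as a log-likelihood. The standard deviation of
    each part is `rel_sigma` times the corresponding measured value."""
    logL = 0.0
    for d, m in zip(measured, predicted):
        for x, mu in ((d.real, m.real), (d.imag, m.imag)):
            sigma = abs(rel_sigma * x)
            logL -= 0.5 * ((x - mu) / sigma) ** 2 \
                    + math.log(sigma * math.sqrt(2.0 * math.pi))
    return logL
```

Parameter combinations whose predicted QNMs match the data maximize this quantity, as the sampler explores the posterior.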
\par
Bayesian analysis also requires one to specify priors for the parameters, which reflect our knowledge of them before looking at the data.
We assume a Gaussian prior on the black hole mass $M$.
As mentioned earlier, in GR the final mass is lower than the initial
total binary mass by 5--10\%~\cite{morozova} due to the emission of gravitational waves. The final
value of the mass can be computed from the initial mass of the binary via
simple formulae in GR~\cite{morozova}, but similar formulae do not yet exist
in modified gravitational theories. Nevertheless, one would expect GW energy losses beyond GR
to be of the same order of magnitude as in GR, which would make the final
mass (approximately coinciding with the binary's initial mass) known up to a 5--10\% uncertainty. Moreover,
this prior can be improved simply by calculating the energy flux in the gravitational wave signal (a calculation
that can be performed from the data alone, irrespective of the theory). To account for this additional information,
we choose a standard deviation of $2.5\%$ around the true value ($M=1$) for our Gaussian prior on the mass.
We will see in the following that this prior is rather uninformative, at least for models
with few parameters and for precise QNM measurements, i.e. the posteriors are dominated by the likelihood and not by the prior.
\par
From our discussion of the RZ parameter space in Sec.~\ref{param_space}, we know that the other prior ranges also have to be chosen with caution.
For all RZ parameters, $a_0$, $b_0$, $a_1$, $b_1$ and $\varepsilon$, we adopt flat priors centered on the Schwarzschild values $a_0=b_0=a_1=b_1=\varepsilon=0$
and with width of $\pm 1$. This width is motivated by the expectation that
values of these dimensionless parameters differing from GR by more than ${\cal O}(1)$
should be disfavored by other observations, and particularly GW observations of
the inspiral of BH binaries~\cite{Cardenas_Avendano_2020} (especially with future detectors, see e.g.~\cite{Barausse:2016eii}). For $K$, for similar reasons, we adopt a flat prior centered on the GR value $K=3$, with width of $\pm 5$.
While these priors may contain parameter combinations for which the WKB approximation breaks down, we have verified a posteriori that
the sampling chains tend to avoid those combinations. The robustness of our results with respect to the choice of priors is discussed in detail in Sec.~\ref{discussion-priors}.
\section{Applications}\label{applications}
In this section, we apply our methods to the different models outlined above, and show representative examples of our results in Figs.~\ref{fig_modelMe}, \ref{fig_modelMea0b0}, \ref{fig_modelMea1b1}, \ref{fig_modelMeK}, \ref{fig_modelMea0b0K_1} and \ref{fig_modelMea0b0K_2}.
Each figure summarizes the MCMC parameter estimation, the reconstructed potentials and the reconstructed metric functions for a specific model with given QNM spectrum and QNM measurement errors, as described in the caption.\footnote{Note that we split the left and right panel figure layout for model$_{K2}$ into two separate figures for better readability.}
Although we have studied all combinations of the two different QNM subsets with the two different assumptions for the
measurement errors for each of the five different models (i.e. a total of 20 combinations), we
do not show all of them here for reasons of space, and because the results for most models scale roughly linearly
with the assumed errors of the QNM measurements (once the spectrum and the model are fixed).
The structure of the figures is the same for most models and described in the following.
\par
Each of the MCMC parameter estimation results is shown on the left panel.
There, the diagonal sub-panels show the posterior distributions for the free parameters of the model under investigation.
The sub-panels above the diagonal are scatter plots in which each point stands for one step of the chain.
Because our chains contain around one million steps, we also show the corresponding contour plots in the sub-panels below the diagonal.
Adjacent contours correspond to values differing by 0.3 dex in logarithmic scale (i.e. by a factor 2).
\par
In all panels related to the potentials and metric functions we provide the exact injected functions in solid and dashed black lines, while colored lines show the reconstruction.
In order to visualize and quantify the uncertainties coming from the parameter estimation, we draw 1000 random samples from the MCMC chains and evaluate the potentials and metric functions for those parameters.
These are then added as semi-transparent colored lines, which make regions of high confidence appear more saturated.
Note that the potentials are always shown for $l=2$ and $l=3$, even when the measured QNM spectrum does not contain $l=3$.
In this case, we compute the $l=3$ potential from the parameters reconstructed from the $l=2$ spectrum, in order to see how it compares with the reconstruction obtained when the $l=3$ QNMs are included.
\begin{figure*}
\centering
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Me_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_figure}.png}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\begin{minipage}{1.0\linewidth}
\vfill
\centering
\includegraphics[width=1.0\linewidth]{{model_Me_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_potentials}.png}
\end{minipage}
\vfill
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Me_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_metrics}.png}
\end{minipage}
\vfill
\end{minipage}
\end{minipage}
\caption{Results for model$_1$ obtained by using spectrum$_1$ with $\pm 1\,\%$ relative errors.
\text{Left:} MCMC parameter estimation.
\text{Right top:} Exact (black lines) and reconstructed (colored lines) potentials $V_2(r)$ and $V_3(r)$.
\text{Right bottom:} Exact (black lines) and reconstructed (colored lines) metric functions $g_{tt}(r)$ and $g_{rr}(r)$.
\label{fig_modelMe}
}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\linewidth]{{model_Mea0b0_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_500000_figure}.png}
\end{minipage}
\hfill
\begin{minipage}{0.455\linewidth}
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea0b0_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_500000_potentials}.png}
\end{minipage}
\vfill
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea0b0_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_500000_metrics}.png}
\end{minipage}
\end{minipage}
\end{minipage}
\caption{Results for model$_2$ obtained by using spectrum$_2$ with $\pm 1\,\%$ relative errors.
\text{Left:} MCMC parameter estimation.
\text{Right top:} Exact (black lines) and reconstructed (colored lines) potentials $V_2(r)$ and $V_3(r)$.
\text{Right bottom:} Exact (black lines) and reconstructed (colored lines) metric functions $g_{tt}(r)$ and $g_{rr}(r)$.
\label{fig_modelMea0b0}}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\linewidth]{{model_Mea1b1_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_1000000_figure}.png}
\end{minipage}
\hfill
\begin{minipage}{0.455\linewidth}
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea1b1_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_1000000_potentials}.png}
\end{minipage}
\vfill
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea1b1_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_1000000_metrics}.png}
\end{minipage}
\end{minipage}
\end{minipage}
\caption{Results for model$_3$ obtained by using spectrum$_2$ with $\pm 1\,\%$ relative errors.
\text{Left:} MCMC parameter estimation.
\text{Right top:} Exact (black lines) and reconstructed (colored lines) potentials $V_2(r)$ and $V_3(r)$.
\text{Right bottom:} Exact (black lines) and reconstructed (colored lines) metric functions $g_{tt}(r)$ and $g_{rr}(r)$.
\label{fig_modelMea1b1}}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1.0\linewidth]{{model_MeK_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_figure}.png}
\end{minipage}
\hfill
\begin{minipage}{0.455\linewidth}
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_MeK_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_potentials}.png}
\end{minipage}
\vfill
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_MeK_K_3.0_nmin_0_nmax_1_lmin_2_lmax_2_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_100000_metrics}.png}
\end{minipage}
\end{minipage}
\end{minipage}
\caption{Results for model$_{K1}$ obtained by using spectrum$_1$ with $\pm 1\,\%$ relative errors.
\text{Left:} MCMC parameter estimation.
\text{Right top:} Exact (black lines) and reconstructed (colored lines) potentials $V_2(r)$ and $V_3(r)$.
\text{Right bottom:} Exact (black lines) and reconstructed (colored lines) metric functions $g_{tt}(r)$ and $g_{rr}(r)$.
\label{fig_modelMeK}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{{model_Mea0b0K_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_2000000_figure}.png}
\caption{MCMC parameter estimation for model$_{K2}$ obtained by using spectrum$_2$ with $\pm 1\,\%$ relative errors.
\label{fig_modelMea0b0K_1}}
\end{figure*}
\begin{figure}
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea0b0K_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_2000000_potentials}.png}
\end{minipage}
\vfill
\begin{minipage}{1.0\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{{model_Mea0b0K_K_3.0_nmin_0_nmax_1_lmin_2_lmax_3_sigma_relative_real_0.01_sigma_relative_imag_0.01_what_sigma_M_0.025_M_1.0_e_-0.0_a0_0.0_b0_0.0_a1_0.0_b1_0.0_order_2_samples_2000000_metrics}.png}
\end{minipage}
\caption{Results for model$_{K2}$ obtained by using spectrum$_2$ with $\pm 1\,\%$ relative errors. \text{Top:} Exact (black lines) and reconstructed (colored lines) potentials $V_2(r)$ and $V_3(r)$. \text{Bottom:} Exact (black lines) and reconstructed (colored lines) metric functions $g_{tt}(r)$ and $g_{rr}(r)$.
\label{fig_modelMea0b0K_2}}
\end{figure}
\section{Discussion}\label{discussion}
In this section we discuss our findings, starting with the MCMC parameter estimation and related details in Sec.~\ref{discussion-parameter-estimation}, while the reconstructed potentials and metric functions are addressed in Sec.~\ref{discussion-potential-metric}.
More details on the results that we obtain when solar system PPN bounds are imposed as priors are presented in Sec.~\ref{discussion-ppn}.
We briefly comment on other recent works that use QNMs for the inverse problem in Sec.~\ref{discussion-alt-inv}.
Finally we discuss possible extensions of this work in Sec.~\ref{discussion-extensions}.
\subsection{Parameter Estimation}\label{discussion-parameter-estimation}
From the MCMC results presented in the left panels of Figs.~\ref{fig_modelMe}, \ref{fig_modelMea0b0}, \ref{fig_modelMea1b1} and \ref{fig_modelMeK}, as well as in Fig.~\ref{fig_modelMea0b0K_1}, one can see that constraints can be placed on all of the parameters for all models, though the quality of these constraints differs considerably between models.
\par
In general, the posteriors obtained for the low-dimensional models, which include $\varepsilon$ as the only free RZ parameter, are usually more tightly constrained than those of the higher-dimensional models.
For example, using spectrum$_1$ for model$_1$ and model$_{K1}$ (with results shown in Fig.~\ref{fig_modelMe} and Fig.~\ref{fig_modelMeK}), one finds that $M$ and $\varepsilon$ can in both cases be well constrained, but the additional parameter $K$ clearly impacts the analysis.
While the posteriors of $K$ peak very close to the GR value in all models, the presence of $K$ increases the $68\,\%$ confidence intervals of $M$ and $\varepsilon$ by roughly a factor of $5$.
\par
In models with RZ parameters beyond $\varepsilon$, one still finds that the posteriors of all parameters have their maximum very close to the GR values, but their shapes can be very complex.
The RZ posteriors of model$_2$ (Fig.~\ref{fig_modelMea0b0}) are very steep around the GR values, and show little support further away.
In contrast, those of model$_3$, shown in Fig.~\ref{fig_modelMea1b1}, are clearly less constraining for $a_1$ and $b_1$ and admit a small secondary maximum.
A look at the contour plots reveals the strong correlations between certain RZ parameter combinations that produce QNMs very similar to those of Schwarzschild.
\par
The most complex posteriors are those of model$_{K2}$ in Fig.~\ref{fig_modelMea0b0K_1}.
As can be seen, the posteriors include the GR values, but also admit very small secondary maxima and even stronger correlations between the parameters.
\subsubsection{QNMs and Accuracy}\label{discussion-qnm-accuracy}
Since most of the models studied here have more than two free parameters, it is reasonable to quantify the benefit of measuring multiple QNMs.
Including the $l=3$ fundamental mode and its first overtone ($n=0, 1$) in addition to the $l=2$ fundamental mode and its first overtone improves the parameter estimation, though the individual improvements vary with the model and the assumed QNM measurement errors.
Since this can be seen clearly for the reconstructed potentials, we discuss this aspect for some models in Sec.~\ref{discussion-potentials} in more detail.
\par
Decreasing QNM measurement errors by an order of magnitude provides the strongest improvements.
QNMs measured with $1\,\%$ errors allow for constraining the posteriors within the prior bounds for all models.
For model$_{K2}$, which has five free parameters, the $l=3$ QNMs have to be used as well to achieve this.
For some of the other higher dimensional models, we also find non-trivial secondary maxima for the posteriors when the less precise QNMs are used.
\subsubsection{Scaling with Relative Errors}
When the posteriors are well constrained within the limits of the priors and are not multi-modal, we also look at whether our results scale with the relative errors with which we assume that the QNMs are measured.
To this purpose, we relate the width of the $68\,\%$ credible interval of the reconstructed parameters $P_i$, which we denote by $2 \times \sigma_{P_i}$, with the constant relative errors that we have assumed for a given set of QNMs, which we denote by $\delta_\text{QNM}$.
Although there are some minor variations between different models, we generally find
\begin{align}
\frac{\sigma_{P_i}}{\delta_\text{QNM}} \approx \text{constant}_i.
\end{align}
We have also verified this scaling for relative QNM errors of $0.1\,\%$.
The scaling holds for model$_1$ and model$_{K1}$, while the higher dimensional models are more prone to presenting secondary peaks when the QNM precision is $\pm 10\,\%$, in which case the notion of a credible interval becomes less meaningful.
For the same reason, we note that the scaling is not expected to hold in the opposite extreme either, i.e. for very large (e.g. $100\,\%$) relative errors.
\subsubsection{Priors}\label{discussion-priors}
As expected, we find that the posteriors are more constrained in the lower dimensional models than in the higher dimensional ones, if one assumes the same spectrum
as data.
Since $M$ is the leading order parameter of the RZ metric, it is generally the best constrained one,
and the width of its posterior distribution is typically smaller than the already tight prior that
we assume on it. This is particularly evident for simple models, e.g. for model$_1$ in Fig.~\ref{fig_modelMe},
where $M$ has a $68\,\%$ credible interval of $[0.995, 1.005]$.
\par
Regarding the RZ parameters and $K$,
for which we recall that we assume large flat priors (respectively $[-1,1]$ and $[-2,8]$),
the posteriors are generally constrained to be well within the priors, i.e. our results are robust. Only for model$_{K2}$ does one find
tails that tend to extend outside the prior ranges for $a_0$ and $b_0$. However,
as we have already mentioned, $a_0,\,b_0\gtrsim 1$ are
very likely disfavored by GW observations of the inspiral and X-ray tests~\cite{Cardenas_Avendano_2020}, and possibly by other observables not directly related to QNMs (e.g. gravitational redshift, geodesic motion, etc.).
For all of the cases shown here, the RZ posteriors peak around their Schwarzschild values, and also the posteriors of $K$ peak at the expected GR value $K=3$.
Which RZ parameters can be best constrained depends, however, to some extent on the assumed set of measured QNMs and their errors.
For instance, for some of the higher dimensional models the posterior bounds are less stringent if the errors of the measured QNMs are
10\,\%, especially if the $l=3$ QNMs are not measured. In that case, at least for some parameters, the posterior widths may even be comparable with the
priors. In other cases, e.g. for model$_{K2}$ shown in Fig.~\ref{fig_modelMea0b0K_1}, the posteriors
have secondary maxima and present strong non-trivial correlations between the RZ parameters (even though it is unclear if the secondary maxima
correspond to RZ metrics describing non-pathological black holes).
\subsection{Reconstruction of Potentials and Metric}\label{discussion-potential-metric}
In the following, we first discuss the reconstruction of the potentials in Sec.~\ref{discussion-potentials}, before addressing the reconstruction of the metric functions in Sec.~\ref{discussion-metrics}.
The results of an additional model in which we enforce the PPN constraints is discussed separately in Sec.~\ref{discussion-ppn}.
\subsubsection{Effective Potentials}\label{discussion-potentials}
The reconstructed effective potentials $V_2(r)$ and $V_3(r)$ are shown in the right top panels of Figs.~\ref{fig_modelMe}, \ref{fig_modelMea0b0}, \ref{fig_modelMea1b1} and \ref{fig_modelMeK}, as well as in Fig.~\ref{fig_modelMea0b0K_2}.
The quality of the reconstruction is clearly related to how well the RZ parameters can be determined.
Since this depends in turn on the underlying model being used, it is not surprising that the potential obtained from model$_1$, shown in Fig.~\ref{fig_modelMe}, is more precisely reconstructed than the one for model$_{K1}$, shown in Fig.~\ref{fig_modelMeK}.
Since the QNM measurement errors play a major part in how well the parameters can be recovered, we note that the higher dimensional models can yield
reconstructed potentials as good as those of lower dimensional models, if the latter use less precise QNMs.
\par
As expected from the asymptotic behavior of the RZ metric, the uncertainty in the potentials is minimal for large $r$, because in that region the behavior is dominated by $M$ only.
Since the QNMs are related to the potential at its peak, it is not surprising that the potential becomes drastically less well determined away from the maximum, as one approaches the horizon. This is especially the case for higher dimensional models.
\par
Because adding the $l=3$ QNMs improves the reconstruction of the parameters, one might naively expect a difference between the $l=2$ and $l=3$ potentials according to whether the $l=3$ QNMs have been used or not.
However, because both potentials depend on the same number of parameters and are constructed almost identically, the impact of including the $l=3$ QNMs depends on the specific model.
When comparing the reconstructed potentials of model$_1$ in Fig.~\ref{fig_modelMe} with those of model$_{K1}$ in Fig.~\ref{fig_modelMeK}, one sees that the $l=2$ QNMs recover the $l=2$ and $l=3$ potentials with comparable precision for the first model, but the $K$ dependence in model$_{K1}$ makes the $l=3$ potential less constrained.
However, when adding the $l=3$ QNMs, we find that the reconstruction becomes comparable also for model$_{K1}$.
\par
We also note that because the reconstructed potentials present a single maximum, using the WKB method is indeed justified.
\subsubsection{Metric Functions}\label{discussion-metrics}
The reconstructed metric functions $g_{tt}(r)$ and $g_{rr}(r)$ are shown in the bottom right panels in Figs.~\ref{fig_modelMe}, \ref{fig_modelMea0b0}, \ref{fig_modelMea1b1} and \ref{fig_modelMeK}, as well as in Fig.~\ref{fig_modelMea0b0K_2}.
The relatively small uncertainties for large values of $r$ are expected by construction, because the RZ metric approaches the Schwarzschild metric asymptotically and $M$ is well constrained.
Because the information obtained from the QNMs originates from the region around the maximum of the potential, the metric is also well reconstructed there.
As for the potentials, the reconstruction of the metric functions also shows some non-trivial differences throughout the different models and QNM subsets.
For models that have $\varepsilon$ as the only RZ parameter, the reconstruction is similar, but there are significant differences when one includes $b_0$ or $b_1$.
This can be seen most drastically when comparing the results shown in Fig.~\ref{fig_modelMe} with the ones in Fig.~\ref{fig_modelMea1b1}.
This finding can be explained with a closer look at the structure of $g_{tt}(r)$ and $g_{rr}(r)$ provided in Sec.~\ref{RZ-metric}, which reveals that $g_{tt}(r)$ only depends on $\varepsilon$ and $a_0$ or $a_1$, while $b_0$ or $b_1$ only appear in $g_{rr}(r)$.
The additional degree of freedom of $g_{rr}(r)$ causes its less precise reconstruction.
\subsection{PPN Constraints}\label{discussion-ppn}
The RZ parameters used in model$_3$ are inspired by the PPN constraints $|a_0|, |b_0| \sim 10^{-4}$, which would allow one to set those parameters essentially to zero.
While these bounds may not hold for all alternative theories of gravity,
as discussed in Sec.~\ref{RZ-metric}, we consider them here for comparison
with previous work assuming them \cite{paper8,konoplya2020general,Cardenas_Avendano_2020}.
Since $a_0=b_0=0$, model$_3$ includes the higher order parameters $a_1$ and $b_1$ with flat priors on $[-1,1]$.
The parameter estimation for this model, shown in Fig.~\ref{fig_modelMea1b1}, is more challenging than for model$_2$, which is shown in Fig.~\ref{fig_modelMea0b0}.
For this reason, we only report results for the optimistic case of spectrum$_2$ (i.e., with small relative errors of $1\,\%$ on the $l=2$ and $l=3$ modes).
Indeed, for less precise (i.e. 10\%) QNMs or with $l=2$ modes only, we could not constrain all parameters completely within the priors.
This may occur because $a_1$ and $b_1$ appear as higher order parameters, hence deviations of the potentials and metric functions
away from the Schwarzschild baseline only grow significantly close to the horizon.
Overall, our results show that even the higher order RZ parameters can be constrained by using QNMs, but only under optimistic conditions (i.e., multiple and precise QNM measurements).
\subsection{Alternative Inverse QNM Approaches}\label{discussion-alt-inv}
Finally, we also comment briefly on the differences between the present work and other recent related efforts on the inverse QNM spectrum problem of compact non-rotating objects \cite{paper2,paper5,Konoplya:2018ala,paper6,paper8}.
WKB theory comes in many realizations, and has been applied in different ways depending on the underlying type of QNM spectrum.
In the case of ultra-compact horizonless objects, for a review on which we refer to \cite{Cardoso:2017cqb}, one finds that there exist long lived trapped modes \cite{1991RSPSA.434..449C,1994MNRAS.268.1015K}.
For these systems, it is possible to use a generalized Bohr-Sommerfeld rule to describe the spectrum \cite{2014PhRvD..90d4069C,paper1}, and furthermore to invert it in order to constrain the potential \cite{paper2,paper5}.
While the potentials and QNMs are qualitatively different from Schwarzschild for those objects, the method itself does not require a metric or any type of arbitrary parametrization of the potential.
However, as a trade-off, the typical number of QNMs required for this method to work is large, and the reconstruction is in general not unique; indeed, both features
are typical hallmarks of inverse spectrum problems.
The advantage of this approach is nevertheless that the reconstructed potentials could have details that are not described by a finite set of metric parameters, which is an intrinsic limitation when following a parametrization approach for inverse problems.
By following a related approach, it has also been possible to constrain potentials from Hawking radiation \cite{paper7}.
In Ref.~\cite{Konoplya:2018ala}, the higher order WKB method has been combined with a Morris-Thorne ansatz for the metric to approximate wormholes by using their high frequency QNM spectrum as assumed data.
\subsection{Possible Extensions}\label{discussion-extensions}
We consider the work presented here as a proof of principle effort, which quantifies how well QNMs can be used to constrain black hole metrics that deviate from GR.
A treatment of the full problem beyond our toy model requires the knowledge of the field equations of theories beyond GR, which are obviously unavailable in a theory agnostic approach such as ours.
Another limitation is our focus on non-rotating black holes, since binary black hole mergers will always produce a spinning remnant~\cite{spin1,spin2,spin3}.
While the RZ metric has also been generalized to describe rotating black holes in Ref.~\cite{Konoplya:2016jvv}, the lack of theory agnostic field equations in the rotating case makes it less clear how to proceed in this direction.
One possibility would be to work in terms of a slow rotation approximation, which for the axial sector has been studied in Ref.~\cite{2015EPJC...75..560P}.
Another possible extension of this work may be a Bayesian comparison between different realizations of the RZ metric (i.e. RZ metrics with
different numbers of parameters), to determine the optimal number of free parameters needed to describe a given set of QNM data.
Also, it may be beneficial to incorporate analytic constraints on the RZ parameters, similar to those discussed in Sec.~\ref{param_space}, directly in the MCMC sampling (using rejection methods). This would allow for a larger prior parameter space, which would be of interest in situations where the data
are not very informative and when even more RZ parameters are considered.
Finally, an extension of the present framework to incorporate other black hole constraints, like the size of the shadow as observed by the Event Horizon Telescope (EHT) collaboration, is currently in preparation~\cite{shadow_paper}.
\section{Conclusions}\label{conclusions}
Connecting the rising field of experimental gravitational wave physics with fundamental theoretical problems is among the most promising research avenues
in gravitational physics.
In this work we have demonstrated, as a proof of principle, how well the observation of black hole QNMs by gravitational interferometers can be used to constrain the spacetimes of non-rotating black holes.
By studying several realizations of the RZ metric, as well as an additional degree of freedom of the effective perturbation potential, we have explicitly connected QNMs with phenomenological parameters characterizing deviations from GR.
Since real experiments cannot observe the full QNM spectrum with infinite precision, we have limited our study to the $l=2$ and $l=3$ modes,
considering both the fundamental mode and the first overtone ($n=0$ and $n=1$),
and we have assumed several possible measurement errors (between 1\,\% and 10\,\%) for the QNM frequencies and decay times, to mimic the
effect of various gravitational wave detectors.
\par
With this setup, knowledge of the $l=2$ fundamental and first overtone modes is already enough to constrain models with two or three free parameters.
The more involved models including up to five free parameters require also the corresponding $l=3$ modes for a reasonable parameter estimation.
As expected, the largest improvement in the parameter estimation is achieved when the QNMs are known with higher precision.
In this situation, in spite of the limited number of QNMs, it is possible to constrain even the higher dimensional parametrization models.
Besides the reconstruction of the metric parameters, we have also quantified and visualized the errors on the corresponding potentials and metric functions.
While the general problem of rotating black holes is conceptually and computationally far from trivial, we have demonstrated here that Bayesian parameter estimation and the higher order WKB method provide a suitable framework at least for the non-rotating limit.
Overall our results suggest that studying the inverse QNM problem is very promising even in the presence of a finite number of QNM measurements, and allows for using the ringdown to put constraints on parametrized black holes in gravitational theories beyond GR.
Since other observational approaches, e.g. shadows obtained by the EHT or X-ray spectroscopy, may also put constraints on the same parametrized black hole geometries, it would be interesting to combine them with QNM bounds. We will address this problem more thoroughly in future work.
\begin{acknowledgments}
We thank Luciano Rezzolla and Prashant Kocherlakota for useful discussions on the RZ parameter space and valuable feedback on the manuscript.
Furthermore we also thank Kostas D. Kokkotas for sharing his insights on several aspects of our work.
We also want to thank the anonymous referee for their valuable comments, which have strengthened this work considerably.
We acknowledge financial support provided under the European Union's H2020 ERC Consolidator Grant
``GRavity from Astrophysical to Microscopic Scales'' grant agreement no. GRAMS-815673.
\end{acknowledgments}
\section{Introduction}
B2~1215+30, also commonly referred to as ON~325 or 1ES~1215+303, was
first detected in the Bologna Northern Cross telescope survey
conducted at 408 MHz \citep{1970A&AS....1..281C}. It was one of the
first BL Lac-type objects to be identified \citep{1971Natur.231..515B}
and was one member of the small set of objects used to define the
class. The distance to this source is uncertain and two different
redshift values, both obtained from spectroscopic measurements, can be
found in the literature:
z~=~0.130 (\citealp{2003ApJS..148..275A};
NED\footnote{\url{http://ned.ipac.caltech.edu/}})
and z~=~0.237 (\citealp{1993ApJS...84..109L};
Simbad\footnote{\url{http://simbad.u-strasbg.fr/simbad/}}).
A 10-minute exposure with the FAST instrument on the FLWO 60''
telescope in 2011 did not yield any obvious emission lines in the
continuum spectrum to resolve this discrepancy (E.~Falco, priv. comm.).
Similarly, no spectral features were evident in a high signal-to-noise
spectrum (S/N of 50$-$120 from the red to the blue side) we obtained with the Lick
Observatory Kast double spectrograph on the Shane 3-m telescope on 13
February 2013 (MJD 56336).
BL Lac objects and flat spectrum radio quasars (FSRQs)
belong to the most extreme sub-class of active galactic nuclei (AGN),
named blazars. Their relativistic jet is oriented close to the
observer's line-of-sight. They show rapid variability at all
wavelengths with the fastest being observed at very high energies
(VHE; E$>$100~GeV) on time scales of minutes
\citep{2007ApJ...664L..71A,2011ApJ...730L...8A,2013ApJ...762...92A}.
The spectral energy distribution (SED) of blazars is dominated by
non-thermal emission and consists of two distinct, broad components.
The low-energy component ranges from radio to UV/X-rays and is
widely believed to be due to synchrotron emission from
ultra-relativistic electrons in the jet magnetic field.
To explain the origin of the second component, peaking between X-rays and
gamma rays, two fundamentally different scenarios exist, dominated by
either leptonic or hadronic emission.
In leptonic scenarios, the high-energy radiation is produced via
inverse-Compton scattering of the ultra-relativistic electrons
responsible for
the synchrotron emission. Possible seed photons are synchrotron
photons within the jet (synchrotron self-Compton, SSC model), or
external photons (external Compton, EC model) from the disk, the
broad line region, or the jet.
In hadronic scenarios, protons are accelerated to sufficiently high
energies and the high-energy emission is dominated by neutral pion
decay as well as hadronic synchrotron radiation.
For a review of different blazar models see
\citet{2012arXiv1205.0539B} and references therein.
Based on its SED, B2 1215+30 was suggested
by \citet{2002A&A...384...56C} as a potential TeV source. It is
now classified as a bright intermediate-frequency-peaked BL Lac object
(IBL) based on its
synchrotron peak location at $10^{15.6}$~Hz \citep{2006A&A...445..441N}.
It is listed in the \textit{Fermi} bright-source list
\citep{2009ApJS..183...46A}, and appears in later \textit{Fermi} catalogs
\citep[e.g.][]{2011ApJ...743..171A}, where it is classified as a
high-synchrotron-peaked BL Lac (HSP).
In early January 2011, B2~1215+30 was detected in the VHE band by
MAGIC during observations triggered by an optical high state
\citep{2011ATel.3100....1M}.
The flux above 200~GeV was $(7.7 \pm 0.9) \times 10^{-12} \,
\mathrm{cm}^{-2}\, \mathrm{s}^{-1}$ with a photon spectral index of
$\Gamma = 2.96 \pm 0.14$ \citep{2012A&A...544A.142A}.
In this paper we report on the results of VERITAS observations
taken in the direction of B2~1215+30 between December 2008 and
May 2012. This blazar is in the same field of view as
1ES~1218+304\footnote{The two sources are 0.76\degr\ away from each other.}, a
bright VHE blazar which is regularly observed
by VERITAS \citep{2011arXiv1110.0038W}.
A large part of the data presented here originates from observations
taken on 1ES~1218+304.
\section{VERITAS: VHE gamma-ray observations}
VERITAS is an array of four imaging atmospheric Cherenkov telescopes
located in southern Arizona. It is sensitive to gamma-ray energies
from 100~GeV to about 30~TeV and has been fully operational since Fall 2007.
Short Cherenkov light flashes produced in extensive air showers are
focused by 12~m diameter reflectors onto fast-recording
cameras. Each camera is equipped with 499 photomultiplier tubes with a
total field of view of 3.5\degr.
In Summer 2009, one of the four telescopes was moved to a new
location. This yielded about $30\%$
sensitivity improvement and reduced the observation time needed to
detect a 1\% Crab Nebula-like source with 5 standard deviations
($5\sigma$) from 48~hours to less than 30~hours \citep{2011arXiv1111.1225H}.
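The quoted reduction in detection time follows from significance growing as the square root of the observation time in the background-limited regime, so the required time scales inversely with the square of the sensitivity. A quick plausibility check (the quadratic scaling is a standard assumption sketched here, not a statement from the cited instrument paper):

```python
# Detection time to reach a fixed significance scales as 1 / sensitivity^2,
# since significance grows as sqrt(observation time).
t_before = 48.0           # hours to detect a 1% Crab source at 5 sigma
sensitivity_gain = 1.30   # ~30% sensitivity improvement after the relocation
t_after = t_before / sensitivity_gain**2
print(f"{t_after:.1f} h")  # ~28.4 h, consistent with 'less than 30 hours'
```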
The observations reported here include observations of B2~1215+30 and
1ES~1218+304, two sources which are separated by 0.76\degr. All
observations were taken in ``wobble mode'' where the source
position is offset by 0.5\degr\ from the camera center to
allow for simultaneous background estimation
\citep[e.g.][]{2007A&A...466.1219B}. Combining the
observations on both sources, VERITAS observed B2~1215+30 for more
than 93 hours between December 2008 and May 2012.
The data are divided into three data sets, corresponding to yearly
observation epochs. The first one spans 34 hours from December 2008 to
May 2009 at a mean zenith angle of 20\degr, the second data set was
recorded between January and June 2011 (42 hours) at a mean zenith
angle of 15\degr, and the third data set was taken from January to
May 2012 (17.5 hours) with a mean zenith angle of 12\degr.
Most of the observations presented here had
1ES~1218+304 as the principal target, resulting in different pointing
offsets from the position of B2~1215+30 (from 0.3\degr\ to 1.3\degr).
Given that the radial acceptance of the camera is not flat, this causes a
lower average sensitivity for the VERITAS exposure. Correcting
for this effect, the total effective exposures on B2~1215+30 are 29,
38, and 15 hours for the different observation epochs.
After run selection and nightly calibration, image cleaning is
performed to remove the night sky background contamination from the
shower images. These images are then parameterized using a
second-moment analysis \citep{1985ICRC....3..445H}. Additionally, a
log-likelihood fitting algorithm is applied to recover truncated
images at the edge of the camera. After image quality cuts,
the shower direction and core location are reconstructed for events having a
minimum of three telescope images.
For each event the energy is estimated from lookup tables, with
an energy resolution of about 15-20\%.
To separate the gamma-ray events from the hadronic events, a set of optimized
cuts based on image parameters is applied, as described in
\citet{2008ApJ...679.1427A}. The optimization of those cuts has been
performed \textit{a priori} on a 5\% Crab Nebula-like source and yielded an
energy threshold of about 250~GeV for observations at 20\degr\ zenith
angle.
The remaining background is estimated using a ring-background
model. The ON region is circular,
centered on the source position
with radius $\theta \leq 0.09\degr$. The OFF region is defined as a
ring, placed around the ON region, with inner and outer radii of
0.46\degr\ and 0.54\degr, respectively. The radii are chosen so that
the ratio between ON and OFF area is 1:10. The normalization $\alpha$
is given by the area ratio modified by the radial acceptance of the camera.
Regions around bright stars (V magnitude $<$~7), as well as the region
around the position of 1ES~1218+304, were excluded from the background
estimation.
The analysis of the total data set over the time period from December
2008 to May 2012 yields an excess of 259 events over the background.
The resulting detection significance is $8.9 \sigma$ according to
Eq.~17 in \citet{1983ApJ...272..317L}.
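The significance is computed with Eq.~17 of Li \& Ma, which for ON counts $N_{\mathrm{on}}$, OFF counts $N_{\mathrm{off}}$ and normalization $\alpha$ can be sketched as follows (the counts below are illustrative, not the actual event numbers of this analysis):

```python
from math import log, sqrt

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983): significance of an ON-source excess
    given OFF-source background counts and the normalization alpha."""
    n_tot = n_on + n_off
    term_on = n_on * log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * log((1.0 + alpha) * n_off / n_tot)
    return sqrt(2.0 * (term_on + term_off))

# Illustrative counts, with alpha = 0.1 as for the 1:10 ON/OFF area ratio:
print(round(li_ma_significance(100, 500, 0.1), 2))  # ~5.85
```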
The results of the three observing periods are presented in
\reftab{tbl-1}.
In 2011, the source was clearly detected with a significance of
$10.4 \sigma$, while in 2008/09 and in 2012 the source is not detected
with a significance greater than $5 \sigma$.
Before presenting the results of these two latter periods, we
concentrate on the 2011 data set where the significant detection
allows for a spectral analysis.
\reffig{fig1} shows the significance sky map for the 2011
data set.
To determine the position of the VHE gamma-ray emission, a symmetric
2D Gaussian was fitted to the excess sky map (binned to
$0.05\degr$) of this data set. It revealed a point-like excess with
the best-fit source position at R.A. = $12^h17^m48.5^s \pm 1.7^s$,
DEC = $+30\degr06'06'' \pm 25''$ with a systematic uncertainty of $50''$.
The VERITAS source is thus named VER~J1217+301, and is positionally
consistent with the BL Lac object B2~1215+30 \citep{1998AJ....116..516M}.
The derived differential photon spectrum of the 2011 data set is shown
in \reffig{fig2}. It can be fitted by a power law ($\chi^2$/ndf = 1.25/2):
dN/dE = $F_0 (E/300\,\mathrm{GeV})^{-\Gamma}$,
with $F_0 = (2.3 \pm 0.5_{\mathrm{stat}} \pm 0.9_{\mathrm{syst}})
\times 10^{-11}$ cm$^{-2}$ s$^{-1}$
and $\Gamma = 3.6 \pm 0.4_{\mathrm{stat}} \pm 0.3_{\mathrm{syst}}$.
The flux above 200~GeV is $(8.0 \pm 0.9_{\mathrm{stat}} \pm 3.2_{\mathrm{syst}})
\times 10^{-12}\, \mathrm{cm}^{-2} \mathrm{s}^{-1}$.
This corresponds to 3.4\% of the Crab Nebula flux
\citep{1998ApJ...503..744H} above the same energy threshold.
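For a power law with $\Gamma > 1$, the integral flux above a threshold $E$ follows analytically as $F(>E) = F_0\,E_0\,(E/E_0)^{1-\Gamma}/(\Gamma-1)$. A quick consistency check, assuming $F_0$ is quoted per TeV (an assumption implied by the units of the integral flux); the small offset from the quoted $8.0 \times 10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ is consistent with rounding of $F_0$ and $\Gamma$:

```python
F0 = 2.3e-11      # cm^-2 s^-1 TeV^-1 at the pivot energy (assumed units)
E0 = 0.3          # TeV (pivot energy, 300 GeV)
Gamma = 3.6       # photon index
E_thr = 0.2       # TeV (200 GeV threshold)

# Analytic integral of F0*(E/E0)^-Gamma from E_thr to infinity:
F_int = F0 * E0 / (Gamma - 1.0) * (E_thr / E0) ** (1.0 - Gamma)
print(f"{F_int:.2e} cm^-2 s^-1")  # ~7.6e-12
```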
A 29.5-day binned light curve above 200~GeV is produced and shown in
\reffig{fig3}. A constant fit to these flux points showed no evidence
for deviation from a steady flux ($\chi^2/ndf = 4.7/5$).
No significant flux variations within any monthly bin were detected
either.
An analysis of the data available outside the 2011 season revealed
lower gamma-ray fluxes (see \reftab{tbl-1}). The 2008/09 data set
analysis resulted in a gamma-ray excess of $1.1\sigma$ significance at
the source location.
Using the method of \citet{1983NIMPR.212..319H}, this excess
corresponds to a 99\% upper limit above 200~GeV of 2\% of the Crab
Nebula flux, assuming the same spectral index as derived in 2011.
In 2012, the source is observed with a significance of
$3.2\sigma$. Given that it is an established VHE emitter, a flux is
derived:
$F(E>200\mathrm{GeV}) = (2.8 \pm 1.1_{\mathrm{stat}} \pm 1.1_{\mathrm{syst}})
\times 10^{-12}\, \mathrm{cm}^{-2} \mathrm{s}^{-1}$,
corresponding to about 1.2\% of the Crab Nebula flux above the same
energy threshold.
The hypothesis of a constant flux between the three seasons is
excluded at the level of $4.5\sigma$.
This indicates that the source was significantly fainter in 2008/09 and 2012
compared to the relatively bright flux state in 2011, as shown both by
MAGIC and VERITAS measurements.
\section{Fermi-LAT: High-energy gamma-ray observations}
The Large Area Telescope (LAT) on board the {\it Fermi} satellite
is a pair-conversion gamma-ray telescope sensitive to photon
energies from 20 MeV to a few hundred GeV
\citep{2009ApJ...697.1071A}.
A binned likelihood analysis was performed using the LAT ScienceTools
(version v9r23p1) and P7SOURCE\_V6 instrument response functions.
``Diffuse'' class events with $0.2 < E/\mathrm{GeV} < 100$ in a
square region of interest (ROI) of $20\degr \times 20\degr$ around
B2~1215+30 were selected.
The center of the ROI was shifted by 4\degr\ towards the bright FSRQ
4C~+21.35 (8.87\degr\ away from B2~1215+30) to avoid edge effects. Further quality selection was performed by rejecting events with a
zenith angle $> 100\degr$ and a rocking angle $> 52\degr$ in order to
avoid contamination from albedo photons from the Earth's limb.
A background model was constructed including nearby gamma-ray sources
and diffuse emission. All known gamma-ray sources from the
second \textit{Fermi} catalogue \citep[2FGL;][]{2012ApJS..199...31N} within
the ROI were included in the model. Sources outside the
ROI, but within 5\degr\ of the ROI edges, were also included to
account for possible photon contamination due to the large LAT point
spread function. As in the 2FGL catalogue, a log-parabola function was
used for sources with significant spectral curvature. Otherwise,
spectra were described as a power law. The spectral parameters of the
sources inside the ROI were left free during the fitting
procedure. Sources outside the ROI, but within 5\degr\ of the
ROI edges, had their spectral parameters fixed to the 2FGL catalog values.
The galactic and extragalactic diffuse gamma-ray emission together
with the residual instrumental background were also modeled using the
publicly-available files\footnote{The files used were gal\_2yearp7v6\_v0.fits
for the Galactic diffuse and iso\_p7v6source.txt for the isotropic
diffuse component as available at
\url{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}.}.
A 14-day binned light curve using the first 48 months of the {\it
Fermi} mission was produced.
During the period quasi-simultaneous with the 2011 VERITAS observations
(MJD $55560-55720$), the flux above 200~MeV is compatible with being
constant ($\chi^2/ndf = 11.1/10$), and the light curve is shown in
\reffig{fig3}.
A spectrum in the {\it Fermi}-LAT energy range was derived using this subset
of observations only. During that 160-day period, B2~1215+30
is detected with a test statistic value of $TS = 363$, corresponding
to a significance of about $19\sigma$.
The spectrum is compatible with a power law with a photon index
$\Gamma = 1.97 \pm 0.08$. The integral flux above 200~MeV is
$(3.45 \pm 0.34) \times 10^{-8}\, \mathrm{cm}^{-2}\, \mathrm{s}^{-1}$.
Potential contamination
from the nearby source 1ES~1218+304 (at 0.76\degr\ distance) was
checked by producing a residual $TS$ map; no features or
asymmetries in the $TS$ distribution of B2~1215+30 were seen.
It is worth noting that in the GeV range the flux of 1ES~1218+304 is
$\sim 0.4-0.6$ times that of B2~1215+30, in contrast to the VHE regime,
where 1ES~1218+304 is typically much brighter.
Given the clear variability seen in the VHE band, a mean flux above
200~MeV contemporaneous with the 2008/09 and 2012 VERITAS observations
was derived and is
$(1.8 \pm 0.3) \times 10^{-8}\, \mathrm{cm}^{-2}\, \mathrm{s}^{-1}$ and
$(3.0 \pm 0.4) \times 10^{-8}\, \mathrm{cm}^{-2}\, \mathrm{s}^{-1}$,
respectively. The hypothesis of a constant flux in the high-energy
regime between the three seasons contemporaneous with the VERITAS
observations can be rejected at the $3\sigma$ level.
\section{Swift-XRT: X-ray observations}
The X-ray telescope (XRT) on board the {\it Swift} satellite is designed
to measure X-rays in the $0.2-10$ keV energy range
\citep{2005SSRv..120..165B}. Target of opportunity observations were
obtained in January 2011 (MJD $55565-55573$), following the detection
of VHE emission from B2~1215+30, as well as on 10 nights in
April/May 2011 (MJD $55673-55686$).
All XRT data presented here were taken in photon counting mode with
negligible pile-up effects.
The data reduction and calibration were done using HEASoft, XSPEC
version 12.6.0 and the swxpc0to12s6\_20070901v011.rmf response
function.
The data were grouped, requiring a minimum of 20 counts/bin, and then
fitted with an absorbed power law model.
The galactic column density of $N_{\mathrm{H}} = 1.74 \times 10^{20}
\mathrm{cm}^{-2}$ was used, taken from the LAB neutral hydrogen survey
\citep{2005yCat.8076....0K}.
When it was left free during the fit, the column density value was
consistent with what was found by the LAB survey.
The spectral analysis of the two time periods shows the blazar in different
states. The observations performed in January indicate a harder
and brighter flux state, allowing the data to be fitted with an absorbed
power law up to 10~keV. The highest integrated flux was found on MJD
55565 with $F_{[2-10\mathrm{keV}]} = (3.31 \pm 0.22) \,\times 10^{-12}
\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and
a photon index of $2.46 \pm 0.05$. It will be referred to as the high
X-ray state in the SED modeling section.
The observations taken in April/May show the object in a lower-flux state,
with too poor statistics in the individual observations in the energy
bins above 5~keV to constrain a
spectral fit. However, combining the exposures from all the observations of
April/May allows a fit in the 0.4 to 10 keV range with an integrated
flux of $F_{[0.4-10\mathrm{keV}]} = (4.25 \pm 0.16)\, \times 10^{-12}
\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and
a photon index of $2.74 \pm 0.04$.
This average spectrum is then used to represent the low X-ray
state of the SED for the modeling in Section~6.
Additionally, an X-ray flux-index correlation study was performed on the
entire data set. The results are shown in
\reffig{fig4}. The correlation coefficient is found to
be $r = -0.88$ with an uncertainty of $< 0.1$. This implies a strong
(negative) correlation between spectral index and integrated flux of
the X-ray observations.
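The quoted $r$ is the Pearson correlation coefficient between the fitted photon indices and integrated fluxes; a minimal sketch on made-up (flux, index) pairs rather than the actual Swift-XRT fit results:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 2-10 keV fluxes (1e-12 erg cm^-2 s^-1) and photon indices,
# illustrating a 'harder when brighter' trend:
flux  = [1.2, 1.8, 2.4, 2.9, 3.3]
index = [2.75, 2.78, 2.58, 2.62, 2.46]
print(round(pearson_r(flux, index), 2))  # ~ -0.89
```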
A similar anti-correlation has also been observed in several other
VHE-emitting blazars, e.g., the IBLs VER~J0521+211 \citep{verj0521} or
W~Comae \citep{2009ApJ...707..612A}. However, this trend of ``harder
when brighter'' does not hold for all VHE blazars, e.g. no
correlation between X-ray flux and its spectral index could be
detected during the low VHE-flux state of the blazar 1ES~1959+650
\citep{2013ApJ...775....3A}.
\section{UV and optical observations}
A {\it Swift} Ultra Violet and Optical Telescope
\citep[UVOT;][]{2005SSRv..120...95R} analysis has been performed including all
the observations performed between January and May 2011.
Exposures were taken in V, B, U, UVW1, UVM2 and UVW2 pass bands in
{\it image mode}, discarding the photon-timing information.
The photometry was computed following
the general prescription of \citet{2008MNRAS.383..627P} and
\citet{2010MNRAS.406.1687B}, carefully excluding the
contribution from nearby faint objects.
A dedicated inter-calibration study between optical, UV and X-ray
datasets was carried out, adopting the $N_{\mathrm{H}}$ parameter for the hydrogen
column (obtained from the LAB survey).
The results were reddening corrected using $E(B-V)=0.023$ mag
\citep{1998ApJ...500..525S}.
Then, the corresponding optical/UV galactic extinction coefficients
were computed ($R_V=2.667$) and applied
\citep{1999PASP..111...63F}. The host galaxy
contribution of B2 1215+30 was estimated using the PEGASE-HR code
\citep{2004A&A...425..881L} extended for the ultraviolet UVOT filters
and by using the R-band photometric results of
\citet{2007A&A...475..199N}.
No zodiacal light correction was introduced.
For each filter, the integrated flux was computed by using the
related effective frequency, rather than convolving the filter transmission
with the source spectrum. This may produce a moderate overestimation
(around $10$\%) of the integrated flux. The total systematic
uncertainty is estimated to be at most 15\%.
In the optical regime, we monitored B2~1215+30 using the Super-LOTIS
(Livermore Optical Transient Imaging System) robotic telescope over
the period December 2010 $-$ March 2012.
In addition to these R-band observations, B2~1215+30 was observed with
the 1.3~m McGraw-Hill telescope of the MDM observatory, located at
Kitt Peak, Arizona, during one week in May 2011 (MJD $55706 - 55709$),
using standard V, R, and I filters.
The data were bias-subtracted and flat-fielded using the
routines of the Image Reduction and Analysis Facility (IRAF;
\citealp{1993ASPC...52..173T,1986SPIE..627..733T}).
Comparative photometry with stars of known magnitude was performed
and the resulting light curve is shown in \reffig{fig3}, showing clear
variability contemporaneous with the VERITAS measurements.
This is in line with the variability seen on the publicly available
light curves from the Tuorla
Observatory\footnote{ \url{http://users.utu.fi/kani/1m/ON_325.html}}.
For the construction of the optical SED using MDM observations,
the magnitudes were corrected for Galactic extinction according to
\cite{1998ApJ...500..525S}.
The values are A$_\mathrm{V}$ = 0.079, A$_\mathrm{R}$ = 0.064, and
A$_\mathrm{I}$ = 0.046, as provided by the NED.
\section{Spectral energy distribution and modeling}
An SED was constructed using the multi-wavelength data obtained in
2011. During this time, no variability was detected in the high-
or very-high-energy regimes. Given the low statistics in those energy
regimes, the \textit{Fermi}-LAT data contemporaneous with the
VERITAS observations in 2011 are used (MJD $55560-55720$).
Variability is clearly seen in X-rays and,
therefore, two spectra are extracted:
one to represent the high X-ray state in January (MJD
55565) and the other one to represent the low X-ray state observed in
April/May (using the combined spectrum from all observations between
MJD $55673-55686$).
\textit{Swift}-UVOT data simultaneous with the X-ray
observations were used when available.
Given the relatively large systematic uncertainty of the UVOT analysis,
the quasi-simultaneous optical spectrum from MDM (MJD $55706-55709$) is
additionally used in the SED representing the low X-ray state.
To complete the low energy part of the SED, archival data in the
micrometer wavelength regime, taken from \citet{2004MNRAS.352..673A},
are included. Unfortunately, no information on the
variability at those wavelengths is found, but since blazars are
usually variable at all wavelengths, the inclusion of these
archival data in the modeling will be discussed later.
The extracted broadband SED, in the $\nu F_{\nu}$
representation, can be found in \reffig{fig5}. It
shows a two-bump structure typical for blazars.
Based on the location of the synchrotron peak between UV and X-rays, the
source classification as an IBL according to
\citet{2006A&A...445..441N}, or as an HSP according to
\citet{2011ApJ...743..171A}, can be confirmed.
However, this classification might only be true for the observations
in 2011 reported here,
as it is known that some blazars undergo spectral changes during
flares which could change their SED classification, e.g. VER~J0521+211
\citep{verj0521}.
The SED is modeled with the synchrotron self-Compton
(SSC) model by \citet{2002ApJ...581..127B}, which assumes that the
plasma jet is
powered by accretion of material onto a super-massive black hole
\citep[for details see][]{2009ApJ...707..612A}.
The emission zone is modeled as a spherical volume
of radius $R$ moving with relativistic speed
$\beta_{\Gamma} c$ along the jet axis.
The jet is directed at a small angle $\theta$ with respect to the line
of sight to the observer. Since the observing angle is very hard to
measure, it is fixed within the model to the superluminal angle,
$\theta \simeq 1/\Gamma$, for which the (bulk) Lorentz factor $\Gamma$
equals the Doppler factor
$D = (\Gamma (1-\beta_{\Gamma} \cos \theta) )^{-1}$.
The results of the model depend mainly on the Doppler factor,
hence other combinations of $\theta$ and $\Gamma$ resulting in the
same Doppler factor are also possible.
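As a quick numerical sketch (an illustration, not part of the analysis), the Doppler factor evaluated at the superluminal angle $\theta = 1/\Gamma$ indeed reduces to $D \simeq \Gamma$ up to corrections of order $1/\Gamma^2$:

```python
import math

def doppler(gamma, theta):
    """Relativistic Doppler factor D = 1 / (Gamma * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

gamma = 30.0
D = doppler(gamma, 1.0 / gamma)   # superluminal angle theta = 1/Gamma
# D equals Gamma up to O(1/Gamma^2) corrections
assert abs(D - gamma) < 0.1
```

For $\Gamma = 30$ this gives $D \approx 30$, matching the identification $\Gamma = D$ used in the model.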
Into the emission region, a population of ultra-relativistic
non-thermal electrons is injected
following a power-law distribution with low- and high-energy cutoffs
$\gamma_1$ and $\gamma_2$, respectively, so that
$Q_{\mathrm{e}}(\gamma,t) = Q_{\mathrm{0}}(t)
\gamma^{-q}$ for $\gamma_1 < \gamma < \gamma_2$.
The normalization of the electron distribution is related to the
magnetic field $B$ through a relative partition parameter
$\epsilon_{\mathrm{B}}$,
defined as $\epsilon_{\mathrm{B}} = L_{\mathrm{B}}/L_{\mathrm{e}}$.
$L_{\mathrm{e}}$ is the kinetic power in the relativistic electrons and
$L_{\mathrm{B}}$ is the power in the Poynting flux carried by the magnetic
field. The magnetic field itself is a free parameter within the
model.
As the emission region is propagating along the jet, the continuously
injected particles lose energy through synchrotron and SSC radiation
and may escape from the emitting region.
The particle escape is described by an escape timescale parameter
$\eta_{\mathrm{esc}} > 1$ with $t_{\mathrm{esc}} = \eta_{\mathrm{esc}} \cdot R/c$.
As a result of the assumed quasi-equilibrium between particle
injection, escape and radiative cooling, a break in the electron
distribution will occur self-consistently at a Lorentz factor
$\gamma_{\mathrm{b}}$, where $t_{\mathrm{esc}} = t_{\mathrm{cool}}(\gamma_{\mathrm{b}})$.
Depending on whether $\gamma_{\mathrm{b}}$ is larger or smaller than $\gamma_1$, the
system will be in the slow or fast cooling regime. In the fast cooling
regime ($\gamma_{\mathrm{b}} < \gamma_1$), the equilibrium electron distribution
will be a broken power law with $n(\gamma) \propto \gamma^{-2}$ for
$\gamma_{\mathrm{b}} < \gamma < \gamma_1$ and
$n(\gamma) \propto \gamma^{-(q+1)}$ for $\gamma_1 < \gamma < \gamma_2$.
In the slow cooling regime ($\gamma_{\mathrm{b}} > \gamma_1$), the broken power
law is of the form $n(\gamma) \propto \gamma^{-q}$ for
$\gamma_1 < \gamma < \gamma_{\mathrm{b}}$ and
$n(\gamma) \propto \gamma^{-(q+1)}$ for $\gamma_{\mathrm{b}} < \gamma <
\gamma_2$.
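The slow-cooling equilibrium spectrum described above can be sketched as follows; the normalization and the cutoff values below are hypothetical, chosen only to illustrate the shape, and are not the fitted model parameters:

```python
import numpy as np

def n_e(gamma, q, gamma1, gammab, gamma2, n0=1.0):
    """Equilibrium spectrum in the slow-cooling regime (gammab > gamma1):
    n ~ gamma^-q below the cooling break and gamma^-(q+1) above it,
    continuous at gamma = gammab; zero outside [gamma1, gamma2]."""
    gamma = np.atleast_1d(np.asarray(gamma, dtype=float))
    n = np.where(gamma < gammab,
                 n0 * gamma**(-q),
                 n0 * gammab * gamma**(-(q + 1)))
    n[(gamma < gamma1) | (gamma > gamma2)] = 0.0
    return n

q, g1, gb, g2 = 2.8, 1e3, 1e5, 1e7   # illustrative values only
# continuity at the cooling break
ratio = (n_e(gb*(1 - 1e-9), q, g1, gb, g2) / n_e(gb*(1 + 1e-9), q, g1, gb, g2))[0]
assert abs(ratio - 1.0) < 1e-6
# log-slope is -q below the break and -(q+1) above it
s_lo = (np.log(n_e(2e3, q, g1, gb, g2) / n_e(1e3, q, g1, gb, g2)) / np.log(2.0))[0]
s_hi = (np.log(n_e(2e6, q, g1, gb, g2) / n_e(1e6, q, g1, gb, g2)) / np.log(2.0))[0]
assert abs(s_lo + q) < 1e-9 and abs(s_hi + (q + 1)) < 1e-9
```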
Altogether, the SSC model described here has eight free parameters,
listed in \reftab{tab:VERITASblazars}.
Several of these parameters can be estimated from observables like the
spectra obtained from multi-wavelength (MWL) observations as well as
the measured
variability time scales (see, e.g., \citealp{1998ApJ...509..608T}).
The absorption of VHE gamma rays on the extragalactic background light
(EBL) is accounted for in the predicted fluxes using the EBL model
of \citet{2010ApJ...712..238F}\footnote{This absorption is consistent
with the absorption level derived from EBL models of
\citet{2008A&A...487..837F} and \citet{2009MNRAS.399.1694G}.}.
Here, a redshift of $z=0.130$ is used for the modeling; we
will discuss later how the results are affected if a
redshift of $z=0.237$ is adopted instead.
In \reffig{fig5} the results of the SSC model are shown.
It can be seen that the overall SED for both X-ray states in 2011 can
be well described by the model.
The solid lines represent the model for which the archival data
in the micrometer waveband are taken into account.
At those frequencies of around $10^{11}$~Hz, a spectral break
occurs due to the transition from fast cooling to the
slow cooling regime. The position of this break is determined by the
escape time scale, resulting in a large value for $\eta_{\mathrm{esc}}$.
The Doppler factor is relatively large, with $D=30$.
Using a lower Doppler factor would
require a larger emission region and hence a lower magnetic field
strength, resulting in variability time scales
longer than days (following causality
arguments). This lower Doppler factor scenario would contradict the
measurements of X-ray variability during the January observations.
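The causality argument can be made quantitative with the standard light-crossing estimate $t_{\mathrm{var}} \gtrsim R(1+z)/(cD)$; the emission-region radii below are hypothetical values for illustration only, not the fitted model parameters:

```python
C_CM_S = 2.998e10   # speed of light in cm/s

def t_var_min(R_cm, D, z):
    """Minimum observed variability timescale, t_var >= R*(1+z)/(c*D)."""
    return R_cm * (1.0 + z) / (C_CM_S * D)

z = 0.130
# D = 30 with a compact region: few-hour variability is allowed
t_fast = t_var_min(1e16, 30, z)
# a lower Doppler factor with a correspondingly larger region: > 1 day,
# in tension with the January X-ray variability
t_slow = t_var_min(3e16, 10, z)
assert t_fast < 86400.0 < t_slow
```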
The magnetic field strength is found to be quite low,
resulting in a very small relative partition parameter.
A magnetic field far below equipartition ($\epsilon_{\mathrm{B}} \approx 0.1-1$),
as found here,
might indicate a particle-dominated jet, in which the magnetic field
in the emission region is self-generated and/or amplified in
shocks. In contrast, a magnetic field near or above equipartition
would be consistent with a Poynting-flux-dominated jet in which
magnetic field energy is transferred to particles, reaching
approximate equipartition in the high-energy emission zone.
In order to account for the two different X-ray states observed in
2011, the electron injection spectral index together with the
magnetic field strength within the modeled emission region were
changed.
The injection index during the high X-ray state in January ($q=2.8$)
is found to be harder than during the low X-ray state in April/May
($q=3.4$).
Under the assumption that the particles in the jet are accelerated
within relativistic shocks, the spectral change in the electron
distribution may be explained by a change in the shock field
obliquity: a larger angle between magnetic field and shock front
results in a harder particle spectrum
\citep[see, e.g.,][]{2009ApJ...698.1523S}.
This change of the injection spectrum also
leads to flux variations in the high-energy peak.
However, neither \textit{Fermi}-LAT nor VERITAS are sensitive to those
variations given the flux level of the source during the observation
period reported here.
All of the above results are obtained by taking the
archival data at around $10^{11}$~Hz into account.
However, due to possible variability in this waveband, we also modeled
the SED of the source ignoring those archival data.
The resulting model spectrum is represented by the dashed lines in
\reffig{fig5} and represents the SED well.
The only changes made to the model parameters were to reduce the value
of the escape time parameter $\eta_{\mathrm{esc}}$ and the injection power.
It was found that a value up to ten times smaller for
$\eta_{\mathrm{esc}}$ could
be used to model the SED. This means that the escape time scale may be
shorter and the cooling break would occur at higher energies compared to the
model including the archival data. Additionally, the system is closer to
equipartition, since the injection power for the electron distribution
is lower (with the same value for the magnetic field strength).
Due to the lack of simultaneous data in the $10^{11}$~Hz
domain, $\eta_{\mathrm{esc}}$ and $L_{\mathrm{e}}$ are unconstrained,
resulting in a range of parameter combinations that describe the
observed SED well. It is worth noting that in this, and most other
one-zone leptonic models, the synchrotron emission from the gamma-ray
emission region is self-absorbed at millimeter and longer
wavelengths. Therefore, these models are
often unable to account for the radio emission, which is generally
thought to be due to the superposition of self-absorbed synchrotron
components produced further out along the jet
\citep{1985ApJ...298..114M}, and is treated as an upper limit in
one-zone emission models.
Another difficulty of the model and its possible interpretation is
the uncertainty in the redshift. We applied the same model to
the SED using $z=0.237$. We found that
both X-ray states can be well modeled by a change of the electron
injection spectral index together with the magnetic field strength.
In this case, the Doppler factor needs to be larger ($D=50$) to
compensate for the EBL absorption at high energies
and the model-predicted fluxes are found to be slightly below the
VERITAS spectral points. Nevertheless, this is still compatible with
the VERITAS measurement considering systematic errors, therefore, this
redshift cannot be excluded.
\section{Discussion}
As previously mentioned, B2~1215+30 was observed by MAGIC early in
2011. \citet{2012A&A...544A.142A} presented the source as
an ``exceptional VHE $\gamma$-ray emitting BL Lac'', mainly based on
its SED and the results obtained from the MWL modeling.
Here, the results of the modeling presented in Section~6 are compared
to those in the MAGIC publication and then put into perspective with
results obtained from other blazars detected by VERITAS.
MAGIC observed B2~1215+30 for approximately 20~hours between January and
February 2011 and their spectral points are shown in \reffig{fig5},
compatible with the VERITAS measurements.
Quasi-simultaneous MWL data, also compatible with those obtained here, were
used to construct the SED and were modeled by the SSC model of
\citet{2003ApJ...593..667M} using the redshift of $z=0.13$.
In the paper, \citet{2012A&A...544A.142A} represent the data with SSC
model parameters which are compatible with those obtained here.
They also found that the same SSC model can well reproduce the data
with a higher Doppler factor ($D=60$) or a higher minimum Lorentz
factor of the electron distribution ($\gamma_{\mathrm{min}} =
3\times10^{3}$). Using these values for the model parameters, we
did not succeed in fully representing the SED. The main reason is that
the modeled high-energy peak is not broad enough to represent the
low-energy points of the \textit{Fermi}-LAT spectrum. However, this
part of the spectrum was not used in the MAGIC publication.
To address the question whether B2~1215+30 is extreme in terms
of its SED and model parameters, the results of our modeling are
compared to those obtained on all the VERITAS-detected blazars which
have contemporaneous MWL data and are modeled with the SSC model by
\citet{2002ApJ...581..127B}.
In total, 6 HBLs and 3 IBLs are found; the model parameters are
listed in \reftab{tab:VERITASblazars}. Three of those blazars $-$
1ES~0806+524 \citep{2009ApJ...690L.126A}, Mrk~421
\citep{2009ApJ...703..169A}, and W~Comae
\citep{2008ApJ...684L..73A,2009ApJ...707..612A} $-$ were found in
different flux states during the MWL observations and have more than one
set of model parameters.
PKS~1424+240 \citep{2010ApJ...708L.100A} and 3C~66A
\citep{2011ApJ...726...43A} have been modeled using different redshift
assumptions. While PKS~1424+240 will not be included in the comparison
study due to the current lack of redshift constraint, two redshift
values are given in \reftab{tab:VERITASblazars} for 3C~66A,
i.e. $z=0.3$ and $z=0.444$, as they enclose the recently
published redshift limits which were found to be in the range of $0.3347 < z
\leq 0.41$ \citep{2013arXiv1302.2948F}.
As one can see in \reftab{tab:VERITASblazars}, most of the parameters
used to model the SED of B2~1215+30 are well within the range of those
used for previously detected blazars.
The Doppler factor, for example, is usually found to be in
the range of $D = 20-30$ for the applied model. This is in agreement
with other SSC models, e.g. \citet{2010MNRAS.401.1570T}.
However, there are two parameters which are outside this
``standard range'': the magnetic field strength $B$ and the escape
time parameter $\eta_{\mathrm{esc}}$. The first one is relatively low for
B2~1215+30 and results in a very low relative partition parameter. For some
of the other blazars, e.g. W~Comae and 3C~66A, this behavior has also
been seen. In those cases, an SSC model with an external radiation
field resulted in model parameters with larger magnetic field
strengths and closer to equipartition.
However, in the case of B2~1215+30 no improvement could be found by
adding an external-Compton (EC) component to the model, neither in the
representation of the shape of the SED nor in bringing the system
closer to equipartition.
In general, the magnetic field strength values obtained for the
different sources are consistent with results from other leptonic
blazar models.
The second parameter, $\eta_{\mathrm{esc}}$, is high
compared to the model parameters of the other blazars. Such a high
value for $\eta_{\mathrm{esc}}$ implies long escape time scales. This
could hint at a relatively well ordered (laminar) magnetic field in
the emission region.
However, it has already been shown in the previous section that the
value for $\eta_{\mathrm{esc}}$ can be lowered significantly when taking only
the contemporaneous data into account, without losing the ability to
reproduce the shape of the SED. This value is then closer
to values applied to the other VERITAS-modeled blazars.
In summary, the model parameters derived here for the applied SSC
model are in the range of those derived from previous VERITAS
blazar modeling. In this sense B2~1215+30 is a typical VHE-detected blazar.
\section{Summary and Conclusion}
We have presented long term observations of BL Lac B2~1215+30 at VHE energies
with VERITAS between December 2008 and May 2012.
During these observations, the source was clearly detected and showed
variability on time scales of months and longer,
while variability on shorter time scales could not be detected.
In 2011, the source was found to be in a bright state and a spectral
analysis could be performed. The results are compatible with the MAGIC
results from early 2011 reported in \citet{2012A&A...544A.142A}.
MWL data, quasi-simultaneous with the VERITAS observations in 2011, were
used to construct the SED of B2~1215+30 and confirmed its
classification as an IBL.
During these VERITAS observations, B2~1215+30 showed different
flux states in the X-ray regime. These could be successfully
reproduced with an SSC model by changing the spectral index of the
injected electron distribution together with the magnetic field
strength.
Our study finds a model description for the SED of B2~1215+30 similar
to other TeV-detected blazars.
Observations of B2~1215+30 by VERITAS will
continue as part of a monitoring program on 1ES~1218+304, a
TeV blazar in the same field of view. This will allow a search
for variability on different time scales and could result in tighter
constraints for the input model parameters, as dedicated observations
of B2~1215+30 can be triggered in case of an increased flux state.
\acknowledgments
This research is supported by grants from the U.S. Department of
Energy Office of Science, the U.S. National Science Foundation and the
Smithsonian Institution, by NSERC in Canada, by Science Foundation
Ireland (SFI 10/RFP/AST2748) and by STFC in the U.K., as well as award
NNX12AJ12G from the NASA {\it Swift} Guest Investigator program.
We acknowledge
the excellent work of the technical support staff at the Fred Lawrence
Whipple Observatory and at the collaborating institutions in the
construction and operation of the instrument.
We are also grateful to Grant Williams and Daniel Kiminki for their
dedication to the operation and support of the Super-LOTIS telescope.
HP acknowledges support through the Young Investigators Program of
the Helmholtz Association.
MB acknowledges support by the South African Research Chairs
Initiative of the Department of Science and Technology and the
National Research Foundation of South Africa.
Support for MF was provided by NASA through Hubble Fellowship grant
HF-51305.01-A awarded by the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in
Astronomy, Inc., for NASA, under contract NAS 5-26555.
\section{Introduction}
In quantum mechanics, the state of a system is represented by a positive trace class operator with unit trace (called a density matrix) acting on a separable Hilbert space $\mathcal{H}$. We denote the set of density matrices (the set of states) by $\mathcal{S}(\mathcal{H})$. Given some trace class operator $\widehat{\rho}$, it is in general very difficult to assess whether $\widehat{\rho} \in \mathcal{S}(\mathcal{H})$. The main difficulty resides in the verification of the positivity condition:
\begin{equation}
(f | \widehat{\rho} f )_{\mathcal{H}} \ge 0,
\label{eqIntroduction1}
\end{equation}
for all $f \in \mathcal{H}$. This is particularly difficult in infinite dimensional Hilbert spaces. In this work we shall be concerned with the case $\mathcal{H}=L^2 (\mathbb{R}^n)$.
A very useful representation of density matrices, which casts position and momentum variables on equal footing and is akin to a classical probability density, is the Wigner distribution \cite{Wigner}. It is obtained from $\widehat{\rho}$ by way of the Weyl transform \cite{Birk,Wong}:
\begin{equation}
\widehat{\rho} \mapsto W \rho (x,p) = \frac{1}{(2 \pi \hbar)^n} \int_{\mathbb{R}^n} \rho \left(x+ \frac{y}{2},x- \frac{y}{2} \right) e^{- \frac{i}{\hbar} p \cdot y} dy,
\label{eqIntroduction2}
\end{equation}
where $\rho ( \cdot, \cdot ) \in L^2 (\mathbb{R}^{2n})$ is the Hilbert-Schmidt kernel of $\widehat{\rho}$. Here $h=2 \pi \hbar$ is Planck's constant and $x,p$ denote the particle's position and momentum respectively. We shall write them collectively as $z=(x,p) \in \mathbb{R}^{2n}$, a point in the particle's phase space $\mathbb{R}^n \times (\mathbb{R}^n)^{\ast} \simeq \mathbb{R}^{2n}$.
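As a concrete numerical sketch (with $\hbar = 1$ and $n = 1$), applying the Weyl transform (\ref{eqIntroduction2}) to the pure-state kernel $\rho(a,b)=\psi(a)\overline{\psi(b)}$ of the Gaussian $\psi(x)=\pi^{-1/4}e^{-x^2/2}$ reproduces, by quadrature, the known closed form $W\rho(x,p)=\pi^{-1}e^{-(x^2+p^2)}$:

```python
import numpy as np

hbar = 1.0
y = np.linspace(-12.0, 12.0, 4001)

def psi(x):
    return np.pi**-0.25 * np.exp(-x**2 / 2.0)

def wigner(x, p):
    """Quadrature of the Weyl transform for rho(a,b) = psi(a)*conj(psi(b))."""
    integrand = psi(x + y/2.0) * np.conj(psi(x - y/2.0)) * np.exp(-1j*p*y/hbar)
    return np.real(np.trapz(integrand, y)) / (2.0*np.pi*hbar)

for x0, p0 in [(0.0, 0.0), (0.5, -1.0), (1.2, 0.7)]:
    exact = np.exp(-(x0**2 + p0**2)/hbar) / np.pi
    assert abs(wigner(x0, p0) - exact) < 1e-6
```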
The Wigner distribution is not a true probability density as it may be negative \cite{Gro,Hudson}. Rather, it defines a finite signed measure:
\begin{equation}
A \mapsto \mu_{\rho} (A):= \int_{A} W \rho (x,p) dx dp ,
\label{eqIntroduction3}
\end{equation}
for Borel sets $A \in \mathcal{B}(\mathbb{R}^{2n})$, and $ \mu_{\rho} (\mathbb{R}^{2n})=1$.
This means that the covariance matrix $\operatorname*{Cov}(W \rho)$ of $W \rho$ might {\it a priori} not be positive definite. However, it can be shown that it is \cite{Narcow}. In fact, it obeys an even stronger constraint called the Robertson-Schr\"odinger uncertainty principle (RSUP) which states that \cite{Birk,Narcow2,Narcow3,Narconnell}
\begin{equation}
\operatorname*{Cov}(W \rho) + \frac{i \hbar}{2} J \ge 0,
\label{eqIntroduction4}
\end{equation}
where $J$ is the standard symplectic matrix:
\begin{equation}
J= \left(
\begin{array}{c c}
0 & I\\
-I & 0
\end{array}
\right).
\label{eqIntroduction5}
\end{equation}
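The RSUP (\ref{eqIntroduction4}) is straightforward to test numerically: the left-hand side is a Hermitian matrix, so positivity amounts to its smallest eigenvalue being non-negative. A minimal sketch for $n = 1$, $\hbar = 1$:

```python
import numpy as np

hbar, n = 1.0, 1
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])
# basic properties of the standard symplectic matrix
assert np.allclose(J @ J, -np.eye(2*n)) and np.allclose(J.T, -J)

def satisfies_rsup(cov, hbar=1.0):
    """Check Cov + (i*hbar/2) J >= 0 via the spectrum of the Hermitian
    matrix on the left-hand side."""
    M = cov + 0.5j * hbar * J
    return np.linalg.eigvalsh(M).min() >= -1e-12

# for n = 1 and Cov = diag(sx^2, sp^2) the RSUP reduces to sx*sp >= hbar/2
assert satisfies_rsup(np.diag([0.5, 0.5]))        # saturating case
assert not satisfies_rsup(np.diag([0.25, 0.25]))  # sub-Heisenberg: rejected
```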
It can be shown that condition (\ref{eqIntroduction4})
is a necessary but not sufficient condition for a phase space
function to be a Wigner distribution \cite{PLA}.
Nevertheless it has many interesting features. For a Gaussian
measure $G$ it is both a necessary and sufficient condition for
$G$ to be a Wigner distribution \cite{Narcow2}. It is invariant
under linear symplectic transformations (unlike the more
frequently used Heisenberg uncertainty relation). It has a nice
geometric interpretation in terms of Poincar\'e invariants
\cite{Narcow}, and it is intimately related with symplectic
topology and Gromov's non-squeezing theorem
\cite{physreps,Gromov}. By a suitable linear symplectic
transformation, the RSUP makes it a simple task to determine
directions in phase space of minimal uncertainty \cite{Narcow}. In
particular, we say that the RSUP is {\it saturated} if we can find
$n$ two-dimensional symplectic planes, where the uncertainty is
minimal. More specifically, the RSUP (\ref{eqIntroduction4}) is
saturated, whenever all the Williamson invariants of
$\operatorname*{Cov}(W \rho)$ are minimal \cite{physreps,Narcow2}:
\begin{equation}
\lambda_{\sigma,1}(\operatorname*{Cov}(W \rho))=\lambda_{\sigma,2}(\operatorname*{Cov}(W \rho))= \cdots=\lambda_{\sigma, n}(\operatorname*{Cov}(W \rho))= \frac{\hbar}{2}.
\label{eqIntroduction12}
\end{equation}
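A sketch of how the Williamson invariants can be computed in practice: for a positive-definite covariance matrix, the eigenvalues of $J\operatorname*{Cov}$ are purely imaginary and come in pairs $\pm i\lambda_{\sigma,j}$.

```python
import numpy as np

def symplectic_eigenvalues(cov):
    """Williamson invariants of a positive-definite 2n x 2n matrix,
    read off from the (purely imaginary) spectrum of J @ cov."""
    n = cov.shape[0] // 2
    I = np.eye(n)
    J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])
    ev = np.linalg.eigvals(J @ cov)
    return np.sort(np.abs(ev.imag))[n:]    # keep the n positive members

hbar = 1.0
# n = 1 squeezed example, Cov = diag(hbar, hbar/4): the single invariant
# is sqrt(hbar * hbar/4) = hbar/2, so the RSUP is saturated even though
# the two diagonal entries are far from equal.
lam = symplectic_eigenvalues(np.diag([hbar, hbar/4]))
assert np.allclose(lam, [hbar/2])
```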
Having said that, there is nothing about inequality
(\ref{eqIntroduction4}) which is particularly quantum mechanical,
with the exception of the presence of Planck's constant. In fact,
(\ref{eqIntroduction4}) is only a requirement about a minimal
scale related to $\hbar$. This condition is not sufficient
to ensure that the state is quantum mechanical (not even if
saturated). We shall give an example of a function in phase space
which saturates the RSUP, but which is manifestly not a Wigner
function. More emphatically, we will show that {\it any}
measurable function in phase space $F$ with a positive definite
covariance matrix $ \operatorname*{Cov}(F)>0$ satisfies
(\ref{eqIntroduction4}) after a suitable dilation $F(z) \mapsto
\lambda^{2n} F(\lambda z)$, while most of them remain non
quantum. This means that being a quantum state is not only a
question of scale but also of shape. This prompted us to
look for an alternative
uncertainty principle which goes beyond the RSUP.
In order to state our results precisely, let us fix some notation. In the sequel
$\mathcal{F}_{\sigma} (F)$ denotes the symplectic Fourier
transform of the function $F$. Roughly speaking, it can be
obtained from the ordinary Fourier transform $\mathcal{F} (F)$ by
a symplectic rotation and a dilation $(\mathcal{F}_{\sigma} F)
(z)= \frac{1}{(2 \pi \hbar)^n} (\mathcal{F} F) \left(\frac{Jz}{2
\pi \hbar} \right)$.
For a given measurable phase-space function $F$, satisfying
\begin{equation}
\int_{\mathbb{R}^{2n}} F(z) dz \ne 0,
\label{eqIntroduction7}
\end{equation}
we write
\begin{equation}
\widetilde{F} (z):= \frac{F(z)}{\int_{\mathbb{R}^{2n}} F(z^{\prime}) dz^{\prime}}~ .
\label{eqIntroduction7.1}
\end{equation}
Moreover, we denote by
\begin{equation}
<z>_F= \int_{\mathbb{R}^{2n}} z \widetilde{F}(z) dz
\label{eqIntroduction5b}
\end{equation}
the expectation value of $z$ regarded as a column vector, and by
\begin{equation}
\operatorname*{Cov}(F)= \int_{\mathbb{R}^{2n}} (z - <z>_F) (z - <z>_F)^T \widetilde{F}(z) dz
\label{eqIntroduction6}
\end{equation}
the covariance matrix. Notice that there is some abuse of language in this probabilistic terminology, as $F$ is not required to be non-negative.
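In practice these moments are simple quadratures. A sketch for $n = 1$, recovering the first and second moments of a Gaussian sampled on a grid and normalized as in the definition of $\widetilde{F}$ above:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 501)
p = np.linspace(-10.0, 10.0, 501)
X, P = np.meshgrid(x, p, indexing="ij")

sx2, sp2, x0, p0 = 1.5, 0.4, 0.3, -0.7        # test covariance and mean
F = np.exp(-(X - x0)**2 / (2*sx2) - (P - p0)**2 / (2*sp2))

def integrate(G):
    return np.trapz(np.trapz(G, p, axis=1), x)

Ft = F / integrate(F)                          # normalized F-tilde
mean = np.array([integrate(X * Ft), integrate(P * Ft)])
dX, dP = X - mean[0], P - mean[1]
cov = np.array([[integrate(dX*dX*Ft), integrate(dX*dP*Ft)],
                [integrate(dX*dP*Ft), integrate(dP*dP*Ft)]])

assert np.allclose(mean, [x0, p0], atol=1e-6)
assert np.allclose(cov, np.diag([sx2, sp2]), atol=1e-6)
```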
The main result of this paper is Theorem \ref{TheoremERSUP2}, where we prove the following uncertainty principle, hereafter called {\it refined Robertson-Schr\"odinger uncertainty principle}:
\begin{equation}
\begin{array}{c}
\operatorname*{Cov}(W \rho) + \frac{i \hbar}{2}J \ge \\
\\
\ge \mathcal{P} \left[W \rho \right] \left( \operatorname*{Cov}(|\widetilde{W \rho}|^2)+\frac{1}{4} \operatorname*{Cov}(|\mathcal{F}_{\sigma}\widetilde{W \rho}|^2) + \frac{i \hbar}{2}J \right) \ge 0
\end{array}
\label{eqIntroduction8}
\end{equation}
for Wigner distributions $W \rho$ belonging to some appropriate
maximal functional space and where
\begin{equation}
\mathcal{P} \left[W \rho \right]:=(2 \pi \hbar)^n ||W \rho||_{L^2(\mathbb{R}^{2n})}^2
\label{eqPurity}
\end{equation}
is the so-called purity of the state $\rho$. As before, we have defined:
\begin{equation}
\widetilde{W \rho } (z) := \frac{W \rho (z)}{||W \rho||_{L^2 (\mathbb{R}^{2n})}} , \hspace{1 cm} \mathcal{F}_{\sigma} \widetilde{W \rho } (\zeta) := \frac{\mathcal{F}_{\sigma} W\rho (\zeta )}{||W \rho||_{L^2 (\mathbb{R}^{2n})}}
\label{eqIntroduction9.1}
\end{equation}
to make sure that $|\widetilde{W \rho } (z) |^2$ and $|\mathcal{F}_{\sigma} \widetilde{W \rho } (\zeta)|^2$ are properly normalized probability densities.
Moreover, we will also show that the first inequality in (\ref{eqIntroduction8}) becomes an equality if and only if the
state is pure.
So, in fact, the refined RSUP amounts to two inequalities. The
first inequality is
\begin{equation}
\operatorname*{Cov}(|\widetilde{W \rho}|^2)+ \frac{1}{4} \operatorname*{Cov}(|\mathcal{F}_{\sigma}\widetilde{W \rho}|^2) + \frac{i \hbar}{2} J\ge 0.
\label{eqIntroduction9}
\end{equation}
In other words, the matrix $\operatorname*{Cov}(|\widetilde{W
\rho}|^2)+\frac{1}{4} \operatorname*{Cov}(|\mathcal{F}_{\sigma}\widetilde{W
\rho}|^2)$ also obeys the RSUP. The second inequality is
\begin{equation}
\begin{array}{c}
\operatorname*{Cov}(W \rho) + \frac{i \hbar}{2} J \ge \\
\\
\ge \mathcal{P} \left[W \rho \right] \left[ \operatorname*{Cov}(|\widetilde{W \rho}|^2)+ \frac{1}{4} \operatorname*{Cov}(|\mathcal{F}_{\sigma}\widetilde{W \rho}|^2) + \frac{i \hbar}{2} J \right] .
\end{array}
\label{eqIntroduction10}
\end{equation}
We notice that (\ref{eqIntroduction9}) and
(\ref{eqIntroduction10}) immediately imply the RSUP
(\ref{eqIntroduction4}).
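A numerical sanity check (a sketch with $\hbar = 1$, $n = 1$): for the harmonic-oscillator ground state, whose Wigner function is $W\rho(x,p)=\pi^{-1}e^{-(x^2+p^2)}$, both inequalities of the refined RSUP hold with equality. By the separability and rotational symmetry of this state, every moment factors into one-dimensional quadratures, and the per-direction variance of $|\mathcal{F}_{\sigma}\widetilde{W\rho}|^2$ equals that of the one-dimensional transform of $e^{-x^2}$:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10.0, 10.0, 1201)
g = np.exp(-x**2)                       # W(x,p) = g(x)*g(p)/pi

# purity P = 2*pi*hbar*||W rho||^2 equals 1 for this pure state
norm2 = np.trapz(g**2, x)**2 / np.pi**2
purity = 2.0*np.pi*hbar * norm2
assert abs(purity - 1.0) < 1e-8

# Cov(|W~|^2): per-direction variance of the normalized square
rho1 = g**2 / np.trapz(g**2, x)
var1 = np.trapz(x**2 * rho1, x)         # -> 1/4

# Cov(|F_sigma W~|^2): per-direction variance after Fourier transforming
k = np.linspace(-10.0, 10.0, 1201)
ft = np.trapz(g[None, :] * np.exp(-1j*np.outer(k, x)), x, axis=1)
rho2 = np.abs(ft)**2 / np.trapz(np.abs(ft)**2, k)
var2 = np.trapz(k**2 * rho2, k)         # -> 1

# ground state: Cov(W rho) = (hbar/2) I, and the refined RSUP is an equality
assert abs(hbar/2 - purity * (var1 + var2/4.0)) < 1e-6
```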
Let us point out the main properties of the refined RSUP:
\vspace{0.3 cm}
\noindent
{\bf (1)} It is parsimonious, in the sense that it is a computable test as the RSUP, but not a complicated one as sets of necessary and sufficient conditions such as the Kastler, Loupias, Miracle-Sole (KLM) conditions \cite{Kastler,LouMiracle1,LouMiracle2}. In fact, we only have to compute the covariance matrices of $W \rho$, $|\widetilde{W \rho}|^2$ and $|\mathcal{F}_{\sigma} (\widetilde{W \rho})|^2$ and check inequalities (\ref{eqIntroduction8}).
\vspace{0.3 cm}
\noindent
{\bf (2)} It is invariant under linear symplectic and anti-symplectic transformations (see Theorem \ref{TheoremSymplecticCovariance}).
\vspace{0.3 cm} \noindent {\bf (3)} It makes a direct connection
with harmonic analysis, as it amounts to an inequality relating $W
\rho$ and its Fourier transform $\mathcal{F}_{\sigma}(W \rho)$.
Here we use the squares $|\widetilde{W\rho}|^2$ and $|\mathcal{F}_{\sigma}(\widetilde{W
\rho})|^2$, and so we are treating $W \rho$ as a wave function in
ordinary quantum mechanics on a $2n$-dimensional configuration
space\footnote{In this interpretation $\operatorname*{Cov}
(|W\rho|^2)$ is the covariance matrix of the $2n$ configurational
variables; and $\operatorname*{Cov}(|\mathcal{F}_{\sigma}W
\rho|^2)$ is the covariance matrix of the $2n$ conjugate
momenta.}.
\vspace{0.3 cm}
\noindent
{\bf (4)} It includes a pure state
condition. Indeed, inequality (\ref{eqIntroduction10}) is an
equality iff the state is pure.
\vspace{0.3 cm} \noindent {\bf (5)} It is stronger than the RSUP.
Indeed, inequality (\ref{eqIntroduction8}) implies immediately the
Robertson-Schr\"odinder uncertainty principle. Example
\ref{ExampleFinal1} shows that it is not equivalent to it.
\vspace{0.3 cm} \noindent {\bf (6)} It is a
deeper quantum mechanical requirement than the condition about a
minimal scale. For instance, in Example \ref{ExampleReview9}, we
show that the saturation (\ref{eqIntroduction12}) of the RSUP can
be easily achieved by many functions which are not Wigner
distributions. On the other hand, we prove in Theorem
\ref{TheoremSaturation} that the refined RSUP is saturated (i.e.
(\ref{eqIntroduction8}) and the saturation condition
(\ref{eqIntroduction12}) are both satisfied) if and only if the
state is a pure Gaussian Wigner function.
As a by-product of the refined RSUP, we also obtain a refinement of the Shannon and Hirschman inequalities \cite{Hirschman,Shannon} for Wigner distributions.
A famous theorem by Shannon \cite{Folland2,Shannon} states that if a
probability density
\begin{equation}
\mu(x) \ge0 , \hspace{1 cm} \int_{\mathbb{R}^{n}} \mu(x) dx=1,
\label{eqentropy1}
\end{equation}
has finite covariance matrix $Cov(\mu)$, then its Boltzmann entropy
\begin{equation}
E(\mu) := - \int_{\mathbb{R}^{n}} \mu(x) \log\left( \mu(x) \right) dx
\label{eqentropy2}
\end{equation}
is well defined and satisfies the inequality:
\begin{equation}
E(\mu) \le\frac{1}{2} \log\left[ (2 \pi e)^{n} \det\left( Cov(\mu) \right)
\right] .
\label{eqentropy3}
\end{equation}
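The Shannon bound above is saturated exactly by Gaussians: for $\mu = N(0,\sigma^2)$ in one dimension, $E(\mu) = \frac{1}{2}\log(2\pi e \sigma^2)$. A quadrature check (sketch):

```python
import numpy as np

s2 = 1.7                                # test variance
x = np.linspace(-12.0, 12.0, 4001)
mu = np.exp(-x**2 / (2*s2)) / np.sqrt(2*np.pi*s2)
entropy = -np.trapz(mu * np.log(mu), x)
bound = 0.5 * np.log(2*np.pi*np.e*s2)   # Shannon bound, here an equality
assert abs(entropy - bound) < 1e-8
```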
Another theorem due to Beckner \cite{Beckner}, Bialynicki-Birula and Mycielski \cite{Birula} and Hirschman \cite{Hirschman} relates the entropy of
$|f|^{2}$, for $f\in L^{2}(\mathbb{R}^{n})$ and $||f||_{2}=1$ with that of
$|\mathcal{F}_{\hbar}f|^{2}$, where $(\mathcal{F}_{\hbar}f)$ is the $\hbar$-scaled Fourier transform. If the entropies of $|f|^{2}$ and $|\mathcal{F}_{\hbar
}f|^{2}$ are well defined then the Hirschman inequality reads:
\begin{equation}
\log\left( \pi\hbar e\right)^{n}\leq E\left( |f|^{2}\right) +E\left(
|\mathcal{F}_{\hbar}f|^{2}\right) .
\label{eqentropy5}
\end{equation}
This inequality is sometimes called an entropic uncertainty principle as it
prevents a simultaneous sharp localization of $|f|^{2}$ and
$|\mathcal{F}_{\hbar}f|^{2}$ and is saturated if and only if $f$ is a Gaussian with minimal
Heisenberg uncertainty.
Of course we may combine (\ref{eqentropy3}) and (\ref{eqentropy5})
and obtain the naive double inequality:
\begin{equation}
\begin{array}
[c]{c}
\log\left( \pi\hbar e \right)^{n} \le E \left( |f|^{2} \right) + E
\left( | \mathcal{F}_{\hbar} f|^{2} \right) \le\\
\\
\le\log\left[ (2 \pi e)^{n} \sqrt{\det\left( Cov (|f|^{2}) \right)
\cdot\det\left( Cov (|\mathcal{F}_{\hbar} f|^{2})\right) } \right]~.
\end{array}
\label{eqentropy6}
\end{equation}
This can be stated in the following terms: if $|f|^{2}$ and
$|\mathcal{F}_{\hbar}f|^{2}$ have finite covariance matrices, then they have well defined
entropies and inequality (\ref{eqentropy6}) holds. Moreover, we have
equalities throughout if and only if $f$ is a Gaussian. The inequality between
the first and the last term is, upon exponentiation, the Heinig-Smith
uncertainty principle \cite{Heinig}.
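For $\hbar = 1$ the standard Gaussian $f(x)=\pi^{-1/4}e^{-x^2/2}$ is a fixed point of $\mathcal{F}_{\hbar}$ and turns every inequality in the chain above into an equality, all three expressions being $\log(\pi e)$. A quadrature check (sketch):

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 4001)
f2 = np.exp(-x**2) / np.sqrt(np.pi)          # |f|^2 = |F_hbar f|^2 here
E = -np.trapz(f2 * np.log(f2), x)            # entropy of |f|^2
var = np.trapz(x**2 * f2, x)                 # Cov(|f|^2) = 1/2

lower = np.log(np.pi * np.e)                 # log(pi*hbar*e)^n, n = 1, hbar = 1
middle = 2 * E                               # E(|f|^2) + E(|F_hbar f|^2)
upper = np.log(2*np.pi*np.e * np.sqrt(var * var))
assert abs(middle - lower) < 1e-8 and abs(upper - lower) < 1e-8
```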
As a consequence of inequality (\ref{eqentropy6}) for the Wigner distribution and the refined RSUP (\ref{eqIntroduction8}), we derive the following Hirschman-Shannon inequality (Theorem \ref{Theorementropy1}):
\begin{equation}
\begin{array}
[c]{c}
\log\left[ (2\pi e)^{2n}\det\left( Cov(W\rho)\right) \right] \geq\\
\\
\geq\log\left[ \left( \pi e\mathcal{P}\left[W\rho\right]\right) ^{2n}\sqrt{\det\left(
Cov(|\widetilde{W\rho}|^{2})\right) \cdot\det\left( Cov(|\mathcal{F}_{\hbar
}\widetilde{W\rho}|^{2})\right) }\right] \geq\\
\\
\geq2n\log\left( \mathcal{P}\left[W\rho \right]\right) +E\left( |\widetilde{W\rho}|^{2}\right) +E\left( |\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2}\right)
\geq\log\left( \pi\hbar e\mathcal{P}\left[W\rho \right]\right) ^{2n}
\end{array}
\label{eqentropy26}
\end{equation}
We obtain an equality throughout (\ref{eqentropy26}) if and only if $W \rho$ is the Wigner distribution of a pure Gaussian state.
For pure states $W \rho=W \psi$, the refined RSUP leads to the following Hirschman-Lieb-Shannon relation which
involves $W \psi$ only and not its Fourier transform (Corollary \ref{Corollary3}):
\begin{equation}
\begin{array}
[c]{c}
\log\left[ (2\pi e)^{n} \sqrt{\det\left( Cov (W \psi) \right) }\right]
\ge\\
\\
\ge\log\left[ \left( 2\pi e \right) ^{n} \sqrt{\det\left( Cov
(|\widetilde{W \psi}|^{2}) \right) } \right] \ge\\
\\
\ge E \left( |\widetilde{W \psi}|^{2}\right) \ge\log\left( \frac{ \pi\hbar
e}{2} \right) ^{2n}.
\end{array}
\label{eqentropy30}
\end{equation}
Before we conclude the introduction, let us comment on the new parts of the inequalities (\ref{eqentropy26},\ref{eqentropy30}). In (\ref{eqentropy26}) the last inequality is the Hirschman inequality for the Wigner function and the penultimate inequality is the Shannon inequality applied both to $|\widetilde{W \rho}|^2$ and to $|\mathcal{F}_{\hbar} \widetilde{W \rho}|^2$. The new inequalities are:
\begin{equation}
\det\left( Cov(W\rho)\right) \ge \left(\frac{ \mathcal{P}\left[W\rho \right]}{2} \right)^{2n} \sqrt{\det\left(
Cov(|\widetilde{W\rho}|^{2})\right) \cdot\det\left( Cov(|\mathcal{F}_{\hbar
}\widetilde{W\rho}|^{2})\right) },
\label{eqentropy30.1}
\end{equation}
and
\begin{equation}
\log\left[ \left(\frac{2\pi e}{\mathcal{P}\left[W\rho\right]} \right)^{2n}\det\left( Cov(W\rho)\right) \right] \geq
E\left( |\widetilde{W\rho}|^{2}\right) +E\left( |\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2}\right).
\label{eqentropy30.2}
\end{equation}
In (\ref{eqentropy30}) the last inequality is the entropic inequality of Lieb \cite{Lieb}. The penultimate inequality is the Shannon inequality applied to $|\widetilde{W \psi}|^{2}$. The new inequalities are:
\begin{equation}
\det\left( Cov (W \psi) \right) \ge \det\left( Cov (|\widetilde{W \psi}|^2) \right),
\label{eqentropy30.3}
\end{equation}
and
\begin{equation}
\log\left[ (2\pi e)^{n} \sqrt{\det\left( Cov (W \psi) \right) }\right]
\ge E \left( |\widetilde{W \psi}|^{2}\right) .
\label{eqentropy30.4}
\end{equation}
\section*{Notation}
The Plancherel-Fourier transform of a function $f \in L^1 (\mathbb{R}^n) \cap L^2 (\mathbb{R}^n)$ is defined by:
\begin{equation}
(\mathcal{F}f) (\omega):= \int_{\mathbb{R}^n} f(x) e^{-2 i \pi \omega \cdot x} dx
\label{eqNotation1}
\end{equation}
and the $\hbar$-scaled Fourier transform is:
\begin{equation}
(\mathcal{F}_{\hbar}f)(p):=\left( \tfrac{1}{2\pi\hbar}\right)^{n/2}
\int_{\mathbb{R}^{n}}f(x)e^{-\frac{i}{\hbar}x\cdot p}dx .
\label{eqentropy4}
\end{equation}
We use lower case letters $f,g,\cdots$ for functions defined on the configuration space $\mathbb{R}^n$ and upper case letters from the middle of the alphabet $F,G, \cdots$ for functions on the phase space $\mathbb{R}^{2n}$. We shall use the physicists' convention for the inner product (anti-linear in the first argument and linear in the second)
\begin{equation}
(f|g)= \int_{\mathbb{R}^n} \overline{f(x)} g(x) dx.
\label{eqNotation2}
\end{equation}
To avoid a proliferation of subscripts, we use the notation
\begin{equation}
((F|G))= \int_{\mathbb{R}^{2n}} \overline{F(z)} G(z) dz
\label{eqNotation3}
\end{equation}
for the inner product on the phase space. Similarly we denote by $|| \cdot ||$ the norm on $L^2 (\mathbb{R}^n)$ and by $||| \cdot |||$ that on $L^2 (\mathbb{R}^{2n})$. Sometimes, when more general $L^p $ norms are needed, we will be more specific and write $|| \cdot||_{L^p(\mathbb{R}^n)}$.
The Schwartz class of test functions is $\mathcal{S} (\mathbb{R}^n)$ and its dual - the space of tempered distributions - is denoted by $\mathcal{S}^{\prime} (\mathbb{R}^n)$. The distributional bracket is written $< \cdot, \cdot>$.
Given a functional space $L$, we denote by $\mathcal{F}L$ the set of distributions $f \in \mathcal{S}^{\prime} (\mathbb{R}^n)$ for which $\mathcal{F}f \in L$.
\section{A review of Wigner distributions}
In this section, we recapitulate the main results about Wigner distributions, which we will need in the sequel.
\subsection{Symplectic geometry}
The standard symplectic form on $\mathbb{R}^{2n} = \mathbb{R}_x^n \times \mathbb{R}_p^n$ is given by
\begin{equation}
\sigma (z,z^{\prime}) = z \cdot J^T z^{\prime}= p \cdot x^{\prime}- x \cdot p^{\prime},
\label{eqReview1}
\end{equation}
for $z=(x,p)$ and $z^{\prime}=(x^{\prime},p^{\prime})$. A linear automorphism $s: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ is a symplectic transformation if $\sigma (s(z),s(z^{\prime}))= \sigma (z,z^{\prime})$ for all $z,z^{\prime} \in \mathbb{R}^{2n}$. Let the symplectic transformation be represented by the matrix $S \in Gl(2n)$: $s(z) =Sz$. Then
\begin{equation}
S^T J S=J.
\label{eqReview2}
\end{equation}
The set of real $2n \times 2n$ matrices which satisfy (\ref{eqReview2}) form a group called the symplectic group $Sp(n)$. If a matrix $A \in Gl(2n)$ is such that
\begin{equation}
A^T J A= -J,
\label{eqReview3}
\end{equation}
then it is said to be anti-symplectic. Every anti-symplectic matrix $A$ can be written as \cite{PAMS}
\begin{equation}
A= T S,
\label{eqReview4}
\end{equation}
where $S \in Sp(n)$, and $T$ is usually interpreted as a ``time-reversal'' operator, since it amounts to a reversal of the particle's momentum:
\begin{equation}
T=\left(
\begin{array}{c c}
I & 0\\
0 & -I
\end{array}
\right).
\label{eqReview5}
\end{equation}
We shall denote the group of matrices which are either symplectic or anti-symplectic by $ASp(n)$.
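As an illustrative aside (not part of the mathematical development; it assumes numpy, works in dimension $n=1$, and uses an arbitrary rotation as test matrix), the defining relations for symplectic and anti-symplectic matrices are easy to verify numerically, including the factorization $A=TS$:

```python
import numpy as np

n = 1
# Standard symplectic matrix J for n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

def is_symplectic(S, tol=1e-12):
    # Checks S^T J S = J
    return np.allclose(S.T @ J @ S, J, atol=tol)

def is_antisymplectic(A, tol=1e-12):
    # Checks A^T J A = -J
    return np.allclose(A.T @ J @ A, -J, atol=tol)

# A rotation of the (x, p) plane is symplectic for n = 1
theta = 0.7
S = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# The "time-reversal" matrix T (momentum reversal)
T = np.diag([1.0, -1.0])

print(is_symplectic(S), is_antisymplectic(T @ S))
```

Composing the symplectic rotation with $T$ indeed yields an anti-symplectic matrix, while $T$ itself fails the symplectic condition.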
Given a real symmetric positive definite matrix $B$ its symplectic eigenvalues (also called Williamson invariants) are given by the moduli of the eigenvalues of the matrix $B J^{-1}$ \cite{Gosson,Williamson}. Since they come in pairs $\pm i \lambda$ $(\lambda >0)$, we denote the $n$ moduli in increasing order by:
\begin{equation}
0 < \lambda_{\sigma,1}(B) \le \lambda_{\sigma,2}(B) \le \cdots \le \lambda_{\sigma,n}(B).
\label{eqReview6}
\end{equation}
The set
\begin{equation}
Spec_{\sigma} (B)= \left(\lambda_{\sigma,1}(B) , \lambda_{\sigma,2}(B) , \cdots , \lambda_{\sigma,n}(B) \right)
\label{eqReview7}
\end{equation}
is called the symplectic spectrum of $B$. Williamson's Theorem \cite{Williamson} states that the matrix $B$ can be diagonalized to a ``normal'' form by a congruence with a symplectic matrix. More specifically, there exists $S \in Sp(n)$ such that
\begin{equation}
SBS^T = \left(
\begin{array}{c c}
\Lambda & 0 \\
0 & \Lambda
\end{array}
\right),
\label{eqReview8}
\end{equation}
where $\Lambda = \operatorname{diag} \left(\lambda_{\sigma,1}(B) , \lambda_{\sigma,2}(B) , \cdots , \lambda_{\sigma,n}(B) \right)$.
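The recipe for the symplectic spectrum can be checked numerically (an aside assuming numpy; the matrix $B$ below is an arbitrary example): compute the eigenvalues of $BJ^{-1}$, which come in pairs $\pm i\lambda$, and take the moduli. For $B=\mathrm{diag}(1,4)$ with $n=1$, the single symplectic eigenvalue is $\sqrt{1\cdot 4}=2$:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # n = 1

def symplectic_spectrum(B):
    # Moduli of the eigenvalues of B J^{-1}; since they come in pairs
    # +/- i*lambda, keep every other entry of the sorted moduli.
    ev = np.linalg.eigvals(B @ np.linalg.inv(J))
    mods = np.sort(np.abs(ev))
    return mods[::2]

B = np.diag([1.0, 4.0])  # e.g. variances 1 and 4 in x and p
lam = symplectic_spectrum(B)
print(lam)
```

Note that for $\mathrm{diag}(1,4)$ the ordinary eigenvalues are $1$ and $4$, while the symplectic eigenvalue is their geometric mean: the two spectra genuinely differ.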
\subsection{Weyl operators}
The symplectic Fourier transform of a function $F \in L^1 (\mathbb{R}^{2n}) \cap L^2 (\mathbb{R}^{2n})$ is given by:
\begin{equation}
(\mathcal{F}_{\sigma} F) (\zeta) = \frac{1}{(2 \pi \hbar)^n} \int_{\mathbb{R}^{2n}} F(z) e^{-\frac{i}{\hbar} \sigma (\zeta,z)} dz.
\label{eqReview9}
\end{equation}
It is related to the Fourier transform (\ref{eqNotation1}) and the $\hbar$-scaled Fourier transform (\ref{eqentropy4}) by:
\begin{equation}
(\mathcal{F}_{\sigma} F)(\zeta) = \frac{1}{(2 \pi \hbar)^n}(\mathcal{F} F) \left( \frac{J \zeta}{2 \pi \hbar}\right)=\left(\mathcal{F}_{\hbar} F \right) (J \zeta).
\label{eqReview10}
\end{equation}
The symplectic Fourier transform is an involution which extends by duality to an involutive automorphism $\mathcal{S}^{\prime} (\mathbb{R}^{2n}) \to \mathcal{S}^{\prime} (\mathbb{R}^{2n})$.
Given a symbol $a \in \mathcal{S}^{\prime} (\mathbb{R}^{2n})$, the associated Weyl operator is given by the Bochner integral \cite{Birk,Gosson}:
\begin{equation}
\widehat{A}:= \left( \frac{1}{2 \pi \hbar} \right)^n \int_{\mathbb{R}^{2n}} (\mathcal{F}_{\sigma} a) (z_0) \widehat{T} (z_0 ) dz_0,
\label{eqReview11}
\end{equation}
where $\widehat{T} (z_0)$ is the Heisenberg-Weyl operator
\begin{equation}
(\widehat{T} (z_0) f) (x)= e ^{\frac{i}{\hbar} p_0 \cdot\left(x- \frac{x_0}{2} \right)} f(x-x_0),
\label{eqReview12}
\end{equation}
for $z_0 = (x_0,p_0) \in \mathbb{R}^{2n}$ and $f \in \mathcal{S} (\mathbb{R}^n)$. We remark that the operator $\widehat{A}$ is formally self-adjoint if and only if its symbol $a$ is real.
The Weyl correspondence, written $a \overset{\mathrm{Weyl}}{\longleftrightarrow} \widehat{A}$ or $\widehat{A} \overset{\mathrm{Weyl}}{\longleftrightarrow} a$, between an element $a \in \mathcal{S}^{\prime} (\mathbb{R}^{2n})$ and the Weyl operator it defines is bijective; in fact the Weyl transformation is one-to-one from $\mathcal{S}^{\prime} (\mathbb{R}^{2n})$ onto the space $\mathcal{L}\left( \mathcal{S}(\mathbb{R}^{n}),\mathcal{S}^{\prime} (\mathbb{R}^{n})\right)$ of linear continuous maps $\mathcal{S}(\mathbb{R}^{n}) \to \mathcal{S}^{\prime} (\mathbb{R}^{n})$ (see e.g. Maillard \cite{Maillard}, Unterberger \cite{Unterberger} or Wong \cite{Wong}). This can be proven using Schwartz's kernel theorem and the fact that the Weyl symbol $a$ of the operator $\widehat{A}$ is related to the distributional kernel $K_A$ of that operator by the partial Fourier transform with respect to the $y$ variable
\begin{equation}
a(x, p) = \int_{\mathbb{R}^n} K_A \left( x+ \frac{y}{2},x- \frac{y}{2} \right) e^{- \frac{i}{\hbar}p \cdot y} dy,
\label{eqReview13}
\end{equation}
where $K_A \in \mathcal{S}^{\prime} (\mathbb{R}^n \times \mathbb{R}^n )$ and the Fourier transform is defined in the usual distributional sense. Conversely, the kernel $K_A$ is expressed in terms of the symbol $a$ by the inverse Fourier transform
\begin{equation}
K_A(x, y) = \left( \frac{1}{2 \pi \hbar} \right)^n \int_{\mathbb{R}^n} a \left(\frac{x+y}{2},p \right) e^{ \frac{i}{\hbar} p\cdot (x-y)} dp.
\label{eqReview14}
\end{equation}
Weyl operators enjoy the following symplectic covariance property \cite{Folland,Birk,Gosson,Gro,Wong}. Let $S \in Sp(n)$ and $\widehat{S} \in Mp(n)$ be one of the two metaplectic operators that project onto $S$. Recall that metaplectic operators constitute a unitary representation of the two-fold cover $Sp_2(n)$ of $Sp(n)$. If $\widehat{A}: \mathcal{S} (\mathbb{R}^n) \to \mathcal{S}^{\prime} (\mathbb{R}^n)$ is a Weyl operator with symbol $a \in \mathcal{S}^{\prime} (\mathbb{R}^{2n})$, then we have
\begin{equation}
\widehat{S}^{-1} \widehat{A} \widehat{S} \overset{\mathrm{Weyl}}{\longleftrightarrow} a \circ S .
\label{eqReview14.1}
\end{equation}
Since an anti-symplectic transformation is the composition $TS$ (see (\ref{eqReview4})) it suffices to consider the action of $T$. Quantum mechanically, this is implemented by the anti-linear operator
\begin{equation}
(\widehat{T}f)(x) = \overline{f(x)}.
\label{eqReview14.1.A}
\end{equation}
This also supports the interpretation of $T$ as a time reversal. If $f$ obeys the Schr\"odinger equation, then $\overline{f}$ obeys the same equation with the time reversal $t \to -t$.
Assuming that the product $\widehat{A}\widehat{B}$ exists (which is the case for instance if $\widehat{B} : \mathcal{S}(\mathbb{R}^{n}) \to \mathcal{S}(\mathbb{R}^{n})$) the Weyl symbol $c$
of $\widehat{C}= \widehat{A}\widehat{B}$ and its symplectic Fourier transform $\mathcal{F}_{\sigma} c$ are given by the formulae:
\begin{equation}
c(z) =
\left(\frac{1}{4 \pi \hbar} \right)^{2n} \int_{\mathbb{R}^{2n}} \int_{\mathbb{R}^{2n}} a\left(z + \frac{u}{2} \right) b \left(z - \frac{v}{2} \right) e^{\frac{i}{2 \hbar} \sigma (u,v)} du dv,
\label{eqReview15}
\end{equation}
and
\begin{equation}
(\mathcal{F}_{\sigma} c)(z) =\left(\frac{1}{2 \pi \hbar} \right)^{n} \int_{\mathbb{R}^{2n}} (\mathcal{F}_{\sigma} a) (z-z^{\prime}) (\mathcal{F}_{\sigma} b) (z^{\prime}) e^{\frac{i}{2 \hbar} \sigma (z,z^{\prime})} d z^{\prime}.
\label{eqReview16}
\end{equation}
The first formula is often written $c = a \star_{\hbar} b$ and $a \star_{\hbar} b$ is called the \textit{twisted product} or \textit{Moyal product} (see e.g. \cite{Folland,Groenewold,Moyal,Wong}).
\subsection{Quantum states and Wigner functions}
An important case consists of rank one operators of the form:
\begin{equation}
\left(\widehat{\rho}_{f,g} h \right) (x) = (g|h) f (x),
\label{eqReview17}
\end{equation}
for fixed $f,g \in L^2 (\mathbb{R}^n)$ acting on $h \in L^2 (\mathbb{R}^n)$. They are Hilbert-Schmidt operators with kernel $K_{f,g}(x,y) = (f \otimes \overline{g}) (x,y)=f(x) \overline{g(y)} $. According to (\ref{eqReview13}), the associated Weyl symbol is:
\begin{equation}
\rho_{f,g}(x,p) = \int_{\mathbb{R}^n} f \left( x + \frac{y}{2} \right) \overline{g \left( x - \frac{y}{2} \right)} e^{- \frac{i}{\hbar} p \cdot y} dy.
\label{eqReview18}
\end{equation}
This is just the cross-Wigner function up to a multiplicative constant:
\begin{equation}
\begin{array}{c}
W(f,g)(x,p)= \left(\frac{1}{2 \pi \hbar} \right)^n \rho_{f,g} (x,p)=\\
\\
= \left(\frac{1}{2 \pi \hbar} \right)^n \int_{\mathbb{R}^n} f \left( x + \frac{y}{2} \right) \overline{g \left( x - \frac{y}{2} \right)} e^{- \frac{i}{\hbar} p \cdot y} dy.
\end{array}
\label{eqReview19}
\end{equation}
From (\ref{eqReview14.1}), we conclude that
\begin{equation}
W(\widehat{S}f,\widehat{S}g)(z)=W(f,g)(S^{-1} z).
\label{eqReview19.1}
\end{equation}
If $g=f$, we simply write $Wf$ meaning $W(f,f)$:
\begin{equation}
Wf(x,p)= \left(\frac{1}{2 \pi \hbar} \right)^n \int_{\mathbb{R}^n} f \left( x + \frac{y}{2} \right) \overline{f \left( x - \frac{y}{2} \right)} e^{- \frac{i}{\hbar} p \cdot y} dy.
\label{eqReview20}
\end{equation}
We say that $W f$ is the Wigner function \cite{Wigner} associated with the pure state $f \in L^2 (\mathbb{R}^n)$.
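As a concrete numerical aside (assuming numpy, $\hbar=1$, $n=1$; grid sizes are ad hoc), the Wigner integral can be evaluated by direct quadrature for the harmonic-oscillator ground state, for which the closed form $Wf(x,p)=\frac{1}{\pi}e^{-(x^2+p^2)}$ is standard:

```python
import numpy as np

hbar = 1.0

def wigner_point(f, x, p, y_max=20.0, n_pts=4001):
    # Direct quadrature of the Wigner integral at one phase-space point (n = 1).
    y = np.linspace(-y_max, y_max, n_pts)
    dy = y[1] - y[0]
    vals = f(x + y / 2) * np.conj(f(x - y / 2)) * np.exp(-1j * p * y / hbar)
    return float(np.real(np.sum(vals)) * dy / (2 * np.pi * hbar))

# Harmonic-oscillator ground state (hbar = 1): f(x) = pi^{-1/4} exp(-x^2/2)
f = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2)

x0, p0 = 0.5, -0.3
W = wigner_point(f, x0, p0)
exact = np.exp(-(x0 ** 2 + p0 ** 2)) / np.pi  # closed form for this state
print(W, exact)
```

The quadrature reproduces the Gaussian closed form to machine precision, since the integrand is an analytic, rapidly decaying function.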
In quantum mechanics, one usually has to deal with statistical mixtures of pure states. This means that pure states represented by the rank one operators $\widehat{\rho}_f =\widehat{\rho}_{f,f}$ (see (\ref{eqReview17})) are replaced by convex combinations of the form:
\begin{equation}
\widehat{\rho} = \sum_{\alpha} p_{\alpha} \widehat{\rho}_{f_{\alpha}},
\label{eqReview21}
\end{equation}
with $p_{\alpha} \ge 0$ and $\sum_{\alpha} p_{\alpha} =1$. The convergence of the series in (\ref{eqReview21}) is understood in the sense of the trace norm. Operators of this form are called density matrices. They are positive trace class operators with unit trace. The set of density matrices - the set of states - is denoted by $\mathcal{S}( L^2 (\mathbb{R}^n))$. A density matrix $\widehat{\rho}$ is a Hilbert-Schmidt operator with kernel:
\begin{equation}
\rho (x,y) = \sum_{\alpha} p_{\alpha} f_{\alpha}(x) \overline{ f_{\alpha}(y)}.
\label{eqReview22}
\end{equation}
The associated Wigner function is
\begin{equation}
\begin{array}{c}
W \rho (x,p) = \sum_{\alpha} p_{\alpha} Wf_{\alpha} (x,p)= \left(\frac{1}{2 \pi \hbar}\right)^n \int_{\mathbb{R}^n} \rho \left(x+ \frac{y}{2},x- \frac{y}{2} \right) e^{- \frac{i}{\hbar} p \cdot y} dy=\\
\\
= \left(\frac{1}{2 \pi \hbar}\right)^n \sum_{\alpha} p_{\alpha} \int_{\mathbb{R}^n} f_{\alpha}\left(x+ \frac{y}{2} \right) \overline{f_{\alpha}\left(x- \frac{y}{2} \right)} e^{- \frac{i}{\hbar} p \cdot y} dy
\end{array}
\label{eqReview23}
\end{equation}
with uniform convergence.
We shall denote by $\mathcal{W} (\mathbb{R}^{2n})$ the set of all Wigner functions associated with density matrices, that is the range of the Weyl transform acting on $\mathcal{S}(L^2 (\mathbb{R}^n))$. This is basically the set of quantum mechanical states in the Weyl-Wigner representation. One can tell whether an element $W \rho \in \mathcal{W} (\mathbb{R}^{2n})$ represents a pure or a mixed state by calculating its purity:
\begin{equation}
\mathcal{P} \left[W \rho \right]:= (2 \pi \hbar)^n ||| W \rho|||^2.
\label{eqReview23.1}
\end{equation}
We have:
\begin{equation}
\left\{
\begin{array}{l l}
\mathcal{P} \left[W \rho \right]=1, & \mbox{if $W \rho$ is a pure state}\\
& \\
\mathcal{P} \left[W \rho \right]<1, & \mbox{if $W \rho$ is a mixed state}
\end{array}
\right.
\label{eqReview23.2}
\end{equation}
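The purity criterion can be illustrated numerically (an aside assuming numpy, $\hbar=1$, and the standard closed form $Wf(z)=\frac{1}{\pi\hbar}e^{-|z|^2/\hbar}$ for the ground-state Wigner function): evaluating $\mathcal{P}[W\rho]=(2\pi\hbar)|||W\rho|||^2$ on a grid gives $1$ for the pure state, and a value close to $1/2$ for an equal mixture of two well-separated displaced copies:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-8.0, 8.0, 801)
dA = (x[1] - x[0]) ** 2
X, P = np.meshgrid(x, x)

# Ground-state Wigner function (pure state), and a 50/50 mixture of it
# with a copy displaced by 3 units in x (np.roll shifts the grid).
W_pure = np.exp(-(X ** 2 + P ** 2) / hbar) / (np.pi * hbar)
W_mixed = 0.5 * W_pure + 0.5 * np.roll(W_pure, 150, axis=1)

purity = lambda W: (2 * np.pi * hbar) * np.sum(W ** 2) * dA
print(purity(W_pure), purity(W_mixed))
```

The mixed value is slightly above $1/2$ because the two displaced Gaussians still overlap a little; as the displacement grows, the purity tends to $1/2$.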
One aspect which makes the Wigner formalism very appealing is the fact that expectation values are computed with a formula akin to classical statistical mechanics \cite{Folland,Gosson,Wong}. Indeed, if $\widehat{A}$ is a self-adjoint Weyl operator with symbol $a \in \mathcal{S} (\mathbb{R}^{2n})$, then it can be shown that
\begin{equation}
(g|\widehat{A} f)= ((a| W(g,f) )),
\label{eqReview24}
\end{equation}
for $f,g \in \mathcal{S} (\mathbb{R}^n)$. In particular, we have:
\begin{equation}
<\widehat{A} >_f=(f|\widehat{A} f)= \int_{\mathbb{R}^{2n}} a(x,p) W f(x,p) dx dp.
\label{eqReview25}
\end{equation}
For a generic self-adjoint Weyl operator $\widehat{A} \overset{\mathrm{Weyl}}{\longleftrightarrow} a$ which is also trace-class, the following identity holds:
\begin{equation}
Tr (\widehat{A}) = \left( \frac{1}{2 \pi \hbar}\right)^n \int_{\mathbb{R}^{2n}} a(z) dz.
\label{eqReview26}
\end{equation}
If $\widehat{A} \overset{\mathrm{Weyl}}{\longleftrightarrow} a$ and $\widehat{B} \overset{\mathrm{Weyl}}{\longleftrightarrow} b$ are Weyl operators such that $\widehat{A} \widehat{B}$ is trace-class, then we have \cite{Birk,Gosson}:
\begin{equation}
Tr (\widehat{A}\widehat{B}) = \left( \frac{1}{2 \pi \hbar}\right)^n \int_{\mathbb{R}^{2n}} a(z)\star_{\hbar} b(z) dz = \left( \frac{1}{2 \pi \hbar}\right)^n \int_{\mathbb{R}^{2n}} a(z) b(z) dz .
\label{eqReview27}
\end{equation}
In particular, for density matrices (\ref{eqReview25}) generalizes to
\begin{equation}
< \widehat{A}>_{\rho} = Tr ( \widehat{A} \widehat{\rho}) =\int_{\mathbb{R}^{2n}} a(z) W \rho (z) dz,
\label{eqReview27.1}
\end{equation}
provided $\widehat{A} \widehat{\rho}$ is trace class.
In general, it is very difficult to determine whether a given phase space function $F$ is the Wigner function of some density matrix $\widehat{\rho} \in \mathcal{S}(L^2 (\mathbb{R}^n))$. It can be shown that \cite{Dias1,Lions}:
\begin{theorem}\label{TheoremReview1}
Let $F: \mathbb{R}^{2n} \to \mathbb{C}$ be a measurable function. We have $F \in \mathcal{W} (\mathbb{R}^{2n})$ if and only if:
\vspace{0.3 cm}
\noindent
(i) $F$ is a real function,
\vspace{0.3 cm}
\noindent
(ii) $F \in L^2 (\mathbb{R}^{2n})$,
\vspace{0.3 cm}
\noindent
(iii) $\int_{\mathbb{R}^{2n}}F (z) dz =1$,
\vspace{0.3 cm}
\noindent
(iv) $\int_{\mathbb{R}^{2n}}F (z) W f (z) dz \ge 0$, for all $f \in L^2 (\mathbb{R}^n) $.
\end{theorem}
The first two conditions mean that $F$ is the Weyl symbol of a self-adjoint Hilbert-Schmidt operator. The last condition means that this operator is positive. These conditions, together with (iii), imply that the operator is trace class and that the trace is equal to one.
This set of conditions is somewhat tautological, as it requires knowledge of the set of pure-state Wigner functions $W f$ in order to check the positivity condition (iv).
There is an alternative set of necessary and sufficient conditions, the Kastler, Loupias, Miracle-Sole (KLM) conditions \cite{Kastler,LouMiracle1,LouMiracle2}, which does not share this disadvantage. However, they are virtually impossible to check, as they amount to verifying the positivity of an infinite hierarchy of matrices of growing dimension (see also \cite{Dias2,Narcow2,Narcow3,Narconnell}). In practice, these conditions can be checked up to a given finite order, in which case they provide a set of necessary but not sufficient conditions for a measurable function $F$ to belong to $\mathcal{W}(\mathbb{R}^{2n})$. Other, more practical, necessary conditions are the uncertainty principles.
\subsection{Uncertainty principles}
One of the hallmarks of quantum mechanics is the uncertainty principle. For a survey of mathematical aspects of the uncertainty principle see \cite{Folland2}. Good discussions on the physical interpretation and implications of the uncertainty principle can be found in \cite{Busch1,Busch2}. Roughly speaking, an uncertainty principle poses an obstruction to a state being sharply localized both in position and in momentum space. There are various ways one can formulate this principle mathematically. For instance, one can show that (see e.g. \cite{PLA,Janssen})
\begin{theorem}\label{TheoremReview2}
If $W \rho \in \mathcal{W}(\mathbb{R}^{2n})$, then $W \rho$ is uniformly continuous and it cannot be compactly supported.
\end{theorem}
Other results for the support of joint position-momentum (or time-frequency) representations can be found in \cite{Demange} for the ambiguity function and in \cite{Wilczek} for the short-time Fourier transform. The continuous wavelet transform, which is a time-scale representation, was also shown to have non-compact support in \cite{Wilczek}. Ghobber and Jaming \cite{Ghobber1,Ghobber2} derived uncertainty principles for arbitrary integral operators (Fourier, Dunkl, Clifford transforms, etc) which have bounded kernels and satisfy a Plancherel theorem. A sharp version of the Beurling uncertainty principle was proven by B. Demange for the ambiguity function \cite{Demange}.
The most famous version of an uncertainty principle is Heisenberg's uncertainty principle:
\begin{theorem}\label{TheoremReview3}
Let $< \widehat{X}_i>= Tr(\widehat{X}_i \widehat{\rho})$, $<\widehat{P}_i>= Tr(\widehat{P}_i \widehat{\rho})$, $\Delta x_i^2= Tr((\widehat{X}_i-< \widehat{X}_i>\widehat{I})^2 \widehat{\rho})$ and $\Delta p_i^2= Tr((\widehat{P}_i-< \widehat{P}_i>\widehat{I})^2 \widehat{\rho})$ denote the expectation values and the variances of the particle's position and momentum which we assume to be finite. Then:
\begin{equation}
\Delta x_i \Delta p_i \ge \frac{\hbar}{2},
\label{eqReview28}
\end{equation}
for $i=1, \cdots, n$.
\end{theorem}
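Numerically (an aside assuming numpy and $\hbar=1$; the squeezing parameter $a$ is an arbitrary choice), the Heisenberg bound is saturated by Gaussian states. The sketch below computes $\Delta x$ and $\Delta p$ for a squeezed Gaussian directly from the definitions, evaluating the $\hbar$-scaled Fourier transform by quadrature:

```python
import numpy as np

hbar = 1.0
a = 2.0  # arbitrary squeezing parameter of the Gaussian test state
x = np.linspace(-15.0, 15.0, 1001)
dx = x[1] - x[0]
f = (a / np.pi) ** 0.25 * np.exp(-a * x ** 2 / 2)  # L^2-normalized

# hbar-scaled Fourier transform evaluated by direct quadrature on the same grid
p = x.copy()
Ff = (2 * np.pi * hbar) ** -0.5 * (np.exp(-1j * np.outer(p, x) / hbar) @ f) * dx

# <x> = <p> = 0 by symmetry, so these sums are the variances
dx2 = np.sum(x ** 2 * np.abs(f) ** 2) * dx
dp2 = np.sum(p ** 2 * np.abs(Ff) ** 2) * dx
print(np.sqrt(dx2 * dp2))  # a Gaussian saturates the bound hbar/2
```

Squeezing trades position spread for momentum spread ($\Delta x^2 = 1/(2a)$, $\Delta p^2 = a/2$ here), but the product stays pinned at $\hbar/2$.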
This theorem does not take into account the correlations $x_ix_j$, $p_ip_j$ or $x_i p_j$. A first generalization would be the Heinig-Smith uncertainty principle \cite{Heinig}:
\begin{theorem}\label{TheoremReview4}
Let $f \in L^2 (\mathbb{R}^n)$, and $\widetilde{f}$ as before, such that
\begin{equation}
d_{ij}= \int_{\mathbb{R}^n} (x_i-<x_i>)(x_j-<x_j>) |\widetilde{f}(x)|^2 dx
\label{eqReview29}
\end{equation}
and
\begin{equation}
\widetilde{d}_{ij}= \int_{\mathbb{R}^n} (\omega_i-<\omega_i>)(\omega_j-<\omega_j>) |(\mathcal{F} \widetilde{f})(\omega)|^2 d \omega
\label{eqReview30}
\end{equation}
are finite for all $i,j=1, \cdots, n$. Here
\begin{equation}
\begin{array}{l}
<x_i>= \int_{\mathbb{R}^n} x_i |\widetilde{f}(x)|^2 dx, \\
\\
<\omega_i>= \int_{\mathbb{R}^n} \omega_i |(\mathcal{F} \widetilde{f})(\omega)|^2 d \omega.
\end{array}
\label{eqReview30a}
\end{equation}
Then the covariance matrices $D=(d_{ij})_{ij}$ and $\widetilde{D}=(\widetilde{d}_{ij})_{ij}$ satisfy:
\begin{equation}
(\det D)(\det \widetilde{D}) \ge \left(\frac{1}{4 \pi} \right)^{2n}.
\label{eqReview31}
\end{equation}
Moreover, equality holds if and only if $f$ is a generalized Gaussian of the form:
\begin{equation}
f(x)=e^{- \pi x \cdot A x + 2 \pi b \cdot x + c},
\label{eqReview31.1}
\end{equation}
where $A \in Gl(n, \mathbb{C})$ is symmetric with $Re(A) >0$, and $b \in \mathbb{C}^n$, $c \in \mathbb{C}$.
\end{theorem}
\begin{remark}\label{RemarkReview5.1}
The previous theorem also holds for density matrices. Moreover, as in Theorem \ref{TheoremReview3}, we could have assumed immediately that $f$ is normalized $||f||=||\mathcal{F}f ||=1$. We have chosen this version here, because this is how we will need this result below.
\end{remark}
\begin{remark}\label{RemarkReview5}
It will be useful in the sequel to write the Heinig-Smith inequality for functions $F$ defined in the phase space $\mathbb{R}^{2n}$ and express it in terms of the symplectic Fourier transform. Thus, in view of (\ref{eqReview10}):
\begin{equation}
\operatorname*{Cov} (|\mathcal{F} \widetilde{F}|^2)= \frac{1}{(2 \pi \hbar)^2} J \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2)
J^T .
\label{eqReview32.1}
\end{equation}
Replacing $D$ by $\operatorname*{Cov}(|\widetilde{F}|^2)$, $\widetilde{D}$ by $\operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2)$ and $n$ by $2n$ in (\ref{eqReview31}) yields:
\begin{equation}
\det \left(\operatorname*{Cov}(|\widetilde{F}|^2) \right) \det \left(\operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2) \right) \ge \left( \frac{\hbar}{2} \right)^{4n}.
\label{eqReview32.2}
\end{equation}
Moreover, the inequality (\ref{eqReview32.2}) becomes an equality if and only if $F$ is of the form:
\begin{equation}
F(z)=e^{- \pi z \cdot A z + 2 \pi b \cdot z + c},
\label{eqReview32.3}
\end{equation}
where $A \in Gl(2n, \mathbb{C})$ is symmetric with $Re(A) >0$, and $b \in \mathbb{C}^{2n}$, $c \in \mathbb{C}$.
\end{remark}
Other uncertainty principles involving quadratic forms were obtained by B. Demange \cite{Demange}.
Theorems \ref{TheoremReview3} and \ref{TheoremReview4} still do not account for the position-momentum correlations. A consequence of this is that they are not invariant under linear (anti-)symplectic transformations. On the other hand, the Robertson-Schr\"odinger uncertainty principle is symplectically invariant \cite{Gosson}.
\begin{theorem}\label{TheoremReview6}
{\bf (Robertson-Schr\"odinger uncertainty principle)} Let $ \operatorname*{Cov} (W \rho)$ be the covariance matrix of $W \rho$ (or $\widehat{\rho}$) with entries:
\begin{equation}
\operatorname*{Cov} (W \rho)= \int_{\mathbb{R}^{2n}} (z-<z>) (z-<z>)^T W \rho (z) dz ,
\label{eqReview33}
\end{equation}
which we assume to be finite. Then we have:
\begin{equation}
\operatorname*{Cov} (W \rho) + \frac{i\hbar}{2} J \ge 0.
\label{eqReview34}
\end{equation}
That is, the matrix $\operatorname*{Cov} (W \rho) + \frac{i\hbar}{2} J$ is positive in $\mathbb{C}^{2n}$.
\end{theorem}
By diagonalizing $\operatorname*{Cov} (W \rho)$ with the help of Williamson's Theorem and using the symplectic invariance of (\ref{eqReview34}), we conclude that the RSUP is equivalent to \cite{Gosson,Narcow2,Narcow}
\begin{equation}
\lambda_{\sigma,1}\left( \operatorname*{Cov} (W \rho)\right) \ge \frac{\hbar}{2},
\label{eqReview35}
\end{equation}
where $\lambda_{\sigma,1}\left(\operatorname*{Cov} (W \rho)\right)$ is the smallest symplectic eigenvalue of $\operatorname*{Cov} (W \rho)$. The extremal situation
\begin{equation}
\lambda_{\sigma,1}\left(\operatorname*{Cov} (W \rho)\right)=\lambda_{\sigma,2}\left(\operatorname*{Cov} (W \rho)\right)= \cdots=\lambda_{ \sigma,n}\left(\operatorname*{Cov} (W \rho)\right)=\frac{\hbar}{2},
\label{eqReview36}
\end{equation}
corresponds to a minimal uncertainty density matrix. In $\mathcal{W}(\mathbb{R}^{2n})$ this can only be achieved by Gaussian pure states \cite{Gosson}.
\begin{theorem}\label{TheoremReview7}
Let $\operatorname*{Cov} (W \rho)$ satisfy the RSUP (\ref{eqReview34}) with $W \rho \in \mathcal{W} (\mathbb{R}^{2n})$. Then it saturates the uncertainty principle in the sense of (\ref{eqReview36}) if and only if $W \rho =W f$ is the Wigner function of a Gaussian pure state $f$ of the form (\ref{eqReview31.1}).
\end{theorem}
\begin{remark}\label{RemarkLittlejohn}
The Wigner function of a Gaussian pure state (\ref{eqReview31.1}) can be expressed as
\begin{equation}
W f(z)= \frac{1}{( \pi \hbar)^n} \exp \left( - \frac{1}{2} (z-z_0) \cdot \left( \operatorname*{Cov} (Wf) \right)^{-1} (z-z_0)\right),
\label{eqReview37}
\end{equation}
where $z_0 \in \mathbb{R}^{2n}$ and the covariance matrix $ \operatorname*{Cov} (Wf)$ is a real symmetric positive-definite $2 n \times 2n$ matrix such that
\begin{equation}
\frac{2}{\hbar} \operatorname*{Cov} (Wf) \in Sp(n).
\label{eqReview38}
\end{equation}
This is known by physicists as Littlejohn's Theorem \cite{Littlejohn} but was first proven by Bastiaans \cite{Bastiaans}.
\end{remark}
Theorem \ref{TheoremReview7} is valid in $\mathcal{W}(\mathbb{R}^{2n})$ but not in $L^2(\mathbb{R}^{2n})$. In fact, the RSUP is only a necessary condition for a real phase space function $F$ to be a Wigner function. However, it is not sufficient (not even if saturated). Here is a counter-example.
\begin{example}\label{ExampleReview9}
Let $F$ be the function on $\mathbb{R}^2$ defined by
\begin{equation}
F(z)=\frac{1}{\pi R^2} \chi_R (z),
\label{eqReview39}
\end{equation}
where $\chi_R (z)$ is the indicator function of the disc of radius $R$ centered at the origin
\begin{equation}
\chi_R (z)= \left\{
\begin{array}{l l}
1 & \mbox{if } |z| \le R\\
0 & \mbox{if } |z| >R
\end{array}
\right. ~.
\label{eqReview40}
\end{equation}
The function $F$ is real and normalized. However, it cannot possibly be a Wigner function, because it is discontinuous and because it has compact support. But, as we now show, it can nevertheless satisfy the Robertson-Schr\"odinger uncertainty principle, or even saturate it, provided we choose the radius $R$ appropriately.
A simple calculation shows that the covariance matrix of $F$ is
\begin{equation}
\operatorname*{Cov} (F)= \frac{R^2}{4} I,
\label{eqReview41}
\end{equation}
where $I$ is the identity matrix. The Williamson invariant of $\operatorname*{Cov} (F)$ is
\begin{equation}
\lambda_{\sigma,1}(\operatorname*{Cov}(F)) = \frac{R^2}{4}.
\label{eqReview42}
\end{equation}
So the Robertson-Schr\"odinger uncertainty principle is satisfied, if and only if
\begin{equation}
R \ge \sqrt{2 \hbar},
\label{eqReview43}
\end{equation}
and saturated provided
\begin{equation}
R = \sqrt{2 \hbar}.
\label{eqReview44}
\end{equation}
In higher dimension $n >1$, we may consider the tensor products
\begin{equation}
F(z)= \prod_{j=1}^n \frac{1}{\pi R^2} \chi_R (z_j).
\label{eqReview44.1}
\end{equation}
Again, if (\ref{eqReview43}) holds, then $F$ satisfies the RSUP and it saturates it for (\ref{eqReview44}).
\end{example}
Thus, as we argued in the introduction, the only imprint of quantum mechanics in the RSUP is a scale requirement related to Planck's constant. Indeed, we have the more dramatic result that, provided the covariance matrix is finite and positive-definite, then any phase space function satisfies the RSUP after a scale transformation.
\begin{lemma}\label{LemmaReview10}
Let $F: \mathbb{R}^{2n} \to \mathbb{R}$ be a normalized measurable function such that its covariance matrix $\operatorname*{Cov}(F)$ is finite and positive-definite. Then there exists $0<\mu \le 1 $ such that $F_{\mu}(z)= \mu^{2n} F(\mu z)$ satisfies the RSUP.
\end{lemma}
\begin{proof}
Let $\lambda_{\sigma,1} \left( \operatorname*{Cov}(F)\right)$ denote the smallest Williamson invariant of $\operatorname*{Cov} (F)$. If $\lambda_{\sigma,1} \left( \operatorname*{Cov}(F)\right) \ge \frac{\hbar}{2}$, we choose $\mu=1$ and we are done. Alternatively, suppose that $\lambda_{\sigma,1} \left( \operatorname*{Cov} (F)\right) < \frac{\hbar}{2}$. Since $\operatorname*{Cov} (F_{\mu}) = \frac{1}{\mu^2} \operatorname*{Cov} (F)$, we conclude that $\lambda_{\sigma,1} \left( \operatorname*{Cov} (F_{\mu})\right)= \frac{ \lambda_{\sigma,1} \left( \operatorname*{Cov} (F)\right)}{\mu^2}$. If we choose
\begin{equation}
0< \mu < \sqrt{\frac{2 \lambda_{\sigma,1} \left( \operatorname*{Cov} (F)\right)}{\hbar}} <1,
\label{eqReview45}
\end{equation}
then $F_{\mu}$ satisfies the RSUP.
\end{proof}
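The scaling argument of the proof is elementary arithmetic and can be checked directly (an aside assuming numpy; the covariance matrix below is an arbitrary example violating the RSUP): since $\operatorname*{Cov}(F_\mu)=\mu^{-2}\operatorname*{Cov}(F)$, any $\mu$ satisfying (\ref{eqReview45}) pushes the smallest symplectic eigenvalue above $\hbar/2$:

```python
import numpy as np

hbar = 1.0
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # n = 1

def smallest_symplectic_eigenvalue(B):
    # Smallest modulus of the eigenvalues of B J^{-1}
    return float(np.min(np.abs(np.linalg.eigvals(B @ np.linalg.inv(J)))))

# A covariance matrix that violates the RSUP:
# its symplectic eigenvalue is sqrt(0.4 * 0.1) = 0.2 < hbar/2
Cov_F = np.diag([0.4, 0.1])
lam = smallest_symplectic_eigenvalue(Cov_F)

# Any mu in (0, sqrt(2*lam/hbar)) works; here we take half that bound.
mu = 0.5 * np.sqrt(2 * lam / hbar)
lam_mu = smallest_symplectic_eigenvalue(Cov_F / mu ** 2)
print(lam, lam_mu, lam_mu >= hbar / 2)
```

The rescaled covariance matrix comfortably satisfies the RSUP, illustrating that the RSUP by itself only enforces a scale set by Planck's constant.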
\subsection{Modulation spaces}
To conclude this section, we address the question of finiteness of the covariance matrix elements of a given function. The proper setting in this respect is that of Feichtinger's modulation spaces \cite{Hans1,fei81}\footnote{For a detailed review see \cite{Gro}; we are using here their formulation in terms of the Wigner
distribution as in \cite{Birk}.}. These are a class of functional spaces which, roughly speaking, describe the integrability, decay and smoothness properties of a function and its Fourier transform.
Let $\langle z\rangle=(1+|z|^2)^{1/2}$; we will
call $\langle\cdot\rangle$ the standard weight function. The modulation space
$M_{s}^{q}(\mathbb{R}^{n})$ consists of all distributions $f\in
\mathcal{S}^{\prime}(\mathbb{R}^{n})$ such that $W(f,g)\in L_{s}
^{q}(\mathbb{R}^{2n})$ for all $g \in\mathcal{S}(\mathbb{R}^{n}) \backslash \left\{0 \right\}$. Here
$L_{s}^{q}(\mathbb{R}^{2n})$ is the space of all functions $F$ on
$\mathbb{R}^{2n}$ such that
\begin{equation}
||F||_{L_{s}^{q}}=\left( \int_{\mathbb{R}^{2n}}\left( \langle z\rangle
^{s}|F(z)|\right)^{q}dz\right) ^{1/q}<\infty.
\label{eqReview46}
\end{equation}
One shows that $M_{s}^{q}(\mathbb{R}^{n})$ is a Banach space for the norm
\begin{equation}
||f||_{g,M_{s}^{q}}=||W(f,g)||_{L_{s}^{q}};
\label{eqReview47}
\end{equation}
these norms are in fact all equivalent for different choices of window $g$, so that the condition $f\in
M_{s}^{q}(\mathbb{R}^{n})$ holds if $W(f,g)\in L_{s}^{q}(\mathbb{R}
^{2n})$ for one $g \in\mathcal{S}(\mathbb{R}^{n})\backslash \left\{0 \right\}$; even more surprisingly,
we have $f\in M_{s}^{q}(\mathbb{R}^{n})$ if and only if $Wf =W(f,f) \in L_{s}
^{q}(\mathbb{R}^{2n})$ (but it is of course not immediately obvious from this
characterization that $M_{s}^{q}(\mathbb{R}^{n})$ is a vector space!). The
class of modulation spaces contains as particular cases several well-known
function spaces. For instance, the Shubin class
\begin{equation}
Q^{s}(\mathbb{R}^{n})=L_{s}^{2}(\mathbb{R}^{n})\cap H^{s}(\mathbb{R}^{n}),
\label{eqReview48}
\end{equation}
corresponds to $M_{s}^{2} (\mathbb{R}^n)$. In particular, it can be shown that:
\begin{equation}
M_{1}^{2} (\mathbb{R}^n) \simeq \left\{f \in \mathcal{S}^{\prime} (\mathbb{R}^n): ~ \int_{\mathbb{R}^n} (1+|x|^2) \left(|f(x)|^2 + |(\mathcal{F}f)(x)|^2 \right) dx < \infty \right\}.
\label{eqReview49}
\end{equation}
The case $q=1$, $s=0$ is also noteworthy. The corresponding
modulation space $M_{0}^{1}(\mathbb{R}^{n})$ is called Feichtinger's algebra
and is usually denoted by $S_{0}(\mathbb{R}^{n})$. The Feichtinger algebra is
an algebra for both pointwise multiplication and convolution. One proves that
$S_{0}(\mathbb{R}^{n})$ is the smallest Banach space containing $\mathcal{S}
(\mathbb{R}^{n})$ which is invariant under the action of metaplectic
operators and translations. We have the inclusion
\begin{equation}
S_{0}(\mathbb{R}^{n})\subset C^{0}(\mathbb{R}^{n})\cap L^{1}(\mathbb{R}
^{n})\cap \mathcal{F} L^{1}(\mathbb{R}^{n}).
\label{eqReview50}
\end{equation}
The modulation spaces $M_{s}^{q}(\mathbb{R}^{n})$ have similar properties:
\begin{proposition}
(i) Each space $M_{s}^{q}(\mathbb{R}^{n})$ is invariant under the action of
the Heisenberg-Weyl operators $\widehat{T}(z)$ and there exists a constant $C>0$ such that
\begin{equation}
||\widehat{T}(z)f||_{g,M_{s}^{q}}\leq C\langle z\rangle^{s}
||f||_{g,M_{s}^{q}};
\label{eqReview51}
\end{equation}
(ii) If $\widehat{S}\in\operatorname*{Mp}(n)$ and $f \in M_{s}
^{q}(\mathbb{R}^{n})$ then $\widehat{S} f \in M_{s}^{q}(\mathbb{R}^{n})$;
\noindent
(iii) $\mathcal{S}(\mathbb{R}^{n})$ is dense in each of the spaces $M_{s}^{q}(\mathbb{R}^{n})$ and we have
\begin{equation}
\mathcal{S}(\mathbb{R}^{n})=\cap_{s\geq0}M_{s}^{2}(\mathbb{R}^{n}).
\label{eqReview52}
\end{equation}
\end{proposition}
We remark that the Feichtinger algebra $S_{0}(\mathbb{R}^{n})=M_{0}^{1}(\mathbb{R}^{n})$ is the smallest
algebra containing the Schwartz functions and having properties (i) and (ii) above.
\section{The refined Robertson-Schr\"odinger uncertainty principle}
To prove our main theorem, we need the following two preliminary results.
\begin{proposition}\label{PropositionERSUP1}
Let $\widehat{A} \overset{\mathrm{Weyl}}{\longleftrightarrow} a$ be a positive Weyl operator with symbol $a \in \mathcal{S}^{\prime} (\mathbb{R}^{2n})$, and let $W \rho$ be the Wigner function associated with the density matrix $\widehat{\rho}$. If $\widehat{A}\widehat{\rho}$ is trace-class, then we have
\begin{equation}
\frac{1}{(2 \pi \hbar)^n} \int_{\mathbb{R}^{2n}} a(z) W \rho (z) dz \ge \int_{\mathbb{R}^{2n}} a(z) (W \rho (z) \star_{\hbar} W \rho (z)) dz \ge 0 ,
\label{eqERSUP1}
\end{equation}
where $\star_{\hbar}$ denotes the Moyal product. Moreover, the first inequality becomes an equality if and only if the state is pure.
\end{proposition}
\begin{proof}
A density matrix is a trace class operator and hence compact. Thus, it admits the following spectral decomposition \cite{Gosson}:
\begin{equation}
\widehat{\rho} = \sum_{\alpha} \lambda_{\alpha} \widehat{P}_{\alpha},
\label{eqERSUP2}
\end{equation}
where $(\lambda_{\alpha})_{\alpha}$ are the eigenvalues of $\widehat{\rho}$, with
\begin{equation}
\lambda_{\alpha}>0, \hspace{1 cm} \sum_{\alpha} \lambda_{\alpha}=1.
\label{eqERSUP3}
\end{equation}
Here $\widehat{P}_{\alpha}$ is the orthogonal projection onto the eigenspace associated with the eigenvalue $\lambda_{\alpha}$.
Since
\begin{equation}
\widehat{\rho}^2 = \sum_{\alpha} \lambda_{\alpha}^2 \widehat{P}_{\alpha},
\label{eqERSUP4}
\end{equation}
we have by linearity, the positivity of $\widehat{A} $, convergence in the trace norm and the fact that $0 < \lambda_{\alpha} \le 1$:
\begin{equation}
0 \le Tr(\widehat{A}\widehat{\rho}^2) = \sum_{\alpha} \lambda_{\alpha}^2 Tr(\widehat{A}\widehat{P}_{\alpha}) \le \sum_{\alpha} \lambda_{\alpha} Tr(\widehat{A}\widehat{P}_{\alpha}) =Tr(\widehat{A}\widehat{\rho}) .
\label{eqERSUP5}
\end{equation}
Finally, an equality holds if and only if $\lambda_{\alpha}=0$ or $\lambda_{\alpha}=1$ for all $\alpha$. This is possible for a normalized state if and only if the state is pure. From (\ref{eqReview15},\ref{eqReview27}), we then recover (\ref{eqERSUP1}).
\end{proof}
The following technical result will also be useful.
\begin{proposition}\label{PropositionERSUP2}
Let $F \in M_1^2 (\mathbb{R}^{2n})$ and $a(z)= \eta \cdot(z-z_0)$ for fixed $\eta \in \mathbb{C}^{2n}$ and $z_0 \in \mathbb{R}^{2n}$. Then the following identity holds:
\begin{equation}
\begin{array}{c}
\frac{1}{2} \int_{\mathbb{R}^{2n}} \left(|a \star_{\hbar}F|^2 + |F \star_{\hbar}a|^2 \right) dz=\\
\\
= \int_{\mathbb{R}^{2n}} \left(|a (z) |^2 |F(z)|^2 + \frac{ |\eta \cdot z|^2}{4} | (\mathcal{F}_{\sigma} F)(z)|^2 \right) dz.
\end{array}
\label{eqERSUP5.1}
\end{equation}
\end{proposition}
\begin{proof}
We start by showing that, as a distribution, $a \star_{\hbar}F \in \mathcal{S}^{\prime} (\mathbb{R}^{2n})$ is given by:
\begin{equation}
(a \star_{\hbar}F)(z)= a(z) F(z) + \frac{i \hbar}{2} \eta \cdot J \nabla F (z),
\label{eqERSUP5.2}
\end{equation}
where
\begin{equation}
\nabla F= \left(\frac{\partial F}{\partial x_1}, \cdots, \frac{\partial F}{\partial x_n} , \frac{\partial F}{\partial p_1}, \cdots, \frac{\partial F}{\partial p_n}\right)
\label{eqERSUP5.3}
\end{equation}
is the distributional gradient of $F$.
Indeed, let $\phi \in \mathcal{S}(\mathbb{R}^{2n})$. We have, by the distributional property (\ref{eqReview27}):
\begin{equation}
\begin{array}{c}
<a \star_{\hbar}F, \phi> = <F,\phi \star_{\hbar} a > =\\
\\
= \int_{\mathbb{R}^{2n}} F(z) (\phi \star_{\hbar} a) (z) dz = \int_{\mathbb{R}^{2n}} F(z) \left(\phi (z) a (z) - \frac{i \hbar}{2} \eta \cdot J \nabla \phi (z) \right) dz =\\
\\
= <Fa + \frac{i \hbar}{2} \eta \cdot J \nabla F, \phi> .
\end{array}
\label{eqERSUP5.4}
\end{equation}
Hence, (\ref{eqERSUP5.2}) follows.
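To make (\ref{eqERSUP5.2}) concrete, consider the case $n=1$ and write $z=(x,p)$, $z_0=(x_0,p_0)$ and $\eta=(\eta_1, \eta_2)$. With the standard symplectic matrix $J= \left( \begin{array}{cc} 0 & 1\\ -1 & 0 \end{array} \right)$, formula (\ref{eqERSUP5.2}) reads
\[
(a \star_{\hbar}F)(x,p)= \left( \eta_1 (x-x_0)+ \eta_2 (p-p_0) \right) F(x,p) + \frac{i \hbar}{2} \left( \eta_1 \frac{\partial F}{\partial p} - \eta_2 \frac{\partial F}{\partial x} \right) (x,p).
\]
In particular, for $\eta=(1,0)$ and $z_0=0$ we recover the familiar Bopp shift $x \star_{\hbar} = x + \frac{i \hbar}{2} \partial_p$.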
Since $M_1^2( \mathbb{R}^{2n}) \simeq H^1 (\mathbb{R}^{2n}) \cap \mathcal{F}H^1 (\mathbb{R}^{2n})$, it is clear from (\ref{eqERSUP5.2}) that $a \star_{\hbar}F \in L^2 (\mathbb{R}^{2n})$. Moreover, given that
\begin{equation}
F \star_{\hbar}a= a \star_{- \hbar}F,
\label{eqERSUP5.5}
\end{equation}
the same can be said about $F \star_{\hbar}a$. We conclude that the left-hand side of (\ref{eqERSUP5.1}) is well defined and finite.
From (\ref{eqERSUP5.2},\ref{eqERSUP5.5}), we have
\begin{equation}
\begin{array}{c}
\frac{1}{2} \int_{\mathbb{R}^{2n}} \left(|a \star_{\hbar}F|^2 + |F \star_{\hbar}a|^2 \right) dz=\\
\\
= \frac{1}{2}\int_{\mathbb{R}^{2n}} \left(\left| a(z) F(z) + \frac{i \hbar}{2} \eta \cdot J \nabla F (z) \right|^2 + \left| a(z) F(z) - \frac{i \hbar}{2} \eta \cdot J \nabla F (z) \right|^2\right) dz =\\
\\
= \int_{\mathbb{R}^{2n}} \left(| a(z)|^2 |F(z)|^2 + \frac{\hbar^2}{4} | \eta \cdot J \nabla F(z)|^2 \right) dz.
\end{array}
\label{eqERSUP5.6}
\end{equation}
Since $F \in H^1 (\mathbb{R}^{2n}) $, we can express the last term as
\begin{equation}
\int_{\mathbb{R}^{2n}} | \eta \cdot J \nabla F(z)|^2 dz = \frac{1}{\hbar^2} \int_{\mathbb{R}^{2n}} | \eta \cdot z|^2 |(\mathcal{F}_{\sigma} F)(z)|^2 dz
\label{eqERSUP5.7}
\end{equation}
and we recover (\ref{eqERSUP5.1}).
\end{proof}
We are now in a position to prove the refined RSUP. This uncertainty principle synthesizes the Heinig-Smith inequality and the RSUP, but is stronger than both.
\begin{theorem}\label{TheoremERSUP2}
Let $W \rho \in \mathcal{W}(\mathbb{R}^{2n})$ be such that
\begin{equation}
W \rho \in \mathcal{A} (\mathbb{R}^{2n}) := \left\{F \in M_{1}^{2} (\mathbb{R}^{2n}): ~ F \mbox{ is real and } \operatorname*{Cov} (F) \mbox{ is finite } \right\}.
\label{eqERSUP6}
\end{equation}
Then the following matrix inequalities hold in $\mathbb{C}^{2n}$:
\begin{equation}
\operatorname*{Cov}(W \rho) + \frac{i \hbar}{2}J \ge \mathcal{P} \left[ W \rho \right] \left(\operatorname*{Cov}(|\widetilde{W \rho}|^2) + \frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma}(\widetilde{W \rho})|^2) + \frac{i \hbar}{2}J \right) \ge 0 .
\label{eqERSUP7}
\end{equation}
The first inequality becomes a matrix identity if and only if the state is pure.
We remark that if a real function $F$ belongs to $M_1^2 (\mathbb{R}^{2n}) \cap M_2^1 (\mathbb{R}^{2n})$, then automatically $F \in \mathcal{A} (\mathbb{R}^{2n}) $.
\end{theorem}
\begin{proof}
We start by remarking that if $W \rho \in \mathcal{A} (\mathbb{R}^{2n})$, then all the covariance matrices appearing in (\ref{eqERSUP7}) are finite.
Define the operators
\begin{equation}
\widehat{Y}_j= \widehat{Z}_j - <\widehat{Z}_j> \widehat{I},
\label{eqERSUP8}
\end{equation}
for $j=1, \cdots, 2n$, and where
\begin{equation}
<\widehat{Z}_j>=Tr(\widehat{Z}_j \widehat{\rho}).
\label{eqERSUP9}
\end{equation}
Let also $\eta =(\eta_1, \cdots, \eta_{2n}) \in \mathbb{C}^{2n}$ and define
\begin{equation}
\widehat{A}:= (\eta \cdot \widehat{Y})^{\ast} (\eta \cdot \widehat{Y})= \sum_{j,k=1}^{2n} \overline{\eta_j} \eta_k \widehat{Y}_j\widehat{Y}_k,
\label{eqERSUP10}
\end{equation}
where $\widehat{B}^{\ast}$ denotes the adjoint of the operator $\widehat{B}$. Clearly, $\widehat{A}$ is a positive Weyl operator with symbol:
\begin{equation}
a(z)=\sum_{j,k=1}^{2n} \overline{\eta_j} \eta_k y_j\star_{\hbar} y_k =\sum_{j,k=1}^{2n} \overline{\eta_j} \eta_k(y_j y_k + \frac{i \hbar}{2} J_{jk} )= | \eta \cdot y|^2 + \frac{i \hbar}{2} \sigma (\eta, \overline{\eta}),
\label{eqERSUP11}
\end{equation}
where $y_j=z_j - <\widehat{Z}_j>$ is the symbol of $\widehat{Y}_j$.
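For instance, in the case $n=1$, the Moyal product of the linear symbols truncates after the first-order term, $y_1 \star_{\hbar} y_2 = y_1 y_2 + \frac{i \hbar}{2}$ and $y_2 \star_{\hbar} y_1 = y_1 y_2 - \frac{i \hbar}{2}$, so that
\[
\sum_{j,k=1}^{2} \overline{\eta_j} \eta_k \, y_j \star_{\hbar} y_k = |\eta \cdot y|^2 + \frac{i \hbar}{2} \left( \overline{\eta_1} \eta_2 - \overline{\eta_2} \eta_1 \right)= |\eta \cdot y|^2 + \frac{i \hbar}{2} \sigma (\eta, \overline{\eta}),
\]
in agreement with (\ref{eqERSUP11}).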
Since $W \rho \in \mathcal{A} (\mathbb{R}^{2n})$, we have that $\widehat{A}\widehat{\rho}$ is trace class, or equivalently, that
\begin{equation}
\int_{\mathbb{R}^{2n}} a(z) W \rho (z) dz
\label{eqERSUP11.1}
\end{equation}
exists and is finite. We conclude that (\ref{eqERSUP1}) holds.
Next we evaluate the integrals in (\ref{eqERSUP1}). We start with
\begin{equation}
\begin{array}{c}
\int_{\mathbb{R}^{2n}} a(z) W \rho (z) dz = \sum_{j,k=1}^{2n} \overline{\eta_j} \eta_k \int_{\mathbb{R}^{2n}}y_jy_k W \rho (z) dz + \frac{i \hbar}{2} \sigma (\eta, \overline{\eta}) = \\
\\
= \overline{\eta} \cdot \operatorname*{Cov} (W \rho) \eta + \frac{i \hbar}{2} \sigma (\eta, \overline{\eta})= \overline{\eta} \cdot \left(\operatorname*{Cov} (W \rho)+ \frac{i \hbar}{2} J \right) \eta .
\end{array}
\label{eqERSUP12}
\end{equation}
Next, we have
\begin{equation}
\begin{array}{c}
(2 \pi \hbar)^n \int_{\mathbb{R}^{2n}} a(z) (W \rho (z) \star_{\hbar} W \rho (z)) dz = Tr(\widehat{A} \widehat{\rho}^2)=\\
\\
=Tr((\eta \cdot \widehat{Y})^{\ast} (\eta \cdot \widehat{Y}) \widehat{\rho}^2) =\\
\\
=\frac{1}{2}Tr\left( \left((\eta \cdot \widehat{Y})^{\ast} (\eta \cdot \widehat{Y})+(\eta \cdot \widehat{Y})(\eta \cdot \widehat{Y})^{\ast} \right) \widehat{\rho}^2\right)+\frac{1}{2}Tr\left(\left[(\eta \cdot \widehat{Y})^{\ast}, (\eta \cdot \widehat{Y})\right] \widehat{\rho}^2\right) =\\
\\
=\frac{1}{2}Tr\left[ \left((\eta \cdot \widehat{Y}) \widehat{\rho} \right) \left(\widehat{\rho}(\eta \cdot \widehat{Y})^{\ast} \right) \right]+\frac{1}{2}Tr\left[ \left((\eta \cdot \widehat{Y})^{\ast} \widehat{\rho} \right) \left(\widehat{\rho}(\eta \cdot \widehat{Y}) \right) \right] + \\
\\
+\frac{i \hbar}{2} \sigma( \eta , \overline{\eta}) Tr (\widehat{\rho}^2)=\\
\\
=\frac{(2 \pi \hbar)^n}{2} \int_{\mathbb{R}^{2n}} \left[ \left( (\eta \cdot y) \star_{\hbar} W \rho \right) \left(W \rho \star_{\hbar} (\overline{\eta \cdot y}) \right) + \right.\\
\\
\left. + \left( (\overline{\eta \cdot y}) \star_{\hbar} W \rho \right) \left(W \rho \star_{\hbar} (\eta \cdot y) \right) \right]dz
+ \frac{i \hbar}{2}(2 \pi \hbar)^n \sigma( \eta , \overline{\eta}) ||| W \rho|||^2=\\
\\
=\frac{(2 \pi \hbar)^n}{2} \int_{\mathbb{R}^{2n}} \left( |(\eta \cdot y) \star_{\hbar} W \rho|^2 +| W \rho \star_{\hbar} (\eta \cdot y)|^2 \right) dz +\\
\\
+ \frac{i \hbar}{2}(2 \pi \hbar)^n \sigma( \eta , \overline{\eta}) ||| W \rho|||^2,
\end{array}
\label{eqERSUP13}
\end{equation}
where we used the cyclicity of the trace and (\ref{eqReview27}).
From Proposition \ref{PropositionERSUP2}, it follows that
\begin{equation}
\begin{array}{c}
(2 \pi \hbar)^n \int_{\mathbb{R}^{2n}} a(z) (W \rho (z) \star_{\hbar} W \rho (z)) dz = \\
\\
= (2 \pi \hbar)^n \int_{\mathbb{R}^{2n}} \left(|a(z)|^2 |W \rho (z)|^2 +\frac{|\eta \cdot z|^2}{4} |(\mathcal{F}_{\sigma} W \rho)(z) |^2\right) dz + \frac{i \hbar}{2} \sigma (\eta , \overline{\eta}) \mathcal{P}\left[W \rho \right].
\end{array}
\label{eqERSUP14}
\end{equation}
Now let us consider the two terms in the integral in the previous expression. We have (recall that $<\widehat{Z}>$ is the expectation value for $W \rho$ and not $|W \rho|^2$):
\begin{equation}
\begin{array}{c}
\int_{\mathbb{R}^{2n}} |a(z)|^2 |W \rho (z)|^2 dz = \overline{\eta} \cdot \left(\int_{\mathbb{R}^{2n}} (z-<\widehat{Z}>) (z-<\widehat{Z}>)^T |W \rho (z)|^2 dz \right) \eta \ge \\
\\
\ge \min_{\zeta \in \mathbb{R}^{2n}} \left\{\overline{\eta} \cdot \left(\int_{\mathbb{R}^{2n}} (z-\zeta) (z- \zeta)^T |W \rho (z)|^2 dz \right) \eta \right\} =\\
\\
= ||| W \rho|||^2 \overline{\eta} \cdot \operatorname*{Cov} (|\widetilde{W \rho}|^2) \eta .
\end{array}
\label{eqERSUP15}
\end{equation}
Next, we remark that
\begin{equation}
\int_{\mathbb{R}^{2n}} |\eta \cdot z|^2 |(\mathcal{F}_{\sigma} W \rho)(z) |^2 dz = |||W \rho|||^2 \overline{\eta} \cdot \operatorname*{Cov}\left( |\mathcal{F}_{\sigma} \widetilde{W \rho} |^2\right) \eta,
\label{eqERSUP16}
\end{equation}
where we used the fact that
\begin{equation}
\int_{\mathbb{R}^{2n}} z_j|\mathcal{F}_{\sigma} W \rho (z)|^2 dz=0,
\label{eqERSUP17}
\end{equation}
for $j=1, \cdots, 2n$, and that, by Plancherel's Theorem, $|||\mathcal{F}_{\sigma} W \rho|||=|||W \rho|||$. Altogether, from (\ref{eqERSUP14})-(\ref{eqERSUP16}), we obtain
\begin{equation}
\begin{array}{c}
(2 \pi \hbar)^n \int_{\mathbb{R}^{2n}} a(z) \left( W \rho \star_{\hbar} W \rho \right) (z) dz \ge \\
\\
\mathcal{P} \left[W \rho \right] \overline{\eta} \cdot \left( \operatorname*{Cov} (|\widetilde{W \rho}|^2) + \frac{1}{4} \operatorname*{Cov} \left( |\mathcal{F}_{\sigma} \widetilde{W \rho} |^2\right) + \frac{i \hbar}{2} J \right) \eta.
\end{array}
\label{eqERSUP18}
\end{equation}
The first inequality in (\ref{eqERSUP7}) then follows from (\ref{eqERSUP1},\ref{eqERSUP12},\ref{eqERSUP18}).
To show the second inequality in (\ref{eqERSUP7}), we observe that, from our previous calculations (\ref{eqERSUP14}, \ref{eqERSUP16}):
\begin{equation}
\begin{array}{c}
|||W \rho|||^2 \overline{\eta} \cdot \left[ \operatorname*{Cov}(|\widetilde{W \rho}|^2) + \frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{W \rho}|^2) + \frac{i \hbar}{2} J\right] \eta =\\
\\
= \mbox{min}_{\zeta \in\mathbb{R}^{2n}} \overline{\eta} \cdot \left( \int_{\mathbb{R}^{2n}} (z - \zeta)(z - \zeta)^T | W \rho(z)|^2 dz \right) \eta + \\
\\
+ |||W \rho|||^2 \overline{\eta} \cdot \left[ \frac{1}{4} \operatorname*{Cov}(| \mathcal{F}_{\sigma} \widetilde{W \rho}|^2) + \frac{i \hbar}{2} J \right] \eta =\\
\\
= \mbox{min}_{\zeta \in\mathbb{R}^{2n}} \overline{\eta} \cdot \left( \int_{\mathbb{R}^{2n}} (z - \zeta) \star_{\hbar} (z- \zeta)^T (W \rho (z) \star_{\hbar} W \rho (z)) dz \right) \eta =\\
\\
= \mbox{min}_{\zeta \in\mathbb{R}^{2n}} \int_{\mathbb{R}^{2n}} b_{\zeta} (z) (W \rho (z) \star_{\hbar} W \rho (z)) dz =\\
\\
= \mbox{min}_{\zeta \in\mathbb{R}^{2n}} \frac{1}{(2 \pi \hbar)^n} Tr (\widehat{B}_{\zeta} \widehat{\rho}^2),
\end{array}
\label{eqERSUP18.1}
\end{equation}
where $\widehat{B}_{\zeta}$ is the Weyl operator
\begin{equation}
\widehat{B}_{\zeta} = \left(\eta \cdot(\widehat{Z}- \zeta)\right)^{\ast} \left(\eta \cdot(\widehat{Z}- \zeta)\right),
\label{eqERSUP18.2}
\end{equation}
with symbol
\begin{equation}
b_{\zeta} (z) = \overline{\eta} \cdot (z - \zeta) \star_{\hbar} (z- \zeta)^T \eta = | \eta \cdot (z- \zeta)|^2 + \frac{i \hbar}{2} \sigma (\eta, \overline{\eta}).
\label{eqERSUP18.3}
\end{equation}
This is manifestly a positive operator, and so from (\ref{eqERSUP18.1}), it follows that
\begin{equation}
|||W \rho|||^2 \overline{\eta} \cdot \left[ \operatorname*{Cov} (|\widetilde{W \rho}|^2) + \frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{W \rho}|^2) + \frac{i \hbar}{2} J \right] \eta \ge 0.
\label{eqERSUP18.4}
\end{equation}
We leave to the reader the simple proof that the first inequality in (\ref{eqERSUP7}) becomes an equality if and only if the state is pure.
\end{proof}
\vspace{0.3 cm}
\noindent
The following is a simple corollary of the previous theorem.
\begin{corollary}\label{Corollary1}
Let $W \rho \in \mathcal{W}(\mathbb{R}^{2n}) \cap \mathcal{A} (\mathbb{R}^{2n})$. Then the following inequalities hold:
\begin{equation}
\begin{array}{l}
\operatorname*{Cov}(W \rho) \ge \mathcal{P} \left[W \rho \right] \left(\operatorname*{Cov} (|\widetilde{W \rho}|^2) + \frac{1}{4} \operatorname*{Cov} (| \mathcal{F}_{\sigma} \widetilde{W \rho}|^2) \right), \\
\\
\operatorname*{Cov}(W \rho) \ge \mathcal{P} \left[W \rho \right] \operatorname*{Cov} (|\widetilde{W \rho}|^2), \\
\\
\operatorname*{Cov}(W \rho) \ge \frac{\mathcal{P} \left[W \rho \right] }{4} \operatorname*{Cov} (| \mathcal{F}_{\sigma} \widetilde{W \rho}|^2) .
\end{array}
\label{eqERSUP18.4.B}
\end{equation}
\end{corollary}
\begin{proof}
The first inequality is obtained from (\ref{eqERSUP7}) by restricting to real $\eta \in \mathbb{R}^{2n}$, for which the terms $\frac{i \hbar}{2} \overline{\eta} \cdot J \eta$ vanish by the antisymmetry of $J$. The remaining two inequalities follow from the observation that $A +B \ge A$ if $A$ and $B$ are real symmetric and positive matrices.
\end{proof}
Before we proceed, we make the following remarks.
\begin{remark}\label{RemarkSymplecticCapacities}
The RSUP has an interesting geometric interpretation; as shown in
\cite{physreps} the condition
\[
\Sigma+\frac{i\hbar}{2}J\geq0
\]
is equivalent to the condition $c(\Omega)\geq\pi\hbar$ where $\Omega$ is the
covariance ellipsoid and $c$ any symplectic capacity on the standard
symplectic space $(\mathbb{R}^{2n},\sigma)$. This property relates the RSUP to
deep results in symplectic topology (Gromov's non-squeezing theorem
\cite{Gromov}). It would certainly be interesting to extend this geometric
interpretation to the refinement of the RSUP and the inequalities (\ref{eqERSUP18.4.B}) proposed in the present paper.
\end{remark}
\begin{remark}\label{Remarkhbar}
Let $A \in M (n; \mathbb{C})$ be some complex matrix. Then $A$ is positive if and only if $A^T$ is positive. From this observation and the fact that $J^T =-J$ it follows that a function $F$ satisfies the refined RSUP (\ref{eqERSUP7}) if and only if it satisfies the same inequalities with $\hbar$ replaced by $- \hbar$.
\end{remark}
Next we show that the refined RSUP is invariant under linear symplectic and anti-symplectic transformations.
\begin{theorem}\label{TheoremSymplecticCovariance}
Suppose that $F \in \mathcal{A} (\mathbb{R}^{2n}) $ satisfies the refined RSUP:
\begin{equation}
\operatorname*{Cov} (F) + \frac{i \hbar}{2} J \ge \mathcal{P} \left[F \right] \left( \operatorname*{Cov} (|\widetilde{F}|^2) + \frac{1}{4} \operatorname*{Cov} \left( |\mathcal{F}_{\sigma} \widetilde{F}|^2\right) + \frac{i \hbar}{2} J \right) \ge 0 .
\label{eqERSUP19}
\end{equation}
Then for every $S \in ASp(n)$, the function $F \circ S$ also satisfies (\ref{eqERSUP19}).
\end{theorem}
\begin{proof}
A simple calculation shows that
\begin{equation}
\left(\mathcal{F}_{\sigma} (F \circ S)\right)( \zeta)= (\mathcal{F}_{\sigma} F) (\epsilon S\zeta),
\label{eqERSUP20}
\end{equation}
where $\epsilon=1$ if $S$ is symplectic and $\epsilon=-1$ if $S$ is anti-symplectic. It is then a straightforward task to check that
\begin{equation}
\operatorname*{Cov}(G \circ S)= S^{-1} \operatorname*{Cov}(G) (S^{-1})^T,
\label{eqERSUP21}
\end{equation}
for $G=F,|\widetilde{F}|^2$ and $| \mathcal{F}_{\sigma} \widetilde{F}|^2$. Using the fact that $SJS^T= \epsilon J$, we conclude that $F \circ S$ satisfies (\ref{eqERSUP19}) with $\hbar$ replaced by $\epsilon \hbar$. In view of Remark \ref{Remarkhbar} the result follows.
\end{proof}
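A simple concrete instance is provided by the anti-symplectic ``time reversal'' map $T(x,p)=(x,-p)$. For a pure state one has
\[
W \overline{\psi} (x,p) = W \psi (x,-p),
\]
so Theorem \ref{TheoremSymplecticCovariance} shows in particular that the refined RSUP is insensitive to complex conjugation of the state.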
\begin{theorem}\label{TheoremSaturation}
Let $F\in \mathcal{A} (\mathbb{R}^{2n})$ be such that (\ref{eqERSUP19}) holds. Then $F$ has minimal Robertson-Schr\"odinger uncertainty,
\begin{equation}
\lambda_{\sigma,1} ( \operatorname*{Cov}(F))= \cdots=\lambda_{\sigma,n} (\operatorname*{Cov}(F))=\frac{\hbar}{2},
\label{eqsaturation1}
\end{equation}
if and only if $F$ is proportional to a Gaussian pure state Wigner function:
\begin{equation}
F(z) = \frac{1}{(\pi \hbar)^n} \exp \left( -\frac{1}{2} (z-z_0) \cdot ( \operatorname*{Cov}(F))^{-1} (z-z_0) \right)
\label{eqsaturation2}
\end{equation}
with $z_0 \in \mathbb{R}^{2n}$ and $\frac{2}{\hbar} \operatorname*{Cov}(F) \in Sp (n)$.
\end{theorem}
\begin{proof}
Since (\ref{eqERSUP19}) holds, we have in particular
\begin{equation}
\operatorname*{Cov}(F) + \frac{i \hbar}{2} J \ge 0.
\label{eqsaturation3}
\end{equation}
Let $(u_j)_j$ be the $n$ eigenvectors of $ \operatorname*{Cov}(F) J^{-1}$ associated with the eigenvalues $-i \lambda_{\sigma,j} (\operatorname*{Cov}(F)) = -\frac{i \hbar}{2}$:
\begin{equation}
\operatorname*{Cov}(F) J^{-1} u_j = - \frac{i\hbar}{2} u_j, \hspace{1 cm} j=1, \cdots, n.
\label{eqsaturation4}
\end{equation}
Then we have:
\begin{equation}
\overline{u_j} \cdot J\left( \operatorname*{Cov}(F) + \frac{i \hbar}{2} J \right)J^{-1} u_j=0,
\label{eqsaturation5}
\end{equation}
for $j=1, \cdots, n$.
From (\ref{eqERSUP19}), we must also have:
\begin{equation}
\overline{u_j} \cdot J \left( \operatorname*{Cov}(|\widetilde{F}|^2) +\frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2) + \frac{i \hbar}{2} J \right) J^{-1} u_j=0,
\label{eqsaturation5.1}
\end{equation}
for $j=1, \cdots, n$.
By (\ref{eqERSUP19}), the matrix
\begin{equation}
A= \operatorname*{Cov}(|\widetilde{F}|^2) +\frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2)
\label{eqsaturation6}
\end{equation}
satisfies the RSUP, and so, from (\ref{eqsaturation5.1}), we conclude that its symplectic eigenvalues
are also all equal to $\frac{\hbar}{2}$ and that $(u_j)_j$ are the associated eigenvectors:
\begin{equation}
A J^{-1} u_j = - \frac{i\hbar}{2} u_j, \hspace{1 cm} j=1, \cdots, n.
\label{eqsaturation7}
\end{equation}
It follows that
\begin{equation}
\det (A)= \prod_{j=1}^n \left(\lambda_{\sigma,j}(A) \right)^2 = \left( \frac{\hbar}{2}\right)^{2n}.
\label{eqsaturation7.1}
\end{equation}
Setting $X=\det ( \operatorname*{Cov}(|\widetilde{F}|^2))$, $Y=\det (\frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2))$, we have from (\ref{eqsaturation7.1}), Minkowski's Determinant Theorem, the Heinig-Smith inequality (\ref{eqReview32.2}) and the arithmetic-geometric mean inequality that
\begin{equation}
\begin{array}{c}
\left(\frac{\hbar}{2} \right)^{2n} = \det \left( \operatorname*{Cov}(|\widetilde{F}|^2) +\frac{1}{4} \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2) \right) \ge \\
\\
\ge \left( X^{\frac{1}{2n}} + Y^{\frac{1}{2n}} \right)^{2n} \ge \left(2 \sqrt{ X^{\frac{1}{2n}} Y^{\frac{1}{2n}}} \right)^{2n}=\\
\\
=\sqrt{\left(\det ( \operatorname*{Cov}(|\widetilde{F}|^2)) \right) \left(\det ( \operatorname*{Cov} (|\mathcal{F}_{\sigma} \widetilde{F}|^2)) \right)} \ge \left(\frac{\hbar}{2} \right)^{2n}.
\end{array}
\label{eqsaturation8}
\end{equation}
Thus all the inequalities become equalities. In particular the Heinig-Smith inequality is saturated, and $F$ must be of the form (\ref{eqReview32.3}).
We have
\begin{equation}
\operatorname*{Cov} (F)=\frac{1}{2 \pi} A^{-1}, \hspace{0.5 cm} <z_j>_F= (A^{-1}b)_j,
\label{eqsaturation9}
\end{equation}
for $j=1, \cdots, 2n$. Since, by assumption, $F$ is a real function, we conclude that $b \in \mathbb{R}^{2n}$, $c \in \mathbb{R}$ and $A$ is real, symmetric and positive-definite. Altogether, we recover (\ref{eqsaturation2}). Finally, since $F$ is a Gaussian distribution which saturates the RSUP, then by Littlejohn's Theorem we must have $\frac{2}{\hbar} \operatorname*{Cov}(F) \in Sp (n)$.
\end{proof}
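The simplest instance of (\ref{eqsaturation2}) is the standard Gaussian
\[
F(z)= \frac{1}{(\pi \hbar)^n} e^{- \frac{|z|^2}{\hbar}},
\]
the Wigner function of the ground state of the isotropic harmonic oscillator, for which $\operatorname*{Cov}(F)= \frac{\hbar}{2} I$ and hence $\frac{2}{\hbar} \operatorname*{Cov}(F)=I \in Sp(n)$.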
To complete our analysis, we consider two examples. The first shows that a function may satisfy the RSUP but not the refined RSUP. In a certain sense, Example \ref{ExampleReview9} already does this, but it is not entirely satisfactory, since $ \operatorname*{Cov}(|\mathcal{F}_{\sigma} \widetilde{F}|^2)$ is not finite there.
The second example shows that the refined RSUP is not a sufficient condition for a phase space function to be a Wigner distribution.
\begin{example}\label{ExampleFinal1}
Consider the following real and normalized function defined on $\mathbb{R}^2$:
\begin{equation}
F(z)= \frac{48}{\pi \hbar} \left( \frac{|z|^2}{\hbar} - \frac{1}{6} \right) e^{- \frac{4 |z|^2}{\hbar}}.
\label{eqExampleFinal1}
\end{equation}
By straightforward calculations, we have:
\begin{equation}
\operatorname*{Cov} (F)= \frac{\hbar}{2}I, \hspace{1 cm} \operatorname*{Cov} (|\widetilde{F}|^2) = \frac{11 \hbar}{80}I,
\label{eqExampleFinal2}
\end{equation}
while
\begin{equation}
\mathcal{P}\left[F \right] = 10.
\label{eqExampleFinal3}
\end{equation}
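For the reader's convenience, we indicate how these values are obtained. With the substitution $u=|z|^2/\hbar$ (so that $dz= \pi \hbar \, du$ for radial integrands), one computes, for instance,
\[
\int_{\mathbb{R}^{2}} F(z) \, dz = 48 \int_{0}^{\infty} \left( u- \frac{1}{6} \right) e^{-4u} \, du = 48 \left( \frac{1}{16} - \frac{1}{24} \right)=1,
\]
and $\int_{\mathbb{R}^{2}} |z|^2 F(z) \, dz = 48 \hbar \int_{0}^{\infty} u \left( u- \frac{1}{6} \right) e^{-4u} \, du = \hbar$, which yields the diagonal entries $\hbar/2$ of $\operatorname*{Cov}(F)$.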
We conclude that
\begin{equation}
\operatorname*{Cov} (F) + \frac{i \hbar}{2}J \ge 0,
\label{eqExampleFinal4}
\end{equation}
that is, $F$ satisfies the RSUP. On the other hand:
\begin{equation}
\mathcal{P}\left[F \right] \operatorname*{Cov} (|\widetilde{F}|^2) > \operatorname*{Cov}(F),
\label{eqExampleFinal5}
\end{equation}
which violates the second inequality in (\ref{eqERSUP18.4.B}).
To obtain a similar example in higher dimensions, we just have to take tensor products of the function (\ref{eqExampleFinal1}).
\end{example}
\begin{example}\label{ExampleFinal2}
Next consider the function
\begin{equation}
F(z) = \frac{1}{2 \pi \hbar} \left( \frac{|z|^2}{\hbar}-1 \right) e^{- \frac{|z|^2}{2 \hbar}} .
\label{eqExampleFinal6}
\end{equation}
A simple calculation shows that $\mathcal{F}_{\sigma}(F)=- F$ and that
\begin{equation}
\operatorname*{Cov}(F)= 3 \hbar I, \hspace{0.7 cm} \operatorname*{Cov}(|\widetilde{F}|^2) = \operatorname*{Cov}(|\mathcal{F}_{\sigma} \widetilde{F}|^2) = \frac{3 \hbar}{2}I, \hspace{0.7 cm} \mathcal{P}\left[F \right]= \frac{1}{2}.
\label{eqExampleFinal7}
\end{equation}
We conclude that $F$ satisfies the refined RSUP (\ref{eqERSUP19}).
However, this is not a Wigner function. To see this consider the ground state of the simple harmonic oscillator:
\begin{equation}
F_0(z)= \frac{1}{\pi \hbar} e^{-\frac{|z|^2}{\hbar}}.
\label{eqExampleFinal8}
\end{equation}
We have:
\begin{equation}
\int_{\mathbb{R}^2} F(z) F_0(z) dz = - \frac{1}{9 \pi \hbar},
\label{eqExampleFinal9}
\end{equation}
which violates the positivity condition (iv) in Theorem \ref{TheoremReview1}.
\end{example}
\section{The Hirschman-Shannon inequality for Wigner functions}
In this section, we prove the entropic inequalities which appear as a by-product of the refined RSUP.
\begin{theorem}\label{Theorementropy1}
Let $W\rho$ be a Wigner function with purity $\mathcal{P}\left[W\rho\right]$ and finite covariance matrix $Cov(W\rho)$. Then $|\widetilde{W\rho}|^{2}$ and $|\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2}$ have finite
covariance matrices and entropies and the following inequalities hold:
\begin{equation}
\begin{array}
[c]{c}
\log\left[ (2\pi e)^{2n}\det\left( Cov(W\rho)\right) \right] \geq\\
\\
\geq\log\left[ \left( \pi e\mathcal{P}\left[W\rho\right]\right)^{2n}\sqrt{\det\left(
Cov(|\widetilde{W\rho}|^{2})\right) \cdot\det\left( Cov(|\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2})\right) }\right] \geq\\
\\
\geq2n\log\left( \mathcal{P}\left[W\rho\right]\right) +E\left( |\widetilde{W\rho}|^{2}\right) +E\left( |\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2}\right)
\geq\log\left( \pi\hbar e\mathcal{P}\left[W\rho\right]\right)^{2n} .
\end{array}
\label{eqIntroductionentropy26}
\end{equation}
We have an equality throughout in (\ref{eqIntroductionentropy26}) if and only if
$W \rho =W \psi$ is a pure Gaussian of the form:
\begin{equation}
W\psi(z)=\frac{1}{(\pi\hbar)^{n}}e^{-\frac{1}{2}(z-z_{0})\cdot\left(
Cov(W\psi)\right)^{-1}(z-z_{0})},
\label{eqIntroduction28}
\end{equation}
where $z_{0}\in\mathbb{R}^{2n}$ and
\begin{equation}
\frac{2}{\hbar}Cov(W\psi)\in Sp(n)
\label{eqIntroductionentropy29}
\end{equation}
is a $2n\times2n$ real symplectic matrix.
\end{theorem}
\begin{proof}
\vspace{0.3 cm}
From (\ref{eqentropy6}) with $n \to 2n$ and $f=\widetilde{W\rho}$, we
obtain:
\begin{equation}
\begin{array}
[c]{c}
\log\left( \pi\hbar e\right)^{2n}\leq E\left( |\widetilde{W\rho}|^{2}\right) +E\left( |\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2}\right)
\leq\\
\\
\leq\log\left[ (2\pi e)^{2n}\sqrt{\det\left( Cov(|\widetilde{W\rho}|^{2})\right) \cdot\det\left( Cov(|\mathcal{F}_{\hbar}\widetilde{W\rho}|^{2})\right) }\right] .
\end{array}
\label{eqentropy21}
\end{equation}
The first inequality in (\ref{eqERSUP18.4.B}) and Minkowski's determinant theorem yield
\begin{equation}
\begin{array}
[c]{c}
\det\left( Cov(W \rho)\right) \ge\left( \mathcal{P} \left[W \rho\right] \right)^{2n}
\det\left[ Cov \left( |\widetilde{W \rho}|^{2}\right) + \frac{1}{4} J^{T}
Cov \left( |\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right) J\right]
\ge\\
\\
\ge\left( \mathcal{P} \left[W \rho \right] \right)^{2n} \left[ \det^{\frac{1}{2n}}
\left( Cov \left( |\widetilde{W \rho}|^{2}\right) \right) + \frac{1}{4}
\det^{\frac{1}{2n}} \left( Cov \left( |\mathcal{F}_{\hbar} \widetilde{W
\rho}|^{2}\right) \right) \right]^{2n} .
\end{array}
\label{eqProof3}
\end{equation}
From the concavity of the logarithm and (\ref{eqentropy21}):
\begin{equation}
\begin{array}
[c]{c}
\log \left( \det\left( Cov(W \rho)\right) \right) \ge2n \log\left( 2
\mathcal{P} \left[W \rho\right] \right) +\\
\\
+2n \log\left[ \frac{1}{2} \det^{\frac{1}{2n}} \left( Cov \left(
|\widetilde{W \rho}|^{2}\right) \right) + \frac{1}{8} \det^{\frac{1}{2n}}
\left( Cov \left( |\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right)
\right) \right] \ge\\
\\
\ge2n \log\left( \mathcal{P} \left[W \rho\right] \right) + \frac{1}{2} \log\det\left( Cov
\left( |\widetilde{W \rho}|^{2}\right) \right) + \frac{1}{2} \log\det\left(
Cov \left( |\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right) \right)
\ge\\
\\
\ge2n \log\left( \mathcal{P} \left[W \rho\right] \right) + E \left( |\widetilde{W
\rho}|^{2}\right) + E \left( |\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right) - \log(2 \pi e)^{2n},
\end{array}
\label{eqProof4}
\end{equation}
and the result follows.
Finally, suppose we have an equality throughout (\ref{eqIntroductionentropy26}). By
Hirschman's Theorem, the last inequality becomes an equality if and only if $W
\rho$ is a generalized Gaussian. But since $W \rho$ is a real normalized
function, it must be of the form:
\begin{equation}
W \rho(z) = \frac{1}{(2 \pi)^{n} \sqrt{\det A}} e^{- \frac{1}{2} (z-z_{0})
\cdot A^{-1} (z-z_{0})},
\label{eqProof5}
\end{equation}
with $A$ a real, symmetric, positive-definite $2n \times2n $ matrix. By
standard Gaussian integral computations, we conclude that:
\begin{equation}
\mathcal{P} \left[W \rho\right]= \left( \frac{\hbar}{2} \right)^{n} \frac{1}{\sqrt{\det A}}, \hspace{1 cm} Cov (W \rho)=A.
\label{eqProof6}
\end{equation}
Moreover,
\begin{equation}
\begin{array}
[c]{l}
\widetilde{W \rho} (z)= \frac{1}{\pi^{n/2} \sqrt[4]{\det A}} e^{- \frac{1}{2}
(z-z_{0}) \cdot A^{-1} (z-z_{0})},\\
\\
\left( \mathcal{F}_{\hbar} \widetilde{W \rho} \right) (\zeta)=
\frac{\sqrt[4]{\det A}}{\pi^{n/2} \hbar^{n}} e^{- \frac{1}{2 \hbar^{2}}
\zeta\cdot A \zeta- \frac{i}{\hbar} \zeta\cdot z_{0}}.
\end{array}
\label{eqProof7}
\end{equation}
From which we conclude that
\begin{equation}
Cov \left( |\widetilde{W \rho}|^{2}\right) = \frac{1}{2} A , \hspace{1 cm}
Cov \left( |\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right) = \frac{\hbar^{2}}{2} A^{-1}.
\label{eqProof8}
\end{equation}
If we have an equality throughout (\ref{eqIntroductionentropy26}), then we also have
an equality in (\ref{eqProof3}). By Minkowski's determinant theorem, that can
happen if and only if there exists a constant $\alpha\ge0$ such that
\begin{equation}
Cov \left( |\widetilde{W \rho}|^{2}\right) = \alpha J^{T} Cov \left(
|\mathcal{F}_{\hbar} \widetilde{W \rho}|^{2}\right) J .
\label{eqProof9}
\end{equation}
Plugging (\ref{eqProof8}) into (\ref{eqProof9}) yields:
\begin{equation}
A= \alpha \hbar^{2} J^{T} A^{-1} J \Leftrightarrow A J A =
\alpha \hbar^{2} J .
\label{eqProof10}
\end{equation}
In other words, $A$ is proportional to a symplectic matrix.
Equating the first and the last term in (\ref{eqIntroductionentropy26}), we obtain:
\begin{equation}
\det\left( Cov (W \rho) \right) = \left( \frac{\hbar}{2} \right)^{2n}
\left( \mathcal{P} \left[ W \rho \right] \right)^{2n} .
\label{eqProof11}
\end{equation}
From (\ref{eqProof6}) and (\ref{eqProof11}), we conclude that:
\begin{equation}
\det A = \left( \frac{\hbar}{2} \right)^{2n}, \hspace{1 cm} \mathcal{P} \left[W
\rho \right]=1,
\label{eqProof12}
\end{equation}
which proves the result.
\end{proof}
Another consequence of the refined RSUP is the following corollary for pure states.
\begin{corollary}
\label{Corollary3} Suppose that the Wigner function $W \psi$ has a finite
covariance matrix. Then $|\widetilde{W \psi}|^{2}$ has a finite covariance
matrix and a finite entropy and we have:
\begin{equation}
\begin{array}
[c]{c}
\log\left[ (2\pi e)^{n} \sqrt{\det\left( Cov (W \psi) \right) }\right]
\ge\\
\\
\ge\log\left[ \left( 2\pi e \right)^{n} \sqrt{\det\left( Cov
(|\widetilde{W \psi}|^{2}) \right) } \right] \ge\\
\\
\ge E \left( |\widetilde{W \psi}|^{2}\right) \ge\log\left( \frac{ \pi\hbar
e}{2} \right)^{2n} .
\end{array}
\label{eqIntroductionentropy30}
\end{equation}
\end{corollary}
\begin{proof}
The last inequality in (\ref{eqIntroductionentropy30}) is a well-known result by E.
Lieb \cite{Lieb}. The penultimate inequality is just Shannon's inequality
(\ref{eqentropy3}). It remains to prove the first inequality. But again from
the first inequality in (\ref{eqERSUP18.4.B}), we conclude that
\begin{equation}
\det\left( Cov (W \psi) \right) \ge\det\left( Cov (| \widetilde{W \psi
}|^{2} ) \right) ,
\end{equation}
and the result follows.
\end{proof}
\begin{remark}
\label{Remark2} Notice that the previous results are mainly interesting if the
state $W\rho$ does not depart appreciably from a pure state, that is if
$\mathcal{P}\left[W\rho \right]\approx1$. This is of course true if we have exactly a pure
state as in (\ref{eqIntroductionentropy30}). If $W\rho$
is highly mixed, $\mathcal{P}\left[W\rho \right]\approx0$, then $\log\left( \mathcal{P}
\left[W\rho \right]\right) \rightarrow-\infty$, and inequality (\ref{eqIntroductionentropy26})
becomes trivially true.
\end{remark}
\begin{remark}
Before we proceed let us make a brief comment on the choice of Fourier transform in the various inequalities.
In the refined RSUP we chose the symplectic Fourier transform in order to have a simpler expression. Otherwise,
we would have to make the replacement
\begin{equation}
Cov \left(|\mathcal{F}_{\sigma} (\widetilde{W \rho})|^2 \right) =J^T Cov \left(|\mathcal{F}_{\hbar} (\widetilde{W \rho})|^2 \right) J .
\label{eqentropy26.1}
\end{equation}
Because of this identity, the determinants of the two covariance matrices coincide. Likewise, we can easily show that $E \left( |\mathcal{F}_{\sigma} (\widetilde{W \rho})|^2\right)= E \left( |\mathcal{F}_{\hbar} (\widetilde{W \rho})|^2\right) $. Consequently, (\ref{eqentropy26}) holds whether we use $|\mathcal{F}_{\sigma} (\widetilde{W \rho})|^2$ or $|\mathcal{F}_{\hbar} (\widetilde{W \rho})|^2$. We picked $|\mathcal{F}_{\hbar} (\widetilde{W \rho})|^2$ because we can then compare it directly with the Hirschman inequality. But this is really just a question of taste.
\end{remark}
\section{Outlook}
The Wigner quasi-distribution plays a central role in both time-frequency
analysis and quantum mechanics (from which it originates). One should however
be aware that it is not the only possible choice. Any element of the so-called
Cohen class \cite{Gro} having the correct marginals is a priori an
equally good choice in entropic questions of the type considered in this paper
(even if the Wigner quasi-distribution is well-adapted when symplectic
symmetries are present). It would for instance be interesting to generalize
our results to a particular element of the Cohen class, namely the
Born--Jordan distribution \cite{Springer} which is closely related to the
eponymous quantization procedure, and which has certain advantages compared to
those of the Wigner quasi-distribution (in particular it damps certain
unwanted interference effects \cite{cogoni}). We hope to come back to this
case in the near future.
\section*{Acknowledgements}
The work of N.C. Dias and J.N. Prata is supported by the COST Action 1405 and
by the Portuguese Science Foundation (FCT) grant PTDC/MAT-CAL/4334/2014. M. de
Gosson has been funded by the grant P27773 of the Austrian Research agency FWF.
\section{Introduction}
\label{sec:intro}
The Hubble constant ($H_0$) is one of the key parameters to describe the
Universe. Current observations of the cosmic microwave background (CMB) imply $H_0 =
\SI[separate-uncertainty = true]{67.36 \pm
0.54}{\kilo\meter\per\mathrm{s}\per\mega\parsec}$, assuming a flat $\Lambda$CDM cosmology and the
standard model of particle physics \citep{Planck:2018vks}. This is in tension with $H_0 = \SI[separate-uncertainty = true]{74.03
\pm 1.42}{\kilo\meter\per\mathrm{s}\per\mega\parsec}$, which is measured from the local
distance ladder \citep{Riess:2016jrr,Riess:2018byc,Riess:2019cxk}. In order to
verify or refute this $4.4 \sigma$ tension, independent methods are
needed.
One such method is lensing time-delay cosmography, which can determine
$H_0$ in a single step. The basic idea is to measure the time delays
between multiple images of a strongly lensed variable source
\citep{Refsdal:1964}. This time delay, in combination with
reconstructions of the lens mass distributions and line-of-sight mass structure, directly yields a ``time-delay distance'' which is inversely
proportional to $H_0$ (i.e., $t \propto D_{\Delta t} \propto
H_0^{-1}$).
While the time-delay distance primarily constrains $H_0$, it also provides information about other cosmological parameters \citep[e.g.,][]{Linder:2011, JeeEtal16, ShajibEtal18, Grillo:2018ume}.
Applying this method to four lensed quasar systems, the
H0LiCOW collaboration\footnote{\url{http://h0licow.org}} \citep{Suyu:2016qxx} together with the
COSMOGRAIL collaboration\footnote{\url{http://cosmograil.org}}
\citep{Eigenbrod:2005ie,2017Courbin,Bonvin:2018dcc} measured $H_0 =
72.5^{+2.1}_{-2.3} \, \si{\kilo\meter\per\mathrm{s}\per\mega\parsec}$ in flat
$\Lambda$CDM \citep{Birrer:2018vtm}, which is in agreement with the measurement
from the local distance ladder, but larger than the CMB value.
Another promising approach goes back to the initial idea in
\cite{Refsdal:1964} that uses lensed supernovae (LSNe) instead of quasars
for time-delay cosmography. So far only two LSNe systems with resolved
multiple images have been observed. The first one, called SN ``Refsdal''
discovered by \cite{Kelly:2015xvu,Kelly:2015vjq}, was a 1987A-like Type
II SN, which was strongly lensed by the galaxy cluster MACS
J1149.5+222.3. As shown in \cite{Grillo:2018ume}, with SN Refsdal
one can measure $H_0$ with a $1 \sigma$ statistical
error of $7\%$. The second LSN with resolved images is iPTF16geu,
reported by \cite{Goobar:2016uuf} from the intermediate Palomar
Transient Factory (iPTF). The system is a SN Ia at redshift
$0.409$ and strongly lensed by an intervening galaxy at a redshift of
$0.216$. Strong lens mass models of the system from
\cite{More:2016sys} yield SN image fluxes that are discrepant with
the observations, which might be partly an effect of microlensing
\citep{Yahalomi:2017ihe,Foxley-Marrable:2018dzu,Dhawan:2019vof}. Additionally, \cite{Mortsell:2019auy} show that the flux anomalies are within stellar microlensing predictions for certain values of the slope of the projected surface density of the lens galaxy. The models in \cite{More:2016sys} and \cite{Goobar:2016uuf} also predict very short time delays ($\approx
\SI{0.5}{\day}$) that can thus be significantly biased by a microlensing time delay \citep{Bonvin:2018b}. Therefore it is important to include microlensing in LSNe studies.
Even though the number of LSNe is lower than the number of lensed
quasars by a factor of approximately $60$ \citep{Oguri:2010}, there are important advantages in using LSNe when measuring time delays. First, if they are observed before the peak,
the characteristic SN light curves make time-delay measurements easier and possible
on shorter time scales in comparison to stochastically
varying quasars. Second, supernova images fade away with time, which facilitates
measurements of lens stellar kinematics and therefore enables the
combination of dynamics
\citep{Barnabe2011,2017:Yildirim,Shajib:2018} and lens mass modeling. This helps to overcome
degeneracies like the mass-sheet degeneracy
\citep{Falco:1985,Schneider:2013wga}. The intrinsic luminosity of the source provides another way to avoid the mass-sheet degeneracy. Since SNe Ia are standardizable candles, LSNe Ia are very
promising for breaking the model degeneracies in two independent ways.
Even though only two LSNe with resolved images are currently known, the Large Synoptic Survey Telescope (LSST) will
play a key role in detecting many more LSNe.
From investigations done by \cite{Oguri:2010} assuming detections based on image multiplicity, we expect to find $45$ LSNe Ia over the ten year survey. A different approach, using strong lensing magnification for detection \citep{GoldsteinNugent:2017,Goldstein:2017bny}, leads to $500-900$ LSNe Ia in ten years (see also \citealp{Quimby:2014}). The differences in the expected number of LSNe Ia arise from different assumptions about the limiting magnitude and cumulative season length, as pointed out by \cite{Wojtak:2019hsc}.
A remaining question, however,
is how many of the detected systems are valuable for measuring time
delays and whether it will be possible to measure time delays with just the
LSST data. The LSST cadence strategy
\citep{Marshall:2017wph} will be defined soon and the goal of this
paper is to evaluate different cadences for our science case of
measuring time delays in LSNe Ia. For this purpose, we have
investigated 20 different observing strategies. We used mock LSNe Ia from the Oguri and Marshall (OM10) catalog \citep{Oguri:2010} to simulate
observations, and produced the light curves for the mock
SNe images based on synthetic observables calculated with Applied Radiative Transfer
In Supernovae \citep[{\tt ARTIS};][]{Kromer:2009ce} for the spherically symmetric SN Ia model W7
\citep{1984:Nomoto}. Furthermore, we employed magnification
maps from {\tt GERLUMPH} \citep{Vernardos:2015wta} to include the effects of
microlensing, similar to the approach followed by \cite{Goldstein:2017bny}. We then simulated
data points for the light curves following the observational sequence
from different cadences and uncertainties according to the LSST
science book \citep{2009:LSSTscience}. We used the free-knot splines estimator from Python Curve Shifting \citep[{\tt PyCS};][]{2013:Tewesb,Bonvin:2015jia} to measure the time delay from the simulated observation.
The structure of the paper is as follows. In Section
\ref{sec:Microlensing on Type Ia Supernovae} we present a theoretical
calculation of microlensing on LSNe Ia. In Section \ref{sec: LSST} we introduce relevant information about LSST and different
observing strategies investigated in this work. In Section \ref{sec:Time-Delay
Measurements of mock LSST LSNe Ia}, mock light curves of LSNe Ia are simulated and the time-delay measurement to quantify different LSST observing strategies is described in Section \ref{sec:Time-delay measurement}. The results are presented in Section \ref{sec:results}
before we conclude in Section \ref{sec:Summary and Future Prospects}.
Throughout this paper, magnitudes are given in the AB system.
\section{Microlensing on Type Ia Supernovae}
\label{sec:Microlensing on Type Ia Supernovae}
In this section we describe the calculation of microlensed SNe Ia
light curves combining magnification maps and a theoretical SNe Ia
model. The relevance
of microlensing on LSNe Ia has been shown theoretically by
\citet[e.g.,][]{DoblerKeeton06}, \cite{Goldstein:2017bny} and \cite{Bonvin:2018b} and, as mentioned before, the first detected LSN Ia shows
discrepancies between models and observation which might be partly due
to microlensing \citep{More:2016sys,Yahalomi:2017ihe,Foxley-Marrable:2018dzu}. Therefore, to simulate more realistic light curves of LSNe Ia we included microlensing in our studies. In Section \ref{sec: Microlensing} magnification maps are described and Section \ref{sec:SNeIa and the 1D projection} explains the radiative transfer code \texttt{ARTIS} used to calculate synthetic observables. In addition, the projection of the 3D simulation output to 1D is discussed, including the geometrical delay as described by \cite{Bonvin:2018b}. In Section \ref{sec: Microlensing on SNe Ia} a comprehensive derivation of microlensed light curves of SNe Ia is presented.
\subsection{Magnification maps for microlensing}
\label{sec: Microlensing}
Microlensing is the effect of additional magnification or
demagnification caused by stars, or other compact objects with comparable properties, in the lensing galaxy. We used
magnification maps based on {\tt GERLUMPH} \citep[J.~H.~H.~Chan et al.~in
preparation]{Vernardos:2015wta} to model the effect of microlensing on a SN Ia. These maps are created using the
inverse ray-shooting technique
\citep[e.g.,][]{Kayser:1986,Wambsganss:1992,Vernardos:2013vma} and are pixellated maps
containing magnification factors $\mu$ at the source plane. The three main parameters for the
maps are the convergence $\kappa$, the shear $\gamma$, and the smooth
matter component $s$ which is defined as the ratio of the smooth
matter convergence $\kappa_s$ to the total convergence $\kappa$. For
simplicity, we assumed $s=0.6$ in our investigation. Estimated $s$ values at image positions of galaxy-scale lenses typically vary between $0.3$ and $ 0.8$ \citep[e.g.,][]{Schechter2014,Chen2018,Bonvin:2019xvn} and therefore cover a much broader range.
Nevertheless \cite{Goldstein:2017bny} investigated a few different $s$ values and found that the effect of microlensing on LSNe Ia depends more on the spatial distribution of the radiation than on the precise $s$ value. Even though we over- or underestimate the microlensing effect slightly (depending on the mock lens system) by fixing $s$ in our work, this is done in the same way for all cadence strategies investigated in this work, thus leaving the overall message unchanged. A further investigation of different $s$ values will be presented in S. Huber et al.~(in preparation). The Einstein radius
$R_\mathrm{Ein}$ is the characteristic scale of the map at the source plane, defined as
\begin{equation}
R_\mathrm{Ein}=\sqrt{\frac{4 G \langle M \rangle}{c^2} \frac{D_\mathrm{s} D_\mathrm{ds}}{D_\mathrm{d}}}.
\label{basics: Einstein Radius physical coordinate in cm}
\end{equation}
We assume a Salpeter initial mass function (IMF) with a mean mass of
the point mass microlenses of $\langle M \rangle = 0.35
M_\odot$. Details of the IMF are not relevant for our studies (J.~H.~H.~Chan et al.~in
preparation). The angular diameter distances $D_\mathrm{s}$, $D_\mathrm{d}$, and $D_\mathrm{ds}$ are measured from us to the source, from us to the lens, and between the
lens and the source, respectively. If we assume a flat $\Lambda$CDM
cosmology and neglect the contribution of radiation, we can calculate
the angular diameter distance via
\begin{equation}
\scalebox{1.2}{$D_\mathrm{A} = \frac{c}{H_0 (1+z_2)} \int_{z_1}^{z_2} \frac{\mathrm{d} z}{\sqrt{\Omega_{\mathrm{m},0} (1+z)^3+ \Omega_{\Lambda,0} }}.$}
\end{equation}
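As an illustrative numerical sketch (not part of the analysis pipeline), the distance integral above and the Einstein radius of Equation (\ref{basics: Einstein Radius physical coordinate in cm}) can be evaluated as follows; the Hubble constant, matter density, and redshifts are placeholder values:

```python
import numpy as np
from scipy.integrate import quad

C_CM_S = 2.99792458e10   # speed of light [cm/s]
G_CGS = 6.674e-8         # gravitational constant [cgs]
M_SUN_G = 1.989e33       # solar mass [g]
CM_PER_MPC = 3.0857e24   # cm per Mpc

def angular_diameter_distance(z1, z2, h0=70.0, om0=0.3):
    """D_A between redshifts z1 < z2 in flat LambdaCDM, in cm."""
    integral, _ = quad(lambda z: 1.0 / np.sqrt(om0 * (1 + z)**3 + (1 - om0)),
                       z1, z2)
    hubble_dist_cm = C_CM_S / (h0 * 1e5 / CM_PER_MPC)  # c/H_0 in cm
    return hubble_dist_cm * integral / (1.0 + z2)

def einstein_radius(z_lens, z_src, mean_mass_msun=0.35):
    """R_Ein at the source plane in cm, for point lenses of mean mass <M>."""
    d_s = angular_diameter_distance(0.0, z_src)
    d_d = angular_diameter_distance(0.0, z_lens)
    d_ds = angular_diameter_distance(z_lens, z_src)
    return np.sqrt(4.0 * G_CGS * mean_mass_msun * M_SUN_G / C_CM_S**2
                   * d_s * d_ds / d_d)

# iPTF16geu-like configuration (z_d = 0.216, z_s = 0.409)
r_ein = einstein_radius(0.216, 0.409)
pixel_cm = 10.0 * r_ein / 20000.0  # pixel size of a 20000^2 map of 10 R_Ein
```

For these redshifts the sketch reproduces the order of magnitude $R_\mathrm{Ein} \sim 10^{16}\,\mathrm{cm}$ quoted for the map in Figure \ref{fig: microlensing map}.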
Our maps have a resolution of $20000 \times 20000$ pixels and the
total size of the maps is set to $10 R_\mathrm{Ein} \times 10 R_\mathrm{Ein}$. Therefore the
size of one square pixel of the magnification map is
\begin{equation}
\scalebox{1.2}{$\Delta d_\mathrm{mag}=\frac{10 R_\mathrm{Ein}}{20000}=\frac{1}{
1000} \sqrt{\frac{G \langle M \rangle}{c^2} \frac{D_\mathrm{s} D_\mathrm{ds}}{D_\mathrm{d}}}.$}
\label{eq: pixelsize micro map}
\end{equation}
For the simulated LSST LSNe Ia in Section \ref{sec:Time-Delay Measurements of mock LSST LSNe Ia}, the size of these microlensing maps ranges from $\SI{4.12e-2}{\parsec}$ to $\SI{2.70e-1}{\parsec}$ with a median of $\SI{1.02e-1}{\parsec}$.
As an example, a magnification map for $\kappa=0.6$ and $\gamma=0.6$ is shown in
Figure \ref{fig: microlensing map}, where $R_\mathrm{Ein} = \SI{7.2e-3}{\parsec} = \SI{2.2e16}{\mathrm{cm}} $ assuming an iPTF16geu like configuration.
\begin{figure}
\centering
\includegraphics[scale=0.65]{k06_g06_s06_m035_salpeter.jpg}
\caption[Microlensing map for $\kappa=0.6$, $\gamma=0.6$ and
$s=0.6$.]{Example magnification map for $\kappa=0.6$,
$\gamma=0.6$ and $s=0.6$. The color scheme illustrates the different
magnification factors $\mu$ at the source plane depending on the $x$ and $y$
coordinates. Many micro ``caustics'' are visible, separating regions of
high and low magnification.}
\label{fig: microlensing map}
\end{figure}
\subsection{Theoretical SNe Ia model and the 1D projection}
\label{sec:SNeIa and the 1D projection}
To combine magnification maps with SNe Ia, we adopt an approach similar to that of \cite{Goldstein:2017bny}, where the spherically symmetric W7
model \citep{1984:Nomoto} and the Monte Carlo-based radiative transfer
code SEDONA \citep{Kasen:2006ce} were used.
For our analysis, we also rely on the W7 model, but calculate synthetic
observables with the radiative transfer code {\tt ARTIS}
\citep{Kromer:2009ce}, which stands for Applied Radiative Transfer In
Supernovae and is a Monte Carlo based code to solve the frequency and
time-dependent radiative transfer problem in 3D. Thus, {\tt ARTIS} is not a
deterministic solution technique, where the radiative transfer equation is
discretized and solved numerically, but a probabilistic approach in which the radiative transfer process is simulated by a large number of Monte Carlo
packets, whose propagation is tracked based on the methods developed
by \cite{Lucy:1999,Lucy:2001ts,Lucy:2003zx,Lucy:2004fz}. In this procedure, $\gamma$-ray photon
packets from the radioactive decay of $^{56}$Ni \, to $^{56}$Co \, and the
successive decay of $^{56}$Co \, to $^{56}$Fe \, are converted into UVOIR
(ultraviolet-optical-infrared radiation) packets which are then
treated with the full Monte Carlo radiative transport procedure. In
the propagation of UVOIR packets, bound-free, free-free, and especially
bound-bound processes are taken into account. Once a packet escapes
from the SN ejecta and the computational domain (which we refer to as a
simulation box), the position $\vec{x}$ where it
escapes the simulation box, the time $t_\mathrm{e}$ when it leaves and
the propagation direction $\vec{n}$ are stored in addition to the
energy and frequency. For the spherically symmetric ejecta the
interaction of a photon packet stops after leaving the ejecta surface,
that is, in general before hitting the boundary of the simulation box. For an
illustration of two photon-packets leaving the simulation box in the
same direction, see Figure \ref{fig: 1D projection}.
Typically one is interested in spectra and light curves in order to compare observations with theoretical models. To get this information from
numerical simulations, all escaping packets have to be binned in
frequency and time, and additionally in solid angle for asymmetric models. Since the microlensing effect depends on the location of the
source as shown in Figure \ref{fig: microlensing map}, spatial
information of the SN is needed as well. Therefore, we have to project
the 3D SN onto a 2D plane perpendicular to the observer and get the
specific intensity as a function of wavelength, time, and spatial
coordinates $x$ and $y$. Throughout this work, we assume that SNe Ia
can be treated with spherical symmetry and therefore no binning in solid
angle is necessary. While this is exact for an inherent 1D model like
W7 and good for multi-dimensional simulations that lead to nearly spherically
symmetric ejecta like some delayed detonations \citep{Seitenzahl:2013}
and sub-Chandrasekhar detonations \citep{Sim:2010}, this approximation
is questionable for models that lead to strongly asymmetric ejecta
like the violent merger \citep{Pakmor:2011,Pakmor:2012apjl}.
In the 1D case, the spatial dependency of the specific intensity
reduces to the dependency on the impact parameter $p$, that is, the projected
distance from the ejecta center. To construct this, we consider a
plane containing the position $\vec{x}$, where a photon-packet has
left the 3D simulation box, and the propagation direction
$\vec{n}$. This is illustrated in Figure \ref{fig: 1D projection} for
two packets leaving at different positions but propagating in the same
direction. Because of the vast distance of the SN, the observer is defined as a plane perpendicular to
$\vec{n}$. The radial
coordinate where the photon leaves the box is $r=\sqrt{\vec{x}^2}$,
and the angle between the position vector $\vec{x}$ and the
propagation direction $\vec{n}$ is $\cos \theta = \frac{\vec{x} \cdot
\vec{n}}{|\vec{x}| |\vec{n}|}$. Then, the impact parameter is
defined as
\begin{equation}
p =r \sin \theta = r \sqrt{1-\cos^2 \theta}, \qquad \theta \in [0,\pi].
\label{eq:impact parameter radial}
\end{equation}
From Figure \ref{fig: 1D projection} we see that different photon-packets, leaving the box at different positions but at the same time after explosion $t_\mathrm{e}$, will reach the observer at different times. If we assume that the orange packet from Figure \ref{fig: 1D projection} reaches the observer at time $t^\prime$ and the blue packet at time $t$ we can relate both times via $t = t^\prime + \frac{d^\prime-d}{c}$,
where $d = r \cos \theta $ and $d^\prime = |\vec{x}^\prime|$. The time when the orange packet reaches the observer can be expressed as $t^\prime = t_\mathrm{e} + C$, where $C$ is a constant set by the distance from the observer to the simulation box for the orange packet. From this we can write $
t=t_\mathrm{e} + C + \frac{d^\prime - d}{c}$. Since the comparison to real observations is always performed relative to a maximum in a chosen band we are only interested in relative times. Therefore we can simplify the equation for $t$ by defining a reference plane at the center of the SN perpendicular to the propagation
direction $\vec{n}$ (red dashed line). For this reference plane $C=-\frac{d^\prime}{c}$ which leads to the observer time
\begin{equation}
t=t_\mathrm{e} - \frac{r \cos\theta}{c},
\label{SN: observer time}
\end{equation}
as defined in \cite{Lucy:2004fz} which accounts for the geometrical delay described in \cite{Bonvin:2018b}. We will refer to the observer time $t$ as the time since explosion. With the definition of the time $t$ and the impact parameter
$p$, the energy is binned in these two quantities\footnote{Technical
detail: Since the box expands with the SN over time, the impact
parameter $p$ is a function of time. To eliminate this time
dependency, one can assume that the SN is homologously expanding \citep{Ropke:2004ji} and
therefore simply divide the impact parameter by the observer time as in \cite{Goldstein:2017bny}. The unit of this new impact parameter is
therefore $\si{\mathrm{cm}\per\mathrm{s}}$ instead of $\si{\mathrm{cm}}$ and the unit of the new
specific intensity is $\si{\mathrm{erg}\mathrm{s}\per\cubic\mathrm{cm}}$ instead of
$\si{\mathrm{erg}\per\mathrm{s}\per\cubic\mathrm{cm}}$.} as well as in wavelength
$\lambda$. The emitted specific intensity can then be calculated via
\begin{equation}
I_{\lambda,\mathrm{e}} = \frac{\mathrm{d} E}{4 \pi \mathrm{d} t \,\mathrm{d} \lambda \, 2 \pi p \,\mathrm{d} p},
\end{equation}
where the factor $4 \pi$ is needed as a normalization over the unit sphere.
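Schematically, the binning of escaping Monte Carlo packets into observer time, wavelength, and impact parameter could be implemented as below; the packet arrays and bin edges are illustrative placeholders, not the actual {\tt ARTIS} output format:

```python
import numpy as np

C_CM_S = 2.99792458e10  # speed of light [cm/s]

def bin_packets(x, n, t_e, energy, wavelength, t_edges, lam_edges, p_edges):
    """Bin escaping packets into the emitted specific intensity I(t, lam, p).

    x: (N, 3) escape positions [cm]; n: (3,) unit propagation direction;
    t_e: (N,) escape times [s]; energy, wavelength: (N,) packet properties.
    """
    r = np.linalg.norm(x, axis=1)
    cos_theta = (x @ n) / np.clip(r, 1e-30, None)
    p = r * np.sqrt(np.clip(1.0 - cos_theta**2, 0.0, None))  # impact parameter
    t_obs = t_e - r * cos_theta / C_CM_S  # observer time with geometric delay

    # Sum packet energies into a 3D histogram over (t, lambda, p)
    sample = np.stack([t_obs, wavelength, p], axis=1)
    E, _ = np.histogramdd(sample, bins=(t_edges, lam_edges, p_edges),
                          weights=energy)

    # I = dE / (4 pi dt dlam 2 pi p dp)
    dt = np.diff(t_edges)[:, None, None]
    dlam = np.diff(lam_edges)[None, :, None]
    p_mid = 0.5 * (p_edges[:-1] + p_edges[1:])[None, None, :]
    dp = np.diff(p_edges)[None, None, :]
    return E / (4.0 * np.pi * dt * dlam * 2.0 * np.pi * p_mid * dp)
```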
\begin{figure}
\centering
\includegraphics[scale=0.7]{1DProjection.jpg}
\caption[1D projection of SNe.]{Slice through the
spherically symmetric SN enclosed in the 3D simulation box, illustrating
the 1D projection of the SN and the observer time defined in Equation (\ref{SN: observer time}).}
\label{fig: 1D projection}
\end{figure}
\subsection{Microlensed flux of SNe Ia}
\label{sec: Microlensing on SNe Ia}
To calculate microlensed light curves, one has to first determine the
observed spectral flux for a SN, which can be calculated for a source of
angular size $\Omega_0$ on the sky as
\begin{equation}
F_{\lambda,\mathrm{o}} = \int_{\Omega_0} I_{\lambda,\mathrm{o}} \cos \theta_\mathrm{p} \,\mathrm{d}\Omega.
\end{equation}
Here $ I_{\lambda,\mathrm{o}}$ is the specific intensity at the
position of the observer. In Figure \ref{fig: SNe projected
perpendicular to line of sight} a spherical source (gray disk) is
placed perpendicular to the line of sight at $\theta_\mathrm{p}=0$. The disk
represents the projected emitted SN specific intensity
$I_{\lambda,\mathrm{e}}$.
\begin{figure}
\centering
\includegraphics[scale=0.6]{spectral_flux3.jpg}
\caption[SN projection onto a disc.]{SN projected
onto a disk perpendicular to the line of sight to the observer. The
center of the disk with radius $p_\mathrm{S}$ is placed at
$\theta_{\rm p}=0$ at an angular diameter distance of $D_\mathrm{A}$ from
the observer. }
\label{fig: SNe projected perpendicular to line of sight}
\end{figure}
Since the source size is much smaller than
the angular diameter distance to the source, we use the approximation
for small angles and get $\theta_\mathrm{p} = \frac{p}{D_\mathrm{A}}$ and $\cos\theta_\mathrm{p} \approx 1$, which means that we assume parallel light
rays. Therefore $\mathrm{d} \Omega = \mathrm{d} \phi \,\mathrm{d} \theta_\mathrm{p} \, \theta_\mathrm{p} = \frac{1}{D_\mathrm{A}^2} \mathrm{d} \phi \,\mathrm{d} p \, p $ and the spectral flux can be expressed as
\begin{equation}
\scalebox{1.1}{$F_{\lambda,\mathrm{o}}=\frac{1}{D_\mathrm{A}^2}\int_0^{2 \pi} \mathrm{d} \phi \int_0^{p_\mathrm{S}} \mathrm{d} p \, p \, \, I_{\lambda,\mathrm{o}},$}
\label{eq:non redshifted radiaion flux as integral over specific intensity in radial P coordinats without microlensing}
\end{equation}
where $p_\mathrm{S}$ is the source radius of the projected disk.
The next step is to relate the specific intensity at the observer's
position to the source position. Hereby, we have to take into account
that the specific intensity is redshift dependent. According to
Liouville's theorem $I_\nu / \nu^3$ is Lorentz invariant \citep[page 414]{mihalas1984foundations} and therefore we have
$I_\lambda \propto \lambda^{-5}$. Since the emitted wavelength
$\lambda_\mathrm{e}$ can be related to the observed one,
$\lambda_\mathrm{o}$, via $
\lambda_\mathrm{o}=\lambda_\mathrm{e}(1+z)$ we find that
$I_{\lambda,\mathrm{o}}=I_{\lambda,\mathrm{e}}/(1+z)^5$. Therefore by
using ${D_\mathrm{L}} = (1+z)^2 D_\mathrm{A}$ the spectral flux reduces to
\begin{equation}
F_{\lambda,\mathrm{o}}=\frac{1}{{D_\mathrm{L}}^2 (1+z)}\int_0^{2 \pi} \mathrm{d} \phi \int_0^{p_\mathrm{S}} \mathrm{d} p \, p \, I_{\lambda,\mathrm{e}}.
\label{eq:radiaion flux as integral over specific intensity in radial P coordinats without microlensing}
\end{equation}
To add the effect of microlensing $I_{\lambda,\mathrm{e}}$ has to be
replaced with $\mu I_{\lambda,\mathrm{e}}$, which is possible since
lensing conserves surface brightness. The value $\mu$ is the
microlensing magnification\footnote{We break here with
the traditional nomenclature adopted in radiative transfer, where
$\mu$ stands for $\cos \theta$. Instead, $\mu$ denotes the
magnification factor throughout this work.} as a function of $\phi$ and $p$. Therefore
we get
\begin{equation}
F_{\lambda,\mathrm{o}}=\frac{1}{{D_\mathrm{L}}^2(1+z)}\int_0^{2 \pi} \mathrm{d} \phi \int_0^{p_\mathrm{S}} \mathrm{d} p \, p \, \mu \, I_{\lambda,\mathrm{e}}.
\label{eq:radiaion flux with microlensing as integral over specific intensity in radial P coordinats}
\end{equation}
We note that this equation is in agreement with \cite{Hogg:2002yh} and \cite{Goldstein:2017bny}, where in the latter the flux is calculated in the supernova frame ($F_{\lambda,\mathrm{e}}$) instead of the observer frame ($F_{\lambda,\mathrm{o}}$).
The projected specific intensity inferred from simulations is a
discrete function in time, wavelength, and impact parameter and denoted
as $I_{\lambda_j,\mathrm{e}}(t_i,p_k)$. Because of the spherical
symmetry of W7, it has just a 1D radial dependency whereas the
magnification map is obtained on a 2D cartesian grid. To combine both quantities as needed in Equation (\ref{eq:radiaion flux with
microlensing as integral over specific intensity in radial P
coordinats}), it is necessary to transform one of the two discrete
quantities into the other coordinate system. We choose to interpolate
the specific intensity onto a 2D cartesian grid:
\begin{equation}
I_{\lambda_j,\mathrm{e}}(t_i,p_k) \rightarrow I_{\lambda_j,\mathrm{e}}(t_i,x_l,y_m).
\label{eq:Specific Intensity Interpolation onto cartesian grid}
\end{equation}
For this, we construct a cartesian grid with a pixel size $\Delta
x=\Delta y \equiv \Delta d_\mathrm{mag}$. To get accurate results, $\Delta p
\gtrsim \Delta d_\mathrm{mag}$ is required but to save computational memory we
restrict ourselves to
\begin{equation}
\Delta p \approx \Delta d_\mathrm{mag}.
\label{eq:criteria for bin size}
\end{equation}
As the SNe Ia ejecta expand, $\Delta p$ grows. Since $\Delta d_\mathrm{mag}$ is a
fixed quantity defined by Equation (\ref{eq: pixelsize micro map}), we
interpolate the magnification map to a finer or coarser grid to
fulfill the criteria in Equation (\ref{eq:criteria for bin size}) using
the Python library scipy\footnote{\url{https://www.scipy.org/}}
\citep{Scipy:2001}. To get $I_{\lambda_j,\mathrm{e}}(t_i,x_l,y_m)$ for
a given time $t_i$ we interpolate $I_{\lambda_j,\mathrm{e}}(t_i,p_k)$
in $p$ and evaluate it for all grid points $(x_l,y_m)$. Therefore the
spectral flux at time $t_i$ after explosion can be calculated via
\begin{equation}
\scalebox{0.9}{$F_{\lambda_j,\mathrm{o, cart}}(t_i)=\frac{1}{{D_\mathrm{L}}^2(1+z)} \sum_{l=0}^{N-1} \sum_{m=0}^{N-1} I_{\lambda_j,\mathrm{e}}(t_i,x_l,y_m) \, \mu(x_l,y_m) \, \Delta d_\mathrm{mag}^2.$}
\label{eq:flux discrete for spectra}
\end{equation}
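As an illustrative sketch of Equation (\ref{eq:flux discrete for spectra}), the radial intensity profile can be interpolated onto the cartesian grid of the magnification map (whose pixel size is assumed to already satisfy the criterion above) and summed; array names are placeholders:

```python
import numpy as np
from scipy.interpolate import interp1d

def microlensed_flux(I_radial, p_grid, mu_map, d_mag, d_lum_cm, z):
    """Discrete microlensed spectral flux for one time and wavelength bin.

    I_radial: emitted specific intensity I(p) on the radial grid p_grid [cm];
    mu_map: (N, N) magnification map with pixel size d_mag [cm].
    """
    N = mu_map.shape[0]
    # Cartesian grid of pixel centers, with the SN centered on the map
    ax = (np.arange(N) - N / 2 + 0.5) * d_mag
    X, Y = np.meshgrid(ax, ax)
    p = np.sqrt(X**2 + Y**2)

    # Interpolate I(p) onto the grid; zero outside the ejecta
    I_cart = interp1d(p_grid, I_radial, bounds_error=False, fill_value=0.0)(p)

    return (I_cart * mu_map).sum() * d_mag**2 / (d_lum_cm**2 * (1.0 + z))
```

Setting $\mu \equiv 1$ recovers the unlensed flux of Equation (\ref{eq:radiaion flux as integral over specific intensity in radial P coordinats without microlensing}).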
For the calculation of fluxes and light curves for
astronomical sources at redshift $z$ we have
\begin{equation}
t_\mathrm{o} = t_\mathrm{e} \, (1+z) \qquad \mathrm{and} \qquad \lambda_\mathrm{o} = \lambda_\mathrm{e} \, (1+z).
\label{eq:redshift time and wavelength}
\end{equation}
To calculate microlensed light curves for the six LSST filters (details about LSST in Section \ref{sec: LSST}) we combine Equation (\ref{eq:flux discrete for spectra}) with the
transmission function $S_\mathrm{X}(\lambda)$ for LSST filter X. We
calculate AB-magnitudes as described by \cite{Bessel:2012} such that
\begin{equation}
\scalebox{0.9}{$
m_\mathrm{AB,X}(t_i) = -2.5 \log_{10} \left( \frac{\sum_{j=0}^{N_\lambda-1} S_\mathrm{X}(\lambda_j) \, F_{\lambda_j,\mathrm{o,cart}}(t_i) \, \Delta \lambda_j \, \lambda_j}{\sum_{j=0}^{N_\lambda-1} S_\mathrm{X}(\lambda_j) \, c \, \Delta \lambda_j / \lambda_j} \times \si{\square\mathrm{cm}\over\mathrm{erg}} \right) - 48.6$}
\label{snmicro: light curves ab magnitudes}
\end{equation}
for the magnitude at the $i$-th time bin for filter X.
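The AB-magnitude of Equation (\ref{snmicro: light curves ab magnitudes}) can be sketched as follows; the flux and transmission arrays are placeholders, with all wavelengths in cm and fluxes in cgs units:

```python
import numpy as np

def ab_magnitude(F_lambda, lam, transmission):
    """AB magnitude from the spectral flux F_lambda [erg/s/cm^3] on the
    observed wavelength grid lam [cm], for one filter transmission curve."""
    c = 2.99792458e10  # speed of light [cm/s]
    dlam = np.gradient(lam)
    numer = np.sum(transmission * F_lambda * dlam * lam)
    denom = np.sum(transmission * c * dlam / lam)
    return -2.5 * np.log10(numer / denom) - 48.6
```

As a consistency check, a source with constant $F_\nu = \SI{3631}{Jy}$ (the AB zero point) yields $m_\mathrm{AB}=0$ in any filter.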
Light curves in absolute magnitudes are shown in Figure \ref{fig: micro influence on light curves} for the \textit{g} and \textit{z} bands. It is important to catch the light curve peaks of different images of a LSNe Ia to measure time delays. While we have a single peak for rest-frame light curves \textit{u} and \textit{g} we find a secondary peak in the redder bands where we could ideally catch both peaks for delay measurements. In addition to the non microlensed case (dotted black), light curves with microlensing (solid cyan and dashed violet) for two different positions (see left panel) in the magnification map from Figure \ref{fig: microlensing map} are shown.
The microlensed light curves are highly distorted and peaks are shifted, which adds large uncertainty to the time-delay measurement between different images based on light curves that undergo different
microlensing.
A more detailed investigation of microlensing is presented in Appendix \ref{sec: Case study}, where also spectra and color curves are discussed. We find from the investigated magnification map (Figure \ref{fig: microlensing map})
an achromatic phase for some color curves up to approximately $25-30$ days, as reported in \cite{Goldstein:2017bny}; however, other color curves show a shorter or non-existent achromatic phase. Our investigation also indicates that
the achromatic phase depends highly on the specific intensity
profiles and therefore the investigation of different explosion models is necessary
to explore this further (S. Huber et al., in preparation). Furthermore, some color curves from \texttt{ARTIS} are different in shape from the ones of \texttt{SEDONA}, which is important since features like peaks are necessary to measure time delays.
Even though color curves seem to be more promising for measuring time delays \citep[as suggested by][and discussed in Appendix \ref{sec: Case study}]{Goldstein:2017bny},
we use light curves instead for our further investigation because the
sparse sampling of LSST does not directly provide color
curves.
Since color information is easier to obtain with triggered
follow-up observations, it is promising to develop color-curve
fitting methods in the future.
\begin{figure*}[htbp]
\centering
\subfigure{\includegraphics[scale=0.42]{ld21.jpg}}
\subfigure{\includegraphics[scale=0.42]{lg.jpg}}
\subfigure{\includegraphics[scale=0.42]{lz.jpg}}
\caption{Influence of microlensing on the \textit{g}- and \textit{z}-band light curves for two different positions (solid cyan and dashed violet), shown in the left panel at 21 days after explosion, in the magnification map of Figure \ref{fig: microlensing map}, where $R_\mathrm{Ein} = \SI{7.2e-3}{\parsec}$. The case without microlensing is shown as the black dotted line
in the middle and right panels.
We see that microlensing can distort light curves and shift the peaks, and therefore adds uncertainties to time-delay measurements between images undergoing different microlensing.}
\label{fig: micro influence on light curves}
\end{figure*}
\section{Large Synoptic Survey Telescope (LSST)}
\label{sec: LSST}
The LSST will target about $\SI{20000}{\square\deg}$ of the
southern hemisphere with a field of view of $\SI{9.6}{\square\deg}$. Observations will be taken in six broad photometric bands \textit{ugrizy} and each position in the survey area will be repeatedly observed over
time, where each visit is composed of one or two back-to-back exposures
in the observing strategies currently under consideration. About $90 \%$ of the observing time will be spent on the
$\SI{18000}{\square\deg}$ wide-fast-deep survey (WFD), where
the inter-night gap between visits in any filter is about three days \citep{2009:LSSTscience}. The
rest of the
time will be used for other regions like the northern Ecliptic, the south Celestial Pole, the Galactic Center, and a few ``deep drilling fields'' (DDFs) where single fields ($\SI{9.6}{\square\deg}$)
will be observed to a greater depth in
individual visits.
The scientific goals of LSST include exploring the nature of dark energy
and dark matter, exploring the outer regions of the solar system, and
completing the inventory of small bodies in the solar system. These science goals
restrict the cadence strategy
but still leave a certain amount of freedom. For example, to detect
fast-moving transients like asteroids, a revisit of an observed field
within an hour is usually necessary. Such a revisit is planned if the first
observation was taken in one of the bands \textit{g, r, i,} or \textit{z} and is done in the same filter as the first observation for most of the cadence strategies under investigation in this work. For more
details, see \cite{2009:LSSTscience}.
As the LSST Project is in the process of finalizing the cadence
strategy, this paper
investigates how different cadence strategies will influence the
possibility of measuring time delays for LSNe Ia. We specifically look at what is termed a ``rolling cadence'', where the overall idea is to subdivide the WFD and focus on different subdivided parts in different years, with the final ten-year static survey performance being the same as the nominal ten-year survey. This strategy is one way to provide better sampling, but it reduces the number of seasons.
A specific case of a rolling cadence is the one with two declination bands, which subdivides the WFD (declination from 0 to $\SI{-60}{\deg}$) into a northern region covering declinations from 0 to $\SI{-30}{\deg}$ and a southern one covering declinations from $-30$ to $\SI{-60}{\deg}$. The idea is then to visit the northern part only in odd years (years one, three, five, seven, and nine) and the southern part only in even years (years two, four, six, eight, and ten), or vice versa.
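As a toy illustration (our own sketch, not part of any actual scheduler), the year-parity logic of such a two-band rolling cadence can be written as:

```python
def band_emphasized(year, dec_deg, north_in_odd_years=True):
    """Toy two-band rolling cadence: is the band containing this declination
    (in deg) emphasized in the given survey year (1-10)?
    Northern band: 0 to -30 deg; southern band: -30 to -60 deg."""
    in_north = -30.0 <= dec_deg <= 0.0
    is_odd_year = (year % 2 == 1)
    return is_odd_year == (in_north == north_in_odd_years)
```

For example, with the northern band assigned to odd years, a field at $\mathrm{Dec} = \SI{-45}{\deg}$ is emphasized only in even years.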
We investigate 20 different observing strategies which are potential LSST cadences or of special interest for our science case. In Section \ref{sec:specifications of observing strategies} we present the different observing strategies. Readers who are more interested in the overall conclusions instead of specific details about the cadence strategies might directly jump to Section \ref{sec:categorization of observing strategies}.
\subsection{Specifications of observing strategies}
\label{sec:specifications of observing strategies}
Sixteen out of the 20 investigated cadence strategies are implemented with the {\tt OpSim} scheduler\footnote{\url{https://cadence-hackathon.readthedocs.io/en/latest/current_runs.html} and in addition \texttt{pontus\_2506} from Tiago Ribeiro.} and the remaining four are produced by {\tt alt\_sched}\footnote{\url{http://altsched.rothchild.me:8080/}} and the {\tt feature-based} scheduler\footnote{\url{https://github.com/yoachim/SLAIR\_runs}}.
Both the {\tt OpSim} and {\tt feature-based} schedulers use a greedy
algorithm, where the sky location of the next visit is determined by
optimizing different parameters such as seeing, time lapsed since the
last visit at the location, etc. In contrast, {\tt alt\_sched} employs
a non-greedy algorithm by observing at minimum air mass and only
relaxing on that to increase season length.
The
following key points describe the different observing strategies very
briefly, where strategies with a capital letter have a larger than nominal $\SI{18000}{\square\deg}$ WFD
footprint (the color scheme is explained in Section \ref{sec:categorization of observing strategies})\footnote{A discussion within the Dark Energy Science Collaboration revealed that the three rolling cadences \texttt{kraken\_2036}, \texttt{mothra\_2045}, and \texttt{pontus\_2502} seem to lack some observations. Nevertheless, we investigate these cadences like all the others, because we are mainly interested in the dependence on different parameters. Our statements about rolling cadences would stay the same even if we removed these three strategies from our investigation.}:
\begin{itemize}
\item \altsched: Non-greedy algorithm;
revisits in the same night in different filter; visits distributed in
\textit{ugrizy} as $\sim(8.2, 11.0, 27.6, 18.1, 25.6, 9.5)\%$.
\item \texttt{\textcolor{blue}{alt\_sched\_rolling}}: Same as \texttt{alt\_sched} but as a rolling cadence with two declination bands.
\item \texttt{\textcolor{orange}{baseline2018a}}: Greedy algorithm like all following cadences;
official baseline; $2 \times \SI{15}{\mathrm{s}}$ exposure; revisit within
an hour in the same filter and scattered visits over WFD, four DDFs,
northern Ecliptic, south Celestial Pole, and Galactic
Center; distribution of visits in WFD over \textit{ugrizy} as $\sim (6.8, 9.4,
21.5, 21.6, 20.2, 20.4)\%$. For all following cadences up to \texttt{pontus\_2506} just the main differences with respect to \texttt{baseline2018a} are listed.
\item \texttt{\textcolor{orange}{colossus\_2664}}: WFD cadence over Galactic Plane.
\item \texttt{\textcolor{orange}{colossus\_2665}}: Slightly expanded WFD.
\item \texttt{\textcolor{magenta}{colossus\_2667}}: Single visits instead of pair visits each night.
\item \texttt{\textcolor{orange}{kraken\_2026}}: Unofficial baseline with improved slew time.
\item \texttt{\textcolor{orange}{kraken\_2035}}: Nine DDFs instead of four.
\item \texttt{\textcolor{blue}{kraken\_2036}}: Standard WFD cadence in year one, two, nine, and ten and a rolling
cadence with three declination bands in between.
\item \texttt{\textcolor{orange}{kraken\_2042}}: Single 30\,s exposure instead of $2 \times \SI{15}{\mathrm{s}}$ exposure.
\item \texttt{\textcolor{magenta}{Kraken\_2044}}: Very large WFD footprint of
\SI{24700}{\square\deg}; five DDFs; single visits instead of visits in
pairs each night.
\item \texttt{\textcolor{blue}{mothra\_2045}}: A rolling cadence
in WFD (two dec. bands).
\item \texttt{\textcolor{blue}{Mothra\_2049}}: Similar to \texttt{mothra\_2045} but on a very large WFD footprint
(\SI{24700}{\square\deg}).
\item \texttt{\textcolor{blue}{Nexus\_2097}}: Similar to \texttt{kraken\_2036} but on a WFD footprint of
\SI{24700}{\square\deg}.
\item \texttt{\textcolor{orange}{Pontus\_2002}}: Very large WFD footprint
(\SI{24700}{\square\deg}) and five DDFs.
\item \texttt{\textcolor{magenta}{pontus\_2489}}: $2 \times \SI{15}{\mathrm{s}}$ visits replaced by $1 \times
\SI{20}{\mathrm{s}}$ in \textit{grizy} and $1 \times \SI{40}{\mathrm{s}}$ in \textit{u} band.
\item \texttt{\textcolor{orange}{pontus\_2502}}: A rolling cadence (two dec. bands) in WFD where the baseline cadence stays on at a reward level of $25\%$.
\item \texttt{\textcolor{magenta}{pontus\_2506}}: Revisits in the same night in different filter.
\item \texttt{\textcolor{orange}{rolling\_10yrs\_opsim}}: A rolling cadence (two dec. bands) in WFD where the de-emphasized band is set to reach $25\%$ of its usual number of visits in a year; paired visits in \textit{g, r,} and \textit{i}.
\item \texttt{\textcolor{magenta}{rolling\_mix\_10yrs\_opsim}}: A rolling cadence similar to rolling\_10yrs\_opsim but with revisits in different
filters.
\end{itemize}
\subsection{Categorization of observing strategies}
\label{sec:categorization of observing strategies}
From our investigation (in Section \ref{sec:results}), we find that the main relevant parameters for
measuring time delays in LSNe Ia are the cumulative season length ($t_\mathrm{eff}$), which mostly determines the total number of LSNe Ia, and the mean
inter-night gap ($t_\mathrm{gap}$; also referred to as sampling frequency or sampling), which determines the quality of the light curves.
These two parameters are defined later in this section. To categorize the different observing strategies, $t_\mathrm{gap}$ and $t_\mathrm{eff}$ are shown
in Figure \ref{fig:WFD cumulative season length and inter night
gap} for the 20 LSST observing strategies; from this, we separate them into three categories with respect to the current LSST baseline cadence strategy (\texttt{baseline2018a}):
\begin{itemize}
\item \textcolor{orange}{$``\mathrm{baseline \, like}"$: baseline-like cadence strategies in terms of both sampling, that is, cadence ($t_\mathrm{gap}$), and cumulative season length ($t_\mathrm{eff}$)}
\item \textcolor{blue}{$``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$: higher cadence but shorter cumulative season length}
\item \textcolor{magenta}{$``\mathrm{higher \, cadence}"$: higher cadence and baseline-like cumulative season length}
\end{itemize}
Readers interested in general properties of the strategies should focus on these three categories which are highlighted by the category names and their corresponding colors.
Observing strategies in blue $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$ are all rolling cadences. The alternating observation pattern between years leads to a shorter cumulative season length but an improved sampling. Magenta strategies $``\mathrm{higher \, cadence}"$ provide a better mean inter-night gap
than the baseline cadence by reducing the exposure time, doing the
revisits of the same field within an hour in different filters or by
just doing single visits of a field within a night. For this reason,
these strategies provide sampling similar to rolling cadences
but they leave the cumulative season length close to the baseline
cadence. Rolling cadences which keep the WFD on a $25\%$ reward level
have a cumulative season length similar to the baseline cadence but
do not provide a better mean inter-night gap and are therefore listed in category $``\mathrm{baseline \, like}"$\footnote{except for \texttt{rolling\_mix\_10yrs\_opsim} where the revisit in different filters
improves the sampling frequency.}.
The mean cumulative season length and mean inter-night gap from a simulation of a given observing strategy are calculated by taking the mean of all fields under
consideration. We look at two different cases. The first case
considers 719 LSST fields from the WFD survey\footnote{The 719 WFD fields contain all fields with $\mathrm{Dec}
\in [-58,-2] \, \si{\deg}$ and $\mathrm{RA} \in [0,120] \cup [330,360]
\, \si{\deg}$, where all DDFs are excluded.}, which is shown as the black solid line
in Figure \ref{fig:WFD cumulative season length and inter night
gap}, with the shaded region marking $99\%$ of the fields. In the
second case, we consider for comparison all 5292 LSST fields covering the entire sky, taking into account only those fields where observations are taken; this is shown as the blue dashed line. In the upper panel, cadences with the black
solid line below the black dot-dashed line are those with a
significantly better inter-night gap than the baseline cadence (i.e., magenta $``\mathrm{higher \, cadence}"$ and blue $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$ strategies), whereas the others are baseline-like (orange $``\mathrm{baseline \, like}"$). From the lower panel we distinguish between strategies with a cumulative season length
similar to the baseline cadence (magenta $``\mathrm{higher \, cadence}"$ and orange $``\mathrm{baseline \, like}"$) and a significantly worse
cumulative season length (blue $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$). The area of the WFD footprint is not plotted explicitly because relative differences in the area are smaller than those in the cumulative season length.
Nevertheless, cadence strategies whose names start with a capital (lowercase) letter have a WFD footprint of $24700 \, (18000) \, \si{\square\deg}$.
The cumulative season length is the summed up season length over all
seasons. A season gap for an LSST field is defined if no observation
in any filter is taken for 85 days\footnote{To avoid unrealistically
long seasons, we split a season if the season length is longer than 320
days at the biggest gap. Seasons with a season length shorter than 10
days are removed from the simulations.}. The mean cumulative season length
of all fields under consideration is shown in the lower panel of
Figure \ref{fig:WFD cumulative season length and inter night gap}. For
the inter-night gap, shown in the upper panel of Figure \ref{fig:WFD
cumulative season length and inter night gap}, the revisits of a
field within hours in the same filter are summarized into a single
visit. Since SNe do not typically change over such a short time scale, the data
points are combined into a single detection with reduced
uncertainty. For some of the observing strategies, the mean
inter-night gap of the picked WFD fields deviates significantly from that of all fields, which is due to time spent on other regions such as the northern Ecliptic, the south Celestial Pole, and the Galactic Center.
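The season bookkeeping described above can be sketched as follows (our own helper functions, not the code used for the analysis; epochs are in days):

```python
import numpy as np

def seasons(mjd, gap=85.0, max_len=320.0, min_len=10.0):
    """Split observation epochs into seasons: a gap of more than 85 days
    starts a new season; a season longer than 320 days is split at its
    biggest internal gap; seasons shorter than 10 days are dropped.
    Returns a sorted list of (start, end) pairs."""
    mjd = np.sort(np.asarray(mjd, dtype=float))
    breaks = np.where(np.diff(mjd) > gap)[0]
    stack = list(np.split(mjd, breaks + 1))
    out = []
    while stack:
        c = stack.pop()
        if c.size > 1 and c[-1] - c[0] > max_len:
            i = int(np.argmax(np.diff(c)))      # biggest internal gap
            stack += [c[:i + 1], c[i + 1:]]
        elif c.size > 0 and c[-1] - c[0] >= min_len:
            out.append((c[0], c[-1]))
    return sorted(out)

def cumulative_season_length(mjd, **kwargs):
    """Summed season length over all seasons of one field."""
    return sum(end - start for start, end in seasons(mjd, **kwargs))
```

For example, epochs every five days over two stretches separated by a 155-day gap yield two seasons whose lengths are summed.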
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{cadences.png}
\caption{Mean inter-night gap (upper panel) and mean cumulative
season length (lower panel) for 20 different observing strategies
to define the three categories $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$, $``\mathrm{higher \, cadence}"$, and $``\mathrm{baseline \, like}"$ as described in Section \ref{sec:categorization of observing strategies}. }
\label{fig:WFD cumulative season length and inter night gap}
\end{figure*}
\section{Generating realistic LSST mock light curves of LSNe Ia}
\label{sec:Time-Delay Measurements of mock LSST LSNe Ia}
The goal of this section is to describe how mock LSST light curves
for LSNe Ia are obtained for different cadence strategies. We used mock LSNe Ia from
the OM10 catalog \citep{Oguri:2010}, assuming
the spherically symmetric SN Ia W7 model
\citep{1984:Nomoto} for each image. Synthetic light curves were produced with the
radiative transfer code {\tt ARTIS} \citep{Kromer:2009ce}, and the effect of microlensing was included via magnification maps from {\tt GERLUMPH}
\citep[J.~H.~H.~Chan in preparation]{Vernardos:2015wta}, following
Section \ref{sec: Microlensing on SNe Ia}. We then simulated data
points for the light curves, following the observation pattern of the
different cadences, with uncertainties according to the LSST science
book \citep{2009:LSSTscience}.
In Section
\ref{sec:OM10} we describe the OM10 mock catalog for strong lenses and
Section \ref{sec:Light curves for various observing strategies}
illustrates how we simulated mock light curves for mock LSNe Ia from OM10.
\subsection{Mock LSNe Ia from the OM10 catalog}
\label{sec:OM10}
The OM10 catalog \citep{Oguri:2010} is a mock lens catalog for
strongly lensed quasars and supernovae for LSST. For our purpose, we
focus on the LSNe Ia in the catalog. We expect about 45 spatially resolved LSNe Ia for
the ten-year LSST survey, under the assumption of OM10, namely a survey
area of $\Omega_\mathrm{OM10}=\SI{20000}{\square\deg}$ and a season
length of three months. Additionally, the 10$\sigma$ point source limiting magnitude in the \textit{i} band for a single visit is assumed to be $23.3$. The catalog contains LSNe Ia with two images (doubles) and four images (quads), but includes only those systems where
the multiple images are resolved (minimum image separation of
$\SI{0.5}{\arcsec}$) and the peak of the \textit{i}-band magnitude (of the fainter image for a double, or the third-brightest image for a quad) falls in an observing season and is 0.7 mag brighter than the 10$\sigma$ point source limiting magnitude. Since we used the W7 model for our mock light curves together with random microlensing magnifications, we automatically allowed for fainter systems up to 25 mag in the \textit{i} band\footnote{98\% are brighter than 24.0 mag and 41\% brighter than 22.6 mag.}, instead of applying the sharp OM10 cut of 22.6 mag. Applying the cut as in OM10 is not necessary, because we used the 5$\sigma$ depth from simulations of the LSST observing strategies to create realistic light curves with uncertainties. Therefore, systems that are too faint will simply provide worse time-delay measurements than bright ones, making it unnecessary to exclude them in advance. Furthermore, applying no magnitude cut allows us to draw conclusions about fainter systems not in the OM10 catalog, which are also relevant for time-delay measurements.
The mock catalog assumes as a lens mass model a Singular
Isothermal Ellipsoid \citep[SIE;][]{Kormann:1994} and the convergence for the
SIE is given in \cite{Oguri:2010} via
\begin{equation}
\kappa(\theta_1,\theta_2)= \frac{\theta_\mathrm{Ein} \sqrt{1-e}}{2} \frac{\lambda(e)}{\sqrt{\theta_1^2+(1-e)^2 \theta_2^2}},
\label{MockCurves:konvergence OM}
\end{equation}
where $(\theta_1,\theta_2)$ are the lens-plane coordinates, $\theta_\mathrm{Ein}$ is the Einstein radius in arcsec, $e$ is the ellipticity, and $\lambda (e)$ is the dynamical normalization defined in \cite{Oguri:2012}. The lens mass distribution is then rotated by its position angle.
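For illustration, Equation (\ref{MockCurves:konvergence OM}) translates directly into code; the function below is our own sketch, with the dynamical normalization $\lambda(e)$ passed in as a number:

```python
import numpy as np

def kappa_sie(theta1, theta2, theta_ein, e, lam):
    """Convergence of the SIE at lens-plane coordinates (theta1, theta2) in
    arcsec; theta_ein is the Einstein radius, e the ellipticity, and lam the
    dynamical normalization lambda(e)."""
    return (theta_ein * np.sqrt(1.0 - e) / 2.0 * lam
            / np.sqrt(theta1**2 + (1.0 - e)**2 * theta2**2))
```

For $e=0$ and $\lambda(e)=1$ this reduces to the singular isothermal sphere, with $\kappa = 1/2$ at the Einstein radius, a useful sanity check.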
The OM10 catalog is composed of two parts. The first part is the input
for the SIE model containing properties of the source and the lens,
such as redshift, velocity dispersion, source positions, and so
on. This first part is used to calculate mock images using GLAFIC
\citep{Oguri:2010GLAFIC} and therefore predict image positions,
magnifications, and time delays, which is the second part of the OM10
catalog. Furthermore, a microlensing map like the one in Figure \ref{fig:
microlensing map} is needed to get the macro and microlensing magnification for different images, and therefore $\kappa$ and $\gamma$ have
to be known for each of the mock images\footnote{In principle also the
smooth matter fraction $s$ but for simplicity we assumed as before $s=0.6$.}. We
calculated these parameters analytically for the SIE model following
equations from \cite{Kormann:1994}, \cite{Oguri:2010} and
\cite{Oguri:2012}, and checked
the consistency by comparing to magnification factors predicted by
GLAFIC.
The distributions of source redshift and time delay for all OM10 mock systems are shown in Figure \ref{fig:OM10 source redshift distribution}. For quad systems, the maximum of the six possible time delays (between pairs of images) is shown. All 417 LSNe Ia from OM10 correspond to the blue line. To reduce the computational effort for the investigations in Section \ref{sec:results}, we restrict ourselves to a subsample of 202 mock LSNe Ia (101 mock quads and 101 mock doubles), represented by the orange line. We find LSNe Ia for source redshifts from 0.2 to 1.4, with most of them around 0.8. In terms of time delays, most of the systems have a maximum delay shorter than 20 days; only a few systems have very long time delays (greater than 80 days).
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.45\textwidth]{source_redshift_w7_part.jpg}}
\subfigure{\includegraphics[width=0.469\textwidth]{OM10_time_delay_distribution.png}}
\caption{Source redshift (upper panel) and time-delay (lower panel)
distribution of LSNe Ia from the OM10 catalog. The blue line
shows the whole catalog (417 mock systems). The orange line
shows the subsample of 202 mock systems (101 randomly picked quads and 101 randomly picked doubles) under investigation in Section \ref{sec:results}. For the time-delay distribution, the maximum time delay is shown (relevant only for quads); three systems with time delays greater than 140 days are not in the plot. The longest delay of an LSN Ia in the OM10 catalog is 290 days.}
\label{fig:OM10 source redshift distribution}
\end{figure}
\subsection{Sampling of the light curves for various LSST observing strategies}
\label{sec:Light curves for various observing strategies}
To simulate observations, we randomly picked 202 mock LSNe Ia from the OM10
catalog (see orange curves in Figure \ref{fig:OM10 source redshift distribution}) and produced synthetic microlensed light curves for the mock
SNe images following Section \ref{sec: Microlensing on SNe Ia}. As an example, a mock quad system and the corresponding light curves (each image at a random position in its corresponding microlensing map) are shown in Figure \ref{fig: simulated observation}. Image A arrives
first followed by C, D, and B. In the simulated light curves of image D
(red solid line), an ongoing microlensing event is visible as
additional brightening about $\SI{80}{\day}$ after the peak, which is not visible
in the other three images.
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.44\textwidth]{399.jpg}}
\subfigure{\includegraphics[width=0.5\textwidth]{Obsevation_number399_baseline2018a_filter_i_oversampling_00.jpg}}
\caption{Synthetic \textit{i}-band light curves (lower panel) of a mock quad
LSN Ia (upper panel) to illustrate simulated observations. The source redshift of $0.71$ is taken into account. The observation sequence is for a random
field in the WFD survey for the \texttt{baseline2018a}
cadence.}
\label{fig: simulated observation}
\end{figure}
To get simulated data points from the theoretical light curves as shown
in Figure \ref{fig: simulated observation}, we combined the
light curves with an observing sequence of visits. This is illustrated
for the \texttt{baseline2018a} cadence in Figure \ref{fig:observation patter
LSST 10 year survey} where for one field in the WFD, all observations
within the 10-year survey are shown. For this purpose, we picked 10
fields in the WFD survey which are listed in Table \ref{tab: 10 wfd
fields}\footnote{We have not added dithering to observing strategies simulated with the {\tt OpSim} scheduler, which means that we slightly underestimated the number of visits.}. That these ten fields are representative of the WFD survey is shown in Figure \ref{fig:Comparison 10 fields to WFD fields}. Here
the mean inter-night gap (top left panel), mean cumulative season
length (bottom left panel) and mean 5$\sigma$ depth for bands \textit{g} (top
right panel) and \textit{r} (bottom right panel) for our ten fields (orange),
WFD fields (black) and all fields (blue) are shown, while the shaded region
encloses the 99th percentile.
For each of the ten fields for a given cadence, we considered the following
for each visit of the field: date (MJD), filter(s) observed, and
5$\sigma$ point-source depth $m_5$. The depth is needed to calculate the
photometric uncertainties $\sigma_1$ according to \cite{2009:LSSTscience} (see Appendix \ref{sec:Appendix LSST uncertainty}).
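A sketch of this error model is given below. It follows the standard LSST point-source model of \cite{2009:LSSTscience}; the parameter values ($\sigma_\mathrm{sys}=0.005$ mag and a band-dependent $\gamma \approx 0.039$) are assumptions for illustration and may differ slightly from the exact values used per band:

```python
import numpy as np

def sigma_1(m, m5, gamma=0.039, sigma_sys=0.005):
    """Photometric uncertainty (mag) of a point source of magnitude m, given
    the 5-sigma depth m5:
        sigma_1^2 = sigma_sys^2 + sigma_rand^2,
        sigma_rand^2 = (0.04 - gamma) * x + gamma * x^2,  x = 10^(0.4 (m - m5))."""
    x = 10.0 ** (0.4 * (m - m5))
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x**2
    return np.sqrt(sigma_sys**2 + sigma_rand_sq)
```

At $m = m_5$ this gives $\sigma_1 \approx 0.2$ mag, i.e., a 5$\sigma$ detection, as expected.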
The magnitude for each data point can then be calculated via
\begin{equation}
m_\mathrm{data} = m_{\mathrm{W7}} + r_\mathrm{norm} \sigma_1,
\label{eq:noise realization random mag including error LSST science book}
\end{equation}
where $r_\mathrm{norm}$ is a random number following the normal
distribution and $ m_{\mathrm{W7}}$ is the magnitude of the data point
from the theoretical W7 model. By placing the synthetic light curves
(shown as solid lines in Figure \ref{fig: simulated observation})
randomly in one of the fields in Table \ref{tab: 10 wfd fields},
randomly in time following the detection criteria from the OM10
catalog, and using Equation (\ref{eq:noise
realization random mag including error LSST science book}), we created
simulated data points as illustrated in Figure \ref{fig: simulated
observation}. If two or more data points are taken within one hour
in the same filter, we combined them into a single measurement, because SNe typically do not change on such time scales. Specifically,
two data points $m_\mathrm{data,1} \pm \sigma_1$ and $m_\mathrm{data,2} \pm \sigma_2$ observed at times $t_1$ and $t_2$, where $ t_1 \le t_2 \le
t_1+ \SI{1}{\hour}$, were combined into a single one as
\begin{equation}
m_\mathrm{combined} \pm \sigma_\mathrm{combined},
\end{equation}
where
\begin{equation}
m_\mathrm{combined} = \frac{m_\mathrm{data,1}/\sigma_1^2 + m_\mathrm{data,2}/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \quad \sigma_\mathrm{combined} = \sqrt{\frac{1}{1/\sigma_1^2+1/\sigma_2^2}}.
\end{equation}
We assigned to the combined data point the time $t_\mathrm{combined} = (t_1 + t_2)/2.$
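Equation (\ref{eq:noise realization random mag including error LSST science book}) and the inverse-variance combination of same-night data points can be sketched as follows (our own minimal implementation):

```python
import numpy as np

rng = np.random.default_rng()

def noise_realization(m_w7, sigma1):
    """One noise realization of a data point: m_data = m_W7 + r_norm * sigma_1,
    with r_norm drawn from a standard normal distribution."""
    return m_w7 + rng.standard_normal() * sigma1

def combine(m1, s1, m2, s2):
    """Inverse-variance weighted combination of two measurements taken within
    one hour in the same filter."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    m_comb = (m1 * w1 + m2 * w2) / (w1 + w2)
    s_comb = np.sqrt(1.0 / (w1 + w2))
    return m_comb, s_comb
```

Combining two equal measurements reduces the uncertainty by $\sqrt{2}$, while unequal uncertainties pull the combined magnitude toward the more precise point.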
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{field_number4_baseline2018a_Daniel.jpg}
\caption[Observation sequence for a WFD field in LSST.]{Illustration of the Modified Julian Dates (MJD) and filters of the observations taken over the 10-year survey for field number four from Table \ref{tab: 10 wfd fields} for observing strategy \texttt{baseline2018a}. The y axis shows the six LSST filters and the number of observations taken in each filter.}
\label{fig:observation patter LSST 10 year survey}
\end{figure}
\begin{table*}[htbp]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
field number & 1 & 2 & 3 & 4 & 5& 6 & 7 & 8 & 9 & 10 \\
\hline
RA in deg& 0.0 & 32.1 & 65.8 & 50.9 &44.9& 125.6 & 155.0 & 207.7 & 304.3 & 327.5 \\
\hline
DEC in deg& -7.4 & -44.2 & -7.2 & -30.0 & -50.9& -11.4 & -25.6 & -45.3 & -55.2 & -35.9 \\
\end{tabular}
\caption[Ten wide fast deep fields of LSST, where observation sequence is considered.]{Ten fields of the WFD survey whose observational sequences for the different cadences are considered; these are used to determine the fraction of systems with measured time delays, as discussed in Section \ref{sec:results}. We investigate the observing sequence at the centers of the listed fields.}
\label{tab: 10 wfd fields}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{compare10_WFD.png}
\caption[Comparison of inter-night gap, cumulative season length, and
5$\sigma$ depth of ten fields under investigation to the
subsample of 719 WFD fields.]{Comparison of inter-night
gap, cumulative season length, and 5$\sigma$ depth of ten fields
under investigation (orange) to the sample of 719 WFD fields (black). In addition, all 5292 LSST fields where observations are taken (blue) are shown. The lines indicate the mean, and the shaded area includes everything up to the 99th percentile. We see that the ten chosen fields are representative of the WFD survey but not of the whole survey.}
\label{fig:Comparison 10 fields to WFD fields}
\end{figure*}
\section{Time-delay measurements}
\label{sec:Time-delay measurement}
In this section, we describe how we estimate time delays from the simulated observations to quantify the different observing strategies. For each cadence strategy, we investigate 202 mock LSNe Ia (already mentioned in Section \ref{sec:Time-Delay Measurements of mock LSST LSNe Ia}) to have sufficient statistics, where we pick $50\%$ doubles and $50\%$ quads. We define a system with a ``good'' time-delay measurement as a system where the accuracy is below $1\%$ and the precision is below $5\%$. To estimate accuracy and precision, we investigate 100 random starting configurations for each of the mock systems. A starting configuration corresponds to a random
position in the microlensing map and a random field from Table
\ref{tab: 10 wfd fields}, where it is placed randomly in one of the
observing seasons such that the peak of the \textit{i}-band magnitude of the fainter image for a double or the 3rd brightest image for a quad falls in the observing season. We used the same random positions in the microlensing map for each mock image for all observing strategies investigated here, to avoid uncertainties due to different microlensing
patterns. For each of these starting
configurations, we then draw 1000 different noise realizations of light
curves following Equation (\ref{eq:noise realization random mag including error LSST science book}). For each of these realizations we have to estimate the time delay and compare it to the true value.
To get a measured time delay from the mock data we used the
free-knot splines estimator from {\tt PyCS}
\citep[Python Curve Shifting;][]{2013:Tewesb,Bonvin:2015jia}. As a spline, a piecewise polynomial
function of degree three is used. The polynomial pieces are connected by
knots, where for the optimization process, the initial number of knots
has to be specified. The polynomial coefficients and the knot
positions are free variables to optimize. To avoid clustering of the knots, a minimum knot separation is also defined in advance \citep{Molinari2004}. The basic
idea of the optimizer is to fit a single intrinsic spline to two light
curves from different images and shift the data iteratively in time and
magnitude, and modify the spline parameters, to get a time-delay
measurement.
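As a much-simplified illustration of this curve-shifting idea (not the actual {\tt PyCS} algorithm, which also optimizes knot positions and fits both curves jointly), one can fit a smooth spline to one image and grid-search the time shift of the second image that minimizes the residuals. Everything below, including the toy light-curve shape, is our own construction:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(42)

def toy_curve(t):
    # toy intrinsic light curve: a single smooth bump (flux-like units)
    return np.exp(-0.5 * ((t - 30.0) / 12.0) ** 2)

true_delay, sig = 12.0, 0.02
t_a = np.arange(0.0, 100.0, 3.0)            # image A epochs (days)
t_b = np.arange(0.5, 100.0, 3.0)            # image B epochs (days)
m_a = toy_curve(t_a) + rng.normal(0.0, sig, t_a.size)
m_b = toy_curve(t_b - true_delay) + rng.normal(0.0, sig, t_b.size)

# smooth spline model of image A (ext=3: constant beyond the data range)
model = UnivariateSpline(t_a, m_a, w=np.full(t_a.size, 1.0 / sig), k=3,
                         s=t_a.size, ext=3)

# grid search: shift image B back in time and compare to the model
trial_delays = np.arange(0.0, 25.0, 0.25)
chi2 = [np.sum(((m_b - model(t_b - dt)) / sig) ** 2) for dt in trial_delays]
dt_hat = trial_delays[int(np.argmin(chi2))]   # estimated delay
```

In this noiseless-friendly toy setup the grid search recovers a delay close to the true 12 days; with realistic sampling and microlensing distortions the $\chi^2$ surface is far less well behaved, which is why a dedicated estimator such as {\tt PyCS} is needed.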
We show in Figure \ref{fig:PyCS illustration} an example of
the fitting of the spline to two light curves, with one light curve
time-shifted by the time delay to increase overlap with the other.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{example_fit.png}
\caption{Illustration of spline fitting technique for a double mock LSNe Ia at redshift $0.27$ for the \textit{i}-band light curve. The black line corresponds to the spline fit of the data (blue and orange), where the knots positions (small vertical ticks on the black lines) as well as the magnitude and time shifts have been iteratively optimized to minimize a chi-square term, resulting in the measured delay indicated in the top-right.}
\label{fig:PyCS illustration}
\end{figure}
Both the spline parameters and the time delay between the two curves
are optimized by reducing the residuals in the fit of the spline to
the two light curves. Even with noiseless data, we would get a spread
of delays from {\tt PyCS} due to the range of splines that could fit the data equally well. Densely sampled light curves with little
microlensing would restrict the range of delays.
We do not explicitly include
additional spline components to model the microlensing
variation. An analysis that models separately the intrinsic and microlensing variability is deferred to future work.
{\tt PyCS} was initially developed to measure time delays in strongly
lensed quasars and is not yet optimized for LSNe Ia; for example, it does not fit multiple filters simultaneously or use SN template light curves.
Nonetheless, \citet{Rodney:2015uyq} used the tools of {\tt PyCS} to
measure the time delays between the multiple images of SN Refsdal as
one of the approaches, and also fit SN templates to the light
curves as another approach. The resulting delays from both
approaches were consistent with each other. While neither method explicitly included the effects of microlensing, the residuals of the
light curves of SN Refsdal suggested that no major microlensing event
occurred in the case of SN Refsdal \citep{Rodney:2015uyq}. The
template-fitting approach was also used by \citet{Goldstein:2017bny}
to fit mock light curves and color curves, although in an idealized scenario without noise and with high-cadence sampling.
\citet{Goldstein:2017bny} found the fitting of templates to light
curves yielded time-delay uncertainties of approximately $4\%$, limited by
microlensing distortion of light curves, whereas the fitting to color
curves in the achromatic phase provided approximately $1\%$ uncertainties in the
delays. For our LSST light curves, we opt to use {\tt PyCS} on light
curves given that (1) color curves are not available from LSST data
given the sampling cadence, and (2) there is currently no publicly
available template-fitting software accounting for microlensing, an
effect that can significantly distort the light curves as shown in
Section \ref{sec:Microlensing on Type Ia Supernovae}.
Applying {\tt PyCS} to the light curves of each filter individually, we get a single
independent time delay per filter. This means that for a given LSST filter $f$, the $j$-th starting configuration, and the $k$-th noise realization, we have a deviation from the true time delay:
\begin{equation}
\tau_{\mathrm{d, }f,j,k} = \frac{\Delta t_{\mathrm{measured, }f,j,k} - \Delta t_{\mathrm{true, }f,j,k}}{\Delta t_{\mathrm{true, }f,j,k}}.
\label{eq: deviation from true time delay}
\end{equation}
For each observing strategy and double LSNe Ia, we thus have 1 (delay for the one pair of images) $\times \, 6$ (filters) $\times \, 100$ (starting configurations) $\times \, 1000$ (noise realizations)
time-delay deviations as in Equation (\ref{eq: deviation from true time delay}).
For the six pairs of images for a quad system, we have a sample of $6
\times 6 \times 100 \times 1000$.
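As a purely illustrative sketch (not part of the analysis pipeline), the deviation of Equation (\ref{eq: deviation from true time delay}) and the resulting sample sizes can be written as:

```python
# Fractional deviation of a measured time delay from the true one,
# as in the equation above: (measured - true) / true.
def delay_deviation(dt_measured, dt_true):
    return (dt_measured - dt_true) / dt_true

# Sample sizes per observing strategy and mock system:
# a double has 1 image pair, a quad has 6; each pair is analyzed in
# 6 filters (ugrizy), 100 starting configurations, 1000 noise realizations.
n_double = 1 * 6 * 100 * 1000   # 600000 deviations
n_quad   = 6 * 6 * 100 * 1000   # 3600000 deviations
```

The numerical values fed to `delay_deviation` here are hypothetical; only the counting of filters, configurations, and realizations follows the text.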
To exclude starting configurations that are completely wrong in comparison to most of the investigated systems, we calculated separately for each starting configuration
the median $\tau_{\mathrm{d,50, }f,j}$ and the error $\delta_{f,j} =
(\tau_{\mathrm{d,84, }f,j}-\tau_{\mathrm{d,16, }f,j})/2$, where
$\tau_{\mathrm{d,50, }f,j}$, $\tau_{\mathrm{d,84, }f,j}$, and
$\tau_{\mathrm{d,16, }f,j}$ are the 50th, 84th, and 16th percentiles
from the 1000 noise realizations. Furthermore, we combined the six filters via
the weighted mean into a single time-delay deviation
$\tau_{\mathrm{d,50},j} \pm \delta_j$, where
\begin{equation}
\tau_\mathrm{d,50,j} = \frac{\sum_{f=\mathrm{ugrizy}} \tau_{\mathrm{d,50, }f,j} / \delta_{f,j}^2 }{\sum_{f=\mathrm{ugrizy}} 1/\delta_{f,j}^2}, \qquad \delta_j = \sqrt{\frac{1}{\sum_{f=\mathrm{ugrizy}} 1/\delta_{f,j}^2}}.
\end{equation}
This is possible since the distribution of the time-delay deviation for each filter is
approximately Gaussian.
From this we exclude
``catastrophic failures'', which are starting configurations with
$\delta_j \ge 2 \bar{\delta}$ or
$|\tau_{\mathrm{d,50},j}-\bar{\tau}_{\mathrm{d,50}}| \ge 5
\delta_j$; these occur for about $10\%$ of the starting
configurations, independent of the observing strategy. The bar indicates the mean over the starting configurations, that is,
\begin{equation}
\bar{\delta} = \frac{1}{100} \sum_{j=1}^{100} \delta_j \qquad \mathrm{and} \qquad \bar{\tau}_{\mathrm{d,50}} = \frac{1}{100} \sum_{j=1}^{100} \tau_{\mathrm{d,50},j}.
\end{equation}
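The exclusion step can be sketched as follows (a minimal NumPy illustration of the criteria above; the function name and array layout are ours, not from any released code):

```python
import numpy as np

def exclude_catastrophic_failures(tau50, delta):
    """Keep starting configurations j that are neither imprecise
    (delta_j >= 2 * mean(delta)) nor strongly deviating
    (|tau50_j - mean(tau50)| >= 5 * delta_j)."""
    tau50 = np.asarray(tau50, dtype=float)
    delta = np.asarray(delta, dtype=float)
    keep = (delta < 2.0 * delta.mean()) & \
           (np.abs(tau50 - tau50.mean()) < 5.0 * delta)
    return tau50[keep], delta[keep]
```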
The failures are likely due to a bad starting time of the supernova
in the season (such as at the beginning or end of a season, where some
of the light curves of the multiple images would be incomplete due to
the seasonal gap) and strong microlensing distortions. These effects
could be easily identified in real lens systems, and provide advance
warning of potentially problematic delay inference. In addition,
simulations of light curves mimicking those of real lens systems could
be used to identify catastrophic failures of problematic systems and
avoid the use of their time delays for further analysis such as
cosmography.
After excluding catastrophic failures, we are left with about 90 of the 100 initial starting configurations, leading to approximately $90 \times 1000 = 90000$ time-delay
deviations $\tau_{\mathrm{d, }f,j,k}$ for each filter $f$. From these we define the accuracy as the median $\tau_{\mathrm{d,50, }f}$
and the precision as $\delta_f = (\tau_{\mathrm{d,84,
}f}-\tau_{\mathrm{d,16, }f})/2$, where $\tau_{\mathrm{d,84, }f}$ is
the 84th and $\tau_{\mathrm{d,16, }f}$ the 16th percentile over the $\sim$90000 starting configurations and noise realizations, that is, over the $j$ and $k$ indexes. Since the time-delay deviations from the
six filters are independent, we combined them into a single time-delay
deviation. This means that in the end, we have
for one strategy and a mock LSNe Ia a single $\tau_\mathrm{d,50} \pm \delta$ per pair of images, where
\begin{equation}
\tau_\mathrm{d,50} = \frac{\sum_{f=\mathrm{ugrizy}} \tau_{\mathrm{d,50, }f} / \delta_f^2 }{\sum_{f=\mathrm{ugrizy}} 1/\delta_f^2}, \qquad \delta = \sqrt{\frac{1}{\sum_{f=\mathrm{ugrizy}} 1/\delta_f^2}}.
\label{eq: accuracy and precission}
\end{equation}
To use the weighted mean here is possible since the time-delay distributions for different filters are approximately Gaussian.
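The inverse-variance weighting of Equation (\ref{eq: accuracy and precission}) amounts to the following (an illustrative NumPy sketch; function and variable names are ours):

```python
import numpy as np

def combine_filters(tau50, delta):
    """Inverse-variance weighted mean of per-filter time-delay
    deviations tau50_f with errors delta_f, and the combined error."""
    tau50 = np.asarray(tau50, dtype=float)
    w = 1.0 / np.asarray(delta, dtype=float) ** 2  # weights 1/delta_f^2
    return np.sum(w * tau50) / np.sum(w), np.sqrt(1.0 / np.sum(w))
```

For equal per-filter errors this reduces to the plain mean, and the combined error shrinks by $1/\sqrt{6}$, as expected for six independent, approximately Gaussian measurements.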
\section{Results: cadence strategies for LSNe}
\label{sec:results}
In this section, we present the results of the investigation of the
different cadence strategies presented in Section \ref{sec: LSST}. We
distinguish between two different cases: (1) using LSST data only for
measuring time delays, and (2) using LSST just as a discovery machine
for LSNe Ia and getting the time delay(s) from follow-up observations.
Given that $H_0 \propto {\Delta t_\mathrm{true}^{-1}}$, where
$\Delta t_\mathrm{true}$ is the time delay between two images, we aim
for accuracy ($\tau_\mathrm{d,50}$ in Equation (\ref{eq: accuracy and
precission})) smaller than $1 \%$ and precision ($\delta$ in Equation (\ref{eq: accuracy and
precission})) smaller
than $5\%$. We refer to systems fulfilling these requirements as
systems with good time delays. A quad system is counted as
successful if at least one of the six delays fulfills these requirements. The
accuracy requirement is needed for measuring $H_0$ with $1\%$
uncertainty, and the precision requirement ensures that the delay
uncertainty does not dominate the overall uncertainty on $H_0$ given
typical mass modeling uncertainties of about $5\%$
\citep[e.g.,][]{Suyu2018}.
\subsection{Number of LSNe Ia}
Before comparing cadence strategies based on the time-delay
measurements, we first estimate the total number of LSNe Ia for
different observing strategies. Since different observing strategies
have different survey areas and different cumulative season lengths,
the number of LSNe Ia deviates from the predicted number from OM10. We
approximate the total number of LSNe Ia as
\begin{equation}
\label{eq: total number of LSNe Ia from modified OM 10}
N_\mathrm{LSNe Ia, cad} \approx N_\mathrm{LSNe Ia, OM 10} \frac{\Omega_\mathrm{cad}}{\Omega_\mathrm{OM10}} \frac{\bar{t}_\mathrm{eff,cad}}{t_\mathrm{eff, OM10}},
\end{equation}
where $N_\mathrm{LSNe Ia, OM10} = 45.7$, $\Omega_\mathrm{OM10} =
\SI{20000}{\square\deg}$ and $t_\mathrm{eff, OM10}=\SI{2.5}{\year}$
from \cite{Oguri:2010}. The
effective (i.e., cumulative) season length for a given cadence strategy is $\bar{t}_\mathrm{eff,cad}$,
where we have averaged over the sample of 719 WFD
fields. The survey area for a given observing
strategy is $\Omega_\mathrm{cad}$. Instead of taking the nominal values
($\SI{24700}{\square\deg}$ for large-footprint strategies and
$\SI{18000}{\square\deg}$ for the rest), we calculated the area from the fields
represented by our study, which are the fields with a mean cumulative
season length and inter-night gap similar to or better than those of the 719
WFD fields, that is, a cumulative season length ($t_\mathrm{eff}$)
longer than the lower 99th percentile and an inter-night gap
($t_\mathrm{gap}$) shorter than the upper 99th percentile. Additionally, we take into account the 5$\sigma$ depth ($m_5$), where we
consider only the main relevant bands \textit{g, r, i,} and \textit{z}. Here we
consider all fields with $m_5+0.2\,\mathrm{mag}$ greater than the
lower 99th percentile of the 719 WFD fields. The relaxed 5$\sigma$
depth is necessary in order to represent the wider areas suggested
by the nominal values\footnote{This leads to a few percent
  overestimation of the total number of LSNe Ia with good time
  delays for large footprints in comparison to the
  $\SI{18000}{\square\deg}$ footprint. Nonetheless, since we find that the improvement
  due to the wider area is small, this is not a problem and does not
  affect the overall conclusions of our work.}. The area can
then be calculated from the number of fields fulfilling the above
defined criteria ($N_\mathrm{cad,criteria})$, multiplied with the field of view of
$\SI{9.6}{\square\deg}$, taking into account the overlap factor of the
fields:
\begin{equation}
\Omega_\mathrm{cad} = f_\mathrm{overlap} \cdot N_\mathrm{cad,criteria} \cdot \SI{9.6}{\square\deg},
\end{equation}
where
\begin{equation}
f_\mathrm{overlap}=\frac{4 \pi \cdot (\SI{180}{\deg}/\pi)^2}{5292 \cdot \SI{9.6}{\square\deg}} \approx 0.812.
\end{equation}
The total number of fields is 5292, which together cover the entire sky, as
noted in Section \ref{sec: LSST}, and the numerator corresponds to the surface area of a sphere in $\mathrm{deg}^2$. Therefore, $\Omega_\mathrm{cad}$ is equivalent to $4 \pi N_\mathrm{cad,criteria}/5292$ in units of $\mathrm{rad}^2$. The results from Equation (\ref{eq: total number of LSNe Ia from
modified OM 10}) for the 20 investigated cadences are shown in Table
\ref{tab: total number of LSNe Ia from OM 10}. We find that mainly the
cumulative season length sets the order of the table; therefore, for rolling cadences with a lower number of observing
seasons (blue $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$ strategies), many LSNe Ia will not be detected because
of the alternating observation scheme.
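The area and number estimates above can be sketched numerically as follows (an illustrative Python check using only the constants quoted in the text):

```python
import math

# 5292 fields of 9.6 deg^2 tile the full sky (~41253 deg^2), so the
# overlap factor is the ratio of the sky area to the summed field areas.
FULL_SKY_DEG2 = 4.0 * math.pi * (180.0 / math.pi) ** 2
F_OVERLAP = FULL_SKY_DEG2 / (5292 * 9.6)            # ~0.812

def survey_area(n_fields_passing):
    """Omega_cad from the number of fields passing the selection criteria."""
    return F_OVERLAP * n_fields_passing * 9.6

def n_lsne_ia(omega_cad, t_eff_cad,
              n_om10=45.7, omega_om10=20000.0, t_om10=2.5):
    """Total number of LSNe Ia, rescaled from OM10 by survey area
    and cumulative season length."""
    return n_om10 * (omega_cad / omega_om10) * (t_eff_cad / t_om10)
```

For instance, `n_lsne_ia(17306, 4.64)` with the \texttt{baseline2018a} values from Table \ref{tab: total number of LSNe Ia from OM 10} reproduces the tabulated 73.4 LSNe Ia.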
\begin{table}
\tabcolsep=0.15cm
\centering
\begin{tabular}{c|c|cc} & $N_\mathrm{LSNe Ia,cad}$ & $\bar{t}_\mathrm{eff,cad}$ in yr & $\Omega_\mathrm{cad}$ in $\si{\square\deg}$ \\
\hline
\texttt{\textcolor{magenta}{Kraken\_2044}} & 101.9 & 4.64 & 24010 \\
\texttt{\textcolor{orange}{Pontus\_2002}} & 86.0 & 4.11 & 22926 \\
\texttt{\textcolor{magenta}{colossus\_2667}} & 84.0 & 5.16 & 17797 \\
\texttt{\textcolor{magenta}{pontus\_2489}} & 81.1 & 5.00 & 17758 \\
\texttt{\textcolor{orange}{rolling\_10yrs\_opsim}} & 79.1 & 4.77 & 18148 \\
\texttt{\textcolor{magenta}{rolling\_mix\_10yrs\_opsim}} & 78.9 & 4.76 & 18132 \\
\texttt{\textcolor{orange}{kraken\_2042}} & 78.0 & 4.79 & 17828 \\
\texttt{\textcolor{orange}{colossus\_2665}} & 76.8 & 4.55 & 18475 \\
\texttt{\textcolor{orange}{pontus\_2502}} & 76.3 & 4.74 & 17602 \\
\texttt{\textcolor{orange}{colossus\_2664}} & 74.6 & 4.48 & 18202 \\
\texttt{\textcolor{orange}{baseline2018a}} & 73.4 & 4.64 & 17306 \\
\texttt{\textcolor{orange}{kraken\_2035}} & 73.4 & 4.54 & 17680 \\
\texttt{\textcolor{orange}{kraken\_2026}} & 72.4 & 4.63 & 17119 \\
\texttt{\textcolor{magenta}{pontus\_2506}} & 72.2 & 4.36 & 18132 \\
\altsched & 61.7 & 3.81 & 17703 \\
\texttt{\textcolor{blue}{Nexus\_2097}} & 52.2 & 2.79 & 20471 \\
\texttt{\textcolor{blue}{Mothra\_2049}} & 50.9 & 2.55 & 21874 \\
\texttt{\textcolor{blue}{kraken\_2036}} & 45.2 & 2.79 & 17719 \\
\texttt{\textcolor{blue}{alt\_sched\_rolling}} & 37.9 & 2.03 & 20463 \\
\texttt{\textcolor{blue}{mothra\_2045}} & 37.2 & 2.48 & 16417 \\
\end{tabular}
\caption[Total number of LSNe Ia for different observing strategies based on OM10.]{Total number of LSNe Ia over the 10-year survey calculated via Equation (\ref{eq: total number of LSNe Ia from modified OM 10}), where $69\%$ are doubles and $31\%$ are quads. To illustrate the differences between the strategies, the cumulative season length $\bar{t}_\mathrm{eff,cad}$ and the survey area $\Omega_\mathrm{cad}$ are also shown. The total number depends on the selection criteria assumed in \cite{Oguri:2010}. If we relax criteria such as the image separation, these numbers will be higher, but the order will be unchanged. Since the differences in $\bar{t}_\mathrm{eff,cad}$ are much larger than those in $\Omega_\mathrm{cad}$, the cumulative season length mostly sets the order of the table.}
\label{tab: total number of LSNe Ia from OM 10}
\end{table}
\subsection{LSST data only}
\label{sec:LSST Data only}
Here, we quantify the 20 investigated cadences for the case of
using LSST data only for measuring time delays. We have investigated
101 randomly picked quads and 101 randomly picked doubles. The
distribution of the source redshifts and time delays are shown as
orange lines in Figure \ref{fig:OM10 source redshift
distribution}. The 202 systems are used to determine the fraction
$f_{a}$ of systems with good time delays:
\begin{equation}
f_{a} = \frac{N_{\Delta t,a}}{N_{a}} \qquad a = \mathrm{double, quad},
\label{eq:fraction of sytems with good delays}
\end{equation}
where $N_{\Delta t,a}$ is the number of systems with good time
delays and $N_{a}=101$ for $a=\mathrm{double, quad}$. Since we have
picked the same number of
doubles and quads, whereas the real ratio between doubles and quads in
the OM10 catalog is
$69:31$, the total fraction can be calculated as
\begin{equation}
f_\mathrm{total} = 0.69 f_\mathrm{double} + 0.31 f_\mathrm{quad}.
\label{eq:res:LSST DATA only}
\end{equation}
The fractions of doubles $f_\mathrm{double}$ and quads
$f_\mathrm{quad}$ as well as the total fraction
$f_\mathrm{total}$ are shown in Table \ref{tab:LSST only
fraction of systems}. It becomes clear that the fraction of systems
with good delays depends mostly on the inter-night gap, where
strategies with better sampling (blue $``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$ and magenta $``\mathrm{higher \, cadence}"$ strategies) provide
higher fractions.
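As a quick, purely illustrative check of Equation (\ref{eq:res:LSST DATA only}), the tabulated per-type fractions recombine into the total fraction:

```python
def f_total(f_double, f_quad):
    """Total fraction for the 69:31 double-to-quad ratio of OM10."""
    return 0.69 * f_double + 0.31 * f_quad
```

For example, for the \texttt{alt\_sched\_rolling} row of Table \ref{tab:LSST only fraction of systems}, `f_total(21.8, 6.9)` gives $\approx 17.2\%$. (Rounding of the tabulated per-type fractions can shift the last digit for some rows.)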
\begin{table}[h!]
\centering
\begin{tabular}{c|c|cc}
&$f_\mathrm{total}$& $f_\mathrm{double}$ & $f_\mathrm{quad}$ \\
\hline
\texttt{\textcolor{blue}{alt\_sched\_rolling}} & 17.2 & 21.8 & 6.9 \\
\altsched & 13.5 & 17.8 & 4.0 \\
\texttt{\textcolor{magenta}{rolling\_mix\_10yrs\_opsim}} & 10.2 & 13.9 & 2.0 \\
\texttt{\textcolor{magenta}{pontus\_2506}} & 9.1 & 11.9 & 3.0 \\
\texttt{\textcolor{magenta}{colossus\_2667}} & 9.1 & 11.9 & 3.0 \\
\texttt{\textcolor{magenta}{pontus\_2489}} & 7.4 & 9.9 & 2.0 \\
\texttt{\textcolor{orange}{rolling\_10yrs\_opsim}} & 6.8 & 8.9 & 2.0 \\
\texttt{\textcolor{blue}{mothra\_2045}} & 6.1 & 7.9 & 2.0 \\
\texttt{\textcolor{magenta}{Kraken\_2044}} & 5.8 & 7.9 & 1.0 \\
\texttt{\textcolor{orange}{kraken\_2042}} & 5.8 & 7.9 & 1.0 \\
\texttt{\textcolor{blue}{Nexus\_2097}} & 4.8 & 6.9 & 0.0 \\
\texttt{\textcolor{orange}{kraken\_2026}} & 4.8 & 6.9 & 0.0 \\
\texttt{\textcolor{blue}{Mothra\_2049}} & 4.7 & 5.9 & 2.0 \\
\texttt{\textcolor{blue}{kraken\_2036}} & 4.7 & 5.9 & 2.0 \\
\texttt{\textcolor{orange}{colossus\_2665}} & 3.7 & 5.0 & 1.0 \\
\texttt{\textcolor{orange}{baseline2018a}} & 3.7 & 5.0 & 1.0 \\
\texttt{\textcolor{orange}{colossus\_2664}} & 3.4 & 5.0 & 0.0 \\
\texttt{\textcolor{orange}{kraken\_2035}} & 2.0 & 3.0 & 0.0 \\
\texttt{\textcolor{orange}{pontus\_2502}} & 1.4 & 2.0 & 0.0 \\
\texttt{\textcolor{orange}{Pontus\_2002}} & 1.4 & 2.0 & 0.0 \\
\end{tabular}
\caption[Fraction of systems with good delays for using LSST data only.]{Fraction of systems (in \%) of the 202 investigated mock systems (101 doubles and 101 quads) where the time delay has been measured with accuracy smaller than $1 \%$ and precision smaller than $5 \%$, using LSST data only. The total fraction $f_\mathrm{total}$ accounts for the expected 69:31 ratio of doubles and quads from OM10 (see Equation (\ref{eq:res:LSST DATA only})). The investigation has been done for the ten fields listed in Table \ref{tab: 10 wfd fields}. These are not the final results, as the total number of detected LSNe Ia is not taken into account.}
\label{tab:LSST only fraction of systems}
\end{table}
We determined the value of a given cadence strategy for our science
case by combining Tables \ref{tab: total number of LSNe Ia from
OM 10} and \ref{tab:LSST only fraction of systems}. The results for
the 10-year survey are shown in Figure \ref{fig:LSST only, total
number with good delays}. One sees that the key to obtaining a high
number of LSNe Ia with good delays is a short inter-night gap while
keeping the cumulative season length baseline-like (magenta $``\mathrm{higher \, cadence}"$ strategies). Only for the strategy \texttt{alt\_sched\_rolling} can the much better
sampling compensate for the short cumulative season length.
From the upper panel of Figure \ref{fig:distribution,LSST only, altsched}, it becomes
clear that only nearby systems ($z \lesssim 0.9$) with long time
delays ($\Delta t \gtrsim \SI{25}{\day}$) are measured
successfully. High redshift systems are overall fainter and the larger photometric errors make delay measurements more uncertain. Shorter time delays are not accessible because of the
sparse sampling and microlensing uncertainties. Looking at the total
number in Figure \ref{fig:LSST only, total number with good delays}, we
find that even the best strategies provide just a handful of systems
and therefore using just LSST data for measuring time delays is not
ideal. Therefore we investigate the prospects of
using follow-up observations in combination with LSST data.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{MetricLSSTonly.pdf}
\caption{Number of LSNe Ia for the 10-year survey where the time delay has been measured with accuracy $<1\%$ and precision $<5\%$, using only LSST data.}
\label{fig:LSST only, total number with good delays}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{dist_alt_sched.png}
\caption{Time-delay and source-redshift distribution for 202
  investigated mock LSNe Ia for ``LSST only'' (upper panel) and ``LSST + follow-up'' (lower panel) for observing
  strategy \texttt{kraken\_2044}. For a quad system, just a single delay is
  shown, either the first successfully measured time delay or the
  maximum of the six possible time delays. The blue circles show all 202
  investigated systems, and the orange filled dots correspond to
  systems where the time delay has been measured with accuracy better
  than $1 \%$ and precision better than $5 \%$. Comparing the two panels, we see a significant improvement going from ``LSST only'' to ``LSST + follow-up'', which we find for most of the observing strategies, as suggested by Table \ref{tab:LSST+follow-up fraction of systems}.}
\label{fig:distribution,LSST only, altsched}
\end{figure}
\subsection{LSST and follow-up observation}
\label{sec:LSST+follow-up}
Here, we investigate 20 different LSST observing strategies for
using LSST just as a discovery machine. For the time-delay measurement we
assumed follow-up observations in the three filters \textit{g, r,} and \textit{i}, going to a
depth of $m_{5,\mathrm{g}}=\SI{24.6}{\mag}$,
$m_{5,\mathrm{r}}=\SI{24.2}{\mag}$ and
$m_{5,\mathrm{i}}=\SI{23.7}{\mag}$, which are similar to the depth of
the baseline cadence. These depths correspond to an observing time of approximately $\SI{6}{\minute}$ per filter and night on a 2 m telescope, which, apart from the diameter, is assumed to have the same properties as LSST (e.g., detector sensitivity). We adopt a realistic scenario where follow-up
starts two days after the third data point exceeds the 5$\sigma$
depth in any filter\footnote{\cite{Goldstein:2018bue} suggests that follow-up after three data points might be optimistic, but we would like to point out that this relies on the applied classification scheme \citep{Goldstein:2017bny}
that does not make use of all available lensing information which would help with identification.}.
The follow-up is assumed to take place every
second night in all three filters. Alternative follow-up scenarios are investigated in Section \ref{sec:number of LSNe Ia with delays}.
Assuming a 2 m telescope is a conservative assessment of the follow-up resources. Observing with larger telescopes would be quite reasonable and would significantly reduce the exposure time or enable greater depth. The prospects of deeper follow-up are discussed in Section \ref{sec:number of LSNe Ia with delays}.
The fraction of systems with well measured time delays is calculated
similar to Section \ref{sec:LSST Data only} and summarized in Table
\ref{tab:LSST+follow-up fraction of systems} for the 20 investigated
observing strategies. Applying only the accuracy requirement ($\tau_\mathrm{d,50} < 1 \%$) would yield, for all cadence strategies, about $30 \%$ fewer systems from the 202 investigated ones, with a slight trend toward more accurate systems for cadence strategies with improved sampling.
Since for the case of ``LSST + follow-up'' the accuracy depends only weakly on the cadence strategy, the precision requirement ($\delta < 5 \%$) mostly sets the order of Table \ref{tab:LSST+follow-up fraction of systems}. Since blue ($``\mathrm{higher \, cadence \, \& \, fewer \, seasons}"$) and magenta ($``\mathrm{higher \, cadence}"$) strategies perform better than orange ($``\mathrm{baseline \, like}"$) strategies in Tables \ref{tab:LSST only fraction of systems} and \ref{tab:LSST+follow-up fraction of systems}, we see that a short inter-night gap is important for good precision. Even though the light curves for Table \ref{tab:LSST+follow-up fraction of systems} are created via follow-up resources, a short inter-night gap is still important to detect systems earlier and obtain better sampled light curves, although it is less important than for ``LSST only'', where the ratio between the best and worst cadence
strategy is about $12$ instead of approximately $2$ for LSST +
follow-up. This makes clear that in terms of the fraction of systems
with good delays, the sampling of the LSST observing strategy is still important, but far less so than if
we relied on LSST data only.
From Table \ref{tab:LSST+follow-up fraction of systems} we see that we can increase the fraction and therefore the number of LSNe Ia with good
delays for ``LSST + follow-up'' in comparison to using only LSST data by a factor of 2 to 16, depending on the cadence strategy. For a
strategy like \texttt{alt\_sched\_rolling}, the effort of triggering the
above-defined follow-up observations is questionable, but for most other
strategies the improvement is significant.
In practice it is important to pick systems with good accuracy for a final cosmological sample in order to determine $H_0$. We find that the reduction due to our accuracy requirement is partly caused by microlensing, but the quality of the light curve also plays a role, since follow-up with greater depth provides more systems with accurate time delays. The prospects of greater depth are investigated in Section \ref{sec:number of LSNe Ia with delays}, and one way to mitigate the effect of microlensing is the use of color information, as discussed in Appendix \ref{sec: Case study}. From Figure \ref{fig:Accuracy as function of time delay} we see that for ``LSST + follow-up'' nearly all time delays greater than 20 days yield an accuracy within one percent, whereas targeting short delays risks adding bias to a final cosmological sample.
\begin{figure}
\centering
\includegraphics[scale=0.6]{successful_delays_colossus_2667.png}
\caption{Distribution of all 707 possible time delays (blue) and of the time delays with accuracy better than $1 \%$ (orange) from the 202 investigated systems for ``LSST + follow-up'' and observing strategy \texttt{colossus\_2667}. Nearly all time delays are accurate for pairs of images with a time delay greater than 20 days.}
\label{fig:Accuracy as function of time delay}
\end{figure}
In the lower panel of Figure \ref{fig:distribution,LSST only, altsched}, we
see that similar to the case of using only LSST data, we are limited to
nearby systems ($z \lesssim 0.9$). In terms of time delays, we can
reach lower values due to the much better quality of the
light curves, but still, most of the short time delays are not
accessible because of microlensing and our cut on precision.
\begin{table}[h!]
\tabcolsep=0.15cm
\centering
\begin{tabular}{c|c|cc|c}
&$f_\mathrm{total}$& $f_\mathrm{double}$ & $f_\mathrm{quad}$ & $\frac{f_\mathrm{total,LSST+follow-up}}{f_\mathrm{total,LSST only}}$ \\
\hline
\texttt{\textcolor{blue}{alt\_sched\_rolling}} & 34.4 & 43.6 & 13.9 & 2.0 \\
\altsched & 32.1 & 41.6 & 10.9 & 2.4 \\
\texttt{\textcolor{magenta}{colossus\_2667}} & 31.1 & 40.6 & 9.9 & 3.4 \\
\texttt{\textcolor{magenta}{pontus\_2506}} & 27.0 & 34.7 & 9.9 & 3.0 \\
\texttt{\textcolor{blue}{mothra\_2045}} & 26.7 & 35.6 & 6.9 & 4.4 \\
\texttt{\textcolor{magenta}{Kraken\_2044}} & 26.7 & 34.7 & 8.9 & 4.6 \\
\texttt{\textcolor{orange}{kraken\_2042}} & 25.0 & 32.7 & 7.9 & 4.3 \\
\texttt{\textcolor{orange}{kraken\_2026}} & 24.3 & 31.7 & 7.9 & 5.1 \\
\texttt{\textcolor{blue}{kraken\_2036}} & 24.0 & 31.7 & 6.9 & 5.1 \\
\texttt{\textcolor{magenta}{pontus\_2489}} & 23.6 & 30.7 & 7.9 & 3.2 \\
\texttt{\textcolor{blue}{Mothra\_2049}} & 23.6 & 30.7 & 7.9 & 5.0 \\
\texttt{\textcolor{magenta}{rolling\_mix\_10yrs\_opsim}} & 23.3 & 30.7 & 6.9 & 2.3 \\
\texttt{\textcolor{blue}{Nexus\_2097}} & 23.3 & 30.7 & 6.9 & 4.9 \\
\texttt{\textcolor{orange}{baseline2018a}} & 23.3 & 30.7 & 6.9 & 6.3 \\
\texttt{\textcolor{orange}{Pontus\_2002}} & 22.0 & 28.7 & 6.9 & 16.1 \\
\texttt{\textcolor{orange}{kraken\_2035}} & 22.0 & 28.7 & 6.9 & 10.7 \\
\texttt{\textcolor{orange}{colossus\_2665}} & 22.0 & 28.7 & 6.9 & 5.9 \\
\texttt{\textcolor{orange}{colossus\_2664}} & 22.0 & 28.7 & 6.9 & 6.4 \\
\texttt{\textcolor{orange}{pontus\_2502}} & 20.3 & 26.7 & 5.9 & 14.8 \\
\texttt{\textcolor{orange}{rolling\_10yrs\_opsim}} & 18.2 & 23.8 & 5.9 & 2.7 \\
\end{tabular}
\caption[Fraction of systems with good delays for using LSST as discovery machine in combination with follow-up observation to get time delays.]{Fraction of systems (columns two, three, and four, in \%) of the 202 investigated mock systems (101 doubles and 101 quads) where the time delay has been measured with accuracy smaller than $1\%$ and precision smaller than $5\%$, using LSST as a discovery machine and getting time delays from follow-up observations. The investigation has been done for the ten fields listed in Table \ref{tab: 10 wfd fields}. The fifth column shows how much better a cadence performs in comparison to using LSST data only. This table is insufficient to rank different cadence strategies because the total number of detected LSNe Ia is not taken into account.}
\label{tab:LSST+follow-up fraction of systems}
\end{table}
By combining Tables \ref{tab: total number of LSNe Ia from OM 10} and
\ref{tab:LSST+follow-up fraction of systems}, we get the total number
of LSNe Ia with good time delays, as shown in Figure \ref{fig:LSST only
and LSST+follow-up, total number with good delays}. We note that the presented results have errors within $10 \%$ due to uncertainties in the calculated area and sampling. Another point is that we do not apply the sharp OM10 cut of 22.6 mag mentioned in Section \ref{sec:OM10}. We find that we are also able to get good time delays for fainter systems ($> \SI{22.6}{\mag}$), although in number they are a factor of at least 1.7 fewer than the bright ones ($\le \SI{22.6}{\mag}$). This means that the numbers presented in Table \ref{tab: total number of LSNe Ia from OM 10}, and therefore also the numbers in Figure \ref{fig:LSST only and LSST+follow-up, total number with good delays}, are a conservative estimate, and in reality we can expect even more systems with well-measured time delays. An overly optimistic version of Figure \ref{fig:LSST only and LSST+follow-up, total number with good delays} is presented in Appendix \ref{sec:Appendix optimistic estimate of the number of LSNe Ia with well measured delay}. While these sources of uncertainty might change the ordering presented in Figure \ref{fig:LSST only and LSST+follow-up, total number with good delays} slightly, they do not influence our overall conclusions, which will be presented in the following.
We see that for
the current baseline strategy we would expect about $17$ LSNe Ia with
good delays over the 10-year survey. To increase this number, the most
promising strategies are those with a baseline-like cumulative season
length $\bar{t}_\mathrm{eff,cad}$ and an enhanced sampling (magenta $``\mathrm{higher \, cadence}"$
strategies). To achieve this, the most efficient way would be to get
rid of the revisit within the same night (compare \texttt{colossus\_2667} to
\texttt{baseline2018a}). Because this would make the science case of fast-moving objects impossible, we think a reasonable compromise is to do
the revisit within the same night in a different filter \citep{Lochner:2018}. This performs
worse than doing single visits but still better than doing the revisit
in the same filter (compare \texttt{pontus\_2506} to \texttt{colossus\_2667} and
\texttt{baseline2018a}). In terms of the cumulative season length, it seems
appropriate to stay with a baseline-like season length of about 170
days and ten seasons. Further gains can be achieved by replacing the $2 \times
\SI{15}{\mathrm{s}}$ exposures with a single $\SI{30}{\mathrm{s}}$ exposure to improve efficiency
(compare \texttt{kraken\_2042} to \texttt{baseline2018a}).
Although our numbers increase for a WFD area extended by $\SI{6700}{\square\deg}$ (compare \texttt{Kraken\_2044} and
\texttt{colossus\_2667}, and \texttt{Pontus\_2002} and \texttt{baseline2018a}), we find this only for ``LSST + follow-up''. For ``LSST only'', strategies with a smaller WFD footprint perform better. Therefore we suggest sticking with the WFD footprint of $\SI{18000}{\square\deg}$, as used for 16 of the 20 investigated observing strategies, although $\SI{24700}{\square\deg}$ would also be acceptable.
Concerning the depth of the observing strategy, most of the investigated
strategies provide a 5$\sigma$ depth similar to that of the baseline cadence (see right panels of Figure \ref{fig:Comparison 10 fields to WFD fields}). Those strategies with a slightly lower 5$\sigma$ depth (\texttt{alt\_sched}, \texttt{alt\_sched\_rolling}, and \texttt{pontus\_2489}) show no significant deviations in the results, which is related to their enhanced sampling in comparison to the baseline cadence.
Another
interesting scenario to investigate is the redistribution of visits
from the \textit{y} band to bands more useful for LSNe Ia, as done in \texttt{alt\_sched}. This
means going from a distribution of visits in \textit{ugrizy} of (6.8, 9.4, 21.5,
21.6, 20.2, 20.4)\% to (8.2, 11.0, 27.6, 18.1, 25.6, 9.5)\%. Because
of the many differences between \texttt{alt\_sched} and \texttt{baseline2018a}, a direct
comparison is impossible, but we expect some improvement. A simulation implementing the redistribution with the greedy
algorithm used for \texttt{baseline2018a} would be helpful to quantify
this.
Furthermore, we find a very important result: most rolling cadence strategies are disfavored for our LSNe Ia science case. For
these cadence strategies, the shortened cumulative season lengths
$\bar{t}_\mathrm{eff,cad}$ have an overall more negative impact on
the number of LSNe Ia with delays than the gain from the
increased sampling frequency.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.85\textwidth]{MetricLSST.png}
\caption{Number of LSNe Ia for 10-year survey where time delay has been measured
with accuracy $< 1\%$ and precision $< 5\%$ by using
LSST as discovery machine in combination with follow-up
observations for measuring time delays (black bars) and using only
LSST data (gray bars, see also Figure \ref{fig:LSST only, total
number with good delays}). Follow-up is
every second night in filters \textit{g, r,} and \textit{i}, starting two nights after
third LSST detection (with brightness exceeding 5$\sigma$ depth in
any filter). With follow-up observations, we get a substantial
increase in the number of LSNe Ia systems with well-measured delays. The numbers shown in this figure are a conservative estimate. An optimistic approach is discussed in Appendix \ref{sec:Appendix optimistic estimate of the number of LSNe Ia with well measured delay}, leading to the same overall conclusion about the categories of cadence strategies (magenta, orange, and blue) but providing about 3.5 times more LSNe Ia with well-measured delays.
\label{fig:LSST only and LSST+follow-up, total number with good delays}
\end{figure*}
\subsection{Different follow-up scenarios}
\label{sec:number of LSNe Ia with delays}
In this section, the prospects of increasing the number of LSNe Ia by
assuming different follow-up scenarios are discussed. For this purpose,
we have investigated a sample of 100 mock LSNe Ia (50 mock quads and
50 mock doubles). The result for
the standard follow-up case is shown in the first row of Table \ref{tab:follow-up
alternative scenarios and no microlensing} for the two cadence strategies \texttt{baseline2018a} and \texttt{alt\_sched}. To clarify,
the standard follow-up scenario assumes observations in the three filters
\textit{g, r,} and \textit{i}, going to a depth of $m_{5,\mathrm{g}}=\SI{24.6}{\mag}$,
$m_{5,\mathrm{r}}=\SI{24.2}{\mag}$ and
$m_{5,\mathrm{i}}=\SI{23.7}{\mag}$. Follow-up is assumed every second
night in all three filters, starting two days after the third data point exceeds
the 5$\sigma$ depth in any filter.
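As a concrete illustration of this trigger rule, the scheduling logic can be sketched as follows (a hypothetical Python helper, not the code used in our simulations; the function and parameter names are illustrative):

```python
def follow_up_nights(detection_nights, n_total=20, start_after=2,
                     cadence=2, n_detect=3):
    """Return the nights on which follow-up observations are scheduled.

    detection_nights: sorted nights on which the target exceeded the
        5-sigma depth in any filter.
    Follow-up starts `start_after` nights after the `n_detect`-th
    detection and repeats every `cadence` nights, `n_total` times.
    """
    if len(detection_nights) < n_detect:
        return []  # not enough detections: follow-up is not triggered
    t0 = detection_nights[n_detect - 1] + start_after
    return [t0 + i * cadence for i in range(n_total)]

# detections on nights 3, 5 and 8 -> follow-up on nights 10, 12, 14, ...
print(follow_up_nights([3, 5, 8], n_total=4))  # [10, 12, 14, 16]
```

The alternative scenarios in the table below correspond to changing `n_detect` (row 3) or `cadence` (rows 4 and 5) in this sketch.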
\begin{table}
\centering
\tabcolsep=0.11cm
\begin{tabular}{cc|c|c}
& row& \texttt{baseline2018a} & \texttt{alt\_sched} \\
\hline
LSST + follow-up& 1 & 16.5 $(22.4 \%)$ & 21.0 $(33.9 \%)$ \\
\hline
follow-up in bands $riz$& 2 & 15.0 $(20.4 \%)$ & 20.2 $(32.7 \%)$ \\
follow-up after 2 data points& 3 & 20.0 $(27.2 \%)$ & 23.0 $(37.3 \%)$ \\
daily follow-up& 4 & 19.4 $(26.4 \%)$ & 23.3 $(37.8 \%)$ \\
follow-up every third day& 5 & 13.5 $(18.4 \%)$ & 18.0 $(29.2 \%)$ \\
deeper follow-up (1 mag)& 6 & 28.2 $(38.4 \%)$ & 27.0 $(43.8 \%)$ \\
deeper follow-up (2 mag)& 7 & 37.1 $(50.6 \%)$ & 34.0 $(55.0 \%)$ \\
deeper follow-up (4 mag)& 8 & 39.4 $(53.7 \%)$ & 37.6 $(60.9 \%)$ \\
\hline
no microlensing& 9 & 35.7 $(48.6 \%)$ & 33.3 $(53.9 \%)$\\
no microl., 1 mag deeper& 10 & 48.4 $(65.9 \%)$ & 43.2 $(69.9 \%)$ \\
\end{tabular}
\caption{Summary of different
follow-up strategies and prospects of an
improved analysis technique concerning modeling of
microlensing. For the two strategies \texttt{baseline2018a} and \texttt{alt\_sched},
the number of LSNe Ia with good-quality time-delay measurements over the 10-year survey
is shown for each considered scenario, where 100 mock LSNe Ia have been
investigated. The percentages in the
brackets show how many of the total numbers of LSNe Ia (73.4 for \texttt{baseline2018a} and 61.7 for \texttt{alt\_sched} from Table \ref{tab: total number of LSNe Ia from OM 10}) have well-measured
time delays. The exact definition of ``LSST + follow-up" (row 1) is
described in the text and the scenarios from rows two to eight are
alternative follow-up scenarios detailed in the text. Rows nine and
ten are hypothetical numbers interesting for future improved analysis
techniques of microlensing.}
\label{tab:follow-up alternative scenarios and no microlensing}
\end{table}
An alternative follow-up scenario would be to observe in bands \textit{r, i,}
and \textit{z}. The numbers in the second row are slightly worse than those for
following up in bands \textit{g, r,} and \textit{i}, even though high-redshift SNe are well visible in the \textit{z} band. The reason is that we have assumed a baseline-like 5$\sigma$ depth for the follow-up observations, with $m_{5,\mathrm{z}}=\SI{22.8}{\mag}$, which is $\SI{1.8}{\mag}$ shallower than the 5$\sigma$ depth in the \textit{g} band.
A more aggressive approach is to trigger follow-up after the
second data point exceeds the 5$\sigma$ depth (see row 3). The
improvement of $10$ to $21\%$ might look promising, but many more
false positives would be detected as well, and therefore some observing
time would likely be wasted.
Of further interest is also the cadence of the follow-up
observations. Therefore we consider two additional cases where we
follow up daily (see row 4) and every third day (see row 5), instead of the standard follow-up
of every second day. While going down to observations every three days
decreases the number of LSNe Ia with good delays by about $18 \%$, daily visits improve the numbers by 11 to $18 \%$. Going
from a two-day to a one-day cadence thus increases the follow-up effort
significantly while increasing the number of LSNe Ia only
slightly.
A more promising approach is to keep the follow-up observations every
two days but increase the depth. To go one magnitude deeper (see row 6) than the
average baseline depth, a total
observing time of approximately $\SI{45}{\min}$ per night is needed for a 2 m telescope as in Section \ref{sec:LSST+follow-up}, which is feasible. For \texttt{alt\_sched}, this
leads to an improvement of $29\%$ in comparison to the standard follow-up
scenario and is therefore slightly better than the daily follow-up
case. For \texttt{baseline2018a}, the improvement is $71\%$, making the effort
definitely worth considering (compare the upper two panels in
Figure \ref{fig:distribution,LSST + follow-up, baseline2018a}).
Another possibility is to go two magnitudes deeper, but this requires
observing approximately $\SI{2}{\hour}$ per night to get observations in three
filters. This seems only feasible for a two-meter telescope which can
observe simultaneously in three filters, or for a telescope with a larger
diameter. For \texttt{alt\_sched}, this means an improvement in comparison to
the standard follow-up scenario of $62\%$, and for \texttt{baseline2018a} an
improvement of $125\%$. Going another two magnitudes
deeper does not increase the number of LSNe Ia significantly, and
therefore going beyond two magnitudes deeper is, in our investigation, not worth the effort
(compare rows seven and eight in Table \ref{tab:follow-up alternative
scenarios and no microlensing}).
A limiting factor of our analysis is the microlensing effect, which is
not taken into account in our time-delay measurement with {\tt PyCS}.
Therefore we are not able to accurately measure short time delays (see Figure \ref{fig:distribution,LSST only, altsched} and the upper
two panels of Figure \ref{fig:distribution,LSST + follow-up,
baseline2018a}): we do not model the bias due to microlensing magnification, which is an absolute bias in time, whereas the accuracy is relative to the length of the delay. In rows nine and ten of Table \ref{tab:follow-up
alternative scenarios and no microlensing}, we see that we could
increase the number of LSNe Ia with good delays by $60\%$
to $120\%$ in the best-case scenario, where we imagine a perfect correction for microlensing deviations. This would give us access to short time delays
as visible in the comparison of the upper two panels and the lower two
panels of Figure \ref{fig:distribution,LSST + follow-up,
baseline2018a} and
therefore encourages the use of
color curves instead of light curves to reduce the impact of microlensing on the delay measurement as suggested by
\cite{Goldstein:2017bny} and discussed in Appendix \ref{sec: Case
study}. Also, the approach of using SNe Ia templates to fit the intrinsic light curve shape including effects of microlensing might be
reasonable and produce a higher fraction of good delays. Some of these are currently being explored
(\citeauthor{PierelRodney19} \citeyear{PierelRodney19}; Collett et al., in prep, T.~Collett, priv.~comm.).
\begin{figure}
\centering
\subfigure{\includegraphics[width=0.425\textwidth]{dist_baseline2018a_os_4_microlensing.png}}
\subfigure{\includegraphics[width=0.425\textwidth]{dist_baseline2018a_os_8_microlensing.png}}
\subfigure{\includegraphics[width=0.425\textwidth]{dist_baseline2018a_os_4_macrolensing.png}}
\subfigure{\includegraphics[width=0.425\textwidth]{dist_baseline2018a_os_8_macrolensing.png}}
\caption{Time-delay and
source-redshift distribution for 100 investigated mock LSNe for
\texttt{baseline2018a}, similar to Figure \ref{fig:distribution,LSST only,
altsched}. The upper two panels show the standard follow-up
observation (first panel) and the option going one magnitude deeper
(second panel). The lower two panels show the same follow-up
scenarios hypothetically without microlensing. The distributions
vary slightly because for a quad system just a single time delay is
shown, either the first successfully measured delay or the maximum
of the six possible delays.}
\label{fig:distribution,LSST + follow-up, baseline2018a}
\end{figure}
\section{Discussion and summary}
\label{sec:Summary and Future Prospects}
In this work, we explored different LSST cadence strategies for
measuring time delays in strongly lensed SNe Ia. As illustrated in
Figure \ref{fig:LSST only and LSST+follow-up, total number with good
delays}, we have found that using LSST just as a discovery machine in
combination with high-cadence follow-up observations for the delay measurement is
the best way to increase the number of LSNe Ia with good time delays.
In contrast, using only LSST data is not ideal.
To estimate the resulting $H_0$ constraint from a sample of LSST LSNe
Ia, we assume that each LSNe Ia system with good delays typically yields
an $H_0$ measurement with approximately $5\%$ uncertainty in flat $\Lambda$CDM (including
all sources of uncertainty such as the time-delay uncertainty
investigated in this paper, and lens mass modeling
uncertainties). This is currently achieved with the best lensed
quasar systems of the H0LiCOW sample, and serves as a reference given
that we expect LSNe Ia to yield similar or better constraints than
that of lensed quasars. While focussing only on LSNe Ia with good
delays could potentially introduce selection bias, we suspect such
biases to be small and, if present, could be corrected
\citep[e.g.,][]{Collett:16}. Thus, for a sample of $N$ lenses, the
uncertainty on $H_0$ would scale approximately as $5\%/\sqrt{N}$,
assuming Gaussian uncertainties. With LSST data only,
the number of lensed SNe Ia from our investigation (Figure
\ref{fig:LSST only and LSST+follow-up, total number with good delays})
ranges from approximately $1-8$, depending on the strategy. This
would yield an $H_0$ constraint with about $2-5\%$ uncertainty from the
sample. In the case of LSST with follow-up, the number of lensed SNe
increases substantially, varying from approximately $10-28$,
translating to an $H_0$ constraint with about $1-2\%$ uncertainty.
Therefore, with optimal LSST observing strategy and fast-response
follow-up, we would reach percent-level constraint on $H_0$, which is
a factor of two to five lower in uncertainty compared to the case of
LSST-only scenario.
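The scaling argument above amounts to simple arithmetic; the following Python sketch takes the assumed $5\%$ per-lens uncertainty as given:

```python
import math

def h0_uncertainty(n_lenses, per_lens_percent=5.0):
    """Approximate H0 uncertainty (in percent) for a sample of
    n_lenses systems, assuming independent Gaussian per-lens
    uncertainties of per_lens_percent each."""
    return per_lens_percent / math.sqrt(n_lenses)

# LSST-only: roughly 1-8 systems depending on strategy
print(h0_uncertainty(1))             # 5.0
print(round(h0_uncertainty(8), 2))   # 1.77
# LSST + follow-up: roughly 10-28 systems
print(round(h0_uncertainty(10), 2))  # 1.58
print(round(h0_uncertainty(28), 2))  # 0.94
```

These numbers reproduce the approximate $2-5\%$ (LSST-only) and $1-2\%$ (LSST plus follow-up) ranges quoted above.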
From the investigated cadence strategies for the follow-up scenario,
we have found that observing strategies with an improved sampling by
keeping everything else baseline-like is, in general, the best
observing strategy for our science case. An ideal strategy is
presented in the following key points:
\begin{itemize}
\item Ten seasons with a season length of 170 days or
longer
\item WFD footprint of $\SI{18000}{\square\deg}$ up to $\SI{24700}{\square\deg}$
\item One revisit within a night in a different filter than the first visit
\item Replacement of $2 \times \SI{15}{\mathrm{s}}$ exposure by $1 \times \SI{30}{\mathrm{s}}$
\item Distribution of visits like \texttt{alt\_sched} [\textit{ugrizy} as $\sim(8.2, 11.0, 27.6, 18.1, 25.6, 9.5)$\%].
\end{itemize}
Another very important point is that most of the suggested rolling
cadences are clearly disfavored for our science case
because many LSNe Ia will not even be detected due to the reduced
cumulative season length. The only rolling cadence which performed
well is \texttt{rolling\_mix\_10yrs\_opsim}, but this is most likely because the WFD
stays on in the background and, additionally, revisits are done in
different filters, which can partly compensate for the non-ideal
``rolling'' feature.
We have assumed that follow-up observations start two days after the
third LSST data point exceeds the 5$\sigma$ depth. The follow-up is done
every second night in the three filters \textit{g, r,} and \textit{i} to a depth of
$m_{5,\mathrm{g}}=\SI{24.6}{\mag}$,
$m_{5,\mathrm{r}}=\SI{24.2}{\mag}$ and
$m_{5,\mathrm{i}}=\SI{23.7}{\mag}$, which is feasible with a 2-meter telescope. To improve on this, mainly a
greater depth is of interest. Follow-up observations going one magnitude
deeper than the baseline 5$\sigma$ depth, or even two magnitudes deeper
if feasible, will increase the number of LSNe Ia with good time delays
significantly. Going beyond two magnitudes deeper is not worth the
effort.
We would like to point out that we have only investigated LSNe
Ia. Although a single lensed core-collapse (CC) SN is less valuable
than a LSNe Ia
(given the standardizable light curves of SNe Ia),
the larger sample of lensed CC SNe, primarily type IIn \citep{Goldstein:2018bue,Wojtak:2019hsc}, which will be detected by
LSST, makes them relevant for time-delay cosmography as well. Due to
the different light curve shapes and luminosities, the optimal cadence
strategy for measuring time delays in CC SNe might be different from
the one for LSNe Ia. At least in terms of the total number of lensed CC
SNe, the strategies will be ordered in the same way as in Table
\ref{tab: total number of LSNe Ia from OM 10}, but the numbers will be
a factor of 1.8 higher \citep{Oguri:2010}. In terms of measuring time delays, the improved sampling requested from our investigation of LSNe Ia will also be helpful for the case of CC SNe. To investigate the prospects
of measuring time delays in lensed CC SNe similarly to the case of LSNe
Ia, the specific intensity from a theoretical model is required.
In terms of analyzing the data, it seems promising to find ways to
reduce the impact of microlensing. One possibility would be the use of
color curves instead of light curves. To do this, it might be worth
implementing SNe template fitting instead of splines in {\tt PyCS}. With the
recent discovery of the very first LSNe system and the expected
sample from LSST, our work demonstrates that time-delay cosmography as
envisioned by \cite{Refsdal:1964} has bright prospects in the LSST era.
\begin{acknowledgements}
We thank W.~Hillebrandt, S.~Blondin, and D.~A.~Goldstein for useful discussions,
and the internal LSST DESC reviewers S.~Rodney, A.~Goobar, and T.~E.~Collett for their feedback that improved the presentation of our paper. We also thank the anonymous referee for helpful comments.
SH and SHS thank the Max Planck Society for support through the Max
Planck Research Group for SHS. This project has received funding from
the European Research Council (ERC) under the European Union’s Horizon
2020 research and innovation programme (grant agreement No
771776). This research was supported in part by Perimeter Institute
for Theoretical Physics. Research at Perimeter Institute is supported
by the Government of Canada through the Department of Innovation,
Science and Economic Development and by the Province of Ontario
through the Ministry of Research, Innovation and Science.
UMN has been supported by the Transregional Collaborative Research
Center TRR33 ``The Dark Universe" of the Deutsche
Forschungsgemeinschaft.
VB, JHHC and FC acknowledge support from the Swiss National Science
Foundation and through European Research Council (ERC) under the European
Union's Horizon 2020 research and innovation programme (COSMICLENS:
grant agreement No 787866).
DR acknowledges support from the
National Science Foundation Graduate Research Fellowship under Grant
No.~DGE 1752814.
HA has been supported by the Rutgers Discovery Informatics
Institute Fellowship of Excellence in Computational and Data Science
for academic years 2017-2018, 2018-2019.
MK acknowledges support from
the Klaus Tschira Foundation.
This work was supported in part by World Premier International
Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS
KAKENHI Grant Number JP15H05892 and JP18K03693.
The DESC acknowledges ongoing support from the Institut National de Physique Nucl\'eaire et de Physique des Particules in France; the Science \& Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3--Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S.\ Department of Energy under Contract No.\ DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515.
\end{acknowledgements}
\FloatBarrier
\bibliographystyle{aa}
\section{Introduction}
It is well known that some liquids (for example, water, silica,
silicon, carbon, and phosphorus) show an anomalous behavior \cite{
book,book1,deben2001,netz,denstr,errington2}: their phase diagrams
have regions where the thermal expansion coefficient is negative
(density anomaly), self-diffusivity increases upon compression
(diffusion anomaly), and the structural order of the system
decreases with increasing pressure (structural anomaly)
\cite{deben2001,netz}. A number of studies demonstrate water-like
anomalies in fluids that interact through spherically symmetric
potentials (see, for example, \cite{buld2009,wepre,wepre1} and
references therein). Many of these studies report the appearance
of a diffusion anomaly in different systems. However, the diffusion
coefficient is closely related to the shear viscosity of the liquid;
therefore one can expect that the shear viscosity also
demonstrates some kind of anomalous behavior.
Although many studies of core-softened systems report diffusivity
calculations, there is a lack of studies which
calculate shear viscosity. This can be related to the fact that
viscosity is much harder to compute in simulation, so the usual
intuition is applied: the viscosity can be extracted from the
diffusion coefficient by the Stokes-Einstein (SE) relation
\cite{hansen}. However, it was recently found that the SE relation can
be violated at low temperatures \cite{sev1,sev2}. In this case the
usual intuition can fail to predict the viscosity behavior
correctly. In this respect it is important to monitor both the
diffusion coefficient and the shear viscosity of core-softened liquids
at low temperatures to see their behavior in the regions of
anomalous behavior.
The goal of the present article is to investigate the behavior of
the shear viscosity of core-softened fluids at low temperatures, to
see if their shear viscosity demonstrates anomalous behavior and,
if so, to find the relation between the viscosity anomaly region
and the regions of other anomalies.
\section{Systems and methods}
Two systems are studied in the present work. The first one is a
core-softened system introduced by de Oliveira et al.
\cite{barbpot}. This system is described by the Lennard-Jones
potential with Gaussian well (LJG):
\begin{equation}
U(r)=4\varepsilon
\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]+a\varepsilon
\cdot \exp\left(-\frac{1}{c^2}\left(\frac{r-r_0}{\sigma_0}\right)^2\right),
\end{equation}
with $a=5.0$, $r_0/\sigma=0.7$ and $c=1.0$. The diffusivity of
this system was studied in several papers
\cite{barbpot,indiabarb,werosbreak,wetraj,wetraj1}. Note that the
parameters of the potential are chosen in such a way that the
effect of attraction becomes negligibly small and one can consider
this system as a purely repulsive core-softened one.
The second system studied in this work is Soft Repulsive Shoulder
System (SRSS) introduced in the work \cite{wejcp}. The potential
of this system has the form:
\begin{equation}
U(r)=\varepsilon
\left(\frac{\sigma}{r}\right)^{14}+\frac{1}{2}\varepsilon
\cdot[1-\tanh(k_0\{r-\sigma_1\})],
\end{equation}
where $\sigma$ is the ``hard''-core diameter, $\sigma_1=1.35$ is the
soft-core diameter, and $k_0=10.0$. In Ref. \cite{wepre} it was
shown that this system demonstrates anomalous behavior. Our later
publications gave a detailed study of the diffusion, density and
structural anomalies in this system \cite{wepre1}.
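For reference, both pair potentials can be evaluated directly. The following Python sketch assumes reduced units $\varepsilon=\sigma=1$; since the value of $\sigma_0$ in Eq. (1) is not restated above, it is taken equal to $\sigma$ here as an assumption:

```python
import math

def u_ljg(r, eps=1.0, sigma=1.0, a=5.0, r0=0.7, c=1.0, sigma0=1.0):
    """Lennard-Jones-Gauss potential, Eq. (1)."""
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    gauss = a * eps * math.exp(-((r - r0) / sigma0) ** 2 / c ** 2)
    return lj + gauss

def u_srss(r, eps=1.0, sigma=1.0, sigma1=1.35, k0=10.0):
    """Soft Repulsive Shoulder potential, Eq. (2)."""
    return (eps * (sigma / r) ** 14
            + 0.5 * eps * (1.0 - math.tanh(k0 * (r - sigma1))))
```

Both functions are steeply repulsive at small $r$; the $\tanh$ term in `u_srss` produces the soft repulsive shoulder between $\sigma$ and $\sigma_1$.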
It is well known that there is a close link between the diffusion
coefficient and the shear viscosity of liquids. Viscosity is a
quantity which is usually measured in experiments. However, due to
technical problems, simulation data on shear viscosity are poorly
represented in the literature. One of the goals of
this article is to study the behavior of the viscosity of the two
model liquids described above. Taking into account that the
diffusion coefficient demonstrates anomalous behavior for these
systems, we are interested to see if the viscosity also
demonstrates some anomalies.
In order to study the transport coefficients of the systems we
used the Molecular Dynamics method. In both cases a system of $N=1000$
particles was simulated. The equations of motion were integrated
by the velocity Verlet algorithm. In the case of the LJG system the time step
was set to $dt=0.001$, the equilibration period was $3.5 \cdot
10^6$ steps and the production period $1 \cdot 10^6$ steps. In the
case of SRSS the time step was $dt=0.0005$, the
equilibration period was $3.5 \cdot 10^6$ steps and the production run
was $0.5 \cdot 10^6$ steps. The cut-off radius was set to $3.5$ for
the LJG system and $2.2$ for SRSS. Velocity rescaling was applied
during equilibration; the production corresponded to the $NVE$
ensemble. Shear viscosity is difficult to measure in simulation
because of large fluctuations of the shear stress function. In order
to improve the precision of the data we increased the
equilibration time in the anomalous region up to $7.5 \cdot 10^6$ steps and
the production time up to $1.5 \cdot 10^6$ steps for some simulations.
In order to get good statistics on the transport properties of the
systems, many data points were simulated. In the case of the LJG system
the data points were chosen in the density interval from
$\rho=0.05$ to $\rho=0.3$ with step $\delta \rho =0.01$ along
several isotherms. The following isotherms were considered:
$T=0.15;0.2;0.25;0.3;0.4;0.5;1.0$. In order to see the anomalous
region better, we also simulated the isotherms $T=0.17$ and $0.23$
for the densities from $\rho=0.08$ up to $0.18$ with step $0.01$.
In the case of SRSS we used the densities from $\rho=0.3$ up to
$\rho=0.8$ with step $\delta \rho =0.05$ and the temperatures
$T=0.2;0.25;0.3;0.35;0.4;0.5;0.7$ and $1.0$.
The diffusion coefficients were computed via the Einstein relation and
the shear viscosity by integration of the shear stress autocorrelation
function (Green-Kubo formula).
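These two standard estimators (the Einstein relation for $D$ and the Green-Kubo integral of the shear-stress autocorrelation function for $\eta$) can be sketched in Python with NumPy as follows; this is a schematic illustration on synthetic input, not our production code:

```python
import numpy as np

def diffusion_einstein(msd, t):
    """Einstein relation in 3D: MSD(t) ~ 6 D t at long times,
    so D is one sixth of the slope of MSD versus t."""
    slope = np.polyfit(t, msd, 1)[0]
    return slope / 6.0

def viscosity_green_kubo(p_xy, dt, volume, temperature, kB=1.0):
    """Green-Kubo estimate: eta = V/(kB T) * time integral of the
    shear-stress autocorrelation function <P_xy(0) P_xy(t)>."""
    n = len(p_xy)
    # autocorrelation, normalized by the number of samples at each lag
    acf = np.correlate(p_xy, p_xy, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # trapezoidal integration of the ACF over time
    integral = np.sum((acf[:-1] + acf[1:]) / 2.0) * dt
    return volume / (kB * temperature) * integral
```

In practice the slow decay and large fluctuations of the stress ACF mentioned above dominate the error of the Green-Kubo estimate, which is why the production runs were extended in the anomalous region.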
\section{Results and discussion}
\subsection{Lennard-Jones - Gauss system}
As mentioned in the introduction, a viscosity anomaly in the
LJG system was already reported in Ref. \cite{egorov}. Here we
make a more detailed simulation study of this anomaly. Our goal is
to see the location of the anomaly in the $\rho-T$ plane and its
relation with other anomalies, such as the diffusion anomaly, density
anomaly and structural anomaly.
Fig.~\ref{fig:fig1} shows the viscosity along several isotherms
for the LJG system. One can see that the anomaly is very pronounced
for the temperature $T=0.15$, but it rapidly disappears with
increasing temperature. At $T=0.3$ the anomaly is already of the
order of the numerical accuracy, and we estimate this temperature as
the temperature where the viscosity anomaly disappears.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-barb.eps}
\caption{\label{fig:fig1} (Color online). Viscosity anomaly in the LJG
system.}
\end{figure}
The locations of the diffusion, density and structural anomalies of the LJG
system in the $\rho-T$ plane have already been reported in the literature
\cite{indiabarb}. Fig.~\ref{fig:fig2} shows the regions of these
anomalies and the viscosity anomaly. Interestingly, as it was
proposed in Ref. \cite{debenedetti}, the anomalous regions are
enveloped in each other. However, the viscosity anomaly violates
this rule: it has a partial overlap with the density anomaly, but neither
of them lies inside the other. It was also shown in the
literature that from thermodynamic arguments it follows that the
density anomaly region is always inside the structural anomaly
one, while the diffusion anomaly can have any location with
respect to the other anomalies \cite{denstr,denstr1}. The
viscosity anomaly is another example of an anomaly of dynamic
rather than thermodynamic properties. Therefore, one can expect
that the viscosity anomaly can also have any possible location
with respect to the density and structure anomalies.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{anom-barb.eps}
\caption{\label{fig:fig2} (Color online). Location of the anomalous
regions in the $\rho-T$ plane for the LJG system.}
\end{figure}
In our previous work we showed that the anomalies can be visible
along some paths in thermodynamic space while along others they
can be invisible \cite{werosbreak,wetraj,wetraj1}. For example,
the diffusion anomaly is seen along isotherms but not along isochors. An
important consequence of this difference is that the Rosenfeld excess
entropy scaling for the diffusion coefficient \cite{ros,ros1} is
fulfilled along isochors but breaks down along isotherms
\cite{wepre1}. This makes it important to see the viscosity behavior
along different trajectories.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-isochors-barb.eps}
\caption{\label{fig:fig3} (Color online). Shear viscosity of the LJG
system along several isochors.}
\end{figure}
Fig.~\ref{fig:fig3} shows the shear viscosity of the LJG system along
several isochors. One can see that the viscosity demonstrates a
minimum. A viscosity minimum along isobars was observed
experimentally for water \cite{visc-water}. The authors called
this minimum a ``viscosity anomaly''. However, in our previous work
we showed that a viscosity minimum along isochors appears naturally
because of the interplay of potential-potential and
kinetic-kinetic correlations even in simple liquids \cite{wesoft}.
The same results can be obtained for isobars (not shown in Ref.
\cite{wesoft}).
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-Ros-isochor-barb.eps}
\caption{\label{fig:fig4} (Color online). Rosenfeld relation for the
shear viscosity of the LJG system along isochors.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-Ros-T025.eps}
\includegraphics[width=8cm, height=8cm]{visc-Ros-T1.eps}
\caption{\label{fig:fig5} (Color online). Rosenfeld relation for the
shear viscosity of the LJG system along isotherms at low and high
temperature.}
\end{figure}
Fig.~\ref{fig:fig4} shows the Rosenfeld relation for the shear
viscosity of the LJG system along isochors. One can see that the
linear relation between $\ln(\frac{\eta \rho ^{-2/3}}{T^{1/2}})$ and the
excess entropy $S_{ex}$,
which is predicted by the Rosenfeld relation, holds true except in the
low-$S_{ex}$ region. However, if we consider the Rosenfeld
relation along isotherms, we see that it breaks down at low
temperature (Fig.~\ref{fig:fig5} (a)). Here we observe a
self-crossing loop like the one observed for diffusion in our previous
works \cite{wepre1,werosbreak,wetraj,wetraj1}. Fig.~\ref{fig:fig5}
(b) shows the Rosenfeld scaling for high temperature ($T=1.0$).
Points correspond to the data from simulations while the straight
line is the best fit line. One can see that the simulation points
demonstrate some kind of oscillations around the best fit line. We
relate these oscillations to numerical inaccuracies in the
viscosity computations, and we believe that the Rosenfeld scaling of
viscosity does work for high enough temperatures.
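The linearity check behind these plots can be sketched as follows (Python with NumPy; a schematic helper, not the fitting code used for the figures):

```python
import numpy as np

def rosenfeld_fit(eta, rho, T, s_ex):
    """Fit ln(eta * rho^(-2/3) / T^(1/2)) = a + b * S_ex.
    Returns (a, b, max_residual); a small maximum residual means the
    Rosenfeld excess-entropy scaling holds for these state points."""
    s_ex = np.asarray(s_ex, dtype=float)
    y = np.log(eta * rho ** (-2.0 / 3.0) / np.sqrt(T))
    b, a = np.polyfit(s_ex, y, 1)
    return a, b, np.max(np.abs(y - (a + b * s_ex)))
```

Along isochors the residuals stay small, whereas along low-temperature isotherms the anomalous excess entropy makes $y(S_{ex})$ multivalued and the linear fit fails, which is the breakdown discussed above.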
One can conclude that, as in the case of the diffusion coefficient, the
Rosenfeld relation for shear viscosity in systems with
thermodynamic anomalies holds true along isochors but breaks down
along low-temperature isotherms. This confirms the idea of
different behavior of a system along different trajectories in
$\rho - T - P$ space, which is discussed in detail in Refs.
\cite{wetraj,wetraj1}.
\subsection{Soft Repulsive Shoulder system}
The second system considered in this work is the Soft Repulsive
Shoulder System (Eq. (2)). The phase diagram and anomalous behavior of
this system were studied in detail in our previous works
\cite{wejcp,wepre,wepre1}. However, the shear viscosity of this
system is measured here for the first time.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-Fomin1.eps}
\includegraphics[width=8cm, height=8cm]{visc-Fomin2.eps}
\caption{\label{fig:fig6} (Color online). Viscosity of the SRSS system
along a set of isotherms at (a) low and (b) intermediate and high
temperatures.}
\end{figure}
Fig.~\ref{fig:fig6} (a) and (b) show the shear viscosity of the SRSS
system along several isotherms. One can see that at the lowest
temperature $T=0.15$ a tiny loop develops at the densities $\rho
=0.40 - 0.45$. However, the size of this loop is inside the error
bar, so we cannot consider it as a real anomaly. We believe that
the anomaly appears at lower temperatures, though there the shear stress
autocorrelation function decays very slowly and viscosity
calculations become very difficult.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{visc-Fomin-isochors.eps}
\caption{\label{fig:fig7} (Color online). Viscosity of the SRSS system
along a set of isochors.}
\end{figure}
Fig.~\ref{fig:fig7} shows the shear viscosity plotted along a set
of isochors. One can see that for all presented densities the
viscosity curves monotonically decrease with increasing
temperature. Based on the arguments of our previous work
\cite{wesoft}, we expect that the shear viscosity passes a minimum at
higher temperatures.
\begin{figure}
\includegraphics[width=8cm, height=8cm]{Ros-F-isotherms.eps}
\includegraphics[width=8cm, height=8cm]{Ros-F-isochors.eps}
\caption{\label{fig:fig8} (Color online). Rosenfeld scaling plots
for the SRSS system along (a) isotherms and (b) isochors.}
\end{figure}
Fig.~\ref{fig:fig8} (a) and (b) show the Rosenfeld scaling plots
for SRSS along isotherms and isochors. One can see that the
scaling relation breaks down in the case of isotherms. The reason for
this breakdown is that, while the viscosity is a monotonic function of
density, the excess entropy demonstrates anomalous behavior
\cite{wetraj,wetraj1}. As a result, the viscosity as a function of
$S_{ex}$ demonstrates nonlinear behavior.
At the same time the Rosenfeld relation holds true along isochors
(Fig.~\ref{fig:fig8} (b)). All isochors can be divided into
low-density ($\rho \leq 0.45$) and high-density ($\rho > 0.45$) groups. The
curves belonging to the same group have similar slopes, while the
slopes of the curves from different groups are essentially
different. The reason for this change is that at low densities the
system can be essentially approximated by a system with
effective diameter $\sigma$, while at high densities with diameter
$d$. This change of particle size also alters the kinetic and
thermodynamic properties of the system, which we observe as the
slope change in Fig.~\ref{fig:fig8} (b).
\subsection{Stokes-Einstein Relation}
In our previous works \cite{wetraj,wetraj1} we showed that the
discrepancy between the diffusion and structural anomaly regions
leads to the Rosenfeld relation breakdown along isotherms. As
shown in the previous section, the regions of the diffusion and
viscosity anomalies are also different. In this respect it becomes
important to see if the Stokes-Einstein relation still holds true in
the anomalous regions.
The Stokes-Einstein relation can be written in the following
form:
\begin{equation}
c_{SE}=\frac{k_BT}{\pi D \eta d} \approx const,
\end{equation}
where $d$ is the characteristic particle size. The coefficient $c_{SE}$
should be approximately constant and belong to the interval $2\leq
c_{SE} \leq 3$. The limiting values $c_{SE}=3$ and $c_{SE}=2$
correspond to the stick and slip boundary conditions, respectively.
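Evaluating Eq. (3) is straightforward; the following Python sketch (reduced units, $k_B=1$) makes the two limiting values explicit:

```python
import math

def c_se(D, eta, T, d, kB=1.0):
    """Stokes-Einstein coefficient of Eq. (3):
    c_SE = kB*T / (pi * D * eta * d)."""
    return kB * T / (math.pi * D * eta * d)

eta, T, d = 2.0, 0.5, 1.0
D_stick = T / (3.0 * math.pi * eta * d)  # stick boundary condition
D_slip = T / (2.0 * math.pi * eta * d)   # slip boundary condition
print(round(c_se(D_stick, eta, T, d), 6))  # 3.0
print(round(c_se(D_slip, eta, T, d), 6))   # 2.0
```

Deviations of $c_{SE}$ from the interval $[2,3]$, as found below, signal a breakdown of the Stokes-Einstein relation for the chosen particle diameter $d$.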
\begin{figure}
\includegraphics[width=8cm, height=8cm]{SE-barboska.eps}
\includegraphics[width=8cm, height=8cm]{SE-Fomin.eps}
\caption{\label{fig:fig9} (Color online). $c_{SE}$ for (a) the LJG
system and (b) SRSS along several isochors.}
\end{figure}
Fig.~\ref{fig:fig9} (a) and (b) show the Stokes-Einstein
coefficient for the LJG and SRSS systems along isochors. One can see
that in both cases $c_{SE}$ is not constant. At the same time the
numerical values of $c_{SE}$ in both cases can be far from the
interval $[2,3]$. In our previous work we studied the Stokes-Einstein
relation for the Soft Spheres system \cite{wesoft-old}. It was found
that the Stokes-Einstein relation can be fulfilled there if one takes
the effective particle diameter as the Barker perturbation theory one,
$d\sim T^{-1/n}$, where $n$ is the softness coefficient. However,
in the case of core-softened systems the situation is more
complicated. These systems have two characteristic length scales and
the applicability of perturbation theory to such systems is
questionable.
From Figs.~\ref{fig:fig9} (a) and (b) one can see that the
qualitative behavior of $c_{SE}$ is determined by the behavior of the
viscosity. For example, the maximum of $c_{SE}$ appears at the
temperature of the viscosity minimum. At the same time the viscosity
of SRSS is monotonic, and we observe that $c_{SE}$ is also
monotonic in this case. However, more detailed
investigations are needed before any conclusive statements on the
Stokes-Einstein relation for core-softened systems can be made.
\section{Conclusions}
It is well known from the literature that many core-softened
liquids demonstrate some kind of anomaly. One of the typical
anomalies in core-softened systems is the diffusion anomaly. It is
also widely believed that diffusion is strongly connected to the shear
viscosity by the Stokes-Einstein relation. In this respect it is
interesting to see whether the same systems demonstrate a viscosity
anomaly as well. In the present work we investigate this question.
We find that the viscosity anomaly does exist; however, the region
of $(\rho - T)$ parameters where the viscosity demonstrates anomalous
behavior is different from the diffusion anomaly region. We place
the regions of the different anomalies in the same plot to see the
relations between them. Finally, we check the Stokes-Einstein
relation for the liquids under investigation.
\bigskip
We thank S. M. Stishov, E. E. Tareyeva and V.V. Brazhkin for
stimulating discussions. Y.F. thanks the Joint Supercomputing
Center of the Russian Academy of Sciences for computational power
and the Russian Scientific Center Kurchatov Institute for
computational facilities. The work was supported in part by the
Russian Foundation for Basic Research (Grants No 13-02-00913, No
11-02-00341-a and No 13-02-00579) the Ministry of Education and
Science of Russian Federation (projects 8370, 8512, Scientific
School No 5365.2012.2 and Young Candidates Grant No 2099.2013.2).
\section{Introduction}
The two dimensional Toda lattice is one of the soliton equations which has
become more and more important as a key object in theoretical physics. It was
first formulated by Hirota in 1981 \cite{Hirota} as a discrete version of the
two dimensional continuous time Toda lattice and shown by Miwa \cite{Miwa} to
be equivalent to the KP hierarchy. When quasi-periodic solutions are
substituted, it is nothing but the identity known as Fay's trisecant formula,
which characterizes algebraic curves.
This equation has become known in other fields of physics in the last ten
years. It was shown to be satisfied by the string amplitudes in particle
physics \cite{S}. More recently there appeared papers demonstrating unexpected
connections of this equation with other topics in physics. The transfer matrix
of the solvable lattice model with $A_l$ symmetry, for example, was shown to
satisfy this equation \cite{KLWZ}\cite{Kuniba}. This equation has also been
proven to unify the discrete Painlev\'e equations \cite{Ramani}. The connection
of solvable cellular automata to this equation offers another example
\cite{TTMS}.
Completely integrable nonlinear systems must play fundamental roles in various
phenomena in physics. It is remarkable that many integrable systems in
different fields are unified into a single equation. We are interested in
clarifying the ultimate notion of integrability of such systems. Investigation
of the systems themselves, however, will not reveal all of their features. The
real meaning of integrability will be clarified only in comparison with
nonintegrable systems.
An arbitrary deformation of the two dimensional Toda lattice will destroy
integrability and create chaos. Since the system contains an infinite number of
degrees of freedom, it is extremely difficult to study analytically the
behaviour of the transition between nonintegrable and integrable phases. It
should be recalled that very little is known about the analytical properties of
nonintegrable systems. The main part of the studies of complex dynamical
systems has been limited to simple systems with one degree of freedom.
Very recently we pointed out \cite{SS toda30} that a set of lattice points
which form a parallelogram in the two dimensional lattice space constitutes a
piece of the Toda lattice. We call it a Toda molecule \cite{Toda molecule},
since it is essentially what is intended to be called by this name, though used
in a somewhat different context in the literature. The remarkable fact is that
such small pieces can be separated from the other parts without losing any
properties of the original Toda lattice.
The purpose of this paper is to study in detail the analytical properties of
the smallest piece among the Toda molecules. The smallest Toda molecule is the
smallest parallelogram of four lattice points. We will call it a Toda atom for
convenience. Since every Toda molecule preserves the properties possessed by
the Toda lattice, we can study the analytical properties of the system from the
knowledge of a Toda atom. In the first part of this paper we show that the time
evolution of a Toda atom is described by an iterative M\"obius map. The form
invariance of this map certifies the integrability of this system. In the
second part of this paper we consider a deformation of this piece. Under a
generic deformation chaos will be generated through the time evolution. We are
especially concerned with the analytical properties of the Julia set as the
system approaches the integrable map. We will show how the Julia set converges
to the points on the orbit of the M\"obius map as a parameter, which
interpolates between the integrable and nonintegrable maps, approaches the
integrable value.
\section{Pieces of Toda Lattice}
In this section we would like to show that the two dimensional Toda lattice
can be cut into small pieces without losing any properties possessed by the
original system. To begin with, let us write down the equation which was
derived by Hirota as a discrete version of the two dimensional continuous time
Toda lattice \cite{Hirota}:
\be:
&&\alpha\ g_n(l+1,m)g_n(l,m+1)\ +\ \beta\ g_n(l,m)g_n(l+1,m+1)\nonumber\\
&&\qquad\qquad - (\alpha+\beta)\ g_{n+1}(l+1,m)g_{n-1}(l,m+1)=0,\quad \alpha, \beta \in
{\mbox{\boldmath$C$}},\quad l,m,n\in \mbox{\boldmath$Z$}.
\lb{HBDE}
\ee:
We called this equation the Hirota bilinear difference equation\footnote{This
equation is also called the Hirota-Miwa equation in recent literature.} and
abbreviated it as HBDE. This is a nonlinear system defined on the three
dimensional lattice space. Our key observation is the following. For a fixed
point of the lattice $(l,m,n)=(\bar l,\bar m,\bar n)$, we denote by
\mbox{\boldmath$A$} the set of points $(\bar l,\bar m,\bar n), (\bar l+1,\bar
m,\bar n), (\bar l,\bar m+1,\bar n),(\bar l+1,\bar m+1,\bar n) , (\bar l+1,\bar
m,\bar n+1), (\bar l,\bar m+1,\bar n-1)$ . Then if $g_n(l,m)$ is a solution of
$(\rf{HBDE})$,
\b:
f(l,m,n)=\cases{g_n(l,m),\qquad (l,m,n)\in {\mbox{\boldmath$A$}},\cr 0,\qquad\qquad\quad
{\rm otherwise},\cr}
\lb{Toda atom}
\e:
is also a solution of $(\rf{HBDE})$. This is the smallest piece of the Toda
lattice.
The proof is simple. Because \mbox{\boldmath$A$} is surrounded by zeros, every
equation on the other pieces is automatically satisfied. The result can be
easily generalized to a larger parallelogram prism when it is surrounded by
zeros. We call it a Toda molecule according to ref.\cite{Toda molecule}. Then
it is natural to call $(\rf{Toda atom})$ a Toda atom. If there are many Toda
molecules in the three dimensional lattice space, separated by zeros from each
other, the configuration is again a solution of $(\rf{HBDE})$. A slice
perpendicular to the $l$ axis of such an example is presented in Fig. 1.
\begin{center}\begin{minipage}{7cm}\unitlength 1mm\begin{picture}(70,75)
\put(0,20){\line(1,0){60}}\put(62,18){\makebox(3,3)[l]{$n$}}
\put(30,5){\line(0,1){55}}\put(28,62){\makebox(3,3)[l]{$m$}}
\put(25,70){\makebox(10,3)[c]{Fig. 1}}
\thicklines
\put(5,55){\circle{1}}\put(10,55){\circle{1}}\put(15,55){\circle{1}}
\put(20,55){\circle{1}}\put(25,55){\circle{1}}
\put(30,55){\circle{1}}\put(35,55){\circle{1}}\put(40,55){\circle{1}}
\put(45,55){\circle{1}}\put(50,55){\circle{1}}\put(55,55){\circle{1}}
\put(5,50){\circle*{1}}\put(10,50){\circle*{1}}\put(15,50){\circle*{1}}
\put(20,50){\circle*{1}}\put(25,50){\circle*{1}}\put(30,50){\circle*{1}}
\put(35,50){\circle*{1}}\put(40,50){\circle{1}}\put(45,50){\circle*{1}}
\put(50,50){\circle*{1}}\put(55,50){\circle*{1}}
\put(5,50){\line(1,0){30}}\put(45,50){\line(1,0){10}}
\put(5,50){\line(1,-1){5}}\put(10,50){\line(1,-1){5}}
\put(15,50){\line(1,-1){5}}\put(20,50){\line(1,-1){5}}
\put(25,50){\line(1,-1){5}}\put(30,50){\line(1,-1){5}}
\put(35,50){\line(1,-1){5}}\put(45,50){\line(1,-1){5}}
\put(50,50){\line(1,-1){5}}\put(55,50){\line(1,-1){4}}
\put(5,45){\circle*{1}}\put(10,45){\circle*{1}}\put(15,45){\circle*{1}}
\put(20,45){\circle*{1}}\put(25,45){\circle*{1}}\put(30,45){\circle*{1}}
\put(35,45){\circle*{1}}\put(40,45){\circle*{1}}\put(45,45){\circle{1}}
\put(50,45){\circle*{1}}\put(55,45){\circle*{1}}
\put(5,45){\line(1,0){35}}\put(50,45){\line(1,0){7}}
\put(5,40){\circle{1}}\put(10,40){\circle{1}}\put(15,40){\circle{1}}
\put(20,40){\circle{1}}\put(25,40){\circle{1}}
\put(30,40){\circle{1}}\put(35,40){\circle{1}}\put(40,40){\circle{1}}
\put(45,40){\circle{1}}\put(50,40){\circle{1}}\put(55,40){\circle{1}}
\put(5,35){\circle{1}}\put(10,35){\circle{1}}\put(15,35){\circle{1}}
\put(20,35){\circle*{1}}\put(25,35){\circle*{1}}\put(30,35){\circle{1}}
\put(35,35){\circle*{1}}\put(40,35){\circle*{1}}\put(45,35){\circle*{1}}
\put(50,35){\circle{1}}\put(55,35){\circle{1}}
\put(20,35){\line(1,0){5}}\put(35,35){\line(1,0){10}}
\put(20,35){\line(1,-1){28}}\put(25,35){\line(1,-1){28}}
\put(35,35){\line(1,-1){10}}\put(40,35){\line(1,-1){10}}
\put(45,35){\line(1,-1){10}}
\put(5,30){\circle*{1}}\put(10,30){\circle*{1}}\put(15,30){\circle*{1}}
\put(20,30){\circle{1}}\put(25,30){\circle*{1}}
\put(30,30){\circle*{1}}\put(35,30){\circle{1}}\put(40,30){\circle*{1}}
\put(45,30){\circle*{1}}\put(50,30){\circle*{1}}\put(55,30){\circle{1}}
\put(5,30){\line(1,0){10}}\put(25,30){\line(1,0){5}}\put(40,30){\line(1,0){10}}
\put(5,30){\line(1,-1){5}}\put(10,30){\line(1,-1){5}}
\put(15,30){\line(1,-1){5}}%
\put(5,25){\circle{1}}\put(10,25){\circle*{1}}\put(15,25){\circle*{1}}
\put(20,25){\circle*{1}}\put(25,25){\circle{1}}\put(30,25){\circle*{1}}
\put(35,25){\circle*{1}}\put(40,25){\circle{1}}\put(45,25){\circle*{1}}
\put(50,25){\circle*{1}}\put(55,25){\circle*{1}}
\put(10,25){\line(1,0){10}}\put(30,25){\line(1,0){5}}
\put(45,25){\line(1,0){10}}
\put(5,20){\circle{1}}\put(10,20){\circle{1}}\put(15,20){\circle{1}}
\put(20,20){\circle{1}}\put(25,20){\circle{1}}
\put(30,20){\circle{1}}\put(35,20){\circle*{1}}\put(40,20){\circle*{1}}
\put(45,20){\circle{1}}\put(50,20){\circle{1}}\put(55,20){\circle{1}}
\put(35,20){\line(1,0){5}}
\put(5,15){\circle{1}}\put(10,15){\circle{1}}\put(15,15){\circle*{1}}
\put(20,15){\circle*{1}}\put(25,15){\circle{1}}
\put(30,15){\circle{1}}\put(35,15){\circle{1}}\put(40,15){\circle*{1}}
\put(45,15){\circle*{1}}\put(50,15){\circle{1}}\put(55,15){\circle{1}}
\put(15,15){\line(1,0){5}}\put(40,15){\line(1,0){5}}\put(15,15){\line(1,-1){5}}
\put(20,15){\line(1,-1){5}}
\put(5,10){\circle{1}}\put(10,10){\circle{1}}\put(15,10){\circle{1}}
\put(20,10){\circle*{1}}\put(25,10){\circle*{1}}
\put(30,10){\circle{1}}\put(35,10){\circle{1}}\put(40,10){\circle{1}}
\put(45,10){\circle*{1}}\put(50,10){\circle*{1}}\put(55,10){\circle{1}}
\put(20,10){\line(1,0){5}}\put(45,10){\line(1,0){5}}
\end{picture}\end{minipage}\end{center}
For an illustration let us consider the one-soliton state localized on the
smallest parallelogram specified by $(m,n)=(0,0), (0,1), (1,-1), (1,0)$ on the
$(m,n)$ lattice plane, but allowed to range over all integers along $l$. Now we
recall that in the usual lattice space the one-soliton solution is given
by \cite{Miwa}\cite{SS3}
\b:
f^{1sol}(l,m,n)=\prod_j(1-az_j)^{-k_j}+\prod_j(1-bz_j)^{-k_j}.
\lb{Miwa1soliton}
\e:
Here $a,b$ are arbitrary constants and $\{z_j\}$ are parameters which determine
the velocity of the soliton. $\{k_j\}$ are variables taking integer values. We
can choose any three among $\{k_j\}$ and relate them to our variables
$(l,m,n)$. Let $k_1, k_2, k_3$ be such a set and relate them according to
\b:
k_1=m+n-{1\over 2},\quad k_2=-m-{1\over 2},\quad k_3=l-n-{1\over 2}.
\e:
Writing $(\rf{Miwa1soliton})$ explicitly we find
\be:
f^{1sol}(l,0,0)&=&A(1-az_3)^{-l}+B(1-bz_3)^{-l}\nonumber\\
f^{1sol}(l,1,0)&=&A{1-az_2\over 1-az_1}(1-az_3)^{-l}+B{1-bz_2\over
1-bz_1}(1-bz_3)^{-l}\nonumber\\
f^{1sol}(l,0,1)&=&A{1-az_3\over 1-az_1}(1-az_3)^{-l}+B{1-bz_3\over
1-bz_1}(1-bz_3)^{-l}\nonumber\\
f^{1sol}(l,1,-1)&=&A{1-az_2\over 1-az_3}(1-az_3)^{-l}+B{1-bz_2\over
1-bz_3}(1-bz_3)^{-l}
\lb{1soliton}
\ee:
where
$$
A:=\sqrt{(1-az_1)(1-az_2)(1-az_3)},\quad B:=\sqrt{(1-bz_1)(1-bz_2)(1-bz_3)}.
$$
We see from $(\rf{1soliton})$ that all points belonging to the same piece
behave similarly. The parameters are related to $\alpha,\ \beta$ of
$(\rf{HBDE})$ by
$$
\alpha=z_1(z_2-z_3),\quad \beta=z_2(z_3-z_1),
$$
for $(\rf{1soliton})$ to satisfy HBDE. If we define the amplitude
$\varphi_{mn}(l)$ by
\b:
\varphi_{mn}(l):={f(l+1,m,n)f(l-1,m,n)\over f^2(l,m,n)}\ -\ 1
\lb{amplitude}
\e:
it behaves as
\b:
\varphi_{00}^{1sol}(l)={\sinh^2p\over \cosh^2 (pl+\chi)},\quad p:={1\over
2}\ln{1-az_3\over 1-bz_3},\quad \chi:={1\over 2}\ln{B\over A}.
\lb{soliton peak}
\e:
This represents a localized peak along the $l$ axis. The other amplitudes
$\varphi_{mn}(l)$ behave almost the same, differing only in the value of the
phase $\chi$.
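The profile $(\rf{soliton peak})$ can be verified numerically. In the sketch below the soliton parameters $a$, $b$, $z_j$ are arbitrary illustrative choices; the amplitude computed from the $\tau$-function agrees with the $\mathrm{sech}^2$ form at every site:

```python
import math

# Hypothetical soliton parameters, for illustration only.
a, b = 0.4, 1.2
z1, z2, z3 = 0.2, 0.3, 0.5

A = math.sqrt((1 - a*z1) * (1 - a*z2) * (1 - a*z3))
B = math.sqrt((1 - b*z1) * (1 - b*z2) * (1 - b*z3))

def f(l):
    """One-soliton tau function f(l, 0, 0) = A(1-a z3)^{-l} + B(1-b z3)^{-l}."""
    return A * (1 - a*z3)**(-l) + B * (1 - b*z3)**(-l)

def amplitude(l):
    """phi_00(l) = f(l+1) f(l-1) / f(l)^2 - 1."""
    return f(l + 1) * f(l - 1) / f(l)**2 - 1.0

p = 0.5 * math.log((1 - a*z3) / (1 - b*z3))
chi = 0.5 * math.log(B / A)

# The amplitude matches sinh^2(p) / cosh^2(p*l + chi) at every site.
for l in range(-5, 6):
    peak = math.sinh(p)**2 / math.cosh(p*l + chi)**2
    assert abs(amplitude(l) - peak) < 1e-12
print("soliton profile verified")
```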
If we consider the evolution of the system in the variable $l$, a Toda atom is
composed of four lattice points. Since there is only one equation of motion
$(\rf{HBDE})$, they are not independent variables. Three of them can be chosen
as we like, leaving one to be determined by the equation. Let $z_l$ be
$f(l,0,0)$. The other three can be either dependent on or independent of $z_l$.
If they are independent of $z_l$, the equation of motion is linear in $z_l$. On
the other hand, if they do depend on $z_l$, they can be at most linear in
$z_l$ for the equation to remain of Hirota bilinear form. Namely we can write
\b:
f(l,m,n)=A_{m,n}z_l+B_{m,n},\qquad (m,n)=(0,0),(1,0),(0,1),(1,-1),
\lb{linear transf}
\e:
with $A_{0,0}=1,\ B_{0,0}=0$. Upon substituting these into HBDE, it is
easy to see that we obtain an equation of the form
\b:
z_{l+1}={Az_l+B\over Cz_l+D}.
\lb{general Moebius}
\e:
where
$$
A=-\beta B_{1,0}+(\alpha+\beta)B_{0,1}A_{1,-1},\quad
B=(\alpha+\beta)B_{0,1}B_{1,-1},
$$
$$
C=(\alpha+\beta)(A_{1,0}-A_{0,1}A_{1,-1}),\quad D=\alpha
B_{1,0}-(\alpha+\beta)A_{0,1}B_{1,-1}.
$$
$(\rf{general Moebius})$ is a M\"obius map. Therefore the map is integrable.
If we remember that HBDE is invariant under the transformation of
$f(l,m,n)\rightarrow e^{al+bm+cn}f(l,m,n)$, the one soliton solution
$(\rf{1soliton})$ offers an example of $(\rf{linear transf})$.
The solution of $(\rf{general Moebius})$ can be obtained as follows. A M\"obius
map generically has two fixed points. By an appropriate transformation
$z_l\rightarrow \phi\circ z_l\circ \phi^{-1}$, one of the fixed points can be
transformed into 0. After the transformation the map takes the form
\b:
z_{l+1}=\mu{z_l\over 1+\nu z_l}.
\lb{ILM}
\e:
$(\rf{ILM})$ is easily solved for an arbitrary initial value $z_0$ to get
\b:
z_l={\mu^l z_0\over 1+\nu{1-\mu^l\over 1-\mu}z_0}.
\lb{Moebius}
\e:
Applying to this the inverse transformation $z_l\rightarrow \phi^{-1}\circ
z_l\circ \phi$, the general solution of $(\rf{general Moebius})$ is obtained.
We call the map $(\rf{ILM})$ the integrable logistic map (ILM). The meaning of
this name will become clear later.
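As a quick consistency check (with arbitrarily chosen illustrative parameters), the closed form $(\rf{Moebius})$ can be compared against direct iteration of $(\rf{ILM})$:

```python
mu, nu = 1.5, 0.3   # hypothetical parameters; any mu != 1 works away from poles
z0 = 0.7

def ilm_step(z):
    """One step of the integrable logistic map z -> mu*z / (1 + nu*z)."""
    return mu * z / (1.0 + nu * z)

def ilm_closed_form(l):
    """General solution z_l = mu^l z0 / (1 + nu*(1 - mu^l)/(1 - mu) * z0)."""
    return mu**l * z0 / (1.0 + nu * (1.0 - mu**l) / (1.0 - mu) * z0)

z = z0
for l in range(1, 20):
    z = ilm_step(z)
    assert abs(z - ilm_closed_form(l)) < 1e-9
print("closed-form solution verified")
```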
We notice that $(\rf{ILM})$ corresponds to the case in which one of the
lattice points is fixed to a constant and the other three behave in the same
way:
\be:
f^{ILM}(l,0,0)&=&f^{ILM}(l,0,1)=f^{ILM}(l,1,-1)=:z_l,\nonumber\\
f^{ILM}(l,1,0)&=&{\mu-1\over\nu},\quad \mu=-{\beta\over\alpha}.
\lb{simple case}
\ee:
How does the amplitude look in this case? To see this we substitute
$(\rf{Moebius})$ into $(\rf{amplitude})$ and get
\b:
\varphi^{ILM}={\sinh^2p\over\cosh^2(pl+\chi)-\cosh^2 p},\quad p:={1\over 2}\ln\mu
,\quad \chi:={1\over 2}\ln{\mu z_0\over 1-\mu+\nu z_0}.
\e:
The similarity of this result to the one-soliton solution $(\rf{soliton peak})$
should be apparent.
We may further simplify the equation by
\b:
f^{lin}(l,0,0)=:z_l,\quad f^{lin}(l,1,-1)=f^{lin}(l,1,0)=1-{1\over\mu},\quad
f^{lin}(l,0,1)=c (c: {\rm const}).
\e:
The map then becomes linear,
\b:
z_{l+1}=\mu z_l +(1-\mu)c
\e:
and yields the solution
\b:
z_l=\mu^l (z_0-c)+c.
\e:
The corresponding amplitude is
\b:
\varphi^{lin}(l)={\sinh^2p\over\cosh^2(pl+\chi)},\quad p={1\over 2}\ln\mu,\quad
\chi={1\over 2}\ln{z_0-c\over c}.
\e:
which again has the form of $(\rf{soliton peak})$.
\section{Generalized Logistic Map}
As we have learned in the preceding section, the smallest piece of the Toda
lattice already possesses useful information about integrable dynamical
systems. In this section we study a deformation of the Toda atom. There could
be many different ways of deformation, some of which preserve integrability
and some of which destroy it. Since we are interested in studying the
transition between integrable and nonintegrable maps, we must break the
integrability of the Toda atom.
For this purpose we recall that the Toda molecules have a characteristic form,
as seen in Fig. 1. Their cross sections in the $(m,n)$ plane are parallelograms
inclined in the same direction. This is due to the property of the Toda atom
defined in $(\rf{Toda atom})$. The very reason for this asymmetry comes from
the asymmetry\footnote{If we had chosen another set of variables, HBDE would
look more symmetric and the corresponding Toda atom could be either a cube or
an octahedron\cite{DYB}. We have used asymmetric variables so that deformations
can be discussed.} of HBDE under the exchange of $l$ and $m$, as seen in
$(\rf{HBDE})$. The equation in which the roles of $l$ and $m$ in $(\rf{HBDE})$
are exchanged is also integrable. In fact we could start from it without
changing any of the results.
From this argument we are tempted to consider the following deformation of
HBDE.
\be:
&&\alpha\ g_n(l+1,m)g_n(l,m+1)+\beta\ g_n(l,m)g_n(l+1,m+1)\nonumber\\
&&\qquad -\ (\alpha+\beta)\ \left[(1-\gamma)\delta g_{n+1}(l+1,m)+\gamma
g_{n+1}(l,m+1)\right]\nonumber\\
&&\qquad\qquad\qquad \times\ \left[(1-\gamma')\delta' g_{n-1}(l,m+1)+\gamma'
g_{n-1}(l+1,m)\right]=0.
\lb{deformed HBDE}
\ee:
We notice that this equation is integrable when $\gamma=\gamma'=0$ and
$\delta\delta'=1$, or when $\gamma=\gamma'=1$. The integrability of the other
cases is not known at this point. Moreover we are not able to separate a small
part of the lattice independently from the rest, as was done to get a Toda
atom. Nevertheless it is worthwhile to study $(\rf{deformed HBDE})$ defined on
the portion of the lattice shown in Fig. 2a.
\begin{center}\begin{minipage}{6cm}\unitlength 1mm\begin{picture}(60,50)
\put(5,5){\circle{2}}\put(15,5){\circle{2}}\put(25,5){\circle{2}}
\put(35,5){\circle{2}}\put(45,5){\circle{2}}
\put(5,15){\circle{2}}\put(15,15){\circle*{2}}\put(25,15){\circle*{2}}
\put(35,15){\circle*{2}}\put(45,15){\circle{2}}
\put(5,25){\circle{2}}\put(15,25){\circle*{2}}\put(25,25){\circle*{2}}
\put(35,25){\circle*{2}}\put(45,25){\circle{2}}
\put(5,35){\circle{2}}\put(15,35){\circle{2}}\put(25,35){\circle{2}}
\put(35,35){\circle{2}}\put(45,35){\circle{2}}
\thicklines
\put(15,25){\line(1,0){20}}\put(15,25){\line(0,-1){10}}
\put(15,15){\line(1,0){20}}\put(35,25){\line(0,-1){10}}
\bezier{15}(25,25)(30,20)(34,16)\bezier{15}(15,25)(20,20)(24,16)
\put(20,45){\makebox(10,3)[c]{Fig. 2a}}
\end{picture}\end{minipage}
\begin{minipage}{6cm}\unitlength 1mm\begin{picture}(60,50)
\put(5,5){\circle{2}}\put(15,5){\circle{2}}\put(25,5){\circle{2}}
\put(35,5){\circle{2}}\put(45,5){\circle{2}}
\put(5,15){\circle{2}}\put(15,15){\circle{2}}\put(25,15){\circle*{2}}
\put(35,15){\circle*{2}}\put(45,15){\circle{2}}
\put(5,25){\circle{2}}\put(15,25){\circle*{2}}\put(25,25){\circle*{2}}
\put(35,25){\circle*{2}}\put(45,25){\circle{2}}
\put(5,35){\circle{2}}\put(15,35){\circle{2}}\put(25,35){\circle{2}}
\put(35,35){\circle{2}}\put(45,35){\circle{2}}
\thicklines
\put(15,25){\line(1,0){20}}\put(15,25){\line(1,-1){10}}
\put(35,25){\line(0,-1){10}}\put(25,15){\line(1,0){10}}
\bezier{15}(25,25)(30,20)(34,16)
\put(20,45){\makebox(10,3)[c]{Fig. 2b}}
\end{picture}\end{minipage}\end{center}
\vglue 0.5cm
In order to proceed further we have to specify the model so that we can study
the analytical properties of the map explicitly. In the following discussion
we consider the map given by $(\rf{ILM})$ and its deformation. We also restrict
our argument to the case $\gamma'=0$, $\delta=\delta'^{-1}={\nu\over\mu}$ in
$(\rf{deformed HBDE})$ for simplicity and define (Fig. 2b)
$$
f^{GLM}(l,0,0)=f^{GLM}(l,0,1)=f^{GLM}(l,1,-1)=f^{GLM}(l,1,1)=:z_l,
$$
\b:
f^{GLM}(l,1,0)={\mu -1\over\nu}\ (\mu: {\rm const}).
\e:
The dynamics of this model is described by the map
\b:
z_{l+1}=f(z_l):=\mu{z_l(1-\gamma z_l)\over 1+\nu(1-\gamma)z_l};\quad z_l \in
\mbox{\boldmath$C$},\quad l\in \mbox{\boldmath$Z$}.
\lb{GLM}
\e:
We call this map a generalized logistic map (GLM). Some of its properties are
listed below:
\begin{enumerate}
\item
When $\gamma=1$, the map becomes the ordinary logistic map, which has been
studied extensively in the literature.
\item
GLM becomes the logistic equation for all values of the parameters
$\gamma,\mu,\nu$ when the continuous limit of the variable $l$ is taken. To
show this, let us introduce a new variable $u$ and new parameters $a$ and $h$
by
\b:
u(l):={\nu+\gamma\mu-\gamma\nu\over\mu-1}z_l,\qquad ah:=\mu-1.
\e:
We replace $z_{l+1}$ by $z_{l+h}$ and take the limit $h\rightarrow 0$. We will
find that $(\rf{GLM})$ reduces to the logistic equation:
\b:
{du\over dl}=au(1-u).
\lb{logistic equation}
\e:
\item
GLM includes $(\rf{ILM})$ as the special case $\gamma=0$. This explains
the name ILM used for $(\rf{ILM})$.
\item
GLM generates a Julia set as long as $\gamma\ne 0$. Hence it is not integrable
except for $\gamma=0$. This will be discussed later.
\end{enumerate}
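A short numerical experiment illustrates these properties; the parameter values are assumptions chosen for illustration. At $\gamma=1$ GLM reduces exactly to the logistic map and shows sensitive dependence on initial conditions, while at $\gamma=0$ (ILM) two nearby orbits stay close:

```python
def glm(z, mu, nu, gamma):
    """Generalized logistic map z -> mu*z*(1 - gamma*z) / (1 + nu*(1-gamma)*z)."""
    return mu * z * (1.0 - gamma * z) / (1.0 + nu * (1.0 - gamma) * z)

mu, nu = 3.9, 0.5  # hypothetical parameters, chosen for illustration

# gamma = 1 recovers the ordinary logistic map z -> mu*z*(1-z) exactly.
assert abs(glm(0.3, mu, nu, 1.0) - mu * 0.3 * 0.7) < 1e-15

def max_separation(n, gamma, d0=1e-8):
    """Largest distance reached by two orbits started d0 apart."""
    z, w, m = 0.3, 0.3 + d0, 0.0
    for _ in range(n):
        z, w = glm(z, mu, nu, gamma), glm(w, mu, nu, gamma)
        m = max(m, abs(w - z))
    return m

# Integrable case (gamma = 0): the perturbation is never amplified appreciably.
assert max_separation(60, 0.0) < 1e-6
# Chaotic case (gamma = 1): the perturbation grows by many orders of magnitude.
assert max_separation(60, 1.0) > 1e-4
print("integrable vs chaotic behaviour verified")
```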
The most important feature of GLM is that it interpolates between a
nonintegrable map and the integrable map under a continuous deformation. This
fact enables us to study analytically the transition between the two phases.
The problem we are concerned with in what follows is the analytical properties
of the map $(\rf{GLM})$. To proceed further it is more convenient to convert
the map $(\rf{GLM})$ into the standard form of a rational map of degree 2:
F(z)=\phi\circ f\circ \phi^{-1}(z)={z(z+\lambda)\over 1+\lambda' z}\
e^{i\theta}\lb{F(z)}
\e:
by the M\"obius transformation
\b:
\phi(x)={(1-\mu)x\over(\nu\gamma-\nu-\mu\gamma)x+(\nu\gamma-\nu-\gamma)\mu
e^{-i\theta}}.
\e:
where
\b:
\lambda=\mu
e^{-i\theta},\qquad\lambda'={\nu\gamma-\nu-2\mu\gamma+\mu^2\gamma\over
(\nu\gamma-\nu-\gamma)\mu} e^{i\theta}.
\e:
The corresponding integrable map turns out to be the following case
\b:
F(z)=\mu z=\lambda e^{i\theta}z,\quad {\rm if}\quad \gamma=0\quad {\rm i.e.} \quad
\lambda\lambda'=1.
\lb{linear map}
\e:
The main features of a dynamical system are determined by the nature of the
fixed points of the map. Namely, the multiplier $\Lambda$ at a fixed point $a$
of a map $\varphi(z)$ is defined by the derivative of the map at $a$:
\b:
\Lambda:=\left.{d\varphi(z)\over dz}\right|_{z=a}.
\e:
The fixed point $a$ is an attractor of the map if $|\Lambda|<1$, a repeller if
$|\Lambda|>1$, and neutral if $|\Lambda|=1$.
For the standard form $(\rf{F(z)})$, the fixed points are easily found to be
\b:
0,\quad p=-\lambda-{1-\lambda\lambda'\over\lambda'-e^{i\theta}},\quad \infty.
\e:
The corresponding multipliers are
\b:
\Lambda_0=\lambda e^{i\theta},\quad \Lambda_p={2-\lambda
e^{i\theta}-\lambda'e^{-i\theta}\over 1-\lambda\lambda'},\quad
\Lambda_\infty=\lambda' e^{-i\theta}.
\e:
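The fixed points and multipliers can be verified numerically; the complex parameter values below are arbitrary assumptions for illustration:

```python
import cmath

# Hypothetical parameter values (illustration only).
lam, lamp, theta = 2.0 + 0.1j, 0.3 - 0.05j, 0.4
e = cmath.exp(1j * theta)

def F(z):
    """Standard degree-2 rational map F(z) = e^{i theta} z (z + lam) / (1 + lam' z)."""
    return e * z * (z + lam) / (1.0 + lamp * z)

def dF(z):
    """Derivative of F, from the quotient rule: e^{i theta}(lam' z^2 + 2z + lam)/(1 + lam' z)^2."""
    return e * (lamp * z**2 + 2.0 * z + lam) / (1.0 + lamp * z)**2

# Finite fixed points: 0 and p = -lam - (1 - lam*lam')/(lam' - e^{i theta}).
p = -lam - (1.0 - lam * lamp) / (lamp - e)
assert abs(F(p) - p) < 1e-10
assert abs(F(0.0)) < 1e-10

# Multipliers agree with the closed-form expressions given above.
assert abs(dF(0.0) - lam * e) < 1e-10
Lam_p = (2.0 - lam * e - lamp / e) / (1.0 - lam * lamp)
assert abs(dF(p) - Lam_p) < 1e-10
print("fixed points and multipliers verified")
```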
In the integrable limit $\lambda\lambda'\rightarrow 1$, we observe the
following characteristic features:
\begin{enumerate}
\item
Since
\b:
|\Lambda_0\Lambda_\infty|\rightarrow 1,
\e:
the map converges either to $0$ or to $\infty$, depending on whether
$|\lambda|=|\mu|<1$ or $>1$.
\item
The fixed point $p$ approaches $-\lambda$ and turns into a super repeller:
\b:
|\Lambda_p|\ \rightarrow\ \infty.
\e:
\end{enumerate}
\section{Julia Sets}
In complex dynamical systems, chaos appears on a Julia set.
Given a map $f(z)$ on the Riemann sphere
$\bar{\mbox{\boldmath $C$}}=\mbox{\boldmath $C$}\cup \{ \infty \}$,
the Riemann sphere is divided into two parts
depending on whether the orbits converge or not.
The set of initial values whose orbits, together with those of their
neighborhood, converge is called the Fatou set $F(f)$.
The complementary set, on which they do not, is called the Julia set $J(f)$.
This definition leads to the fact
that the Julia set does not contain any attractive periodic cycle.
In this sense the orbits in the Julia set are chaotic.
By definition, the Fatou set and the Julia set are invariant under the map,
that is \[ f(F)=f^{-1}(F)=F\, ,\: f(J)=f^{-1}(J)=J. \]
It is easy to understand that attractive fixed points belong to the Fatou set.
On the contrary, it is known that repulsive fixed points belong to the Julia
set \cite{chaos}.
We can therefore compute the Julia set by
inversely mapping a repulsive fixed point as an initial value.
We show some examples in Fig. 3 for the map $(\rf{F(z)})$.
A Julia set does not exist if the map is completely integrable. Integrable
maps converge to orbits predictable for any given initial values. Conversely,
if there exists an orbit that is not predictable for some initial values, the
map is not integrable. Therefore a Julia set appears in nonintegrable maps,
but not in integrable maps.
For our standard map of degree 2 given by $(\rf{F(z)})$, a Julia set is known
to exist except at the integrable point $\lambda\lambda'=1$. We would like to
know how it disappears from the complex plane of the variable when the
parameters approach the limit $\lambda\lambda'\ \rightarrow\ 1$. We gave in
\cite{SSSY} an argument about this problem for some limited range of
parameters. The purpose of this section is to present another argument which
supplements our previous one.
The inverse map of $(\rf{F(z)})$ is easily obtained as
\b:
z_l=F^{-1}(z_{l+1})={1\over 2}(\rho z_{l+1}-\lambda)\pm{1\over 2}\sqrt{(\rho
z_{l+1}+\lambda)^2+4z_{l+1}e^{-i\theta}(1-\lambda\lambda')},
\lb{inverse map}
\e:
where we defined
\b:
\rho:=\lambda'e^{-i\theta}.
\e:
From this expression it is apparent that the inverse map is not single valued
but double valued at every step. As we pointed out above, the inverse map
generates points of the Julia set if it starts from a point on the Julia set.
Substituting one value of the Julia set into $(\rf{inverse map})$, we get two
points at each step. After $n$ steps the number of points of the Julia set
generated in this way increases to $2^{n+1}-1$. This explains the nature of
the Julia set. Some of the points may belong to periodic orbits; they must be
subtracted from the number.
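The backward-iteration procedure just described is easy to implement. The sketch below (with hypothetical nonintegrable parameters obeying $|\lambda|>1$, $|\lambda'|<1$) starts from the repelling fixed point $z=0$ and picks one of the two branches of $(\rf{inverse map})$ at random at each step, so the orbit accumulates on the Julia set:

```python
import cmath
import random

# Hypothetical nonintegrable parameters with |lam| > 1 and |lam'| < 1.
lam, lamp, theta = 1.8, 0.4, 0.0
rho = lamp * cmath.exp(-1j * theta)

def inverse_branch(z, sign):
    """One of the two branches of the inverse map F^{-1}."""
    disc = cmath.sqrt((rho * z + lam)**2
                      + 4.0 * z * cmath.exp(-1j * theta) * (1.0 - lam * lamp))
    return 0.5 * (rho * z - lam) + 0.5 * sign * disc

# Backward orbit of the repelling fixed point z = 0, choosing a random
# branch at each step; the points accumulate on the Julia set.
random.seed(1)
z, points = 0.0, []
for _ in range(2000):
    z = inverse_branch(z, random.choice([1.0, -1.0]))
    points.append(z)

# Infinity is attracting, so the Julia set stays in a bounded region.
assert all(abs(w) < 10.0 for w in points)
print(len(points), "points generated")
```

Following the full binary tree of branches instead of a random path would produce all $2^{n+1}-1$ preimages after $n$ steps.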
In the integrable limit $\lambda\lambda'\ \rightarrow\ 1$ the inverse map
$(\rf{inverse map})$ is still double valued. The two branches are
\b:
z_l=\cases{\rho z_{l+1}\cr -\lambda\cr}.
\lb{ILM set}
\e:
We notice that the second solution does not depend on $z_{l+1}$, and hence is
the same at every step of the map. For $(\rf{ILM set})$ to generate the Julia
set we must start from a repulsive fixed point. When $|\lambda|>1$, the origin
is such a point. Hence we find from $(\rf{ILM set})$ the `Julia
set'\footnote{This set does not possess the properties expected of the
ordinary Julia set. We call it a `Julia set' only in the sense that it is
generated by the inverse map starting from a repeller.}
\b:
J^{ILM}=\left\{\left. -\rho^n\lambda\ \right|\ n\in
{\mbox{\boldmath$N$}}\right\}\e:
for the integrable map. The number of points of the `Julia set' increases in
proportion to the number of steps $n$. Moreover the element of $J^{ILM}$ is
equal to $-\mu^{-n}\lambda$, which is nothing but the solution expected from
the map $(\rf{linear map})$ if it started from $-\lambda$.
The next problem we are concerned with is to explore how the Julia set of GLM
turns into the points of $(\rf{ILM set})$ in the limit $\lambda\lambda'\
\rightarrow\ 1$. Since we are interested in the transition from a nonintegrable
map to the integrable map, we consider small values of
$|\lambda\lambda'-1|$. The inverse map $(\rf{inverse map})$ can be rewritten as
\b:
F^{-1}(z)=\left\{\matrix{\rho z\cr -\lambda\cr}\right\}\pm E(z)
\lb{F^-1}
\e:
where we put
\b:
E(z):={1\over 2}(\rho z+\lambda)\left(\sqrt{1-{4z\epsilon
e^{-i\theta}\over(\rho z+\lambda)^2}}\ -\
1\right),\qquad\epsilon:=\lambda\lambda'-1.
\lb{E(z)}
\e:
Note that $E(z)$ vanishes for small values of $\epsilon$. To see the behaviour
of $E(z)$ for small $\epsilon$ we first establish the following inequality,
which is true for all $\epsilon$:
\b:
|E(z)|< 3\sqrt{|z\epsilon|},\qquad^\forall\epsilon\in {\mbox{\boldmath$C$}}.
\lb{|E(z)|<}
\e:
The proof of this inequality relies on the following facts.
\begin{enumerate}
\item
If $|w|<1$,
\be:
\left|{1\over \sqrt w}\left(\sqrt{1-w}-1\right)\right|&=&{1\over |\sqrt
w|}\left|1-\sqrt{1-w}\right| \le {1\over |\sqrt
w|}\left(1-\sqrt{1-|w|}\right)\nonumber\\
&\le&{1\over |\sqrt w|}\left(1-(1-|w|)\right)=|\sqrt w|\le 1.
\ee:
\item
If $|w|>1$,
\be:
\left|{1\over \sqrt w}\left(\sqrt{1-w}-1\right)\right|&=&\left|\sqrt{{1\over
w}-1}-\sqrt{{1\over w}}\right|< 3.
\ee:
\end{enumerate}
Substituting
\b:
w:={4z\epsilon e^{-i\theta}\over (\rho z+\lambda)^2},
\lb{w}
\e:
into $E(z)$ of $(\rf{E(z)})$, we can write
\b:
|E(z)|=\sqrt{|z\epsilon|}\left|{1\over \sqrt
w}\left(\sqrt{1-w}-1\right)\right|,\e:
from which $(\rf{|E(z)|<})$ follows.
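The bound $(\rf{|E(z)|<})$ can be probed numerically. In the sketch below the parameters are illustrative assumptions; the sampled values of $|E(z)|$ indeed stay below $3\sqrt{|z\epsilon|}$:

```python
import cmath
import random

def E(z, lam, lamp, theta):
    """E(z) = (rho*z + lam)/2 * (sqrt(1 - 4*z*eps*e^{-i theta}/(rho*z + lam)^2) - 1)."""
    rho = lamp * cmath.exp(-1j * theta)
    eps = lam * lamp - 1.0
    s = rho * z + lam
    return 0.5 * s * (cmath.sqrt(1.0 - 4.0 * z * eps * cmath.exp(-1j * theta) / s**2) - 1.0)

# Hypothetical parameters with a small deformation eps = lam*lam' - 1 = -0.12.
lam, lamp, theta = 1.6, 0.55, 0.3
eps = lam * lamp - 1.0
random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(E(z, lam, lamp, theta)) < 3.0 * abs(z * eps) ** 0.5 + 1e-12
print("inequality verified on 1000 random points")
```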
We can perform the inverse map $(\rf{F^-1})$ iteratively. Let us denote the
two branches of the map $(\rf{F^-1})$ as
\b:
A(z):=\rho z+E(z),\qquad B(z):=-\lambda-E(z).
\e:
Then the second map becomes
\b:
F^{-2}(z)=\cases{A\left(F^{-1}(z)\right)\cr
B\left(F^{-1}(z)\right)\cr}=\cases{A\circ A(z)\cr A\circ B(z)\cr B\circ A(z)
\cr B\circ B(z)\cr}.
\e:
After $n$ steps we obtain
\b:
F^{-n}(z)=\left\{\left. A^{\nu_1}\circ B^{\nu_2}\circ A^{\nu_3}\circ\cdots\circ
B^{\nu_n}(z)\right|\ \nu_1+\nu_2+\cdots+\nu_n=n\right\}.
\e:
If we start from the repeller, the above maps produce the Julia set of GLM. In
the following we consider the case $|\lambda|>1,\ |\lambda'|<1$, so that the
origin is a repulsive fixed point and the infinity is an attractive fixed
point. Since $E(0)=0$, the origin is mapped to
\b:
A(0)=0,\qquad B(0)=-\lambda
\e:
by the first iteration. The second iteration yields
\b:
A^2(0)=0,\ A\circ B(0)=-\rho\lambda+E(-\lambda),\ B\circ A(0)=-\lambda,\
B^2(0)=-\lambda-E(-\lambda).
\lb{2 iteration}
\e:
We notice that, since $E(-\lambda)$ is of order $\sqrt{|\epsilon|}$ by
$(\rf{|E(z)|<})$, all of the points after the second iteration are in the
neighbourhood of $J^{ILM}$. Proceeding similarly we obtain the Julia set as
follows:
\b:
J^{GLM}=\left\{\ \left. A^{\nu_1}\circ B^{\nu_2}\circ A^{\nu_3}\circ
\cdots\circ B^{\nu_\infty}(0)\right|\ \nu_1,\nu_2,\cdots \in
{\mbox{\boldmath$N$}} \right\}.
\lb{J^GLM}
\e:
We remark on some important properties which result from this expression.
\begin{enumerate}
\item
The Julia set is invariant under the map.
It is obvious from $(\rf{J^GLM})$ that
\b:
J^{GLM}=A\left(J^{GLM}\right)\cup
B\left(J^{GLM}\right)=F^{-1}\left(J^{GLM}\right).
\e:
\item
An element of the form $B\circ X$, for any $X\in J^{GLM}$, belongs to the
neighbourhood of $-\lambda$, as seen from
\b:
B\circ X=-\lambda-E(X).
\e:
\item
The map $A^s$ sends an element of the form $B\circ X$ to the neighbourhood
of $-\rho^s\lambda$.
In fact, applying $A$ $s$ times, we get
\be:
A^s(B\circ X)&=&A^{s-1}(\rho B\circ X+E(B\circ X))\nonumber\\
&=&A^{s-2}\left(\rho^2 B\circ X+\rho E(B\circ X)+E(A\circ B\circ X)\right)\nonumber\\
&=&\rho^s B\circ X
+\sum_{k=0}^{s-1}\rho^k E\left(A^{s-k-1}\circ B\circ X\right)\nonumber\\
&=&-\rho^s\lambda-\rho^s E(X)
+\sum_{k=0}^{s-1}\rho^k E\left(A^{s-k-1}\circ B\circ X\right).
\lb{A^s(BX)}
\ee:
\end{enumerate}
Since every element of $J^{GLM}$, besides 0, is either of the form $B\circ X$ or
$A^s\circ B\circ X$, we conclude that every element of $J^{GLM}$ lies in the
neighbourhood of $J^{ILM}$.
We now proceed to show that $J^{GLM}$ approaches $J^{ILM}$ uniformly as
$\epsilon$ goes to 0. Since infinity is an attractive fixed point, the Julia
set must lie in a finite region of the complex plane. We assume that it lies
inside the disc of radius $R$, {\it i.e.}, $|z|< R,\ ^\forall z\in J^{GLM}$.
Therefore we can bound $|E(z)|$ by
\b:
|E(z)|< 3\sqrt R\sqrt{|\epsilon|},\qquad z\in J^{GLM}.
\e:
The summation of $(\rf{A^s(BX)})$ can be estimated as
\b:
\sum_{k=0}^{s-1}\left|\rho^kE\left(A^{s-k-1}BX\right)\right|<
\left(\sum_{k=0}^{s-1}|\rho|^k\right)3\sqrt R\sqrt{|\epsilon|}={1-|\rho|^s\over
1-|\rho|}3\sqrt R\sqrt{|\epsilon|},
\e:
which vanishes as $|\epsilon|$ approaches 0 for every integer $s$, because we
assume $|\rho|=|\lambda'| < 1$. This proves that all points in $J^{GLM}$
approach $J^{ILM}$ uniformly in the integrable limit.
In the above we considered the case $|\lambda|>1,\ |\lambda'|<1$.
The other case, $|\lambda|<1,\ |\lambda'|>1$, can be treated similarly
if $z_l$ is transformed into $w_l=1/z_l$ in (\rf{F(z)}).
Since this transformation is equivalent to exchanging the roles of
$\lambda$ and $\lambda'$ (and replacing $\theta$ by $-\theta$)
in (\rf{F(z)}), we can repeat the same argument as above on the $w$-plane.
We conclude this paper by showing pictures which represent the convergence of
the Julia set to the points of the iterative maps of the integrable system. The
parameters of the map are fixed at $\lambda=4$ and $\theta=0.03\pi$. Under the
choice of these parameters, $\lambda'$, hence $\epsilon$, can be changed
freely. In the integrable limit $\epsilon=0$, $J^{ILM}=\left\{-4{1\over
4^n}e^{-i0.03\pi n},\ n=0,1,2,\cdots\right\}$. It is shown in Fig.~3c. As
$\epsilon$ departs from 0, the Julia set expands from these points, as seen in
the other pictures. The real and imaginary axes are not drawn except in
Fig.~3a, so that the points in the neighbourhoods of $z=-4$ and 0 are visible
in the other figures.
\vglue 0.5cm
\noindent
\begin{picture}(80,80)
\put(0,0){\epsfxsize=8cm\epsfbox{fig3a.ps}}
\put(35,70){\makebox(10,3)[c]{Fig. 3a}}
\put(10,35){\makebox(4,4)[c]{$- 4$}}
\put(67,35){\makebox(4,4)[c]{0}}
\put(35,5){\makebox(10,3)[c]{$\epsilon=-1$}}
\end{picture}
\begin{picture}(80,80)
\put(0,0){\epsfxsize=8cm\epsfbox{fig3b.ps}}
\put(35,70){\makebox(10,3)[c]{Fig. 3b}}
\put(10,35){\makebox(4,4)[c]{$- 4$}}
\put(67,35){\makebox(4,4)[c]{0}}
\put(35,5){\makebox(10,3)[c]{$\epsilon=-0.5$}}
\end{picture}
\begin{center}\begin{picture}(80,80)
\put(0,0){\epsfxsize=8cm\epsfbox{fig3c.ps}}
\put(35,70){\makebox(10,3)[c]{Fig. 3c}}
\put(10,35){\makebox(4,4)[c]{$- 4$}}
\put(67,35){\makebox(4,4)[c]{0}}
\put(35,5){\makebox(10,3)[c]{$\epsilon=0$}}
\end{picture}\end{center}
By studying analytical properties of a piece of the Toda lattice, we attempted to
clarify how a nonintegrable system approaches the integrable one. Our
argument is based on the fact that the two dimensional Toda lattice can be
decomposed into small pieces, which are integrable by themselves and are called
Toda molecules. A Toda molecule is composed of smaller pieces, which we
called Toda atoms. Hence the two dimensional Toda lattice is a crystal
consisting of Toda atoms. For such a macroscopic system to be integrable, every
piece must be joined very carefully so as not to create a Julia set.
In this connection it is worthwhile recalling that a similar property is
shared by other integrable models. In the solvable lattice models
the partition function is factorizable into a product of Boltzmann weights. The
Yang-Baxter equation is a condition imposed on the factors to be connected
properly. Another example is the factorizability condition imposed on the
string amplitudes, which led us to the $\tau$ function of the KP hierarchy. In
any case the connection rule must be such that the symmetry characterizing the
unit blocks is preserved under the coupling.
We will be interested in studying analytical properties of the compound
system of two GLM pieces in a forthcoming paper.
\vfill\eject
\section{Introduction}
The problem of a mass gap occupies a special place in quantum chromodynamics. The significance of this problem
is related to the fact that its solution requires the development of nonperturbative quantization methods in field theory, which are absent at present.
It should be emphasised that the nonperturbative quantization is carried out numerically in lattice calculations. However, such calculations
do not enable one to have a full understanding of the nonperturbative quantization, and some questions of principle still remain to be clarified:
What are the properties of operators of a strongly interacting field? How can one determine a quantum state of such a field? Also,
there are many other unclear questions for which there are answers in the case of perturbative quantization using Feynman diagrams.
In this connection, the question arises as to the existence of a mass gap in other theories. The presence of a mass gap
in any theory is of interest by itself, but it is also possible that such a theory may serve as some approximate description of nonperturbative effects
in quantum theory involving a strong interaction. In Refs.~\cite{Dzhunushaliev:2020qwf,Dzhunushaliev:2021apa}, it is shown that in
SU(2) gauge theory with a source in the form of a nonlinear spinor field, there exist topologically trivial solutions with a radial magnetic field
(``monopoles''). Such ``monopoles'' possess the following features: (a)~the radial field decreases as $r^{-3}$ at infinity
(this is the reason why we use quotation marks for the word monopole); (b)~the solutions are topologically trivial, unlike those describing the 't~Hooft-Polyakov monopole;
(c)~the most interesting fact is that the energy spectrum of such solutions has an absolute minimum, which we can refer to as a mass gap.
In the absence of a SU(2) gauge field, the corresponding mass gap was found in Refs.~\cite{Finkelstein:1951zz,Finkelstein:1956} in studying a nonlinear spinor field.
These references also consider a generalization of nonlinear spinor field when one introduces some constant in the nonlinear term for the spinor field.
In the present paper we study effects of the presence of a mass gap and its properties in a system containing such a nonlinear spinor field.
The nonlinear Dirac equation was introduced by W.~Heisenberg as an equation describing the properties of an electron. The classical properties of this equation,
in particular, its soliton-like solutions, were investigated in Refs.~\cite{Finkelstein:1951zz,Finkelstein:1956}. Subsequently one of variants of the nonlinear Dirac equation
was employed for an approximate description of the properties of hadrons (this approach is called the Nambu-Jona-Lasinio model~\cite{Nambu:1961tp}; for a review, see Ref.~\cite{Klevansky:1992qe}).
In the present paper we study the dependence of the size of a mass gap, its position, etc., on the value of a parameter describing the nonlinear self-interaction potential of the spinor field.
\section{Equations and Ans\"{a}tze for Yang-Mills fields coupled to a nonlinear Dirac field}
\label{YM_Dirac_scalar}
In this section we closely follow Refs.~\cite{Dzhunushaliev:2020qwf,Dzhunushaliev:2021apa}. The Lagrangian describing a system consisting of a non-Abelian SU(2) field $A^a_\mu$
interacting with a nonlinear spinor field $\psi$ can be taken in the form
\begin{equation}
\begin{split}
\mathcal L = & - \frac{1}{4} F^a_{\mu \nu} F^{a \mu \nu}
+ i \hbar c \bar \psi \gamma^\mu D_\mu \psi -
m_f c^2 \bar \psi \psi
+ \frac{\Lambda}{2} g \hbar c V \left( \bar \psi, \psi \right).
\label{1_10}
\end{split}
\end{equation}
Here $m_f$ is the mass of the spinor field;
$
D_\mu = \partial_\mu - i \frac{g}{2} \sigma^a
A^a_\mu
$ is the gauge-covariant derivative, where $g$ is the coupling constant and $\sigma^a$ are the SU(2) generators (the Pauli matrices);
$
F^a_{\mu \nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu +
g \epsilon_{a b c} A^b_\mu A^c_\nu
$ is the field strength tensor for the SU(2) field, where $\epsilon_{a b c}$ (the completely antisymmetric Levi-Civita symbol) are the SU(2) structure constants; $\Lambda$ is a constant; $\gamma^\mu$ are the Dirac matrices in the standard representation; $a,b,c=1,2,3$ are color indices and $\mu, \nu = 0, 1, 2, 3$ are spacetime indices; $V \left( \bar \psi, \psi \right)$
is the potential describing the nonlinear self-interaction of the spinor field $\psi$. According to Ref.~\cite{Finkelstein:1956}, this potential is a linear
combination of scalar and pseudoscalar interactions:
\begin{equation}
V \left( \bar \psi, \psi \right) = \alpha \left( \bar \psi \psi \right)^2 +
\beta \left( \bar \psi \gamma^5 \psi \right)^2,
\label{1_15}
\end{equation}
where $\alpha$ and $\beta$ are constants.
Our purpose is to study monopole-like solutions of these equations. To do this, we use the standard SU(2) monopole {\it Ansatz}
\begin{eqnarray}
A^a_i &=& \frac{1}{g} \left[ 1 - f(r) \right]
\begin{pmatrix}
0 & \phantom{-}\sin \varphi & \sin \theta \cos \theta \cos \varphi \\
0 & -\cos \varphi & \sin \theta \cos \theta \sin \varphi \\
0 & 0 & - \sin^2 \theta
\end{pmatrix} , \quad
i = r, \theta, \varphi \text{ (in polar coordinates)},
\label{2_10}\\
A^a_t &=& 0 ,
\label{2_13}
\end{eqnarray}
and the {\it Ansatz} for the spinor field from Refs.~\cite{Li:1982gf,Li:1985gf}
\begin{equation}
\psi^T = \frac{e^{-i \frac{E t}{\hbar}}}{g r \sqrt{2}}
\begin{Bmatrix}
\begin{pmatrix}
0 \\ - u \\
\end{pmatrix},
\begin{pmatrix}
u \\ 0 \\
\end{pmatrix},
\begin{pmatrix}
i v \sin \theta e^{- i \varphi} \\ - i v \cos \theta \\
\end{pmatrix},
\begin{pmatrix}
- i v \cos \theta \\ - i v \sin \theta e^{i \varphi} \\
\end{pmatrix}
\end{Bmatrix},
\label{2_20}
\end{equation}
where $E/\hbar$ is the spinor frequency and the functions $u$ and $v$ depend on the radial coordinate $r$ only.
We will seek a solution to the Euler-Lagrange equations coming from the variation of the Lagrangian~\eqref{1_10}. Substituting in these equations
the {\it Ans\"atz}~\eqref{2_10}-\eqref{2_20}, the expression~\eqref{1_15}, and integrating over the angles
$\theta, \varphi$ (for details see Ref.~\cite{Finkelstein:1956}), the resulting
equations for the unknown functions $f, u$, and $v$ are
\begin{eqnarray}
- f^{\prime \prime} + \frac{f \left( f^2 - 1 \right) }{x^2} +
\tilde g^2_{\Lambda} \frac{\tilde u \tilde v}{x} &=& 0 ,
\label{2_30}\\
\tilde v' + \frac{f \tilde v}{x} &=& \tilde u \left(
- 1+ \tilde E +
\frac{\tilde u^2 - \lambda \tilde v^2}{x^2}
\right) ,
\label{2_40}\\
\tilde u' - \frac{f \tilde u}{x} &=& \tilde v \left(
- 1 - \tilde E +
\frac{\tilde u^2 - \lambda \tilde v^2}{x^2}
\right).
\label{2_50}
\end{eqnarray}
Here, for convenience of making numerical calculations, we have introduced the following dimensionless variables:
$x = r/\lambda_c$,
$
\tilde u=u\sqrt{\Lambda/\lambda_c g},
\tilde v = v\sqrt{\Lambda/\lambda_c g},
\tilde E = \lambda_c E/(\hbar c),
\tilde g^2_{\Lambda} = g \hbar c\lambda_c^2/\Lambda
$, where $\lambda_c= \hbar / (m_f c)$ is the Compton wavelength and $\lambda$ is some combination of the constants
$\alpha, \beta$. The prime denotes differentiation with respect to~$x$.
The total energy density of the monopole under consideration is
\begin{equation}
\tilde \epsilon =
\tilde{\epsilon}_m + \tilde \epsilon_s =\frac{1}{\tilde g^2_\Lambda}
\left[
\frac{{f'}^2}{ x^2} +
\frac{\left( f^2 - 1 \right)^2}{2 x^4}
\right] +
\left[
\tilde E \frac{\tilde u^2 + \tilde v^2}{x^2} +
\frac{\left(\tilde u^2 - \lambda \tilde v^2 \right)^2}{2 x^4}
\right].
\label{2_60}
\end{equation}
Here the expressions in the square brackets correspond to the dimensionless energy densities of the non-Abelian gauge fields,
$
\tilde{\epsilon}_m \equiv
\frac{\lambda_c^4 g^2}{\tilde g^2_\Lambda} \epsilon_m
$,
and of the spinor field,
$
\tilde{\epsilon}_s \equiv \frac{\lambda_c^4 g^2}{\tilde g^2_\Lambda} \epsilon_s
$.
Correspondingly, the total energy of the monopole is calculated using the formula
\begin{equation}
\tilde W_t \equiv \frac{\lambda_c g^2}{\tilde g^2_\Lambda} W_t =
4 \pi
\int\limits_0^\infty x^2 \tilde \epsilon d x
= \left( \tilde{W}_t \right)_m + \left( \tilde{W}_t \right)_{s},
\label{2_70}
\end{equation}
where the energy density $\tilde \epsilon$ is taken from Eq.~\eqref{2_60}, and it is the sum of
the energies of the magnetic field and the nonlinear spinor field.
\section{Numerical results}
Numerical integration of Eqs.~\eqref{2_30}-\eqref{2_50} is carried out using the shooting method with the boundary conditions given for
$x = \delta \ll 1$ (for details see Refs.~\cite{Dzhunushaliev:2020qwf,Dzhunushaliev:2021apa}):
\begin{equation}
f(\delta) = 1 + \frac{f_2}{2} \delta^2 ,\quad
f^\prime(\delta) = f_2 \delta , \quad
\tilde u(\delta) = \tilde u_1 \delta ,\quad
\tilde v(\delta) = \frac{\tilde v_2}{2} \delta^2 .
\label{3_05}
\end{equation}
The value of the parameter $\tilde v_2 $ can be found from Eqs.~\eqref{2_30}-\eqref{2_50},
$
\tilde v_2 = 2 \tilde u_1 \left(
\tilde E - 1 + \tilde u_1^2
\right)/3
$.
Adjusting the values of the parameters $f_2, \tilde u_1$, one can find the required regular solutions.
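As an illustration of the integration step (our sketch in pure Python, not the authors' code), one can integrate the system outward from $x=\delta$ with a fixed-step RK4 scheme, using the parameter values quoted in the captions of Figs.~\ref{potentials} and \ref{fields}. The coefficient in $\tilde v_2$ is taken from our small-$x$ expansion of Eq.~\eqref{2_40}; the step size and range are arbitrary choices, and a full shooting code would wrap this in a root finder over $(f_2,\tilde u_1)$.

```python
# Outward RK4 integration of Eqs. (2_30)-(2_50) from x = delta with the
# series boundary conditions; parameters from the captions of Figs. 1-2.
gL, lam, E = 0.3354, 0.8, 0.99            # \tilde g_Lambda, lambda, \tilde E
f2, u1 = -0.0042694, 0.46657              # shooting parameters

def rhs(x, y):
    f, fp, u, v = y                       # state (f, f', u, v)
    s = (u * u - lam * v * v) / (x * x)
    return (fp,
            f * (f * f - 1.0) / (x * x) + gL * gL * u * v / x,  # f''
            f * u / x + v * (-1.0 - E + s),                     # u'
            -f * v / x + u * (-1.0 + E + s))                    # v'

def rk4(x, y, h, steps):
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, [a + h / 2 * b for a, b in zip(y, k1)])
        k3 = rhs(x + h / 2, [a + h / 2 * b for a, b in zip(y, k2)])
        k4 = rhs(x + h, [a + h * b for a, b in zip(y, k3)])
        y = [a + h / 6 * (b + 2 * c + 2 * d + e)
             for a, b, c, d, e in zip(y, k1, k2, k3, k4)]
        x += h
    return x, y

delta = 1e-3
v2 = 2.0 * u1 * (E - 1.0 + u1 * u1) / 3.0        # small-x expansion of (2_40)
y0 = [1.0 + f2 / 2 * delta ** 2, f2 * delta, u1 * delta, v2 / 2 * delta ** 2]
x_end, y_end = rk4(delta, y0, 1e-3, 2000)        # integrate out to x ~ 2
```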
Our purpose is to construct a spectrum of solutions for different values of $\lambda$. To do this,
we find regular solutions for different values of the frequency $\tilde E$ with a fixed value of the parameter
$\lambda$. For every value of $\tilde E$, we calculate the total energy $\tilde W_t$ given by the expression~\eqref{2_70}.
Then we construct the dependence $\tilde W_t(\tilde E)$, using which it is possible to find a mass gap as a minimum of the function $\tilde W_t(\tilde E)$.
Such a procedure is repeated for different values of the parameter $\lambda$. In the present study we investigate the dependence
of the value of the mass gap $\Delta(\lambda)$ on the parameter $\lambda$. Also, we study the dependence
$\tilde E_\Delta(\lambda)$ on the same parameter, where the quantity $\tilde E_\Delta$ is defined as the value of the frequency
$\tilde E$ for which the energy spectrum has a minimum, i.e., it is the position of the mass gap on the axis $\tilde E$.
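The extraction of $(\Delta,\tilde E_\Delta)$ from the sampled spectrum can be sketched as follows (our illustration; the sampling grid and the parabolic refinement of the minimum are our assumptions, not a description of the authors' code):

```python
# Locate the mass gap Delta = min W and its position E_Delta from sampled
# points (E_i, W_i) of a spectrum W_t(E); a parabola through the discrete
# minimum and its neighbours refines both values.
def mass_gap(E, W):
    i = min(range(len(W)), key=W.__getitem__)
    if 0 < i < len(W) - 1:
        (x0, y0), (x1, y1), (x2, y2) = (E[i-1], W[i-1]), (E[i], W[i]), (E[i+1], W[i+1])
        denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
        a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
        b = (x2 * x2 * (y0 - y1) + x1 * x1 * (y2 - y0) + x0 * x0 * (y1 - y2)) / denom
        c = y0 - a * x0 * x0 - b * x0
        e_min = -b / (2.0 * a)              # vertex of the fitted parabola
        return a * e_min * e_min + b * e_min + c, e_min
    return W[i], E[i]                       # minimum at an endpoint: no refinement
```

Repeating this for each value of $\lambda$ produces one point of the curves in Figs.~\ref{Delta_vs_lambda} and \ref{position_MG}.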
A typical solution of Eqs.~\eqref{2_30}-\eqref{2_50} is given in Fig.~\ref{potentials}, which displays the profiles of the functions
$f(x), \tilde v(x)$, and $\tilde u(x)$. Fig.~\ref{fields} shows
the typical distribution of the energy density \eqref{2_60},
as well as the profiles of the physical components of
the color magnetic fields $H^a_i$ defined as
$
H^a_i=-(1/2)\sqrt{\gamma}\,\epsilon_{i j k} F^{a j k},
$
where $i, j, k$ are space indices and $\gamma$ is the determinant of the spatial metric. This gives the components
\begin{align}
H^a_r \sim & \frac{1 - f^2}{g r^2},
\label{3_10}\\
H^a_{\theta} \sim &\frac{1}{g}f^{\prime},
\label{3_20}\\
H^a_{\varphi} \sim &\frac{1}{g}f^{\prime},
\label{3_30}
\end{align}
where $a=1,2,3$ and we have dropped the dependence on the angular variables.
In these figures, we use quotation marks for the term ``monopole'' since the asymptotic behavior of the radial magnetic field
$H^a_r \sim r^{-3}$ differs from that of the 't~Hooft-Polyakov monopole, for which $H^a_r \sim r^{-2}$.
\begin{figure}[H]
\begin{minipage}[t]{.45\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{potentials}
\end{center}
\vspace{-0.5cm}
\caption{The typical behavior of the functions $f, \tilde u$, and $\tilde v$ of the
``monopole'' solution for
$
\tilde g_{\Lambda} = 0.3354, \lambda = 0.8, \tilde E = 0.99, f_2 = -0.0042694, u_1 = 0.46657
$.
}
\label{potentials}
\end{minipage} \hfill
\begin{minipage}[t]{.47\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{fields}
\end{center}
\vspace{-0.5cm}
\caption{The typical behavior of the magnetic fields $\tilde H_r, \tilde H_{\varphi, \theta}$ given by Eqs.~\eqref{3_10}-\eqref{3_30}
and the energy density $\tilde \epsilon$ from Eq.~\eqref{2_60} of the ``monopole'' solution for
$
\tilde g_{\Lambda} = 0.3354, \lambda = 0.8, \tilde E = 0.99, f_2 = -0.0042694, u_1 = 0.46657
$.
}
\label{fields}
\end{minipage} \hfill
\end{figure}
Fig.~\ref{spectrums} shows the dependence of the energy spectrum $\tilde W_t(\tilde E)$ on the parameter $\lambda$.
From this figure, one can conclude that when $\lambda$ increases, the energy spectrum rises; that is, the corresponding values of the energy
$\tilde W_t$ increase.
\begin{figure}[H]
\begin{center}
\includegraphics[width=.45\linewidth]{spectrums}
\end{center}
\vspace{-0.5cm}
\caption{The energy spectra of the ``monopole'' solutions for different values of the nonlinearity parameter $\lambda$.
}
\label{spectrums}
\end{figure}
Our main purpose here is to study the dependence of the value
of the mass gap $\Delta$ on the nonlinearity parameter $\lambda$ characterizing the properties of the nonlinear potential of the spinor field $\psi$.
The corresponding results of computations are given in Fig.~\ref{Delta_vs_lambda}.
In turn, Fig.~\ref{position_MG} shows the dependence of the position of the mass gap on the nonlinearity parameter $\lambda$.
By the position of the mass gap, we mean the value of the frequency $\tilde E_\Delta$
for which the energy spectrum has a minimum.
It is of interest to note that the position of the mass gap $\tilde E_\Delta$ is practically independent of the nonlinearity parameter
$\lambda$. One can assume that this is indeed the case and that the deviations from this value are due to errors in the numerical calculations.
\begin{figure}[H]
\begin{minipage}[t]{.45\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{Delta_vs_lambda}
\end{center}
\vspace{-0.5cm}
\caption{The dependence of the mass gap $\Delta$ on the nonlinearity parameter $\lambda$.
}
\label{Delta_vs_lambda}
\end{minipage} \hfill
\begin{minipage}[t]{.47\linewidth}
\begin{center}
\includegraphics[width=1\linewidth]{position_MG}
\end{center}
\vspace{-0.5cm}
\caption{The dependence of the position of the mass gap $\tilde E_\Delta$ on the nonlinearity parameter $\lambda$.
}
\label{position_MG}
\end{minipage} \hfill
\end{figure}
\section{Conclusions}
In the present paper we have studied the dependence of the value of the mass gap and its position on the parameter describing
the nonlinearity of the spinor field in the Dirac equation. We have found out that the value of the mass gap,
which is assumed to be a global minimum of the energy spectrum of the ``monopole'' solution in SU(2) Yang-Mills theory with a source
in the form of a nonlinear spinor field, depends smoothly on the nonlinearity parameter of the spinor field.
We use the word ``monopole'' in quotation marks because the magnetic field decreases asymptotically as $r^{-3}$,
unlike the 't~Hooft-Polyakov monopole solution where the magnetic field decreases according to the Coulomb law: $r^{-2}$.
It is of interest that the position of the mass gap is practically independent of the value of the nonlinearity parameter,
at least in the range of values of the spinor field nonlinearity parameter considered here.
Thus, the study indicates that the mass gap in SU(2) Yang-Mills theory with a source
in the form of a nonlinear spinor field does exist for different types of nonlinearity and depends
smoothly on the nonlinearity parameter.
\section*{Acknowledgements}
The work was supported by the program No.~BR10965191 (Complex Research in Nuclear and Radiation Physics, High Energy Physics and Cosmology for the Development of Competitive Technologies)
of the Ministry of Education and Science of the Republic of Kazakhstan. We are also grateful to the Research Group Linkage Programme of the Alexander von Humboldt Foundation for the support of this research.
\section{Introduction}\label{section1}
In this paper, we investigate the structure of the ring generated by the cohomology classes of special cycles
in orthogonal Shimura varieties over totally real fields.
For a quadratic space $V$ of dimension $m+2$ over a totally real field $F$ of degree $d$ over ${\mathbb Q}$ with signature
\begin{equation}\label{def-sig-V}
\text{\rm sig}(V) = (m,2)^{d_+}\times (m+2,0)^{d-d_+},
\end{equation}
the special cycles and the subring they generate in Chow groups and in cohomology of the associated Shimura variety $\text{\rm Sh}(V)$
were considered in \cite{kudla.rem-gen}. The case where $d_+=1$ was previously studied in \cite{K.duke}.
In the present paper, we consider the subring of cohomology, equipping it with
an inner product given by the cup product of classes
followed by a degree map on the top cohomology. In particular, classes not in complementary degrees pair to zero.
The reduced ring of special cycles $SC^\bullet(V)$ is obtained by taking the quotient of the special cycle ring by the radical of this pairing.
In order to work with compact quotients, we assume that $d_+<d$ or that $V$ is anisotropic. We prove two main results.
First, as a consequence of the Siegel-Weil formula, we show that the inner product of elements of $SC^\bullet(V)$ is determined by
the Fourier coefficients of pullbacks
of Hilbert-Siegel-Eisenstein series to products of Hilbert-Siegel subspaces under embeddings
\begin{equation}\label{diag-2}
\H_{n_1}^d\times \H_{n_2}^d\ \longrightarrow \ \H_m^d,\quad [\tau_1,\tau_2]\mapsto\begin{pmatrix} \tau_1&{}\\{}&\tau_2\end{pmatrix} \qquad n_1+n_2=m.
\end{equation}
Moreover,
the products in the ring $SC^\bullet(V)$ are completely determined by the Fourier coefficients of pullbacks
of Hilbert-Siegel-Eisenstein series to triple products of Hilbert-Siegel subspaces
respect to embeddings
\begin{equation}\label{diag-3}
\H_{n_1}^d\times \H_{n_2}^d\times \H_{n_3}^d \longrightarrow \H_m^d, \quad [\tau_1,\tau_2,\tau_3] \mapsto \begin{pmatrix} \tau_1&{}&{}\\{}&\tau_2&{}\\
{}&{}&\tau_3\end{pmatrix},\qquad n_1+n_2+n_3 =m.
\end{equation}
Here $\H_n$ denotes the Siegel upper half space of genus $n$. Precise statements are given in Theorem~A in Section~2.
Second, as a consequence of the `matching principle' of \cite{K.Bints}, we show that for quadratic spaces $V$ and $V'$ of dimension $m+2$ over $F$
that are isomorphic at all finite places, the reduced special cycle rings are isometrically isomorphic, $SC^\bullet(V) \simeq SC^\bullet(V')$, Theorem~B in Section~1. Note that the associated Shimura
varieties may have different dimensions, since $d_+(V)$ and $d_+(V')$ need not be equal, although they have the same parity.
Since the special cycles occur in codimensions $nd_+$, the isomorphism can involve a shift in degrees as well.
Finally, when $d_+(V)$ is even, there is an associated totally positive definite space $V_+$ such that $V$ and $V_+$ are isomorphic at all finite
places. This space is unique up to isometry. In Section~\ref{section4}, we give a combinatorial construction, involving representation numbers, of a graded ring $\text{\rm SC}^\bullet(V_+)$ of `special cycles' associated to $V_+$
and show that there is a comparison isomorphism $\text{\rm SC}^\bullet(V) \ {\overset{\sim}{\longrightarrow}}\ \text{\rm SC}^\bullet(V_+)$. In particular, the cohomological
special cycle ring of the Shimura variety $\text{\rm Sh}(V)$ for $d_+(V)$ even has a purely combinatorial description.
Our proof of these comparison results is indirect and depends on the Siegel-Weil formula, which might be regarded as a weak type of relative trace formula.
It would be of interest to find a more direct geometric proof.
It should be noted that we work on orthogonal Shimura varieties with many connected components and with ad\`elic weighted special cycles
on them. It seems possible that our results can be refined to cover the cohomological special cycle rings of the individual components of these varieties,
but this would require a `twisted' variant of the Siegel-Weil formula involving the automorphic characters of special orthogonal
groups obtained as a composition of the spinor norm with quadratic Hecke characters of $F$. A few hints at such a formula occur in the literature,
cf. \cite{snitz} and \cite{gan-snitz}. For example, the result of \cite{snitz} is used in \cite{KRYbook} to isolate the degrees of $0$-cycles
on individual connected components of Shimura curves over ${\mathbb Q}$.
Finally, it should be straightforward to extend the results of this paper and of \cite{kudla.rem-gen} to the case of unitary groups of
signature $(m,1)^{d_+}\times (m+1,0)^{d-d_+}$. The only exception is the modularity of the Chow group valued generating series, proved in
\cite{kudla.rem-gen} to be a consequence of the Bloch-Beilinson conjecture, since the proof there depends on a combination of the embedding trick
with a vanishing result for odd Betti numbers in relatively small degree. This vanishing result is a peculiarity of the group $\text{\rm SO}(m,2)$,
and is not available for the unitary case.
In the next two sections of the introduction, we give a more detailed description of our results. In Section~\ref{section1.1} we review the notation and results of \cite{kudla.rem-gen}.
In Section~\ref{subsec-1.2}, we give more precise statements.
\subsection{Background}\label{section1.1}
For a totally real field $F$ of degree $d$, let $\Sigma=\{\sigma\}$ be the set of embeddings of $F$ into ${\mathbb R}$.
For $(V,Q)$ a quadratic space of dimension $m+2$ over $F$ and $\sigma\in \Sigma$, let $V_\sigma=V\tt_{F,\sigma}{\mathbb R}$, and suppose that the signature
of $V$ is given by
\begin{equation}\label{def-sigV}
\text{\rm sig}(V_\sigma)= \begin{cases} (m,2)&\text{if $\sigma\in \Sigma_+(V)$,}\\
\noalign{\smallskip}
(m+2,0)&\text{otherwise,}
\end{cases}
\end{equation}
for a subset $\Sigma_+=\Sigma_+(V)$ of $\Sigma$ with $|\Sigma_+|=d_+$.
We assume that $1\le d_+<d$ or that $V$ is anisotropic, leaving aside the problem of extending our results to the non-compact case.
Let $G= R_{F/{\mathbb Q}}\text{\rm GSpin}(V)$ and let
\begin{equation}\label{DVS}
D = \prod_{\sigma\in \Sigma_+} D^{(\sigma)}, \qquad D^{(\sigma)} = \{ \, z \in \text{\rm Gr}_2^o(V_\sigma) \mid \ Q\mid_z <0\,\},
\end{equation}
where $\text{\rm Gr}_2^o(V_\sigma)$ is the Grassmannian of oriented $2$-planes in $V_\sigma$. For a compact open subgroup
$K \subset G({\mathbb A}_f)$, let
$$S_K = G({\mathbb Q})\backslash D\times G({\mathbb A}_f)/K,$$
and recall that $S_K$ is isomorphic to the set of complex points of a projective variety which is smooth if $K$ is neat. The canonical model is defined over a reflex field determined by $\Sigma_+$, but
we will not need this for the moment.
The Chow groups and (Betti) cohomology groups (with complex coefficients) of these varieties, as $K$ varies, form an inverse system, and we define
$$
\text{\rm CH}^{k}(S)= \varinjlim\limits_K\, \text{\rm CH}^{k}(S_K)\qquad\text{and}\qquad
H^{k}(S)= \varinjlim\limits_K\, H^{k}(S_K),
$$
and graded rings
$$\text{\rm CH}^{\bullet}(S) = \bigoplus_{k=0}^{md_+}\text{\rm CH}^k(S), \qquad\text{and}\qquad H^\bullet(S) = \bigoplus_{k=0}^{2m d_+} H^{k}(S)$$
under intersection product and cup product respectively.
The group $G({\mathbb A}_f)$ acts naturally on these rings. Of course, we will only be concerned with a subring of classes of Hodge type $(p,p)$.
As explained in \cite{K.duke}, p.45, we have
$$\pi_0(S_K) \simeq G({\mathbb Q})_+\backslash G({\mathbb A}_f)/K \simeq F^\times_{{\mathbb A}_f}/F^\times_+ \nu(K),$$
where $\nu:G\text{\rm ra} R_{F/{\mathbb Q}}{\mathbb G}_m$ is the spinor norm and $F^\times_+$ is the group of totally positive elements in $F^\times$. Thus
\begin{equation}\label{H0}
H^0(S) = C(F^\times_{{\mathbb A}_f}/F^\times_+,{\mathbb C}) = \varinjlim\limits_K \, C(F^\times_{{\mathbb A}_f}/F^\times_+ \nu(K),{\mathbb C})
\end{equation}
is the space of continuous complex valued functions on $F^\times_{{\mathbb A}_f}/F^\times_+$. In particular, there is a distinguished element $1\!\!1\in H^0(S)$
given by the constant function $1$, which gives the identity element of the ring $H^\bullet(S)$. We also write $1\!\!1$ for the analogous class in $\text{\rm CH}^0(S)$.
\begin{rem}\label{rem-chi-classes} A little more generally, for any character $\text{\boldmath$\chi$\unboldmath}$ of $F^\times_{{\mathbb A}}/F^\times F^\times_{\infty,+} \nu(K)$, we obtain, via (\ref{H0}), a class $\text{\boldmath$\chi$\unboldmath}_K \in H^0(S_K)$ and
a class $\text{\boldmath$\chi$\unboldmath}\in H^0(S)$. Of course there are analogous classes, which we denote by the same symbol, in $\text{\rm CH}^0(S_K)\tt_{\mathbb Q} E(\text{\boldmath$\chi$\unboldmath})$ and $\text{\rm CH}^0(S)\tt_{\mathbb Q} E(\text{\boldmath$\chi$\unboldmath})$, where $E(\text{\boldmath$\chi$\unboldmath})$ is the field generated by the values
of $\text{\boldmath$\chi$\unboldmath}$.
\end{rem}
The weighted special cycles are defined in Sections 5 and 10 of \cite{kudla.rem-gen}. For $1\le n\le m$, a Schwartz function $\varphi\in S(V({\mathbb A}_f)^n)^K$ and $T\in {\text{\rm Sym}}_n(F)$,
there are classes $Z(T,\varphi,K) \in \text{\rm CH}^{nd_+}(S_K)$ and their images $[Z(T,\varphi,K)] \in H^{2nd_+}(S_K)$ under the cycle class map\footnote{
Note the slight shift in notation from that of \cite{kudla.rem-gen} where the image was denoted by $cl([Z(T,\varphi,K)] )$.}
$$\text{cl}_k: \text{\rm CH}^k(S_K) \longrightarrow H^{2k}(S_K).$$
For example, for $T=0$,
$$Z(0,\varphi,K) = \varphi(0)\,\text{\boldmath$c$\unboldmath}_{S_K}^n,$$
where $\text{\boldmath$c$\unboldmath}_{S_K}$ is the `co-tautological' Chern class, \cite{kudla.rem-gen} (2.3).
The weighted cycles are compatible with pullback. For a subgroup $K'\subset K$ and the resulting projection $\text{\rm pr}: S_{K'} \text{\rm ra} S_K$, we have
$\text{\rm pr}^*(Z(T,\varphi,K) )= Z(T,\varphi,K')$.
Thus, there are classes in the direct limits
$$
Z(T,\varphi) \in \text{\rm CH}^{nd_+}(S)\qquad\text{and}\qquad [Z(T,\varphi)] \in H^{2n d_+}(S).
$$
One of the main results of \cite{kudla.rem-gen}, Proposition~5.2, is the following product formula. For $T_i\in {\text{\rm Sym}}_{n_i}(F)$ and
$\varphi_i\in S(V({\mathbb A}_f)^{n_i})$,
\begin{equation}\label{prod-ch}
Z(T_1,\varphi_1)\cdot Z(T_2,\varphi_2) = \sum_{\substack{ T\in {\text{\rm Sym}}_{n_1+n_2}(F)_{\ge 0}\\ \noalign{\vskip 2pt} T = \begin{pmatrix} \scriptstyle T_1&*\\{}^t*&\scriptstyle T_2\end{pmatrix}}} Z(T,\varphi_1\tt\varphi_2).
\end{equation}
There is a corresponding formula for the cup product of the cohomology classes $[Z(T_1,\varphi_1)]$ and $[Z(T_2,\varphi_2)]$.
Thus, the span of the special cycle classes, together with the class $1\!\!1$ for $n=0$, define subrings
which we call {\it special cycle class rings}.
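In the special case of a totally positive definite lattice, the weighted cycles degenerate to representation numbers, and the shadow of the product formula (\ref{prod-ch}) becomes an elementary counting identity: the number of pairs of lattice vectors with prescribed diagonal Gram blocks, summed over the off-diagonal entries, factors into a product. The following sketch (our illustration, with the Euclidean lattice ${\mathbb Z}^2$ and $n_1=n_2=1$ as arbitrary choices) checks this numerically:

```python
from itertools import product

# r(t): representation number of t by the form x^2 + y^2 on Z^2.
# r2(t1, b, t2): number of pairs (v, w) with Gram matrix [[t1, b], [b, t2]].
# Summing r2 over the off-diagonal entry b recovers r(t1) * r(t2), the
# scalar shadow of the product formula for the generating series.
N = 5
vecs = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]

def r(t):
    return sum(1 for v in vecs if v[0] ** 2 + v[1] ** 2 == t)

def r2(t1, b, t2):
    return sum(1 for v, w in product(vecs, vecs)
               if v[0] ** 2 + v[1] ** 2 == t1
               and w[0] ** 2 + w[1] ** 2 == t2
               and v[0] * w[0] + v[1] * w[1] == b)

for t1, t2 in [(1, 1), (2, 5), (4, 4)]:
    # |b| <= sqrt(t1*t2) by Cauchy-Schwarz, so this range covers all pairs
    assert sum(r2(t1, b, t2) for b in range(-t1 * t2, t1 * t2 + 1)) == r(t1) * r(t2)
```

This is only the totally definite, scalar-valued analogue; the cycle-level statement (\ref{prod-ch}) refines it by keeping track of the classes $Z(T,\varphi)$ rather than their counts.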
\begin{rem}\label{twisted-SCs} Using the classes $\text{\boldmath$\chi$\unboldmath}_K \in H^0(S_K)$ and $\text{\boldmath$\chi$\unboldmath} \in H^0(S)$, we can define slight variants of the weighted cycles
\begin{align*}
Z(T,\varphi,K,\text{\boldmath$\chi$\unboldmath}) &= Z(T,\varphi,K)\cdot \text{\boldmath$\chi$\unboldmath}_K\quad\, \in \text{\rm CH}^{nd_+}(S_K)\tt_{\mathbb Q} E(\text{\boldmath$\chi$\unboldmath}),\\
\noalign{\smallskip}
Z(T,\varphi, \text{\boldmath$\chi$\unboldmath}) &= Z(T,\varphi)\cdot \text{\boldmath$\chi$\unboldmath} \qquad\quad \ \in \text{\rm CH}^{nd_+}(S)\tt_{\mathbb Q} E(\text{\boldmath$\chi$\unboldmath}),
\end{align*}
and their analogues in cohomology, where we are shifting by a character of the component group.
By taking suitable linear combinations of these cycles we can obtain special cycles supported on a given connected component.
\end{rem}
There is a (formal) generating series for special cycle classes. For $\tau= (\tau_\sigma)_{\sigma\in \Sigma} \in \H_n^d$, where $\H_n$ is the Siegel upper half-space of genus $n$,
and $T\in {\text{\rm Sym}}_n(F)$, let
$$\qbold^T = e(\,\sum_\sigma \text{\rm tr}(\sigma(T)\tau_\sigma)\,), \qquad e(t) = e^{2\pi i t}.$$
Then, for $\varphi\in S(V({\mathbb A}_f)^n)$, define
\begin{equation}\label{CH-gen-fun}
\phi_n(\tau;\varphi) = \sum_{T\in {\text{\rm Sym}}_n(F)_{\ge 0}} Z(T,\varphi)\cdot \qbold^T \ \in \ \text{\rm CH}^{nd_+}(S)[[\qbold]],
\end{equation}
a formal power series with coefficients in the Chow group, and the corresponding series
\begin{equation}\label{B-gen-fun}
\phi^B_n(\tau;\varphi) = \sum_{T\in {\text{\rm Sym}}_n(F)_{\ge 0}} [Z(T,\varphi)]\cdot \qbold^T \ \in \ H^{2nd_+}(S)[[\qbold]],
\end{equation}
with coefficients in the Betti cohomology groups.
The series $\phi_n^B(\tau;\varphi)$ is the $\qbold$-expansion of a
Hilbert-Siegel modular form of parallel weight $(\frac{m}2+1, \dots, \frac{m}2+1)$.
As shown in \cite{kudla.rem-gen}, the modularity of the Chow group valued series $\phi_n(\tau;\varphi)$ follows from the Bloch-Beilinson conjecture about injectivity of the
Abel-Jacobi map.
The Betti version of the product formula (\ref{prod-ch}) is equivalent to the identity
$$\phi^B_n(\begin{pmatrix} \tau_1&{}\\{}&\tau_2\end{pmatrix}, \varphi_1\tt\varphi_2) = \phi^B_{n_1}(\tau_1;\varphi_1)\cdot \phi^B_{n_2}(\tau_2,\varphi_2)$$
on Hilbert-Siegel modular forms, where the product on the right side is given by the cup product in $H^\bullet(S)$.
\subsection{Results}\label{subsec-1.2}
It is notable that the weight of the generating series $\phi_n^B(\tau;\varphi)$ depends only on $m$ and not on $d_+$.
Similarly, $d_+$ plays almost no role in the structure of the product formula (\ref{prod-ch}) or its analogue in cohomology.
This suggests that there may be further relations among the subrings of cohomology and of Chow groups associated to
the Shimura varieties with differing archimedean data.
We sometimes write $\text{Sh}(V)$ or $\text{Sh}(V)_K$ rather than
$S$ or $S_K$ for the Shimura variety associated to $V$, when we want to vary $V$.
Since our access to the structure of these rings will be via intersection numbers, we introduce `reduced' or `numerical' versions.
Since our results concern only the ring in cohomology, we will now restrict our discussion to that case and,
for convenience, we write
$$\text{\rm SC}^\bullet(V)^\natural\subset H^{2\bullet d_+}(\text{Sh}(V))$$
for the subring of special cycle classes.
On the top degree cohomology group $H^{2md_+}(S_K)$ there is a normalized degree map
\begin{equation}\label{def-deg-K}
\deg_K: H^{2md_+}(S_K) \longrightarrow {\mathbb C}, \qquad z= [\b] \mapsto \text{\rm vol}(K/K\cap Z({\mathbb Q})) \,\int_{S_K} \b,
\end{equation}
where $\b$ is a $2md_+$-form on $S_K$ representing the class $z$. Here $Z$ is the identity component of the center of $G$
and the volume is computed with respect to
a Haar measure on $G({\mathbb A}_f)$ that will be specified in Section~\ref{subsec2.1} below. With this normalization, the degree is invariant under pullback for the
coverings $\text{\rm pr}:S_{K'}\rightarrow S_K$, for $K'\subset K$, and hence gives a well defined map
\begin{equation}\label{def-deg}
\deg: H^{2md_+}(S) \longrightarrow {\mathbb C}.
\end{equation}
We extend this map by zero on $H^k(S)$ for $k<2md_+$.
Define an inner product on the cohomology ring $H^{\bullet}(S)$ by
$$\gs{Z_1}{Z_2} := \deg(Z_1\cdot Z_2),$$
and consider its restriction to the subring of special cycles.
By associativity of the cup product, the radical of this pairing on $\text{\rm SC}^\bullet(V)^\natural$ is an ideal. We
define the {\it reduced ring of special cycles}
\begin{equation}\label{red-ring}
\text{\rm SC}^\bullet(V) = \text{\rm SC}^\bullet(V)^\natural/\text{Rad}
\end{equation}
as the quotient of $\text{\rm SC}^\bullet(V)^\natural$ by this ideal.
The form $\gs{}{}$ then defines a
non-degenerate pairing on $\text{\rm SC}^\bullet(V)$.
We let
$$z(T,\varphi) = \text{ the image of $[Z(T,\varphi)]$ in $\text{\rm SC}^\bullet(V)$,}$$
and write $z(T,\varphi) = z_V(T,\varphi) $ if we want to keep track of the space $V$.
Note that, by definition, $\text{\rm SC}^0(V)^\natural = {\mathbb C}\, 1\!\!1$, for the class $1\!\!1$ defined above. Thus
$\ker(\deg:\text{\rm SC}^m(V)^\natural \rightarrow {\mathbb C})$ is the intersection of $\text{\rm SC}^m(V)^\natural$ with the radical
and hence $\text{\rm SC}^m(V) = {\mathbb C}\,1\!\!1^\vee$ for a class $1\!\!1^\vee$ with $\gs{1\!\!1}{1\!\!1^\vee}=1$.
For example, if $T\in {\text{\rm Sym}}_m(F)_{>0}$ and $\varphi\in S(V({\mathbb A}_f)^m)^K$, then $Z(T,\varphi)$ is a weighted $0$-cycle on $S_K$, and
$$z(T,\varphi) = \deg(Z(T,\varphi))\cdot 1\!\!1^\vee.$$
We want to consider how the ring $\text{\rm SC}^\bullet(V)$ varies with $V$.
Our main result concerns the case where only the archimedean part of $V$ varies.
Note that the real quadratic spaces of signatures $(m+2,0)$ and $(m,2)$
both have determinant $1$ and have Hasse invariants $+1$ and $-1$ respectively. Also recall that the character $\chi_V$ of a quadratic
space over $F$ is defined by
$$\chi_V(x) = (x, (-1)^s\det(V))_F,\qquad x\in F^\times_{\mathbb A},$$
where $s= \frac12\dim V (\dim V+1)$ and $(\,,\,)_F$ is the global quadratic Hilbert symbol. For quadratic spaces $V$ and $V'$ of dimension $m+2$ over $F$ with
$\chi_V = \chi_{V'}$,
we say that $V\cong_f V'$
if there is an isometry $V'_{\frak p} \simeq V_\frak p$ for each finite place $\frak p$ of $F$. Here $V_\frak p = V\tt_F F_\frak p$.
For spaces $V$ and $V'$ with $V\cong_f V'$, we fix an isomorphism $V({\mathbb A}_f) \simeq V'({\mathbb A}_f)$ and hence obtain isomorphisms
\begin{equation}\label{naive-matching}
\rho^n_{V,V'}:S(V({\mathbb A}_f)^n) \simeq S(V'({\mathbb A}_f)^n), \qquad \varphi \mapsto \varphi',
\end{equation}
for all $n$, compatible with tensor products and with the action of the metaplectic group via the Weil
representation. Note that, by the product formula for the Hasse invariant, $V\cong_f V'$ implies that $d_+(V)$ and $d_+(V')$ have the same parity
but are otherwise unconstrained. The following result will be proved in Section 3 as a consequence of Theorem~A of Section 2 and the matching principle.
\begin{theob}\label{thmA} Suppose that $V\cong_f V'$ and that
$1\le d_+(V), d_+(V') <d$. \hfill\break
Then
the map
$$\text{\rm SC}^\bullet(V) \longrightarrow \text{\rm SC}^\bullet(V'), \qquad z_V(T,\varphi) \mapsto z_{V'}(T,\varphi'),\qquad \varphi'= \rho^n_{V,V'}(\varphi),$$
is well defined. Moreover, this map is an isometry and a ring isomorphism.
\end{theob}
A striking aspect of these isomorphisms is that, when $d_+(V)\ne d_+(V')$, they relate classes in different degrees!
{\bf Example.} As a concrete example, consider the case where $F$ is the maximal real subfield of ${\mathbb Q}(\mu_{19})$, so that $d=9$, and take $m=3$.
Let $V$ be a quadratic space over $F$ with signature $((3,2),(5,0)^{8})$ so that $d_+(V)=1$.
Then, for a fixed neat compact open subgroup $K$ in
$G({\mathbb A}_f)$, the variety $\text{Sh}(V)_K$ is a smooth projective $3$-fold, perhaps with several connected components. It can be viewed as bearing the same relation to
the classical Siegel $3$-fold over ${\mathbb Q}$ as a Shimura curve over $F$ bears to the classical modular curve. The special cycles are generalized Humbert surfaces,
for $n=1$, Shimura curves, for $n=2$, and $0$-cycles, for $n=3$. Their cohomology classes, taken up to numerical equivalence,
$$\text{\rm SC}^n(V)^K = \text{span}\{\,Z(T,\varphi)\, \mid T\in {\text{\rm Sym}}_n(F)_{\ge0}, \ \varphi\in S(V({\mathbb A}_f)^n)^K\,\}/\sim,$$
yield a reduced intersection ring, $\text{\rm SC}^\bullet(V)^K$, and, passing to the limit over $K$,
$$\text{\rm SC}^\bullet(V) = \text{\rm SC}^0(V) \oplus \text{\rm SC}^1(V)\oplus \text{\rm SC}^2(V)\oplus \text{\rm SC}^3(V), \qquad \text{\rm SC}^0(V) = {\mathbb C} \,1\!\!1, \ \text{\rm SC}^3(V)= {\mathbb C} \,1\!\!1^\vee.$$
Note that $\text{\rm SC}^\bullet(V)$ is a {\it subquotient} of the cohomology ring of $\text{Sh}(V)$.
For an {\it odd} integer $r$ with $1\le r <9$, we consider quadratic spaces $V'= V'[r]$ over $F$ with a fixed isomorphism
\begin{equation}\label{f-id}
V'({\mathbb A}_f)\simeq V({\mathbb A}_f)\end{equation}
and with
$$\text{\rm sig}(V') = ((3,2)^{r},(5,0)^{d-r}).$$
Varying $r$ and the subset $\Sigma_+(V'[r])$ of indefinite places, we have $\sum_r{9 \choose r} =255$ such spaces, up to isomorphism.
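Indeed, for each odd $r$, the subset $\Sigma_+(V'[r])$ may be an arbitrary $r$-element subset of the $9$ archimedean places, so that
$$
\sum_{r\ \text{odd}} \binom{9}{r} = \binom{9}{1}+\binom{9}{3}+\binom{9}{5}+\binom{9}{7} = 9+84+126+36 = 255.
$$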
Identifying $G(V)({\mathbb A}_f)$ and $G(V')({\mathbb A}_f)$ via (\ref{f-id}), we have a collection of Shimura varieties $\text{Sh}(V')_K$
of dimensions $3 r = 3, 9, 15,$ and $21$ with isomorphic reduced intersection rings, $\text{\rm SC}^\bullet(V')^K$.
\subsection{Outline of contents}
This ends our extended introduction. In Section~\ref{section2}, we review the Siegel-Weil formula and show that it identifies the image
of the generating series $\phi^B_n(\tau;\varphi)$ of (\ref{B-gen-fun}) under the degree map with a special value of a Siegel Eisenstein series,
Proposition~\ref{prop2.2}. By considering Fourier coefficients of pullbacks, we obtain formulas for inner products and triple products
of special cycle classes in terms of Fourier coefficients of such pullbacks, Theorem~A and Corollary~\ref{IP-cor}. In Section~\ref{section3},
we explain how the matching principle introduced in \cite{K.Bints} yields comparisons like that of Theorem~B.
In Section~\ref{section4}, we consider the case $d_+=0$, so that $V=V_+$ is totally positive definite. For such a space
we give a combinatorial description of a `cohomology' ring and `special cycles' classes in it. Again by the Siegel-Weil formula,
we show that Theorem~B can be extended to the case $d_+=0$. In particular, the special cycle ring $\text{\rm SC}^\bullet(V)$ for $d_+(V)$ even
is isomorphic to the `special cycle' ring $\text{\rm SC}^\bullet(V_+)$ for the associated totally positive definite space.
Finally, in Section~\ref{section5}, we explain how information about local matching
can be obtained from the results of \cite{KR.rdps} and \cite{sweet.meta} on degenerate principal series representations.
\subsection{Thanks} I would like to thank Siddarth Sankaran for useful discussion about the construction of Section~\ref{section4}.
\section{The Siegel-Weil formula and intersection numbers}\label{section2}
In this section, we explain how the Siegel-Weil formula provides information about the intersection products of special cycles.
The case $d_+=1$ is treated in Section~10 of \cite{K.duke}, to which we refer for more details. We will use some of the notation of Section~5.3 of \cite{kudla.rem-gen}.
\subsection{Measures}\label{subsec2.1}
Let $A^{2md_+}(S_K)$ be the space of smooth top degree differential forms on $S_K$.
By \cite{kudla.rem-gen} (5.18), the form $\text{\boldmath$\Omega$\unboldmath}^n$, defined there, is a $(nd_+,nd_+)$-form on $S_K$ which represents
the $n$-th power of the top Chern class of the vector bundle
$\text{\boldmath$\mathcal C$\unboldmath}_S$. In particular, the form $(-1)^{md_+} \text{\boldmath$\Omega$\unboldmath}^m$ gives an invariant volume form on $D$
and descends to $S_K$.
For a fixed base point $z_0\in D^+$, let $K_\infty$ be the stabilizer of
$z_0$ in $G({\mathbb R})$. Note that $K_\infty$ contains $Z({\mathbb R})$.
Then there is an isomorphism
\begin{equation}\label{J-iso}
J:A^{2md_+}(S_K) \ {\overset{\sim}{\longrightarrow}}\ [C^\infty(G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A}))]^{K_\infty K},
\end{equation}
defined as follows.
If $\eta$ is a $2md_+$-form on $S_K$,
write
$\eta = \breve\eta\cdot (-1)^{md_+}\text{\boldmath$\Omega$\unboldmath}^{m}$ for a function $\breve\eta$ on $S_K$
and define $J(\eta)$ as the pullback of $\breve\eta$ to $G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A})$ via the natural surjective map
$$G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A}) \longrightarrow S_K = G({\mathbb Q})\backslash D\times G({\mathbb A}_f)/K, \qquad [g_\infty, g_f] \mapsto [g_\infty(z_0),g_f].$$
We define a Haar measure\footnote{Here, to lighten notation,
we write $\text{\rm SO}(V)$ rather than $R_{F/{\mathbb Q}} \text{\rm SO}(V)$.} $d_\infty g$ on $\text{\rm SO}(V)({\mathbb R}) \simeq Z({\mathbb R})\backslash G({\mathbb R})$ as follows.
For a smooth compactly supported form $\a \in A^{2md_+}_c(D)$ on $D$, write
$\a = \breve \a \cdot (-1)^{md_+} \text{\boldmath$\Omega$\unboldmath}^{m}$ for a compactly supported function $\breve\a$ on $Z({\mathbb R})\backslash G({\mathbb R})$.
Then $d_\infty g$ is defined by the condition that
\begin{equation}\label{int-D-identity}
\int_D \a = \int_{Z({\mathbb R})\backslash G({\mathbb R})} \breve \a(g)\,d_\infty g,
\end{equation}
for all such $\a$.
Let $d^Tg$ be Tamagawa measure on $\text{\rm SO}(V)({\mathbb A})$. The factorization $d^Tg = d_\infty g\cdot d_f g$, for $d_\infty g$ just defined,
determines a unique Haar measure on $\text{\rm SO}(V)({\mathbb A}_f) = Z({\mathbb A}_f)\backslash G({\mathbb A}_f)$.
Now
\begin{equation}\label{haar-centerl}
Z({\mathbb Q})\backslash Z({\mathbb A}_f) =Z({\mathbb Q}) Z({\mathbb R})\backslash Z({\mathbb A}) \simeq F^\times_{\mathbb A}/F^\times F^\times_{\mathbb R}
\end{equation}
is compact, and, taking the Haar measure $dz$ giving this space volume $1$, we obtain a Haar measure $d_f \tilde g$
on $Z({\mathbb Q})\backslash G({\mathbb A}_f)$ with $d_f\tilde g = d_f g\cdot dz$.
Finally, we have the measure $dg = d_\infty g_\infty \cdot d_f\tilde g_f$ on $Z({\mathbb R})Z({\mathbb Q})\backslash G({\mathbb A})$.
With these choices, we have, for $\eta\in A^{2md_+}(S_K)$,
\begin{equation}\label{geo-int}
\text{\rm vol}(K/(K\cap Z({\mathbb Q})), d_f\tilde g)\cdot \int_{S_K}\eta = \int_{G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A})} J(\eta)(g)\,dg.
\end{equation}
\subsection{Degree formulas}\label{subsec2.2}
We now follow Section~5.3 of \cite{kudla.rem-gen}. Recall that for $G'=G'_n= R_{F/{\mathbb Q}} \hbox{Sp}(n)$, the global metaplectic group
$\wtg({\mathbb A})$ acts on $S(V({\mathbb A})^n)$ via the Weil representation $\o=\o_V=\o_{\psi, V}$ for a fixed nontrivial character $\psi$ of $F_{\mathbb A}/F$.
Also recall the Schwartz form
\begin{equation}\label{arch-SF}
\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty = \bigotimes_{\sigma\in \Sigma_+(V)}\varphi_\sigma^{(n)} \ \tt \bigotimes_{\sigma\notin \Sigma_+(V)} \varphi_{\sigma,+}^0
\in [S(V_{\mathbb R}^n) \tt A^{(nd_+,nd_+)}(D)]^{G({\mathbb R})},
\end{equation}
where, for $\sigma\in \Sigma_+(V)$,
$$\varphi_\sigma^{(n)}\ \in S(V_\sigma^n)\tt A^{(n,n)}(D_\sigma)$$
is the Schwartz form for $V_\sigma$, and, for $\sigma\notin \Sigma_+(V)$, $\varphi_{\sigma,+}^0\in S(V_\sigma^n)$ is the Gaussian for $V_\sigma$, cf. Sections 7 and 8 of \cite{K.duke}.
For $\varphi \in S(V({\mathbb A}_f)^n)^K$, the theta form
\begin{equation}\label{theta-form}
\theta(g';\varphi) = \sum_{x\in V(F)^n} \o(g')(\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)(x), \qquad g'\in \wtg({\mathbb A}),
\end{equation}
defines a closed $(nd_+,nd_+)$-form on $S_K$ and, as a function of $g'$, is left invariant for the (canonical) image of $G'({\mathbb Q})$ in $\wtg({\mathbb A})$.
We apply (\ref{J-iso}) to the top degree form $\theta(g';\varphi) \wedge \text{\boldmath$\Omega$\unboldmath}_S^{m-n}$ on $S_K$ and obtain, for $g\in G({\mathbb A})$,
\begin{equation}\label{to-Schwartz}
J(\, \theta(g';\varphi) \wedge \text{\boldmath$\Omega$\unboldmath}_S^{m-n})(g) =(-1)^{md_+}\, \theta(g',g;\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi),
\end{equation}
where
$\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty$ is
defined by
\begin{equation}\label{arch-SF-fun}
\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty = \bigotimes_{\sigma\in \Sigma_+(V)}\breve\varphi_\sigma^{(n)} \ \tt \bigotimes_{\sigma\notin \Sigma_+(V)} \varphi_{\sigma,+}^0
\in S(V_{\mathbb R}^n),
\end{equation}
with
\begin{equation}\label{phKMtoS}
\varphi^{(n)}_\sigma(x)\wedge \text{\boldmath$\Omega$\unboldmath}^{m-n} = \breve\varphi^{(n)}_\sigma(x)\,\text{\boldmath$\Omega$\unboldmath}^m.
\end{equation}
In particular,
$\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi \in S(V({\mathbb A})^n)$,
and the function in (\ref{to-Schwartz}) is the usual theta function
\begin{equation}\label{J-theta-form}
\theta(g',g;\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi) = \sum_{x\in V(F)^n} \o(g')(\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)(g^{-1}x).
\end{equation}
Also note that
$$\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty(0)\wedge \text{\boldmath$\Omega$\unboldmath}^{m-n} = \text{\boldmath$\Omega$\unboldmath}^m.$$
This accounts for the sign change, due to the fact that, in the definition of $J$, it is $(-1)^{md_+}\text{\boldmath$\Omega$\unboldmath}^m$, the top power of the K\"ahler form, that is used.
Now, by (5.20) of \cite{kudla.rem-gen}, the cohomology class of the theta form is the generating series
\begin{equation}\label{KM-theo}
[\theta(g'_\tau;\varphi)] = N(\det(v))^{\frac{m+2}4} \,\phi^B_n(\tau;\varphi) = N(\det(v))^{\frac{m+2}4} \,
\sum_{T\in {\text{\rm Sym}}_n(F)_{\ge 0}} [Z(T,\varphi)]\, \qbold^T.
\end{equation}
Here, as in \cite{K.Bints}, Section~1 (in the case $n=1$), $g'_\tau=[g_\tau,1]$ is an element of the metaplectic cover $\widetilde{G'}({\mathbb R})$
where $g_\tau\in G'({\mathbb R})$ has components
$$(g_\tau)_\sigma = \begin{pmatrix} 1&u_\sigma\\ {}&1\end{pmatrix} \begin{pmatrix} a_\sigma&{}\\{}&{}^ta_\sigma^{-1}\end{pmatrix}, \qquad \tau_\sigma = u_\sigma + i v_\sigma\, \in \H_n, \quad v_\sigma = a_\sigma\,{}^ta_\sigma.$$
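For instance, when $n=1$, one may take $a_\sigma = v_\sigma^{1/2}$, so that
$$(g_\tau)_\sigma = \begin{pmatrix} 1&u_\sigma\\ {}&1\end{pmatrix} \begin{pmatrix} v_\sigma^{1/2}&{}\\{}&v_\sigma^{-1/2}\end{pmatrix}, \qquad (g_\tau)_\sigma(i) = u_\sigma+iv_\sigma = \tau_\sigma,$$
the standard element carrying the base point $i\in \H_1$ to $\tau_\sigma$.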
Thus, recalling (\ref{def-deg-K}) and using (\ref{geo-int}), we have
\begin{equation}\label{deg-form-1}
\deg(\phi_n^B(\tau,\varphi)\cdot \text{\boldmath$c$\unboldmath}_{S}^{m-n}) = (-1)^{md_+}N(\det(v))^{-\frac{m+2}4}\, \int_{G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A})} \theta(g_\tau',g;\breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)\,dg.
\end{equation}
Similarly, we note that the Schwartz forms are compatible with wedge products so that,
for $n=n_1+n_2$,
$$\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty = \text{\boldmath$\ph$\unboldmath}^{(n_1)}_\infty \wedge \text{\boldmath$\ph$\unboldmath}^{(n_2)}_\infty.$$
Thus, for $\tau_1\in \H_{n_1}^d$, $\tau_2\in \H_{n_2}^d$, and $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$,
\begin{equation}\label{theta-prod}
\theta(g'_{\tau_1};\varphi_1) \wedge \theta(g'_{\tau_2};\varphi_2) = \theta(g'_\tau; \varphi_1\tt\varphi_2),\qquad \tau = \begin{pmatrix} \tau_1&{}\\{}&\tau_2\end{pmatrix}.
\end{equation}
Again computing the degree, we have, for $n_1+n_2=m$,
\begin{align}\label{deg-form-2a}
&\gs{\,\phi^B_{n_1}(\tau_1,\varphi_1)}{\phi^B_{n_2}(\tau_2,\varphi_2)}\\
\noalign{\smallskip}
{}&=
(-1)^{md_+}N(\det(v_1)\det(v_2))^{-\frac{m+2}4}\, \int_{G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A})} \theta(g_\tau',g;\breve\text{\boldmath$\ph$\unboldmath}^{(m)}_\infty\tt\varphi_1\tt\varphi_2)\,dg,\notag
\end{align}
for $\tau$ as in (\ref{theta-prod}).
\subsection{Consequences of the Siegel-Weil formula}
We next apply the Siegel-Weil formula as in section 10 of \cite{K.duke}.
Let $I_n(s,\chi_V)$ be the degenerate principal series representation of $\wtg({\mathbb A})$ associated to $\chi_V$, induced from the Siegel
parabolic $P'$ of $G'$.
Define the $\wtg({\mathbb A})$-intertwining map
$$\l_V: S(V({\mathbb A})^n)\longrightarrow I_n(s_0,\chi), \qquad \text{\boldmath$\ph$\unboldmath}\mapsto \l_V(\text{\boldmath$\ph$\unboldmath})(g') = \o(g')\text{\boldmath$\ph$\unboldmath}(0),\qquad s_0 = \frac12\dim V - \rho_n,$$
$\rho_n= \frac12(n+1)$. The standard section $\P(s;\text{\boldmath$\ph$\unboldmath}) \in I_n(s,\chi)$ attached to $\text{\boldmath$\ph$\unboldmath}$ is given by
$$\P(g',s;\text{\boldmath$\ph$\unboldmath}) = \o(g')\text{\boldmath$\ph$\unboldmath}(0)\cdot |a(g')|^{s-s_0}.$$
Here we follow the notation of \cite{KR.crelle}, \cite{sweet.thesis}, and \cite{gan.qiu.takeda}.
The Siegel-Eisenstein series is defined by the series
$$E(g',s,\l_V(\text{\boldmath$\ph$\unboldmath})) = \sum_{\gamma\in P'(F)\backslash G'(F)} \P(\gamma g',s;\text{\boldmath$\ph$\unboldmath}),$$
in the half-plane of absolute convergence $\Re(s)>\rho_n$, and it has a meromorphic continuation to
the whole $s$-plane.
Since $V$ is anisotropic, the Eisenstein series is holomorphic at $s=s_0$ and, by the Siegel-Weil formula, \cite{KR.crelle},
$$E(g',s_0,\l_V(\text{\boldmath$\ph$\unboldmath})) = \int_{O(V)(F)\backslash O(V)({\mathbb A}_F)} \theta(g',g;\text{\boldmath$\ph$\unboldmath})\,dg$$
where
$$\text{\rm vol}(O(V)(F)\backslash O(V)({\mathbb A}_F),dg)=1.$$
In fact, we have
\begin{equation}\label{Tamagawa-2}
\int_{O(V)(F)\backslash O(V)({\mathbb A}_F)} \theta(g',g;\text{\boldmath$\ph$\unboldmath})\,dg = \frac12\int_{\text{\rm SO}(V)(F)\backslash \text{\rm SO}(V)({\mathbb A}_F)} \theta(g',g;\text{\boldmath$\ph$\unboldmath})\,d^Tg,
\end{equation}
where $d^Tg$ is the Tamagawa measure on $\text{\rm SO}(V)({\mathbb A}_F)$.
This follows from the argument of Section~4 of \cite{K.Bints}, where we use the fact that the sign representation of $O(V)(F_\frak p)$
does not occur in the local theta correspondence with $G'_\frak p$ for $n<\dim V$, \cite{rallisHDC}, \cite{rallisHDC.bis}.
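Recall that the Tamagawa number of $\text{\rm SO}(V)$ is $2$, i.e.,
$$\text{\rm vol}(\text{\rm SO}(V)(F)\backslash \text{\rm SO}(V)({\mathbb A}_F), d^Tg) = 2,$$
so that (\ref{Tamagawa-2}) simply says that the average value of the theta function over $O(V)(F)\backslash O(V)({\mathbb A}_F)$ coincides with its average over $\text{\rm SO}(V)(F)\backslash \text{\rm SO}(V)({\mathbb A}_F)$.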
On the other hand, since the action of $G({\mathbb A})$ on $S(V({\mathbb A})^n)$ factors through $\text{\rm SO}(V)({\mathbb A})$, the integral on the right side of
(\ref{deg-form-1}) is
\begin{equation}\label{basic-theta-int}
\int_{G({\mathbb Q})Z({\mathbb R})\backslash G({\mathbb A})} \theta(g',g;\breve\text{\boldmath$\ph$\unboldmath}_\infty\tt\varphi)\,dg =
\int_{\text{\rm SO}(V)(F)\backslash \text{\rm SO}(V)({\mathbb A})} \theta(g',g;\breve\text{\boldmath$\ph$\unboldmath}_\infty\tt\varphi)\,d^Tg.
\end{equation}
Since the archimedean component $\l_{V_\infty}(\breve\text{\boldmath$\ph$\unboldmath}_\infty)$ will be fixed, for $\varphi \in S(V({\mathbb A}_f)^n)$,
$\kappa = \frac{m}2+1$, and $s_0=\kappa-\rho_n$,
we let
\begin{equation}\label{good-Eis}
E(\tau,s_0,\l_{V_f}(\varphi)):= (-1)^{md_+} N(\det(v))^{-\kappa/2}\cdot
E(g'_\tau,s_0,\l_V(\breve\text{\boldmath$\ph$\unboldmath}_\infty\tt\varphi)).
\end{equation}
\begin{rem}
Note that, for convenience, we have included the sign $(-1)^{md_+}$ in the definition. This sign must be kept in
mind later.
\end{rem}
Then, from (\ref{deg-form-1}), we obtain the following expression for the degree in terms of the Eisenstein series of genus $n$.
\begin{prop}\label{prop2.2}
For $\tau\in \H_n^d$,
\begin{equation}\label{deg-form-2}
\deg(\phi_n^B(\tau,\varphi)\cdot \text{\boldmath$c$\unboldmath}_{S}^{m-n}) = 2\, E(\tau,s_0,\l_{V_f}(\varphi)),
\end{equation}
where $s_0= \kappa-\rho_n$.
\end{prop}
In particular, as a consequence of this identity, the special value $E(\tau,s_0,\l_{V_f}(\varphi))$ is a {\it holomorphic} Hilbert-Siegel modular form of parallel weight $\kappa$
with Fourier
expansion
\begin{equation}\label{Eis-FC}
E(\tau,s_0,\l_{V_f}(\varphi)) =\sum_{T\in {\text{\rm Sym}}_n(F)_{\ge0}} A(T,\l_{V_f}(\varphi))\, \qbold^T.
\end{equation}
\begin{cor}\label{cor2.2} The inner product of a class $[Z(T,\varphi)]\in H^{2nd_+}(S)$ with the class $\text{\boldmath$c$\unboldmath}_{S}^{m-n}$ is given by
$$\gs{Z(T,\varphi)}{\text{\boldmath$c$\unboldmath}_{S}^{m-n}} = \deg(Z(T,\varphi)\cdot \text{\boldmath$c$\unboldmath}_S^{m-n}) =2\, A(T, \l_{V_f}(\varphi)).$$
\end{cor}
We also obtain relations between intersection products and Fourier coefficients of pullbacks.
For $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, where $n_1+n_2=m$, we write
\begin{equation}\label{Eis-FC-2}
E(\begin{pmatrix} \tau_1&{}\\ {}&\tau_2\end{pmatrix},\frac12,\l_{V_f}(\varphi_1\tt\varphi_2)) = \sum_{T_1\in {\text{\rm Sym}}_{n_1}(F)_{\ge0}} \sum_{T_2\in {\text{\rm Sym}}_{n_2}(F)_{\ge0}}
A(T_1,T_2;\l_{V_f}(\varphi_1\tt\varphi_2))\,\qbold_1^{T_1}\,\qbold_2^{T_2}
\end{equation}
for the Fourier expansion of the pullback under (\ref{diag-2}).
Similarly, for $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, $3$, where $n_1+n_2+n_3=m$,
we write
\begin{multline}\label{Eis-FC-3}
E(\begin{pmatrix} \tau_1&{}&{}\\ {}&\tau_2&{}\\{}&{}&\tau_3\end{pmatrix},\frac12,\l_{V_f}(\varphi_1\tt\varphi_2\tt\varphi_3))\\
\noalign{\smallskip}
= \sum_{T_1\in {\text{\rm Sym}}_{n_1}(F)_{\ge0}} \sum_{T_2\in {\text{\rm Sym}}_{n_2}(F)_{\ge0}} \sum_{T_3\in {\text{\rm Sym}}_{n_3}(F)_{\ge0}}
A(T_1,T_2, T_3;\l_{V_f}(\varphi_1\tt\varphi_2\tt\varphi_3))\,\qbold_1^{T_1}\,\qbold_2^{T_2}\,\qbold_3^{T_3}
\end{multline}
for the Fourier expansion of the pullback under (\ref{diag-3}).
Note that the special value is now taken at the point $s_0=\kappa - \rho_m = \frac12$. Also note that these pullbacks have a non-trivial cuspidal
component. For example, in case (\ref{Eis-FC-2}), the cuspidal projection involves the doubling integrals \cite{PSR-doubling}, \cite{KR.annals}, \cite{gan.qiu.takeda}, and hence is
controlled by special values of L-functions.
\begin{atheo}\label{theo-B} (i) For $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, where $n_1+n_2=m$,
\begin{align*}
\deg(\,\phi_{n_1}^B(\tau_1,\varphi_1) \cdot \phi_{n_2}^B(\tau_2,\varphi_2)\,) &= 2\, E(\begin{pmatrix} \tau_1&{}\\ {}&\tau_2\end{pmatrix},\frac12,\l_{V_f}(\varphi_1\tt\varphi_2)).
\end{align*}
In particular, for $T_1\in {\text{\rm Sym}}_{n_1}(F)_{\ge 0}$ and $T_2\in {\text{\rm Sym}}_{n_2}(F)_{\ge 0}$, the intersection product of the
weighted special cycles is given by
$$\deg(\,[Z(T_1,\varphi_1)]\cdot [Z(T_2,\varphi_2)]\,) =2\, A(T_1,T_2;\l_{V_f}(\varphi_1\tt\varphi_2)),$$
for the Fourier coefficient of the pullback in (\ref{Eis-FC-2}). \hfill\break
(ii) Similarly, for $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, $3$, where $n_1+n_2+n_3=m$,
\begin{align*}
\deg(\,\phi_{n_1}^B(\tau_1,\varphi_1) \cdot \phi_{n_2}^B(\tau_2,\varphi_2)\cdot \phi_{n_3}^B(\tau_3,\varphi_3)\,) &= 2\,
E(\begin{pmatrix} \tau_1&{}&{}\\ {}&\tau_2&{}\\ {}&{}&\tau_3\end{pmatrix},\frac12,\l_{V_f}(\varphi_1\tt\varphi_2\tt\varphi_3)),
\end{align*}
and hence
$$\deg(\,[Z(T_1,\varphi_1)]\cdot [Z(T_2,\varphi_2)]\cdot [Z(T_3,\varphi_3)]\, ) = 2\,A(T_1,T_2, T_3;\l_{V_f}(\varphi_1\tt\varphi_2\tt \varphi_3)),$$
for the Fourier coefficient of the pullback in (\ref{Eis-FC-3}).
\end{atheo}
Passing to classes in $\text{\rm SC}^\bullet(V)$, we obtain the following formulas.
\begin{cor}\label{IP-cor} (i) For $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, where $n_1+n_2=m$,
$$
\gs{z(T_1,\varphi_1)}{z(T_2,\varphi_2)} = 2\,A(T_1,T_2;\l_{V_f}(\varphi_1\tt\varphi_2)).
$$
(ii) For $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$, $i=1$, $2$, $3$, where $n_1+n_2+n_3=m$,
$$
\gs{z(T_1,\varphi_1)\cdot z(T_2,\varphi_2)}{z(T_3,\varphi_3)} = 2\,A(T_1,T_2, T_3;\l_{V_f}(\varphi_1\tt\varphi_2\tt \varphi_3)).
$$
\end{cor}
Part (ii) of Corollary~\ref{IP-cor}
shows that the products in the ring $\text{\rm SC}^\bullet(V)$ are completely controlled by Fourier coefficients of pullbacks under
(\ref{diag-3}) of
certain holomorphic Hilbert-Siegel Eisenstein series of genus $m$ and of
parallel weight $(\kappa,\dots, \kappa)$, $\kappa = \frac{m}2+1 = \rho_m+\frac12$.
More precisely, note that a class $z\in \text{\rm SC}^n(V)$ is determined by its inner products $\gs{z}{z'}$ for $z'\in \text{\rm SC}^{m-n}(V)$.
In particular, the class $z(T_1,\varphi_1)\cdot z(T_2,\varphi_2) \in \text{\rm SC}^{n_1+n_2}(V)$ is determined by the inner products
with classes of the form $z(T_3,\varphi_3)$, and these are
given by (ii) of the corollary.
Similarly, the inner product on $\text{\rm SC}^\bullet(V)$ is determined by (i) of the corollary.
\begin{rem}
It will be interesting to investigate the non-vanishing of such Fourier coefficients of pullbacks to see what kind of information
about the reduced ring of special cycles can be obtained from our product formulas.
\end{rem}
\section{Comparison results}\label{section3}
Since the structure of the reduced ring of special cycles is controlled by the Siegel-Weil Eisenstein series,
the matching principle introduced in \cite{K.Bints}, section 4, yields relations
between such special cycle rings. We will use the notation of section 4 of \cite{K.Bints} generalized to $F$. A detailed
treatment can be found in \cite{KR.crelle}.
\subsection{The matching principle}
Suppose that $V_1$ and $V_2$ are quadratic spaces over $F$ with $\dim_FV_1=\dim_FV_2$ and $\chi=\chi_{V_1}=\chi_{V_2}$.
There are intertwining maps
$$\l_{V_i}: S(V_i({\mathbb A})^n) \longrightarrow I(s_0,\chi), \qquad \l_{V_i}(\varphi_i)(g') = \o_{V_i}(g')\varphi_i(0),$$
where $s_0= \kappa - \rho_n$.
\begin{defn} (\cite{K.Bints}, Definition~4.3.) Schwartz functions $\varphi_1\in S(V_1({\mathbb A})^n)$ and $\varphi_2\in S(V_2({\mathbb A})^n)$
are said to {\bf match} if
$$\l_{V_1}(\varphi_1) = \l_{V_2}(\varphi_2) \in I(s_0,\chi).$$
\end{defn}
There are obvious local analogues.
For example, in the archimedean case, for $V=V_i$, $i=1$, $2$, suppose that the signature of $V$ is given by (\ref{def-sigV}).
For $\sigma\in \Sigma_+(V)$, let $D^{(\sigma)} = D(V_\sigma)$ be the space of
oriented negative $2$-planes in $V_\sigma= V\tt_{F,\sigma}{\mathbb R}$ as in (\ref{DVS}).
For $\sigma\in \Sigma_+(V)$, let $\breve\varphi^{(n)}_\sigma$ be the Schwartz function on $V_\sigma^n$ defined by (\ref{phKMtoS}).
Let $V_{\sigma,+}$ be the quadratic space over ${\mathbb R}$ of signature $(m+2,0)$, and let
$$\varphi^{(n)}_{\sigma,+} \in S(V_{\sigma,+}^n)$$
be the Gaussian. Then, by the analogue of Corollary~4.15 of \cite{K.Bints}, we have the following matching.
\begin{lem}
$$\l_{V_\sigma}(\breve\varphi^{(n)}_\sigma) = \l_{V_{\sigma,+}}(\varphi^{(n)}_{\sigma,+})\ \in I_{n,\sigma}(s_0,\chi).$$
\end{lem}
This is immediate from the fact that these two functions are eigenvectors for $K'_\infty$ of weight $\frac{m}2+1$ and
$\breve\varphi^{(n)}_\sigma(0) = \varphi^{(n)}_{\sigma,+}(0) =1$.
If we write
$$\text{\boldmath$\ph$\unboldmath}_{\infty, i} \in S((V_i\tt_{\mathbb Q} {\mathbb R})^n)\tt A^{(nd_+^i,n d_+^i)}(D(V_i\tt_{\mathbb Q}{\mathbb R})), \qquad i=1, 2, $$
for the Schwartz forms defined by (\ref{arch-SF}) and $\breve\text{\boldmath$\ph$\unboldmath}_{\infty,i}\in S(V_{i,{\mathbb R}}^n)$ for the corresponding Schwartz functions, then these match as well.
\begin{rem} Note that matching is {\bf not} compatible with tensor products in general.
If $\varphi_{\frak p,i}^{(n_1)}\in S(V_{\frak p,i}^{n_1})$ are matching and $\varphi_{\frak p,i}^{(n_2)}\in S(V_{\frak p,i}^{n_2})$ are matching,
it need not be the case that the tensor products
$\varphi_{\frak p,1}^{(n_1)}\tt \varphi_{\frak p,1}^{(n_2)} \in S(V_{\frak p,1}^n)$ and $\varphi_{\frak p,2}^{(n_1)}\tt \varphi_{\frak p,2}^{(n_2)}\in S(V_{\frak p,2}^n)$
are matching. A rather explicit description of examples is given in Section~\ref{section5}, in particular Remark~\ref{rem5.3} (iii).
\end{rem}
{\bf Basic Observation:} (i) If $\varphi_1\in S(V_1({\mathbb A})^n)$ and $\varphi_2\in S(V_2({\mathbb A})^n)$ are matching Schwartz functions,
then the associated Siegel-Eisenstein series coincide,
\begin{equation}\label{basic-matching-id-general}
E(g',s, \l_{V_1}( \varphi_1)) = E(g',s, \l_{V_2}(\varphi_2)).
\end{equation}
(ii) If $V_1$ and $V_2$
have signatures given by (\ref{def-sigV}) for $d_+(V_1)$ and $d_+(V_2)$,
and if $\varphi_1\in S(V_1({\mathbb A}_f)^n)$ and $\varphi_2\in S(V_2({\mathbb A}_f)^n)$ are matching Schwartz functions on the finite ad\`eles, then the associated Siegel-Eisenstein series coincide,
\begin{equation}\label{basic-matching-id}
E(g',s, \l_{V_1}(\breve\text{\boldmath$\ph$\unboldmath}_{\infty,1}\tt \varphi_1)) = E(g',s, \l_{V_2}(\breve\text{\boldmath$\ph$\unboldmath}_{\infty,2}\tt\varphi_2)).
\end{equation}
\subsection{Consequences of matching} \label{section3.2}
Since the inner product and the ring structure on $\text{\rm SC}^\bullet(V)$ are controlled by the Fourier coefficients of
pullbacks of special values of such Siegel-Eisenstein series, the identity (\ref{basic-matching-id}) implies various relations.
For the remainder of this section, we slightly shift notation and suppose that $V$ and $V'$ are quadratic spaces over $F$ with $\dim_FV=\dim_FV'$, $\chi=\chi_{V}=\chi_{V'}$, and with signatures given by (\ref{def-sigV}) with $1\le d_+(V), d_+(V') <d$.
For data $T\in {\text{\rm Sym}}_n(F)$ and $\varphi\in S(V({\mathbb A}_f)^n)$, we write $Z_V(T,\varphi)$ (resp.
$z_V(T,\varphi)$) for the corresponding class in $H^{2nd_+(V)}(Sh(V))$ (resp. $\text{\rm SC}^n(V)$).
First, we have the following consequence of Corollary~\ref{cor2.2}.
\begin{prop}\label{prop3.5}
If $\varphi\in S(V({\mathbb A}_f)^n)$ and $\varphi'\in S(V'({\mathbb A}_f)^n)$ are matching functions,
then, for all $T\in {\text{\rm Sym}}_n(F)_{\ge0}$,
$$(-1)^{m d_+(V)}\deg(Z_V(T,\varphi)\cdot \text{\boldmath$c$\unboldmath}_S^{m-n}) = (-1)^{m d_+(V')}\deg(Z_{V'}(T,\varphi')\cdot \text{\boldmath$c$\unboldmath}_{S'}^{m-n}).$$
\end{prop}
\begin{rem} (i) This says that the volumes of the cycles $Z_V(T,\varphi)$ and $Z_{V'}(T,\varphi')$
with respect to a suitable power of the respective K\"ahler forms coincide.
\hfill\break
(ii) For example, note that by matching, $\varphi(0) = \l_V(\varphi)(1)= \l_{V'}(\varphi')(1) = \varphi'(0)$, whereas
$Z_V(0,\varphi) = \varphi(0)\,\text{\boldmath$c$\unboldmath}_S^n$ and $Z_{V'}(0,\varphi') = \varphi'(0)\,\text{\boldmath$c$\unboldmath}_{S'}^n$. Thus, if we can find matching $\varphi$ and $\varphi'$ with $\varphi(0)\ne 0$,
Proposition~\ref{prop3.5} gives
\begin{equation}\label{equal-Chern-no}
(-1)^{m d_+(V)}\deg(\text{\boldmath$c$\unboldmath}_S^m) = (-1)^{m d_+(V')}\deg(\text{\boldmath$c$\unboldmath}_{S'}^m),
\end{equation}
an identity between top Chern numbers.
As explained below, for $n=1$ such matching functions always exist, so that (\ref{equal-Chern-no}) holds for any pair $V$ and $V'$.
\end{rem}
Next, as a consequence of Corollary~\ref{IP-cor}, we have the following relations
between inner products in $SC^\bullet(V)$ and $SC^\bullet(V')$.
\begin{prop}\label{prop-3.6} (i) Suppose that, for $m=n_1+n_2$, and for pairs of functions $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$ and
$\varphi'_i\in S(V'({\mathbb A}_f)^{n_i})$, the functions $\varphi_1\tt\varphi_2 \in S(V({\mathbb A}_f)^m)$ and $\varphi'_1\tt\varphi'_2 \in S(V'({\mathbb A}_f)^m)$
match. Then
$$\gs{z_V(T_1,\varphi_1)}{z_V(T_2,\varphi_2)} = \epsilon_{V,V'}\,\gs{z_{V'}(T_1,\varphi'_1)}{z_{V'}(T_2,\varphi'_2)},$$
where $\epsilon_{V,V'}=(-1)^{m (d_+(V')-d_+(V))}$. \hfill\break
(ii) Suppose that, for $m=n_1+n_2+n_3$, and for triples of functions $\varphi_i\in S(V({\mathbb A}_f)^{n_i})$ and
$\varphi'_i\in S(V'({\mathbb A}_f)^{n_i})$, the functions $\varphi_1\tt\varphi_2 \tt\varphi_3\in S(V({\mathbb A}_f)^m)$ and $\varphi'_1\tt\varphi'_2\tt \varphi'_3 \in S(V'({\mathbb A}_f)^m)$
match. Then
$$\gs{z_V(T_1,\varphi_1)\cdot z_V(T_2,\varphi_2)}{z_V(T_3,\varphi_3)} = \epsilon_{V,V'}\,\gs{z_{V'}(T_1,\varphi'_1)\cdot z_{V'}(T_2,\varphi'_2)}{z_{V'}(T_3,\varphi'_3)} .$$
\end{prop}
Now suppose that the quadratic spaces $V$ and $V'$ are isomorphic at all finite primes so that $V\simeq_f V'$ in the
notation of Section~\ref{subsec-1.2}. Note that in this case $d_+(V)$ and $d_+(V')$ have the same parity and
hence $\epsilon_{V,V'}=1$. We then have identifications (\ref{naive-matching}) as at the end of Section~\ref{subsec-1.2}
which provide an automatic matching
\begin{equation}\label{taut-matching}
\varphi \ \leftrightarrow\ \varphi' = \rho^n_{V,V'}(\varphi)
\end{equation}
for all data,
compatible with tensor products and with the Weil representation.
We have the following comparison result, a more precise version of Theorem~B.
\begin{theo}\label{theo3.4}
There is a linear map
$$\rho_{V,V'}: SC^\bullet(V) \longrightarrow SC^\bullet(V')$$
such that, for $\varphi$ and $\varphi'$ matching as in (\ref{taut-matching}),
$$\rho_{V,V'}: z_V(T,\varphi) \mapsto z_{V'}(T,\varphi').$$
Moreover, this map is a ring homomorphism and an isometry.
\end{theo}
\begin{proof} The rings $SC^\bullet(V)$ and $SC^\bullet(V')$ are spanned by the classes $z_V(T,\varphi)$ and $z_{V'}(T,\varphi')$.
Suppose that there is a linear relation in $SC^n(V)$
$$\sum_i c_i \,z_V(T_i,\varphi_i)=0, \qquad \varphi_i\in S(V({\mathbb A}_f)^n), \ T_i\in {\text{\rm Sym}}_n(F), \ c_i\in {\mathbb C}.$$
Then, by (i) of Proposition~\ref{prop-3.6},
$$0 = \gs{\sum_i c_i \,z_V(T_i,\varphi_i)}{z_V(T,\varphi)} \ = \ \gs{\sum_i c_i \,z_{V'}(T_i,\varphi'_i)}{z_{V'}(T,\varphi')}$$
for all pairs $T$ and $\varphi$. Here the Schwartz functions on the right side match those on the left via (\ref{taut-matching}).
Since the inner product on the ring $SC^\bullet(V')$ is non-degenerate by construction, we
have
$$\sum_i c_i \,z_{V'}(T_i,\varphi'_i)=0$$
in $SC^n(V')$. Thus the linear map $\rho_{V,V'}$ is well defined.
By (ii) of Proposition~\ref{prop-3.6} this map is a ring homomorphism and by (i) it is an isometry.
\end{proof}
\begin{rem} (i) Of course, the comparison isomorphism of Theorem~\ref{theo3.4} is a consequence of the Siegel-Weil formula for anisotropic $V$.
There should be an analogous comparison in the case where $d_+=d$ and $V$ is isotropic, but this will involve the
use of a more delicate version of the (extended) Siegel-Weil formula and it seems quite likely that interesting correction
terms will arise. \hfill\break
(ii) The twisted cycles $Z(T,\varphi,\text{\boldmath$\chi$\unboldmath})$ of Remarks~\ref{twisted-SCs} and \ref{rem-chi-classes} can be defined
for both $Sh(V)_K$ and $Sh(V')_K$ and one might imagine that the comparison isomorphism can be extended to these classes.
This would require a variant of the Siegel-Weil formula giving an explicit description of the theta lifts $\theta_n(\text{\boldmath$\chi$\unboldmath})$
from $\text{\rm SO}(V)$ to $\hbox{Sp}_n$
of quadratic characters $\text{\boldmath$\chi$\unboldmath}$ of the spinor norm. At present we do not have such a formula in general.
The case of $\text{\rm SO}(3)$ and the metaplectic cover of $\text{\rm SL}_2$ is treated in \cite{snitz}.
An interesting cubic analogue is considered in \cite{gan-snitz}. We plan to return to this question.
\end{rem}
\section{The case $d_+=0$}\label{section4}
In this section we suppose that $V_+$ is a totally positive definite quadratic space of dimension $m+2$ over $F$.
In this case, there is no associated Shimura variety, but we would like to nonetheless define a ring and to extend the comparison
result of the previous section.
\subsection{A ring associated to $V_+$}
Let $\mathfrak Z^\nat_n$ be the free abelian group on symbols $[U]_n$ where $U$ is a subspace of $V_+$ with $\dim_F U \le n$, and let
$$\mathfrak Z^\nat = \mathfrak Z^\nat(V_+) = \bigoplus_{n=0}^m \mathfrak Z^\nat_n.$$
Writing $U_0 = \{0\}$ for the zero subspace of $V_+$, we have a class $\triv^\nat = [U_0]_0$, so that $\mathfrak Z^\nat_0 = {\mathbb Z} \,\triv^\nat$,
and a class $\cbold^\nat = [U_0]_1\in \mathfrak Z^\nat_1$. Define a product on $\mathfrak Z^\nat$ by
$$
[U_1]_{n_1}\cdot [U_2]_{n_2} = \begin{cases} [U_1+U_2]_{n_1+n_2} &\text{if $n_1+n_2\le m$,}\\
\noalign{\smallskip}
0&\text{otherwise.}
\end{cases}
$$
Here the `cutoff' at index $m$ is motivated by the comparison we will make below.
Note that $\triv^\nat\cdot z = z$ for all $z$, and
$$ \cbold^\nat\cdot [U]_n = \begin{cases} [U]_{n+1} &\text{if $n<m$,}\\
\noalign{\smallskip}
0&\text{if $n=m$.}
\end{cases}
$$
Thus $\mathfrak Z^\nat$ is a graded commutative ring.
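To illustrate the product, note the following immediate consequences of the definition: $(\cbold^\nat)^n = [U_0]_n$ for $n\le m$, and, since $[U]_n\cdot [U_0]_{n'} = [U]_{n+n'}$, every generator factors as
$$[U]_n = [U]_{\dim_F U}\cdot (\cbold^\nat)^{\,n-\dim_F U}.$$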
The group $\text{\rm GL}(V_+)$ acts naturally on $\mathfrak Z^\nat$ as ring automorphisms and the classes $\triv^\nat$ and $\cbold^\nat$ are invariant
under this action.
For a finite subgroup $\Gamma$ in $\text{\rm SO}(V_+)(F)$ and a subspace $U$ with $\dim_F U \le n$, let
$$Z_n(U)_\Gamma = \sum_{\gamma\in \Gamma/\Gamma_U} [\gamma U]_n\ \in \mathfrak Z^\nat_n,$$
where $\Gamma_U$ is the stabilizer of $U$ in $\Gamma$.
Note that such classes span the space of $\Gamma$-invariants in $\mathfrak Z^\nat_n$.
Then we have a `pullback' formula and a product formula, reminiscent of those for special cycles in the Shimura variety case, \cite{kudla.rem-gen}.
\begin{lem}
(i) For a subgroup $\Gamma'\subset \Gamma$,
$$Z_n(U)_{\Gamma} = \sum_{\gamma \in \Gamma'\backslash\Gamma/\Gamma_U} Z_n(\gamma U)_{\Gamma'}.$$
(ii)
For $n_1+n_2\le m$,
$$Z_{n_1}(U_1)_\Gamma \cdot Z_{n_2}(U_2)_{\Gamma} = \sum_{\gamma\in \Gamma\backslash I(U_1,U_2;\Gamma)} Z_{n_1+n_2}(W_\gamma)_\Gamma,$$
where
$$I(U_1,U_2;\Gamma) = \Gamma/\Gamma_{U_1}\times \Gamma/\Gamma_{U_2},$$
and \
$W_\gamma = \gamma_1 U_1+ \gamma_2 U_2$.
\end{lem}
Here `pullback' simply amounts to the inclusion of $(\mathfrak Z^\nat)^{\Gamma}$ in $(\mathfrak Z^\nat)^{\Gamma'}$.
We define a degree function on $\mathfrak Z^\nat$ by
$$
\deg([U]_n) = \begin{cases} 1 &\text{if $n=m$,}\\
\noalign{\smallskip}
0&\text{otherwise,}
\end{cases}
$$
and extend by linearity. This map is invariant under the action of
$\text{\rm GL}(V_+)$. There is a symmetric bilinear inner product on $\mathfrak Z^\nat$ defined by
$$\gs{z_1}{z_2} = \deg(\,z_1\cdot z_2\,).$$
This inner product has a very large radical $\mathcal R \mathfrak Z^\nat$. For example,
\begin{equation}\label{vanish-1}
\gs{[U_1]_{m-n}}{[U_2]_{n}-[U_3]_n} = 0,
\end{equation}
for all subspaces $U_1$ of dimension $\le m-n$ and $U_2$ and $U_3$ of dimension $\le n$; indeed,
$[U_1]_{m-n}\cdot([U_2]_{n}-[U_3]_n) = [U_1+U_2]_m - [U_1+U_3]_m$, and both of these classes have degree $1$.
The radical is an ideal in $\mathfrak Z^\nat$ and we consider the quotient ring $\mathfrak Z = \mathfrak Z^\nat/\mathcal R \mathfrak Z^\nat$.
We write $1\!\!1$ and $\bcbold$ for the images of the classes $\triv^\nat$ and $\cbold^\nat$ in $\mathfrak Z$.
\begin{lem} \label{lem4.2} The ring $\mathfrak Z$ is isomorphic to a truncated polynomial ring,
$${\mathbb Z}[y]/(y^{m+1}) \ {\overset{\sim}{\longrightarrow}}\ {\mathbb Z}[\bcbold] \ {\overset{\sim}{\longrightarrow}}\ \mathfrak Z.$$
In particular, the image of a class $[U]_n$ in $\mathfrak Z$ is $\bcbold^{n}$.
\end{lem}
\begin{proof}
Since $\mathfrak Z^\nat_0={\mathbb Z}\,\triv^\nat$, we have $\mathcal R\mathfrak Z^\nat \cap \mathfrak Z^\nat_m = \ker(\deg)$ and so $\mathfrak Z_0= {\mathbb Z}\,1\!\!1$ and $\mathfrak Z_m = {\mathbb Z} \,\bcbold^m$.
Now, by (\ref{vanish-1}), we have
$[U_2]_n-[U_3]_n \in \mathcal R\mathfrak Z^\nat$ for all $U_2$ and $U_3$ of dimension $\le n$ and so
$$\sum_i a_i [U_i]_n \equiv (\sum_i a_i)\,(\cbold^\nat)^n \ \mod \mathcal R\mathfrak Z^\nat.$$
This proves the claim.
\end{proof}
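For example, by Lemma~\ref{lem4.2}, the image in $\mathfrak Z$ of the $\Gamma$-invariant class $Z_n(U)_\Gamma$ defined above is simply
$$Z_n(U)_\Gamma \equiv [\Gamma:\Gamma_U]\ \bcbold^{\,n} \mod \mathcal R\mathfrak Z^\nat,$$
since each of the $[\Gamma:\Gamma_U]$ summands $[\gamma U]_n$ has image $\bcbold^n$.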
\subsection{A replacement for cohomology}
For\footnote{Here we take $\text{\rm SO}(V_+)$ rather than $\text{\rm GSpin}(V_+)$ since there is no Shimura variety construction
involved.} $G_+=R_{F/{\mathbb Q}}\text{\rm SO}(V_+)$ and a compact open subgroup $K$ in $G_+({\mathbb A}_f)$, consider the space
$$H^\bullet(V_+)^\natural_K:=C(G_+({\mathbb A}_f)/K,\mathfrak Z^\nat)^{G_+({\mathbb Q})} $$
of functions
$\text{\boldmath$z$\unboldmath}: G_+({\mathbb A}_f) \longrightarrow \mathfrak Z^\nat$ such that
$$\text{\boldmath$z$\unboldmath}(\gamma g k) = \gamma\, \text{\boldmath$z$\unboldmath}(g),\qquad \forall \gamma\in G_+({\mathbb Q}), \ k\in K.$$
Then
$$H^\bullet(V_+)^\natural_K = \bigoplus_{n=0}^m H^n(V_+)^\natural_K, \qquad H^n(V_+)^\natural_K=C(G_+({\mathbb A}_f)/K,\mathfrak Z^\nat_n)^{G_+({\mathbb Q})},$$
is a graded ring.
If we write\footnote{Note that we do not have strong approximation in this case so that the double coset space does not have a group structure. }
\begin{equation}\label{coset-decomp}
G_+({\mathbb A}_f) = \bigsqcup_j G_+({\mathbb Q}) g_j K,
\end{equation}
there is an isomorphism
\begin{equation}\label{Ze-iso}
H^\bullet(V_+)^\natural_K=C(G_+({\mathbb A}_f)/K,\mathfrak Z^\nat)^{G_+({\mathbb Q})} \ {\overset{\sim}{\longrightarrow}}\ \prod_j (\mathfrak Z^\nat)^{\Gamma_{j}},\qquad \text{\boldmath$z$\unboldmath} \mapsto [ \dots, \text{\boldmath$z$\unboldmath}(g_j), \dots].
\end{equation}
Here $\Gamma_j = \Gamma_{g_j} = G_+({\mathbb Q})\cap g_j K g_j^{-1}$. Note that for $g\in G_+({\mathbb A}_f)$,
the group $\Gamma_g = G_+({\mathbb Q})\cap g K g^{-1}$ is finite and is trivial if $K$ is neat.
Varying $K$, we have the space of continuous functions
$$H^\bullet(V_+)^\natural= \varinjlim_K \,H^\bullet(V_+)^\natural_K = C_{\text{cont}}(G_+({\mathbb A}_f),\mathfrak Z^\nat)^{G_+({\mathbb Q})},$$
also a ring under pointwise operations.
Let $d^Tg$ be Tamagawa measure on $G_+({\mathbb A})$ and define a Haar measure $d_fg$ on $G_+({\mathbb A}_f)$ via the factorization
$d^Tg=d_\infty g_\infty\,d_fg_f$, where the archimedean factor is normalized by $\text{\rm vol}(\text{\rm SO}(V_+)({\mathbb R}),d_\infty g)=1$.
Define a degree map on $H^\bullet(V_+)^\natural$ by\footnote{Note that the constant $\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath})$ is to be distinguished from
the function
$$\deg(\text{\boldmath$z$\unboldmath}) \in C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)).$$
}
\begin{align*}
\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}) :&= \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} \deg(\text{\boldmath$z$\unboldmath}(g))\,d_f g.
\end{align*}
For $\text{\boldmath$z$\unboldmath}\in H^\bullet(V_+)^\natural_K$, by (\ref{Ze-iso}) we have
$$
\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}) =\text{\rm vol}(K) \sum_j |\Gamma_j|^{-1}\,\deg(\text{\boldmath$z$\unboldmath}(g_j)).
$$
Since the Tamagawa number of $\text{\rm SO}(V_+)$ is $2$,
$$\text{\rm vol}(K) \sum_j |\Gamma_j|^{-1} = \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} d_f g = \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A})}\,d^T g =2,$$
and we can also write
\begin{equation}\label{classical-mass}
\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}) = 2\,\frac{\sum_j |\Gamma_j|^{-1}\,\deg(\text{\boldmath$z$\unboldmath}(g_j))}{\sum_j |\Gamma_j|^{-1}}.
\end{equation}
On the other hand, if $K$ is neat, we have
$$
H^\bullet(V_+)^\natural_K \simeq \prod_j \mathfrak Z^\nat, \qquad\text{and}\qquad \deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}) =\text{\rm vol}(K) \sum_j \deg(\text{\boldmath$z$\unboldmath}(g_j)).
$$
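For example, for the constant function $\text{\boldmath$z$\unboldmath}$ with $\text{\boldmath$z$\unboldmath}(g) = (\cbold^\nat)^m$, which lies in $H^m(V_+)^\natural_K$ by the $\text{\rm GL}(V_+)$-invariance of $\cbold^\nat$, we have $\deg(\text{\boldmath$z$\unboldmath}(g_j))=1$ for all $j$, so that
$$\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}) = \text{\rm vol}(K)\sum_j|\Gamma_j|^{-1} = 2,$$
the Tamagawa number of $\text{\rm SO}(V_+)$.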
We define an inner product on $H^\bullet(V_+)^\natural$ by
\begin{equation}\label{IP-int}
\gs{\text{\boldmath$z$\unboldmath}_1}{\text{\boldmath$z$\unboldmath}_2} = \deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}_1 \cdot \text{\boldmath$z$\unboldmath}_2) =\int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} \deg(\text{\boldmath$z$\unboldmath}_1(g)\cdot \text{\boldmath$z$\unboldmath}_2(g))\,d_f g,
\end{equation}
and we let
$$H^\bullet(V_+) = H^\bullet(V_+)^\natural/\mathcal R H^\bullet(V_+)^\natural$$
be the quotient by the radical $\mathcal R H^\bullet(V_+)^\natural$ of this pairing.
\begin{lem} The ring $H^\bullet(V_+)$ is a truncated polynomial ring,
$$C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f))\tt_{\mathbb Z} {\mathbb Z}[\bcbold] \ {\overset{\sim}{\longrightarrow}}\ H^0(V_+)[\bcbold] \ {\overset{\sim}{\longrightarrow}}\ H^\bullet(V_+),$$
where
$$H^0(V_+) = C_{\text{cont}}(G_+({\mathbb A}_f),\mathfrak Z_0)^{G_+({\mathbb Q})} \simeq C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)).$$
In particular, for a class $\text{\boldmath$z$\unboldmath} \in H^n(V_+)$,
\begin{equation}\label{deg-formula4.3}
\text{\boldmath$z$\unboldmath} = \deg(\text{\boldmath$z$\unboldmath}\cdot \bcbold^{m-n})\,\bcbold^n.
\end{equation}
\end{lem}
\begin{proof} We claim that the subspace $C_{\text{cont}}(G_+({\mathbb A}_f),\mathcal R\mathfrak Z^\nat)^{G_+({\mathbb Q})}$ of functions valued in the radical $\mathcal R\mathfrak Z^\nat$
of $\mathfrak Z^\nat$ coincides with the radical $\mathcal RH^\bullet(V_+)^\natural$.
These functions are in the radical $\mathcal RH^\bullet(V_+)^\natural$ since, if $\text{\boldmath$z$\unboldmath}_1$ is such a function, the integrand
$\deg(\text{\boldmath$z$\unboldmath}_1(g)\cdot \text{\boldmath$z$\unboldmath}_2(g))$
in (\ref{IP-int}) vanishes. On the other hand, if a function $\text{\boldmath$z$\unboldmath}$ lies in the radical $\mathcal R H^\bullet(V_+)^\natural$,
it is right $K$ invariant for some neat compact open $K$. For any $z'\in \mathfrak Z^\nat$, we can define a function $\text{\boldmath$z$\unboldmath}'_j \in H^\bullet(V_+)^\natural_K$
with $\text{\boldmath$z$\unboldmath}'_j(g_i) = \delta_{ij} z'$. Then
$$0=\gs{\text{\boldmath$z$\unboldmath}}{\text{\boldmath$z$\unboldmath}'_j} = \text{\rm vol}(K) \,\gs{\text{\boldmath$z$\unboldmath}(g_j)}{z'},$$
so that $\text{\boldmath$z$\unboldmath}(g_j) \in \mathcal R\mathfrak Z^\nat$ for all $j$. Thus,
$$H^\bullet(V_+) \simeq C_{\text{cont}}(G_+({\mathbb A}_f),\mathfrak Z)^{G_+({\mathbb Q})}.$$
The lemma then follows from Lemma~\ref{lem4.2}.
\end{proof}
\begin{rem}
Of course we could have taken
$$H^\bullet(V_+) = C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f))[\bcbold]$$
as the definition of the `cohomology' ring in the $d_+=0$ case, but we felt that the version based on $\mathfrak Z^\nat$ and $H^\bullet(V_+)^\natural$
provides more insight and, in particular, a better parallel with the construction in the $d_+>0$ case.
\end{rem}
\subsection{Special cycles}\label{section4.3}
The analogues of the weighted special cycles are defined as follows.
For $T\in {\text{\rm Sym}}_n(F)$, $n\ge1$, and $\varphi \in S(V_+({\mathbb A}_f)^n)^K$, we define
$$Z(T,\varphi;K) \in H^n(V_+)^\natural_K$$
by
\begin{align}\label{def-d+special}
Z(T,\varphi;K)(g) &=
\sum_{\substack{ x\in V_+(F)^n\\ \noalign{\vskip 2pt} Q(x) = T\\ \noalign{\vskip 2pt}\mod \Gamma_g}} \varphi(g^{-1}x)\,Z_n(U(x))_{\Gamma_g},\notag\\
\noalign{\smallskip}
{}&= \sum_{\substack{ x\in V_+(F)^n\\ \noalign{\vskip 2pt} Q(x) = T}} \varphi(g^{-1}x)\,[U(x)]_n,
\end{align}
where $U(x)$ is the subspace spanned by the components of $x$. Note that the last expression is in fact independent of the
choice of $K$, subject only to the condition that the weight function $\varphi$ is $K$-invariant. Thus we will omit $K$ from the notation.
\begin{rem} The rings $\mathfrak Z^\nat$ and $\mathfrak Z$ as initially defined are ${\mathbb Z}$-algebras. Since the coefficient rings will play no
role in our constructions, from now on we simply take complex coefficients and complex valued Schwartz functions.
\end{rem}
Again we have a product formula.
\begin{prop} The product formula (\ref{prod-ch}) holds for the weighted classes,
$$
Z(T_1,\varphi_1)\cdot Z(T_2,\varphi_2) = \sum_{\substack{ T\in {\text{\rm Sym}}_{n_1+n_2}(F)_{\ge 0}\\ \noalign{\vskip 2pt} T = \begin{pmatrix} \scriptstyle T_1&*\\{}^t*&\scriptstyle T_2\end{pmatrix}}} Z(T,\varphi_1\tt\varphi_2).
$$
\end{prop}
\begin{proof} Writing $n=n_1+n_2$, we have
\begin{align*}
Z(T_1,\varphi_1)\cdot Z(T_2,\varphi_2)(g) & = \sum_{\substack{ x_1\in V_+(F)^{n_1}\\ \noalign{\vskip 2pt} Q(x_1) = T_1}} \sum_{\substack{ x_2\in V_+(F)^{n_2}\\ \noalign{\vskip 2pt} Q(x_2) = T_2}}
\varphi_1(g^{-1}x_1)\,\varphi_2(g^{-1}x_2)\,\,[U(x_1)]_{n_1}\cdot[U(x_2)]_{n_2}\\
\noalign{\smallskip}
{}&=\sum_{\substack{ T\in {\text{\rm Sym}}_{n}(F)\\ \noalign{\vskip 2pt} T = \begin{pmatrix} \scriptstyle T_1&*\\{}^t*&\scriptstyle T_2\end{pmatrix}}} \sum_{\substack{ x= [x_1,x_2]\in V_+(F)^{n} \\
\noalign{\vskip 2pt} Q(x) = T}} \varphi_1\tt\varphi_2(g^{-1}x)\,[U(x_1)+U(x_2)]_n\\
\noalign{\smallskip}
{}&=\sum_{\substack{ T\in {\text{\rm Sym}}_{n}(F)\\ \noalign{\vskip 2pt} T = \begin{pmatrix} \scriptstyle T_1&*\\{}^t*&\scriptstyle T_2\end{pmatrix}}} Z(T,\varphi_1\tt \varphi_2)(g).
\end{align*}
\end{proof}
By (\ref{def-d+special}) and (\ref{deg-formula4.3}), the image of the class $Z(T,\varphi)$ in $H^n(V_+)$ is
\begin{equation}\label{image-ZTph}
\text{\boldmath$z$\unboldmath}(T,\varphi)(g) = \Rep_+(T,\varphi)(g)\cdot\bcbold^n
\end{equation}
where
\begin{equation}\label{def-rep-no}
\Rep_+(T,\varphi)(g) := \sum_{\substack{ x\in V_+(F)^n\\ \noalign{\vskip 2pt} Q(x) = T}} \varphi(g^{-1}x)
\end{equation}
is the representation number.
Note that $\Rep_+(T,\varphi)\in C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f))$.
The product formula in the reduced ring $H^\bullet(V_+)$ amounts to a rather trivial identity for
such representation numbers.
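For example (with illustrative lattice notation not used elsewhere in this section), if $L\subset V_+(F)$ is an $O_F$-lattice with closure $\hat L\subset V_+({\mathbb A}_f)$ and $\varphi$ is the characteristic function of $\hat L^n$, then, since $V_+(F)\cap \hat L = L$,
$$\Rep_+(T,\varphi)(1) = \#\{\,x\in L^n \mid Q(x) = T\,\},$$
the classical representation number of $T$ by the lattice $L$.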
We let $\text{\rm SC}^\bullet(V_+)^\natural$ be the subring of $H^\bullet(V_+)$ spanned by the special cycles $\text{\boldmath$z$\unboldmath}(T,\varphi)$ together with the class
$1\!\!1$.
Thus
$$\text{\rm SC}^0(V_+)^\natural = {\mathbb C} \,1\!\!1 \ \ \subset \ \ H^0(V_+) = C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f))\,1\!\!1$$
is just the subspace of constant functions.
Let
$$\text{\rm SC}^\bullet(V_+) = \text{\rm SC}^\bullet(V_+)^\natural/\mathcal R \text{\rm SC}^\bullet(V_+)^\natural$$
be the quotient of this subring by the radical of the restriction of the inner product.
For special cycles in complementary degrees $n_1$ and $n_2$ with $n_1+n_2=m$, the inner product is given by
\begin{equation}\label{new-IP}
\gs{\text{\boldmath$z$\unboldmath}(T_1,\varphi_1)}{\text{\boldmath$z$\unboldmath}(T_2,\varphi_2)} = \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} \Rep_+(T_1,\varphi_1)(g) \,\Rep_+(T_2,\varphi_2)(g) \,dg.
\end{equation}
We denote the image of $\text{\boldmath$z$\unboldmath}(T,\varphi)\in \text{\rm SC}^\bullet(V_+)^\natural$ in $\text{\rm SC}^\bullet(V_+)$ by $\text{\boldmath$z$\unboldmath}(T,\varphi)^\flat$.
\begin{lem} (i) For $n\le \frac12 m$, the map
$\text{\rm SC}^n(V_+)^\natural \longrightarrow \text{\rm SC}^n(V_+)$ is an isomorphism. \hfill\break
(ii)
On the other hand, for $n=m$, $\text{\rm SC}^m(V_+) = {\mathbb C}\, \bcbold^m$ and the map
$\text{\rm SC}^m(V_+)^\natural \longrightarrow \text{\rm SC}^m(V_+)$ is given by
$$\text{\boldmath$z$\unboldmath} \mapsto \text{\boldmath$z$\unboldmath}^\flat=\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath})\cdot \bcbold^m.$$
\end{lem}
\begin{proof} For $n=0$ this is clear from the definition. For $1\le n\le \frac12 m$, let $n' = m-n$. For given $T\in {\text{\rm Sym}}_n(F)$ and $\varphi\in S(V_+({\mathbb A}_f)^n)$,
let
$$T' = \begin{pmatrix} T&{}\\{}&0\end{pmatrix}\in {\text{\rm Sym}}_{n'}(F), \qquad \varphi' = \bar\varphi\tt \varphi^0 \in S(V_+({\mathbb A}_f)^{n'}), $$
where $\varphi^0\in S(V_+({\mathbb A}_f)^{n'-n})$ with $\varphi^0(0)=1$. Then, since $V_+$ is anisotropic,
$$\Rep_+(T',\varphi') = \Rep_+(T,\bar\varphi) = \overline{\Rep_+(T,\varphi)}.$$
For $\text{\boldmath$z$\unboldmath}$ a complex linear combination of classes $\text{\boldmath$z$\unboldmath}(T,\varphi)$'s let $\text{\boldmath$z$\unboldmath}'$ be the corresponding conjugate linear
combination of the $\text{\boldmath$z$\unboldmath}(T',\varphi')$'s, so that $\text{\boldmath$z$\unboldmath}'(g) = \overline{\text{\boldmath$z$\unboldmath}(g)}$. Then
$$\gs{\text{\boldmath$z$\unboldmath}}{\text{\boldmath$z$\unboldmath}'} = \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} |\text{\boldmath$z$\unboldmath}(g)|^2\,dg,$$
which vanishes if and only if $\text{\boldmath$z$\unboldmath} =0$.
The second statement follows from the fact that
$\gs{\text{\boldmath$z$\unboldmath}}{1\!\!1} = \deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath})$.
\end{proof}
\subsection{Generating series}\label{section4.4}
The generating series analogous to (\ref{B-gen-fun}) for these `special cycles' is
\begin{equation}\label{fake-gen-fun1}
\phi_n^{\text{\rm SC}(V_+)^\natural}(\tau, \varphi) = \sum_{T\in {\text{\rm Sym}}_n(F)_{\ge0}} \text{\boldmath$z$\unboldmath}(T,\varphi)\, \qbold^T \ \in \text{\rm SC}^n(V_+)^\natural[[\qbold]].
\end{equation}
The image of the series (\ref{fake-gen-fun1})
in $\text{\rm SC}^n(V_+)[[\qbold]]$ is
\begin{equation}\label{fake-gen-fun2}
\phi_n^{\text{\rm SC}(V_+)}(\tau, \varphi) = \sum_{T\in {\text{\rm Sym}}_n(F)_{\ge0}} \text{\boldmath$z$\unboldmath}(T,\varphi)^\flat\, \qbold^T \ \in \text{\rm SC}^n(V_+)[[\qbold]].
\end{equation}
Evaluating at $g\in G_+({\mathbb A}_f)$
and using (\ref{image-ZTph}) we have
\begin{align}
\label{fake-gen-fun4}
\phi_n^{\text{\rm SC}(V_+)^\natural}(\tau, \varphi)(g) &=\bcbold^n\cdot \sum_{T\in {\text{\rm Sym}}_n(F)_{\ge0}} \Rep_+(T,\varphi)(g)\,\qbold^T\notag\\
\noalign{\smallskip}
{}&= \bcbold^n\cdot \sum_{\substack{ x\in V_+(F)^n}} \varphi(g^{-1}x)\,\qbold^{Q(x)} \\
\noalign{\smallskip}
{}&=\bcbold^n\cdot N(\det(v))^{-\frac{m+2}4} \,\theta(g'_\tau,g;\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi),\notag
\end{align}
where the function in the last line is a multiple of the classical theta series (\ref{J-theta-form}) given by
$$
\theta(g',g;\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi) = \sum_{x\in V_+(F)^n} \o(g')(\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)(g^{-1}x),
$$
for
\begin{equation}\label{arch-SF-2}
\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty= \breve\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty=\bigotimes_{\sigma} \varphi_{\sigma,+}^0
\end{equation}
where $\varphi_{\sigma,+}^0\in S((V_+)_\sigma^n)$ is the Gaussian for $(V_+)_\sigma$.
Now we have the analogues of the formulas of Section~\ref{section2}. First we have the analogue of Proposition~\ref{prop2.2}, where in the
third step we use (\ref{fake-gen-fun4}),
\begin{align*}
\deg^{\text{\rm tot}}(\phi_n^{\text{\rm SC}(V_+)}(\tau,\varphi)\cdot \bcbold^{m-n}) & =
\int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)}
\deg(\phi_n^{\text{\rm SC}(V_+)^\natural}(\tau,\varphi)(g)\cdot \bcbold^{m-n})\,d_fg\\
\noalign{\smallskip}
{}&=\int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)}
N(\det(v))^{-\frac{m+2}4} \,\theta(g'_\tau,g;\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)\,d_fg\\
\noalign{\smallskip}
{}&=N(\det(v))^{-\frac{m+2}4} \, \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A})}
\theta(g'_\tau,g;\text{\boldmath$\ph$\unboldmath}^{(n)}_\infty\tt\varphi)\,d^Tg\\
\noalign{\smallskip}
{}&= 2\,E(\tau,s_0,\l_{(V_+)_f}(\varphi)).
\end{align*}
Here in the last step we use the Siegel-Weil formula and the expression in (\ref{good-Eis}) for the special value at $s=s_0 = \kappa-\rho_n$ of the Siegel-Eisenstein series.
If $\varphi$ is $K$-invariant for a compact open subgroup $K$ in $G_+({\mathbb A}_f)$, then, using (\ref{coset-decomp}), (\ref{classical-mass})
and (\ref{Eis-FC}), we have
Siegel's classical formula
\begin{equation}\label{Siegel!}
\deg^{\text{\rm tot}}(\text{\boldmath$z$\unboldmath}(T,\varphi)^\flat\cdot \bcbold^{m-n}) = 2\,\frac{\sum_j |\Gamma_j|^{-1}\,\Rep_+(T,\varphi)(g_j)}{\sum_j |\Gamma_j|^{-1}} = 2\,A(T,\l_{(V_+)_f}(\varphi))
\end{equation}
relating representation numbers and Fourier coefficients of Eisenstein series. This is the analogue of Corollary~\ref{cor2.2}.
The analogues of Theorem~A and Corollary~\ref{IP-cor} follow in the same way. We will not restate them here.
They imply that the product structure and inner product of classes $\text{\boldmath$z$\unboldmath}(T,\varphi)^\flat$ in the ring $\text{\rm SC}^\bullet(V_+)$ are once again
given by Fourier coefficients of pullbacks of Hilbert-Siegel Eisenstein series.
The product structure in the quotient ring is now much more subtle than that in $\text{\rm SC}^\bullet(V_+)^\natural$, as it involves the inner products (\ref{new-IP})
of the functions $\Rep_+(T,\varphi)$ on $G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)$. For example,
if $m=2n$ is even and $T_i\in {\text{\rm Sym}}_n(F)$ and $\varphi_i\in S(V_+({\mathbb A}_f)^n)$, then
$$\text{\boldmath$z$\unboldmath}(T_1,\varphi_1)^\flat\cdot \text{\boldmath$z$\unboldmath}(T_2,\varphi_2)^\flat = \int_{G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)} \Rep_+(T_1,\varphi_1)(g) \,\Rep_+(T_2,\varphi_2)(g)\,dg \cdot \bcbold^m.$$
\subsection{Another comparison}
The results of Sections~\ref{section4.3} and \ref{section4.4} imply that we again have a comparison isomorphism.
\begin{theo}\label{theo4.6} For a quadratic space $V$ over $F$ with $d_+(V)$ even, let $V_+$ be the associated totally positive definite space.
Fix an isometry $\rho_{V,V_+}: V({\mathbb A}_f) \ {\overset{\sim}{\longrightarrow}}\ V_+({\mathbb A}_f)$.
Let $\text{\rm SC}^\bullet(V_+)$ be the `special cycle' ring for $V_+$ defined in Section~\ref{section4.3}, and let $\text{\rm SC}^\bullet(V)$ be the
reduced special cycle ring defined in Section~\ref{subsec-1.2}.
Then there is a linear map
$$\rho_{V,V_+}: \text{\rm SC}^\bullet(V) \longrightarrow \text{\rm SC}^\bullet(V_+)$$
such that, for $\varphi$ and $\varphi'$ matching as in (\ref{taut-matching}),
$$\rho_{V,V_+}: z_V(T,\varphi) \mapsto \text{\boldmath$z$\unboldmath}_{V_+}(T,\varphi')^\flat.$$
Moreover, this map is a ring homomorphism and an isometry.
\end{theo}
Here we have added a subscript to indicate where the classes live.
Thus, the reduced special cycle rings $\text{\rm SC}^\bullet(V)_K$ for the Shimura varieties $Sh(V)_K$ for such $V$ with $d_+(V)$ even are all modeled on
the subquotient $\text{\rm SC}^\bullet(V_+)$ of the truncated polynomial ring
\begin{align*}
H^\bullet(V_+)_{\bar K}&\ {\overset{\sim}{\longrightarrow}}\ C_{\text{cont}}(G_+({\mathbb Q})\backslash G_+({\mathbb A}_f)/\bar K)[\bcbold].
\end{align*}
Here our choice of isometry $\rho_{V,V_+}: V({\mathbb A}_f) \ {\overset{\sim}{\longrightarrow}}\ V_+({\mathbb A}_f)$ gives an identification
$G({\mathbb A}_f) \ {\overset{\sim}{\longrightarrow}}\ \text{\rm GSpin}(V_+)({\mathbb A}_f)$, and we write $\bar K$ for the image of $K\subset G({\mathbb A}_f)$ in $G_+({\mathbb A}_f)= \text{\rm SO}(V_+)({\mathbb A}_f)$.
\section{Local matching conditions}\label{section5}
It is interesting to see how much matching can occur in cases where the spaces $V$ and $V'$ are not locally isometric at all places.
For $V$ and $V'$ as in Proposition~\ref{prop3.5}, there is a finite set of places
$$\Delta = \Delta(V,V') = \{ \,\frak p \mid V_\frak p \not\simeq V'_\frak p\,\} = \{ \,\frak p \mid \epsilon(V_\frak p) = - \epsilon(V'_\frak p)\,\},$$
where the Hasse invariants $\epsilon(V_\frak p)$ and $\epsilon(V'_\frak p)$ differ.
We fix an isomorphism
\begin{equation}\label{almost.matching}
V({\mathbb A}_f^\Delta) \ {\overset{\sim}{\longrightarrow}}\ V'({\mathbb A}_f^\Delta),
\end{equation}
where the superscript $\Delta$ means that the places in $\Delta$ have been omitted.
This gives isomorphisms
$$S(V({\mathbb A}_f^\Delta)^n) \ {\overset{\sim}{\longrightarrow}}\ S(V'({\mathbb A}_f^\Delta)^n), \qquad \varphi \mapsto \varphi'$$
for all $n$, compatible with the Weil representation and with tensor products. In particular, the functions $\varphi$ and $\varphi'$ are matching.
The existence of matching pairs of functions in $S(V({\mathbb A}_f)^n)$ and $S(V'({\mathbb A}_f)^n)$ then reduces to a local problem at places $\frak p\in \Delta$.
\subsection{Local matching conditions}
In this section, we describe the conditions required for local matching using the results of \cite{KR.rdps}, for $m$ even,
and \cite{sweet.meta}, for $m$ odd. We change notation and let $F$ be a non-archimedean local field of characteristic $0$
with a fixed non-trivial unitary additive character $\psi: F\text{\rm ra} {\mathbb C}^\times$. For a fixed $m\ge 1$, let
$$G' =G'_n= \begin{cases} \hbox{Sp}_n(F) &\text{if $m$ is even,} \\
\noalign{\smallskip}
\widetilde{\hbox{Sp}_n}(F)&\text{its metaplectic cover, if $m$ is odd.}
\end{cases}
$$
For a quadratic space $V$ over $F$ of dimension\footnote{Note the shift in notation from the previous sections.} $m$ and character $\chi_V$, let $\o_V=\o_{V,\psi}$ be the Weil representation of $G'$
on $S(V^n)$. For the intertwining map
$$\l_V: S(V^n) \longrightarrow I_n(s_0,\chi_V), \qquad \l_V(\varphi)(g') = \o_V(g')\varphi(0), \quad s_0= \frac{m}2-\rho_n,$$
to the degenerate principal series representation $I_n(s,\chi_V)$ of $G'$ at the point $s_0$,
we let
$$R_n(V) = \l_V(S(V^n)) \ \subset I_n(s_0,\chi_V)$$
be the image.
For a fixed quadratic character $\chi$ of $F^\times$, these submodules account for the constituents of $I_n(s_0,\chi)$.
In the following description of the $R_n(V)$'s, for convenient reference, we give more complete information than needed for our application to matching.
For $m$ even, these results are quoted from \cite{KR.rdps}, while for $m$ odd they are due to Sweet, \cite{sweet.meta}.
We consider quadratic spaces $V$ with $\chi_V=\chi$ and we vary $\dim V=m$. For a fixed $m$,
the isometry class of such a space $V$ is determined by its Hasse invariant $\epsilon(V)=\pm1$.
In particular, up to isometry, there are two such spaces $V_{m,\pm}$, except
when $m=1$ (resp. $m=2$ and $\chi=1$), in which case there is only one such space $V_1$ (resp. $V_2$).
The `generic' picture of the $R_n(V)$'s is quite simple.
\begin{prop}
\begin{itemize}
\item[(i)] For $3\le m < n+1$, or for $m=2$, $\chi\ne1$, we have $s_0<0$, and the representations $R_n(V_{m,\pm})$ are irreducible and distinct.
Their sum $R_n(V_{m,+})\oplus R_n(V_{m,-})$
is the socle of $I_n(s_0,\chi)$ and the quotient $I_n(s_0,\chi)/(R_n(V_{m,+})\oplus R_n(V_{m,-}))$ is irreducible.
\item[(ii)] For $m=n+1$, we have $s_0=0$. If $n=1$ and $\chi=1$, $R_1(V_2)=I_1(0,\chi)$ is irreducible.
Otherwise, the spaces $R_n(V_{n+1,\pm})$ are irreducible and distinct
and
$$I_n(0,\chi) = R_n(V_{n+1,+})\oplus R_n(V_{n+1,-}).$$
\item[(iii)] For $n+1<m<2n$,
the spaces $R_n(V_{m,\pm})$ are maximal subspaces of $I_n(s_0,\chi)$, $I_n(s_0,\chi) = R_n(V_{m,+})+R_n(V_{m,-})$, the intersection $R_n(V_{m,+})\cap R_n(V_{m,-})$ is irreducible, and
$$I_n(s_0,\chi)/(R_n(V_{m,+})\cap R_n(V_{m,-})) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_{m',+})\oplus R_n(V_{m',-}),$$
where $m'=2n+2-m$ and the isomorphism is induced by the normalized intertwining map $A_n(s_0,\chi)$. \hfill\break
The same statement holds for $m=2n$ when $\chi\ne1$.
\item[(iv)] For $m>2n+2$, or if $m=2n+2$ and $\chi\ne1$, $I_n(s_0,\chi) = R_n(V_{m,\pm})$ is irreducible.
\end{itemize}
The following `edge' cases then complete the picture.
\begin{itemize}
\item[(a)] For $m=1$, we have $s_0=-\frac{n}2$. Then $R_n(V_1)$ is the unique irreducible submodule of $I_n(-\frac{n}2,\chi)$, and
the quotient
$$I_n(-\frac{n}2,\chi)/R_n(V_1)\ {\overset{\sim}{\longrightarrow}}\ R_n(V_{2n+1,-})$$
is irreducible.
\item[(b)] For $m=2$, $\chi=1$, and $n>1$, we have $s_0= -\rho_n+1$. Then $R_n(V_2)$ is the unique irreducible submodule of $I_n(-\frac{n}2+\frac12,\chi)$ and the quotient
$$I_n(-\frac{n}2+\frac12,\chi)/R_n(V_2) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_{2n,-})$$
is irreducible.
\item[(c)] For $m=2n$ and $\chi=1$, we have $s_0= \frac{n-1}2$. Then $I_n(\frac{n-1}2,\chi) = R_n(V_{2n,+})$, $R_n(V_{2n,-})$ is its unique irreducible submodule, and
$$I_n(\frac{n-1}2,\chi)/R_n(V_{2n,-}) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_2).$$
\item[(d)] For $m=2n+1$, we have $s_0= \frac{n}2$. Then $I_n(\frac{n}2,\chi) = R_n(V_{2n+1,+})$, $R_n(V_{2n+1,-})$ is its unique irreducible submodule, and
$$I_n(\frac{n}2,\chi)/R_n(V_{2n+1,-}) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_1).$$
\item[(e)] For $m=2n+2$ and $\chi=1$, we have $s_0=\rho_n$. Then $I_n(\rho_n,\chi) = R_n(V_{2n+2,+})$,
$R_n(V_{2n+2,-})$ is its unique irreducible submodule, and
$$I_n(\rho_n,\chi)/R_n(V_{2n+2,-}) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_0),$$
where we formally view the trivial representation as $1\!\!1 = R_n(V_0)$.
\end{itemize}
Moreover, each of the quotients occurring in cases (b)--(e) is induced by the normalized intertwining operator $A_n(s_0,\chi)$, \cite{KR.rdps},
\cite{sweet.meta}.
\end{prop}
Note that $R_n(V_1)$ is the even Weil representation of $G'_n$.
Now return to the local matching problem and suppose that $V$ and $V'$ are quadratic spaces over $F$ of dimension $m=\mathbf{m}+2$
and character $\chi$, but with opposite Hasse invariants.
We now vary $n$ with $1\le n \le \mathbf{m} = m-2$. In particular, $m\ge 3$.
For $\epsilon=\pm1$, let
$$S^o(V_{m,\epsilon}^n) = \{\,\varphi \in S(V_{m,\epsilon}^n)\mid \lambda_{V_{m,\epsilon}}(\varphi)\in R_n(V_{m,-\epsilon}) \, \},$$
so that, tautologically,
for every $\varphi\in S^o(V_{m,\pm}^n)$ there is a matching function $\varphi'\in S^o(V_{m,\mp}^n)$.
Also note that, since $R_n(V_{m,-\epsilon})$ is a $G'_n$-submodule of $I_n(s_0,\chi)$ and $\lambda_V$ is intertwining,
$S^o(V_{m,\epsilon}^n)$ is a $G'_n$-invariant subspace of $S(V_{m,\epsilon}^n)$.
\begin{prop}\label{prop3.7}
{\rm (1)} For $n<\frac{m}2-1$, or $n=\frac{m}2-1$ and $\chi\ne1$, $S^o(V_{m,\epsilon}^n) = S(V_{m,\epsilon}^n)$, and hence
for every $\varphi\in S(V_{m,\pm}^n)$ there is a matching function $\varphi'\in S(V_{m,\mp}^n)$.
\hfill\break
{\rm (2)} For $n=\frac{m}2-1$ and $\chi=1$, $s_0=\rho_n$. Then,
$$S^o(V_{m,-}^n) = S(V_{m,-}^n), \qquad S^o(V_{m,+ }^n) = \ker(A_n(\rho_n,\chi)\circ\lambda_V),$$
and
$$S(V_{m,+ }^n) /S^o(V_{m,+ }^n) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_0)=1\!\!1.$$
Similarly,
for $n=\frac{m}2$ and $\chi=1$, $s_0=\rho_n-1$. Then,
$$S^o(V_{m,-}^n) = S(V_{m,-}^n), \qquad S^o(V_{m,+ }^n) = \ker(A_n(\rho_n-1,\chi)\circ\lambda_V),$$
and
$$S(V_{m,+ }^n) /S^o(V_{m,+ }^n) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_2).$$
{\rm (3)}
For $n=\frac{m}2-\frac12$ so that $s_0=\rho_n-\frac12=\frac{n}2$,
$$S^o(V_{m,-}^n) = S(V_{m,-}^n), \qquad S^o(V_{m,+ }^n) = \ker(A_n(\rho_n-\frac{1}2,\chi)\circ\lambda_V)$$
and
$$S(V_{m,+ }^n) /S^o(V_{m,+ }^n) \ {\overset{\sim}{\longrightarrow}}\ R_n(V_1).$$
{\rm (4)} For $\frac{m}2<n\le m-2$, or for $n=\frac{m}2$ and $\chi\ne1$, there are exact sequences induced by the normalized intertwining
operator $A_n(s_0,\chi)$,
$$0\longrightarrow S^o(V_{m,\pm}^n) \longrightarrow S(V_{m,\pm}^n) \longrightarrow R_n(V_{m',\pm}) \longrightarrow 0,\qquad m'=2n+2-m.$$
\end{prop}
\begin{rem}\label{rem5.3} (i)
For a fixed $n$, Proposition~\ref{prop3.7}, together with the isomorphism (\ref{almost.matching}), provides a good supply of matching pairs to which Proposition~\ref{prop3.5} can be applied. \hfill\break
(ii)
Comparison of inner products is more difficult to achieve, and it is not clear if one can expect to find isomorphisms like that of Theorem~\ref{theo3.4}.
\hfill\break
(iii)
For example, suppose that $m=\mathbf{m} +2$ is even and $\chi\ne1$. Take $n = \frac12\mathbf{m}$, i.e., $m=2n+2$. Then by (iv), $R_n(V_{m,\pm}) = I_n(s_0,\chi)$
and hence every function $\varphi \in S(V^n_{m,+})$ has a matching function $\varphi'\in S(V^n_{m,-})$.
To compare an inner product of classes as in (i) of Proposition~\ref{prop-3.6}, we would want $\varphi_1\otimes\varphi_2$ to match $\varphi_1'\otimes\varphi_2'$ as well.
On the other hand, we have the sequence of surjective maps
$$S(V^n_{m,\pm})\otimes S(V^n_{m,\pm}) = S(V^{\mathbf{m}}_{m,\pm}) \longrightarrow R_{\mathbf{m}}(V_{m,\pm}) \longrightarrow R_{\mathbf{m}}(V_{\mathbf{m},\pm}),$$
so that there can be no matching function when the image of $\varphi_1\otimes\varphi_2$ in $R_{\mathbf{m}}(V_{\mathbf{m},\pm})$
is non-zero. This produces a supply of examples where the tensor products of matching functions are not matching.
\end{rem}
Two algebraic structures of the same kind in a linear category are said to be compatible if their sum also defines the same kind of algebraic structure. Compatible structures often appear in various fields of mathematics and mathematical physics. Among others, the notion of compatible Lie algebras is closely related to linear deformations (in particular, deformations by Nijenhuis operators) of Lie algebras and classical Yang-Baxter equations \cite{nij-ric,golu3}. They also appeared in the study of principal chiral fields \cite{golu1}, loop algebras over Lie algebras \cite{golu2} and elliptic theta functions \cite{Odesskii1}. In the mathematical study of biHamiltonian mechanics, compatible Poisson structures first appeared in the work of Magri, Morosi and Schwarzbach \cite{mag-mor,kos}. There is a close connection between compatible Lie algebras and compatible Poisson structures via dualization \cite{bol}. In the same spirit, compatible associative algebras have been introduced and widely studied \cite{Odesskii2,Odesskii3}. Note that a compatible associative algebra is a triple $A= (A, \mu_1, \mu_2)$ such that $(A, \mu_1)$, $(A, \mu_2)$ and $(A, \mu_1 + \mu_2)$ are all associative algebras. See Section \ref{section-ca} for more details. The relations between compatible associative algebras and associative Yang-Baxter equations, quiver representations and bialgebra theory are explored in \cite{mar,Odesskii2,Odesskii3,wu}. See \cite{dotsenko,stro} for a study of compatible algebras from the operadic point of view.
\medskip
Cohomology and homology are invariants for algebraic structures whose study begins with the works of Hochschild, Harrison and Barr, among others \cite{hoch,harr,barr}. In \cite{gers} Gerstenhaber developed the pioneering theory of formal deformations of associative algebras, which was subsequently generalized to Lie algebras by Nijenhuis and Richardson \cite{nij-ric}. Such deformations are governed by the cohomology of algebras. Later, Balavoine \cite{bala} generalized the results of Gerstenhaber and Nijenhuis-Richardson to algebras over any quadratic operad. On the other hand, the homology of algebras is useful for studying K\"{a}hler differentials and differential forms on algebras. Recently, the authors in \cite{comp-lie} defined a cohomology theory for compatible algebras and studied linear deformations of compatible Lie algebras. Their study relies on the construction of a so-called bidifferential graded Lie algebra whose Maurer-Cartan elements are compatible Lie algebra structures. This bidifferential graded Lie algebra is not far from the Nijenhuis-Richardson graded Lie algebra constructed in \cite{nij-ric} to study Lie algebras. As a result, it lacks some of the information needed to study compatible Lie algebras, which is why the authors of \cite{comp-lie} did not study extensions of finite order deformations.
\medskip
Our first aim is to construct a graded Lie algebra suitable for compatible algebraic structures. In this paper, we mainly focus on compatible associative algebras. Compatible algebras over any quadratic operads will be treated elsewhere. Here we define a graded Lie algebra (using the Gerstenhaber bracket \cite{gers2}) whose Maurer-Cartan elements are given by compatible associative algebras. Then we construct the cohomology of a compatible associative algebra $(A, \mu_1, \mu_2)$ with coefficients in a suitable bimodule. We show that the cohomology with coefficients in itself can be seen as the cohomology induced by the corresponding Maurer-Cartan element. We note that our cohomology of compatible associative algebras is not a combination of the cohomologies of $(A, \mu_1)$ and $(A, \mu_2)$. However, we observe that there is a morphism from the cohomology of a compatible associative algebra $(A, \mu_1, \mu_2)$ to the cohomology of the associative algebra $(A, \mu_1 + \mu_2)$. As applications of cohomology, we study extensions and various types of deformations (e.g., linear, formal and finite order) of a compatible associative algebra. Along the way, we introduce Nijenhuis operators that induce trivial linear deformations. We also show that the vanishing of the second cohomology group of a compatible associative algebra $A$ implies that $A$ is rigid. Moreover, the obstruction to extending a finite order deformation is always a $3$-cocycle. Thus, vanishing of the third cohomology group implies that any finite order deformation is extensible.
\medskip
Next, we introduce homology for compatible associative algebras with coefficients in a suitable bimodule. To do this, we first define a notion of compatible presimplicial module and show that a compatible presimplicial module induces a homology. Like cohomology, the homology of a compatible associative algebra $(A, \mu_1, \mu_2)$ is not a combination of the homologies of $(A, \mu_1)$ and $(A, \mu_2)$.
\medskip
The paper is organized as follows. In Section \ref{section:background}, we recall the (Hochschild) cohomology of associative algebras and the Gerstenhaber bracket. In the next section (Section \ref{section-ca}) we consider compatible associative algebras and compatible bimodules. We also construct the graded Lie algebra whose Maurer-Cartan elements are precisely compatible associative structures. In Section \ref{sec:cohomology}, we introduce the cohomology of a compatible associative algebra with coefficients in a compatible bimodule. Abelian extensions of compatible associative algebras are also treated. In Section \ref{section:deformation}, we study deformations of compatible associative algebras from cohomological points of view. Finally, compatible presimplicial modules and homology of compatible associative algebras are given in Section \ref{section:homology}. We end this paper by mentioning some problems of interest regarding (co)homology of compatible associative algebras.
\subsection*{Notations}
Given an associative algebra $(A, \mu)$, we use the notation $a \cdot b$ for the element $\mu (a, b),$ for $a,b \in A$. If $(M, l, r)$ is a bimodule over the associative algebra $A$, we use the same dot notation for the left and right $A$-actions on $M$, i.e., we write $a \cdot m$ for the element $l(a,m)$ and $m \cdot a$ for the element $r (m,a)$. For a compatible associative algebra $(A, \mu_1, \mu_2)$, we write $a \cdot_1 b$ for $\mu_1 (a, b)$ and $a \cdot_2 b$ for $\mu_2 (a, b)$. Similar notations are used for compatible bimodules over a compatible associative algebra.
All vector spaces, (multi)linear maps and tensor products are over a field $\mathbb{K}$ of characteristic $0$. All vector spaces are finite-dimensional.
\section{Background} \label{section:background}
In this section, we recall the (Hochschild) cohomology theory of associative algebras and the Gerstenhaber bracket. Our main references are \cite{gers2,gers,loday-book}.
\subsection{Cohomology of associative algebras}
Let $(A, \mu)$ be an associative algebra, i.e., $A$ is a vector space and $\mu : A \otimes A \rightarrow A, (a, b) \mapsto a \cdot b$ is a bilinear map satisfying the associativity condition
\begin{align*}
( a \cdot b) \cdot c = a \cdot ( b \cdot c), ~\text{ for } a, b, c \in A.
\end{align*}
We may also denote an associative algebra simply by $A$ when the multiplication map is clear from the context.
\medskip
A bimodule over an associative algebra $A$ consists of a vector space $M$ together with bilinear maps (called left and right $A$-actions) $l : A \otimes M \rightarrow M, (a, m) \mapsto a \cdot m$ and $r : M \otimes A \rightarrow M, (m, a) \mapsto m \cdot a$ satisfying the following compatibilities
\begin{align*}
(a \cdot b ) \cdot m = a \cdot ( b \cdot m), \qquad ( a \cdot m ) \cdot b = a \cdot (m \cdot b) ~~~ \text{ and } ~~~ (m \cdot a) \cdot b = m \cdot ( a \cdot b),
\end{align*}
for $a, b \in A$ and $m \in M$. Note that we have used the same notation $\cdot$ for the multiplication on $A$ as well as for the left and right $A$-actions; which operation is meant will be clear from the entries. An $A$-bimodule as above may be simply denoted by $M$ when both left and right $A$-actions on $M$ are clear.
It follows that $A$ is an $A$-bimodule with both left and right $A$-actions given by the algebra multiplication map.
Given an associative algebra $A$ and an $A$-bimodule $M$, the (Hochschild) cohomology groups can be defined as follows. For each $n \geq 0$, the $n$-th cochain group $C^n( A, M)$ is given by
\begin{align*}
C^n(A,M) : = \mathrm{Hom}(A^{\otimes n}, M),
\end{align*}
and the (Hochschild) coboundary map $\delta : C^n(A, M) \rightarrow C^{n+1} (A, M)$, for $n \geq 0$, is given by
\begin{align}\label{hoch-diff}
(\delta f) (a_1, a_2, \ldots, a_{n+1}) =~& a_1 \cdot f(a_2, \ldots , a_{n+1}) + \sum_{i=1}^{n} (-1)^i f ( a_1, \ldots, a_{i-1}, a_i \cdot a_{i+1}, a_{i+2}, \ldots, a_{n+1} ) \\
~& + (-1)^{n+1} f (a_1, \ldots, a_n) \cdot a_{n+1}, \nonumber
\end{align}
for $f \in C^n(A, M)$ and $a_1, \ldots, a_{n+1} \in A$. The corresponding cohomology groups are called the Hochschild cohomology of $A$ with coefficients in the $A$-bimodule $M$.
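The coboundary formula (\ref{hoch-diff}) lends itself to a direct numerical sanity check. The following Python sketch (an illustration only, not part of the formal development; the toy algebra is our choice) takes $A = \mathbb{R}^2$ with componentwise multiplication, coefficients in $A$ itself, and verifies $\delta \circ \delta = 0$ on sample arguments. The identity $\delta^2 = 0$ holds pointwise, using only associativity and the bimodule axioms, so no linearity of $f$ is assumed.

```python
import random

DIM = 2
def mul(a, b):  # componentwise product on A = R^2 (associative)
    return tuple(x * y for x, y in zip(a, b))

def delta(f, n):
    """Hochschild coboundary of an n-cochain f: A^{x n} -> A, coefficients in A."""
    def df(*a):  # a = (a_1, ..., a_{n+1})
        terms = [mul(a[0], f(*a[1:]))]                      # a_1 . f(a_2, ..., a_{n+1})
        for i in range(1, n + 1):                           # (-1)^i f(..., a_i . a_{i+1}, ...)
            inner = a[:i - 1] + (mul(a[i - 1], a[i]),) + a[i + 1:]
            terms.append(tuple((-1) ** i * x for x in f(*inner)))
        terms.append(tuple((-1) ** (n + 1) * x              # (-1)^{n+1} f(a_1,...,a_n) . a_{n+1}
                           for x in mul(f(*a[:n]), a[n])))
        return tuple(sum(t[k] for t in terms) for k in range(DIM))
    return df

random.seed(0)
f = lambda x, y: (x[0] * y[1] + 1.0, x[0] * x[1] * y[1])    # an arbitrary 2-cochain
ddf = delta(delta(f, 2), 3)                                 # delta^2 applied to f
args = [tuple(random.uniform(-2, 2) for _ in range(DIM)) for _ in range(4)]
print(max(abs(v) for v in ddf(*args)))                      # ~0 up to rounding
```

The same check works for any $n$; only the associativity of the product and the bimodule axioms enter.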
\subsection{The Gerstenhaber bracket}\label{sec:GB}
Recall that, in \cite{gers2}, Gerstenhaber constructed a graded Lie algebra structure on the graded space of all multilinear maps on a vector space $A$. More precisely, he considered the graded space $C^{\ast + 1} (A, A) = C^\ast (A, A) [1] = \oplus_{n \geq 0} C^{n+1} (A, A)$ and defined a graded Lie bracket (called the Gerstenhaber bracket) on $C^{\ast +1 }(A,A)$ by
\begin{align}
[f, g]_G =~& f \diamond g - (-1)^{mn}~ g \diamond f, ~~ \text{ where } \label{gers-brk}\\
(f \diamond g)(a_1, \ldots, a_{m+n+1} ) =~& \sum_{i = 1}^{m+1} (-1)^{(i-1)n}~f ( a_1, \ldots, a_{i-1}, g ( a_i, \ldots, a_{i+n}), \ldots, a_{m+n+1}), \nonumber
\end{align}
for $f \in C^{m+1} (A,A)$ and $g \in C^{n+1}(A,A)$. The importance of this graded Lie bracket is given by the following characterization of associative structures.
\begin{pro}
Let $A$ be a vector space and $\mu: A^{\otimes 2} \rightarrow A$ be a bilinear map on $A$. Then $\mu$ defines an associative structure on $A$ if and only if $[\mu,\mu]_G = 0$.
\end{pro}
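This characterization can be illustrated numerically. The sketch below (illustration only; the toy algebra $A = \mathbb{R}^2$ and the two sample products are our choices, not from the paper) implements the circle product of (\ref{gers-brk}) and confirms that $[\mu,\mu]_G$ vanishes for an associative product but not for a non-associative one.

```python
DIM = 2
def circ(f, m, g, n):
    """Gerstenhaber circle product of f in C^{m+1}(A,A) and g in C^{n+1}(A,A)."""
    def h(*a):  # a has m+n+1 entries
        out = [0.0] * DIM
        for i in range(1, m + 2):
            inner = a[:i - 1] + (g(*a[i - 1:i + n]),) + a[i + n:]
            sign = (-1) ** ((i - 1) * n)
            for k, v in enumerate(f(*inner)):
                out[k] += sign * v
        return tuple(out)
    return h

def gbracket(f, m, g, n):
    """[f, g]_G = f o g - (-1)^{mn} g o f."""
    fg, gf = circ(f, m, g, n), circ(g, n, f, m)
    return lambda *a: tuple(x - (-1) ** (m * n) * y
                            for x, y in zip(fg(*a), gf(*a)))

mu_assoc = lambda x, y: (x[0] * y[0], x[1] * y[1])  # componentwise: associative
mu_bad   = lambda x, y: (x[0] * y[1], x[1] * y[0])  # not associative

a, b, c = (1.0, 2.0), (3.0, -1.0), (0.5, 2.5)
good = max(abs(v) for v in gbracket(mu_assoc, 1, mu_assoc, 1)(a, b, c))
bad  = max(abs(v) for v in gbracket(mu_bad, 1, mu_bad, 1)(a, b, c))
print(good, bad)  # 0.0 for the associative product, nonzero for the other
```

For degree reasons $[\mu,\mu]_G = 2\,\mu \diamond \mu$, so the check reduces to comparing $\mu(\mu(a,b),c)$ with $\mu(a,\mu(b,c))$.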
Let $(A,\mu)$ be an associative algebra. Then it follows from (\ref{hoch-diff}) and (\ref{gers-brk}) that the Hochschild coboundary map $\delta : C^n (A,A) \rightarrow C^{n+1} (A,A)$ for the cohomology of $A$ with coefficients in itself is given by
\begin{align}\label{hoch-diff-brk}
\delta f = (-1)^{n-1} [\mu, f]_G, ~\text{ for } f \in C^n(A,A).
\end{align}
\section{Compatible associative algebras and their characterization}\label{section-ca}
In this section, we first recall compatible associative algebras and then define compatible bimodules over them. We also construct a graded Lie algebra whose Maurer-Cartan elements are compatible associative structures.
\subsection{Compatible associative algebras} In this subsection, we consider compatible associative algebras and compatible bimodules over them.
\begin{defi}
A compatible associative algebra is a triple $(A, \mu_1, \mu_2)$ in which $(A, \mu_1)$ and $(A, \mu_2)$ are both associative algebras satisfying the following compatibility
\begin{align*}
(a \cdot_1 b ) \cdot_2 c + ( a \cdot_2 b) \cdot_1 c = a \cdot_1 ( b \cdot_2 c ) + a \cdot_2 ( b \cdot_1 c ), ~ \text{ for } a, b, c \in A.
\end{align*}
Here $\cdot_1$ and $\cdot_2$ are used for the multiplications $\mu_1$ and $\mu_2$, respectively.
\end{defi}
We may denote a compatible associative algebra as above by $(A, \mu_1, \mu_2)$ or simply by $A$, and say that $(\mu_1, \mu_2)$ is a compatible associative algebra structure on $A$.
\begin{rmk}
It follows from the above definition that the sum $\mu_1 + \mu_2$ also defines an associative product on $A$. In other words, $(A, \mu_1 + \mu_2)$ is an associative algebra. In fact, one can show that $(A , k \mu_1 + l \mu_2)$ is an associative algebra, for any $k, l \in \mathbb{K}.$
\end{rmk}
\begin{defi}
Let $A = (A, \mu_1, \mu_2)$ and $A' = (A', \mu_1', \mu_2')$ be two compatible associative algebras. A morphism of compatible associative algebras from $A$ to $A'$ is a linear map $\phi : A \rightarrow A'$ satisfying $\phi \circ \mu_1 = \mu_1' \circ (\phi \otimes \phi)$ and $\phi \circ \mu_2 = \mu_2' \circ (\phi \otimes \phi)$.
\end{defi}
\begin{pro}
Let $A$ be a vector space. Then a pair $(\mu_1, \mu_2)$ of bilinear maps on $A$ defines a compatible associative algebra structure on $A$ if and only if
\begin{align*}
[\mu_1, \mu_1]_G = 0, \qquad [\mu_2, \mu_2]_G = 0 ~~~~ \text{ and } ~~~~ [\mu_1, \mu_2]_G = 0.
\end{align*}
\end{pro}
\begin{ex}
Let $(A, \mu)$ be an associative algebra. A Nijenhuis operator on $A$ is a linear map $N : A \rightarrow A$ satisfying
\begin{align*}
N(a) \cdot N(b) = N \big( N(a) \cdot b + a \cdot N(b) - N (a \cdot b) \big), \text{ for } a, b \in A.
\end{align*}
A Nijenhuis operator $N$ induces a new associative multiplication on $A$, denoted by $\mu_N : A \otimes A \rightarrow A, (a,b) \mapsto a \cdot_N b$ and it is defined by
\begin{align*}
a \cdot_N b := N(a) \cdot b + a \cdot N(b) - N (a \cdot b), ~ \text{ for } a, b \in A.
\end{align*}
Then it is easy to see that $(A, \mu, \mu_N)$ is a compatible associative algebra.
\end{ex}
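In a commutative algebra, multiplication by a fixed element $n$ is a Nijenhuis operator, since both sides of the defining identity reduce to $n^2ab$. The sketch below (a toy illustration with $A = \mathbb{R}^2$ under the componentwise product; the element $n$ is our choice) uses this observation to verify the Nijenhuis identity, the associativity of $\mu_N$, and the compatibility of the pair $(\mu, \mu_N)$.

```python
N_ELT = (1.5, -0.5)                           # fixed element defining N (our choice)
def mul(a, b):                                # componentwise product on A = R^2
    return tuple(x * y for x, y in zip(a, b))
def N(a):                                     # N = multiplication by N_ELT
    return mul(N_ELT, a)
def mul_N(a, b):                              # a ._N b = N(a).b + a.N(b) - N(a.b)
    return tuple(x + y - z for x, y, z in
                 zip(mul(N(a), b), mul(a, N(b)), N(mul(a, b))))

def close(u, v, tol=1e-9):
    return all(abs(x - y) < tol for x, y in zip(u, v))

a, b, c = (1.0, 2.0), (3.0, -1.0), (0.5, 2.5)

# Nijenhuis identity: N(a).N(b) = N(N(a).b + a.N(b) - N(a.b))
lhs, rhs = mul(N(a), N(b)), N(mul_N(a, b))
print(close(lhs, rhs))                                        # True

# mu_N is associative
print(close(mul_N(mul_N(a, b), c), mul_N(a, mul_N(b, c))))    # True

# compatibility: (a .1 b) .2 c + (a .2 b) .1 c = a .1 (b .2 c) + a .2 (b .1 c)
L = tuple(x + y for x, y in zip(mul_N(mul(a, b), c), mul(mul_N(a, b), c)))
R = tuple(x + y for x, y in zip(mul(a, mul_N(b, c)), mul_N(a, mul(b, c))))
print(close(L, R))                                            # True
```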
\begin{ex}
Let $(A, \mu)$ be an associative algebra. Then the graded space $C^\ast (A,A) = \oplus_{n \geq 1} C^n (A, A)$ of Hochschild cochains of $A$ (with coefficients in itself) carries an associative cup-product \cite{gers2} given by
\begin{align*}
(f \smile_\mu g )(a_1, \ldots, a_{m+n}) = f ( a_1, \ldots, a_m) \cdot g ( a_{m+1}, \ldots, a_{m+n}),
\end{align*}
for $f \in C^m (A,A)$ and $g \in C^n(A,A)$. It is easy to see that if $(A, \mu_1, \mu_2)$ is a compatible associative algebra, then $(C^\ast (A,A), \smile_{\mu_1}, \smile_{\mu_2})$ is a compatible associative algebra.
\end{ex}
\begin{ex}
Let $(A, \mu)$ be an associative algebra and $M$ be an $A$-bimodule. If $f \in C^2 (A, M)$ is a Hochschild $2$-cocycle on $A$ with coefficients in the $A$-bimodule $M$, then $A \oplus M$ can be equipped with the $f$-twisted semidirect product associative algebra structure given by
\begin{align*}
(a, m) \cdot_f (b, n) = ( a \cdot b, a \cdot n + m \cdot b + f (a, b)), ~ \text{ for } (a, m), (b,n) \in A \oplus M.
\end{align*}
With this notation, it can be easily checked that $(A \oplus M, \cdot_0 , \cdot_f)$ is a compatible associative algebra, where $\cdot_0$ denotes the semidirect product corresponding to $f = 0$.
\end{ex}
The next class of examples comes from compatible Rota-Baxter operators on associative algebras. See \cite{guo-book} for more on Rota-Baxter operators.
\begin{defi}
Let $(A, \mu)$ be an associative algebra. A Rota-Baxter operator on $A$ is a linear map $R : A \rightarrow A$ satisfying
\begin{align*}
R(a) \cdot R(b) = R \big( R(a) \cdot b + a \cdot R(b) \big), ~ \text{ for } a, b \in A.
\end{align*}
\end{defi}
A Rota-Baxter operator $R$ induces a new associative multiplication $\mu_R : A \otimes A \rightarrow A,~(a,b) \mapsto a \cdot_R b$ on the underlying vector space $A$ and it is given by
\begin{align*}
a \cdot_R b = R(a) \cdot b + a \cdot R(b), ~ \text{ for } a, b \in A.
\end{align*}
\begin{defi}
Two Rota-Baxter operators $R, S : A \rightarrow A$ on an associative algebra $A$ are said to be compatible if for any $k,l \in \mathbb{K}$, the sum $k R + l S : A \rightarrow A$ is a Rota-Baxter operator on $A$. Equivalently,
\begin{align*}
R(a) \cdot S (b) + S(a) \cdot R (b) = R \big( S(a) \cdot b + a \cdot S(b) \big) + S \big( R(a) \cdot b + a \cdot R(b) \big), ~ \text{ for } a, b \in A.
\end{align*}
\end{defi}
The following result is straightforward.
\begin{pro}
Let $R, S : A \rightarrow A$ be two compatible Rota-Baxter operators on $A$. Then $(A, \cdot_R )$ and $(A, \cdot_S)$ are compatible associative algebra structures on $A$.
\end{pro}
\medskip
\begin{defi}
Let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra. A compatible $A$-bimodule consists of a quintuple $(M, l_1, r_1, l_2, r_2)$ in which $M$ is a vector space and
\begin{align*}
\begin{cases}
l_1 : A \otimes M \rightarrow M,~ (a,m) \mapsto a \cdot_1 m, \\
r_1 : M \otimes A \rightarrow M,~ (m,a) \mapsto m \cdot_1 a,
\end{cases}
~~~~
\begin{cases}
l_2 : A \otimes M \rightarrow M,~ (a,m) \mapsto a \cdot_2 m, \\
r_2 : M \otimes A \rightarrow M,~ (m,a) \mapsto m \cdot_2 a,
\end{cases}
\end{align*}
are bilinear maps such that
\begin{itemize}
\item $(M, l_1, r_1)$ is a bimodule over the associative algebra $(A, \mu_1)$;
\item $(M, l_2, r_2)$ is a bimodule over the associative algebra $(A, \mu_2)$;
\item the following compatibilities hold: for $a, b \in A$ and $m \in M$,
\begin{align}
( a \cdot_1 b ) \cdot_2 m + ( a \cdot_2 b ) \cdot_1 m = a \cdot_1 ( b \cdot_2 m) + a \cdot_2 ( b \cdot_1 m), \label{comp-bi1}\\
( a \cdot_1 m ) \cdot_2 b + ( a \cdot_2 m ) \cdot_1 b = a \cdot_1 ( m \cdot_2 b) + a \cdot_2 ( m \cdot_1 b), \label{comp-bi2}\\
( m \cdot_1 a ) \cdot_2 b + ( m \cdot_2 a ) \cdot_1 b = m \cdot_1 ( a \cdot_2 b) + m \cdot_2 ( a \cdot_1 b). \label{comp-bi3}
\end{align}
\end{itemize}
\end{defi}
A compatible $A$-bimodule as above may be simply denoted by $M$ when no confusion arises.
\begin{ex}
Any compatible associative algebra $A = (A, \mu_1, \mu_2)$ is a compatible $A$-bimodule in which $l_1 = r_1 = \mu_1$ and $l_2 = r_2 = \mu_2$.
\end{ex}
\begin{rmk}
Let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra and $(M, l_1, r_1, l_2, r_2)$ be a compatible $A$-bimodule. Then it is easy to see that $(M, l_1 + l_2, r_1 + r_2)$ is a bimodule over the associative algebra $(A, \mu_1 + \mu_2).$
\end{rmk}
Given an associative algebra and a bimodule, one can construct the dual bimodule \cite{loday-book}. This can be generalized to the case of compatible associative algebras.
\begin{pro}
Let $A$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. Then the dual space $M^*$ also carries a compatible $A$-bimodule structure given by
\begin{align*}
\begin{cases}
( a \cdot_1 \alpha) (m) = \alpha ( m \cdot_1 a), \\
(\alpha \cdot_1 a) (m) = \alpha ( a \cdot_1 m),
\end{cases}
~~~~
\begin{cases}
( a \cdot_2 \alpha) (m) = \alpha ( m \cdot_2 a), \\
(\alpha \cdot_2 a) (m) = \alpha ( a \cdot_2 m), \text{ for } a \in A, \alpha \in M^*, m \in M.
\end{cases}
\end{align*}
\end{pro}
\begin{proof}
We only need to check the compatibility conditions (\ref{comp-bi1})-(\ref{comp-bi3}). For $a, b \in A$, $\alpha \in M^*$ and $m \in M$, we have
\begin{align*}
\big( (a \cdot_1 b) \cdot_2 \alpha + ( a \cdot_2 b ) \cdot_1 \alpha \big) (m) =~& \alpha \big( m \cdot_2 ( a \cdot_1 b) + m \cdot_1 ( a \cdot_2 b ) \big) \\
=~& \alpha \big( (m \cdot_1 a) \cdot_2 b + (m \cdot_2 a) \cdot_1 b \big) \\
=~& (b \cdot_2 \alpha ) (m \cdot_1 a ) + (b \cdot_1 \alpha) (m \cdot_2 a) \\
=~& \big( a \cdot_1 ( b \cdot_2 \alpha ) + a \cdot_2 ( b \cdot_1 \alpha) \big) (m).
\end{align*}
Hence, we have $(a \cdot_1 b) \cdot_2 \alpha + ( a \cdot_2 b ) \cdot_1 \alpha = a \cdot_1 ( b \cdot_2 \alpha ) + a \cdot_2 ( b \cdot_1 \alpha)$. This verifies the condition (\ref{comp-bi1}). The verifications of (\ref{comp-bi2}) and (\ref{comp-bi3}) are similar. Hence the proof.
\end{proof}
The following result generalizes the semidirect product for associative algebras \cite{loday-book}.
\begin{pro}\label{semi-prop}
Let $A$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. Then the direct sum $A \oplus M$ carries a compatible associative algebra structure given by
\begin{align*}
(a,m) \cdot_1 (b, n) = (a \cdot_1 b,~ a \cdot_1 n + m \cdot_1 b),\\
(a,m) \cdot_2 (b, n) = (a \cdot_2 b,~ a \cdot_2 n + m \cdot_2 b),
\end{align*}
for $(a,m), (b,n) \in A \oplus M$. This is called the semidirect product.
\end{pro}
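Proposition~\ref{semi-prop} can also be tested on toy data (an illustration only; the specific choices below are ours). Take $A = \mathbb{R}^2$ with the componentwise product $\mu_1$ and $\mu_2(a,b) = n \odot a \odot b$, the deformation induced by the Nijenhuis operator $N(a) = n \odot a$ (multiplication by a fixed element, a Nijenhuis operator since $A$ is commutative), together with $M = A$ as the regular compatible bimodule. The sketch checks that both semidirect products on $A \oplus M$ are associative and compatible.

```python
N_ELT = (2.0, -1.0)
def mul1(a, b):                 # componentwise product mu_1 on A = R^2
    return tuple(x * y for x, y in zip(a, b))
def mul2(a, b):                 # mu_2 = Nijenhuis deformation: n . a . b componentwise
    return mul1(N_ELT, mul1(a, b))

def sd(prod):                   # semidirect product on A (+) M with M = A regular
    def p(x, y):
        (a, m), (b, n) = x, y
        return (prod(a, b), tuple(u + v for u, v in zip(prod(a, n), prod(m, b))))
    return p

p1, p2 = sd(mul1), sd(mul2)

def add(u, v):
    return (tuple(s + t for s, t in zip(u[0], v[0])),
            tuple(s + t for s, t in zip(u[1], v[1])))
def close(u, v, tol=1e-9):
    return all(abs(s - t) < tol for s, t in zip(u[0] + u[1], v[0] + v[1]))

x = ((1.0, 2.0), (0.5, -1.0))
y = ((3.0, -1.0), (2.0, 0.5))
z = ((0.5, 2.5), (-1.0, 1.0))

print(close(p1(p1(x, y), z), p1(x, p1(y, z))))     # True: p1 associative
print(close(p2(p2(x, y), z), p2(x, p2(y, z))))     # True: p2 associative
L = add(p2(p1(x, y), z), p1(p2(x, y), z))
R = add(p1(x, p2(y, z)), p2(x, p1(y, z)))
print(close(L, R))                                  # True: compatibility
```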
\subsection{A new graded Lie algebra and characterization of compatible associative algebras}
Let $(\mathfrak{g} = \oplus_n \mathfrak{g}^n, [~,~])$ be a graded Lie algebra. An element $\theta \in \mathfrak{g}^1$ is said to be a Maurer-Cartan element of $\mathfrak{g}$ if $\theta$ satisfies
\begin{align*}
[\theta, \theta] = 0.
\end{align*}
\begin{rmk}\label{mc-rem}
\begin{itemize}
\item[(i)] A Maurer-Cartan element $\theta$ induces a degree $1$ coboundary map $d_\theta := [\theta, -]$ on $\mathfrak{g}$. In fact, the differential $d_\theta$ makes the triple $(\mathfrak{g}, [~,~], d_\theta)$ into a differential graded Lie algebra.
\item[(ii)] Let $\theta$ be a Maurer-Cartan element. For any $\theta' \in \mathfrak{g}^1$, the sum $\theta + \theta'$ is a Maurer-Cartan element of $\mathfrak{g}$ if and only if $\theta'$ satisfies
\begin{align*}
d_\theta (\theta') + \frac{1}{2} [\theta', \theta' ] = 0.
\end{align*}
\end{itemize}
\end{rmk}
\begin{defi}
Two Maurer-Cartan elements $\theta_1$ and $\theta_2$ are said to be compatible if they additionally satisfy $[\theta_1, \theta_2] = 0$. In this case, we say that $(\theta_1, \theta_2)$ is a compatible pair of Maurer-Cartan elements of $\mathfrak{g}$.
\end{defi}
In the rest of this section, we assume that the underlying graded vector space $\mathfrak{g}$ is non-negatively graded, i.e., $\mathfrak{g} = \oplus_{n \geq 0} \mathfrak{g}^n$. We construct a new graded Lie algebra $\mathfrak{g}_c$ whose Maurer-Cartan elements are precisely the compatible pairs of Maurer-Cartan elements of $\mathfrak{g}.$
We define $\mathfrak{g}_c = \oplus_{n \geq 0} (\mathfrak{g}_c)^n$, where
\begin{align*}
(\mathfrak{g}_c)^0 = \mathfrak{g}^0 ~~~~ \text{ and } ~~~~ (\mathfrak{g}_c)^n = \underbrace{\mathfrak{g}^n \oplus \cdots \oplus \mathfrak{g}^n}_{(n+1) \text{ times}}, ~\text{ for } n \geq 1.
\end{align*}
Let $\llbracket ~,~ \rrbracket : (\mathfrak{g}_c)^m \times (\mathfrak{g}_c)^n \rightarrow (\mathfrak{g}_c)^{m+n}$, for $m,n \geq 0$, be the degree $0$ bracket defined by
\begin{align}\label{br}
\llbracket &(f_1, \ldots, f_{m+1}), (g_1, \ldots, g_{n+1}) \rrbracket \\
&:= \big( [f_1, g_1],~ [f_1, g_2] + [f_2, g_1],~ \ldots, \underbrace{[f_1, g_i] + [f_2, g_{i-1}] + \cdots + [f_i, g_1]}_{i\text{-th place}},~ \ldots, [f_{m+1}, g_{n+1}] \big), \nonumber
\end{align}
for $(f_1, \ldots, f_{m+1}) \in (\mathfrak{g}_c)^m$ and $(g_1, \ldots, g_{n+1}) \in (\mathfrak{g}_c)^n.$
\begin{pro}
\begin{itemize}
\item[(i)] With the above notations, $(\mathfrak{g}_c, \llbracket ~, ~ \rrbracket)$ is a graded Lie algebra. Moreover, the map $\phi : \mathfrak{g}_c \rightarrow \mathfrak{g}$ defined by
\begin{align*}
\phi (f ) =~& f, ~ \text{ for } f \in (\mathfrak{g}_c)^0 = \mathfrak{g}^0,\\
\phi ((f_1, \ldots, f_{n+1})) =~& f_1 + \cdots + f_{n+1},~ \text{ for } (f_1, \ldots, f_{n+1}) \in (\mathfrak{g}_c)^n
\end{align*}
is a morphism of graded Lie algebras.
\item[(ii)] A pair $(\theta_1, \theta_2)$ of elements of $\mathfrak{g}^1$ is a compatible pair of Maurer-Cartan elements of $\mathfrak{g}$ if and only if $(\theta_1, \theta_2) \in (\mathfrak{g}_c)^1 = \mathfrak{g}^1 \oplus \mathfrak{g}^1$ is a Maurer-Cartan element in the graded Lie algebra $(\mathfrak{g}_c, \llbracket ~, ~ \rrbracket).$
\end{itemize}
\end{pro}
\begin{proof}
(i) For $(f_1, \ldots, f_{m+1}) \in (\mathfrak{g}_c)^m$, $(g_1, \ldots, g_{n+1}) \in (\mathfrak{g}_c)^n$ and $(h_1, \ldots, h_{p+1}) \in (\mathfrak{g}_c)^p$,
\begin{align*}
&\llbracket (f_1, \ldots, f_{m+1}), \llbracket (g_1, \ldots, g_{n+1}) , (h_1, \ldots, h_{p+1}) \rrbracket \rrbracket \\
&= \llbracket (f_1, \ldots, f_{m+1}) , \big( [g_1, h_1], \ldots, \underbrace{\sum_{q+r = i+1} [g_q, h_r]}_{i\text{-th place}}, \ldots, [g_{n+1}, h_{p+1}] \big) \rrbracket \\
&= \big( [f_1, [g_1, h_1]], \ldots, \underbrace{ \sum_{j+q+r= i+2} [f_j, [g_q, h_r]]}_{i\text{-th place}}, \ldots, [f_{m+1}, [g_{n+1}, h_{p+1}]] \big)\\
&= \bigg( [[f_1, g_1], h_1] + (-1)^{mn} ~ [g_1, [f_1, h_1]]~, \ldots, \underbrace{\sum_{j+q+r = i+2} \big( [[f_j, g_q], h_r] + (-1)^{mn} ~[g_q, [f_j, h_r]] \big)}_{i\text{-th place}}, \\
& \qquad \qquad \qquad \ldots, [[f_{m+1}, g_{n+1}], h_{p+1}] + (-1)^{mn} ~ [g_{n+1}, [f_{m+1}, h_{p+1}]] \bigg) \\
&= \llbracket \llbracket (f_1, \ldots, f_{m+1}), (g_1, \ldots, g_{n+1}) \rrbracket, (h_1, \ldots, h_{p+1}) \rrbracket \\
& \qquad \qquad \qquad + (-1)^{mn}~ \llbracket (g_1, \ldots, g_{n+1}), \llbracket (f_1, \ldots, f_{m+1}), (h_1, \ldots, h_{p+1}) \rrbracket \rrbracket.
\end{align*}
Hence the first part follows. We also have
\begin{align*}
\phi \llbracket (f_1, \ldots, f_{m+1}), (g_1, \ldots, g_{n+1}) \rrbracket =~& \sum_{i=1}^{m+n+1} \sum_{q+r = i+1} [f_q, g_r] \\
=~&[f_1 + \cdots + f_{m+1}, ~g_1 + \cdots + g_{n+1}] \\
=~& [\phi (f_1, \ldots, f_{m+1}), \phi (g_1, \ldots, g_{n+1}) ],
\end{align*}
which completes the second part.
(ii) For a pair $(\theta_1, \theta_2)$ of elements of $\mathfrak{g}^1$, we have
\begin{align*}
\llbracket (\theta_1, \theta_2), (\theta_1, \theta_2) \rrbracket = ( [\theta_1, \theta_1], [\theta_1, \theta_2] + [\theta_2, \theta_1], [\theta_2, \theta_2]) = ([\theta_1, \theta_1], 2[\theta_1, \theta_2], [\theta_2, \theta_2]).
\end{align*}
Thus $(\theta_1, \theta_2) \in (\mathfrak{g}_c)^1$ is a Maurer-Cartan element in $\mathfrak{g}_c$ if and only if $(\theta_1, \theta_2)$ is a pair of compatible Maurer-Cartan elements in $\mathfrak{g}$.
\end{proof}
Thus, from the Gerstenhaber graded Lie bracket (defined in Subsection \ref{sec:GB}) and the above proposition, we get the following.
\begin{thm}
Let $A$ be a vector space.
\begin{itemize}
\item[(i)] Then $C^{\ast + 1}_c (A, A) := \oplus_{n \geq 0} C^{n+1}_c (A,A)$, where
\begin{align*}
C^1_c (A,A) = C^1(A,A) ~~~~ \text{ and } ~~~~ C^{n+1}_c (A,A) = \underbrace{C^{n+1} (A,A) \oplus \cdots \oplus C^{n+1}(A,A)}_{(n+1) \text{ times}}, ~\text{ for } n \geq 1
\end{align*}
is a graded Lie algebra with the bracket given by (\ref{br}), where $[~,~]$ is taken to be the Gerstenhaber bracket $[~,~]_G$. Moreover, the map
\begin{align}\label{the-phi}
\phi : C^{\ast + 1}_c (A,A) \rightarrow C^{\ast + 1 } (A,A),~ (f_1, \ldots, f_{n+1}) \mapsto f_1 + \cdots + f_{n+1}, \text{ for } n \geq 0
\end{align}
is a morphism of graded Lie algebras.
\item[(ii)] A pair $(\mu_1, \mu_2) \in C^2(A,A) \oplus C^2(A,A)$ defines a compatible associative algebra structure on $A$ if and only if $(\mu_1, \mu_2) \in C^2_c(A,A)$ is a Maurer-Cartan element in the graded Lie algebra $(C^{\ast + 1}_c (A,A), \llbracket ~, ~ \rrbracket)$.
\end{itemize}
\end{thm}
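For instance, in the lowest degree the bracket is given componentwise as follows (a direct specialization of (\ref{br}), spelled out for the reader's convenience): for $(f_1, f_2), (g_1, g_2) \in C^2_c (A,A)$,
\begin{align*}
\llbracket (f_1, f_2), (g_1, g_2) \rrbracket = \big( [f_1, g_1],~ [f_1, g_2] + [f_2, g_1],~ [f_2, g_2] \big) \in C^3_c (A,A).
\end{align*}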
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra. Then it follows from Remark \ref{mc-rem} that there is a degree $1$ coboundary map
\begin{align}\label{d-mu}
d_{(\mu_1, \mu_2)} := \llbracket (\mu_1, \mu_2), - \rrbracket : C^n_c (A,A) \rightarrow C^{n+1}_c (A,A), \text{ for } n \geq 1
\end{align}
which makes $(C^{\ast + 1}_c (A, A), \llbracket ~, ~ \rrbracket, d_{(\mu_1, \mu_2)} )$ into a differential graded Lie algebra.
\begin{rmk}
Later we will introduce the cohomology of a compatible associative algebra $A$ with coefficients in a compatible $A$-bimodule. We will see that the cohomology of the cochain complex $\{ C^\ast_c (A,A), d_{(\mu_1, \mu_2)} \}$ is the cohomology of $A$ with coefficients in itself.
\end{rmk}
We also have the following result from Remark \ref{mc-rem}.
\begin{pro}
Let $(\mu_1, \mu_2)$ be a compatible associative algebra structure on $A$. For any $(\mu_1', \mu_2') \in C^2_c (A,A) = C^2(A,A) \oplus C^2(A,A)$, the pair $(\mu_1 + \mu_1', \mu_2 + \mu_2')$ is a compatible associative algebra structure on $A$ if and only if $(\mu_1', \mu_2')$ satisfies
\begin{align*}
d_{(\mu_1, \mu_2)} (\mu_1', \mu_2') + \frac{1}{2} \llbracket (\mu_1', \mu_2') , (\mu_1', \mu_2') \rrbracket = 0.
\end{align*}
\end{pro}
\section{Cohomology of compatible associative algebras}\label{sec:cohomology}
In this section, we introduce the cohomology of a compatible associative algebra with coefficients in a compatible bimodule. When considering the cohomology of a compatible associative algebra with coefficients in itself, it carries a graded Lie algebra structure. We also introduce abelian extensions of a compatible associative algebra and classify equivalence classes of abelian extensions in terms of the second cohomology group.
\subsection{Cohomology}
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra and $M = (M, l_1, r_1, l_2, r_2)$ be a compatible $A$-bimodule. Let
\begin{align*}
\delta_1 : C^n(A, M) \rightarrow C^{n+1} (A, M), ~ n \geq 0,
\end{align*}
denotes the coboundary operator for the Hochschild cohomology of $(A, \mu_1)$ with coefficients in the bimodule $(M, l_1, r_1)$, and
\begin{align*}
\delta_2 : C^n(A, M) \rightarrow C^{n+1} (A, M), ~ n \geq 0,
\end{align*}
denotes the coboundary operator for the Hochschild cohomology of $(A, \mu_2)$ with coefficients in the bimodule $(M, l_2, r_2)$. Then we obviously have
\begin{align*}
(\delta_1)^2 = 0 \qquad \text{ and } \qquad (\delta_2)^2 = 0.
\end{align*}
Since the two associative structures $\mu_1$ and $\mu_2$ on $A$, and the corresponding bimodule structures on $M$ are compatible, we may expect some compatibility between the coboundaries $\delta_1$ and $\delta_2$. Before we state the compatibility, we observe the following.
We first give an interpretation of $\delta_1$ and $\delta_2$ in terms of the two associative algebra structures on $A \oplus M$ given in Proposition \ref{semi-prop}. Let $\pi_1, \pi_2 \in C^2 (A \oplus M, A \oplus M)$ denote the elements corresponding to the associative products on $A \oplus M$.
Note that any map $f \in C^n(A, M)$ can be lifted to a map $\widetilde{f} \in C^n (A \oplus M, A \oplus M)$ by
\begin{align*}
\widetilde{f} \big( (a_1, m_1), \ldots, (a_n, m_n) \big) = \big( 0, f (a_1, \ldots, a_n ) \big),
\end{align*}
for $(a_i, m_i) \in A \oplus M$ and $i=1, \ldots, n$. Moreover, we have $f=0$ if and only if $\widetilde{f} = 0$. With all these notations, we have
\begin{align*}
\widetilde{(\delta_1 f )} = (-1)^{n-1}~[ \pi_1, \widetilde{f}]_\mathsf{G} \qquad \text{ and } \qquad \widetilde{(\delta_2 f )} = (-1)^{n-1}~[ \pi_2, \widetilde{f}]_\mathsf{G},
\end{align*}
for $f \in C^n(A, M)$. We are now ready to prove the compatibility condition satisfied by $\delta_1$ and $\delta_2$. More precisely, we have the following.
\begin{pro}\label{delta-comp}
The coboundary operators $\delta_1$ and $\delta_2$ satisfy
\begin{align*}
\delta_1 \circ \delta_2 + \delta_2 \circ \delta_1 = 0.
\end{align*}
\end{pro}
\begin{proof}
For any $f \in C^n (A,M)$, we have
\begin{align*}
&\widetilde{( \delta_1 \circ \delta_2 + \delta_2 \circ \delta_1)(f)} \\
&= (-1)^{n} ~ [\pi_1, \widetilde{\delta_2 f}]_\mathsf{G} ~+~ (-1)^{n} ~ [\pi_2, \widetilde{\delta_1 f}]_\mathsf{G} \\
&= - [\pi_1, [\pi_2, \widetilde{f}]_\mathsf{G} ]_\mathsf{G} - [\pi_2, [\pi_1, \widetilde{f}]_\mathsf{G} ]_\mathsf{G} \\
&= - [[\pi_1, \pi_2]_\mathsf{G}, \widetilde{f}]_\mathsf{G} = 0 ~~~~ \qquad (\text{because} ~[\pi_1, \pi_2]_\mathsf{G} = 0).
\end{align*}
Therefore, it follows that $ ( \delta_1 \circ \delta_2 + \delta_2 \circ \delta_1)(f) = 0$. Hence the result follows.
\end{proof}
The compatibility condition of the above proposition leads to cohomology associated with a compatible associative algebra with coefficients in a compatible bimodule. Let $A$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. We define the $n$-th cochain group $C^n_c (A, M)$, for $n \geq 0$, by
\begin{align*}
C^0_c (A, M) :=~& \{ m \in M ~|~ a \cdot_1 m - m \cdot_1 a = a \cdot_2 m - m \cdot_2 a, ~\forall a \in A \},\\
C^n_c (A, M) :=~& \underbrace{C^n (A, M) \oplus \cdots \oplus C^n (A, M)}_{n \text{ copies}}, ~ \text{ for } n \geq 1.
\end{align*}
Define a map $\delta_c : C^n_c (A, M) \rightarrow C^{n+1}_c (A, M)$, for $n \geq 0$, by
\begin{align}
\delta_c (m) (a) :=~& a \cdot_1 m - m \cdot_1 a = a \cdot_2 m - m \cdot_2 a, \text{ for } m \in C^0_c (A, M) \text{ and } a \in A, \label{dc-1}\\
\delta_c (f_1, \ldots, f_n ) :=~& ( \delta_1 f_1, \ldots, \underbrace{\delta_1 f_i + \delta_2 f_{i-1}}_{i-\text{th place}}, \ldots, \delta_2 f_n),\label{dc-2}
\end{align}
for $(f_1, \ldots, f_n ) \in C^n_c (A, M)$. The map $\delta_c$ can be understood by the following diagram:
\begin{align*}
\xymatrix{
& & & C^3(A,M) & \\
& & C^2(A,M) \ar[ru]^{\delta_1} \ar[rd]_{\delta_2} & & \\
C^0_c (A,M) \ar[r]^{\delta_1 = \delta_2} & C^1(A,M) \ar[ru]^{\delta_1} \ar[rd]_{\delta_2} & & C^3(A,M) & \cdots\\
& & C^2(A,M) \ar[ru]^{\delta_1} \ar[rd]_{\delta_2} & & \\
& & & C^3(A,M) & \\
}
\end{align*}
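For instance, in the two lowest degrees the formula (\ref{dc-2}) specializes to
\begin{align*}
\delta_c (f_1) = ( \delta_1 f_1, \delta_2 f_1 ) \qquad \text{ and } \qquad \delta_c (f_1, f_2) = ( \delta_1 f_1,~ \delta_1 f_2 + \delta_2 f_1,~ \delta_2 f_2 ),
\end{align*}
for $f_1 \in C^1_c (A,M)$ and $(f_1, f_2) \in C^2_c (A,M)$, which is precisely the pattern displayed in the diagram.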
Observe that Proposition \ref{delta-comp} and the above diagrammatic presentation of $\delta_c$ show that $(\delta_c)^2 =0$. Nevertheless, we give a rigorous proof of this fact.
\begin{pro}
The map $\delta_c$ is a coboundary operator, i.e., $(\delta_c)^2 = 0$.
\end{pro}
\begin{proof}
For $m \in C^0_c (A,M)$, we have
\begin{align*}
(\delta_c)^2 (m) = \delta_c ( \delta_c m ) =~& (\delta_1 \delta_c m~,\delta_2 \delta_c m ) \\
=~& ( \delta_1 \delta_1 m~, \delta_2 \delta_2 m) = 0.
\end{align*}
Moreover, for any $(f_1, \ldots, f_n) \in C^n_c (A,M)$, $n \geq 1$, we have
\begin{align*}
(\delta_c)^2 (f_1, \ldots, f_n)
&= \delta_c \big( \delta_1 f_1, \ldots, \delta_1 f_i + \delta_2 f_{i-1}, \ldots, \delta_2 f_n \big) \\
&= \big( \delta_1 \delta_1 f_1~, \delta_2 \delta_1 f_1 + \delta_1 \delta_2 f_1 + \delta_1 \delta_1 f_2~, \ldots, \\
& \qquad \underbrace{ \delta_2 \delta_2 f_{i-2} + \delta_2 \delta_1 f_{i-1} + \delta_1 \delta_2 f_{i-1} + \delta_1 \delta_1 f_i }_{3 \leq i \leq n-1}~, \ldots, \\
& \qquad \delta_2 \delta_2 f_{n-1} + \delta_2 \delta_1 f_n + \delta_1 \delta_2 f_n ~,~ \delta_2 \delta_2 f_n \big) \\
&= 0 ~~~\quad (\text{from Proposition } \ref{delta-comp}).
\end{align*}
This proves that $(\delta_c)^2 = 0$.
\end{proof}
Thus, we have a cochain complex $\{ C^\ast_c (A, M), \delta_c \}$. Let $Z^n_c (A, M)$ denote the space of $n$-cocycles and $B^n_c (A, M)$ the space of $n$-coboundaries. Then we have $B^n_c (A, M) \subset Z^n_c (A, M)$, for $n \geq 0$. The corresponding quotient groups
\begin{align*}
H^n_c (A, M) := \frac{ Z^n_c (A, M) }{ B^n_c (A, M)}, \text{ for } n \geq 0
\end{align*}
are called the cohomology of the compatible associative algebra $A$ with coefficients in the compatible $A$-bimodule $M$.
\medskip
It follows from the above definition that
\begin{align*}
H^0_c (A, M) = \{ m \in M ~|~ a \cdot_1 m - m \cdot_1 a = a \cdot_2 m - m \cdot_2 a, ~ \forall a \in A \}.
\end{align*}
A linear map $D: A \rightarrow M$ is said to be a derivation on $A$ with values in the compatible $A$-bimodule $M$ if $D$ satisfies
\begin{align*}
D ( a \cdot_1 b ) = D(a) \cdot_1 b + a \cdot_1 D(b) \quad \text{ and } \quad D ( a \cdot_2 b ) = D(a) \cdot_2 b + a \cdot_2 D(b), \text{ for } a, b \in A.
\end{align*}
We denote the space of derivations by $\mathrm{Der}(A, M)$. A derivation $D$ is said to be an inner derivation if $D$ is of the form
$D (a) = a \cdot_1 m - m \cdot_1 a = a \cdot_2 m - m \cdot_2 a$, for some $m \in C^0_c (A, M).$ The space of inner derivations is denoted by $\mathrm{InnDer}(A, M).$ Then we have
\begin{align*}
H^1_c (A,M) = \frac{ \mathrm{Der}(A, M) }{ \mathrm{InnDer}(A, M) }.
\end{align*}
\subsection{Particular case: Cohomology with self coefficients}
Let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra. Then we have seen that $A$ is itself a compatible $A$-bimodule. If $\delta_1$ and $\delta_2$, respectively, denote the coboundary operator
for the Hochschild cohomology of $(A, \mu_1)$ and $(A, \mu_2)$ with self coefficients, then we have from (\ref{hoch-diff-brk}) that
\begin{align*}
\delta_1 f = (-1)^{n-1} [\mu_1, f]_G ~~~~ \text{ and } ~~~~ \delta_2 f = (-1)^{n-1} [\mu_2, f ]_G, ~ \text{ for } f \in C^n (A, A).
\end{align*}
Thus, it follows from (\ref{dc-2}) that the coboundary map $\delta_c : C^n_c (A, A) \rightarrow C^{n+1}_c(A, A)$ for the cohomology of the compatible associative algebra $A$ with self coefficients is given by
\begin{align}
\delta_c (f_1, \ldots, f_n ) =~& (-1)^{n-1} \big( [\mu_1, f_1]_G, [\mu_1, f_2]_G + [\mu_2, f_1]_G, \ldots, [\mu_2, f_n]_G \big) \label{diff-gers}\\
=~& (-1)^{n-1} \llbracket (\mu_1, \mu_2), (f_1, \ldots, f_n ) \rrbracket, \nonumber
\end{align}
for $(f_1, \ldots, f_n ) \in C^n_c (A, A)$, $n \geq 1$. This shows that $\delta_c$ is the same as the coboundary operator $d_{(\mu_1, \mu_2)}$ (defined in (\ref{d-mu})) up to a sign. Therefore, the corresponding cohomologies are isomorphic.
As a consequence, we get the following.
\begin{thm}\label{thm-gla-coho}
Let $A$ be a compatible associative algebra. Then the graded Lie bracket $\llbracket ~, ~ \rrbracket$ on $C^{\ast +1 }_c (A, A)$ induces a graded Lie bracket on the graded space $H^{\ast +1}_c (A, A)$ of cohomology groups.
\end{thm}
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra. Then we know that $(A, \mu_1 + \mu_2)$ is an associative algebra. Note that the cohomology of the compatible associative algebra $(A, \mu_1, \mu_2)$ with coefficients in itself is induced by the Maurer-Cartan element $(\mu_1, \mu_2) \in C^2_c (A, A)$ in the graded Lie algebra $(C^{\ast +1}_c (A, A), \llbracket ~, ~ \rrbracket)$. On the other hand, the Hochschild cohomology of the associative algebra $(A, \mu_1+ \mu_2)$ with coefficients in itself is induced by the Maurer-Cartan element $\mu_1 + \mu_2 \in C^2(A, A)$ in the graded Lie algebra $(C^{\ast +1} (A, A), [~,~]_G)$. Moreover, it follows from (\ref{the-phi}) that $\phi ((\mu_1, \mu_2)) = \mu_1 + \mu_2$. Hence, the graded Lie algebra map $\phi$ takes the Maurer-Cartan element $(\mu_1, \mu_2) \in C^2_c (A, A)$ to the Maurer-Cartan element $\mu_1 + \mu_2 \in C^2(A,A).$ Therefore, $\phi$ gives rise to a map between the cohomologies induced by Maurer-Cartan elements.
\begin{thm}
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra. Then the map (\ref{the-phi}) induces a morphism
\begin{align*}
\phi_* : H^\ast_c (A,A) \rightarrow H^\ast (A,A)
\end{align*}
from the cohomology of the compatible associative algebra $(A, \mu_1, \mu_2)$ with coefficients in itself to the Hochschild cohomology of the associative algebra $(A, \mu_1 + \mu_2)$ with coefficients in itself.
\end{thm}
\subsection{Relation with the cohomology of compatible Lie algebras}
Recently, the authors in \cite{comp-lie} introduced a cohomology theory for compatible Lie algebras. In this subsection, we show that our cohomology of compatible associative algebras is related to the cohomology of \cite{comp-lie} by the skew-symmetrization process. Let us first recall some results from the above-mentioned reference.
\begin{defi}
A compatible Lie algebra is a triple $(\mathfrak{g}, [~,~]_1, [~,~]_2)$ consisting of a vector space $\mathfrak{g}$ together with two Lie brackets $[~,~]_1$ and $[~,~]_2$ satisfying the compatibility condition
\begin{align*}
[x,[y,z]_1]_2 + [y,[z,x]_1]_2 + [z,[x, y]_1]_2 + [x,[y,z]_2]_1 + [y,[z,x]_2]_1 + [z,[x, y]_2]_1 = 0, ~\text{ for } x, y, z \in \mathfrak{g}.
\end{align*}
\end{defi}
\begin{defi}
Let $(\mathfrak{g}, [~,~]_1, [~,~]_2)$ be a compatible Lie algebra. A compatible $\mathfrak{g}$-representation is a triple $(V, \rho_1, \rho_2)$, where $(V, \rho_1)$ is a representation of the Lie algebra $(\mathfrak{g}, [~,~]_1)$ and $(V, \rho_2)$ is a representation of the Lie algebra $(\mathfrak{g}, [~,~]_2)$ satisfying additionally
\begin{align*}
\rho_2 ([x, y]_1) + \rho_1 ([x, y]_2) = \rho_1 (x) \rho_2 (y) - \rho_1 (y) \rho_2 (x) + \rho_2 (x) \rho_1 (y) - \rho_2 (y) \rho_1 (x),~\text{ for } x, y \in \mathfrak{g}.
\end{align*}
\end{defi}
Given a compatible Lie algebra $(\mathfrak{g}, [~,~]_1, [~,~]_2)$ and a compatible $\mathfrak{g}$-representation $(V, \rho_1, \rho_2)$, there is a cochain complex $\{ C^\ast_{cL} (\mathfrak{g}, V), \delta_{cL} \}$ defined as follows:
\begin{align*}
C^0_{cL} (\mathfrak{g}, V) :=~& \{ v \in V |~ \rho_1 (x) (v) = \rho_2 (x)(v), \forall x \in \mathfrak{g} \}, ~~ \text{ and } \\
C^n_{cL} (\mathfrak{g}, V) :=~& \underbrace{C^n_L (\mathfrak{g}, V) \oplus \cdots \oplus C^n_L (\mathfrak{g}, V)}_{n \text{ times}}, \text{ for } n \geq 1,
\end{align*}
where $C^n_L (\mathfrak{g}, V) = \mathrm{Hom}(\wedge^n \mathfrak{g}, V).$ The coboundary operator $\delta_{cL} : C^n_{cL} (\mathfrak{g}, V) \rightarrow C^{n+1}_{cL} (\mathfrak{g}, V)$ is given by
\begin{align*}
\delta_{cL} (v) (x) := \rho_1 (x) (v) = \rho_2 (x) (v), ~ \text{ for } v \in C^0_{cL} (\mathfrak{g}, V), x \in \mathfrak{g},\\
\delta_{cL} (f_1, \ldots, f_n ) := ( \delta_{1L} f_1, \ldots, \underbrace{\delta_{1L} f_i + \delta_{2L} f_{i-1}}_{i\text{-th place}}, \ldots, \delta_{2L} f_n),
\end{align*}
for $(f_1, \ldots, f_n ) \in C^n_{cL} (\mathfrak{g}, V)$. Here $\delta_{1L}$ (resp. $\delta_{2L}$) is the coboundary operator for the Chevalley-Eilenberg cohomology of the Lie algebra $(\mathfrak{g}, [~,~]_1)$ with coefficients in $(V, \rho_1)$ (resp. of the Lie algebra $(\mathfrak{g}, [~,~]_2)$ with coefficients in $(V, \rho_2)$). The cohomology of the cochain complex $\{ C^\ast_{cL} (\mathfrak{g}, V), \delta_{cL} \}$ is called the cohomology of the compatible Lie algebra $(\mathfrak{g}, [~,~]_1, [~,~]_2)$ with coefficients in the compatible $\mathfrak{g}$-representation $(V, \rho_1, \rho_2)$, and it is denoted by $H^\ast_{cL} (\mathfrak{g}, V).$
\medskip
It is a well-known fact that the standard skew-symmetrization gives rise to a map from the Hochschild cochain complex of an associative algebra to the Chevalley-Eilenberg cochain complex of the corresponding skew-symmetrized Lie algebra. This generalizes to compatible algebras as well.
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra. Then it can be easily checked that the triple $(A, [~,~]_1, [~,~]_2)$
is a compatible Lie algebra, where
\begin{align*}
[a, b]_1 := a \cdot_1 b - b \cdot_1 a ~~~~ \text{ and } ~~~~ [a, b]_2 := a \cdot_2 b - b \cdot_2 a, \text{ for } a, b \in A.
\end{align*}
We denote this compatible Lie algebra by $A_s$. Moreover, if $M$ is a compatible associative $A$-bimodule, then $M$ can be regarded as a compatible $A_s$-representation by
\begin{align*}
\rho_1 (a)(m) := a \cdot_1 m - m \cdot_1 a ~~~~ \text{ and } ~~~~ \rho_2 (a)(m) := a \cdot_2 m - m \cdot_2 a, ~ \text{ for } a \in A_s, m \in M .
\end{align*}
This compatible $A_s$-representation is denoted by $M_s$. With these notations, we have the following.
\begin{thm}
Let $A$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. Then the standard skew-symmetrization
\begin{align*}
&\Phi_n : C^n_c (A, M) \rightarrow C^n_{cL} (A_s, M_s),~ (f_1, \ldots, f_n) \mapsto (\overline{f_1}, \ldots, \overline{f_n}),~ \text{ for } n \geq 0, \\
&\text{ where } ~~ \overline{f_i} ( a_1, \ldots, a_n ) = \sum_{\sigma \in \mathbb{S}_n} (-1)^\sigma~ f_i (a_{\sigma (1)}, \ldots, a_{\sigma(n)}),~~~ i = 1, \ldots, n,
\end{align*}
gives rise to a morphism of cochain complexes. Hence it induces a map $\Phi_\ast : H^\ast_c (A, M) \rightarrow H^\ast_{cL} (A_s, M_s)$.
\end{thm}
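In the lowest nontrivial degree the map $\Phi_2$ is just the familiar antisymmetrization: for $(f_1, f_2) \in C^2_c (A, M)$ one has
\begin{align*}
\overline{f_i} (a_1, a_2) = f_i (a_1, a_2) - f_i (a_2, a_1), ~~~ i = 1, 2,
\end{align*}
which recovers, in particular, the passage from the products $(\mu_1, \mu_2)$ to the skew-symmetrized brackets $([~,~]_1, [~,~]_2)$.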
\subsection{Abelian extensions of compatible associative algebras}
In this subsection, we generalize the classical abelian extensions of associative algebras \cite{loday-book} to the context of compatible associative algebras. We show that equivalence classes of abelian extensions of a compatible associative algebra are characterized by the second cohomology group of the compatible associative algebra.
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra and $M$ be any vector space. Note that $M$ can also be considered as a compatible associative algebra with trivial associative products.
\begin{defi}
An abelian extension of a compatible associative algebra $A$ by a vector space $M$ is an exact sequence
\begin{align}\label{abel-exact-seq}
\xymatrix{
0 \ar[r] & (M, 0, 0) \ar[r]^{i} & (B, \mu_1^B, \mu_2^B) \ar[r]^{j} & (A, \mu_1, \mu_2) \ar[r] & 0
}
\end{align}
of compatible associative algebras.
\end{defi}
It is important to note that an abelian extension is the whole exact sequence (including the structure maps $i$ and $j$), not just the compatible associative algebra $(B, \mu_1^B, \mu_2^B)$.
Let $s: A \rightarrow B$ be any map satisfying $j \circ s = \mathrm{id}_A$. Such a map always exists. In this case, $s$ is called a section of the map $j$. A section $s$ induces a compatible $A$-bimodule structure on $M$ given by
\begin{align*}
\begin{cases}
a \cdot_1 m = \mu_1^B ( s(a), i(m)), \\
m \cdot_1 a = \mu_1^B (i(m), s(a)),
\end{cases} \qquad \quad
\begin{cases}
a \cdot_2 m = \mu_2^B ( s(a), i(m)), \\
m \cdot_2 a = \mu_2^B (i(m), s(a)),
\end{cases}
\end{align*}
for $a \in A$ and $m \in M$. One can easily check that this compatible $A$-bimodule structure on $M$ is independent of the choice of $s$.
\begin{defi}
Two abelian extensions (the two horizontal rows in the diagram below) of a compatible associative algebra $A$ by a vector space $M$ are said to be equivalent if there is a compatible associative algebra morphism $\phi : B \rightarrow B'$ making the following diagram commutative
\[
\xymatrix{
0 \ar[r] & (M, 0, 0) \ar[r]^{i} \ar@{=}[d] & (B, \mu_1^B, \mu_2^B) \ar[d]^{\phi} \ar[r]^{j} & (A, \mu_1, \mu_2) \ar[r] \ar@{=}[d] & 0 \\
0 \ar[r] & (M, 0, 0) \ar[r]_{i'} & (B', \mu_1^{B'}, \mu_2^{B'}) \ar[r]_{j'} & (A, \mu_1, \mu_2) \ar[r] & 0.
}
\]
\end{defi}
Let $A$ be a compatible associative algebra and $M$ be a given compatible $A$-bimodule. We denote by $\mathrm{Ext}(A,M)$ the set of equivalence classes of abelian extensions of $A$ by the vector space $M$ such that the induced compatible $A$-bimodule structure on $M$ is the prescribed one.
Then we have the following which generalizes the classical result \cite{loday-book} about abelian extensions.
\begin{thm}
Let $A$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. Then there is a bijection between $\mathrm{Ext}(A,M)$ and the second cohomology group $H^2_c (A, M).$
\end{thm}
\begin{proof}
Let $(f_1, f_2) \in Z^2_c (A, M)$ be a $2$-cocycle. Then it is easy to see that the direct sum $A \oplus M$ carries a compatible associative algebra structure given by
\begin{align*}
\mu_1^B ((a,m), (b, n)) :=~& ( a \cdot_1 b ,~ a \cdot_1 n + m \cdot_1 b + f_1 (a, b)), \\
\mu_2^B ((a,m), (b, n)) :=~& ( a \cdot_2 b ,~ a \cdot_2 n + m \cdot_2 b + f_2 (a, b)),
\end{align*}
for $(a, m), (b, n) \in A \oplus M$. Moreover, the exact sequence
\begin{align*}
\xymatrix{
0 \ar[r] & (M, 0, 0) \ar[r]^{i} & (B, \mu_1^B, \mu_2^B) \ar[r]^{j} & (A, \mu_1, \mu_2) \ar[r] & 0
}
\end{align*}
defines an abelian extension, where $i (m) = (0, m)$ and $j (a, m) = a$. Suppose $(f_1', f_2') \in Z^2_c (A, M)$ is another $2$-cocycle cohomologous to $(f_1, f_2)$, and say,
\begin{align*}
(f_1, f_2) - (f_1', f_2') = \delta_c (g), ~\text{ for some } g \in C^1_c (A, M).
\end{align*}
Let $B' = A \oplus M$ be the abelian extension corresponding to the $2$-cocycle $(f_1', f_2')$. Then the two abelian extensions are equivalent and the equivalence is given by the compatible associative algebra map $\phi : B \rightarrow B'$, $(a,m) \mapsto (a, m + g (a))$. In other words, we obtain a well-defined map $H^2_c (A, M) \rightarrow \mathrm{Ext}(A,M).$
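Let us indicate why $\phi$ preserves the products. Writing $\delta_c (g) = (\delta_1 g, \delta_2 g)$, we have, for $i = 1, 2$,
\begin{align*}
\phi \big( \mu_i^B ((a,m), (b, n)) \big) =~& \big( a \cdot_i b,~ a \cdot_i n + m \cdot_i b + f_i (a,b) + g (a \cdot_i b) \big) \\
=~& \big( a \cdot_i b,~ a \cdot_i (n + g(b)) + (m + g(a)) \cdot_i b + f_i' (a,b) \big) \\
=~& \mu_i^{B'} \big( \phi (a,m), \phi (b,n) \big),
\end{align*}
where the second equality uses $f_i (a,b) - f_i' (a,b) = a \cdot_i g(b) + g(a) \cdot_i b - g (a \cdot_i b) = (\delta_i g)(a,b)$.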
Conversely, let (\ref{abel-exact-seq}) be an abelian extension and $s: A \rightarrow B$ be any section of $j$. Then we may consider $B = A \oplus M$ and the maps $i, j, s$ are the obvious ones. Since $j$ is a compatible associative algebra map, we have
\begin{align*}
j \circ \mu_1^B ((a, 0), (b, 0)) = \mu_1 (a, b) ~~~ \text{ and } ~~~
j \circ \mu_2^B ((a, 0), (b, 0)) = \mu_2 (a, b), ~ \text{ for } a, b \in A.
\end{align*}
Hence it follows that
\begin{align*}
\mu_1^B ((a, 0), (b,0)) = (\mu_1 (a, b), f_1 (a, b)) ~~ \text{ and } ~~ \mu_2^B ((a, 0), (b,0)) = (\mu_2 (a, b), f_2 (a, b)),
\end{align*}
for some $f_1, f_2 : A^{\otimes 2} \rightarrow M$. Since $\mu_1^B, \mu_2^B$ define a compatible associative structure on $B$, it follows that the pair $(f_1, f_2) \in C^2_c (A, M)$ is a $2$-cocycle in the cohomology of the compatible associative algebra $A$ with coefficients in the compatible $A$-bimodule $M$. It is left to the reader to verify that equivalent abelian extensions induce cohomologous $2$-cocycles (see \cite{loday-book} for the classical associative case). This shows that there is a well-defined map $\mathrm{Ext}(A,M) \rightarrow H^2_c (A, M)$.
Finally, the maps $H^2_c (A, M) \rightarrow \mathrm{Ext}(A,M)$ and $\mathrm{Ext}(A,M) \rightarrow H^2_c (A, M)$ constructed above are inverses to each other. Hence the proof.
\end{proof}
\section{Deformations of compatible associative algebras}\label{section:deformation}
In this section, we study various aspects of deformations of compatible associative algebras following the classical deformation theory of Gerstenhaber \cite{gers}.
\subsection{Linear deformations and Nijenhuis operators} In this subsection, we consider linear deformations of a compatible associative algebra $A$ and introduce Nijenhuis operators on $A$ that induce trivial linear deformations. We also introduce infinitesimal deformations of $A$ and show that equivalence classes of infinitesimal deformations are in one-to-one correspondence with the second cohomology group $H^2_c (A, A).$
Let $A= (A, \mu_1, \mu_2)$ be a compatible associative algebra.
\begin{defi}
A linear deformation of $A$ consists of two products of the form
\begin{align*}
\mu_1^t = \mu_1 + t \omega_1 ~~~ \text{ and } ~~~ \mu_2^t = \mu_2 + t \omega_2, ~\text{ for some } \omega_1, \omega_2 \in C^2(A,A)
\end{align*}
which makes $(A, \mu_1^t, \mu_2^t)$ into a compatible associative algebra, for all values of $t$.
\end{defi}
In this case, we say that the pair $(\omega_1, \omega_2)$ generates a linear deformation of $A$.
\begin{ex}
The pair $(\mu_1, \mu_2)$ generates a linear deformation of $A$. It is called the `scaling'.
\end{ex}
Let $(\omega_1, \omega_2)$ generate a linear deformation of the compatible associative algebra $A$. It follows that $\mu_1^t = \mu_1 + t \omega_1$ and $\mu_2^t = \mu_2 + t \omega_2$ satisfy the following relations
\begin{align*}
[\mu_1^t, \mu_1^t]_\mathsf{G} = 0, \qquad [\mu_2^t, \mu_2^t]_\mathsf{G} = 0 ~~~~ \text{ and } ~~~~ [\mu_1^t, \mu_2^t]_\mathsf{G} = 0.
\end{align*}
These relations are equivalent to the following:
\begin{align}
[\mu_1, \omega_1]_\mathsf{G} = 0, \qquad [\mu_2, \omega_2]_\mathsf{G} = 0, \qquad [\mu_1, \omega_2]_\mathsf{G} + [\mu_2 , \omega_1]_\mathsf{G} = 0, \label{lin-f}\\
[\omega_1, \omega_1]_\mathsf{G} = 0, \qquad [\omega_2, \omega_2]_\mathsf{G} = 0 \qquad \text{ and } \qquad [\omega_1, \omega_2 ]_\mathsf{G} = 0. \label{lin-s}
\end{align}
The three identities in (\ref{lin-f}) imply that
\begin{align*}
\delta_c (\omega_1, \omega_2) = 0,
\end{align*}
where $\delta_c$ is the coboundary operator for the cohomology of the compatible associative algebra $A$ with coefficients in itself. In other words, $(\omega_1, \omega_2) \in Z^2_c (A, A)$ is a $2$-cocycle.
On the other hand, the three conditions of (\ref{lin-s}) imply that the triple $(A, \omega_1, \omega_2)$ is a compatible associative algebra.
\begin{defi}
Let $(\omega_1, \omega_2)$ generate a linear deformation $(\mu_1^t, \mu_2^t)$ of the compatible associative algebra $A$, and let $(\omega_1', \omega_2')$ generate another linear deformation $(\mu_1^{'t}, \mu_2^{'t})$ of $A$. They are said to be equivalent if there is a linear map $N : A \rightarrow A$ such that
\begin{align*}
\mathrm{id} + t N : (A, \mu_1^{t}, \mu_2^{t}) \rightarrow (A, \mu_1^{'t}, \mu_2^{'t})
\end{align*}
is a morphism of compatible associative algebras.
\end{defi}
The condition in the above definition implies that
\begin{align*}
(\mathrm{id} + tN)\circ \mu_i^{t} (a, b) = \mu_i^{'t} ( a + t Na, b + tNb),~\text{ for } i = 1,2 \text{ and } a, b \in A.
\end{align*}
Writing this out explicitly and comparing coefficients of powers of $t$, we get that
\begin{align}
\omega_i ( a, b ) - \omega_i' (a, b) =~& a \cdot_i N(b) + N(a) \cdot_i b - N ( a \cdot_i b), \label{lin-eq-1}\\
N\omega_i (a, b) =~& \omega_i' ( a, Nb) + \omega_i' (Na, b) + N(a) \cdot_i N(b), \label{lin-eq-2}\\
\omega_i' (Na, Nb) =~& 0, \label{lin-eq-3}
\end{align}
for $i=1,2$ and $a, b \in A$. From the two identities (for $i=1,2$) of (\ref{lin-eq-1}), we simply get that
\begin{align}\label{d-f}
(\omega_1, \omega_2) - (\omega_1', \omega_2') = \delta_c (N),
\end{align}
where we consider $N$ as an element in $C^1_c (A, A).$ Summarizing the above discussion, we obtain the following.
\begin{thm}\label{thm-lin-def}
There is a map from the set of equivalence classes of linear deformations of a compatible associative algebra $A$ to the second cohomology group $H^2_c (A,A).$
\end{thm}
\medskip
Let us now discuss trivial linear deformations of a compatible associative algebra $A$.
\begin{defi}
A linear deformation generated by $(\omega_1, \omega_2)$ is said to be trivial if it is equivalent to the undeformed one (i.e., to the deformation generated by $(\omega_1', \omega_2') = (0,0)$).
\end{defi}
Thus, it follows from (\ref{lin-eq-1})-(\ref{lin-eq-3}) that a linear deformation generated by $(\omega_1, \omega_2)$ is trivial if there is a linear map $N : A \rightarrow A$ satisfying
\begin{align}
\omega_i (a, b) =~& a \cdot_i N(b) + N(a) \cdot_i b - N ( a \cdot_i b), \label{lin-nij-1}\\
N \omega_i (a, b) =~& N(a) \cdot_i N(b), ~ \text{ for } i=1,2 \text{ and } a, b \in A. \label{lin-nij-2}
\end{align}
These two identities motivate us to introduce the following definition.
\begin{defi}
Let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra. A linear map $N: A \rightarrow A$ is said to be a Nijenhuis operator on $A$ if $N$ is a Nijenhuis operator for both the associative products $\mu_1$ and $\mu_2$. In other words,
\begin{align*}
N(a) \cdot_i N(b) = N \big( a \cdot_i N(b) + N(a) \cdot_i b - N ( a \cdot_i b) \big),~\text{ for } i=1,2 \text{ and } a, b \in A.
\end{align*}
\end{defi}
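A simple example: for any scalar $\lambda$, the map $N = \lambda~ \mathrm{id}_A$ is a Nijenhuis operator on $A$, since both sides of the above identity equal $\lambda^2~ a \cdot_i b$:
\begin{align*}
N \big( a \cdot_i N(b) + N(a) \cdot_i b - N ( a \cdot_i b) \big) = \lambda ( \lambda~ a \cdot_i b + \lambda~ a \cdot_i b - \lambda~ a \cdot_i b ) = \lambda^2~ a \cdot_i b = N(a) \cdot_i N(b).
\end{align*}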
It follows from the identities (\ref{lin-nij-1}) and (\ref{lin-nij-2}) that a trivial linear deformation of a compatible associative algebra $A$ induces a Nijenhuis operator on $A$. The converse is also true, as stated in the next proposition.
\begin{pro}
Let $N: A \rightarrow A$ be a Nijenhuis operator on a compatible associative algebra $A$. Then $N$ induces a trivial linear deformation of $A$ generated by $(\omega_1, \omega_2)$, where
\begin{align*}
\omega_i (a, b) := a \cdot_i N(b) + N(a) \cdot_i b - N ( a \cdot_i b),~\text{ for } i=1,2 \text{ and } a, b \in A.
\end{align*}
\end{pro}
\begin{proof}
First observe that $(\omega_1, \omega_2) = \delta_c (N)$. Therefore, we have
\begin{align*}
\delta_c (\omega_1, \omega_2) = 0.
\end{align*}
This is equivalent to the conditions in (\ref{lin-f}). Moreover, since $N$ is a Nijenhuis operator on the compatible associative algebra $(A, \mu_1, \mu_2)$, it follows that $(A, \omega_1 = (\mu_1)_N, \omega_2 = (\mu_2)_N)$ is a compatible associative algebra. In other words, the conditions in (\ref{lin-s}) hold. This implies that $(\omega_1, \omega_2)$ generates a linear deformation of the compatible associative algebra $A$.
Finally, this linear deformation obviously satisfies the conditions in (\ref{lin-nij-1}) and (\ref{lin-nij-2}) which implies that the linear deformation is trivial.
\end{proof}
\medskip
In Theorem \ref{thm-lin-def}, we have seen that there is a map
\begin{align}\label{lin-sim}
(\text{linear deformations of } A)/ \sim ~ \rightarrow ~ H^2_c(A,A)
\end{align}
from the set of equivalence classes of linear deformations of a compatible associative algebra $A$ to the second cohomology group $H^2_c (A, A).$ To upgrade the map (\ref{lin-sim}) to a bijection, we introduce the notion of infinitesimal deformations of a compatible associative algebra $A$.
\begin{defi}
An infinitesimal deformation of a compatible associative algebra $A$ is a linear deformation of $A$ over the base $\mathbb{K}[[t]]/ (t^2)$, the ring of dual numbers.
\end{defi}
One can also define equivalences between two infinitesimal deformations of $A$. Any $2$-cocycle $(\omega_1, \omega_2) \in Z^2_c (A,A)$ induces an infinitesimal deformation $(\mu_1^t = \mu_1 + t \omega_1, \mu_2^t = \mu_2 + t \omega_2)$ of $A$. Moreover, cohomologous $2$-cocycles induce equivalent infinitesimal deformations. More precisely, let $(\omega_1', \omega_2') \in Z^2_c (A,A)$ be another $2$-cocycle cohomologous to $(\omega_1, \omega_2)$, say
\begin{align*}
(\omega_1, \omega_2) - (\omega_1', \omega_2') = \delta_c h, ~\text{ for some } h \in C^1_c (A, A).
\end{align*}
Then the infinitesimal deformations $(\mu_1^t = \mu_1 + t \omega_1, \mu_2^t = \mu_2 + t \omega_2)$ and $(\mu_1^{'t} = \mu_1 + t \omega'_1, \mu_2^{'t} = \mu_2 + t \omega'_2)$ are equivalent and an equivalence is given by $\mathrm{id} + t h : (A, \mu_1^t, \mu_2^t) \rightarrow (A, \mu_1^{'t}, \mu_2^{'t})$. Hence, summarizing this fact with Theorem \ref{thm-lin-def}, we obtain the following.
\begin{thm}
Let $A$ be a compatible associative algebra. Then there is a one-to-one correspondence between the set of equivalence classes of infinitesimal deformations of $A$ and the second cohomology group $H^2_c (A,A).$
\end{thm}
\subsection{Formal deformations and rigidity}
In this subsection, we consider formal deformations of a compatible associative algebra $A$. We show that the vanishing of the second cohomology group $H^2_c (A,A)$ implies that $A$ is rigid.
Let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra. Consider the space $A[[t]]$ of formal power series in $t$ with coefficients from $A$. Then $A[[t]]$ is a $\mathbb{K}[[t]]$-module.
\begin{defi}
A formal deformation of the compatible associative algebra $A$ consists of two formal power series
\begin{align*}
\mu_{1,t} :=~& \mu_{1,0} + t \mu_{1,1} + t^2 \mu_{1,2} + \cdots,\\
\mu_{2,t} :=~& \mu_{2,0} + t \mu_{2,1} + t^2 \mu_{2,2} + \cdots,
\end{align*}
where the $\mu_{1, i}$'s and $\mu_{2, i}$'s are bilinear maps on $A$ with $\mu_{1,0} = \mu_1$ and $\mu_{2,0} = \mu_2$ such that $(A[[t]], \mu_{1,t}, \mu_{2,t})$ is a compatible associative algebra over $\mathbb{K}[[t]].$
\end{defi}
Thus, $(\mu_{1,t}, \mu_{2,t})$ is a formal deformation of $A$ if and only if
\begin{align*}
[\mu_{1,t}, \mu_{1,t}]_G = 0, \qquad [\mu_{2,t}, \mu_{2,t}]_G = 0 ~~ \text{ and } ~~ [\mu_{1,t}, \mu_{2,t}]_G = 0.
\end{align*}
They are equivalent to the following identities
\begin{align}\label{def-eqn}
\sum_{i+j = k} [\mu_{1,i}, \mu_{1,j}]_G = 0, \qquad \sum_{i+j = k} [\mu_{2,i}, \mu_{2,j}]_G = 0 ~~ \text{ and } ~~ \sum_{i+j = k} [\mu_{1,i}, \mu_{2,j}]_G = 0,
\end{align}
for $k=0,1,2, \ldots ~.$ The equations (\ref{def-eqn}) are called deformation equations.
Observe that the identities hold for $k=0$ as $(\mu_1, \mu_2)$ defines a compatible associative algebra structure on $A$. For $k=1$, we get
\begin{align*}
[\mu_1, \mu_{1,1}]_G = 0, \qquad [\mu_2, \mu_{2,1}]_G = 0 ~~ \text{ and } ~~ [\mu_1, \mu_{2,1}]_G + [\mu_2 , \mu_{1,1}]_G =0.
\end{align*}
Hence it follows from (\ref{diff-gers}) that $(\mu_{1,1}, \mu_{2,1})$ is a $2$-cocycle in the cohomology of $A$ with coefficients in itself. It is called the `infinitesimal' of the deformation $(\mu_{1,t}, \mu_{2,t})$.
\begin{rmk}\label{n-co-def}
Let $(\mu_{1,t}, \mu_{2,t})$ be a formal deformation of the form $\mu_{1,t} = \mu_1 + \sum_{i \geq p} t^i \mu_{1,i}$ and $\mu_{2,t} = \mu_2 + \sum_{i \geq p} t^i \mu_{2,i}$. Then one can show that the pair $(\mu_{1,p}, \mu_{2,p})$ is a $2$-cocycle in the cohomology of $A$ with coefficients in itself.
\end{rmk}
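The claim in Remark \ref{n-co-def} follows by comparing coefficients of $t^p$. Since $\mu_{1,i} = \mu_{2,i} = 0$ for $1 \leq i \leq p-1$, the coefficient of $t^p$ in the deformation equations yields
\begin{align*}
[\mu_1, \mu_{1,p}]_G = 0, \qquad [\mu_2, \mu_{2,p}]_G = 0 ~~ \text{ and } ~~ [\mu_1, \mu_{2,p}]_G + [\mu_2, \mu_{1,p}]_G = 0,
\end{align*}
which by (\ref{diff-gers}) says precisely that $\delta_c (\mu_{1,p}, \mu_{2,p}) = 0$.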
\begin{defi}
Two formal deformations $(\mu_{1,t}, \mu_{2,t})$ and $(\mu_{1,t}', \mu_{2,t}')$ of a compatible associative algebra $A$ are said to be equivalent if there is a formal power series
\begin{align*}
\phi_t := \phi_0 + t \phi_1 + t^2 \phi_2 + \cdots ~~~~ (\text{with } \phi_0 = \mathrm{id}_A),
\end{align*}
where $\phi_i$'s are linear maps on $A$ such that the $\mathbb{K}[[t]]$-linear map $\phi_t : (A[[t]], \mu_{1,t}, \mu_{2,t}) \rightarrow (A[[t]], \mu_{1,t}', \mu_{2,t}')$ is a morphism of compatible associative algebras.
\end{defi}
Using the fact that $\phi_t$ is a morphism of compatible associative algebras, we get (similarly to (\ref{d-f}))
\begin{align*}
(\mu_{1,1}, \mu_{2,1}) - (\mu_{1,1}', \mu_{2,1}') = \delta_c (\phi_1).
\end{align*}
This shows that the infinitesimals corresponding to equivalent deformations are cohomologous and, hence, correspond to the same cohomology class in $H^2_c (A, A).$
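Explicitly, comparing the coefficients of $t$ on both sides of $\phi_t \circ \mu_{i,t} = \mu_{i,t}' \circ (\phi_t \otimes \phi_t)$, for $i = 1, 2$, we get
\begin{align*}
\phi_1 (\mu_i (a, b)) + \mu_{i,1}(a,b) = \mu_{i,1}'(a,b) + \mu_i (\phi_1 (a), b) + \mu_i (a, \phi_1 (b)), ~ \text{ for } a, b \in A,
\end{align*}
which for $i=1,2$ together amounts to $(\mu_{1,1}, \mu_{2,1}) - (\mu_{1,1}', \mu_{2,1}') = \delta_c (\phi_1)$.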
\medskip
Next, we define the notion of rigidity of a compatible associative algebra.
\begin{defi}
A compatible associative algebra $A$ is said to be rigid if any formal deformation of $A$ is equivalent to the undeformed one.
\end{defi}
\begin{pro}
Let $(\mu_{1,t}, \mu_{2,t})$ be a formal deformation of a compatible associative algebra $A$. Then it is equivalent to a deformation $(\mu_{1,t}' = \mu_1 + \sum_{i \geq p} t^i \mu_{1,i}',~ \mu_{2,t}' = \mu_2 + \sum_{i \geq p} t^i \mu_{2,i}')$, where the first non-vanishing pair $(\mu_{1,p}', \mu_{2,p}')$ is a $2$-cocycle but not a $2$-coboundary.
\end{pro}
\begin{proof}
Suppose $\mu_{1,t}$ and $\mu_{2,t}$ are of the form
\begin{align*}
\mu_{1,t} = \mu_{1,0} + t^n \mu_{1,n} + t^{n+1} \mu_{1,n+1} + \cdots ~~ \text{ and } ~~ \mu_{2,t} = \mu_{2,0} + t^n \mu_{2,n} + t^{n+1} \mu_{2,n+1} + \cdots.
\end{align*}
Then we know from Remark \ref{n-co-def} that $(\mu_{1,n}, \mu_{2,n})$ is a $2$-cocycle in the cohomology of $A$ with coefficients in itself. If $(\mu_{1,n}, \mu_{2,n})$ is not a $2$-coboundary, we are done. However, if $(\mu_{1,n}, \mu_{2,n})$ is a $2$-coboundary, say $- \delta_c (\phi_n)$, then we set $\phi_t = \mathrm{id}_A + t^n \phi_n$. Define
\begin{align*}
\mu_{1,t}' = \phi_t^{-1} \circ \mu_{1,t} \circ ( \phi_t \otimes \phi_t ) ~~ \text{ and } ~~ \mu_{2,t}' = \phi_t^{-1} \circ \mu_{2,t} \circ ( \phi_t \otimes \phi_t ).
\end{align*}
Then $(\mu_{1,t}', \mu_{2,t}')$ is a deformation equivalent to $(\mu_{1,t}, \mu_{2,t})$, and further the coefficients of $t, t^2, \ldots, t^n$ in $\mu_{1,t}'$ and $\mu_{2,t}'$ are zero. By repeating this process, we get the required deformation.
\end{proof}
As a consequence, we get the following sufficient condition for rigidity.
\begin{thm}
Let $A$ be a compatible associative algebra. If $H^2_c(A, A) = 0$ then $A$ is rigid.
\end{thm}
\subsection{Finite order deformations and their extensions}
In this subsection, we consider finite order deformations of a compatible associative algebra $A$ and their extensions to deformations of the next order. We show that the corresponding obstruction class for such extension lies in the third cohomology group of $A$.
Let $(A, \mu_1, \mu_2)$ be a compatible associative algebra and let $N \in \mathbb{N}$ be a fixed natural number. Consider the space $A[[t]]/(t^{N+1})$ which is a module over the ring $\mathbb{K}[[t]]/(t^{N+1})$.
\begin{defi}
An order $N$ deformation of the compatible associative algebra $A$ consists of two polynomials of the form
\begin{align*}
\mu_{1,t}^N :=~& \mu_{1,0} + t \mu_{1,1} + t^2 \mu_{1,2} + \cdots + t^N \mu_{1,N},\\
\mu_{2,t}^N :=~& \mu_{2,0} + t \mu_{2,1} + t^2 \mu_{2,2} + \cdots + t^N \mu_{2,N},
\end{align*}
where $\mu_{i,j}$'s are bilinear maps on $A$ with $\mu_{1,0} = \mu_1$ and $\mu_{2,0} = \mu_2$ such that $(A[[t]]/ (t^{N+1}), \mu_{1,t}^N , \mu_{2,t}^N)$ is a compatible associative algebra over $\mathbb{K}[[t]]/(t^{N+1})$.
\end{defi}
\begin{ex}
An infinitesimal deformation of $A$ is an order $1$ deformation of $A$.
\end{ex}
Let $(\mu_{1,t}^N, \mu_{2,t}^N)$ be a deformation of order $N$. Then the following sets of relations hold:
\begin{align*}
\sum_{i+j = k} [\mu_{1,i}, \mu_{1,j}]_G = 0, ~~~~ \sum_{i+j = k} [\mu_{2,i}, \mu_{2,j}]_G = 0 ~~~~ \text{ and } ~~~~ \sum_{i+j = k} [\mu_{1,i}, \mu_{2,j}]_G = 0,
\end{align*}
for $k=0,1, \ldots, N$. These three sets of relations can be compactly written as
\begin{align}\label{finite-def}
\llbracket (\mu_1, \mu_2), ( \mu_{1, k}, \mu_{2,k}) \rrbracket = - \frac{1}{2} \sum_{i+j = k;~ i, j \geq 1} \llbracket (\mu_{1,i}, \mu_{2,i}), (\mu_{1,j}, \mu_{2,j}) \rrbracket, \text{ for } k=1, \ldots , N.
\end{align}
\begin{defi}
An order $N$ deformation $(\mu_{1,t}^N = \sum_{i=0}^N t^i \mu_{1,i}, \mu_{2,t}^N = \sum_{i=0}^N t^i \mu_{2,i})$ is said to be extensible if there exist two bilinear maps on $A$ (say, $\mu_{1, N+1}$ and $\mu_{2, N+1})$ such that
\begin{align*}
\big( \mu_{1,t}^{N+1} := \mu_{1,t}^N + t^{N+1} \mu_{1, N+1},~ \mu_{2,t}^{N+1} := \mu_{2,t}^N + t^{N+1} \mu_{2, N+1} \big)
\end{align*}
is a deformation of order $N+1$.
\end{defi}
Note that, for $(\mu_{1,t}^{N+1}, \mu_{2,t}^{N+1})$ to be a deformation of order $N+1$, the relations (\ref{finite-def}) must hold together with one more equation, namely,
\begin{align}\label{finite-def-n1}
\llbracket (\mu_1, \mu_2), ( \mu_{1, N+1}, \mu_{2,N+1}) \rrbracket = - \frac{1}{2} \sum_{i+j = N+1;~ i, j \geq 1} \llbracket (\mu_{1,i}, \mu_{2,i}), (\mu_{1,j}, \mu_{2,j}) \rrbracket.
\end{align}
Observe that the right hand side of (\ref{finite-def-n1}) is a $3$-cochain in the cohomology complex of the compatible associative algebra $A$. Moreover, it does not contain $\mu_{1, N+1}$ and $\mu_{2, N+1}$. Hence it depends only on the order $N$ deformation $(\mu_{1,t}^N, \mu_{2,t}^N)$. It is called the obstruction cochain to extend the deformation $(\mu_{1,t}^N, \mu_{2,t}^N)$, denoted by $Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)}$.
\begin{pro}
The obstruction cochain $Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)}$ is a $3$-cocycle, i.e., $\delta_c \big( Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)} \big) = 0$.
\end{pro}
\begin{proof}
We have
\begin{align*}
&\delta_c \big( Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)} \big) \\
&= - \frac{1}{2} \sum_{i+j = N+1;~ i, j \geq 1} \llbracket (\mu_1, \mu_2), \llbracket (\mu_{1,i}, \mu_{2,i}), (\mu_{1,j}, \mu_{2,j}) \rrbracket \rrbracket \\
&= - \frac{1}{2} \sum_{i+j = N+1;~ i, j \geq 1} \big( \llbracket \llbracket (\mu_{1}, \mu_{2}), (\mu_{1,i}, \mu_{2,i}) \rrbracket, (\mu_{1,j}, \mu_{2,j}) \rrbracket ~-~ \llbracket (\mu_{1,i}, \mu_{2,i}), \llbracket (\mu_{1}, \mu_{2}), (\mu_{1,j}, \mu_{2,j}) \rrbracket \rrbracket \big) \\
&= \frac{1}{4} \sum_{i_1 + i_2 + j = N+1;~ i_1, i_2, j \geq 1} \llbracket \llbracket (\mu_{1,i_1}, \mu_{2,i_1}), (\mu_{1,i_2}, \mu_{2,i_2}) \rrbracket, (\mu_{1,j}, \mu_{2,j}) \rrbracket \\
& \qquad - \frac{1}{4} \sum_{i+j_1 + j_2 = N+1;~ i, j_1, j_2 \geq 1} \llbracket (\mu_{1,i}, \mu_{2,i}), \llbracket (\mu_{1,j_1}, \mu_{2,j_1}), (\mu_{1,j_2}, \mu_{2,j_2}) \rrbracket \rrbracket \qquad (\text{from } (\ref{finite-def})) \\
&= \frac{1}{2} \sum_{i+j+k = N+1;~ i, j, k \geq 1} \llbracket \llbracket (\mu_{1,i}, \mu_{2,i}), (\mu_{1,j}, \mu_{2,j}) \rrbracket , (\mu_{1,k}, \mu_{2,k}) \rrbracket = 0.
\end{align*}
Hence the proof.
\end{proof}
Thus, it follows that the obstruction cochain induces a third cohomology class $[Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)}] \in H^3_c (A, A)$. It is called the obstruction class. Finally, as a consequence of (\ref{finite-def-n1}), we get the following.
\begin{thm}
An order $N$ deformation $(\mu_{1,t}^N, \mu_{2,t}^N)$ of a compatible associative algebra $A$ is extensible if and only if the corresponding obstruction class $[Ob_{(\mu_{1,t}^N, \mu_{2,t}^N)}] \in H^3_c (A, A)$ is trivial.
\end{thm}
As corollaries, we obtain the following interesting results.
\begin{cor}
If $H^3_c (A,A) =0$ then any finite order deformation of $A$ is extensible.
\end{cor}
\begin{cor}
If $H^3_c(A,A)= 0 $ then any $2$-cocycle $(\omega_1, \omega_2) \in Z^2_c (A, A)$ is the infinitesimal of some formal deformations of $A$.
\end{cor}
\section{Homology of compatible associative algebras}\label{section:homology}
In this section, we introduce a notion of compatible presimplicial vector space and associate a chain complex to any compatible presimplicial vector space. As an application, we introduce the homology of compatible associative algebra $A$ with coefficients in a compatible $A$-bimodule.
\subsection{Compatible presimplicial vector spaces}
\begin{defi} \cite{loday-book}
A presimplicial vector space $(C, \Delta)$ consists of a collection $C = \{ C_n \}_{n \geq 0}$ of vector spaces, together with a collection of linear maps (called face maps)
\begin{align*}
\Delta = \{ \partial_i : C_n \rightarrow C_{n-1} ~|~ i=0,1, \ldots, n \}_{n \geq 1}
\end{align*}
satisfying
\begin{align*}
\partial_i \partial_j = \partial_{j-1} \partial_i,~\text{ for } 0 \leq i < j \leq n.
\end{align*}
\end{defi}
Given a presimplicial vector space $(C, \Delta)$, one can define a map $\partial : C_n \rightarrow C_{n-1}$, for $n \geq 1$, by $\partial = \sum_{i=0}^n (-1)^i \partial_i$.
Then it is easy to see that $\partial$ is a differential, i.e., $\partial^2 = 0$. In other words, $(C, \partial)$ is a chain complex.
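Indeed, splitting the double sum in $\partial^2 = \sum_{i,j} (-1)^{i+j}\, \partial_i \partial_j$ into the parts $i < j$ and $i \geq j$, and applying the presimplicial identity to the first part, we get
\begin{align*}
\partial^2 = \sum_{i < j} (-1)^{i+j}~ \partial_{j-1} \partial_i ~+~ \sum_{i \geq j} (-1)^{i+j}~ \partial_i \partial_j,
\end{align*}
and relabeling $j-1$ by $i$ and $i$ by $j$ in the first summation shows that it is the negative of the second.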
\begin{defi}
A compatible presimplicial vector space is a triple $(C, \Delta, \Delta')$ in which $(C, \Delta)$ and $(C, \Delta')$ are presimplicial vector spaces satisfying additionally
\begin{align}\label{presim-id}
\partial_i \partial_j' + \partial_i' \partial_j = \partial_{j-1} \partial_i' + \partial_{j-1}' \partial_i, ~ \text{ for } 0 \leq i < j \leq n.
\end{align}
\end{defi}
Let $(C, \Delta, \Delta')$ be a compatible presimplicial vector space. Then we have the following.
\begin{pro}\label{comp-pre-pro}
The induced differentials $\partial$ and $\partial'$ satisfy ~$\partial \partial' + \partial' \partial = 0$.
\end{pro}
\begin{proof}
First observe that
\begin{align*}
\partial \partial' + \partial' \partial = \sum_i \sum_j (-1)^{i+j}~ \partial_i \partial_j' + \sum_i \sum_j (-1)^{i+j}~ \partial_i' \partial_j.
\end{align*}
We now split both of these sums into two parts: namely for $i <j$ and $i \geq j$. Thus,
\begin{align*}
\partial \partial' + \partial' \partial =~& \sum_{i <j } (-1)^{i+j} ~ ( \partial_i \partial_j' + \partial_i' \partial_j ) ~+~ \sum_{i \geq j} (-1)^{i+j} ~ ( \partial_i \partial_j' + \partial_i' \partial_j ) \\
=~& \sum_{i < j} (-1)^{i+j} ~ (\partial_{j-1} \partial_i' + \partial_{j-1}' \partial_i ) ~+~ \sum_{i \geq j} (-1)^{i+j} ~ ( \partial_i \partial_j' + \partial_i' \partial_j ) \quad (\text{from } (\ref{presim-id})).
\end{align*}
Relabeling the indices in the first summation by replacing $j-1$ with $i$ and $i$ with $j$, we see that it equals the negative of the second summation. Hence the two summations cancel each other. This completes the proof.
\end{proof}
We are now ready to define a chain complex $\{ C^c, \partial^c \}$ associated to a compatible presimplicial vector space $(C, \Delta, \Delta')$ as follows. We define the $n$-th chain group $(C^c)_n$, for $n \geq 0,$ as
\begin{align*}
(C^c)_0 :=~& \{ c \in C_0 ~|~ c = \partial_0 x - \partial_1 x = \partial_0' x - \partial_1' x, \text{ for some } x \in C_1 \},\\
(C^c)_n :=~& \underbrace{C_n \oplus \cdots \oplus C_n}_{n \text{ times}}, \text{ for } n \geq 1,
\end{align*}
and the map $\partial^c : (C^c)_n \rightarrow (C^c)_{n-1},$ for $n \geq 1$, given by
\begin{align}
(\partial^c)(x) :=~& \partial (x) = \partial' (x), \text{ for } x \in C_1, \label{boun1} \\
(\partial^c) (x_1, \ldots, x_n) :=~& ( \partial x_1 + \partial' x_2, \partial x_2 + \partial' x_3, \ldots, \partial x_{n-1} + \partial' x_n ), \label{boun2}
\end{align}
for $(x_1, \ldots, x_n) \in (C^c)_n$. The map $\partial^c$ can be understood by the following diagram:
\begin{align*}
\xymatrix{
& C_3 \ar[rd]^{\partial} & & & \\
& & C_2 \ar[rd]^{\partial} & & \\
\cdots \cdot & C_3 \ar[rd]^{\partial} \ar[ru]_{\partial'} & & C_1 \ar[r]^{\partial = \partial'} & (C^c)_0.\\
& & C_2 \ar[ru]_{\partial'} & & \\
& C_3 \ar[ru]_{\partial'}& & &
}
\end{align*}
It is easy to verify using Proposition \ref{comp-pre-pro} that $(\partial^c)^2 =0$. In other words, $\{ C^c, \partial^c \}$ is a chain complex. The corresponding homology groups are called the homology associated with the compatible presimplicial vector space $(C, \Delta, \Delta').$
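Explicitly, for $n \geq 3$ and $(x_1, \ldots, x_n) \in (C^c)_n$, the $k$-th component of $(\partial^c)^2 (x_1, \ldots, x_n)$ is
\begin{align*}
\partial (\partial x_k + \partial' x_{k+1}) + \partial' (\partial x_{k+1} + \partial' x_{k+2}) = \partial^2 x_k + (\partial \partial' + \partial' \partial)\, x_{k+1} + \partial'^2 x_{k+2} = 0,
\end{align*}
by Proposition \ref{comp-pre-pro} and $\partial^2 = \partial'^2 = 0$; the cases of small $n$ are checked similarly.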
\subsection{Homology of compatible algebras}
Let $A$ be an associative algebra and $M$ be an $A$-bimodule. Consider the collection $C= \{ C_n (A, M) \}_{n \geq 0}$ of vector spaces given by
\begin{align*}
C_n (A, M) := M \otimes A^{\otimes n}, ~ \text{ for } n \geq 0.
\end{align*}
It forms a presimplicial vector space with the collection of face maps $\Delta = \{ \partial_i : C_n (A,M) \rightarrow C_{n-1}(A,M) ~|~ i =0, 1, \ldots, n \}_{n \geq 1}$ given by
\begin{align*}
\partial_0 (m \otimes a_1 \otimes \cdots \otimes a_n) =~& m \cdot a_1 \otimes a_2 \otimes \cdots \otimes a_n,\\
\partial_i (m \otimes a_1 \otimes \cdots \otimes a_n) =~& m \otimes a_1 \otimes \cdots \otimes a_i \cdot a_{i+1} \otimes \cdots \otimes a_n, ~~~ 1 \leq i \leq n-1,\\
\partial_n (m \otimes a_1 \otimes \cdots \otimes a_n) =~& a_n \cdot m \otimes a_1 \otimes \cdots \otimes a_{n-1}.
\end{align*}
The induced differential $\partial : C_n (A,M) \rightarrow C_{n-1} (A,M),$ for $n \geq 1$, is the Hochschild differential for the homology of $A$ with coefficients in the $A$-bimodule $M$.
\medskip
Next, let $A = (A, \mu_1, \mu_2)$ be a compatible associative algebra and $M$ be a compatible $A$-bimodule. Then it follows from the above observation that $(C, \Delta)$ and $(C, \Delta')$ are presimplicial vector spaces, where $\Delta$ (resp. $\Delta'$) is a collection of face maps on $C$ induced by $(A, \mu_1)$-bimodule $(M,l_1, r_1)$ ~(resp. $(A, \mu_2)$-bimodule $(M,l_2, r_2)$ ).
\begin{pro}
With the above notations $(C, \Delta, \Delta')$ is a compatible presimplicial vector space.
\end{pro}
\begin{proof}
We only need to check the compatibility conditions (\ref{presim-id}). Let $0 \leq i < j \leq n$. In particular, if $0 < i < j-1$, then
\begin{align*}
&(\partial_i \partial_j' + \partial_i' \partial_j) ( m \otimes a_1 \otimes \cdots \otimes a_n ) \\
&= \partial_i ( m \otimes a_1 \otimes \cdots \otimes a_j \cdot_2 a_{j+1} \otimes \cdots \otimes a_n) ~+~ \partial_i' (m \otimes a_1 \otimes \cdots \otimes a_j \cdot_1 a_{j+1} \otimes \cdots \otimes a_n) \\
&= m \otimes a_1 \otimes \cdots \otimes a_i \cdot_1 a_{i+1} \otimes \cdots \otimes a_j \cdot_2 a_{j+1} \otimes \cdots \otimes a_n \\
&\quad ~+~ m \otimes a_1 \otimes \cdots \otimes a_i \cdot_2 a_{i+1} \otimes \cdots \otimes a_j \cdot_1 a_{j+1} \otimes \cdots \otimes a_n \\
&= \partial_{j-1}' (m \otimes a_1 \otimes \cdots \otimes a_i \cdot_1 a_{i+1} \otimes \cdots \otimes a_n ) ~+~ \partial_{j-1} (m \otimes a_1 \otimes \cdots \otimes a_i \cdot_2 a_{i+1} \otimes \cdots \otimes a_n ) \\
&= (\partial_{j-1}' \partial_i + \partial_{j-1} \partial_i' ) ( m \otimes a_1 \otimes \cdots \otimes a_n ).
\end{align*}
In fact, for various choices of $i$ and $j$ satisfying $0 \leq i < j \leq n$, one can similarly show that $\partial_i \partial_j' + \partial_i' \partial_j = \partial_{j-1} \partial_i' + \partial_{j-1}' \partial_i$. This completes the proof.
\end{proof}
The above proposition allows us to construct a chain complex associated to the compatible presimplicial vector space $(C, \Delta, \Delta')$. More precisely, the $n$-th chain group $C_n^c (A, M)$, for $n \geq 0$, is given by
\begin{align*}
C_0^c (A, M) :=~& \{ m \in M ~|~ m= m' \cdot_1 a' - a' \cdot_1 m' = m' \cdot_2 a' - a' \cdot_2 m', \text{ for some } m' \otimes a' \in M \otimes A \}, \\
C_n^c (A, M) :=~& \underbrace{ C_n (A, M) \oplus \cdots \oplus C_n (A, M)}_{n \text{ times}}, ~\text{ for } n \geq 1.
\end{align*}
The differential $\partial^c : C_n^c (A, M) \rightarrow C_{n-1}^c (A, M)$, for $n \geq 1$, is given by the formulas (\ref{boun1}) and (\ref{boun2}), where $\partial$ (resp. $\partial'$) is the Hochschild boundary operator for the algebra $(A, \mu_1)$ with coefficients in the bimodule $(M, l_1, r_1)$ (resp. for the algebra $(A, \mu_2)$ with coefficients in the bimodule $(M, l_2, r_2)$). The corresponding homology groups are called the homology of the compatible associative algebra $A$ with coefficients in the compatible $A$-bimodule $M$, and they are denoted by $H_\ast^c (A,M)$.
\medskip
Let $(A, \mu_1, \mu_2)$ be a unital and commutative compatible associative algebra. We denote by $\Omega^1_{A | \mu_1 + \mu_2}$ the space generated by $\mathbb{K}$-linear symbols of the form $da$, for $a \in A$, subject to the relations
\begin{align*}
d ( a \cdot_1 b ) = a \cdot_1 db + da \cdot_1 b ~~ \text{ and } ~~ d ( a \cdot_2 b ) = a \cdot_2 db + da \cdot_2 b, \text{ for } a, b \in A.
\end{align*}
The space $\Omega^1_{A | \mu_1 + \mu_2}$ is a left module over the associative algebra $(A, \mu_1 + \mu_2)$. The elements of $\Omega^1_{A | \mu_1 + \mu_2}$ are called K\"{a}hler differentials.
\begin{pro}
Let $(A, \mu_1, \mu_2)$ be a unital and commutative compatible associative algebra. Then there is a canonical isomorphism $H_1^c (A,A) \cong \Omega^1_{A | \mu_1 + \mu_2}$ of left $(A, \mu_1 + \mu_2)$-modules.
\end{pro}
\begin{proof}
The first homology group $H_1^c (A,A)$ is the quotient of $A \otimes A$ by the relation
\begin{align*}
a \cdot_1 b \otimes c - a \otimes b \cdot_1 c + c \cdot_1 a \otimes b + a \cdot_2 b \otimes c - a \otimes b \cdot_2 c + c \cdot_2 a \otimes b = 0.
\end{align*}
First observe that the map $A \otimes H_1^c (A,A) \rightarrow H_1^c (A,A), ~ a \otimes [b \otimes c] \mapsto a \cdot_1 b \otimes c + a \cdot_2 b \otimes c$, is a well-defined left $(A, \mu_1 + \mu_2)$-module structure on $H_1^c (A,A)$.
Next, we define a map $H^c_1 (A, A) \rightarrow \Omega^1_{A | \mu_1 + \mu_2}$ by $[a \otimes b] \mapsto a \cdot_1 db + a \cdot_2 db$. It is easy to verify that this map is well-defined and invertible. The inverse map is given by $(a \cdot_1 db + a \cdot_2 db ) \mapsto [a \otimes b]$. One can see that $a \otimes b$ is a $1$-cycle as both $\mu_1$ and $\mu_2$ are commutative. Hence the proof.
\end{proof}
\begin{rmk}
Our method of constructing the homology of compatible associative algebras can be easily generalized to compatible Lie algebras. Moreover, the standard skew-symmetrization leads to a morphism from the homology of a compatible associative algebra to the homology of the corresponding compatible Lie algebra. In forthcoming papers, we will return to further properties of the (co)homology of compatible associative algebras.
\end{rmk}
\section{Further discussions}\label{section:further}
In this paper, we introduce (co)homology of a compatible associative algebra $A = (A, \mu_1, \mu_2)$ with coefficients in a compatible $A$-bimodule. As applications of cohomology, we study extensions and deformations of $A$. Note that our (co)homology is neither the same as, nor a combination of, the Hochschild (co)homologies of $(A, \mu_1)$ and $(A, \mu_2)$. Here we collect some further questions regarding this new (co)homology theory.
\medskip
(I) {\bf Gerstenhaber structure on the cohomology.} In Theorem \ref{thm-gla-coho}, we show that the shifted space $H^{\ast +1}_c (A, A)$ of the cohomology of a compatible associative algebra $A$ with coefficients in itself carries a graded Lie bracket. One may now ask the following question: Is there any associative cup-product $\smile$ on the cohomology $H^{\ast}_c (A,A)$ which together with the graded Lie bracket makes $H^{\ast}_c (A,A)$ into a Gerstenhaber algebra? In \cite{das-gers}, we find an affirmative answer to this question in a more general context. Specifically, given a nonsymmetric operad with two compatible multiplications, we first construct a new cohomology generalizing our cohomology of compatible associative algebras. Then we show that this new cohomology can be seen as the induced cohomology of another multiplicative nonsymmetric operad. Hence by a result of Gerstenhaber and Voronov \cite{gers-voro}, the cohomology carries a Gerstenhaber structure. In particular, we explicitly write down the cup-product on the cohomology $H^{\ast}_c (A,A)$.
\medskip
(II) {\bf Compatible Rota-Baxter operators and compatible dendriform algebras.} Dendriform algebras were introduced in \cite{loday-di} as the Koszul dual of associative dialgebras. They are certain splittings of associative algebras and arise naturally from shuffle algebras, planar binary trees and Rota-Baxter operators. Note that Rota-Baxter operators are a noncommutative analogue of Poisson structures \cite{uchino}. Motivated by the study of compatible Poisson structures in geometry, in a forthcoming paper \cite{das-guo}, we study compatible dendriform algebras and compatible Rota-Baxter operators from a cohomological point of view and find relations with the results of the present paper.
\medskip
(III) {\bf Cyclic (co)homology of compatible associative algebras.} The notion of cyclic (co)homology of an associative algebra generalizes the de Rham (co)homology of manifolds. Note that the cyclic (co)chain complexes (also called Connes complexes) are obtained from Hochschild (co)chain complexes modulo the actions of cyclic groups. In a future project, we aim to explore the cyclic (co)homology theory of compatible associative algebras and find its differential geometric significance.
\medskip
\noindent {\bf Acknowledgements.} The research of A. Das is supported by the postdoctoral fellowship of Indian Institute of Technology (IIT) Kanpur.
\section{Introduction}
To understand the physical properties of a material it is crucial to have a full description of the interactions between the electrons and atoms. The vibrations of the atoms, encapsulated as phonons, have a substantial impact on the properties of a material. The zero-point (ZP) phonon motion often has a significant effect, whereas the temperature dependence of several key properties, including electronic band gaps and equilibrium volumes, is driven mainly by the atomic vibrations.
The importance of considering the vibrational state of a solid was evident from the early days of quantum theory, motivating the Einstein\cite{ANDP:ANDP19063270110} and Debye\cite{ANDP:ANDP19123441404} models for the specific heat.
We will focus on the description of the phonon-driven temperature dependence of quantities such as band gaps and thermal expansion.\cite{0295-5075-10-6-011,RevModPhys.77.1173,PhysRevB.71.205214,nanotube_t_dependence,PhysRevLett.105.265501,PhysRevB.87.144302}
First-principles calculations have revolutionized the analysis of phonons in solids\cite{PhysRevB.43.7231,RevModPhys.73.515} and are invaluable for quantitative calculations of the ZP renormalization and the temperature dependent properties of solids. In particular, the formalism proposed in Ref.\ \onlinecite{PhysRevB.87.144302} delivers accurate quantitative results that serve as a platform for a general description of phonon renormalization. We can gain further insights through a phenomenological approach by considering the general properties of the theory without reference to an underlying microscopic theory or a specific material.
The comparison between analytic and first-principles methods leads to a wider picture of the effects of the vibrational state on physical observables.
In Sec.\ \ref{sec:theory} we present a general harmonic theory of the temperature dependence of phonon-renormalized properties of solids.
The theory exposes important approximations, and in Sec.\ \ref{sec:perturb} we assess the accuracy of several perturbative theories.\cite{0022-3719-9-12-013,doi:10.1080/00018738000101426} In Sec.\ \ref{sec:extrap} we propose two new models for use within an extrapolation scheme for obtaining ZP corrections to quantities such as band gaps or lattice parameters from experimental data\cite{doi:10.1080/01418639408240227,Cardona20053} that are an order of magnitude more accurate than previous models.
In Sec.\ \ref{sec:limits} we describe the asymptotic behaviour of band gaps at low temperatures, recovering the standard $T^4$ power law for three-dimensional systems\cite{PhysRevLett.92.196403} that arises from the linear dispersion of the acoustic branches.
Two-dimensional systems obey a $T^2$ power law, and one-dimensional systems follow a $T^{3/2}$ power law, both dominated by the quadratic acoustic branches. We summarize our findings in Sec.\ \ref{sec:conclusions}.
\section{Mathematical formulation} \label{sec:theory}
We first construct a general framework for calculating the expectation value of an observable $O$ that depends on the vibrational state of the solid. This will be done both at perturbative and non-perturbative levels of approximation, summarized in Table\ \ref{tab:approximations}, that offer compromises between exactness, ease of calculation, and physical insight. First-principles calculations will be used in Sec.\ \ref{sec:perturb} to compare the accuracy of the different approaches, and new models for obtaining ZP corrections from experimental data will be developed in Sec.\ \ref{sec:extrap}.
Throughout this paper the phrase ``ZP correction'' refers to the correction of a general physical observable and is not restricted to the specific correction of the vibrational energy.
\newlength{\stdgap}
\setlength{\stdgap}{5.5pt}
\begin{table*}[htbp]
\caption{Schemes for calculating phonon expectation values.}
\label{tab:approximations}
\begin{tabular}{lll}
\hline
\hline
\textbf{Method} & \hspace{0.1cm} \textbf{Advantages} &\hspace{0.1cm} \textbf{Disadvantages} \\
\hline
Non-perturbative &\hspace{0.1cm} Numerically exact &\hspace{0.1cm} New calculation required at each $T$ \\
[\stdgap]
Phonon interaction expansion &\hspace{0.1cm} Single calculation required for all $T$ &\hspace{0.1cm} Perturbative in phonon-phonon interactions \\
&\hspace{0.1cm} Access to the underlying physics &\\%\hspace{0.1cm} Requires a choice of basis \\
[\stdgap]
\hline
\hline
\end{tabular}
\end{table*}
We model a solid of $N$ atoms by a supercell subject to periodic boundary conditions. The vibrational motion of the atoms can be described within the harmonic approximation in terms of $3N$ phonon coordinates $\{q_{n\mathbf{k}}\}$, where $n$ is the branch index and $\mathbf{k}$ is a reciprocal space vector within the first Brillouin zone (BZ). In terms of phonon coordinates, the vibrational Hamiltonian $\hat{\mathcal{H}}$ reads
\begin{align}
\hat{\mathcal{H}}=\sum_{n,\mathbf{k}}\left(-\frac{1}{2}\frac{\partial^2}{\partial q_{n\mathbf{k}}^2} + \frac{1}{2}\omega^2_{n\mathbf{k}}q_{n\mathbf{k}}^2\right)\,,
\end{align}
where $\omega_{n\mathbf{k}}$ are the phonon frequencies. The energy associated with a phonon mode $(n,\mathbf{k})$ in state $m$ is $E_{n\mathbf{k};m}=\omega_{n\mathbf{k}}\left(m+1/2\right)$, and the corresponding state is $|\phi_m(q_{n\mathbf{k}})\rangle$.
We label the vibrational state of the solid by the $3N$-dimensional vector $\mathbf{M}$, whose element $M_{n\mathbf{k}}$ labels the state
of phonon $(n,\mathbf{k})$.
All equations are given in Hartree atomic units, $\hbar=|e|=m_{\mathrm{e}}=4\pi \epsilon_0=1$.
\subsection{Non-perturbative} \label{subsec:nonperturbative}
Let $\mathbf{Q}$ be a collective phonon coordinate with elements $q_{n\mathbf{k}}$. The expectation value at inverse temperature $\beta=1/k_{\mathrm{B}}T$ with respect to the vibrational state $|\Phi_{\mathbf{M}}\rangle=\prod_{n,\mathbf{k}}|\phi_{M_{n\mathbf{k}}}(q_{n\mathbf{k}})\rangle$ is
\begin{align}
\langle\hat{O}\rangle = \frac{1}{\mathcal{Z}}\sum_{\mathbf{M}}\langle\Phi_{\mathbf{M}}(\mathbf{Q})|\hat{O}(\mathbf{Q})|\Phi_{\mathbf{M}}(\mathbf{Q})\rangle \mathrm{e}^{-\beta E_{\mathbf{M}}}\,, \label{eq:expval}
\end{align}
where $\mathcal{Z}=\sum_{\mathbf{M}}\mathrm{e}^{-\beta E_{\mathbf{M}}}$ is the partition function. We regularize the operator $\hat{O}(\mathbf{Q})$ by subtracting the static lattice value $\hat{O}(\mathbf{0})$ to focus on the correction due to the vibrational state.
This expectation value can be evaluated directly by Monte Carlo sampling weighted by the phonon density.\cite{PhysRevB.73.245202,giustino_nat_comm} Although this approach leads to numerically exact results, the random sampling obscures the underlying physical processes. The phonon density is temperature dependent so a new calculation is required at each temperature, rendering this the most computationally expensive approach.
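As a minimal illustration of this sampling approach, consider a single harmonic mode, for which the thermal phonon density is a Gaussian of variance $\sigma^2 = \coth(\beta\omega/2)/2\omega$ (Hartree atomic units, $k_{\mathrm{B}} = 1$). The sketch below uses illustrative function names; in a first-principles setting the observable would instead be evaluated by an electronic-structure calculation at each sampled atomic configuration.

```python
import math
import random

def thermal_width_sq(omega, T):
    """Variance of the harmonic-mode thermal density at temperature T
    (atomic units, k_B = 1): sigma^2 = coth(omega / 2T) / (2 omega)."""
    if T == 0.0:
        return 1.0 / (2.0 * omega)          # zero-point width
    return 1.0 / (2.0 * omega * math.tanh(omega / (2.0 * T)))

def mc_expectation(observable, omega, T, n_samples=200_000, seed=1):
    """Monte Carlo estimate of <O(q)> for a single mode, sampling q
    from the Gaussian thermal phonon density."""
    rng = random.Random(seed)
    sigma = math.sqrt(thermal_width_sq(omega, T))
    total = 0.0
    for _ in range(n_samples):
        total += observable(rng.gauss(0.0, sigma))
    return total / n_samples
```

Note that a fresh sampling run is needed at each temperature, which is precisely the disadvantage listed in Table\ \ref{tab:approximations}.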
\subsection{Phonon interaction expansion}
To gain physical insight into the dominant processes and reduce the computational expense we construct an expansion in the phonon-phonon interactions.
We first recast $\hat{O}(\mathbf{Q})$ as\cite{PhysRevB.87.144302}
\begin{widetext}
\begin{align}
\hat{O}(\mathbf{Q})=\sum_{n,\mathbf{k}}\underbrace{a_{n\mathbf{k}}f_{n\mathbf{k}}(q_{n\mathbf{k}})}_{\hat{O}_{n\mathbf{k}}(q_{n\mathbf{k}})}+\sum_{(n,\mathbf{k})\neq(n',\mathbf{k}')}\underbrace{a_{\{n\mathbf{k}|n'\mathbf{k}'\}}f_{\{n\mathbf{k}|n'\mathbf{k}'\}}(q_{n\mathbf{k}},q_{n'\mathbf{k}'})}_{\hat{O}_{n\mathbf{k};n'\mathbf{k}'}(q_{n\mathbf{k}},q_{n'\mathbf{k}'})}+\cdots\,, \label{eq:expansion_op}
\end{align}
where $f$ is a basis set for the phonon spectrum and the set $\{a_{n\mathbf{k}},a_{\{n\mathbf{k}|n'\mathbf{k}'\}},\ldots\}$ are the coupling constants of the phonons with the observable $O$ that can be evaluated, for example, within a first-principles method.
This allows us to rewrite the phonon expectation value as
\begin{align}
\langle\hat{O}\rangle=
&\sum_{n,\mathbf{k}}\!\frac{1}{\mathcal{Z}_{n\mathbf{k}}}\!\sum_{m=0}^{\infty}\!\langle\phi_m(q_{n\mathbf{k}})|\hat{O}_{n\mathbf{k}}(q_{n\mathbf{k}})|\phi_m(q_{n\mathbf{k}})\rangle \mathrm{e}^{-\beta E_{n\mathbf{k};m}} \nonumber \\
+&\!\!\!\!\!\!\!\!\!\!\!\sum_{(n,\mathbf{k})\neq(n',\mathbf{k}')}\!\!\!\frac{1}{\mathcal{Z}_{n\mathbf{k}}\mathcal{Z}_{n'\mathbf{k}'}}\!\!\!\sum_{m,m'=0}^{\infty}\!\!\!\langle\phi_m(q_{n\mathbf{k}})\phi_{m'}(q_{n'\mathbf{k}'})|\hat{O}_{n\mathbf{k};n'\mathbf{k}'}(q_{n\mathbf{k}},q_{n'\mathbf{k}'})|\phi_{m'}(q_{n'\mathbf{k}'})\phi_m(q_{n\mathbf{k}})\rangle \mathrm{e}^{-\beta E_{n\mathbf{k};m}}\mathrm{e}^{-\beta E_{n'\mathbf{k}';m'}}\!\!+\cdots\,.
\end{align}
\end{widetext}
This perturbative method leads to numerically exact results only if sufficient phonon-phonon terms are included. However, as each phonon is treated explicitly it directly exposes the underlying physics. The most expensive stage is the first-principles computation of the coupling constants, thereafter the full temperature dependence can be studied at a small additional computational cost for systems with a band gap.
For computational purposes a choice of basis $f$ is required in Eq.\ (\ref{eq:expansion_op}). We choose a polynomial basis $\{q^s\}$ because it leads to analytic results and connects with standard theories of thermal expansion and band gap renormalization (see Sec.\ \ref{sec:perturb} below).
Within the harmonic approximation only even functions lead to non-zero expectation values, hence the polynomial basis may be rewritten as $\{|q|^s\}$, and the relevant matrix elements are
\begin{widetext}
\begin{align}
\mathcal{M}_{s,m}=\bigl\langle\phi_m(q)\bigl||q|^s\bigr|\phi_m(q)\bigr\rangle=\frac{s!}{(4\omega)^{s/2}}\frac{2^m}{m!}\sum_{p=\big\{\substack{\max(0,m-s), \mbox{ } s\mbox{ \scriptsize{even}} \\\!\!0,\hphantom{\max(m-s)\,\,} \mbox{ } s\mbox{ \scriptsize{odd}}}}^m\frac{({}^{m}C_{p})^2\,p!}{2^p\Gamma(s/2-m+p+1)}\,, \label{eq:matrix_element}
\end{align}
where ${}^mC_p$ is a binomial coefficient and $\Gamma$ is the gamma function. We then obtain
\begin{align}
\langle\hat{O}\rangle=&\sum_{n,\mathbf{k}}(1-\mathrm{e}^{-\beta\omega_{n\mathbf{k}}})\sum_{s=1}^{\infty}\sum_{m=0}^{\infty}a_{s;n\mathbf{k}}\mathcal{M}_{s,m}\mathrm{e}^{-m\beta\omega_{n\mathbf{k}}} \nonumber \\
+&\!\!\!\!\sum_{(n,\mathbf{k})\neq(n',\mathbf{k}')}\!\!\!(1-\mathrm{e}^{-\beta\omega_{n\mathbf{k}}})(1-\mathrm{e}^{-\beta\omega_{n'\mathbf{k}'}})\!\!\sum_{s,s'=1}^{\infty}\sum_{m,m'=0}^{\infty}a_{\{s;n\mathbf{k}|s';n'\mathbf{k}'\}}\mathcal{M}_{s,m}\mathcal{M}_{s',m'}\mathrm{e}^{-m\beta\omega_{n\mathbf{k}}}\mathrm{e}^{-m'\beta\omega_{n'\mathbf{k}'}}+\cdots\,, \label{eq:full}
\end{align}
\end{widetext}
for the single and double phonon terms, and similar expressions for higher order terms. The coupling constants $a_{s;n\mathbf{k}}$ have been rewritten in terms of the polynomial basis power $s$.
Equation\ (\ref{eq:full}) describes the temperature dependence of the expectation value of observable $O$.
We use this general framework to address three questions: (i) the use of perturbation theory for the calculation of phonon-renormalized expectation values, (ii) the calculation of ZP corrections from experimental data, and (iii) the low temperature asymptote of the expectation value of a class of such observables, including the electronic band gaps. For the first two problems we validate our findings with first-principles calculations. The third problem is an example of a situation that is not directly accessible in practice to first-principles calculations due to the fine $\mathbf{k}$-point sampling required near the BZ center.
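As a quick numerical sanity check on Eq.\ (\ref{eq:matrix_element}) (an illustrative script, not part of the original calculations; $\hbar=1$), the matrix elements can be compared with the textbook harmonic-oscillator moments $\langle q^2\rangle_m=(2m+1)/2\omega$ and $\langle q^4\rangle_m=3(2m^2+2m+1)/4\omega^2$:

```python
import math
from math import comb, factorial

def matrix_element(s, m, omega):
    """M_{s,m} = <phi_m| |q|^s |phi_m>, Eq. (matrix_element), with hbar = 1.
    1/Gamma(x) is taken as zero when x is a non-positive integer."""
    def rgamma(x):
        if x <= 0 and x == int(x):
            return 0.0
        return 1.0 / math.gamma(x)
    p0 = max(0, m - s) if s % 2 == 0 else 0
    total = sum(comb(m, p)**2 * factorial(p) / 2**p * rgamma(s / 2 - m + p + 1)
                for p in range(p0, m + 1))
    return factorial(s) / (4 * omega)**(s / 2) * 2**m / factorial(m) * total

omega = 1.7
for m in range(6):
    q2 = (2 * m + 1) / (2 * omega)                    # <q^2>_m
    q4 = 3 * (2 * m**2 + 2 * m + 1) / (4 * omega**2)  # <q^4>_m
    assert abs(matrix_element(2, m, omega) - q2) < 1e-12 * q2
    assert abs(matrix_element(4, m, omega) - q4) < 1e-12 * q4
```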
\section{Beyond lowest order perturbation theory} \label{sec:perturb}
The calculation of the phonon renormalization of many physical observables is facilitated by a perturbation expansion in the phonon-phonon interaction. We now assess the validity of these expansions by comparing the analytic results from Sec.\ \ref{sec:theory} with first-principles calculations.\cite{PhysRevB.87.144302}
To calculate the renormalization of a general observable, we start from Eq.\ (\ref{eq:full}) in the previous section. We note that if $a_s=0$ for all $s\neq 2$ in the independent phonon term, and all phonon-phonon coupling terms vanish, we obtain
\begin{align}
\langle\hat{O}\rangle=\sum_{n,\mathbf{k}}\frac{a_{2;n\mathbf{k}}}{2\omega_{n\mathbf{k}}}\left[1+2n_{\mathrm{B}}(\omega_{n\mathbf{k}})\right]\,, \label{eq:bose}
\end{align}
where $n_{\mathrm{B}}(\omega)=(\mathrm{e}^{\beta\omega}-1)^{-1}$ is a Bose-Einstein (BE) factor. (The derivation of this result is described in Appendix~\ref{app:bose}.) This expression recovers the standard formulation\cite{doi:10.1080/01418639408240227} of the temperature dependence of band gaps within Allen-Heine-Cardona (AHC) theory\cite{0022-3719-9-12-013,PhysRevB.23.1495} and of the temperature dependence of lattice parameters within the Gr\"{u}neisen formalism.\cite{doi:10.1080/00018738000101426}
The high temperature limit of Eq.\ (\ref{eq:bose}) is
\begin{align}
\langle\hat{O}\rangle\underset{\scriptstyle{\beta\ll1}}{\to}\left(\sum_{n,\mathbf{k}}\frac{a_{2;n\mathbf{k}}}{\omega_{n\mathbf{k}}^2}\right)\beta^{-1}\,. \label{eq:highT}
\end{align}
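To make Eq.\ (\ref{eq:bose}) concrete, the following sketch (toy coupling constants and frequencies, $k_{\mathrm{B}}=\hbar=1$) evaluates the lowest-order expression for a few modes and verifies the linear high-temperature asymptote that follows from $1+2n_{\mathrm{B}}(\omega)\to 2k_{\mathrm{B}}T/\omega$:

```python
import math

def n_B(omega, T):
    """Bose-Einstein factor, k_B = 1."""
    return 1.0 / math.expm1(omega / T)

# toy set of (a_2, omega) couplings and frequencies -- hypothetical values
modes = [(0.02, 0.8), (-0.05, 1.3), (0.01, 2.1)]

def O_of_T(T):
    """Lowest-order renormalization, Eq. (bose)."""
    return sum(a2 / (2 * om) * (1 + 2 * n_B(om, T)) for a2, om in modes)

slope = sum(a2 / om**2 for a2, om in modes)  # high-T: <O> -> slope * k_B T
assert abs(O_of_T(500.0) / 500.0 - slope) < 1e-3 * abs(slope)
```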
Many physical systems are expected to be well-described by restricting the polynomial expansion to even powers $\{q^{2t}\}$ for $s=2t$, so we first focus only on these terms. Starting from Eq.\ (\ref{eq:full}), we can systematically improve the perturbation theory beyond Eq.\ (\ref{eq:bose}). The next order term is a sum of the independent phonon term corresponding to $s=4$,
\begin{align}
\langle\hat{O}\rangle\!=\!\!\sum_{n,\mathbf{k}}\frac{a_{4;n\mathbf{k}}}{4\omega_{n\mathbf{k}}^2}\!\!\left[3\!+\!12\,\mathrm{e}^{\beta\omega_{n\mathbf{k}}}\,n^2_{\mathrm{B}}(\omega_{n\mathbf{k}})\right]\,, \label{eq:bose2}
\end{align}
and the phonon-phonon term with $s=2$ and $s'=2$,
\begin{align}
\langle\hat{O}\rangle\!=\!\!\!\!\!\!\!\sum_{(n,\mathbf{k})\neq(n',\mathbf{k}')}\!\!\!\!\!\frac{a_{\{2;n\mathbf{k}|2;n'\mathbf{k}'\}}}{4\omega_{n\mathbf{k}}\omega_{n'\mathbf{k}'}}\!\left[1\!+\!2n_{\mathrm{B}}(\omega_{n\mathbf{k}})\right]\!\left[1\!+\!2n_{\mathrm{B}}(\omega_{n'\mathbf{k}'}\!)\right]\,. \label{eq:bose3}
\end{align}
These two perturbative terms combine to give a high-temperature limit
\begin{align}
\langle\hat{O}\rangle\underset{\scriptstyle{\beta\ll1}}{\to}\left(\sum_{n,\mathbf{k}}\frac{3a_{4;n\mathbf{k}}}{\omega_{n\mathbf{k}}^4}+\!\!\!\!\!\!\sum_{(n,\mathbf{k})\neq(n',\mathbf{k}')}\!\!\!\!\!\frac{a_{\{2;n\mathbf{k}|2;n'\mathbf{k}'\}}}{\omega_{n\mathbf{k}}^2\omega_{n'\mathbf{k}'}^2}\right)\beta^{-2}\,,
\end{align}
that dominates asymptotically over the linear term proportional to $\beta^{-1}$. More generally, the non-zero $a_s$ with the largest $s$ will dominate the high-temperature limit, giving a power law dependence of $\beta^{-s/2}$.
In general, the contributions of higher order terms beyond $q^2$ are unimportant because their coupling constants are several orders of magnitude smaller than $a_{2;n\mathbf{k}}$, justifying the widespread use of AHC theory for band gaps and the Gr\"{u}neisen formalism for thermal expansion. This means that the cross-over temperature to non-linear behaviour is high, and is irrelevant for the solid phase of the system. As an example, the cross-over temperature at which the quartic term becomes important in diamond is larger than $10^4$ K, which is beyond the melting temperature. In an experimental setting, nonlinear behaviour of the temperature dependence in the high-temperature limit could be taken as the signature of effects beyond the lowest order theory.
We have implemented three methods for calculating the temperature
dependence of band gaps within first-principles
calculations:\cite{PhysRevB.87.144302} AHC theory, Eq.\
(\ref{eq:bose}); the independent phonon term in Eq.\ (\ref{eq:full});
and the non-perturbative approach of Sec.\
\ref{subsec:nonperturbative}.
We have studied diamond and helium using plane-wave density functional
theory\cite{PhysRev.136.B864,PhysRev.140.A1133} with ultrasoft
pseudopotentials\cite{PhysRevB.41.7892} as implemented in the
\textsc{castep} code\cite{CASTEP}.
All calculations used supercells containing $54$ atoms; all
energy differences were converged to within $10^{-4}$ eV per unit
cell, and all stresses to within $10^{-2}$ GPa. Table\
\ref{tab:helium} shows the ZP correction to the thermal band gap of
diamond within the different approximations. AHC theory
underestimates the accurate non-perturbative result by only $0.02$ eV,
supporting its widespread use. The full independent phonon term leads
to excellent agreement with the non-perturbative approach, demonstrating that it is important to take full account of the phonon dispersion, but that higher order phonon-phonon coupling terms are unimportant in
diamond.
\begin{table}[t]
\caption{ZP correction to the electronic band gap of diamond and helium, in units of eV.}
\label{tab:helium}
\begin{tabular}{lcc}
\hline
\hline
& \hspace{0.1cm} Diamond \hspace{0.1cm} & Helium \hspace{0.1cm} \\
\hline
AHC theory & $-0.44$ & $+0.26$ \\
Independent phonon term & $-0.46$ & $+0.12$ \\
Non-perturbative & $-0.46$ & $+0.40$ \\
\hline
\hline
\end{tabular}
\end{table}
An important example of behaviour beyond lowest-order perturbation theory is the ZP correction to the band gap due to electron-phonon coupling in solid helium under the terapascal pressures found in white dwarf stars.\cite{helium} It is critical to have a detailed knowledge of the band gap as this has a significant impact on our understanding of white dwarf cooling, and consequently in estimates of the age of the Universe.\cite{helium}
Table\ \ref{tab:helium} also shows the ZP correction to the band gap arising from electron-phonon coupling in solid hexagonal close-packed (hcp) helium at a pressure of $10$ TPa.
For helium, the comparison between the non-perturbative calculation and the various levels of perturbation theory makes explicit the limitations of the perturbative calculations for this system. AHC theory leads to a ZP correction of $+0.26$ eV, which is significantly modified by including the full independent phonon term, reducing the correction to $+0.12$ eV. This large difference is caused by a linear (rather than quadratic) dependence of the electronic band gap as a function of phonon amplitude for helium, which can be described very accurately by the odd power terms in Eq.\ (\ref{eq:matrix_element}), but not within AHC theory. Unlike the ZP correction of diamond, in the case of helium even the independent phonon term fails to recover the full non-perturbative correction of $+0.40$ eV due to phonon-phonon interactions. We note that although the AHC result seems to be in better agreement with the non-perturbative result than the independent phonon term result, this is an artifact of the poor convergence of AHC theory for this system.
In this section we have contextualized the widely used AHC theory, and presented, as far as we are aware, the first example of its failure. However, we expect that many systems are well-described by AHC theory, and in the rest of this paper we will restrict our attention to two further questions that can be addressed within this theory.
\section{Determining the zero-point correction} \label{sec:extrap}
The experimental characterization of the vibrational state of a solid is important for understanding many physical phenomena. The ZP correction to an observable is a direct measure of the coupling between the observable and the phonons. However, this quantity cannot be measured directly in experiments because it relates to an unphysical state without nuclear vibrations. With the first-principles method proposed in Ref.\ \onlinecite{PhysRevB.87.144302} we first expose the shortcomings of the different models used to extract the ZP correction from experimental data, and second propose and assess the accuracy of two new schemes.
With knowledge of the lowest order expression for the temperature dependence of phonon renormalised quantities,
\begin{align}
\langle\hat{O}\rangle=\frac{1}{2}\sum_{n,\mathbf{k}}A_{n\mathbf{k}}\left[1+2n_{\mathrm{B}}(\omega_{n\mathbf{k}})\right]\,, \label{eq:bose_alt}
\end{align}
where, from Eq.\ (\ref{eq:bose}), $A_{n\mathbf{k}}=a_{2;n\mathbf{k}}/\omega_{n\mathbf{k}}$,
the ZP correction to an observable $O$ is given by
\begin{align}
\langle\hat{O}\rangle_{\mathrm{ZP}}=\frac{1}{2}\sum_{n,\mathbf{k}}A_{n\mathbf{k}}\,,
\end{align}
and can be extracted as the zero temperature linear extrapolate from the high temperature limit $\beta\ll1/\omega$.\cite{doi:10.1080/01418639408240227}
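The logic of the extrapolation can be illustrated with a toy model (hypothetical couplings, $k_{\mathrm{B}}=1$): a straight line fitted deep in the high-temperature regime extrapolates to (nearly) zero phonon renormalization at $T=0$, so the difference between the true $T\to0$ value and the extrapolate recovers $\frac{1}{2}\sum_{n\mathbf{k}}A_{n\mathbf{k}}$:

```python
import math

modes = [(0.3, 1.0), (-0.8, 2.5)]   # hypothetical (A_nk, omega_nk) pairs

def O(T):
    """Eq. (bose_alt): (1/2) sum_nk A_nk [1 + 2 n_B(omega_nk)], k_B = 1."""
    return sum(0.5 * A * (1 + 2 / math.expm1(w / T)) for A, w in modes)

# straight line through two points deep in the high-T (linear) regime
T1, T2 = 2000.0, 4000.0
slope = (O(T2) - O(T1)) / (T2 - T1)
intercept = O(T1) - slope * T1          # extrapolate of the high-T line to T = 0

zp = 0.5 * sum(A for A, w in modes)     # exact ZP correction, (1/2) sum A_nk
assert abs((O(0.02) - intercept) - zp) < 1e-3 * abs(zp)
```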
In practical applications of the extrapolation scheme to experimental data, an approximation must be made because experiments rarely reach temperatures high enough to enter the asymptotic linear limit. One can construct an analytic model $F(T,\mathbf{A})$ for the $T$ dependence of the observable over the entire temperature range, fitted using variational parameters $\mathbf{A}$. This analytical model is then used in the extrapolation. With the newly developed first-principles method presented in Ref.\ \onlinecite{PhysRevB.87.144302} that describes the temperature dependence using Eq.\ (\ref{eq:bose}), we can for the first time assess different models $F(T,\mathbf{A})$. We propose two new schemes and test them against previous models and obtain an order-of-magnitude improvement in the accuracy of the extrapolated ZP correction. The models considered for $F(T,\mathbf{A})$ are enumerated in Table\ \ref{tab:models}. We note that the old models were not developed specifically for the ZP extrapolation, but instead to reproduce accurately the experimental data, which is usually available only at low temperatures. This might explain some of the failures in their application to extract accurate ZP corrections.
\begin{table*}[htbp]
\caption{Analytic models for the temperature dependence of phonon-renormalized quantities.}
\label{tab:models}
\begin{tabular}{ll}
\hline
\hline
\textbf{Model} & $F(T,\mathbf{A})$ \hspace{0.1cm} \\
\hline
Varshni & $A_0+\frac{A_1T^2}{A_2+T}$ \\
[\stdgap]
P\"{a}ssler & $A_0+\frac{A_1A_2}{2}\left\{\left[1+\left(\frac{2T}{A_2}\right)^{A_3}\right]^{1/A_3}+1\right\}$ \\
[\stdgap]
BE & $A_0+\frac{A_1}{\mathrm{e}^{A_2/k_{\mathrm{B}}T}-1}$ \\
[\stdgap]
Double BE & $A_0+\frac{A_1}{\mathrm{e}^{A_2/k_{\mathrm{B}}T}-1}+\frac{A_3}{\mathrm{e}^{A_4/k_{\mathrm{B}}T}-1}$ \\
[\stdgap]
Phonon dispersion & $A_0+\frac{A_1}{\mathrm{e}^{A_2/k_{\mathrm{B}}T}-1}+\frac{\mathrm{e}^{A_2/k_{\mathrm{B}}T}(1+\mathrm{e}^{A_2/k_{\mathrm{B}}T})A_3}{2(k_{\mathrm{B}}T)^2(\mathrm{e}^{A_2/k_{\mathrm{B}}T}-1)^3}+\cdots$ \\
[\stdgap]
Two step & $A_0+\frac{A_1}{\mathrm{e}^{A_2/k_{\mathrm{B}}T}-1}$ \\
& $\omega(T_{\mathrm{max}})\!=\!p_0\!+\!\frac{p_1p_0^2}{k_{\mathrm{B}}T_{\mathrm{max}}\ln(p_0/k_{\mathrm{B}}T_{\mathrm{max}})}\!+\!\frac{p_2p_0^3}{(k_{\mathrm{B}}T_{\mathrm{max}})^2\ln(p_0/k_{\mathrm{B}}T_{\mathrm{max}})^2}$ \\
& $A(T_{\mathrm{max}})=p_3\left(p_0+\!\frac{p_1p_0^2}{k_{\mathrm{B}}T_{\mathrm{max}}}\!+\!\frac{p_2p_0^3}{(k_{\mathrm{B}}T_{\mathrm{max}})^2}\right)$ \\
\hline
\hline
\end{tabular}
\end{table*}
A widely used model proposed by Varshni\cite{Varshni1967149} reproduces the high-temperature linear asymptote, but incorrectly assumes a $T^2$ dependence as $T\to0$.
P\"{a}ssler\cite{Passler} proposed a more complicated expression, which describes the low temperature behaviour by a fitting parameter that in principle could recover the low temperature $T^4$ limit (see Sec.\ \ref{sec:limits} below and Ref.\ \onlinecite{PhysRevLett.92.196403}). However, the low-temperature asymptote has little impact on the high-temperature limit or the ZP correction because the cross-over between a power law and the exponential dependence of Eq.\ (\ref{eq:bose}) occurs at very low temperatures (below $4$\,K for silicon\cite{PhysRevLett.92.196403}) and, moreover, the acoustic phonons that dominate in this regime have a low density of states.
This motivates neglecting the low temperature $T^4$ power law and instead focusing on the higher energy phonon branches that can be described by the Einstein approximation. This leads to a functional form consisting of a single BE oscillator,\cite{PhysRevB.30.1979} which amounts to assuming a dispersionless phonon spectrum.
As Eq.\ (\ref{eq:bose}) consists of a sum over many BE oscillators, a straightforward extension of the single BE oscillator model is to include a second oscillator.\cite{2be_oscillator_fit} For systems with non-monotonic temperature-dependent gaps, characterized by more than one Einstein frequency, the use of more than a single BE oscillator is essential.\cite{PhysRevB.86.195208}
\subsection{New models for the linear extrapolation scheme}
The BE oscillator model may fail to recover the ZP correction unless data exists up to high temperatures $k_{\mathrm{B}}T\gtrsim\omega$. This motivates us to propose two new methods, based on a single BE oscillator fit,\cite{PhysRevB.30.1979}
\begin{align}
F(T,\mathbf{A})=\frac{A}{\mathrm{e}^{\omega/k_{\mathrm{B}}T}-1}\,, \label{eq:model}
\end{align}
where $\mathbf{A}=(A,\omega)$. The two new models recover the correct ZP correction even with data restricted to low temperatures.
\subsubsection{Phonon dispersion method}
We start from Eq.\ (\ref{eq:bose_alt}), rewrite the phonon dispersion as $\omega_{n\mathbf{k}}=\overline{\omega}+\delta_{n\mathbf{k}}$, and retain the relevant temperature dependent terms, so that
\begin{align}
\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}}{\mathrm{e}^{\omega_{n\mathbf{k}}/k_{\mathrm{B}}T}-1}=\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}}{\mathrm{e}^{(\overline{\omega}+\delta_{n\mathbf{k}})/k_{\mathrm{B}}T}-1}\,.
\end{align}
The Einstein approximation assumes that it is possible to find a $\overline{\omega}$ such that the variations in the dispersion $\delta_{n\mathbf{k}}$ can be ignored, leading to $A=\sum_{n\mathbf{k}}A_{n\mathbf{k}}$. To go beyond the Einstein approximation, one can expand in small $\delta_{n\mathbf{k}}/\overline{\omega}\ll1$,
\begin{widetext}
\begin{align}
\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}}{\mathrm{e}^{\omega_{n\mathbf{k}}/k_{\mathrm{B}}T}-1}=\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}}{\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T}-1}-\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T}\delta_{n\mathbf{k}}}{k_{\mathrm{B}}T(\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T}-1)^2}
+\sum_{n\mathbf{k}}\frac{A_{n\mathbf{k}}\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T}(1+\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T})\delta^2_{n\mathbf{k}}}{2(k_{\mathrm{B}}T)^2(\mathrm{e}^{\overline{\omega}/k_{\mathrm{B}}T}-1)^3}+\mathcal{O}(\delta_{n\mathbf{k}}^3)\,. \label{eq:method1}
\end{align}
\end{widetext}
This form provides a systematic way of improving upon the BE oscillator model, at the expense of increasing the number of fitting parameters. The even $\delta$ terms in the expansion are the most important ones because the $\delta$-spread about $\overline{\omega}$ is approximately equal on both sides, leading to a high degree of cancellation in the odd terms. This means that it is usually convenient to restrict the expansion to even terms.
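A small numerical illustration of Eq.\ (\ref{eq:method1}) with hypothetical numbers: for a spread of frequencies symmetric about $\overline{\omega}$ the first-order term cancels exactly, and the expansion through second order already reproduces the exact mode sum to a fraction of a percent:

```python
import math

def be(omega, T):
    return 1.0 / math.expm1(omega / T)

# frequencies spread symmetrically about a mean (hypothetical values, k_B = 1)
obar, A, T = 1.0, 1.0, 2.0
deltas = [-0.2, -0.1, 0.0, 0.1, 0.2]

exact = sum(A * be(obar + d, T) for d in deltas)

# terms of Eq. (method1): zeroth, first and second order in delta
e = math.exp(obar / T)
zeroth = len(deltas) * A * be(obar, T)
first = -sum(A * e * d / (T * (e - 1)**2) for d in deltas)
second = sum(A * e * (1 + e) * d**2 / (2 * T**2 * (e - 1)**3) for d in deltas)

assert abs(first) < 1e-12                      # odd term cancels for a symmetric spread
assert abs(exact - (zeroth + second)) < 2e-3 * exact
```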
\subsubsection{Two step method}
The phonon dispersion expansion introduces four fitting parameters, making it difficult to perform an accurate extrapolation with low-quality or sparse experimental data. We therefore propose an alternative method, based on fitting only the BE oscillator form to the experimental data, but requiring a recursive fit.
In general, the BE fit parameters $\mathbf{A}$ will depend on the maximum temperature included in the fit $\mathbf{A}=\mathbf{A}(T_{\mathrm{max}})$. As shown in Appendix\ \ref{app:extrapolation}, the high-temperature asymptotes for $\omega(T_{\mathrm{max}})$ and $A(T_{\mathrm{max}})$ in the BE oscillator fit are
\begin{align}
\omega(T_{\mathrm{max}})=& \,p_0+\frac{p_1p_0^2}{k_{\mathrm{B}}T_{\mathrm{max}}\ln(p_0/k_{\mathrm{B}}T_{\mathrm{max}})} \nonumber \\
&+\frac{p_2p_0^3}{(k_{\mathrm{B}}T_{\mathrm{max}})^2\ln(p_0/k_{\mathrm{B}}T_{\mathrm{max}})^2}\,, \label{eq:omega_t} \\
A(T_{\mathrm{max}})=&\, p_3\left(p_0+\frac{p_1p_0^2}{k_{\mathrm{B}}T_{\mathrm{max}}}+\frac{p_2p_0^3}{(k_{\mathrm{B}}T_{\mathrm{max}})^2}\right)\,. \label{eq:a_t}
\end{align}
This motivates a new scheme that can be implemented in two stages:
\begin{enumerate}
\item Fit the single BE, Eq.\ (\ref{eq:model}), to the data for a range of maximum temperatures $T_{\mathrm{max}}$.
\item Fit Eqs.\ (\ref{eq:omega_t}) and (\ref{eq:a_t}) to the functions $\omega(T_{\mathrm{max}})$ and $A(T_{\mathrm{max}})$ obtained in stage $1$.
\end{enumerate}
The final ZP correction is then $p_3p_0$.
This scheme only requires fitting of the two-parameter BE oscillator model to the experimental data.
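The following sketch illustrates stage 1 with hypothetical oscillators ($k_{\mathrm{B}}=1$); a simple two-point interpolation stands in for a least-squares fit. The fitted frequency drifts with $T_{\mathrm{max}}$ (the input to stage 2), while the fitted amplitude approaches $\sum_iA_i$ at large $T_{\mathrm{max}}$, consistent with the asymptote of Eq.\ (\ref{eq:a_t}):

```python
import math

def n_B(w, T):
    return 1.0 / math.expm1(w / T)

# synthetic "data" from two BE oscillators (hypothetical A_i, omega_i)
oscs = [(0.6, 0.9), (0.4, 2.0)]
def g(T):
    return sum(A * n_B(w, T) for A, w in oscs)

def fit_single_be(Tmax):
    """Two-point fit of A/(e^{w/T} - 1) through (Tmax/2, Tmax);
    w is found by bisection on the monotone ratio g(Tmax)/g(Tmax/2)."""
    T1, T2 = Tmax / 2.0, Tmax
    ratio = g(T2) / g(T1)
    lo, hi = 1e-8, 100.0
    for _ in range(200):
        w = 0.5 * (lo + hi)
        if math.expm1(w / T1) / math.expm1(w / T2) < ratio:
            lo = w
        else:
            hi = w
    A = g(T2) * math.expm1(w / T2)
    return A, w

A_lo, w_lo = fit_single_be(1.0)
A_hi, w_hi = fit_single_be(200.0)
assert abs(w_hi - w_lo) > 0.05        # fit parameters depend on Tmax (stage 2 input)
assert abs(A_hi - 1.0) < 0.01         # A(Tmax) -> sum(A_i) = 1.0 as Tmax grows
```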
\subsection{Benchmarking the extrapolation schemes}
First-principles calculations of the temperature dependence of the thermal band gap of diamond\cite{PhysRevB.87.144302} provide a solid platform from which we can test the relative merits and accuracy of our two new models and the previous schemes.
Diamond is a good case to study because both experimental data and first-principles results are available for the temperature dependence of the band gap. The upper part of Fig.\ \ref{fig:comparison_extrap} shows the temperature dependence of the band gap as given by Eq.\ (\ref{eq:bose}) including $162$ phonon modes (corresponding to a supercell with $54$ atoms) and with the couplings calculated from first principles using the method described in Ref.~\onlinecite{PhysRevB.87.144302}. The results of this calculation are in good agreement with experiment as shown in Fig.\ \ref{fig:comparison_extrap}, and the first-principles calculation gives a ZP correction to the gap of $-0.462$ eV.
\begin{figure}
\centering
\includegraphics[scale=0.4]{./combined_plot3.pdf}
\caption{(Color online) \textit{Upper}: Temperature dependence of the thermal band gap of diamond. The experimental data (black squares, from Ref.\ \onlinecite{Clark11021964}) are compared to the first-principles results (red line). \textit{Lower}: ZP correction to the thermal band gap of diamond obtained with the linear extrapolation scheme, using the most accurate models listed in Table\ \ref{tab:models}. }
\label{fig:comparison_extrap}
\end{figure}
In Fig.\ \ref{fig:comparison_extrap} we show a comparison of the first-principles ZP correction and the extrapolated ZP correction using the fitting functions shown in Table\ \ref{tab:models}. The extrapolated ZP corrections from a fit to the first-principles data recover the first-principles ZP correction if data at sufficiently high temperatures is included. We have not shown data for the Varshni and P\"{a}ssler forms, which lead to poor results that only converge at higher temperatures above the range of the plot.
Using one or two BE oscillators leads to reasonable fits allowing us to estimate the ZP correction. However, the convergence is slow, requiring data from temperatures of about $3,\!000$ K to estimate the ZP correction within $0.01$ eV. The P\"{a}ssler form has more degrees of freedom than a single BE oscillator and, even though (depending on the temperature range) it leads to a fit with a smaller mean square deviation, the extrapolation to zero temperature leads to worse results than fits based on the BE oscillator, and the extrapolated values are outside of the range of Fig.\ \ref{fig:comparison_extrap}. This can be explained by the emphasis of the P\"{a}ssler form on the low-temperature shape, which is not important for the asymptotic high-temperature limit or the ZP correction.
The new methods we have proposed lead to better estimates of the ZP correction. The phonon dispersion method with an expansion up to second order has the same number of fitting parameters as a double BE oscillator but consistently delivers better results. An expansion up to eighth order leads to results converged to better than $0.01$ eV above $350$ K. The two step method outperforms all but the phonon dispersion method with an eighth order expansion above $600$ K, and leads to results comparable to the latter above $900$ K.
\begin{table}[t]
\caption{ZP correction to the electronic band gap of diamond from the experimental data in Ref.\ \onlinecite{Clark11021964}.}
\label{tab:extrapolation_experiment}
\begin{tabular}{lc}
\hline
\hline
& \hspace{0.1cm} ZP correction (eV) \\
\hline
BE oscillator & $-0.29$ \\
Phonon dispersion & $-0.41$ \\
Two step & $-0.51$ \\
Isotope (Ref.\ \onlinecite{Cardona20053}) & $-0.36$ \\
\hline
\hline
\end{tabular}
\end{table}
Now that we have established the limited applicability of the standard extrapolation methods and demonstrated the accuracy of our two new methods, we are well-positioned to revisit the diamond experimental data discussed in Refs.\ \onlinecite{RevModPhys.77.1173,Cardona20053}. We use a variety of models to estimate the ZP correction, with the results summarized in Table\ \ref{tab:extrapolation_experiment}. The isotope method in Table\ \ref{tab:extrapolation_experiment} is an alternative approach for the determination of ZP band gap corrections, and it is described in Ref.\ \onlinecite{Cardona20053}. The single BE oscillator fit leads to poor results, in agreement with the theoretical assessment above. We also note that the BE oscillator extrapolation value reported in Ref.\ \onlinecite{Cardona20053} disagrees with ours because we find different fit parameters than those reported there. The phonon dispersion result, using an expansion up to fourth order, leads to the best agreement with the estimate from the isotope effect, confirming it to be our recommended extrapolation tool. The two step technique does not perform as well as the phonon dispersion technique, as it is more sensitive to the absence of high temperature data, as seen in Fig.\ \ref{fig:comparison_extrap}.
Having completed the analysis of the experimental data, it is instructive to compare the estimate for the ZP correction to that from our first-principles calculations. The two new extrapolation schemes applied to the experimental data deliver $-0.41$ and $-0.51$~eV, lying in the same order as the theoretical assessment in Fig.\ \ref{fig:comparison_extrap}. This suggests an experimental ZP correction in the range $(-0.51,-0.41)$~eV, in good agreement with our first-principles result of $-0.46$~eV.
\section{Low temperature formalism} \label{sec:limits}
In recent years there has been a surge of interest in low-dimensional systems such as graphene and carbon nanotubes. When exploring the emergence of quantum critical physics at low temperatures it is important to understand the role played by phonons.
In this section we extract the asymptotic behaviour\cite{PhysRevLett.92.196403} at low temperatures from our framework, and extend it for the first time, as far as we are aware, to low dimensional systems. At low temperatures, only the lowest energy acoustic modes are excited, so these modes must be treated explicitly. Our derivation follows closely that in Ref.~\onlinecite{ashcroft}
for the specific heat.
\subsection{Three-dimensional $T^4$ power law}
We first consider the three-dimensional system.
In the limit of a large solid, the $\mathbf{k}$-points become dense on the length scale over which physical quantities vary appreciably. This allows us to replace summations over $\mathbf{k}$ by integrals over the first BZ of volume $V_{\mathrm{BZ}}$,
\begin{align}
\langle\hat{O}\rangle=\sum_n\int_{\mathrm{BZ}}\frac{\mathrm{d}^3\mathbf{k}}{(2\pi V_{\mathrm{BZ}})^3}\langle\hat{O}_n(\mathbf{k})\rangle\,.
\end{align}
The BE factor in the operator expectation value (see Eq.\ (\ref{eq:bose})) means that the occupancies of the modes with energies $\omega_n(\mathbf{k})\gg k_{\mathrm{B}}T$ vanish exponentially with decreasing temperature. This allows us to make four assumptions in evaluating the integral:
\begin{enumerate}
\item Only the three acoustic modes $\omega_n(\mathbf{k})=c_n(\hat{\mathbf{k}})|\mathbf{k}|$
contribute as $T\to0$.
\item The occupation of the acoustic modes is concentrated near the zone center and vanishes exponentially at large $|\mathbf{k}|$. We can therefore extend the range of the integral to the entire $\mathbf{k}$-space.
\item As only the neighborhood of the $\Gamma$-point contributes to the integral, we can expand the couplings $a_{s;n}(\mathbf{k})$ in small $\omega_n(\mathbf{k})$,
\begin{align}
a_{s;n}(\mathbf{k})=\sum_{p=2}^{\infty}a_{s;n}^{(p)}(c_n(\hat{\mathbf{k}})|\mathbf{k}|)^p\,. \label{eq:expansion1}
\end{align}
This expression uses the fact that $a_{s;n}(\mathbf{0})=0$, which follows from translational invariance. Also, the dominant term is quadratic (rather than linear) for a broad class of observables including the electronic band gap.\cite{PhysRevLett.92.196403}
\item The dominant term in Eq.\ (\ref{eq:full}) is the independent phonon term with $s=1$.
\end{enumerate}
With these assumptions the expectation value reads
\begin{align}
\langle\hat{O}\rangle&=\sum_{p=2}^{\infty}\frac{3a_1^{(p)}}{2\pi^2V_{\mathrm{BZ}}^3c^3}(k_{\mathrm{B}}T)^{p+2}\int_0^{\infty}\mathrm{d}x\,x^{p+1}n_{\mathrm{B}}(x) \nonumber \\
&=\sum_{p=2}^{\infty}\frac{3\Gamma(p+2)\zeta(p+2)a_1^{(p)}}{2\pi^2V_{\mathrm{BZ}}^3c^3}(k_{\mathrm{B}}T)^{p+2}\,,
\end{align}
where $a_1^{(p)}c^{-3}=\frac{1}{3V_{\mathrm{BZ}}^3}\sum_na_{1;n}^{(p)}\int\frac{\mathrm{d}\Omega}{4\pi}c_n(\hat{\mathbf{k}})^{-3}$ and $\mathrm{d}\Omega$ is the infinitesimal angular element. $\Gamma(p+2)=(p+1)!$ is the gamma function for integer $p$, and $\zeta$ is the Riemann zeta function. In the low temperature limit the $p=2$ term dominates and we recover the power law of Ref.~\onlinecite{PhysRevLett.92.196403},
\begin{align}
\langle\hat{O}\rangle=\frac{\pi^2a_1^{(2)}}{10c^3}(k_{\mathrm{B}}T)^4\,,
\end{align}
with a prefactor that depends on the parameter $a_1^{(2)}/c^3$, which can be determined by fitting to experimental data.
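The $\Gamma(p+2)\zeta(p+2)$ factors arise from the dimensionless Bose--Einstein integral, an identity that is easy to verify numerically (illustrative check only):

```python
import math

def bose_integral(p, upper=80.0, n=200000):
    """Trapezoidal estimate of I_p = integral_0^inf x^{p+1}/(e^x - 1) dx;
    the integrand vanishes at both endpoints for p >= 1."""
    h = upper / n
    return h * sum((i * h)**(p + 1) / math.expm1(i * h) for i in range(1, n))

def zeta(s, terms=100000):
    return sum(1.0 / k**s for k in range(1, terms + 1))

for p in (2, 3, 4):
    exact = math.gamma(p + 2) * zeta(p + 2)   # I_p = Gamma(p+2) zeta(p+2)
    assert abs(bose_integral(p) - exact) < 1e-4 * exact
```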
We note that the third assumption above refers to the class of observables (including the band gap) for which $a_{s;n}(\mathbf{k})$ has a quadratic dependence on the energy around $\mathbf{k}=\mathbf{0}$.\cite{PhysRevLett.92.196403} However, the methodology presented here is more general, so it could be applied to other classes of observables with different asymptotic behaviour.
\subsection{Low dimensional systems}
In this section we extend the low temperature results to $2$- and $1$-dimensional systems. Low dimensional systems are important for understanding exotic physical properties and for technological applications, in particular graphene\cite{rise_of_graphene} and carbon nanotubes. They are qualitatively different from $3$-dimensional systems because linear and quadratic acoustic branches coexist, which makes a detailed study of low-dimensional systems essential.
In a $2$-dimensional system, there are two acoustic branches, one with linear dispersion, and the other with quadratic dispersion corresponding to out-of-plane atomic motion.
A $1$-dimensional system has a single acoustic linear branch and two quadratic acoustic branches.
For $2$ and $1$-dimensional systems the linear branches lead to
\begin{align}
\langle\hat{O}\rangle_{\mathrm{2D}}^{\mathrm{linear}}&=\frac{2\zeta(3)a_1^{(2)}}{\pi c^2}(k_{\mathrm{B}}T)^3\,, \\
\langle\hat{O}\rangle_{\mathrm{1D}}^{\mathrm{linear}}&=\frac{\zeta(2)a_1^{(2)}}{2\pi c}(k_{\mathrm{B}}T)^2\,.
\end{align}
The quadratic branches with $\omega_n(\mathbf{k})=c_n(\hat{\mathbf{k}})k^2$ lead to
\begin{align}
\langle\hat{O}\rangle_{\mathrm{2D}}^{\mathrm{quadratic}}&=\frac{\pi a_1^{(2)}}{24 c^2}(k_{\mathrm{B}}T)^2\,, \\
\langle\hat{O}\rangle_{\mathrm{1D}}^{\mathrm{quadratic}}&=\frac{a_1^{(2)}}{8(\pi c)^{1/2}}\zeta\left(\frac{3}{2}\right)(k_{\mathrm{B}}T)^{3/2}\,.
\end{align}
The low-temperature asymptote of the lower-dimensional systems is dominated by the quadratic phonon branches. As far as we are aware, this is the first time these limits are reported and they are relevant for materials of reduced dimensionality such as graphene and carbon nanotubes.
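The $T^{3/2}$ law for the one-dimensional quadratic branch can be checked numerically (arbitrary units, illustrative only) by evaluating the mode integral $\int_0^{\infty}\mathrm{d}k\,\omega(k)\,n_{\mathrm{B}}(\omega(k)/k_{\mathrm{B}}T)$ with $\omega=ck^2$ at two temperatures:

```python
import math

def mode_integral(T, c=1.0, kmax=12.0, n=120000):
    """integral_0^kmax dk  w(k) n_B(w/T)  with w = c k^2 (trapezoid rule);
    the integrand tends to T as k -> 0 and is exponentially small at kmax."""
    h = kmax / n
    total = 0.5 * T                     # half-weight endpoint value at k = 0
    for i in range(1, n):
        w = c * (i * h)**2
        total += w / math.expm1(w / T)
    return h * total

ratio = mode_integral(2.0) / mode_integral(1.0)
assert abs(ratio - 2**1.5) < 0.01      # I(T) scales as T^{3/2}
```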
A finite ($0$-dimensional) system has discrete phonon modes. Hence, the temperature dependence as $T\to0$ is a discrete sum of BE oscillators with an asymptotic exponential behaviour.
\section{Conclusions} \label{sec:conclusions}
We have presented an analytic phenomenology for describing the temperature dependence of phonon-renormalized properties. This formalism has allowed us to study important physical limits, and contextualize standard approximations and models used in the literature.
We first recovered from our formalism the usual BE expressions for the temperature dependence of electronic band gaps and lattice parameters. We have considered extensions to the high-temperature behavior beyond lowest order perturbation theory. We have studied standard extrapolation schemes for estimating the ZP correction of phonon-dependent properties from knowledge of their temperature dependence. The standard schemes fail to recover the correct asymptotic limit, and we have proposed and tested two new strategies that deliver results of an order of magnitude higher accuracy. Finally, we applied our new schemes to extract a more accurate value for the ZP correction for the band gap of diamond from experimental data.
We have also discussed the properties of the temperature dependence of band gaps in the limit $T\to0$. We have recovered the standard $T^4$ power law for three-dimensional solids, and obtained a $T^2$ power law in two dimensions and a $T^{3/2}$ power law in one dimension. These new results are important for materials of reduced dimensionality such as graphene and carbon nanotubes.
The theory we have presented makes no reference to a specific expectation value, microscopic theory, or material, and is therefore applicable to a wide range of phonon-related phenomena. The closed-form analytic results facilitate the calculation of further properties or limits beyond those explicitly considered in this paper.
We have treated the vibrational degrees of freedom within the harmonic approximation. To extend the formalism to include anharmonic properties, the anharmonic coupling between otherwise independent modes could be treated at mean-field level as in Ref.\ \onlinecite{PhysRevB.87.144302}. One could then expand the total wave function in terms of a simple harmonic oscillator basis, which would lead to matrix elements similar to those in Eq.\ (\ref{eq:matrix_element}).
\begin{acknowledgments}
We thank Neil Drummond for useful discussions.
B.M. and R.J.N. acknowledge the financial support of the Engineering and Physical Sciences Research Council (UK), and G.J.C. funding from Gonville and Caius College.
\end{acknowledgments}
\section{Introduction}
The nonlinear partial differential equation
\begin{equation}\label{muCH}
\mu(u_t)-u_{xxt}=-2\mu(u)u_x+2u_xu_{xx}+uu_{xxx}, \qquad t > 0, \quad x \in S^1 = {\Bbb R}/{\Bbb Z}, \\
\end{equation}
where $u(x,t)$ is a real-valued spatially periodic function and $\mu(u)=\int_{S^1}u(x,t)dx$ denotes its mean, was recently introduced in \cite{KLM} as an integrable equation arising in the study of the diffeomorphism group of the circle. It describes the propagation of self-interacting, weakly nonlinear orientation waves in a massive nematic liquid crystal under the influence of an external magnetic field.
The closest relatives of (\ref{muCH}) are the Camassa-Holm equation \cite{C-H, F-F}
\begin{equation}\label{CH}
u_t-u_{txx}+3uu_x=2u_xu_{xx}+uu_{xxx},
\end{equation}
and the Hunter-Saxton \cite{H-S} equation
\begin{equation}\label{HS}
-u_{txx}=2u_xu_{xx}+uu_{xxx}.
\end{equation}
In fact, each of the equations (\ref{muCH})-(\ref{HS}) can be written in the form
\begin{equation}\label{CHmform}
m_t + u m_x + 2 u_x m = 0, \qquad m = Au,
\end{equation}
where the operator $A$ is given by $A= \mu - \partial_x^2$ in the case of (\ref{muCH}), $A = 1 - \partial_x^2$ in the case of (\ref{CH}), and $A = - \partial_x^2$ in the case of (\ref{HS}).
Following \cite{lmt}, we will refer to equation (\ref{muCH}) as the $\mu$-Camassa-Holm ($\mu$CH) equation.
Equations (\ref{muCH})-(\ref{HS}) share many remarkable properties: (a) They are all completely integrable systems with a corresponding Lax pair formulation, a bi-Hamiltonian structure, and an infinite sequence of conservation laws, see \cite{C-H, C-M, H-Z, KLM}. (b) They all arise geometrically as equations for geodesic flow in the context of the diffeomorphism group of the circle $\text{Diff}(S^1)$ endowed with a right-invariant metric \cite{KLM, K-M, kou, mis2, Shk98}. (c) They are all models for wave breaking (each equation admits initially smooth solutions which break in finite time in such a way that the wave remains bounded while its slope becomes unbounded) cf. \cite{C-H, con1, con4, C-M, H-S, KLM, mis1}.
A particularly interesting feature of the Camassa-Holm equation is that it admits peaked soliton solutions \cite{C-H}. These solutions (called peakons) are traveling waves with a peak at their crest and they occur both in the periodic and in the non-periodic setting. It was noted in \cite{lmt} that the $\mu$CH equation also admits peakons: For any $c \in {\Bbb R}$, the peaked traveling-wave $u(x, t) = c\varphi(x - ct)$, where (see figure \ref{peakon.pdf})
\begin{equation}\label{varphipeakon}
\varphi(x) = \frac{1}{26}(12 x^2 + 23) \quad \text{for} \quad x \in [-1/2, 1/2]
\end{equation}
and $\varphi$ is extended periodically to the real line, is a solution of (\ref{muCH}).
Note that the height of the peakon $c\varphi(x-ct)$ is proportional to its speed.
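It is also instructive to note that, like the Camassa-Holm peakons, the wave (\ref{varphipeakon}) carries its momentum entirely at the crest. Indeed, $\mu(\varphi) = \int_{-1/2}^{1/2}\frac{12x^2+23}{26}\,dx = \frac{12}{13}$, while $\varphi_{xx} = \frac{12}{13}$ away from the peak and $\varphi_x$ jumps by $-\frac{12}{13}$ across $x = 1/2$ (mod 1), so that
\begin{equation*}
m = (\mu - \partial_x^2)\varphi = \frac{12}{13} - \Big(\frac{12}{13} - \frac{12}{13}\,\delta(x-1/2)\Big) = \frac{12}{13}\,\delta(x-1/2).
\end{equation*}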
If waves such as the peakons are to be observable in nature, they need to be stable under small perturbations. The stability of the peakons is therefore of great interest. Since a small change in the height of a peakon yields another
one traveling at a different speed, the correct notion of stability here is that of {\it orbital stability}:
a periodic wave with an initial profile close to a peakon remains close to some translate of it for all later times. That is, the shape of the wave remains approximately the same for all times.
The Camassa-Holm peakons are orbitally stable in the non-periodic setting \cite{C-S} as well as in the periodic case \cite{le1}.
In this paper, we show that the periodic $\mu$CH peakons given by (\ref{varphipeakon}) are also orbitally stable:
\begin{theorem}\label{thm_stabintro}
The periodic peakons of equation (\ref{muCH}) are orbitally stable in $ H^1({S^1}). $
\end{theorem}
An outline of the proof of theorem \ref{thm_stabintro} is given in section \ref{outlinesec}, while a detailed proof is presented in section \ref{proofsec}.
We conclude the paper with section \ref{commentssec} where we discuss some results on the existence of solutions to (\ref{muCH}).
\begin{figure}
\begin{center}
\includegraphics{peakon.pdf} \\
\begin{figuretext}\label{peakon.pdf}
The periodic peakon $\varphi(x)$ of the $\mu$CH equation.
\end{figuretext}
\end{center}
\end{figure}
\section{Outline of Proof}\label{outlinesec}
There are two standard methods for studying stability of a solution of a dispersive wave equation. The first method consists of linearizing the equation around the solution. In many cases, nonlinear stability is governed by the linearized equation. However, for the $\mu$CH and CH equations, the nonlinearity plays the dominant role rather than being a higher-order perturbation of the linear terms. Thus, it is not clear how to prove nonlinear stability of the peakons using the linearized problem. Moreover, the peakons $c\varphi(x-ct)$ are continuous but not differentiable, which makes it hard to analyze the spectrum of the operator linearized around $c\varphi$.
The second method is variational in nature. In this approach, the solution is realized as an energy minimizer under appropriate constraints. Stability follows if the uniqueness of the minimizer can be established (otherwise one only obtains the stability of the set of minima). A proof of the stability of the Camassa-Holm peakons using the variational approach is given in \cite{C-L1} for the case on the line and in \cite{le2} for the periodic case.
In this paper, we prove stability of the peakon (\ref{varphipeakon}) using a method that is different from both of the above methods. Taking $c = 1$ for simplicity, our approach can be described as follows.
To each function $w:S^1 \to {\Bbb R}$, we associate a function $F_w(M,m)$ of two real variables $(M, m)$ in such a way that the correspondence $w \mapsto F_w$ has the following properties:
\begin{itemize}
\item If $u(x,t)$ is a solution of (\ref{muCH}) with maximal existence time $T>0$, then
\begin{equation}\label{Fgeqzero}
F_{u(t)}(M_{u(t)},m_{u(t)})\geq 0, \qquad t \in [0,T),
\end{equation}
where $M_{u(t)}= \max_{x \in S^1} \{u(x,t)\}$ and $m_{u(t)}= \min_{x \in {S^1}}\{u(x,t)\}$ denote the maximum and minimum of $u$ at the time $t$, respectively.
\item For the peakon, we have $F_\varphi \equiv F_{\varphi(\cdot)} = F_{\varphi(\cdot - t)}$ and $F_\varphi(M,m) \leq 0$ for all $(M,m)$ with equality if and only if $(M,m) = (M_\varphi, m_\varphi)$, see figure \ref{graphF.pdf}.
\item If $w:S^1 \to {\Bbb R}$ is such that $H_i[w]$ is close to $H_i[\varphi]$, $i = 0,1,2$, where $H_0, H_1, H_2$ are the conservation laws of (\ref{muCH}) given by
\begin{align} \label{conservationlaws}
H_0[u] = \int u dx, \quad
H_1[u] = \frac{1}{2}\int mu dx, \quad
H_2[u] = \int \left(\mu(u)u^2 + \frac{1}{2}uu_x^2 \right)dx,
\end{align}
then the function $F_{w}$ is a small perturbation of $F_{\varphi}$.
\end{itemize}
Using the correspondence $w \mapsto F_w$, stability of the peakon is proved as follows.
If $u$ is a solution starting close to the peakon $\varphi$, the conserved quantities $H_i[u]$ are close to
$H_i[\varphi]$, $i=0,1,2$, and hence $F_{u(t)}$ is a small perturbation of $F_{\varphi}$ for any $t \in [0, T)$. This implies that the set where $F_{u(t)} \geq 0$ is contained in a small neighborhood of
$(M_\varphi, m_\varphi)$ for any $t \in [0, T)$. We conclude from (\ref{Fgeqzero}) that
$(M_{u(t)},m_{u(t)})$ stays close to
$(M_\varphi,m_\varphi)$ for all times. The proof is completed by noting
that if the maximum of $u$ stays close to the maximum of the peakon,
then the shape of the whole wave remains close to that of the peakon.
Our proof is inspired by \cite{le1} where the stability of the periodic peakons of the Camassa-Holm equation
is proved.\footnote{The proof in \cite{le1} is in turn inspired by the proof of stability of the Camassa-Holm peakons on the line presented in \cite{C-S}.}
The approach here is similar, but there are differences. The main difference is that in \cite{le1} the function $F_u$ associated with a solution $u(x,t)$ could be chosen to be independent of time, whereas here the function $F_{u(t)}$ depends on time. Indeed, our definition of the function $F_{u(t)}(M,m)$ involves the $L^2$-norm $\|u(t)\|_{L^2({S^1})}$, which is not conserved in time. However, since this norm is controlled by the conservation law $H_1$, we can ensure that it remains bounded for all times. This turns out to be enough to ascertain that the function $F_{u(t)}$, despite its time-dependence, remains close to $F_\varphi$ for all $t \in [0, T)$.
\section{Proof of Stability}\label{proofsec}
We will identify $S^1$ with the interval $[0,1)$ and view
functions on $S^1$ as
periodic functions on the real line of period one. For an integer $n \geq 1$,
we let $H^n(S^1)$ denote the Sobolev space of all square integrable functions
$f \in L^2(S^1)$ with distributional derivatives $\partial_x^i f \in L^2(S^1)$ for
$i=1,\dots,n$. The norm on $H^n(S^1)$ is given by
$$\| f \|_{H^n(S^1)}^2 = \sum_{i=0}^n \int_{S^1} (\partial_x^i f)^2(x) dx.$$
Equation (\ref{muCH}) can be recast in conservation form as
\begin{equation} \label{weakmuCH}
u_t + uu_x
+
A^{-1}\partial_x \Big( 2\mu(u)u +\frac{1}{2}u_x^2 \Big)
= 0,
\end{equation}
where $A= \mu - \partial_x^2$ is an isomorphism between $H^s(S^1)$ and $H^{s-2}(S^1)$ cf. \cite{KLM}.
By a {\it weak solution} $u$ of (\ref{muCH}) on $[0,T)$ with $T>0$, we mean a function $u \in
C([0,T); H^1(S^1))$ such that (\ref{weakmuCH}) holds in distributional sense and
the functionals $H_i[u]$, $i=0,1,2$, defined in (\ref{conservationlaws}) are
independent of $t \in[0,T)$. The peakons defined in (\ref{varphipeakon}) are weak solutions in this sense \cite{lmt}.
Our aim is to prove the following precise reformulation of the theorem
stated in the introduction.
\begin{theorem} \label{thm_stabprecise}
For every $\epsilon >0$ there is a $\delta >0$ such that if
$u \in C([0,T); H^1({S^1}))$ is a weak solution of (\ref{muCH}) with
$$ \|u(\cdot, 0) - c\varphi \|_{H^1({S^1})} < \delta$$
then
$$ \|u(\cdot, t) - c\varphi (\cdot - \xi (t) + 1/2) \|_{H^1({S^1})} < \epsilon \quad
\textrm{for} \quad t \in [0,T),$$
where $\xi(t) \in {\Bbb R}$ is any point where the function $u(\cdot, t)$
attains its maximum.
\end{theorem}
The proof of theorem \ref{thm_stabprecise} will proceed through a series of lemmas. The first lemma summarizes the properties of the peakon. For simplicity we henceforth take $c=1$.
\begin{lemma}\label{peakonlemma}
The peakon $\varphi(x)$ is continuous on $S^1$ with peak at $x=\pm 1/2$. The extrema of $\varphi$ are
$$ M_{\varphi} = \varphi (1/2) = 1, \qquad
m_{\varphi} = \varphi (0) = \frac{23}{26}.$$
Moreover,
$$\lim_{x \uparrow 1/2} \varphi_x(x) = \frac{6}{13}, \qquad \lim_{x \downarrow -1/2} \varphi_x(x) = -\frac{6}{13},$$
and
\begin{align*}
& H_0[\varphi]
= \frac{12}{13}, \qquad
H_1[\varphi]
= \max_{x \in S^1} \varphi_x = \frac{6}{13}, \qquad
H_2[\varphi]
= \frac{9024}{10985}.
\end{align*}
\end{lemma}
\begin{proof}
These properties follow easily from the definition (\ref{varphipeakon}) of $\varphi$ and the definition (\ref{conservationlaws}) of $\{H_i\}_{i=0}^{2}$. For example,
$$H_0[\varphi] = \int_{-1/2}^{1/2} \frac{12x^2 + 23}{26} dx = \frac{12}{13}.$$
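The remaining values follow in the same way. Integrating by parts, $2H_1[\varphi] = \mu(\varphi)^2 + \int_{S^1}\varphi_x^2\,dx$, and since $\varphi_x = \frac{12x}{13}$,
$$2H_1[\varphi] = \frac{144}{169} + \frac{144}{169}\int_{-1/2}^{1/2}x^2\,dx = \frac{144}{169} + \frac{12}{169} = \frac{12}{13},$$
while $\int_{S^1}\varphi^2\,dx = \frac{721}{845}$ and $\int_{S^1}\varphi\varphi_x^2\,dx = \frac{744}{10985}$ give
$$H_2[\varphi] = \frac{12}{13}\cdot\frac{721}{845} + \frac{1}{2}\cdot\frac{744}{10985} = \frac{8652 + 372}{10985} = \frac{9024}{10985}.$$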
\end{proof}
We define the $\mu$-inner product $\langle \cdot, \cdot \rangle_{\mu}$ and the associated $\mu$-norm $\| \cdot \|_\mu$ by
\begin{equation}\label{muinnernorm}
\langle u, v \rangle_\mu = \mu(u)\mu(v) + \int_{S^1} u_xv_x dx, \qquad \| u \|_\mu^2 = \langle u, u \rangle_\mu = 2H_1[u], \qquad u,v \in H^1(S^1),
\end{equation}
and consider the expansion of the conservation law $H_1$ around the peakon $\varphi$ in the $\mu$-norm. The following lemma shows that the error term in this expansion is given by $12/13$ times the difference between $\varphi$ and the perturbed solution $u$ at the point of the peak.
\begin{lemma}\label{lm_H1est}
For every $u\in H^1({S^1})$ and $\xi \in {\Bbb R}$,
$$ H_1[u]-H_1[\varphi] = \frac{1}{2} \| u- \varphi (\cdot - \xi)\|^2_{\mu}
+ \frac{12}{13}(u(\xi + 1/2) - M_\varphi).$$
\end{lemma}
\begin{proof}
We compute
\begin{align*}
\frac{1}{2} \| u- \varphi (\cdot - \xi)\|^2_{\mu}
&= H_1[u] + H_1[\varphi (\cdot - \xi)] - \mu(u)\mu(\varphi) - \int_{S^1} {u_x(x)\varphi_x(x - \xi)dx}
\\
& = H_1[u] + H_1[\varphi] - \mu(u) \mu(\varphi) + \int_{S^1} u(x+\xi)\varphi_{xx}(x)dx.
\end{align*}
Since
\begin{equation}\label{eqn_phi_xx}
\varphi_{xx}= \frac{12}{13} - \frac{12}{13}\delta(x - 1/2),
\end{equation}
we find
$$\int_{S^1} {u(x+\xi)\varphi_{xx}(x)dx}= \frac{12}{13} \int_{S^1} u(x) dx - \frac{12}{13} u(\xi +1/2).$$
Using that $H_0[\varphi]=\mu(\varphi)= \frac{12}{13}$, we obtain
$$\frac{1}{2} \| u- \varphi (\cdot - \xi)\|^2_{\mu}
=H_1[u] - H_1[\varphi] + \frac{12}{13}(1 - u(\xi + 1/2)).$$
This proves the lemma.
\end{proof}
\begin{remark}
For a wave profile $u \in H^1({S^1})$, the functional $H_1[u]$ represents
kinetic energy. Lemma \ref{lm_H1est} implies that if a wave $u \in H^1({S^1})$ has energy $H_1[u]$ and height $M_u$ close to the peakon's energy and height, then the whole shape of $u$ is close to that of the peakon. Another physically relevant consequence of lemma \ref{lm_H1est} is that among all waves of fixed energy, the peakon has maximal height. Indeed, if $u \in H^1({S^1}) \subset C({S^1})$ is such that $H_1[u] = H_1[\varphi]$ and $u(\xi) = \max_{x \in {S^1}} u(x)$, then $u(\xi) \leq M_{\varphi}$.
\end{remark}
The peakon $\varphi$ satisfies the differential equation
\begin{equation} \label{peakondiff}
\varphi_x = \left\{
\begin{array}{ll}
-\frac{12}{13} \sqrt{\frac{13}{6}(\varphi - m_\varphi)} & \quad -1/2 < x \leq 0, \\
\frac{12}{13} \sqrt{\frac{13}{6}(\varphi - m_\varphi)} & \quad 0 \leq x< 1/2.
\end{array} \right.
\end{equation}
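Indeed, on $[-1/2,1/2]$ we have $\varphi - m_\varphi = \frac{6}{13}x^2$, so that
$$\frac{12}{13}\sqrt{\frac{13}{6}(\varphi - m_\varphi)} = \frac{12}{13}|x|,$$
which coincides with $|\varphi_x| = \big|\frac{12x}{13}\big|$; the signs in (\ref{peakondiff}) reflect that $\varphi$ decreases on $(-1/2,0)$ and increases on $(0,1/2)$.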
Let $u \in H^1({S^1}) \subset C({S^1})$ and write $M = M_u =
\max_{x \in {S^1}} \{u(x)\}$,
$m = m_u = \min_{x \in {S^1}}\{u(x)\}$. Let $\xi$ and $\eta$ be such
that $u(\xi)=M$ and $u(\eta)=m$. Inspired by (\ref{peakondiff}),
we define the real-valued function $g(x)$ by
\begin{displaymath}
g(x) = \left\{
\begin{array}{ll}
u_x + \frac{12}{13} \sqrt{\frac{13}{6}(u - m)} & \quad \xi < x \leq \eta, \\
u_x - \frac{12}{13}\sqrt{\frac{13}{6}(u - m)} & \quad \eta \leq x < \xi+1,
\end{array} \right.
\end{displaymath}
and extend it periodically to the real line. We compute
\begin{align*}
\int_{S^1} {g^2(x) dx} = &\; \int_{\xi}^{\eta} \left(u_x + \frac{12}{13} \sqrt{\frac{13}{6}(u - m)}\right)^2 dx
+ \int_{\eta}^{\xi+1} \left(u_x - \frac{12}{13} \sqrt{\frac{13}{6}(u - m)}\right)^2 dx
\\
=& \; \int_{\xi}^{\eta} {u_x^2 dx} + \frac{24}{13} \int_{\xi}^{\eta} u_x \sqrt{\frac{13}{6}(u - m)} dx
+\frac{144}{169} \int_{\xi}^{\eta} \frac{13}{6}(u - m) dx
\\
& + \int_{\eta}^{\xi+1} {u_x^2 dx}
- \frac{24}{13}\int_{\eta}^{\xi+1} u_x \sqrt{\frac{13}{6}(u - m)} dx
+ \frac{144}{169} \int_{\eta}^{\xi+1} \frac{13}{6}(u - m) dx.
\end{align*}
Notice that
$$\frac{d}{dx}\left[ 8 \sqrt{\frac{2}{39}} (u - m)^{3/2}\right]
= \frac{24}{13} u_x \sqrt{\frac{13}{6}(u - m)} .$$
Hence,
$$\int_{\xi}^{\eta} u_x \sqrt{\frac{13}{6}(u - m)} dx
= -\int_{\eta}^{\xi +1} u_x \sqrt{\frac{13}{6}(u - m)} dx$$
and
$$ \frac{24}{13} \int_{\xi}^{\eta}u_x \sqrt{\frac{13}{6}(u - m)} dx
= \biggl[8 \sqrt{\frac{2}{39}} (u - m)^{3/2}\biggr]_{\xi}^{\eta} = - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}.$$
We conclude that
\begin{align}\label{gsquare}
\frac{1}{2} \int_{S^1} {g^2(x) dx}
= H_1[u] - \frac{1}{2}\mu(u)^2 - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}
+ \frac{12}{13}(\mu(u) - m).
\end{align}
In the same way, we compute
\begin{align*}
\int_{S^1} u & g^2(x) dx
\\
= &\; \int_{\xi}^{\eta} {u\left(u_x + \frac{12}{13}\sqrt{\frac{13}{6}(u - m)}\right)^2 dx}
+ \int_{\eta}^{\xi+1} {u\left(u_x - \frac{12}{13} \sqrt{\frac{13}{6}(u - m)} \right)^2 dx}
\\
=&\; \int_{\xi}^{\eta} {u u_x^2 dx} + \frac{24}{13}\int_{\xi}^{\eta} u u_x \sqrt{\frac{13}{6}(u - m)} dx
+ \frac{144}{169} \int_{\xi}^{\eta} u \frac{13}{6}(u - m) dx
\\
& + \int_{\eta}^{\xi+1} {u u_x^2 dx} - \frac{24}{13}\int_{\eta}^{\xi+1} {u u_x \sqrt{\frac{13}{6}(u - m)} dx}
+ \frac{144}{169} \int_{\eta}^{\xi+1} u \frac{13}{6}(u - m) dx.
\end{align*}
Since
$$\frac{d}{dx}\left[\frac{8}{5} \sqrt{\frac{2}{39}} (u -m)^{3/2} (2 m+3 u)\right] = \frac{24}{13} u u_x \sqrt{\frac{13}{6}(u - m)},$$
we find
$$ \int_{\xi}^{\eta} u u_x \sqrt{\frac{13}{6}(u - m)} dx
= -\int_{\eta}^{\xi +1} u u_x \sqrt{\frac{13}{6}(u - m)} dx$$
and
\begin{align*}
\frac{24}{13} \int_{\xi}^{\eta} u u_x \sqrt{\frac{13}{6}(u - m)} dx
= - \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M).
\end{align*}
Therefore,
\begin{align}\label{ugsquare}
\frac{1}{2} \int_{S^1} {u g^2(x) dx} = &\; H_2[u] - \left(H_0[u] - \frac{12}{13}\right) \int_{S^1} u^2 dx
- \frac{12}{13} m H_0[u]
\\ \nonumber
&- \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M).
\end{align}
Combining (\ref{ugsquare}) with (\ref{gsquare}), we find
\begin{align} \nonumber
H_2[u] = &\; \frac{1}{2} \int_{S^1} {u g^2(x) dx}
+ \left(H_0[u] - \frac{12}{13}\right) \int_{S^1} u^2 dx
+ \frac{12}{13} m H_0[u]
\\ \nonumber
& + \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M)
\\ \label{ineq}
\leq &\; \frac{M}{2} \int_{S^1} {g^2(x) dx}
+ \left(H_0[u] - \frac{12}{13}\right) \int_{S^1} u^2 dx
+ \frac{12}{13} m H_0[u]
\\ \nonumber
& + \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M)
\\ \nonumber
=&\; M \biggl [ H_1[u] - \frac{1}{2}\mu(u)^2 - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}
+ \frac{12}{13}(\mu(u) - m) \biggr]
\\ \nonumber
& + \left(H_0[u] - \frac{12}{13}\right) \int_{S^1} u^2 dx
+ \frac{12}{13} m H_0[u]
+ \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M).
\end{align}
We have actually proved the following lemma.
\begin{lemma}\label{lm_LyapunovF}
For any positive $u \in H^1({S^1})$, define a function
$$ {F_u:\{(M,m)\in {\Bbb R}^2:\, M \geq m > 0\} \rightarrow {\Bbb R}} $$
by
\begin{align*}
F_u(M ,m) = &\; M \biggl [ H_1[u] - \frac{1}{2}H_0[u]^2 - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}
+ \frac{12}{13}(H_0[u] - m) \biggr]
\\ \nonumber
& + \left(H_0[u] - \frac{12}{13}\right) \int_{S^1} u^2 dx
+ \frac{12}{13} m H_0[u]
\\ \nonumber
& + \frac{8}{5} \sqrt{\frac{2}{39}} (M -m)^{3/2} (2 m+3 M) - H_2[u].
\end{align*}
Then
$$F_u(M_u ,m_u) \geq 0,$$
where $M_u= \max_{x \in {S^1}} \{u(x)\}$ and $m_u= \min_{x \in {S^1}}\{u(x)\}.$
\end{lemma}
Note that the function $F_{u}$ depends on $u$ only through the three conservation laws $H_0[u]$, $H_1[u]$, and $H_2[u]$, and the $L^2$-norm of $u$.
\begin{figure}
\begin{center}
\includegraphics{graphF.pdf} \\
\begin{figuretext}\label{graphF.pdf}
The graph of the function $F_{\varphi}(M,m)$ near the point $(M_{\varphi},m_{\varphi}).$
\end{figuretext}
\end{center}
\end{figure}
The next lemma highlights some properties of the function $ F_{\varphi}(M ,m)$
associated to the peakon. The graph of $F_{\varphi}(M,m)$ is shown in figure \ref{graphF.pdf}.
\begin{lemma}\label{lm_derivativesF}
For the peakon $\varphi$, we have
$$ F_{\varphi}(M_{\varphi},m_{\varphi}) = 0,$$
$$ \frac{\partial F_{\varphi}}{\partial M}(M_{\varphi} ,m_{\varphi}) = 0,
\qquad \frac{\partial F_{\varphi}}{\partial m}(M_{\varphi}
,m_{\varphi}) = 0,$$
$$ \frac{\partial^2 F_{\varphi}}{\partial M^2}(M_{\varphi} ,m_{\varphi}) = -{12\over13},
\qquad \frac{\partial^2 F_{\varphi}}{\partial M \partial
m}(M_{\varphi} ,m_{\varphi}) = 0,
\qquad \frac{\partial^2 F_{\varphi}}{\partial m^2}(M_{\varphi}
,m_{\varphi}) = -{12\over13}.$$
\end{lemma}
\begin{proof}
It follows from (\ref{peakondiff}) that the function $g(x)$
corresponding
to the peakon is identically zero. Thus the inequality (\ref{ineq}) is an
equality in the case of the peakon. This means that
$ F_{\varphi}(M_{\varphi},m_{\varphi}) = 0.$
On the other hand, differentiation gives
\begin{align*}
\frac{\partial F_u}{\partial M} & = \biggl [ H_1[u] - \frac{1}{2}H_0[u]^2 - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}
+ \frac{12}{13}(H_0[u] - m) \biggr]
\\ \nonumber
&\quad\ -12\sqrt{{2\over39}}M(M-m)^{1/2}+{12\over5}\sqrt{2\over39}(M-m)^{1/2}(2m+3M)+{24\over5}\sqrt{2\over39}(M-m)^{3/2}\\ \nonumber
& = \biggl [ H_1[u] - \frac{1}{2}H_0[u]^2 - 8 \sqrt{\frac{2}{39}} (M-m)^{3/2}
+ \frac{12}{13}(H_0[u] - m) \biggr],
\end{align*}
and
\begin{align*}
\frac{\partial F_u}{\partial m} & = 12\sqrt{{2\over39}}M(M-m)^{1/2} - {12\over13}M+{12\over13}H_0[u]\\
&\quad\
+ {8\over5}\sqrt{2\over39}\biggl[ -{3\over2}(M-m)^{1/2}(2m+3M) + 2(M-m)^{3/2} \biggr]\\
&= {12\over13}(H_0[u]-M)+8\sqrt{2\over39}(M-m)^{3/2}.
\end{align*}
Further differentiation yields
\begin{align*}
\frac{\partial^2 F_u}{\partial M \partial m}&
= -{12\over13}+12\sqrt{{2\over39}}(M-m)^{1/2}, \\
\frac{\partial^2 F_u}{\partial M^2}=\frac{\partial^2 F_u}{\partial m^2}& = -12\sqrt{{2\over39}}(M-m)^{1/2}.
\end{align*}
To complete the proof, take $F_u = F_\varphi$, $M = M_\varphi$,
and $m = m_\varphi$ in the above expressions for the partial derivatives of
$F$ and use lemma \ref{peakonlemma}.
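For concreteness, the evaluation at $(M_\varphi, m_\varphi)$ can be carried out explicitly: $M_\varphi - m_\varphi = 1 - \frac{23}{26} = \frac{3}{26}$, so that $\sqrt{\frac{2}{39}}(M_\varphi - m_\varphi)^{1/2} = \frac{1}{13}$ and hence
$$\frac{\partial F_\varphi}{\partial M}(M_\varphi, m_\varphi) = \frac{6}{13} - \frac{72}{169} - \frac{12}{169} + \frac{6}{169} = 0, \qquad
\frac{\partial F_\varphi}{\partial m}(M_\varphi, m_\varphi) = -\frac{12}{169} + \frac{12}{169} = 0,$$
while $12\sqrt{\frac{2}{39}}(M_\varphi - m_\varphi)^{1/2} = \frac{12}{13}$ gives the stated values of the second derivatives.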
\end{proof}
\begin{lemma}\label{lm_maxf}
We have
\begin{equation} \label{maxineq}
\max_{x \in S^1} |f(x)| \leq \sqrt{ \frac{13}{12} }
\;\|f\|_{\mu}, \quad f \in H^1(S^1),
\end{equation}
where the $\mu$-norm is defined in \eqref{muinnernorm}.
Moreover, $\sqrt{ \frac{13}{12} }$ is the best constant and
equality holds in (\ref{maxineq}) if and only if $f=c\varphi(\cdot - \xi + 1/2)$ for some
$c,\xi \in {\Bbb R}$,
i.e. if and only if $f$ has the shape of a peakon.
\end{lemma}
\begin{proof}
For $x \in S^1$, by \eqref{muinnernorm} and \eqref{eqn_phi_xx}, we have
\begin{align*}
\frac{13}{12} \langle \varphi(\cdot-x+1/2), f \rangle_{\mu}
&= {13\over12}\mu(\varphi(\cdot-x+1/2))\mu(f) + \frac{13}{12} \int_{S^1} {\varphi'(y-x+1/2)f'(y) dy}\\
&= \frac{13}{12} \int_{S^1} {(\mu-\partial^2_y)\varphi(y-x+1/2)f(y) dy}
\\
&= \int_{S^1} {\delta(y-x)f(y) dy} = f(x).
\end{align*}
Thus, since
$$H_1[\varphi]= \frac{1}{2} \|\varphi\|_{\mu}^2 =
\frac{6}{13},$$
we get
\begin{equation} \label{fineq}
f(x) = \frac{13}{12} \langle \varphi(\cdot-x+1/2), f \rangle_{\mu}
\leq \frac{13}{12} \| \varphi \|_{\mu} \|f\|_{\mu}
= \sqrt{\frac{13}{12}} \; \|f\|_{\mu},
\end{equation}
with equality if and only if $f$ and $\varphi(\cdot-x+1/2)$ are proportional.
Taking the maximum of (\ref{fineq}) over $S^1$ proves the lemma.
\end{proof}
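Note that the peakon itself saturates (\ref{maxineq}): $\|\varphi\|_\mu^2 = 2H_1[\varphi] = \frac{12}{13}$, so
$$\sqrt{\frac{13}{12}}\,\|\varphi\|_\mu = \sqrt{\frac{13}{12}}\sqrt{\frac{12}{13}} = 1 = M_\varphi = \max_{x\in S^1}|\varphi(x)|.$$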
\begin{remark}\label{rk_height}
Lemma \ref{lm_maxf} again indicates that among all travelling waves of fixed energy, the
peakon has maximal height (see also \cite{C-S, le1}).
\end{remark}
The next lemma shows that the $\mu$-norm is equivalent to the $H^1({S^1})$-norm.
\begin{lemma}\label{lm_equivnorms}
Every $u\in H^1(S^1)$ satisfies
\begin{equation}\label{eqn_equivnorms}
\|u\|^2_\mu \leq \|u\|^2_{H^1(S^1)} \leq 3\|u\|^2_\mu.
\end{equation}
\end{lemma}
\begin{proof}
The first inequality holds because (by Jensen's inequality)
$$\mu(u)^2 \leq \int_{S^1} u^2 dx, \qquad u \in H^1(S^1).$$
The second inequality holds because, by lemma \ref{lm_maxf},
$$\|u\|^2_{H^1(S^1)}
\leq \max_{x \in S^1} |u(x)|^2 + \int_{S^1} u_x^2 dx
\leq \left(\frac{13}{12} + 1\right) \|u\|_{\mu}^2.$$
\end{proof}
\begin{remark}
The previous two lemmas can also be proved directly using a Fourier series argument. Indeed, for every $f\in H^3(S^1)$ and $\epsilon>0$, we have (cf. the proof of lemma 2 in \cite{C})
\begin{equation}\label{eqn_estmax}
\max_{x\in S^1}f^2(x)\leq {\epsilon+2\over 24}\int_{S^1}f_x^2 dx +{\epsilon+2\over \epsilon}\mu(f)^2.
\end{equation}
The inequality (see lemma 2.6 in \cite{le1})
\begin{equation}\label{maxH1estimate}
\max_{x \in S^1} |f(x)|^2 \leq \frac{\cosh(1/2)}{2\sinh(1/2)} \|f\|_{H^1(S^1)}^2, \qquad f \in H^1(S^1),
\end{equation}
implies that the map $f \mapsto \max_{x \in S^1} f(x)$ is continuous from $H^1(S^1)$ to ${\Bbb R}$.
Thus, since $H^3$ is dense in $H^1$, equation (\ref{eqn_estmax}) also holds for $f\in H^1(S^1)$.
It follows that, for every $u\in H^1(S^1)$ and every $\epsilon>0$,
\begin{equation}\label{eqn_equivnorms_epsilon}
\|u\|^2_\mu\leq \|u\|^2_{H^1(S^1)}\leq {\epsilon+2\over \epsilon} \mu^2(u) + {\epsilon+26\over24}\int_{S^1}u^2_x dx.
\end{equation}
In particular, we have (taking $\epsilon=1$)
\begin{equation}\label{eqn_equivnorms2}
\|u\|^2_\mu\leq \|u\|^2_{H^1(S^1)}\leq 3 \mu^2(u) + {27\over24}\int_{S^1}u^2_x dx\leq 3\|u\|^2_\mu,
\end{equation}
again showing the equivalence of the two norms.
On the other hand, letting $\epsilon = 24$ in (\ref{eqn_estmax}), we recover (\ref{maxineq}). However, the proof given in lemma \ref{lm_maxf} provides more insight into why $\sqrt{13/12}$ is the best constant.
\end{remark}
\begin{lemma}\label{lm_contMm}\cite{le1}
If $u \in C([0,T); H^1({S^1}))$, then
$$ M_{u(t)}= \max_{x \in {S^1}} u(x,t) \quad \hbox{and} \quad
m_{u(t)}= \min_{x \in {S^1}} u(x,t)$$
are continuous functions of $t \in [0,T)$.
\end{lemma}
\begin{lemma}\label{lm_shape}
Let $u \in C([0,T); H^1({S^1}))$ be a solution of (\ref{muCH}). Given a small
neighborhood
$\mathcal{U}$ of $(M_\varphi, m_\varphi)$ in ${\Bbb R}^2$, there is a $\delta >0$ such that
\begin{equation} \label{MminU}
(M_{u(t)}, m_{u(t)}) \in \mathcal{U} \quad \hbox{for} \quad t \in [0,T) \quad
if \quad
\|u(\cdot, 0) - \varphi\|_{H^1({S^1})} < \delta.
\end{equation}
\end{lemma}
\begin{proof}
Suppose $w \in H^1(S^1)$ is a small perturbation of $\varphi$ such that
$H_i[w]=H_i[\varphi]+\epsilon_i$, $i=0,1,2$. Then
\begin{align*}
F_w(M,m) = F_\varphi(M,m) + M\left [\epsilon_1 - H_0[\varphi]\epsilon_0 - {1\over2}\epsilon_0^2 + \frac{12}{13} \epsilon_0 \right]
+ \epsilon_0\int_{S^1}w^2 dx + {12\over13}m\epsilon_0 - \epsilon_2.
\end{align*}
Suppose $\epsilon_1 < 6/13$ so that $H_1[w] \leq 2H_1[\varphi]$. Then, by lemma \ref{lm_equivnorms},
\begin{equation}\label{L2normbound}
\int_{S^1}w^2 dx \leq \|w\|_{H^1}^2 \leq 3 \|w\|_\mu^2 = 6 H_1[w] \leq 12 H_1[\varphi] = \frac{72}{13}.
\end{equation}
The point is that $\int_{S^1}w^2 dx$ is bounded. Thus, $F_w$ is a small perturbation of $F_\varphi$. The effect of the
perturbation near the point $(M_\varphi, m_\varphi)$ can be
made arbitrarily small by choosing the $\epsilon_i$'s small.
Lemma \ref{lm_derivativesF} says that $F_\varphi(M_\varphi, m_\varphi) = 0$ and that $F_\varphi$
has a critical point with negative definite second derivative at
$(M_\varphi, m_\varphi)$. By continuity of the second derivative, there is
a neighborhood around $(M_\varphi, m_\varphi)$ where $F_\varphi$ is concave
with curvature bounded away from zero. Therefore, the set where $F_w \geq 0$ near $(M_\varphi, m_\varphi)$ will be contained in a neighborhood of $(M_\varphi, m_\varphi)$.
Now let $\mathcal{U}$ be given as in the statement of the lemma.
Shrinking $\mathcal{U}$ if necessary,
we infer the existence of a $\delta' >0$ such that for $u\in C([0,T); H^1({S^1}))$ with
\begin{equation} \label{HclosetoH}
|H_i[u] - H_i[\varphi]|<\delta', \qquad i=0,1,2,
\end{equation}
it holds that the set where $F_{u(t)} \geq 0$ near $(M_\varphi, m_\varphi)$
is contained in $\mathcal{U}$ for each $t \in [0,T)$.
By lemma \ref{lm_LyapunovF} and lemma \ref{lm_contMm}, $M_{u(t)}$ and $m_{u(t)}$ are continuous functions of $t \in [0,T)$
and $F_{u(t)}(M_{u(t)},m_{u(t)}) \geq 0$ for $t \in [0,T)$. We conclude that
for $u$ satisfying (\ref{HclosetoH}), we have
$$(M_{u(t)}, m_{u(t)}) \in \mathcal{U} \quad \hbox{for} \quad t \in [0,T) \quad \hbox{if}
\quad (M_{u(0)}, m_{u(0)}) \in \mathcal{U}.$$
However, the continuity of the conserved functionals $H_i:H^1({S^1})\rightarrow {\Bbb R}$,
$i=0,1,2$,
shows that there is a $\delta >0$ such that (\ref{HclosetoH}) holds for
all $u$ with
$$ \|u(\cdot, 0) - \varphi\|_{H^1({S^1})} < \delta. $$
Moreover, in view of the inequality (\ref{maxH1estimate}),
taking a smaller $\delta$ if necessary, we may
also assume that $(M_{u(0)}, m_{u(0)}) \in \mathcal{U}$ if
$\|u(\cdot, 0) - \varphi\|_{H^1({S^1})} < \delta.$
This proves the lemma.
\end{proof}
\noindent {\it Proof of theorem \ref{thm_stabprecise}.}
Let $u \in C([0,T); H^1({S^1}))$ be a solution of (\ref{muCH})
and suppose we are given an $\epsilon >0$. Pick a neighborhood
$\mathcal{U}$ of $(M_\varphi, m_\varphi)$ small enough that
$|M - M_\varphi| < \frac{13\epsilon^2}{144}$ if
$(M,m) \in \mathcal{U}$.
Choose a $\delta > 0 $ as in lemma \ref{lm_shape} so that (\ref{MminU}) holds.
Taking a smaller $\delta$ if necessary we may also assume that
$$|H_1[u]-H_1[\varphi]|< \frac{\epsilon^2}{12}
\quad \hbox{if} \quad \|u(\cdot, 0) - \varphi\|_{H^1({S^1})} < \delta.$$
Applying lemma \ref{lm_equivnorms} and lemma \ref{lm_H1est}, we conclude that
\begin{align*}
\|u(\cdot, t) - \varphi (\cdot - \xi (t)) \|_{H^1({S^1})}^2&\leq 3\|u(\cdot, t) - \varphi (\cdot - \xi (t)) \|_\mu^2\\
& = 6(H_1[u]-H_1[\varphi]) + {72\over13}(M_\varphi - M_{u(t)}) < \epsilon^2,
\qquad t \in [0,T),
\end{align*}
where $\xi(t) \in {\Bbb R}$ is any point where $u(\xi(t) + 1/2, t)=M_{u(t)}.$
This completes the proof of the theorem.$\hfill\Box$ \bigskip
\begin{remark}\label{rk3}
Note that our proof of stability applies to any
$u \in C([0,T); H^1({S^1}))$ such that $H_i[u]$, $i=0,1,2$, are
independent of time. The fact that $u$ satisfies (\ref{weakmuCH})
in distributional sense was actually never used.
\end{remark}
\section{Comments}\label{commentssec}
Some classical solutions of (\ref{muCH}) exist for all time while others develop
into breaking waves \cite{F-L-Q, KLM, lmt}.
If $u_0 \in H^3({S^1})$, then there
exists a maximal time $T=T(u_0)>0$ such that (\ref{muCH}) has a unique solution
$u \in C([0,T); H^3({S^1})) \, \cap \, C^1([0,T); H^2({S^1}))$ with $H_0, H_1, H_2$
conserved. For $u_0 \in H^r({S^1})$ with $r>3/2$, it is known \cite{lmt}
that (\ref{muCH}) has a unique strong solution $u\in C([0,T); H^r({S^1}))$ for some
$T>0$, with $H_0, H_1, H_2$ conserved. However, the peakons do not belong to
the space $H^r({S^1})$ for $r>3/2$. Thus, to describe the peakons one has to study weak
solutions of (\ref{muCH}). The existence and uniqueness of weak solutions to \eqref{muCH} are still open at this point. Therefore, close to a peakon, there may exist profiles that
develop into breaking waves and profiles that lead to globally existing waves.
Our stability theorem is applicable in both cases up to breaking time.
\section{Introduction}
\label{sec:intro}
About 1/3 of the \(\gamma\)-ray sources listed in the 2nd \textit{Fermi} catalog
\citep[2FGL,][]{nolan12} have not yet been associated with counterparts at lower energies. A precise
knowledge of the number of unidentified gamma-ray sources (UGSs) is extremely relevant since, for example, it could help to provide the tightest constraints
ever determined on dark matter models \citep{abdo2013}. Many UGSs could be blazars, the largest identified population of extragalactic
\(\gamma\)-ray sources, but how many are actually blazars is not yet known due in part to the
incompleteness of the catalogs used for the associations \citep{2011ApJ...743..171A}. The first step to reduce the number of UGSs is therefore to recognize those that could be blazars.
Blazars are the rarest class of Active Galactic Nuclei, dominated by variable, non-thermal radiation over the
entire electromagnetic spectrum \citep[e.g.,][]{1995PASP..107..803U,2013MNRAS.431.1914G}. Their observational properties are generally interpreted in terms of a relativistic jet aligned within a small angle to our line of sight \citep{1978bllo.conf..328B}.
Blazars have been classified as {BL Lacs and FSRQs (or BZBs and BZQs according to the nomenclature proposed by \citealt{2011bzc3.book.....M}), with the two classes showing similar optical spectra except that the latter display stronger emission lines, as well as} higher radio polarization. In particular, if the only spectral features observed are emission
lines with rest frame equivalent width \(EW \leq 5\) \AA~the object is classified as a BZB \citep{1991ApJ...374..431S,1997ApJ...489L..17S}, otherwise it is classified as BZQ \citep{1999ApJ...525..127L,2011bzc3.book.....M}.
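The classification rule above can be sketched as a simple function. This is a minimal illustration only: the 5 \AA~rest-frame threshold is the criterion quoted above, while the function name and the example values are hypothetical, and observed-frame EWs are assumed to be converted to the rest frame by dividing by \((1+z)\).

```python
# Illustrative sketch of the EW-based blazar classification criterion:
# a source is a BZB if every emission line has a rest-frame equivalent
# width <= 5 Angstrom, otherwise it is a BZQ. Example values are hypothetical.

EW_THRESHOLD = 5.0  # Angstrom, rest frame

def classify_blazar(observed_ews, z):
    """Classify from observed-frame equivalent widths (Angstrom) at redshift z.

    Observed EWs are divided by (1 + z) to convert to the rest frame.
    """
    rest_ews = [ew / (1.0 + z) for ew in observed_ews]
    if all(ew <= EW_THRESHOLD for ew in rest_ews):
        return "BZB"
    return "BZQ"

print(classify_blazar([4.2], 0.3))    # weak line: BZB
print(classify_blazar([45.0], 0.48))  # strong Mg II-like line: BZQ
```

Note that a line observed with EW of 6 \AA~at \(z = 0.5\) still yields a BZB classification, since its rest-frame EW is 4 \AA.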
{Systematic projects aimed at obtaining optical spectroscopic observations of blazars are currently carried out by different groups (see, e.g., \citealt {sbarufatti06,sbarufatti09,2012A&A...543A.116L,2013AJ....145..114L}\footnote{\href{http://archive.oapd.inaf.it/zbllac/index.html}{http://archive.oapd.inaf.it/zbllac/index.html}}; \citealt{2013ApJ...764..135S}).}
The blazar spectral energy distributions (SEDs) typically show two peaks: one in the range of {radio}
- soft X-rays, due to synchrotron emission by highly relativistic electrons within the jet; and another one at hard X-ray or \(\gamma\)-ray energies, interpreted as inverse
Compton upscattering by the same electrons of the seed photons provided by the synchrotron emission
\citep{1996ApJ...463..555I} with the possible addition of seed photons from outside the jets yielding
contributions to the non-thermal radiations due to external inverse Compton scattering
\citep[see][]{1993ApJ...416..458D,2009ApJ...692...32D} often dominating {the} \(\gamma\)-ray outputs
\citep{2009A&A...502..749A,2011ApJ...743..171A}.
\begin{table*}
\caption{WISE sources discussed in this paper. In the upper part of the Table we list the \(\gamma\)-ray blazar candidates
associated with UGSs or AGUs, while in the lower part we list the sources associated with
known \(\gamma\)-ray blazars. Column description is given in the main text (see Sect. \ref{sec:sources}).}\label{table_sources}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcclll}
\hline
\hline
WISE NAME & RA & DEC & OTHER NAME & NAME 2FGL & NOTES \\
& J2000 & J2000 & & & \\
\hline
J022051.24+250927.6 & 02:20:51.24 & +25:09:27.6 & NVSSJ022051+250926 & 2FGLJ0221.2+2516 & UGS X-KDE \\
J050558.78+611335.9 & 05:05:58.79 & +61:13:35.9 & NVSSJ050558+611336 & 2FGLJ0505.9+6116 & AGU WISE \\
J060102.86+383829.2 & 06:01:02.87 & +38:38:29.2 & WN0557.5+3838 & 2FGLJ0600.9+3839 & UGS WENSS \\
J064459.38+603131.7 & 06:44:59.39 & +60:31:31.8 & & 2FGLJ0644.6+6034 & UGS WISE \\
J104939.34+154837.8 & 10:49:39.35 & +15:48:37.9 & GB6J1049+1548 & 2FGLJ1049.4+1551 & AGU R-KDE \\
\hline
J022239.60+430207.8 & 02:22:39.61 & +43:02:07.9 & BZBJ0222+4302 & 2FGLJ0222.6+4302 & A, Z=0.444? \\
J100800.81+062121.2 & 10:08:00.82 & +06:21:21.3 & BZBJ1008+0621 & 2FGLJ1007.7+0621 & B, CAND \\
J131443.81+234826.7 & 13:14:43.81 & +23:48:26.8 & BZBJ1314+2348 & 2FGLJ1314.6+2348 & B, CAND \\
J172535.02+585140.0 & 17:25:35.03 & +58:51:40.1 & BZBJ1725+5851 & 2FGLJ1725.2+5853 & B, CAND \\
\hline
\hline
\end{tabular}}
\end{center}
\end{table*}
Recently, \citet{2013ApJS..206...12D} proposed an association procedure to recognize
\(\gamma\)-ray blazar candidates on the basis of their positions in the three-dimensional WISE
color space. As a matter of fact, blazars - whose emission is dominated by beamed, non-thermal
emission - occupy a defined region in such a space, well separated from that occupied by other
sources in which thermal emission prevails \citep{2011ApJ...740L..48M,paper2}. Applying this method, \citet{2013arXiv1308.1950C} recently identified thirteen gamma-ray emitting blazar candidates from a sample of 102 previously unidentified sources selected from Astronomer's Telegrams and the literature.
\citet{2013arXiv1303.3585M} applied the classification method proposed by
\citet{2013ApJS..206...12D} {to} 258 UGSs and 210 active galaxies of uncertain type (AGUs)
listed in the 2FGL \citep{nolan12} finding candidate blazar counterparts for 141 {UGSs} and 125 {AGUs}.
The classification method proposed by \citet{2013ApJS..206...12D}, however, can only be applied to
sources detected in all 4 WISE bands, i.e., 3.4, 4.6, 12 and 22 \(\mu\)m.
Using the X-ray emission in place of the 22 \(\mu\)m detection, \citet{2013ApJS..209....9P} proposed a method to select \(\gamma\)-ray blazar candidates
among \textit{Swift}-XRT sources considering those that feature a WISE counterpart detected at least in
the first 3 bands, and with IR colors compatible with the 90\% two-dimensional densities of known
\(\gamma\)-ray blazars evaluated using the Kernel Density Estimation (KDE) technique \citep[see,
e.g.,][and references therein]{2004ApJS..155..257R,2009MNRAS.396..223D,2011MNRAS.418.2165L}, thereby selecting 37 new
\(\gamma\)-ray blazar candidates. Similarly, using the radio emission as additional information, \citet{massaro2013c} investigated all the radio
sources in the NVSS and SUMSS surveys that lie within the positional uncertainty regions of \textit{Fermi} UGSs and, considering those sources with IR colors compatible with the 90\% two-dimensional KDE densities of known
\(\gamma\)-ray blazars, selected 66 additional \(\gamma\)-ray blazar candidates.
Finally, \citet{massaro2013b} investigated the low-frequency radio emission of blazars and
searched for sources with similar features combining the information derived from the WENSS
and NVSS surveys, identifying 26 \(\gamma\)-ray blazar candidates within the \textit{Fermi}-LAT
positional uncertainty regions of 21 UGSs.
In this paper we present a pilot project to assess the effectiveness of the three methods described above (position in the three-dimensional WISE IR color space, {radio or} X-ray detection plus
position in the two-dimensional WISE IR color space, and low-frequency radio properties) in selecting \(\gamma\)-ray blazar candidates. To this end, we report on optical spectra acquired using the MMT, Loiano and OAN telescopes of 5 WISE \(\gamma\)-ray blazar candidates - counterparts of three UGSs and {two AGUs} - in order to identify their nature and to test the reliability of these different approaches in selecting \(\gamma\)-ray candidate blazars.
In addition, we also present optical spectra of 4
known \(\gamma\)-ray blazars with uncertain redshift estimates
or unknown classification (BZB vs BZQ) \citep{2011ApJ...743..171A,nolan12}
with a WISE counterpart identified by \citet{2013ApJS..206...12D}.
{We note that our approach in selecting the targets for our observations is different from that adopted in other works \citep[i.e.][]{2013ApJ...764..135S}, that is, selecting the source closest to radio or optical coordinates. Our approach for the target selection, as reported in \citet{2013ApJS..206...12D}, \citet{2013arXiv1303.3585M}, \citet{2013ApJS..209....9P} and \citet{massaro2013b}, is the following:
\begin{itemize}
\item[a)] For \textit{Fermi} UGSs or AGUs, among all the sources inside the 95\% {LAT} uncertainty region (\(\sim 10\)') we select gamma-ray blazar candidates on the basis of their multi-wavelength properties {(IR, radio+IR, X-ray+IR, low-frequency radio)}. As a consequence, our selected targets are not necessarily the closest to optical or radio coordinates. {They} may - in principle - not even have a radio counterpart.
\item[b)] For known gamma-ray blazars \citet{2013ApJS..206...12D} associate to {Roma-}BZCAT \citep{2011bzc3.book.....M} sources the closest WISE source inside 3.3'' selected on the basis of its WISE colors. So, even if these sources are spatially compatible with radio or optical coordinates due to the WISE PSF extension (\(\sim 6\)'' in W1 band and \(\sim 12\)'' in W4 band, \citealt{2010AJ....140.1868W}), we cannot be sure a priori that this IR source is actually the blazar counterpart. Since the probability of having two different blazars within 3.3'' is essentially 0 (the blazar density in the sky is about 1 source per 10 square degrees), if the selected WISE source does show a blazar spectrum we can be confident that this is indeed the IR blazar counterpart and that the \citeauthor{2013ApJS..206...12D} procedure correctly classified the WISE source.
\end{itemize}}
The paper is structured as follows: in Sect. \ref{sec:observations} we describe the observation
procedures and the data reduction process adopted, in Sect. \ref{sec:sources} we present our results on individual sources and discuss them in Sect. \ref{sec:discussion}, while in
Sect. \ref{sec:conclusions} we present our conclusions.
Throughout this paper USNO-B magnitudes are reported as photographic magnitudes, SDSS magnitudes are reported in AB system, and 2MASS magnitudes are reported in VEGA system.
\section{Observations}
\label{sec:observations}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{finding_charts.pdf}
\end{center}
\caption{Optical images of the fields of 9 of the WISE sources selected in this paper for optical
spectroscopic follow-up (see Table \ref{table_sources}). The object name, image scale and orientation are indicated in each panel.
The proposed optical counterparts are indicated with red marks and the fields are extracted from the DSS-II-Red survey.}\label{charts}
\end{figure*}
The spectroscopic observations for all sources with the exception of WISEJ022239.60+430207.8 and WISEJ104939.34+154837.8 were carried out during nights of January 17 and 18, 2013 with the 6.5 m
Multiple Mirror Telescope (MMT) and its Blue Channel Spectrograph with a 300 gpm grating and a 2x180''
slit, for a resolution of about 6.2 \AA. The spectra covered about 4800 \AA, centered on 5900 \AA, and
the 3072 x 1024 pixel ccd22 was used as a detector. For each target, we obtained a series of two
spectra, with exposure times of 1800-2700 s and combined them during the reduction process. We used
helium-neon-argon calibration lamps before and after each exposure. A few spectroscopic standards were also observed and used to remove the spectral response
and to flux-calibrate the data.
The object WISEJ022239.60+430207.8 was observed spectroscopically with
the 1.5-meter ``G.D. Cassini" telescope in Loiano (Italy) equipped with
the BFOSC spectrograph, which carries a 1300$\times$1340 pixels EEV CCD.
Two 1800-s spectroscopic frames were secured on 3 December 2012, with
start times at 21:12 and 21:44 UT, respectively. Data were acquired using
Grism \#4 and with a slit width of 2$\farcs$0, giving a nominal spectral
coverage between 3500 and 8700 \AA~and a dispersion of 4.0 \AA/pix.
Wavelength calibration was obtained with Helium-Argon lamps.
Likewise, three optical spectra of 1800 s each of source WISEJ104939.34+154837.8 were secured with the 2.1-meter telescope of the
Observatorio Astron\'omico Nacional (OAN) in San Pedro M\'artir (M\'exico) on 2
May 2013 with mid-exposure time 04:58 UT. The telescope carries a Boller \& Chivens spectrograph and a 1024x1024 pixels E2V-4240 CCD. A slit width of 2$\farcs$5 was used. The spectrograph was tuned in the \(\sim 4000\div 8000\) \AA~range (grating 300 l/mm), with a dispersion of 4 \AA/pixel, which corresponds to a resolution of 8 \AA~(FWHM).
Data were wavelength calibrated using Copper-Helium-Neon-Argon lamps, while for flux calibration spectrophotometric standard stars were observed twice during every night of the observing run.
The data reduction was carried out using the \textsc{IRAF} package of NOAO including bias subtraction,
spectroscopic flat fielding, optimal extraction of the spectra and interpolation of the wavelength
solution. All spectra were reduced and calibrated employing standard techniques in \textsc{IRAF} and our own IDL routines (see, e.g., \citealt{Matheson08}).
\begin{table*}
\caption{Main observation properties of WISE sources discussed in this paper. For each source we indicate the name (WISE NAME), the date of the observation (OBS. DATE), the telescope used for the observations (TELESCOPE), the exposure time (EXPOSURE), the rest frame EW of the identified lines (EW), and the estimated redshift (REDSHIFT).}\label{table_log}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{llcccccccccccc}
\hline
\hline
WISE NAME & OBS. DATE & TELESCOPE & EXPOSURE (min) & \multicolumn{9}{c}{EW (\AA)} & REDSHIFT \\
& & & & Mg \textsc{ii} & [Ne \textsc{v}] & [O \textsc{ii}] & [Ne \textsc{iii}] & Ca \textsc{ii} H & Ca \textsc{ii} K & H \(\delta\) & H \(\beta\) & [O \textsc{iii}] & \\
\hline
J022051.24+250927.6 & 2013-01-17 & MMT & 2\(\times\)30 & \(30.8 \pm 0.7\) & \(1.8 \pm 0.3\) & \(1.7 \pm 0.3\) & \(1.1 \pm 0.4\) & - & - & - & - & \(15.8 \pm 0.3\) & \(0.4818 \pm 0.0002\) \\
J050558.78+611335.9 & 2013-01-18 & MMT & 2\(\times\)30 & - & - & - & - & - & - & - & - & - & - \\
J060102.86+383829.2 & 2013-01-18 & MMT & 2\(\times\)45 & - & - & - & - & - & - & - & - & - & - \\
J064459.38+603131.7 & 2013-01-18 & MMT & 2\(\times\)30 & \(6.1 \pm 0.4\) & - & - & - & - & - & \(7 \pm 2\) & \(4 \pm 2\) & - & \(0.3582\pm 0.0008\) \\
J104939.34+154837.8 & 2013-05-02 & OAN & 3\(\times\)30 & - & - & - & - & \(0.7 \pm 0.1\) & \(0.6 \pm 0.1\) & - & - & - & \(0.3271 \pm 0.0003\) \\
\hline
J022239.60+430207.8 & 2012-12-03 & Loiano & 2\(\times\)30 & - & - & - & - & - & - & - & - & - & - \\
J100800.81+062121.2& 2013-01-17 & MMT & 2\(\times\)30 & - & - & - & - & - & - & - & - & - & \(0.6495\)* \\
J131443.81+234826.7 & 2013-01-17 & MMT & 2\(\times\)30 & - & - & - & - & - & - & - & - & - & - \\
J172535.02+585140.0 & 2013-01-17 & MMT & 2\(\times\)30 & - & - & - & - & - & - & - & - & - & - \\
\hline
\hline
\end{tabular}
\begin{tablenotes}[para]
\item {Notes:}\\
\item[*] Tentative estimate.
\end{tablenotes}
\end{threeparttable}}
\end{center}
\end{table*}
\section{Results on individual sources}
\label{sec:sources}
In Table \ref{table_sources} we list the WISE sources presented in this paper. In the upper part of the table we report the \(\gamma\)-ray blazar candidates
associated with UGSs or AGUs; in particular, in the NAME 2FGL column we indicate the name of associated
\textit{Fermi} source, in the OTHER NAME column we indicate the relative radio
counterpart and in the NOTES column we indicate with X-KDE, WISE, WENSS and R-KDE the sources selected as
\(\gamma\)-ray blazar candidates according to \citet{2013ApJS..209....9P}, \citet{2013arXiv1303.3585M}, \citet{massaro2013b} and
\citet{massaro2013c}, respectively. In the lower part of the Table we list the sources associated with
known \(\gamma\)-ray blazars with the classification method proposed by
\citet{2013ApJS..206...12D}, with additional information from BZCAT catalog; in
particular, for these sources in the OTHER NAME column we indicate the associated blazar name, and in
the NOTES column we report the class depending on the probability of the WISE source to be compatible
with the model of the WISE \textit{Fermi} Blazar locus \citep[][see Sect.
\ref{sec:discussion}]{2013ApJS..206...12D}, and we indicate with CAND the sources listed as blazar
candidates, or we report the redshift estimate \citep[see][]{2011ApJ...740L..48M}.
Optical images of the fields containing these sources are presented in Fig. \ref{charts}, while the
extracted spectra are presented in Figs. \ref{fig:spectra1} and \ref{fig:spectra2}.
The main observational results are presented in Table \ref{table_log}, and a discussion for each individual target is given in the following sub-sections. For each WISE source we report the main properties of the closest sources found in major radio, IR and optical surveys (together with the centroid separation) to obtain additional information on the source nature\footnote{{Although a proper counterpart identification would require more
sophisticated techniques \citep[see for example][]{2006ApJ...641..140B}
for the scope of this work we are simply presenting a list of counterparts
associations only based on positional match. A detailed discussion of the spatial association procedure of blazar with IR and low frequency radio catalog has been performed by \citet{2013ApJS..206...12D}, \citet{2013arXiv1303.3585M} and \citet{massaro2013b}, as well as an estimation of the chance of spurious associations, that has also been discussed by \citet{2013ApJS..209....9P} for the X-ray – IR case.}}.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.45]{2FGLJ0221.pdf}
\includegraphics[scale=0.45]{2FGLJ0505.pdf}
\includegraphics[scale=0.44]{2FGLJ0600.pdf}
\includegraphics[scale=0.43]{2FGLJ0644.pdf}
\includegraphics[scale=0.45]{2FGLJ1049.pdf}
\caption{Optical {spectra of the five WISE \(\gamma\)-ray} blazar candidates associated with \textit{Fermi}-LAT UGS or AGU. The WISE name of each source is
indicated in the relative panel, as well as the identified emission lines. With the exception of
J104939.34+154837.8, whose spectrum has been obtained with OAN telescope, all other spectra have
been obtained with MMT Blue Channel Spectrograph.}\label{fig:spectra1}
\end{center}
\end{figure*}
\subsection{WISEJ022051.24+250927.6}
This source lies in the positional uncertainty region at 95\% level of confidence of the
\textit{Fermi} UGS 2FGLJ0221.2+2516 as reported in the 2FGL catalog \citep{nolan12}, and it is
associated with the NVSS \citep{1998AJ....115.1693C} radio source NVSSJ022051+250926 with a \(\sim 1''\) angular separation. The USNO-B \citep{2003AJ....125..984M} optical counterpart, at \(\sim 0.1''\), features magnitudes
B1=18.74 mag, R1=18.82 mag, B2=19.80 mag, R2=19.51 mag and I2=18.10 mag. \citet{2013ApJS..209....9P} showed that this source is
positionally consistent (\(\sim 4.7''\)) with the X-ray \textit{Swift} source SWXRTJ022051.5+250930,
featuring an unabsorbed flux \(\sim 1.3\times{10}^{-13}\mbox{ erg}\mbox{ cm}^{-2}\mbox{ s}^{-1}\). On
the basis of its position in the two dimensional WISE IR color space - that is the [3.4]-[4.6] {vs} [4.6]-[12] color plane - this source has been selected by the same authors as \(\gamma\)-ray blazar
candidate. The spectrum of WISEJ022051.24+250927.6 presented in Fig. \ref{fig:spectra1}a clearly shows
strong emission lines that we identify as broad Mg \textsc{ii} (\(EW = 30.8 \pm 0.7\) \AA), narrow [Ne \textsc{v}] (\(EW = 1.8 \pm 0.3\) \AA), narrow [O \textsc{ii}] (\(EW = 1.7 \pm 0.3\) \AA), narrow [Ne \textsc{iii}] (\(EW = 1.1 \pm 0.4\) \AA) and narrow [O \textsc{iii}] (\(EW = 15.8 \pm 0.3\) \AA), yielding a redshift \(z = 0.4818 \pm 0.0002\).
\subsection{WISEJ050558.78+611335.9}
{The \textit{Fermi} AGU 2FGLJ0505.9+6116 has been associated with the gamma-ray blazar candidate WISEJ050558.78+611335.9 by \citet{2013arXiv1303.3585M}. The WISE source is associated with the radio sources NVSSJ050558+611336 (\(\sim 0.5''\)) and WENSS \citep{1997A&AS..124..259R} WN0501.4+6109 (\(\sim 2.3''\)).}
The closest source in the USNO-B catalog (\(\sim 0.4''\) from WISE source, {\(\sim 0.1''\) from NVSS source}) features magnitudes R1=18.71 mag, B2=20.73 mag, R2=18.67 mag and I2=17.30 mag, while the closest counterpart in the SDSS \citep{2012A&A...548A..66P,2013arXiv1307.7735A} survey is
SDSSJ050558.78+611335.8 (\(\sim 0.1''\) {from WISE source, \(\sim 0.4''\) from NVSS source}) with magnitudes u=21.66 mag, g=20.58 mag, r=19.65 mag, i=19.02 mag and z=18.58 mag.
WISEJ050558.78+611335.9 is also associated with the 2MASS \citep{2006AJ....131.1163S} IR counterpart 2MASSJ05055874+6113359 (\(\sim 0.1''\)) with magnitudes H=16.228 mag and K=15.156 mag, with a lower limit on the J magnitude of 17.136 mag.
The optical source J0505+6113 (\(\sim 0.4''\)) has been observed with Marcario Low Resolution Spectrograph on the Hobby-Eberly Telescope \citep{2013ApJ...764..135S}{, yielding a featureless spectrum that did not allow a redshift estimate. The 2LAC source coordinates for 2FGLJ0505.9+6116 are closer to the USNO (\(\sim 0.2''\)) source than to the SDSS source (\(\sim 0.3''\)). Although the latter optical sources, at a separation \(\sim 0.4''\), are marginally consistent with each other - considering the astrometric uncertainties of \(0.2''\) for USNO-B catalog and \(0.1''\) for SDSS - it is likely that \citeauthor{2013ApJ...764..135S} observed the closest, brighter USNO source.}
According to the source position in the three dimensional WISE IR color space, this source has been selected by \citet{2013arXiv1303.3585M} as \(\gamma\)-ray BZB candidate{, and its featureless spectrum shown in Figure \ref{fig:spectra1}b confirms this classification of the WISE source.
The similarly featureless spectrum shown by \citeauthor{2013ApJ...764..135S} features comparable fluxes but a different spectral shape, which is not unexpected due to the blazar strong optical variability \citep{2009ApJ...705...46B}.}
\subsection{WISEJ060102.86+383829.2}
The \textit{Fermi} UGS 2FGLJ0600.9+3839 has been associated with the gamma-ray blazar candidate WISEJ060102.86+383829.2 by \citet{2013arXiv1303.3585M}.
The correspondent optical counterpart in the USNO-B catalog, at \(\sim 0.1''\), features magnitudes R1=19.11 mag, R2=19.84 mag
and I2=18.48 mag. According to \citet{2013ApJS..209....9P} this source is
positionally consistent (\(\sim 0.6''\)) with the X-ray \textit{Swift} source SWXRTJ060102.8+383829
having a \(0.3-10\mbox{ keV}\) unabsorbed flux \(\sim 2.3\times{10}^{-13}\mbox{ erg}\mbox{ cm}^{-2}\mbox{ s}^{-1}\).
It is also associated with the radio sources NVSSJ060102+383828 (\(\sim 0.5''\)) and
WN0557.5+3838 (\(\sim 0.8''\)); in particular, based on the source low-frequency radio properties,
\citet{massaro2013b} classified this source as a \(\gamma\)-ray blazar candidate. As shown in Figure
\ref{fig:spectra1}c, the featureless spectrum of WISEJ060102.86+383829.2 confirms its BZB nature.
\subsection{WISEJ064459.38+603131.7}
This source is the gamma-ray blazar candidate counterpart of the UGS 2FGLJ0644.6+6034 proposed by \citet{2013arXiv1303.3585M}.
The USNO-B optical counterpart lying at \(\sim 0.4''\) features magnitudes B1=19.44 mag, R1=19.03 mag,
B2=19.33 mag, R2=18.23 mag and I2=18.28 mag, while the IR counterpart 2MASSJ06445937+6031318 (\(\sim 0.1''\))
features magnitudes J=16.923 mag, H=15.979 mag and K=15.371 mag.
According to \citet{2013ApJS..209....9P} this source is positionally consistent (\(\sim 0.1''\)) with the X-ray
\textit{Swift} source SWXRTJ064459.9+603132 with a \(0.3-10\mbox{ keV}\) unabsorbed flux \(\sim 2.1\times{10}^{-13}\mbox{
erg}\mbox{ cm}^{-2}\mbox{ s}^{-1}\).
This source does not feature any obvious radio counterpart
{in the NVSS survey (the source is outside the FIRST and SUMSS footprints), down to the NVSS flux limit of \(\sim 2.5\) mJy}; nevertheless, {based} on its position in the three dimensional WISE IR color space, this
source has been selected by \citet{2013arXiv1303.3585M} as {a} \(\gamma\)-ray blazar candidate.
The spectrum of WISEJ064459.38+603131.7 presented in Figure \ref{fig:spectra1}d shows an almost featureless continuum with narrow Mg \textsc{ii} (\(EW = 6.1 \pm 0.4\) \AA) {and a weak detection of narrow H\(\delta\) (\(EW = 7 \pm 2\) \AA) emission lines (probably affected by contamination from noise/cosmic rays). For completeness we also report a poorly significant detection of narrow H\(\beta\) (\(EW = 4 \pm 2\) \AA). Such a spectrum is} reminiscent of weak emission line quasar spectra \citep[see e.g.,][and references therein]{2006ApJ...644...86S,2009ApJ...696..580S}, yielding \(z=0.3582\pm 0.0008\).
\begin{figure*}
\begin{center}
\includegraphics[scale=0.45]{BZBJ0222.pdf}
\includegraphics[scale=0.45]{BZBJ1008.pdf}
\includegraphics[scale=0.45]{BZBJ1314.pdf}
\includegraphics[scale=0.44]{BZBJ1725.pdf}
\caption{Optical spectra of the {four} WISE sources associated by \citet{2013ApJS..206...12D} with known
\textit{Fermi}-LAT \(\gamma\)-ray blazars. As in Fig. \ref{fig:spectra1}, the WISE name of each source
is indicated in the relative panel, as well as the identified emission lines. With the exception of
J022239.60+430207.8, whose spectrum has been obtained with Loiano telescope, all other spectra have
been obtained with MMT Blue Channel Spectrograph.}\label{fig:spectra2}
\end{center}
\end{figure*}
\subsection{J104939.34+154837.8}
This source lies in the positional uncertainty region of the \textit{Fermi} AGU 2FGLJ1049.4+1551. It
is associated with the {NVSS} radio sources NVSSJ104939+154838 (\(\sim 1.0''\)) and FIRST \citep{1995ApJ...450..559B} FIRSTJ104939.3+154837 (\(\sim 0.3''\)), and with the optical counterpart SDSSJ104939.35+154837.6, with magnitudes u=18.58 mag, g=18.11 mag, r=17.64 mag, i=17.36 mag and z=17.13 mag. Furthermore, this source is associated {with} 2MASSJ10493935+1548374 (\(\sim 0.4''\)) with magnitudes J=15.622 mag, H=14.899 mag and K=14.144 mag. {According to the} selection method presented by \citet{massaro2013c}, the radio emission from this source and its position in the two-dimensional WISE IR color space classify this source as a \(\gamma\)-ray blazar candidate.
The spectrum of WISEJ104939.34+154837.8 presented in Figure \ref{fig:spectra1}e shows an almost featureless spectrum typical of BZBs, with two weak absorption lines consistent with Ca \textsc{ii} H \& K (\(EW = 0.70 \pm 0.13\) \AA~and \(EW = 0.60 \pm 0.10\) \AA, respectively) located at observed wavelength of 5220.6 and 5266.7 \AA. Given this identification, the estimated redshift for this source is \(z = 0.3271 \pm 0.0003\).
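The redshift quoted above follows directly from the observed wavelengths of the two absorption lines. As a minimal consistency check (assuming the standard laboratory rest-frame wavelengths of 3933.67 \AA~for Ca \textsc{ii} K and 3968.47 \AA~for Ca \textsc{ii} H):

```python
# Consistency check of the redshift derived from the Ca II H & K lines,
# using z = (lambda_obs - lambda_rest) / lambda_rest.
CA_II_K = 3933.67  # Angstrom, rest frame
CA_II_H = 3968.47  # Angstrom, rest frame

def redshift(lam_obs, lam_rest):
    """Redshift from an observed and a rest-frame wavelength."""
    return lam_obs / lam_rest - 1.0

# Observed wavelengths reported in the text for WISEJ104939.34+154837.8
z_k = redshift(5220.6, CA_II_K)
z_h = redshift(5266.7, CA_II_H)
z_mean = 0.5 * (z_k + z_h)
print(f"z(K) = {z_k:.4f}, z(H) = {z_h:.4f}, mean = {z_mean:.4f}")
```

Both lines give mutually consistent values, and their mean agrees with the quoted \(z = 0.3271 \pm 0.0003\) within the uncertainty.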
\subsection{WISEJ022239.60+430207.8}
\citet{2013ApJS..206...12D} {selected} WISEJ022239.60+430207.8 {as} the IR counterpart of 2FGLJ0222.6+4302, associated in the 2FGL and in the 2LAC catalogs \citep{2011ApJ...743..171A,nolan12} with the blazar BZBJ0222+4302, also known as 3C66A. This is a well known TeV
detected BZB{, associated with the radio source NVSSJ022239+430208,} with a long and debated redshift estimate history. In fact, a past tentative measurement
of \(z = 0.444\) \citep{1978bllo.conf..176M,1991ApJS...75..645K} {was} based on the measurement of
single, weak line (the optical spectrum is not published, see \citealt{2008MNRAS.391..967L}). There have
also been suggestions that 3C66A is a member of a cluster at \(z \sim 0.37\)
\citep{1976ApJ...209L..11B,1993AJ....106..869W,1997ApJ...480..547W}, while a lower limit of
\(z > 0.096\) based on the expected equivalent widths of absorption features in the blazar host galaxy
has been set by \citet{2008A&A...477..513F}, and an upper limit of \(z<0.58\) has been set by
\citet{2010PASJ...62L..23Y} comparing the measured and intrinsic VHE spectra due to extragalactic background light absorption. In the same way, an estimate for the blazar redshift of \(z =
0.34\pm 0.05\) was found by \citet{2010MNRAS.405L..76P}. Recently, \citet{2013ApJ...766...35F}
making use of far-ultraviolet HST/COS spectra, evaluated for 3C66A a redshift range \(0.3347 < z <
0.41\) at 99\% confidence.
{The source WISEJ022239.60+430207.8 is associated with the IR
counterpart 2MASSJ02223961+4302078 (\(\sim 0.1''\)), with magnitudes J=12.635 mag, H=11.880 mag and
K=11.151 mag, while the closest source in the USNO-B catalog, at \(\sim 0.1''\), has brightnesses of B1=15.88 mag, R1=15.15 mag, B2=14.94 mag, R2=14.35 mag and I2=12.59 mag. \citet{2013ApJ...764..135S} observed 3C66A with the Low Resolution Imaging Spectrograph at the W. M. Keck Observatory, producing a featureless spectrum that did not yield a redshift estimate. As shown in Figure \ref{fig:spectra2}a, WISEJ022239.60+430207.8 shows a similar featureless spectrum, typical of BZBs; thus, even if we are not able to obtain a spectroscopic redshift estimate, we confirm the blazar nature of WISEJ022239.60+430207.8, which therefore is the IR counterpart of 3C66A.}
\subsection{WISEJ100800.81+062121.2}
\citet{2013ApJS..206...12D} selected this WISE source as the IR counterpart of {the gamma-ray source 2FGLJ1007.7+0621 \citep{2011ApJ...743..171A,nolan12}, associated with the blazar candidate BZBJ1008+0621, associated with the radio sources NVSSJ100800+062121 and FIRSTJ100800.8+06212. The WISE source is} also associated with a USNO-B source (\(\sim 0.2''\)) with brightnesses of B1=17.72 mag, R1=16.72 mag, B2=18.54 mag, R2=16.73 mag and I2=16.74 mag, and with the SDSS source
SDSSJ100800.81+062121.2 (\(\sim 0.1''\)) with magnitudes u=18.65 mag, g=18.11 mag, r=17.64 mag, i=17.29 mag and
z=17.02 mag; {the associated optical spectrum shows weak emission lines yielding a QSO classification with a redshift estimate \(z = 1.36456 \pm 0.00015\).}
{The associated IR counterpart 2MASSJ10080081+0621212 (\(\sim 0.1''\)) has magnitudes J=14.121 mag, H=13.345 mag and K=12.458 mag, and has been classified by \citet{2009ApJ...698.1095U} as a highly variable blazar with \(z=1.72\) on the basis of optical spectroscopy performed with the ESI instrument of the W. M. Keck Observatory telescope. This is at variance with the {findings} of \citet{2013ApJ...764..135S}, who observed BZBJ1008+0621 with the Low Resolution Imaging Spectrograph at the W. M. Keck Observatory, obtaining a featureless spectrum without a redshift estimate.}
{As shown in Figure \ref{fig:spectra2}b, WISEJ100800.81+062121.2 features an almost featureless spectrum with only a weak narrow absorption line that
we tentatively identify as Mg \textsc{ii}, yielding a redshift estimate of \(z = 0.6495\). Although the source variability can explain the variations in magnitudes and spectral shape, it cannot explain the differences in redshift estimates.
A direct comparison with \citeauthor{2009ApJ...698.1095U} is however not possible because those authors did not present their observed spectrum, since their work mainly dealt with red quasars.
Although we cannot firmly {estimate} the source redshift, our spectrum confirms the BZB nature of the WISE source, which therefore is the IR counterpart of BZBJ1008+0621.}
\subsection{WISEJ131443.81+234826.7}
This WISE source has been selected by \citet{2013ApJS..206...12D} as the IR
counterpart of \textit{Fermi} source 2FGLJ1314.6+2348, associated with the
blazar candidate BZBJ1314+2348 \citep{2011ApJ...743..171A,nolan12},
{associated with the radio sources NVSSJ131443+234827 and FIRSTJ131443.8+234826 \citep{2011A&A...526A.102B,2011AJ....142..105P,2011ApJ...726...16L,2012ApJ...744..177L}. The WISE source is {associated} with the IR source 2MASSJ13144382+2348267 (\(\sim 0.1''\)), with brightnesses} J=15.514 mag, H=14.688 mag and K=13.832 mag \citep{2011NewA...16..503M}. Its optical counterpart found in the USNO-B
catalog (\(\sim 0.1''\)) features magnitudes B1=17.05, R1=15.43 mag,
B2=17.80 mag, R2=17.06 mag and I2=16.15 mag, while the closest SDSS source
is SDSSJ131443.80+234826.7 (\(\sim 0.1''\)) with magnitudes u=17.55 mag,
g=17.14 mag, r=16.80 mag, i=16.54 mag and z=16.31 mag; {the
associated low S/N optical spectrum in SDSS DR10 shows a number of lines yielding a QSO classification with redshift estimate \(z = 2.05885 \pm 0.00065\), although the {\tt SMALL\_DELTA\_CHI2} flag indicates that there is more than
one template that fits the spectrum (a feature most commonly seen in
low S/N spectra). In addition, we note that the SDSS DR9 spectrum led to a galaxy classification with an uncertain redshift estimate \(z = 0.22561 \pm 0.23874\). BZBJ1314+2348 has been observed with the Low Resolution Imaging Spectrograph at the W. M. Keck Observatory \citep{2013ApJ...764..135S} without yielding a redshift {estimate}. As shown in Figure \ref{fig:spectra2}c, WISEJ131443.81+234826.7 features a similar featureless spectrum, typical of BZBs; thus, even if we are not able to obtain a spectroscopic redshift estimate, we confirm the blazar nature of WISEJ131443.81+234826.7, which therefore is the IR counterpart of BZBJ1314+2348.}
\subsection{WISEJ172535.02+585140.0}
\citet{2013ApJS..206...12D} found this WISE source to be the counterpart of the \textit{Fermi} source 2FGLJ1725.2+5853, associated in the
2FGL and 2LAC catalogs \citep{2011ApJ...743..171A,nolan12} with the BZB candidate BZBJ1725+5851 \citep{2011bzc3.book.....M}. This WISE source is also associated with the radio sources NVSSJ172535+585139 (\(\sim 0.8''\)), FIRSTJ172535.0+585139
(\(\sim 0.3''\)) and WN1724.8+5854 (\(\sim 2.2''\)). The closest USNO-B source (\(\sim 0.3''\)) has brightnesses of B1=17.56 mag, R1=16.54 mag, B2=17.14 mag, R2=16.15 mag and I2=15.47 mag, while the
closest SDSS source is SDSSJ172535.01+585139.9 (\(\sim 0.2''\)) with magnitudes u=18.38 mag,
g=17.90 mag, r=17.55 mag, i=17.27 mag and z=17.00 mag. The associated IR source 2MASSJ17253500+5851400 (\(\sim
0.1''\)) has brightnesses J=15.549 mag, H=14.705 mag and K=13.952 mag.
Our optical spectrum of WISEJ172535.02+585140.0 is presented in Figure \ref{fig:spectra2}d; it shows a featureless spectrum typical of BZBs. This supports our identification of WISEJ172535.02+585140.0 as the likely counterpart of BZBJ1725+5851. The other optical-IR sources near the WISE position are
SDSSJ172535.03+585140.0 (\(\sim 0.1''\)) and SSTXFLSJ172535.0+585139 (\(\sim 0.1''\)). \citet{2004ApJS..155..257R} and \citet{2007ApJ...663..218M} report that these sources are counterparts of 2MASSJ17253500+5851400, but while the former authors indicate for this source a photometric redshift estimate of \(z = 2.025\) with a 53.3\% probability of the source redshift lying in the range \(2.00<z<2.20\), the latter authors report a tentative redshift upper limit \(z < 0.2974\) estimated from the 4000 \AA~break.
While the latter estimate is compatible with our evidence of this source being a BZB (the redshifts of BZBs in BZCAT range from 0.023 to 1.34, peaking at \(z\sim 0.3\)), the former is unlikely for such a source, indicating either a doubtful association of SDSSJ172535.03+585140.0 with 2MASSJ17253500+5851400 (or of the 2MASS source with WISEJ172535.02+585140.0) or an unreliable photometric redshift estimate.
\section{Discussion}
\label{sec:discussion}
The optical spectra we obtained with MMT, Loiano and OAN telescopes provide the first confirmation of the association procedure and the tentative classification of gamma-ray blazar candidates developed by \citet{2013ApJS..206...12D} and adopted by \citet{2013arXiv1303.3585M}, as well as those proposed in \citet{massaro2013b}, \citet{2013ApJS..209....9P} and \citet{massaro2013c}.
The four WISE sources associated with known \(\gamma\)-ray blazar counterparts have been tentatively classified as BZBs by
\citet{2013ApJS..206...12D}. In fact, the authors assign to every source
a class A, B, or C depending on the probability of the WISE source being compatible with the model of
the WISE \textit{Fermi} Blazar locus (WFB) in the three dimensional color space: class A sources are
considered the most reliable candidate blazars for the high-energy source, while class B and class C
sources are less compatible with the WFB locus but are still deemed gamma-ray blazar candidates. According to this classification, the source WISEJ022239.60+430207.8 is a class A BZB \(\gamma\)-ray candidate, while the other sources analyzed here are class B BZBs. The spectra presented in Figure \ref{fig:spectra2} confirm
for all these sources their BZB nature. In addition, for the source WISEJ100800.81+062121.2 (associated with the
blazar BZBJ1008+0621) we provide for the first time a tentative redshift estimate \(z = 0.65\).
In addition, optical spectroscopy can be used to test the predictions of the different association procedures that are used to find \(\gamma\)-ray blazar candidates lying in the 95\% confidence uncertainty regions of
{UGSs or AGUs listed in the 2FGL or 2LAC}. In particular, the sources WISEJ050558.78+611335.9 and
WISEJ064459.38+603131.7 are selected by \citet{2013arXiv1303.3585M} as class C \(\gamma\)-ray blazar
candidates of BZB and undefined type, respectively; WISEJ060102.86+383829.2 is selected as \(\gamma\)-ray blazar candidate by \citet{massaro2013b} on the basis of its low-frequency radio
properties; WISEJ022051.24+250927.6 is selected as \(\gamma\)-ray blazar candidate by \citet{2013ApJS..209....9P}
combining its \textit{Swift} X-ray emission and its IR WISE colors; finally, WISEJ104939.34+154837.8 is selected as \(\gamma\)-ray blazar candidate by \citet{massaro2013c}
combining its radio emission and its IR WISE colors. As shown in Figure
\ref{fig:spectra1} all these WISE sources show a blazar-like optical spectrum.
In particular, WISEJ050558.78+611335.9 and WISEJ060102.86+383829.2 show featureless BZB spectra, while WISEJ104939.34+154837.8 shows an almost featureless spectrum with weak absorption lines consistent with Ca \textsc{ii} H \& K, yielding a redshift estimate \(z = 0.33\); the EW of these lines (\(0.7\) \AA) is, however, consistent with the BZB definition given in Sect. \ref{sec:intro}.
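The redshift inference from such identified absorption features follows the simple relation \(z = \lambda_{\rm obs}/\lambda_{\rm rest} - 1\). A minimal sketch is given below; the rest wavelengths of the Ca \textsc{ii} K and H lines are standard values, while the observed wavelengths are illustrative numbers for a \(z \sim 0.33\) source, not our measured line centroids:

```python
# Redshift from an identified absorption line: z = lambda_obs / lambda_rest - 1.
CA_II_REST = {"K": 3933.66, "H": 3968.47}   # rest wavelengths in Angstrom

def redshift(lam_obs, lam_rest):
    """Redshift implied by a line observed at lam_obs (same units as lam_rest)."""
    return lam_obs / lam_rest - 1.0

# Illustrative observed wavelengths (not actual measurements from our spectra).
z_K = redshift(5231.8, CA_II_REST["K"])
z_H = redshift(5278.1, CA_II_REST["H"])
z_mean = 0.5 * (z_K + z_H)
```

Requiring that independent lines of a plausible identification yield consistent redshifts, as the K and H lines do here, is what makes such weak-feature estimates credible.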
On the other hand, WISEJ022051.24+250927.6 and WISEJ064459.38+603131.7 show BZQ type spectra with emission lines with \(EW \sim 30\) \AA~and \(EW \sim 6\) \AA, yielding redshift values of \(z = 0.48\) and \(z=0.36\), respectively. The spectrum of WISEJ064459.38+603131.7, in particular, is somewhat reminiscent of weak emission line quasar spectra \citep{2006ApJ...644...86S,2009ApJ...696..580S}, but the blazar identification for this source appears problematic.
In fact, WISEJ064459.38+603131.7 is not detected in the NVSS survey, so we can only put an upper limit of {\(\sim 2.5\) mJy} on its flux. Even if it is possible that deeper radio observations will detect emission from the source, blazars are traditionally defined as radio-loud sources based on the available radio data. All confirmed blazars in BZCAT are {in fact} detected at 1.4 GHz with fluxes \(\gtrsim 1\) mJy, and radio-quiet blazars are extremely rare objects \citep{2004MNRAS.352..903L}.
{Given the observed optical and X-ray flux and assuming \(z=0.36\) we can evaluate the effective spectral indices defined between the rest-frame frequencies of \(5\) GHz, \(5000\) \AA~ and \(1\) keV \citep[e.g.,][]{1999MNRAS.310..465G}, and put an upper limit on \(\alpha_{ro}<0.34\) and \(\alpha_{rx}<0.62\), while we have \(\alpha_{ox}=1.21\). We note that \(\sim 25\%\) of the BZBs in BZCAT catalog have \(\alpha_{ro}\) and \(\alpha_{rx}\) smaller than the evaluated upper limits. However, these values are similar to the peak of the spectral index distributions found in the same catalog, that fall in the ranges \(0.4<\alpha_{ro}<0.5\), \(0.6<\alpha_{rx}<0.7\) and \(1.2<\alpha_{ox}<1.3\).}
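The effective spectral indices quoted above follow the usual two-point definition, \(\alpha_{12} = -\log(f_2/f_1)/\log(\nu_2/\nu_1)\) for flux densities \(f_\nu \propto \nu^{-\alpha}\). A minimal sketch of this evaluation follows; the frequencies correspond to 5~GHz and 1~keV, while the normalization and the test slope (taken equal to the \(\alpha_{rx}\) upper limit above) are purely illustrative:

```python
import math

def effective_index(f1, nu1, f2, nu2):
    """Two-point effective spectral index alpha such that the flux density
    follows f_nu proportional to nu**(-alpha) between the two frequencies."""
    return -math.log10(f2 / f1) / math.log10(nu2 / nu1)

# Sanity check against a synthetic power law with an assumed slope of 0.62.
nu_r, nu_x = 5.0e9, 2.418e17         # 5 GHz and 1 keV, in Hz
f_r = 1.0e-26                        # illustrative radio flux density (cgs)
f_x = f_r * (nu_x / nu_r) ** (-0.62)
alpha_rx = effective_index(f_r, nu_r, f_x, nu_x)   # recovers 0.62
```

In practice the observed fluxes are first K-corrected to the rest-frame frequencies before the indices are formed.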
The blazar nature of our candidates is also reinforced when comparing their multi-wavelength SEDs with
those of the known \(\gamma\)-ray blazars, presented in Figures \ref{seds_figure1} and
\ref{seds_figure2} in Appendix \ref{app:seds}, respectively. Despite the non-simultaneity of the observations, we can clearly see
the two main spectral components - that is, lower energy synchrotron and higher energy inverse Compton - typical of blazar SEDs.
{In the same figures we overplotted the optical spectra presented in Figures \ref{fig:spectra1} and \ref{fig:spectra2}. Although blazars are characterized by strong variability (and therefore a precise match between non-simultaneous data is not to be expected), we notice a general agreement between the optical photometric and spectroscopic data.}
\section{Conclusions}
\label{sec:conclusions}
We presented {a pilot project to assess the effectiveness of several methods in selecting gamma-ray blazar candidates. To this end, we obtained} optical spectroscopic observations for a sample of five \(\gamma\)-ray blazar candidates selected with different methods based on their radio to IR properties, and for a sample of four WISE counterparts to known \(\gamma\)-ray blazars.
The main results of our analysis are summarized as follows:
\begin{enumerate}
\item We confirm the blazar nature of all the sources associated with known \(\gamma\)-ray {blazars}. In addition, we obtain for the first time a tentative redshift estimate \(z = 0.65\) for the blazar BZBJ1008+0621.
\item We confirm the blazar nature of all the \(\gamma\)-ray blazar candidates selected by \citet{2013arXiv1303.3585M}, \citet{massaro2013b}, \citet{2013ApJS..209....9P} and \citet{massaro2013c}. In addition, we obtain for WISEJ104939.34+154837.8, WISEJ022051.24+250927.6 and WISEJ064459.38+603131.7 redshift estimates of \(z = 0.33\), \(z = 0.48\) and \(z=0.36\), respectively.
\item The source WISEJ064459.38+603131.7, in particular, is intriguing since it shows an almost featureless continuum with weak emission lines reminiscent of weak emission line quasar spectra \citep{2006ApJ...644...86S,2009ApJ...696..580S}, but it lacks any obvious radio counterpart, which is required for a blazar classification \citep{2012MNRAS.420.2899G,2013MNRAS.431.1914G}.
\end{enumerate}
While these preliminary results seem to confirm the effectiveness of the classification method presented by
\citet{2013ApJS..206...12D} and of the selection methods presented by \citet{2013arXiv1303.3585M},
\citet{massaro2013b}, \citet{2013ApJS..209....9P} and \citet{massaro2013c}, additional ground-based, optical and near IR, spectroscopic follow up observations of a larger sample of \(\gamma\)-ray blazar candidates are needed to confirm the nature of the selected sources and to obtain their redshift.
~\\
\acknowledgements
{We acknowledge useful comments and suggestions by our anonymous referee.} We are grateful to E. Falco for his valuable support and for the enjoyable nightly discussions at MMT telescope.
This work is supported by the NASA grant NNX12AO97G.
The work at SAO is partially supported by the NASA grant NNX13AP20G.
The work by G. Tosti is supported by the ASI/INAF contract I/005/12/0.
E. Jim\'enez-Bail\'on acknowledges funding by CONACyT research grant 129204 (Mexico).
V. Chavushyan acknowledges funding by CONACyT research grant 151494 (Mexico).
TOPCAT\footnote{\href{http://www.star.bris.ac.uk/$\sim$mbt/topcat/}{http://www.star.bris.ac.uk/$\sim$mbt/topcat/}} \citep{2005ASPC..347...29T} has been used in this work for the preparation and manipulation
of the tabular data and the images. The WENSS project was a collaboration between the Netherlands
Foundation for Research in Astronomy and the Leiden Observatory.
We acknowledge the WENSS team, which consisted of Ger de Bruyn, Yuan Tang,
Roeland Rengelink, George Miley, Huub Rottgering, Malcolm Bremer,
Martin Bremer, Wim Brouw, Ernst Raimond and David Fullagar,
for the extensive work aimed at producing the WENSS catalog.
Part of this work is based on archival data, software or on-line services provided by the ASI Science
Data Center.
This research has made use of data obtained from the High Energy Astrophysics Science Archive
Research Center (HEASARC) provided by NASA's Goddard
Space Flight Center; the SIMBAD database operated at CDS,
Strasbourg, France; the NASA/IPAC Extragalactic Database
(NED) operated by the Jet Propulsion Laboratory, California
Institute of Technology, under contract with the National Aeronautics and Space Administration.
This research has made use of software provided by the Chandra X-ray Center (CXC) in the application
packages CIAO, ChIPS, and Sherpa.
Part of this work is based on the NVSS (NRAO VLA Sky Survey);
The National Radio Astronomy Observatory is operated by Associated Universities,
Inc., under contract with the National Science Foundation.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint
project of the University of Massachusetts and the Infrared Processing and Analysis Center/California
Institute of Technology, funded by the National Aeronautics and Space Administration and the National
Science Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer,
which is a joint project of the University of California, Los Angeles, and
the Jet Propulsion Laboratory/California Institute of Technology,
funded by the National Aeronautics and Space Administration.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the National Science Foundation, the U.S. Department of Energy,
the National Aeronautics and Space Administration, the Japanese Monbukagakusho,
the Max Planck Society, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions.
The Participating Institutions are the American Museum of Natural History,
Astrophysical Institute Potsdam, University of Basel, University of Cambridge,
Case Western Reserve University, University of Chicago, Drexel University,
Fermilab, the Institute for Advanced Study, the Japan Participation Group,
Johns Hopkins University, the Joint Institute for Nuclear Astrophysics,
the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group,
the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory,
the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA),
New Mexico State University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States Naval Observatory,
and the University of Washington.
The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K.
\section{Introduction}
Measurements of the masses and radii of neutron stars provide some of
the most direct constraints on the equation of state of the matter in
the cores of these compact objects. Time resolved X-ray spectroscopy
of thermonuclear bursts observed from some of the low mass X-ray
binaries has been one of the observational methods to constrain the
neutron star masses and radii (see, e.g., van Paradijs\ 1978, 1979;
Damen et al.\ 1990; Lewin, van Paradijs, \& Taam\ 1993). The method
involves modeling high time resolution, high signal-to-noise X-ray
burst data to spectroscopically measure the apparent radius and the
Eddington luminosity for the neutron star, both of which are related
to its mass and radius.
The first few seconds of some of the brightest X-ray bursts show a
characteristic pattern in which the color temperature increases and
then decreases, while the apparent radius monotonically increases
(see, e.g., Galloway et al.\ 2008a). Eventually, the apparent radius
starts to decrease as the color temperature reaches a peak and the
burst starts to decay. In the meantime, the flux remains nearly
constant at a peak value. This phenomenon is understood as the
response of the outermost layers of the neutron star to a
super-Eddington burst flux, where the photosphere expands to a few times
the stellar radius and subsequently contracts back to the neutron star
surface. During the expansion and the contraction phase, the X-ray
flux stays very close to the Eddington limit and any excess energy is
transferred into kinetic energy of the outflow (see, e.g., Kato 1983;
Ebisuzaki, Hanawa, \& Sugimoto 1983; Paczynski \& Proszynski\ 1986).
Accordingly, X-ray bursts from which this phenomenon is observed are
called Photospheric Radius Expansion (PRE) events and the fluxes
attained during the expansion episodes of these bursts are used as a
measure of the local Eddington limit of the neutron star (van
Paradijs\ 1978), where the gravitational and radiation forces are
balanced.
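For reference, this balance point corresponds, in the Newtonian, electron-scattering limit and ignoring redshift and composition corrections, to the Eddington luminosity \(L_{\rm Edd} = 4\pi G M m_p c/\sigma_T\); a minimal sketch of its evaluation in CGS units:

```python
import math

# Newtonian Eddington luminosity for electron scattering on ionized hydrogen,
# L_Edd = 4 pi G M m_p c / sigma_T; redshift and composition corrections are ignored.
G = 6.674e-8         # cm^3 g^-1 s^-2
M_SUN = 1.989e33     # g
M_P = 1.6726e-24     # g
C_LIGHT = 2.998e10   # cm s^-1
SIGMA_T = 6.652e-25  # cm^2

def l_edd(mass_in_msun):
    """Eddington luminosity in erg/s for a star of the given mass."""
    return 4.0 * math.pi * G * mass_in_msun * M_SUN * M_P * C_LIGHT / SIGMA_T

L = l_edd(1.4)       # a 1.4 M_sun neutron star: about 1.8e38 erg/s
```

The observed Eddington flux at the detector is this luminosity, corrected for gravitational redshift and atmospheric composition, divided by \(4\pi D^2\), which is why an independent distance measurement is needed.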
PRE events can be used to determine the Eddington luminosity if the
distance to the X-ray binary is known (see, e.g., Basinska et
al.\ 1984, Damen et al.\ 1990; Kuulkers et al.\ 2003, Galloway et
al.\ 2008a, b). Using the X-ray bursters located in globular clusters
and the peak fluxes reached during X-ray bursts, Kuulkers et
al.\ (2003) tested the idea that the PRE events can be used as a
standard candle. They found that the peak fluxes attained during the
PRE events can indeed be used as standard candles and are accurate to
at least within 15\%. Similarly, using 66 and 40 X-ray bursts from
4U~1728$-$34 and 4U~1636$-$536, Galloway et al.\ (2003, 2006) found
that the peak fluxes reached during photospheric radius expansion
events are normally distributed with a standard deviation of
$\approx$~3\% and 7.6\%, respectively, after corrections related to
the orbital modulation and the composition of the atmosphere are
applied.
Even though a measurement of the Eddington limit of an accreting
neutron star is useful toward measuring its mass and radius, the
determination of the exact moment when a given X-ray burst reaches
this limit is not always straightforward. The observed X-ray flux
during the photospheric radius expansion episode is expected to vary
due to changes in the gravitational redshift as the apparent
photospheric radius rises and falls (see, e.g., Damen et al.\ 1990).
The first moment the flux reaches the Eddington limit occurs during
the burst rise and is not always robustly identified for all of the
bursts. Alternatively, the Eddington limit can be measured at the
moment when the photosphere ``falls'' back to the neutron star
surface. This has been called the touchdown moment (Damen et al.\
1990) and is identified as the point at which the blackbody
temperature reaches the highest value during the burst while the
apparent radius is lowest. Combined with a measurement of the distance
and apparent angular size of the neutron star, the measurement of the
Eddington flux at touchdown can lead to uncorrelated measurements of
the neutron star mass and radius (see, e.g., Ebisuzaki\ 1987; Damen et
al.\ 1990; \"Ozel et al.\ 2009; G\"uver et al.\ 2010a, b).
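One simple operational reading of the touchdown definition is to take the time bin where the fitted color temperature peaks and to verify that the blackbody normalization is near its minimum there. The sketch below, a hypothetical helper applied to made-up time series rather than the actual selection pipeline, illustrates this:

```python
def touchdown_index(temps):
    """One simple operational choice for the touchdown bin: the time bin
    where the color temperature peaks; the normalization should separately
    be checked to be near its minimum there."""
    return max(range(len(temps)), key=lambda k: temps[k])

# Illustrative time series around touchdown (keV and (km/10 kpc)^2).
temps = [1.8, 2.0, 2.2, 2.9, 2.6, 2.2]
norms = [400.0, 300.0, 150.0, 95.0, 100.0, 105.0]
i_td = touchdown_index(temps)   # bin 3: T_c peaks while the normalization is lowest
```

In a real burst the two conditions (maximum temperature, minimum apparent radius) need not fall in exactly the same bin, which is one source of the systematic uncertainty studied in this paper.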
Nearly continuous observations of bursting low mass X-ray binaries
over the last 15 years with the Rossi X-ray Timing Explorer (RXTE)
provided high quality data for over one thousand X-ray bursts from
more than forty X-ray binaries (Galloway et al.\ 2008a). This rich
database of X-ray burst observations enables a study of the spectra of
PRE bursts from which the Eddington limit can be measured and any
systematic variations in the inferred spectral parameters of the X-ray
bursts can be inferred. Such an assessment is essential to better
establish the reliability of the mass and radius measurements from
time-resolved spectroscopic analysis of X-ray bursts.
Using the archival RXTE observations, we recently studied the
systematic uncertainties present in the apparent radius measurements
during the cooling tails of the X-ray bursts (G\"uver et al.\ 2011,
hereafter Paper I). Our analysis showed that the vast majority of the
X-ray spectra extracted from the cooling tails of 447 X-ray bursts are
statistically consistent with Planckian functions and the inferred
spectral parameters for the majority of the bursts follow the expected
F~$\propto$~T$^{4}$ relation for most of the sources. These results
enabled us to measure the apparent radii of a number of neutron stars
and assess the systematic uncertainties in these measurements.
In this paper, we continue to analyze all of the X-ray bursts observed
from low mass X-ray binaries in order to determine the uncertainties
related to spectroscopic measurements of the Eddington limit in PRE
bursts. We focus on the measurement of the Eddington flux at the
touchdown moments in twelve X-ray binaries from which multiple PRE
events have been observed. Our aim is to determine any systematic
uncertainties in these measurements.
In \S2, we briefly summarize the observations and data analysis
techniques, which we discuss in full detail in Paper I. In \S3, we
introduce a systematic method to select the PRE events from the burst
archive using time resolved spectroscopic measurements. In \S4 and 5,
we describe the statistical tools based on Bayesian Gaussian mixture
algorithms that we use to determine the Eddington limit and associated
systematic uncertainties for each source. Finally, in \S6, we present
our results and discuss their implications.
\section{Observations and Data Analysis}
Galloway et al.\ (2008a) presented a catalog of RXTE observations of
X-ray bursts from 48 low mass X-ray binaries. Following Paper I, we
chose 12 X-ray binaries from this sample based on a number of
criteria. We included only the sources that show at least two PRE
events (as defined in Galloway et al.\ 2008a). We excluded all X-ray
binaries that are known to be dippers, ADC sources, or have high
inclinations as well as the known millisecond pulsars. Because they
are likely to be affected by source confusion (Galloway et al.\ 2008a;
Keek et al.\ 2010), we excluded observations of GRS~1741.9$-$2853 and
2E~1742.9$-$2929 and also a small number of bursts from Aql~X$-$1,
4U~1728$-$34, and 4U~1746$-$37. Finally, since a study of the PRE
events observed from EXO~1745$-$248, 4U~1608$-$52, and 4U~1820$-$30
have been reported elsewhere (\"Ozel et al. 2009; G\"uver et al.\
2010a, b), the results for these sources will not be repeated here.
As in Paper I, we imposed a limit on the persistent flux measured
prior to each X-ray burst such that it does not exceed 10\% of the
peak burst flux, i.e., $\gamma\equiv F_{\rm per}/F_{\rm Edd}<0.1$ as
calculated by Galloway et al.\ (2008a). Imposing this limit reduces
the systematic uncertainties introduced by subtracting the pre-burst
emission from the X-ray burst spectra.
\begin{deluxetable}{ccccc}
\tablecolumns{5}
\tablewidth{370pt}
\tablecaption{THE NUMBER OF PRE EVENTS FOR EACH SOURCE}
\tablehead{\colhead{Name} & \colhead{Number of
Bursts\tablenotemark{a}} &
\colhead{Catalog PRE\tablenotemark{b}} & \colhead{$\gamma$
Limit\tablenotemark{c}} &
\colhead{n$_{PRE}$}}
\startdata
4U~0513$-$40 & 7 & 2 & 2 & 2 \\
4U~1636$-$53 & 172 & 52 & 49 & 46 \\
4U~1702$-$429 & 47 & 6 & 6 & 1 \\
4U~1705$-$44 & 47 & 4 & 4 & 2 \\
4U~1724$-$307 & 3 & 3 & 3 & 2\tablenotemark{d} \\
4U~1728$-$34 & 106 & 80 & 71 & 16 \\
KS~1731$-$260 & 27 & 6 & 4 & 2 \\
4U~1735$-$44 & 11 & 8 & 4 & 2 \\
4U~1746$-$37 & 30 & 3 & 0 & -- \\
SAX~J1748.9$-$2021 & 16 & 8 & 3 & 2 \\
SAX~J1750.8$-$2900 & 4 & 2 & 2 & 2 \\
Aql~X$-$1 & 57 & 10 & 10 & 6
\enddata
\tablenotetext{a} {Values are adopted from Galloway et al.\ (2008a)
and show the total number of X-ray bursts detected by RXTE.}
\tablenotetext{b} {The total number of X-ray bursts tagged as PRE or
potentially PRE events in the Galloway et al.\ (2008a) catalog.}
\tablenotetext{c}{The number of remaining bursts with peak flux that
exceeds the pre-burst emission by a factor of 10.}
\tablenotetext{d}{As discussed in detail in Paper I, we excluded the
first burst observed from 4U~1724$-$307 from our analysis, since the
X-ray spectra extracted from this burst cannot be fitted with a
Planckian function and the addition of absorption edges at several
energies is needed (in't Zand et al. 2010).}
\label{sourcestable}
\end{deluxetable}
The final list of all the X-ray binaries and the X-ray bursts we
studied is presented in Table~\ref{sourcestable}. We performed the
data analysis following the methods detailed in Galloway et
al.\ (2008a) and in Paper I. We extracted time resolved
2.5$-$25.0~keV X-ray spectra from all the RXTE/PCA layers. We varied
the exposure time between 0.25~s and 1~s to keep the signal-to-noise
ratio constant based on the count rate during the burst. We also used
a 16~s spectrum, obtained prior to each burst, as background.
Response matrix files were generated using the PCARSP version 11.7,
HEASOFT release 6.7, and HEASARC's remote calibration database. We
took into account the offset pointing of the PCA during the creation
of the response matrix files. Finally, we corrected all of the X-ray
spectra for PCA deadtime following the method suggested by the RXTE
team.\footnote{ftp://legacy.gsfc.nasa.gov/xte/doc/cook\_book/pca\_deadtime.ps}
We used the Interactive Spectral Interpretation System (ISIS), version
1.4.9-55 (Houck \& Denicola 2000) and custom built
S-Lang\footnote{http://www.jedsoft.org/slang/} scripts for spectral
analysis. We fit each spectrum with a blackbody function using the
{\it bbodyrad} model (as defined in XSPEC; Arnaud 1996) and with {\it
tbabs} (Wilms, Allen, McCray\ 2000) to model the interstellar
extinction. For each source, we fixed the hydrogen column density
(N$_{\rm H}$) to the values given in Table 1 of Paper I. In the same
analysis, we also determined that the level of systematic uncertainty
required to make the X-ray burst spectra of each source consistent
with blackbody functions is less than 5\% (see Section 3.1 and Table 2
in Paper I for details). During each fit, we included these minor
systematic uncertainties that we inferred for each source. Then, for
each burst with high temporal and spectral data coverage, we created a
time series of blackbody temperatures $T_{\rm c}$ (in units of keV)
and normalizations $A$ (in units of [km/10~kpc]$^2$) throughout the
burst from the time-resolved spectral
analysis. We used Equation (3) of Galloway et al. (2008a) to
calculate the bolometric fluxes. In the following sections we adopted
the burst numbering system introduced by Galloway et al. (2008a).
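For a blackbody fit, the bolometric flux follows directly from the fit parameters as \(F = \sigma_{\rm SB} T_{\rm c}^4 (R/D)^2\). The sketch below performs the unit conversion from first principles and reproduces a prefactor of roughly \(1.08\times10^{-11}\)~erg~s\(^{-1}\)~cm\(^{-2}\) per keV\(^4\) per (km/10~kpc)\(^2\), close to the constant used in Equation (3) of Galloway et al. (2008a); the example temperature and normalization are illustrative:

```python
# Bolometric blackbody flux from the fitted color temperature T_c (keV) and
# normalization A = (R_km / D_10kpc)^2:  F = sigma_SB * T^4 * (R/D)^2.
SIGMA_SB = 5.6704e-5     # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_IN_K = 1.1605e7      # K per keV
KM = 1.0e5               # cm
TEN_KPC = 3.0857e22      # cm

def bol_flux(tc_kev, norm):
    """Bolometric flux in erg s^-1 cm^-2 for blackbody fit parameters."""
    return SIGMA_SB * (tc_kev * KEV_IN_K) ** 4 * norm * (KM / TEN_KPC) ** 2

# e.g. a touchdown-like spectrum: T_c = 2.5 keV, A = 100 (km/10 kpc)^2
F = bol_flux(2.5, 100.0)
```

Working with this bolometric extrapolation rather than the band-limited 2.5$-$25~keV flux avoids an energy-dependent bias between hot and cool spectral bins.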
\section{Determination of Photospheric Radius Expansion Events}
Our first aim is to select the PRE bursts in the X-ray burst sample,
so that we can use the fluxes attained in them as a measure of the
local Eddington limit on the neutron star surface. As a signature of
PRE, we look specifically for a significant increase in the measured blackbody radius during the burst rise followed by a decrease, in bursts where the X-ray flux remains almost constant at a peak value. Galloway
et al.\ (2008a) devised a set of criteria based on the spectral
parameter variation in each burst in order to identify PRE events and
to differentiate them from other typical X-ray bursts. We adopt and
augment these criteria, as we discuss below.
Galloway et al.\ (2008a) took the following measures as the evidence
that a radius expansion occurred: (1) the blackbody normalization $A$
reached a (local) maximum close to the time of peak flux; (2) lower
values of the normalization $A$ were measured following the maximum,
with the decrease significant to 4 $\sigma$ or more; and (3) there was
evidence of a (local) minimum in the fitted temperature $T_c$ at the
same time as the maximum in $A$. In Figure~\ref{examples1}, we show
examples of the spectral evolution of two different bursts that
satisfy these criteria. While the burst in the left panel is a clear
PRE event, with the photosphere at the peak flux reaching many times
the neutron star radius in the cooling tail, the event on the right
shows a higher normalization late in the burst than it does during the
assumed photosphere expansion. In fact, the blackbody normalization
during the early local maximum is smaller than the asymptotic
normalization of even non-PRE bursts during their cooling tails. We,
therefore, conclude that the latter example is not a secure PRE event.
In order to eliminate such cases, we added an additional criterion
that is based on the comparison of the peak blackbody normalization
reached during an X-ray burst, $A_{\rm peak}$, to the measurement of
the average normalization, $A_{\rm cool}$, found from the cooling tail
for each source. For the former quantity, $A_{\rm peak}$, we select
the peak normalization that occurs when the measured flux is higher
than half of the peak flux. This flux limit ensures that the peak
normalization is selected when the photospheric radius expansion is
expected to occur. For the latter quantity, $A_{\rm cool}$, we used
the average value found from the cooling tails of all the bursts for
each source as reported in Paper I. Note that for Aql~X$-$1 and
4U~0513$-$401, large systematic uncertainties present in the cooling
tails prevented a reliable measurement of their apparent radii in
Paper I. Because of that, we used approximate values of
$R/D=14.6$~km/10~kpc and $R/D=5.7$~km/10~kpc, respectively, which
correspond to the highest flux bins of their cooling tails.
In Figure~\ref{norm_histo}, we show the histogram of all the
normalization ratios $A_{\rm peak}/A_{\rm cool}$ for all the bursts
observed from all the sources included in this study. The resulting
histogram shows that the distribution of the ratio of the peak
normalization to the apparent radius has a main peak around unity and
an extended tail towards higher values. The main peak around unity shows that, for the majority of the X-ray
bursts, the burning covers the apparent surface area of the neutron
star found from the cooling tails. However, there are a number of
X-ray bursts where the radius of the photosphere reached values well
beyond the apparent neutron star radius. We consider these as the
secure events where the photospheric radius expansion occurred. Based
on this histogram, we tagged an X-ray burst as a PRE event if $A_{\rm
peak}/A_{\rm cool} > 1.65$. This value corresponds to the end of the
tail of the main peak in the histogram.
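The resulting selection reduces to a simple cut on the fitted time series. A minimal sketch follows; the threshold of 1.65 is the one adopted here, while the burst arrays are illustrative:

```python
def is_pre(flux, norm, a_cool, ratio_min=1.65):
    """Tag a burst as a secure PRE event: the peak blackbody normalization,
    taken only over time bins with flux above half the peak flux, must
    exceed the cooling-tail normalization a_cool by at least ratio_min."""
    f_peak = max(flux)
    a_peak = max(a for f, a in zip(flux, norm) if f > 0.5 * f_peak)
    return a_peak / a_cool > ratio_min

# Illustrative burst: the normalization overshoots during the bright phase.
flux = [0.2, 0.8, 1.0, 1.0, 0.9, 0.5, 0.2]              # bolometric flux (arbitrary units)
norm = [90.0, 250.0, 400.0, 300.0, 120.0, 100.0, 95.0]  # (km/10 kpc)^2
pre = is_pre(flux, norm, a_cool=100.0)                  # 400/100 = 4.0 > 1.65
```

Restricting the peak-normalization search to bins above half the peak flux implements the requirement that the expansion occur near the burst peak rather than in the cooling tail.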
We excluded from the final selected sample one X-ray burst (burst \#
92) observed from the direction of 4U~1636$-$536. Even though this
burst satisfied the selection criteria, the measured peak flux, $1.75
\times 10^{-8}$~erg~s$^{-1}$~cm$^{-2}$, is much lower than the fluxes
reached in the rest of the burst sample and only half of the peak flux
reached in burst ID 16, which is thought to be a hydrogen-rich burst
(Galloway et al.\ 2006). In Table~\ref{sourcestable}, we present the
number of PRE bursts for each source that are obtained as a result of
the full set of criteria listed above. The additional criterion, which
eliminated bursts such as the one shown in the right panel of
Figure~\ref{examples1}, naturally led to numbers of secure PRE events
per source that are somewhat lower than those selected by Galloway et
al.\ (2008a). In addition, some of the difference in the number of PRE
events is caused by the $\gamma$ limit we imposed in the burst
selection in order to minimize uncertainties related to the
subtraction of the persistent flux, which we take as background.
Table~\ref{sourcestable} shows the number of bursts for each source
that remain after the application of these criteria.
The number of PRE events was most significantly affected by the
stricter selection criteria for 4U~1728$-$34: 16 out of the 69 events
that were tagged potentially as PRE by Galloway et al.\ (2008a) passed
the additional criteria. This was either because the increase in the
normalization was not statistically significant when compared to the
apparent radius of the neutron star in the cooling tails of bursts or
because the normalization showed a second increase during the cooling
tail of the burst that sometimes exceeded the peak normalization
during the PRE phase, as in the example shown in the right panel of
Figure~\ref{examples1}. X-ray bursts showing similar spectral
evolution were previously reported by van Straaten et al.\ (2001) and
also by Galloway et al.\ (2003). Given the fact that both at the peak
and during the cooling tails of these bursts the normalization values
are comparable to the apparent radius of the neutron star, it is
possible that the variation in the blackbody normalization is caused
by a significant variation in the color temperature and is not due to
a photospheric radius expansion. This is also further supported by
the fact that during the peak of these particular X-ray bursts, color
temperatures were significantly higher than 2.5 keV and similar trends
in the blackbody normalization were also noted in Paper I at these
high temperatures. X-ray bursts showing similar spectral evolution
were also seen from 4U~1702$-$429 and Aql~X$-$1.
\begin{figure}
\centering
\includegraphics[scale=0.3, angle=0]{f1a.ps}
\includegraphics[scale=0.3, angle=0]{f1b.ps}
\caption{Examples of X-ray bursts observed from 4U~1728$-$34. The
left panel shows burst \#86, which satisfies our criteria for PRE
identification summarized in Section 3. The right panel shows
burst \#104, which does not satisfy the criteria hence is not
labeled as a PRE event. The selected touchdown moment for the
PRE event is also shown by a vertical line.}
\label{examples1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.45, angle=270]{f2.ps}
\caption{The ratio of the peak blackbody normalization values
(A$_{\rm peak}$) found from all the X-ray bursts analyzed here to
those obtained from the cooling tails (A$_{\rm cool}$) of all the
X-ray bursts (Paper I). Larger ratios correspond to more
distinguishable photospheric radius expansion episodes. The
dashed line shows our limit between the secure and non-secure PRE
events.}
\label{norm_histo}
\end{figure}
We finally explored whether PRE bursts occur only during certain
spectral states of the neutron star binaries. To this end, we used the
data from Galloway et al.\ (2008a) to produce color-color diagrams for
the burst sources and marked on these diagrams the locations of the
PRE and non-PRE bursts. Figure~\ref{ccfig} shows the soft and hard
color for 4U~1728$-$34 and 4U~1636$-$536 prior to the detection of
each X-ray burst. The large (red) data points correspond to PRE bursts
while the small (black) points show all other thermonuclear bursts
from that source. The PRE bursts appear to occur predominantly when
the sources lie near the soft vertices of their color-color diagrams
(see also Muno et al.\ 2000). However, the regions with PRE bursts
still extend across $\simeq 1/2$ of the lengths of the color-color
tracks. This minimizes the possibility that the reproducibility of the
inferred touchdown fluxes simply reflects the fact that we are considering
only very similar X-ray bursts in a very narrow range of accretion
rates.
Our limit on the pre-burst flux, i.e., the requirement that
$\gamma<0.1$, excludes the brightest regions of the color-color
diagram of each source and may also introduce a bias in our selection
of only particular PRE bursts. This is not the case here, however, as
only a very small fraction of the color-color diagram of each source
corresponds to $\gamma>0.1$ (compare, for example, the color-color
diagram in Figure~\ref{ccfig} to the entire color-color diagram of
4U~1728$-$34 in Figure~1 of Muno et al. 2002).
\begin{figure}
\centering
\includegraphics[scale=0.35, angle=0]{f3a.ps}
\includegraphics[scale=0.35, angle=0]{f3b.ps} \caption{The
positions of 4U~1636$-$536 (left panel) and 4U~1728$-$34 (right
panel) on their color-color diagrams prior to the detection of an
X-ray burst, using the data from Galloway et al.\ (2008a). Red
filled squares correspond to events that show clear evidence of
photospheric radius expansion. Secure PRE events appear to occur
predominantly near the soft vertex of the color-color diagrams.}
\label{ccfig}
\end{figure}
\section{Determination of the Touchdown Moment and the Eddington
Limit}
We now discuss the determination of the touchdown moment and the
measurement of the touchdown flux for the PRE bursts in our sample.
We present here the details of the analysis for 4U~1636$-$536 and
4U~1728$-$34, which are the sources with the highest number of PRE
events.
The touchdown moment is defined as the moment when the photosphere
falls back onto the neutron star, which is thought to occur when the
observed blackbody normalization reaches its lowest and the
temperature its highest value. In a very small number of X-ray bursts,
however, a statistically insignificant temperature maximum can occur
several seconds past the peak flux, as in the example of the PRE burst
from 4U~1636$-$536 shown in Figure~\ref{examples2}. In these cases, we
selected the first temperature maximum (and normalization minimum)
past the peak flux, ensuring that the temperature at this point is
within 1$\sigma$ of its global maximum. The touchdown moments in a
total of 6 out of 83 bursts from all of the sources were selected in
this way.
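The selection rule above can be sketched in a few lines of Python. This is an illustrative re-implementation only, not the pipeline actually used in the analysis; the array names and the local-maximum walk are our own assumptions.

```python
import numpy as np

def touchdown_index(flux, kT, kT_err):
    """Return the index of the touchdown moment: the first local temperature
    maximum past the peak flux whose value lies within 1 sigma of the global
    post-peak maximum.  (The blackbody normalization minimum coincides with
    this temperature maximum.)  Inputs are per-time-bin spectral fit results."""
    i_peak = int(np.argmax(flux))                      # burst peak, end of the PRE phase
    i_global = i_peak + int(np.argmax(kT[i_peak:]))    # global post-peak temperature maximum
    # walk forward from the peak and accept the first local maximum that is
    # statistically consistent (1 sigma) with the global maximum
    for i in range(i_peak + 1, len(kT) - 1):
        if kT[i] >= kT[i - 1] and kT[i] >= kT[i + 1]:  # local maximum
            if kT[i] + kT_err[i] >= kT[i_global]:      # within 1 sigma of global max
                return i
    return i_global
```

For most bursts the first post-peak local maximum is also the global one, and the routine reduces to the standard definition; only in the handful of cases described above does the 1$\sigma$ clause select an earlier bin.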
The precise determination of the touchdown moment can also be affected
by data gaps that are present in the science event mode data in some
burst observations. In these cases, where a gap may have an effect on
the determination of the touchdown moment, we checked whether a
``burst catcher'' mode with spectral information (e.g., mode
CB\_8ms\_64M\_0\_249\_H) was used. We found that only in 6 cases,
there were no burst catcher mode data with spectral information. For
the rest of the X-ray bursts, we made use of the data in the burst
catcher mode to determine the exact touchdown moments.
\begin{figure}
\centering
\includegraphics[scale=0.4, angle=0]{f4.ps}
\caption{ An example X-ray burst observed from 4U~1636$-$536 (burst
ID\# 150) where the touchdown moment is not defined at the moment
when the temperature reached its global maximum and the
normalization its minimum but defined as the first moment when
the temperature is within 1$\sigma$ of the highest value.}
\label{examples2}
\end{figure}
We fit the spectrum that we extracted at the touchdown moment for each
PRE event as described in Section 2. The resulting $X^2$/dof
histograms for 4U~1636$-$536 and 4U~1728$-$34 are shown in
Figure~\ref{chi2} (see Paper~I for the definition of this
statistic). Using the $X^2$/dof limits determined in Paper~I, we can
determine whether a particular fit is statistically acceptable or whether it
should be excluded from further analysis. The X-ray spectra at the
touchdown moments were well described with blackbody functions,
leading in general to small $X^2$/dof values. Therefore, applying the
$X^{2}$/dof limits forced us to exclude only one X-ray burst from
4U~1705$-$44 (burst \# 1) and two X-ray bursts from 4U~1636$-$536
(bursts \# 3 and 9).
\begin{figure}
\centering \includegraphics[scale=0.3, angle=0]{f5a.ps}
\includegraphics[scale=0.3, angle=0]{f5b.ps} \caption{Distributions of
$X^2$/dof values obtained from model fits to the X-ray spectra
extracted at the touchdown moments of X-ray bursts observed from
4U~1636$-$536 (left panel) and 4U~1728$-$34 (right panel). The
solid lines show the expected distributions for the number of
degrees of freedom in the fits and the dashed lines show the highest
value of $X^2$/dof that was considered statistically acceptable
in Paper I using the spectral fits of the cooling tails of all the
X-ray bursts for each source. During touchdown the spectra are
described well by blackbody functions.}
\label{chi2}
\end{figure}
\section{Systematic Uncertainties in the Eddington Limit}
In this section, we will address the formal and systematic
uncertainties in the touchdown fluxes obtained from the PRE bursts of
each source. As before, we will first focus on the two sources with
the highest number of bursts to present the details of the method and
then extend our analysis to the rest of the sample. We will start by
discussing our determination of the bolometric flux at touchdown and
its formal uncertainty. We will then explore whether the different PRE
bursts from the same source reach a touchdown flux that remains
statistically constant between bursts.
For each burst, the bolometric flux at touchdown is obtained from the
combination of the blackbody temperature and normalization. Figure~\ref{tdex}
shows the 68\% and 95\% confidence contours of the blackbody
normalization and temperature inferred from fitting the X-ray spectra
obtained during the touchdown moment for 4U~1728$-$34 and
4U~1636$-$536. We also plot in these figures contours of constant
bolometric flux, shown as dotted (red) lines. Even though the
uncertainties in the normalization and temperature are correlated, the
bolometric flux in each burst is well constrained. Furthermore, as
Figure~\ref{tdex} shows, the individual confidence contours from each burst
appear to be in very good statistical agreement with each other for
both sources.
\begin{figure}
\centering
\includegraphics[scale=0.4, angle=0]{f6a.ps}
\includegraphics[scale=0.4, angle=0]{f6b.ps}
\includegraphics[scale=0.4, angle=0]{f6c.ps}
\includegraphics[scale=0.4, angle=0]{f6d.ps}
\caption{{\it Left Panels :} 68\% and 95\% confidence contours of the blackbody
normalization and temperature obtained from fitting the X-ray
spectra at the touchdown moments of each PRE burst observed from
4U~1636$-$536 and 4U~1728$-$34. The dotted red lines show contours
of constant bolometric flux. {\it Right Panels:} 68\% and 95\%
confidence contour of the parameter of an assumed underlying
Gaussian distribution of touchdown fluxes. The width of the
underlying distribution reflects the systematic uncertainty in the
measurements. The dashed red lines show the width when the
systematic uncertainty is 5\% and 10\% of the mean touchdown flux.}
\label{tdex}
\end{figure}
The distribution of inferred bolometric fluxes at touchdown is
expected to have a finite width both because of measurement
uncertainties and because of the possible variations in the physical
conditions that determine the emerging flux during a PRE burst. The
measurement uncertainties include formal uncertainties from counting
statistics, the uncertainties in the bolometric correction, the
subtraction of the background emission, and the determination of the
touchdown moment. Anisotropies in the bursts, variations in the
composition and the reflection off the accretion flow (e.g., Galloway
et al.\ 2004, 2006), and variations in the Compton upscattering in the
converging inflow prior to touchdown are some of the physical
mechanisms that can contribute to the intrinsic spread.
For the high temperatures observed during the touchdown phases of the
bursts, most of the burst spectrum falls within the RXTE energy range,
resulting in bolometric corrections that are at most 7\% (Galloway et
al.\ 2008a). Therefore, any uncertainties in the bolometric correction
can only introduce minimal spread to the width of the observed
touchdown fluxes. Uncertainties in the determination of the touchdown
moment are also expected to be of the same magnitude since the fluxes
in the nearby time bins differ typically by less than 10\% (see, e.g.,
Figures 2 and 4). Our 10\% limit on the pre-burst persistent flux
bounds the uncertainties introduced by our subtraction of the
background. We can also estimate the expected variations due to the
Compton upscattering in the converging flow: this effect scales as
$v/c$ and can, therefore, introduce an uncertainty at most of the
order of 10\% (van Paradijs and Stollman\ 1984). On the other hand,
variations in the isotropy or the composition of the bursts can, in
principle, generate larger spread in the touchdown fluxes.
Our goal is to quantify the widths of the underlying distributions of
touchdown fluxes, which we will refer to as systematic uncertainties,
potentially caused by any of these effects. In order to achieve this,
we need an approach that is valid both in the limit when the formal
uncertainty for each measurement is much smaller than the variance of
the distribution of their central values as well as in the opposite
extreme. In the first case, the variance of the mean values is
practically equal to the width of the underlying distribution. In the
opposite limit, when the formal uncertainties in each measurement are
comparable or larger than the variance of the mean values, one could
compute the systematic uncertainty $\sigma_{\rm sys}$ by subtracting
in quadrature the formal uncertainty $\sigma_{\rm form}$ from the
variance $\sigma_{\rm var}$, i.e.,
\begin{equation}
\sigma_{\rm sys}^2 = \sigma_{\rm var}^2 - \sigma_{\rm form}^2.
\end{equation}
This can be carried out only if the formal uncertainties in each
measurement are the same.
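As a minimal numerical illustration of this quadrature subtraction (assuming, as stated, a common formal uncertainty for all measurements):

```python
import math

def sigma_sys(sigma_var, sigma_form):
    """Systematic width from quadrature subtraction,
    sigma_sys^2 = sigma_var^2 - sigma_form^2,
    valid only when every measurement carries the same formal uncertainty."""
    if sigma_form > sigma_var:
        raise ValueError("formal uncertainty exceeds the observed variance")
    return math.sqrt(sigma_var**2 - sigma_form**2)

# e.g. an observed scatter of 0.5 with formal errors of 0.3 implies an
# intrinsic (systematic) width of 0.4 in the same units
```

The guard clause makes explicit why this simple estimator fails in the opposite limit: when the formal errors approach or exceed the observed scatter, the subtraction is ill-defined and the Bayesian treatment described next is required.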
In our sample, however, each flux measurement has a different formal
uncertainty and the uncertainty in each measurement is sometimes
comparable to and sometimes smaller than the variance of the mean flux
values for different sources. In order to properly account for this,
we follow here the Bayesian analysis method discussed in Paper I that
allows us to determine the intrinsic spread of touchdown fluxes.
In the Bayesian analysis, we first determine the formal uncertainties
of the measured bolometric fluxes for each burst and each source using
the confidence contours shown in Figure~\ref{tdex} and report these in
Table~\ref{tdres}. We model the underlying distribution of touchdown
fluxes as a Gaussian. The observed distribution is a convolution of
the underlying distribution with the individual formal uncertainties
for each burst that we measured above. We then use the Bayesian
technique presented in Paper I to determine the most probable value
$F_0$ and width $\sigma$ of the underlying distribution of touchdown
fluxes for each source. Figure~\ref{tdsys} shows the histogram of
observed touchdown fluxes as well as the most probable underlying
distribution for the two sources 4U~1636$-$536 and 4U~1728$-$34. The
right panels of Figure~\ref{tdex} show the full confidence contours
for the parameters of these underlying distributions. Even though the
most probable values for the touchdown fluxes can be determined within
a few percent, there is clear evidence for a 5\%-10\% spread, which we
attribute to the physical mechanisms discussed above.
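The Bayesian step can be sketched as follows. Assuming the setup described in the text (a Gaussian intrinsic distribution of touchdown fluxes observed through per-burst Gaussian formal errors, with flat priors) the posterior over the pair $(F_0,\sigma)$ on a grid is computed below; the function name and grid handling are our own illustration, not the Paper I code.

```python
import numpy as np

def posterior_F0_sigma(F, Ferr, F0_grid, sig_grid):
    """Posterior over the mean F0 and intrinsic width sigma of a Gaussian
    distribution of touchdown fluxes, given measured fluxes F with formal
    errors Ferr and flat priors.  The convolution of a Gaussian intrinsic
    distribution with a Gaussian measurement error is again a Gaussian,
    with variance sigma^2 + Ferr_i^2 for burst i."""
    F0, sig = np.meshgrid(F0_grid, sig_grid, indexing="ij")
    logL = np.zeros_like(F0)
    for f, e in zip(F, Ferr):
        var = sig**2 + e**2
        logL += -0.5 * np.log(2 * np.pi * var) - (f - F0)**2 / (2 * var)
    post = np.exp(logL - logL.max())   # normalize in log space for stability
    return post / post.sum()
```

With simulated data the grid maximum recovers the input mean and width, and the marginal width of the posterior in $\sigma$ shrinks as the number of bursts grows, which is why the two sources with many PRE events give the tightest constraints.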
\begin{figure}
\centering
\includegraphics[scale=0.4, angle=0]{f7a.ps}
\includegraphics[scale=0.4, angle=0]{f7b.ps} \caption{Histogram of
measured touchdown fluxes for 4U~1636$-$536 (left panel) and
4U~1728$-$34 (right panel). The red solid line shows the
underlying Gaussian distribution as inferred from the Bayesian
analysis. The black dashed curve shows the distribution of the
touchdown fluxes when the observational uncertainties are taken
into account. The width of the underlying distribution reflects
the systematic uncertainty in the measurements.}
\label{tdsys}
\end{figure}
In Figures~\ref{td1}$-$\ref{td2} and Table~\ref{tdaverage}, we show the results of the
same analysis for all the other sources. Naturally, the intrinsic
widths in the touchdown fluxes for the sources with very few bursts
are more difficult to determine. In all cases except Aql~X$-$1, however,
the level of systematic uncertainties is not inconsistent with the
5-10\% level inferred for the two sources with many bursts. Notably,
Aql~X$-$1 is also the source with the largest variation in the apparent
surface area during the cooling tails of its bursts (Paper I).
\begin{figure}
\centering
\includegraphics[scale=0.3, angle=0]{f8a.ps}
\includegraphics[scale=0.3, angle=0]{f8b.ps}
\includegraphics[scale=0.3, angle=0]{f8c.ps}
\includegraphics[scale=0.3, angle=0]{f8d.ps}
\includegraphics[scale=0.3, angle=0]{f8e.ps}
\includegraphics[scale=0.3, angle=0]{f8f.ps}
\includegraphics[scale=0.3, angle=0]{f8g.ps}
\includegraphics[scale=0.3, angle=0]{f8h.ps}
\caption{Same as Figure~\ref{tdex} for the sources 4U~0513$-$401, 4U~1724$-$307,
KS~1731$-$26, and 4U~1735$-$44.}
\label{td1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3, angle=0]{f9a.ps}
\includegraphics[scale=0.3, angle=0]{f9b.ps}
\includegraphics[scale=0.3, angle=0]{f9c.ps}
\includegraphics[scale=0.3, angle=0]{f9d.ps}
\includegraphics[scale=0.3, angle=0]{f9e.ps}
\includegraphics[scale=0.3, angle=0]{f9f.ps}
\caption{Same as Figure~\ref{td1}, for the sources SAX~J1750.8$-$2900,
SAX~J1748.9$-$2021, and Aql~X$-$1.}
\label{td2}
\end{figure}
Even though it is difficult to determine the shape and width of the
underlying distribution for any of the sources with only 2 PRE bursts,
it is worth noting that for 3 out of the 6 cases, the fractional
difference between the two touchdown fluxes, $F_1$ and $F_2$, as
defined by
\begin{equation}
R\equiv \frac{2\vert F_1 - F_2\vert}{F_1+F_2}\;,
\label{eq:R}
\end{equation}
is less than 7\%. It would be very unlikely for the underlying
distribution of touchdown fluxes in each source to be much broader
than this level and for half of the randomly picked pairs of touchdown
fluxes to be within 7\%.
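For concreteness, the fractional differences can be reproduced directly from the pairs of measured touchdown fluxes tabulated later in this paper (fluxes in units of $10^{-8}$ erg cm$^{-2}$ s$^{-1}$; the values below are copied from the measured-flux table):

```python
def fractional_difference(F1, F2):
    """R = 2|F1 - F2| / (F1 + F2), the (positive) fractional difference
    between the touchdown fluxes of a pair of PRE bursts."""
    return 2.0 * abs(F1 - F2) / (F1 + F2)

# touchdown-flux pairs for the six sources with exactly two PRE bursts
pairs = {
    "4U 0513-401":      (1.32, 1.06),
    "4U 1724-307":      (4.56, 6.01),
    "KS 1731-260":      (4.65, 4.75),
    "4U 1735-44":       (3.27, 3.07),
    "SAX J1748.9-2021": (4.52, 3.54),
    "SAX J1750.8-2900": (5.63, 5.58),
}
R = {name: fractional_difference(*fluxes) for name, fluxes in pairs.items()}
# three of the six pairs come out below 7%, as stated in the text
```

These reproduce, to the quoted precision, the $R$ values listed in the fractional-difference table below.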
In the following section, we quantify this statement by making the
assumption that all sources have a distribution of touchdown fluxes
with the same fractional width. We use the $R$ value for each of the
burst pairs given in Table~\ref{table:R} to show that the most likely
fractional width of the underlying distribution of touchdown fluxes is
$11^{+5}_{-3}\%$ (68\% confidence level).
\section{Systematic Uncertainties in Sources with few PRE Bursts}
Our aim here is to estimate the most likely fractional dispersion of
touchdown fluxes in X-ray bursters that can reproduce the observed $R$
values for the six sources in our sample for which we have only two
observations of PRE bursts each. Because of the small number of data
points available, we will assume that the underlying distribution of
touchdown fluxes in each source is a Gaussian, with the same
fractional dispersion $\sigma$, i.e.,
\begin{equation}
P_{\rm td}(F/F_0;\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}}
\exp\left[-\frac{(F/F_0 -1)^2}{2\sigma^2} \right]\;,
\end{equation}
where $F$ is the touchdown flux of each burst and $F_0$ is the mean
touchdown flux for each source.
If we draw a random pair of touchdown fluxes $F_1$ and $F_2$ from this
distribution and calculate their fractional difference $R$
(eq.~[\ref{eq:R}]), then the distribution of the $R$ values will be
given by
\begin{equation}
P(R;\sigma) = C \int P_{\rm td}\left(\frac{F}{F_0};\sigma\right)
\left\{P_{\rm td}\left[\left(\frac{2-R}{2+R}\right)\frac{F}{F_0};\sigma\right]
+ P_{\rm td}\left[\left(\frac{2+R}{2-R}\right)\frac{F}{F_0};\sigma\right]
\right\} d\left(\frac{F}{F_0} \right)\;,
\end{equation}
where $C$ is an appropriate normalization constant. The distribution
$P(R;\sigma)$ peaks at $R=0$ for all values of $\sigma$ and drops
quickly to zero such that the median value of $R$ for this
distribution is $R_{\rm 50\%}=\sigma$. Given that half of our 6
sources with only pairs of PRE bursts have $R$ values that are less
than 7\%, we expect that the most probable value of the fractional
dispersion of their touchdown fluxes will be of the same order.
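A quick Monte Carlo check of this statement (our own sketch, not part of the analysis): drawing pairs of fluxes from a Gaussian of fractional width $\sigma$ and computing $R$ gives a median close to $\sigma$ for the small widths of interest, and the median scales linearly with $\sigma$.

```python
import numpy as np

def median_R(sigma, n_pairs=200000, seed=1):
    """Median of the fractional difference R for pairs of touchdown fluxes
    drawn from a Gaussian of mean 1 and fractional width sigma."""
    rng = np.random.default_rng(seed)
    f1 = rng.normal(1.0, sigma, n_pairs)
    f2 = rng.normal(1.0, sigma, n_pairs)
    R = 2.0 * np.abs(f1 - f2) / (f1 + f2)
    return float(np.median(R))
```

Numerically the median comes out within a few percent of $\sigma$ for widths of order 10\%, consistent with $R_{\rm 50\%}=\sigma$ quoted in the text.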
For each source with a pair of PRE bursts, we assign a Gaussian
likelihood of $R$ values, taking into consideration the fact that the
$R$ value is always positive, as
\begin{equation}
P_{\rm obs}(R;R_0^i, \sigma_R^i)=\frac{1}{\sqrt{2\pi}\sigma_R^{i}}
\left\{
\exp\left[-\frac{(R-R_0^i)^2}{2(\sigma_R^{i})^2}\right]+
\exp\left[-\frac{(-R-R_0^i)^2}{2(\sigma_R^{i})^2}\right]\right\}\;,\qquad
R>0\;,
\end{equation}
with a most likely value $R_0^i$ and a dispersion $\sigma_R^i$ given
in Table~\ref{table:R}. The likelihood $P_{\rm obs}(R;R_0^i;\sigma_R^i)$
for each source with a pair of PRE bursts is shown in Figure~\ref{Rvalue}.
The likelihood of observing the $N=6$ pairs of $R$ values with the
likelihood shown in Figure~\ref{Rvalue}, given an underlying fractional
dispersion $\sigma$, is
\begin{equation}
P({\rm data}\vert \sigma) = \prod_{i=1}^{N} \int P(R;\sigma)
P_{\rm obs}(R;R_0^i, \sigma_R^i) dR\;.
\end{equation}
Using Bayes' theorem, we can then calculate the likelihood of
each fractional dispersion $\sigma$ given the data, as
\begin{equation}
P(\sigma \vert {\rm data}) = C^\prime P({\rm data} \vert \sigma)
P_\sigma(\sigma)\;,
\end{equation}
where $C^\prime$ is another appropriate normalization constant and we
take the prior probability over all possible fractional dispersions
$P_\sigma(\sigma)$ to be constant over the range of interest.
Figure~\ref{sigma_prob} shows the posterior probability over the
fractional dispersion $\sigma$ that is consistent with the 6 observed
$R$ values. The most likely fractional dispersion of touchdown fluxes
for our sample of 6 sources with only one pair of observed PRE bursts
each is $11^{+5}_{-3}$\%, where we determined the quoted uncertainty
at the 68\% level in the asymmetric probability distribution.
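As an illustration, the posterior over the common fractional dispersion can be sketched numerically: $P(R;\sigma)$ is sampled by Monte Carlo rather than evaluated from the integral above, the folded-Gaussian measurement likelihood is averaged over those samples, and a flat prior is taken on $\sigma$. The function name and grid choices are ours, not the authors' code.

```python
import numpy as np

def sigma_posterior(R0, Rerr, sig_grid, n_mc=100000, seed=2):
    """Posterior over the common fractional dispersion sigma, given the
    measured fractional differences R0 +/- Rerr of the burst pairs.
    For each trial sigma, P(R; sigma) is sampled by drawing flux pairs
    from a unit-mean Gaussian of width sigma; the folded-Gaussian
    measurement likelihood of each source is then averaged over those
    samples (flat prior on sigma)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_mc)          # common random numbers keep the
    z2 = rng.standard_normal(n_mc)          # posterior smooth across the grid
    logpost = []
    for sig in sig_grid:
        f1, f2 = 1.0 + sig * z1, 1.0 + sig * z2
        R = 2.0 * np.abs(f1 - f2) / (f1 + f2)      # samples of P(R; sigma)
        lp = 0.0
        for r0, dr in zip(R0, Rerr):
            like = (np.exp(-(R - r0)**2 / (2 * dr**2)) +
                    np.exp(-(-R - r0)**2 / (2 * dr**2))) / (np.sqrt(2*np.pi)*dr)
            lp += np.log(np.mean(like))
        logpost.append(lp)
    post = np.exp(np.array(logpost) - max(logpost))
    return post / post.sum()
```

Run with the six $(R_0^i,\sigma_R^i)$ pairs from the fractional-difference table, the posterior peaks near the quoted $\simeq 11\%$.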
\begin{figure}
\centering
\includegraphics[scale=0.7, angle=0]{f10.ps}
\caption{The likelihood of the fractional difference $R$ between
the touchdown fluxes, $F_1$ and $F_2$, of pairs of bursts in
sources with only two PRE bursts.}
\label{Rvalue}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.7, angle=0]{f11.ps}
\caption{The posterior probability over the fractional width
$\sigma$ of the touchdown flux distribution for the six sources
that exhibit only a pair of PRE bursts each.}
\label{sigma_prob}
\end{figure}
\section{Discussion}
We used the RXTE archive of thermonuclear X-ray bursts to select the
bursts that show a clear evidence for photospheric radius
expansion. We determined systematically the touchdown moment of each
burst and inferred the bolometric flux at that point in the burst. We
then used a Bayesian technique to infer the most probable value and
the width of the distribution of touchdown fluxes in each source. In
the two sources with more than a few bursts, the inferred width is
within 5\%$-$10\% of the most probable touchdown flux. In the six
sources with only one pair of PRE bursts each, the width of the
underlying distribution is consistent with being at a similar
level. When the latter group of sources is taken as a representative
sample, the most likely fractional width of their touchdown fluxes is
$\simeq 11$\%. The only clear exception is Aql~X$-$1, where the
systematic uncertainties exceed $\sim 20\%$.
As we explored in Section~5, the distribution of the touchdown fluxes
is expected to have a finite width for a number of observational and
physical reasons. For a number of these effects, we were able to
estimate that they introduce a 5\%$-$10\% level of systematic
uncertainty in the fluxes. The two unknowns that potentially introduce
larger systematic uncertainties are the asymmetry of the PRE event and
the composition of the material at the photosphere. Our results show,
however, that even these unknowns do not introduce uncertainties
larger than 10\%.
The PRE bursts allow us to measure the Eddington limit at the surface
of the neutron star for each source. Determining the Eddington limit
requires an absolute flux measurement, which is affected by the
overall flux calibration of the X-ray detector used. Such calibrations
are notoriously difficult to achieve and are usually based on a
particular set of assumptions regarding the spectrum and variability
of the Crab nebula (Jahoda et al.\ 2006; see also Toor \& Seward 1974;
Kirsch et al.\ 2005; Weisskopf et al.\ 2010). Any bias in the absolute
flux calibration cannot increase the spread of touchdown fluxes that
we infer here for each source. However, it can affect the mean
touchdown flux, which, in turn, enters into the measurement of neutron
star masses and radii. We will quantify the potential systematic
uncertainties introduced by the absolute flux calibration of PCA in
Paper III of this series.
It is also important to emphasize here that our results are based on a
statistical analysis of the entire sample of PRE bursts per source and
do not preclude the possibility that any one individual burst may show
a rather different touchdown flux. Indeed, there is at least one burst
observed from 4U~1636$-$536 (ID \#16)\footnote{Although this burst is
a PRE event and we find a touchdown flux that is very similar to the
one in Galloway et al.\ (2006), it is not included in this study
because the persistent flux of the binary before this X-ray burst was
higher than our limit.}, for which the touchdown flux was smaller
compared to the average value by a factor of 1.7 (Galloway et al.\
2006). In this particular case, a variation in the hydrogen mass
fraction from $X=0$ to $X=0.7$ between the bursts has been considered
as a natural explanation of the difference in touchdown fluxes
(Sugimoto et al.\ 1984; Galloway et al.\ 2006). The fact that such
outliers may and do exist makes it essential that proper statistical
tools are used in all inferences based on measurements of the
touchdown fluxes of PRE bursts.
In conclusion, our results indicate that the systematic uncertainties
in the measurements of touchdown fluxes in radius-expansion bursts
from low-mass X-ray binaries are within $\simeq 10$\%, for nearly the
entire source sample. Such systematic uncertainties do not preclude,
in and of themselves, neutron star mass-radius measurements with high
enough precision to distinguish between different equations of state
of neutron-star matter.
\LongTables
\begin{deluxetable}{lcccc}
\tablecolumns{5}
\tablewidth{0pc}
\tablecaption{Measured Touchdown Flux Values From PRE Events.}
\tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{Touchdown}
& \colhead{Normalization}
\\
\colhead{Source Name} & \colhead{BID\tablenotemark{a}} &
\colhead{MJD\tablenotemark{a}} &
\colhead{Flux\tablenotemark{b}}& \colhead{Ratio} }
\startdata
4U~0513$-$40 & 6 &53442.08752& 1.32\er0.07 & 3.30 \\*
& 7 &54043.68856& 1.06\er0.06 & 9.00 \\
\hline
4U~1636$-$53 &1 & 50445.94404 & 7.25\er0.15 & 6.63 \\
&4 & 50448.73395 & 7.09\er0.15 & 6.19 \\
&6 & 51044.48934 & 7.43\er0.19 & 4.95\\
&7 & 51045.15288 & 7.64\er0.23 & 9.82 \\
&10& 51297.07198 & 7.55\er0.21 & 8.21 \\
&12& 51339.24688 & 7.23\er0.18 & 4.82 \\
&13& 51347.98824 & 6.35\er0.16 & 4.16 \\
&14& 51348.72984 & 6.86\er0.15 & 5.01 \\
&15& 51350.79575 & 6.52\er0.14 & 5.12 \\
&20& 51710.21233 & 7.81\er0.20 & 7.24 \\
&21& 51765.05463 & 6.28\er0.20 & 5.71 \\
&22& 51765.37284 & 7.00\er0.40 & 5.60 \\
&23& 51768.98081 & 7.52\er0.18 & 5.90 \\
&24& 51820.98112 & 7.24\er0.17 & 5.47 \\
&25& 51853.18194 & 6.64\er0.22 & 3.07 \\
&26& 51860.75171 & 6.02\er0.16 & 3.74 \\
&27& 51937.11612 & 6.70\er0.16 & 5.65 \\
&28& 51941.87558 & 6.43\er0.16 & 3.74 \\
&29& 51942.10024 & 6.62\er0.23 & 5.26 \\
&30& 52004.71326 & 6.65\er0.17 & 7.54 \\
&31& 52029.22818 & 6.74\er0.16 & 4.04 \\
&34& 52075.13477 & 7.97\er0.25 & 9.89 \\
&38& 52149.27871 & 6.35\er0.15 & 2.67 \\
&45& 52182.61618 & 8.11\er0.22 & 5.10 \\
&49& 52283.01850 & 6.93\er0.18 & 4.36 \\
&50& 52273.69081 & 5.56\er0.17 & 7.02 \\
&61& 52286.05404 & 8.36\er0.20 & 5.01 \\
&62& 52286.55466 & 7.42\er0.20 & 6.78 \\
&68& 52287.52190 & 6.15\er0.33 & 8.32 \\
&72& 52288.51431 & 6.85\er0.20 & 6.28 \\
&79& 52288.97438 & 5.60\er0.17 & 6.52 \\
&86& 52289.29282 & 7.89\er0.21 & 5.92 \\
&87& 52289.97694 & 6.43\er0.20 & 8.64 \\
&88& 52304.96314 & 5.84\er0.16 & 4.59 \\
&94& 52310.93185 & 6.69\er0.17 & 7.21 \\
&110& 52316.73272 & 7.06\er0.20 & 4.24 \\
&111& 53516.31312 & 7.05\er0.17 & 4.47 \\
&122& 52551.25121 & 6.26\er0.17 & 1.89 \\
&136& 53516.31312 & 7.81\er0.24 & 10.17\\
&137& 53524.38883 & 7.70\er0.20 & 7.47 \\
&148& 53592.23376 & 7.33\er0.17 & 5.69 \\
&149& 53596.08782 & 6.37\er0.22 & 4.62 \\
&150& 53598.07334 & 7.12\er0.26 & 6.37 \\
&168& 53688.95192 & 7.36\er0.20 & 7.32 \\
\hline
4U~1702$-$429 & 19 & 52957.62907 & 9.05\er0.26 & 3.12 \\
\hline
4U~1705$-$44& 5 & 50542.50287 & 4.13\er0.13 & 2.63 \\
\hline
4U~1724$-$307 & 2 & 53058.40140 & 4.56\er0.13 & 1.76 \\*
& 3 & 53147.21828 & 6.01\er0.17 & 1.67 \\
\hline
4U~1728$-$34 & 2& 50128.88220 & 8.13\er0.17 & 2.30 \\*
& 21& 50718.47163 & 9.21\er0.27 & 3.04 \\*
& 22& 50718.66257 & 8.41\er0.16 & 4.47 \\*
& 38& 51133.42394 & 8.88\er0.23 & 3.32 \\*
& 39& 51133.67299 & 8.36\er0.21 & 2.46 \\*
& 41& 51134.57233 & 8.97\er0.23 & 2.35 \\*
& 48& 51204.00117 & 8.50\er0.19 & 2.65 \\*
& 49& 51204.12990 & 8.86\er0.28 & 1.80 \\*
& 51& 51206.14068 & 8.86\er0.19 & 4.05 \\*
& 53& 51209.91806 & 8.16\er0.26 & 1.86 \\*
& 54& 51210.08245 & 8.18\er0.18 & 1.85 \\*
& 55& 51213.93849 & 8.80\er0.19 & 2.04 \\*
& 69& 51443.01361 & 8.43\er0.24 & 1.66 \\*
& 83& 51949.12600 & 10.68\er0.38 & 1.99 \\*
& 85& 52007.61313 & 8.09\er0.20 & 2.03 \\*
& 86& 52008.08709 & 8.29\er0.20 & 3.38 \\
\hline
KS~1731$-$260 & 8 & 51235.71747 &4.65\er0.13 & 4.49 \\*
& 9 & 51236.72580 &4.75\er0.13 & 3.90 \\
\hline
4U~1735$-$44 & 6 & 50963.42981 &3.27\er0.12 & 2.68 \\*
& 7 & 50963.48944 &3.07\er0.10 & 2.26 \\
\hline
SAX~J1748.9$-$2021 & 1& 52190.38947 & 4.52\er0.14 & 33.99 \\*
& 2& 52190.46882& 3.54\er0.12 & 3.37 \\
\hline
SAX~J1750.8$-$2900 & 2& 52011.59758 & 5.63\er0.16 & 1.86 \\*
& 3& 52014.71002 & 5.58\er0.19 & 2.13 \\
\hline
Aql~X$-$1 & 4 & 50508.97681 & 11.95\er0.19 & 3.91 \\*
& 5 & 50696.52359 & 12.16\er0.19 & 5.56 \\*
& 10& 51332.77990 & 11.55\er0.24 & 7.52 \\*
& 19& 51856.15690 & 8.45\er0.25 & 7.36 \\*
& 28& 52324.99055 & 12.09\er0.21 & 6.22 \\*
& 29& 52347.18234 & 6.38\er0.18 & 2.24
\enddata
\tablenotetext{a}{Burst IDs and burst start times are adopted from Galloway et al.\ (2008a).}
\tablenotetext{b}{Values are given in units of 10$^{-8}$ ergs cm$^{-2}$ s$^{-1}$ and are
calculated using equation 3 of Galloway et al.\ (2008a).}
\label{tdres}
\end{deluxetable}
\begin{deluxetable}{lccc}
\tablecolumns{4}
\tablewidth{300pt}
\tablecaption{The Measured Touchdown Fluxes}
\tablehead{
\colhead{Source Name} & \colhead{Touchdown Flux\tablenotemark{a}} &
\colhead{$\sigma_{\rm sys}$\tablenotemark{b}}
& \colhead{$\sigma_{\rm form}$\tablenotemark{c}}}
\startdata
4U~0513$-$401 & 1.19 & 0.11 & 0.06\\
4U~1636$-$536 & 6.93 & 0.64 & 0.20\\
4U~1724$-$307 & 5.29 & 0.70 & 0.16 \\
4U~1728$-$34 & 8.63 & 0.46 & 0.22 \\
KS~1731$-$260 & 4.71 & n/a & 0.13 \\
4U~1735$-$44 & 3.15 & n/a & 0.11 \\
SAX~J1748.9$-$2021 & 4.03 & 0.54 & 0.13 \\
SAX~J1750.8$-$2900 & 5.61 & 0.01 & 0.17 \\
Aql~X$-$1 & 10.44 & 2.22 & 0.21
\enddata
\tablenotetext{a}{Fluxes and uncertainties are in units of 10$^{-8}$ erg s$^{-1}$ cm$^{-2}$.}
\tablenotetext{b}{These reflect the most probable widths of the underlying distributions}
\tablenotetext{c}{These reflect the uncertainties in measuring the most probable values
of the underlying distributions}
\label{tdaverage}
\end{deluxetable}
\newpage
\begin{deluxetable}{lc}
\tablecolumns{2}
\tablewidth{200pt}
\tablecaption{Fractional Differences of Pairs of Touchdown Fluxes}
\tablehead{
\colhead{Source Name} & \colhead{$R$\tablenotemark{a}}}
\startdata
4U~0513$-$401 & 0.218$\pm$0.077\\
4U~1724$-$307 & 0.274$\pm$0.039\\
KS~1731$-$260 & 0.021$\pm$0.039\\
4U~1735$-$44 & 0.063$\pm$0.049\\
SAX~J1748.9$-$2021 & 0.243$\pm$0.045\\
SAX~J1750.8$-$2900 & 0.009$\pm$0.044
\enddata
\tablenotetext{a}{Defined as $R\equiv 2\vert F_2-F_1\vert/(F_1+F_2)$.}
\label{table:R}
\end{deluxetable}
\acknowledgements
We thank the anonymous referee for constructive suggestions. This work
was supported by NASA ADAP grant NNX10AE89G and Chandra Theory grant
TMO-11003X. DP was supported by the NSF CAREER award NSF 0746549 and
Chandra Theory grant TMO-11003X. This research has made use of data
obtained from the High Energy Astrophysics Science Archive Research
Center (HEASARC), provided by NASA's Goddard Space Flight Center.
\section{Introduction}
Correlation functions of operators in strongly coupled conformal field theories can often be computed using the
AdS/CFT correspondence. Euclidean correlators have a long history\cite{Witten:1998qj,Freedman:1998tz} while the rich analytic structure of various Lorentzian signature correlators can also be obtained. The earliest proposal for the latter was by Son and Starinets\cite{Son:2002sd}, and there have also been several elaborations of that method (see for example \cite{Herzog:2002pc,Iqbal:2009fd}). Recently, Skenderis and van Rees\cite{Skenderis:2008dg,Skenderis:2008dh} showed how the complex time contour of an arbitrary correlation function can systematically be accounted for by gluing together manifolds of various signatures, carefully matching fields at the interfaces. This method was used to calculate scalar two-point functions in AdS space, and in asymptotically AdS spaces.
The extension of gauge-gravity duality ideas to spacetimes with Galilean isometries and field theories with non-relativistic invariance \cite{Son:2008ye,Balasubramanian:2008dm} has been of much interest in the recent literature. In particular, it is expected that such systems are of more direct relevance to condensed matter models. Correlation functions have recently been computed using standard holographic methods for scalars \cite{Son:2008ye,Balasubramanian:2008dm,Volovich:2009yh} and for fermions \cite{Akhavan:2009ns}.
In this paper, we reconsider {\it Lorentzian} correlators of non-relativistic systems by directly calculating them using the techniques of Refs. \cite{Skenderis:2008dg,Skenderis:2008dh} in Schr\"odinger geometries.
We consider the time-ordered correlator and the Wightman function, as well as thermal correlators.
\section{The Schr\"odinger Geometry and Scalar Fields}
We consider the $d+3$ dimensional Lorentzian geometry\cite{Son:2008ye}
\begin{equation}\label{metric}
ds^2 = L^2\left(- b^2\frac{dt^2}{z^4} + \frac{2dt d\xi + d\vec{x}^2 + dz^2}{z^2}\right)
\end{equation}
where $z\geq 0$ and $b,L$ are length scales. This geometry has Schr\"odinger isometry with dynamical exponent equal to two. The Killing vectors are of the form
\begin{eqnarray}
N&=&\partial_\xi\\
D&=&z\partial_z+\vec x\cdot\vec\partial+2t\partial_t\\
H&=&\partial_t\\
C&=&tz\partial_z+t\vec x\cdot\vec\partial+t^2\partial_t-\frac{1}{2}(\vec x^2+z^2)\partial_\xi\\
\vec K&=& -t\vec\partial+\vec x\partial_\xi\\
\vec P&=&\vec \partial
\end{eqnarray}
$N$ is central, and $D,H,C$ form an $SL(2,\bb{R})$ algebra.
\comment{The $SL(2,\bb{R})$ algebra is $[H,D]=2H$, $[H,C]=D$, $[C,D]=-2C$. The quadratic Casimir is ${\cal C}_2=D^2-2(CH+HC)$.}
Consider a massive complex scalar propagating on the
non-relativistic (Lorentzian) geometry
with action
\begin{equation}
S = -\frac{1}{2} \int d^{d+3}x \sqrt{-g} \left( g^{\mu\nu}\partial_\mu \bar\phi\partial_\nu \phi +
m_0^2/L^2 |\phi|^2 \right)\label{action}
\end{equation}
The usual interpretation is that the dual theory lives on $\bb{R}^{1,d}$ at $z=0$ and is coordinatised by the $(t,\vec{x})$ coordinates--$\xi$ is not geometric in the usual sense. The isometry $N:\xi\mapsto\xi+a$ is central and thus $N$ is strictly conserved. Each operator of the boundary theory can be taken to have a fixed momentum (`particle number') conjugate to $\xi$. $\xi$ is usually taken compact (with circumference $R$) so that the spectrum of possible momenta is discrete. In this case, the dimensionless ratio $b/R$ is a parameter of the theory.
For example, the graviton mode coupling to the stress energy tensor of the boundary theory has
particle number zero \cite{Herzog:2008wg, Adams:2008wt}. Here, we will consider a complex scalar with definite but arbitrary particle number $n$. As we will see, it is very important that the scalar be complex. First, it carries a charge under $N$ and so we should expect it to be complex. More importantly though, it is dual to an operator in a non-relativistic theory, and in such a theory there is a sort of polarization: a simple example of this occurs in free field theories, in which the elementary field creates a particle (and not anti-particle) state.
Now, in this paper we consider correlators of various types. In this regard, as developed by Skenderis and van Rees\cite{Skenderis:2008dg,Skenderis:2008dh}, we regard the metric (\ref{metric}) as defined formally for complex $t$, and a given correlator is constructed from a particular contour in the complex $t$ plane. Here, we consider two such cases, in which the contour is constructed from horizontal (Lorentzian time) and vertical (Euclidean time) contour segments (see Fig. \ref{fig:contour.eps}).
\myfig{contour.eps}{11}{Contours corresponding to the time-ordered correlator and the Wightman function, respectively.}
In the next two subsections, we consider scalar fields in Lorentzian time and in Euclidean time, respectively.
\subsection{Lorentzian signature}
Given the metric (\ref{metric}) for real time, the scalar equation of motion takes the form
\begin{equation}
z^2 \partial ^2_z \phi - (d+1)z \partial _z \phi + z^2 \left(2 \partial_t\partial_\xi + \partial^2_i\right)\phi
+b^2\partial_\xi^2 \phi - m_0^2 \phi = 0.
\end{equation}
We look for solutions of the form
\begin{equation}
\phi_{(n)} = e^{in\xi} e^{-i\omega t + i \vec k\cdot \vec x} f_{\omega,n,\vec k}(z),\ \ \ \ \
\bar\phi_{(n)} = e^{-in\xi} e^{i\omega t - i \vec k \cdot \vec x} \bar f_{\omega,n,\vec k}(z)
\end{equation}
in which case $f$ satisfies
\begin{equation}\label{eom}
z^2 \partial^2_z f - (d+1)z \partial_z f + z^2 (2 \omega n - \vec k^2 )f -
m^2 f = 0,
\end{equation}
where $m^2 = m_0^2 + n^2b^2$. The general solution of (\ref{eom}) can be written in terms of
modified Bessel functions as
\begin{eqnarray}
f_{n,\omega,\vec k}(z) &=& A(\omega,\vec k) z^{\frac{d}{2}+1} K_\nu(q z) + B(\omega,\vec k)
z^{\frac{d}{2}+1} I_\nu(q z)
\end{eqnarray}
with $\nu = \sqrt{(\frac{d}{2}+1)^2 + m^2}$ and $q = \sqrt{q^2} =
\sqrt{ \vec k^2-2\omega n}$. $K_\nu$ and $I_\nu$ correspond to non-normalizable and
normalizable modes, respectively. Their asymptotic behavior is as
follows
\begin{eqnarray}
z^{\frac{d}{2}+1} K_\nu(q z \to 0) &=& \Gamma(\nu)
\frac{z^{\frac{d}{2}+1 -\nu}}{2^{-\nu+1}q^\nu} +...\\
z^{\frac{d}{2}+1} I_\nu(q z \to 0) &=& \frac{1}{\Gamma(\nu+1)}
\frac{z^{\frac{d}{2}+1 +\nu}}{2^{\nu}q^{-\nu}} +...\\
z^{\frac{d}{2}+1} K_\nu(|q z| \to \infty) &=& \sqrt{\frac{\pi z^{d+1}}{2q}} e^{-q z}+...\\
z^{\frac{d}{2}+1} I_\nu(|q z| \to \infty) &=& \sqrt{\frac{z^{d+1}}{2\pi
q}} \Big{[}e^{q z}(1+...)+ e^{-q z - i\pi(\nu + 1/2)}(1+...)\Big{]}
\end{eqnarray}
For $q^2 < 0$, both $K_\nu$ and $I_\nu$ are regular everywhere, while
for $q^2 > 0$, $I_\nu$ diverges for large $z$ and should be discarded. This situation is
very similar to that of a scalar field propagating on $AdS_{d+3}$, where the solution can also be
written in terms of modified Bessel functions. In fact this
similarity is very useful and was employed in Ref. \cite{Volovich:2009yh} to
compute the non-relativistic bulk-to-boundary propagator. We note though that there is
a small but important difference due to the non-relativistic nature of
the boundary theory, that we will explain presently.
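The small-$z$ behavior quoted above identifies the two boundary exponents. Indeed, substituting $f \sim z^{\Delta}$ into (\ref{eom}) and keeping the leading terms as $z\to 0$ gives the indicial equation
\begin{equation}
\Delta(\Delta-1) - (d+1)\Delta - m^2 = 0 \qquad\Longrightarrow\qquad \Delta_\pm = \frac{d}{2}+1 \pm \sqrt{\left(\frac{d}{2}+1\right)^2+m^2} = \frac{d}{2}+1\pm\nu,
\end{equation}
so that $K_\nu$ and $I_\nu$ carry the exponents $\Delta_-$ and $\Delta_+$, respectively; these are the dimensions that reappear in the near-boundary expansion below.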
Without loss of generality, we take $n > 0$. To construct the most general solution (with fixed $n$), we must integrate over all values of $\omega,\vec k$. However, $q$ has a branch point at $\omega=\vec k^2/2n$, and we must then say how to integrate over $\omega$.
Following \cite{Skenderis:2008dg}, we do so by moving the branch point off of the real $\omega$ axis by defining $q_\epsilon = \sqrt{-2\omega n + \vec k^2 - i\epsilon}$, $\bar q_\epsilon =
\sqrt{-2\omega n + \vec k^2 + i\epsilon}$. The branch cut is taken along the negative real axis.
Clearly, we have made a choice here, but we will see later that this is the correct choice, for physical reasons. Notice that since $Re(q_\epsilon), Re(\bar q_\epsilon) > 0$, $K_\nu$ always decays exponentially as $|qz| \to \infty$. In
contrast, the large $z$ behavior of $I_\nu$ tells us that $q, \bar q$
cannot have a real part. As a result, the $i\epsilon$
insertion should not be applied for the normalizable mode.\footnote{This fact was not clearly spelled out in Ref. \cite{Skenderis:2008dg} in the relativistic analogue, but we will see later that it is an important point.}
With these comments, we arrive at the general solution to (\ref{eom}) in Lorentzian signature
\begin{eqnarray}\label{general solu}
\phi_{(n)}(t,\vec x) &=& e^{in\xi} \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\
e^{-i\omega t + i \vec k\cdot \vec x} z^{\frac{d}{2}+1}
\left( A(\omega,\vec k) K_\nu(q_\epsilon z)
+ \theta(-q^2)B(\omega,\vec k) J_\nu(|q| z) \right)
\end{eqnarray}
where we have used $I_\nu(\sqrt{q^2} z) = I_\nu(-i|q|z) \sim J_\nu(|q|z)$.
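The omitted constant in the last relation is a pure phase: the connection formula $I_\nu(x) = i^{-\nu} J_\nu(ix)$ gives
\begin{equation}
I_\nu(-i|q|z) = e^{-i\pi\nu/2}\, J_\nu(|q|z),
\end{equation}
and the constant phase $e^{-i\pi\nu/2}$ can simply be absorbed into the coefficient $B(\omega,\vec k)$.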
\subsection{Euclidean signature}\label{sec:eucl}
Next, we consider a similar analysis in Euclidean signature. To do so, we Wick rotate the metric (\ref{metric}) to\cite{Fuertes:2009ex}
\begin{equation}\label{E-action}
ds^2 = L^2\left( b^2\frac{d\tau^2}{z^4} + \frac{-2i d\tau d\xi + d\vec{x}^2 + dz^2}{z^2}\right)
\end{equation}
Although this metric is complex and thus not physical, it is possible to trace carefully through the analysis, and this is
what we need to do in any case for Euclidean signature.
The general solution is
\begin{eqnarray}\label{E-general solu}
\phi_{(n)}(\tau,\vec x) &=& e^{in\xi} \int \frac{d\omega_E}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega_E \tau +
i \vec k\cdot \vec x} z^{\frac{d}{2}+1} A(\omega_E,\vec k) K_\nu(q_E z)\\
\bar\phi_{(n)}(\tau,\vec x) &=& e^{-in\xi} \int \frac{d\omega_E}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{i\omega_E \tau
-i \vec k\cdot \vec x} z^{\frac{d}{2}+1} \bar A(\omega_E,\vec k)
K_\nu(\bar q_E z)
\end{eqnarray}
where now $q_E = \sqrt{q_E^2} = \sqrt{\vec k^2-i2\omega_E n}$. Note that in this case, the branch point is at
imaginary $\omega_E$, and so no $i\epsilon$ insertion is necessary.
In contrast to the Lorentzian case, the Euclidean scalar does not have a
normalizable mode. This is because $q_E$ and $\bar q_E$ cannot be
pure imaginary, so $I_\nu(q_E z)$ is never regular in the interior.
It is important to note, however, that this statement applies to the case $\tau\in (-\infty,\infty)$. If $\tau$ is
restricted, a normalizable mode can emerge. For example, if $\tau \in [0,\infty)$, we write $\omega_E = -i\omega$ for $\phi$ and
$\omega_E = i\omega$ for $\bar\phi$ and the following mode is
allowable
\begin{eqnarray}
\phi &\sim& e^{in\xi} e^{-\omega \tau + i \vec k\cdot \vec x}
z^{\frac{d}{2}+1} I_\nu(q z)\\
\bar\phi &\sim& e^{-in\xi}e^{-\omega \tau - i \vec k\cdot \vec x}
z^{\frac{d}{2}+1} I_\nu(\bar q z)
\end{eqnarray}
as long as $\omega > 0$ and $-2\omega n + \vec k^2 < 0$, or
equivalently $\omega > \vec k^2/2n$.
A similar result pertains in the finite temperature case where $\tau\in [0,\beta]$.
Observe however that in contrast to the relativistic real-time formalism, there is no normalizable mode for the Euclidean segment if we restrict $\tau \in (-\infty,0)$. This is because we would need both $\omega < 0$ and
$-2\omega n + \vec k^2 < 0$, and these contradict each other. This will have important consequences. In particular we note that there is no normalizable mode in the segment $M_0$ of either contour in Fig. \ref{fig:contour.eps}.
\section{Non-Relativistic Holography and Correlators}
\subsection{Matching Conditions}
To construct correlation functions, we must match solutions at the interfaces between contour segments. We will label field values on a contour segment $M_n$ by a subscript, $\phi_n$.
Let us begin by considering the Lorentzian$(M_1)$-Lorentzian$(M_2)$ interface in Fig. \ref{fig:contour.eps}b,
where $t_1 \in [0,T]$ and $t_2 \in [T,2T]$ (where $T\to\infty$ is a large time). The total action (for these two segments) is
\begin{align}
S=S_{M_1}+S_{M_2} = \int_0^T dt_1 \left(g^{\mu\nu}_{M_1}\partial_\mu\bar\phi_1 \partial_\nu\phi_1+m_0^2/L^2\bar\phi_1\phi_1\right) - \int_T^{2T} dt_2
\left(g^{\mu\nu}_{M_2}\partial_\mu\bar\phi_2\partial_\nu\phi_2+m_0^2/L^2\bar\phi_2\phi_2\right)
\end{align}
The relative minus sign arises because $M_1$ and $M_2$ have opposite orientation.
For the same reason, the metric in $M_2$ is
\begin{align}
ds^2_{M_2} = L^2 \Big{(}- b^2\frac{dt_2^2}{z^4} + \frac{-2dt_2 d\xi +
d\vec{x}^2 + dz^2}{z^2}\Big{)},
\end{align}
which has an extra minus sign in the off-diagonal component.
Requiring continuity of the momentum conjugate to $\bar\phi$
at the intersection $t_1 = t_2 = T$, we get
\begin{align}
\partial_\xi\phi_1 = \partial_\xi\phi_2.
\end{align}
Along with the continuity of $\phi$, we conclude that the matching
conditions at $t_1 = t_2 = T$ are
\begin{align}\label{matchingcondphi}
\phi_1(T) &= \phi_2(T)\\
n_1 &= n_2\label{matchingcondn}
\end{align}
Thus, we do not need to impose first-order time derivative
continuity of fields along the contour as in the relativistic case --- it is just replaced by particle
number conservation. It turns out that (\ref{matchingcondphi},\ref{matchingcondn}) are also
the matching conditions for Euclidean -- Lorentzian interfaces.
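The origin of these conditions is the degenerate causal structure of (\ref{metric}): since $g_{\xi\xi}=0$, the inverse metric has $g^{tt}=0$ and $g^{t\xi}=z^2/L^2$, so the momentum conjugate to $\bar\phi$ on a constant-$t$ slice contains no time derivative,
\begin{equation}
\pi \;\propto\; \sqrt{-g}\, g^{t\mu}\partial_\mu\phi \;=\; \sqrt{-g}\, g^{t\xi}\,\partial_\xi\phi .
\end{equation}
Continuity of $\pi$ therefore constrains only $\partial_\xi\phi$, which for modes $\phi\propto e^{in\xi}$ is precisely the statement $n_1=n_2$.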
\subsection{Convergence and the Choice of Vacuum}\label{sec:choice}
The non-relativistic holographic correspondence is in general the
same as its relativistic counterpart, where the path integral with
specified boundary conditions in the bulk is identified with the
partition function with sources inserted in the boundary theory. In
the case of a complex bulk scalar, we must temporarily treat the
sources $\phi_{(0)}$ and $\bar\phi_{(0)}$ as independent. The near
boundary expansion of the fields are qualitatively the same as
scalars on $AdS_{d+3}$ \begin{eqnarray}
\phi_{(n)} &=& e^{in\xi}\Big{\{} z^{\Delta_-} \left(\phi_{(0)}+ z^2 \phi_{(2)} + o(z^4)\right) + z^{\Delta_+} \left(v_{(0)} + z^2 v_{(2)} + o(z^4)\right)\Big{\}}\\
\bar\phi_{(n)} &=& e^{-in\xi}\Big{\{} z^{\Delta_-}
\left(\bar\phi_{(0)}+ z^2 \bar\phi_{(2)} + o(z^4)\right) +
z^{\Delta_+} \left(\bar v_{(0)} + z^2 \bar v_{(2)} +
o(z^4)\right)\Big{\}}, \end{eqnarray} with $\Delta_\pm=1+d/2\pm\nu$ and
\begin{align}
\phi_{(2m)} = \frac{1}{2m (2\Delta_+ - (d + 2) -
2m)}\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_0\phi_{(2m-2)},
\end{align}
where here $\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_0 = 2in\partial_t + \partial^2_i$ is the
non-relativistic Laplacian. As usual the holographic correspondence
implies \begin{equation} e^{iS^{bulk}_C[\bar\phi_{(0)},\phi_{(0)}]} = \langle
e^{i\int_C ( \hat {\cal O}^\dag\phi_{(0)} + \bar\phi_{(0)} \hat
{\cal O})}\rangle, \end{equation} where $C$ denotes the contour. Although we
have a very different geometry, it is easily seen that in each patch
of the contour the bulk (either Euclidean or Lorentzian) on-shell
action
\begin{align}
S_{os} = \frac{1}{2}\int_\epsilon d^{d+1}x d\xi\sqrt{|g|}
\hspace{3pt}\bar\phi \hspace{3pt}g^{zz}\hspace{2pt}\partial_z \phi
\end{align}
is essentially the same as for scalars on $AdS_{d+3}$. As a result, the
renormalization procedure proceeds in the same way as in
$AdS_{d+3}/CFT_{d+2}$, which was carried out in detail in
\cite{Skenderis:2008wp}. Specifically, for Lorentzian signature the
counterterms take the form,
\begin{align}
S_{ct} = \int_{\epsilon} d^{d+1}xd\xi \sqrt{-\gamma}\Big{(}\frac{d+2
-\Delta_+}{2}\bar\phi \phi + \frac{1}{2(\Delta_+ - d -
4)}\bar\phi\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_\gamma\phi +\dots \Big{)},
\end{align}
where $\sqrt{-\gamma} = z^{-(d+2)}$ is the $(d+2)$-dimensional
induced metric determinant and $\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_\gamma = z^2 (2in\partial_t
+ \partial^2_i)$ (we will set $L=1$ from now on). The dots represent
higher derivative terms. For special cases where $\nu$ is an
integer, logarithmic counter terms $\sim \log{\epsilon}$ may appear
\cite{Skenderis:2008wp}. It is important to note that $S_{ct}$
preserves the Galilean subalgebra, since $[\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_\gamma, K_i] =
0$. This is in parallel with relativistic holography, where the
Poincar\'e subalgebra is preserved by the counterterms. In any case,
$v_{(0)}$ determines the v.e.v. of the dual operator, and its
derivative with respect to the source $\phi_{(0)}$ gives us the
2-point functions.
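The vanishing commutator can be checked directly. Acting on modes of fixed particle number $n$ (so that $\partial_\xi \to in$), the boost generator reads $K_i = -t\partial_i + i n\, x_i$, and
\begin{equation}
\left[\, 2in\partial_t + \partial_j^2\,,\; -t\partial_i + i n\, x_i \,\right] = -2in\,\partial_i + 2in\,\partial_i = 0,
\end{equation}
the first term coming from $[2in\partial_t, -t\partial_i]$ and the second from $[\partial_j^2, i n x_i]$; the overall factor of $z^2$ in $\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_\gamma$ commutes with $K_i$ trivially.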
There is, however, a subtlety of which we must be cognizant. Unlike
relativistic field theories, in non-relativistic field theories an
elementary field $\Psi$ and its Hermitian conjugate $\Psi^\dag$ play
the role of creation and annihilation operators. There is a freedom
to choose which is an annihilator, or equivalently a freedom to pick
the vacuum. Once a convention is chosen, $\Psi$ and $\Psi^\dag$ are
no longer on the same footing. This is also true for any operator
$\hat {\cal O}$, $\hat {\cal O}^\dag$, in which $\hat {\cal O}$ is
constructed only from annihilators. This corresponds to the fact
that there is only a single pole in the complex $\omega$-plane in
the non-relativistic case. Consequently, the time-ordered propagator
will in fact have only a single temporal $\theta$-function present.
We expect to see this coming about in the analysis, but to see this
properly, one has to be careful with the convergence of various
integrals.
\section{Correlation Functions}
In both cases shown in Fig. \ref{fig:contour.eps}, we have an initial vertical contour $M_0$. The correlation functions of
interest are computed by including source(s) on horizontal component(s) of the contour. We first
show that given such a contour component $M_0$, there is no normalizable mode (such a mode would be everywhere
subleading in the $z\to 0$ expansion). This implies that any
solution with a specific boundary condition is {\it unique}. Indeed, we argued in Section \ref{sec:eucl} that there is
no non-trivial normalizable solution in $M_0$. So in the cases of interest (no sources on $M_0$), $\phi_0 = 0$
identically. The matching condition between $\phi_0$ and $\phi_1$
then requires that $\phi_1(t_1 = 0, \vec x, z) = 0$. The
most general normalizable solution on $M_1$ is
\begin{equation}
\phi^{norm}_1(t_1,\vec x, z) =e^{in\xi} \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega t_1 + i
\vec k\cdot \vec x} z^{\frac{d}{2}+1} \theta(-q^2)B(\omega, \vec k)
J_\nu(|q| z).
\end{equation}
Multiply by $z^{-\frac{d}{2}} e^{-i n \xi -i\vec k'\cdot \vec x} J_\nu(|q'|z)$ with $q'^2 = - 2\omega'
n + \vec k'^2 < 0$ and integrate over $\vec x$ and $z$. We then find
\begin{equation}
0 = \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}d^dx\ e^{i\vec x\cdot (\vec k - \vec
k')} B(\omega, \vec k) \theta(-q^2)\Big{(}\int_0^\infty dz
\hspace{3pt}zJ_\nu(|q|z) J_\nu(|q'|z)\Big{)}
\end{equation}
The $z$-integral is elementary (see Appendix, eq. (\ref{1})) and this becomes
\begin{eqnarray}
0 &=& \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d} d^d x \hspace{3pt}e^{i\vec x\cdot (\vec k - \vec
k')}
B(\omega, \vec k) \theta(-q^2) \frac{1}{|q'|} \delta(|q| - |q'|)\\
&=& \frac{1}{n}\int \frac{d\omega}{2\pi} B(\omega, \vec k') \theta(2\omega n - \vec k'^2) \delta(\omega - \omega')\\
&=& \frac{1}{2\pi n} B(\omega',\vec k')\theta(-{q'}^2).
\end{eqnarray}
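In the second equality we changed variables using $|q| = \sqrt{2\omega n - \vec k^2}$, for which
\begin{equation}
\frac{\partial |q|}{\partial\omega} = \frac{n}{|q|} \qquad\Longrightarrow\qquad \frac{1}{|q'|}\,\delta(|q|-|q'|) = \frac{1}{n}\,\delta(\omega-\omega'),
\end{equation}
while the $\vec x$ integral produced $(2\pi)^d\delta^d(\vec k - \vec k')$.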
Thus, if $\phi_1(t,\vec x, z) = 0$ at some time, there is no non-trivial
normalizable mode. This reasoning in fact applies for all segments of both contours in Fig. \ref{fig:contour.eps}.
\subsection{Bulk-Boundary Propagator and Time-ordered Correlator}
Given the absence of a normalizable mode, any solution with sources that we find for
the two contours in Fig. \ref{fig:contour.eps} is unique.
In this subsection, we consider contour Fig. \ref{fig:contour.eps}a, with segments $M_0$ ($\tau_0 \in (-\infty, 0]$), $M_1$ ($t_1 \in [0,T]$), $M_2$ ($\tau_2 \in [0,\infty)$).
We place a single $\delta$-function source at $\vec x =
0, t_1 = \hat t_1$ on $M_1$. From our discussions above, $\phi_1$ must be of the form
\begin{equation}\label{phi1}
\phi_{1,(n)}(t_1,\vec x,z) =\frac{2}{\Gamma(\nu)}e^{in\xi} z^{1+d/2} \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t_1-\hat t_1) + i \vec k\cdot \vec x}
\left(\frac{q_\epsilon}{2}\right)^\nu K_\nu(q_\epsilon z).
\end{equation}
as this satisfies $\left. z^{-\Delta_-}\phi_{1,(n)}(t_1,\vec x,z)\right|_{z\to 0} = e^{in\xi}\delta(t_1-\hat t_1)\delta(\vec x)$, and any ambiguity corresponds to normalizable modes, which we have argued are zero.
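The normalization in (\ref{phi1}) follows from the small-argument behavior of the Bessel function: using $K_\nu(x) \to \frac{\Gamma(\nu)}{2}\left(\frac{x}{2}\right)^{-\nu}$ as $x\to 0$, the integrand behaves as
\begin{equation}
\frac{2}{\Gamma(\nu)} \left(\frac{q_\epsilon}{2}\right)^\nu z^{1+d/2} K_\nu(q_\epsilon z) \;\longrightarrow\; z^{1+d/2-\nu} = z^{\Delta_-},
\end{equation}
so that at this order the $\omega$ and $\vec k$ integrals collapse to $\delta(t_1-\hat t_1)\delta(\vec x)$.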
Since there are no sources on $M_2$, $\phi_2$ takes the form
\begin{equation}
\phi_{2,(n)} = \frac{2\pi i}{\Gamma(\nu)} e^{in\xi} z^{1+d/2} \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-\omega (\tau + iT-i\hat t_1) + i \vec k\cdot \vec x}
\theta(-q^2)\left(\frac{|q|}{2}\right)^\nu J_\nu(|q|z).
\end{equation}
which has been deduced from the matching condition $\phi_1(t_1=T) = \phi_2(\tau = 0)$ as follows. For any time $t_1 >\hat t_1$, we can re-expand $\phi_1$ in terms of $J_\nu$'s. In particular, at $t_1= T$,
we should have
\begin{equation}
\int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (T-\hat t_1) + i
\vec k\cdot \vec x} q^\nu_\epsilon zK_\nu(q_\epsilon z)
= \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (T-\hat t_1) + i
\vec k\cdot \vec x} C(\omega, \vec k)
\theta(-q^2)zJ_\nu(|q|z)
\end{equation}
for some $C(\omega, \vec k)$. To find this coefficient we use the
same trick as in the last subsection: multiply both sides by
$e^{i \omega'(T - \hat t_1) -i\vec k' \vec
x} J_\nu(|q'|z)$ with $q'^2 = - 2\omega' n + \vec k'^2 < 0$ and
integrate over $\vec x, z$. The right-hand side gives $\frac{1}{2\pi n}
\theta(-q'^2)C(\omega',\vec k')$, while the left-hand side can be computed using
(\ref{2}) to give $\frac{i}{2n} |q'|^\nu$.
The bulk-boundary propagator
is essentially identified with $\phi_1$ itself: if we simply strip off the $e^{in\xi}$ factor, we can write
\begin{eqnarray}
K_{n,n'}(t,\vec x,z;\hat t) &=& \delta_{n,n'} K_{(n)}(t,\vec x,z;\hat t)\\
K_{(n)}(t,\vec x,z;\hat t)&=&\frac{2 z^{1+d/2}}{\Gamma(\nu)} \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t-\hat t) + i \vec k\cdot \vec x} \left(\frac{q_\epsilon}{2}\right)^\nu
K_\nu(q_\epsilon z).
\end{eqnarray}
As shown in Ref. \cite{Volovich:2009yh} for example, this is closely related to the bulk-boundary propagator in $AdS_{d+3}$. Alternatively, we may perform the integration directly, following the analogous treatment
in Ref. \cite{Skenderis:2008dg}. To do so, it is convenient to convert the $\omega$-integral to an integration over $p=q_\epsilon$, and the contour in the $p$-plane is as shown in Fig. \ref{fig:bulk-boundary-contour.eps}.
\myfig{bulk-boundary-contour.eps}{6}{Contour of integration in the complex $p$-plane for the Lorentzian bulk-boundary propagator.}
Here though there is just one branch point (at $\omega=\vec k^2/2n-i\epsilon$) and the $i\epsilon$ tells us in which sense to traverse the cut. One arrives at
\begin{equation}\label{bulk-boundary}
K_{(n)}(t,\vec x,z;\hat t)= \theta(t-\hat t)\frac{1}{\pi^{d/2}\Gamma(\nu)}\left(\frac{n}{2i}\right)^{\Delta_+-1}
\left(\frac{z}{t-\hat t}\right)^{\Delta_+} e^{in\frac{z^2 + \vec x^2 +i\epsilon}{2(t-\hat t) }}
\end{equation}
where $\Delta_\pm=1+d/2\pm\nu$.
The correlator is then identified with the $z^{\Delta_+}$
coefficient in the near boundary expansion of $\phi_1$ (without the
$e^{in\xi}$ factor) \begin{equation}\label{time-ordered} \langle T\Big{(}\hat
{\cal O}_{(n)}(\vec x,t_1) \hat {\cal O}_{(n)}^\dag(\vec
x',t'_1)\Big{)}\rangle =
\frac{1}{\pi^{d/2}\Gamma(\nu)}\left(\frac{n}{2i}\right)^{\Delta_+-1}
\frac{\theta(t_1-t_1')}{(t_1- t_1')^{\Delta_+}} e^{in\frac{(\vec x -
\vec x')^2 + i\epsilon}{2(t_1-t_1')} }. \end{equation}
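As a consistency check, (\ref{time-ordered}) has exactly the form dictated by the Schr\"odinger symmetry: it depends only on $t_1-t_1'$ and $\vec x - \vec x'$ (invariance under $H$ and $\vec P$), and under a dilatation $t\to\lambda^2 t$, $\vec x \to \lambda\vec x$ it transforms covariantly,
\begin{equation}
G(\lambda^2 t, \lambda\vec x) = \lambda^{-2\Delta_+}\, G(t,\vec x), \qquad G(t,\vec x) \propto \frac{\theta(t)}{t^{\Delta_+}}\, e^{in\vec x^2/2t},
\end{equation}
identifying $\Delta_+$ as the scaling dimension of $\hat{\cal O}_{(n)}$.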
\subsection{Wightman function}
The time-ordered correlator, as we have explained, contains a single temporal $\theta$-function. It does not tell us about $\langle
\hat {\cal O}(\vec x,t_1) \hat {\cal O}^\dag(\vec x',t_1')\rangle$ for $t_1' > t_1$. To find this
2-point function we work with the contour of Fig. \ref{fig:contour.eps}b. Denote the
segments by $M_0$ ($\tau_0 \in (-\infty, 0]$), $M_1$ ($t_1 \in
[0,T]$), $M_2$ ($t_2 \in [T,2T]$) and $M_3$ ($\tau_3 \in
[0,\infty)$) as sketched in the figure. We place a $\delta$-function
source at $\vec x = 0, t_1 = \hat t_1$ on $M_1$ and nowhere else. The Wightman function is then obtained from $\phi_2$, the field on $M_2$.
Here $\phi_0 = 0$, and $\phi_1$ remains as in (\ref{phi1}).
Given experience from the last subsection, we can see immediately
that $\phi_2$ should be
\begin{align}
\phi_{2,(n)} = \frac{2\pi i}{\Gamma(\nu)} e^{in\xi} z^{1+d/2}\int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (2T - t_2-
\hat t_1) + i \vec k\cdot \vec x} \left(\frac{|q|}{2}\right)^{\nu}
\theta(-q^2)J_\nu(|q|z).
\end{align}
This has been determined by requiring the matching condition $\phi_1(t_1 = T) = \phi_2(t_2 = T )$. Notice the unusual $e^{+i\omega t_2 + i\vec k\cdot \vec x}$ wave factor. It is related to the fact mentioned before that along this part of the contour, the metric has an extra minus
sign in the $g_{t_2\xi}$ component.
It is now necessary to compute $\phi_2$ in coordinate space.
We make a change of variable $p = |q| = \sqrt{2\omega n- \vec k^2}$
\begin{equation}
\phi_2= \frac{i}{n\Gamma(\nu)2^\nu} e^{in\xi} z^{1+d/2}
\int_0^\infty dp\ e^{-ip^2 (2T-t_2-\hat t_1)/2n} p^{\nu+1} J_\nu(pz)
\int \frac{d^d k}{(2\pi)^d}\ e^{-ik^2 (2T-t_2-\hat t_1)/2n}e^{i \vec k\cdot \vec x}.
\end{equation}
We note that both integrals converge if $2T-t_2-\hat t_1\to 2T-t_2-\hat t_1-i\epsilon$. The first integral can be computed using (\ref{3}), while the second
one is just a Gaussian integral. The final result is
\begin{equation}\label{phi2}
\phi_2 = e^{in\xi}\frac{1}{\pi^{d/2}\Gamma(\nu)}\left(\frac{n}{2i}\right)^{\Delta_+-1}
\Big{(}\frac{z}{\tilde t_2-\hat t_1-i\epsilon}\Big{)}^{\Delta_+}
e^{in\frac{z^2 + \vec x^2}{2(\tilde t_2-\hat t_1 - i\epsilon) }}.
\end{equation}
where $\tilde t_2=2T-t_2$. Observe that $\phi_2$ is closely related to the bulk-boundary propagator
(\ref{bulk-boundary}) except for the absence of the step function
and a different $i\epsilon$ insertion, as expected.
The vacuum expectation value of $\hat {\cal O}(\tilde t_2,\vec x)$ is
\begin{align}
\langle \hat {\cal O}(\tilde t_2,\vec x) e^{i(\phi_{1(0)} \hat {\cal O}^\dag +
\bar\phi_{1(0)} \hat {\cal O})}\rangle =\frac{1}{\pi^{d/2}\Gamma(\nu)}\left(\frac{n}{2i}\right)^{\Delta_+-1}
\int d t_1 d^d x'\frac{e^{in\frac{(\vec x-\vec x')^2}{2(\tilde t_2- t_1 - i\epsilon) }}}{(\tilde t_2-t_1-i\epsilon)^{\Delta_+}}
\phi_{1(0)}(t_1, \vec x') .
\end{align}
Taking a derivative with respect to $\phi_{1(0)}$ and setting the source to zero,
we get the Wightman function
\begin{equation}
\langle \hat {\cal O}(\tilde t_2,\vec x) \hat {\cal O}^\dag(t_1, \vec x') \rangle
=\frac{1}{\pi^{d/2}\Gamma(\nu)}\left(\frac{n}{2i}\right)^{\Delta_+-1}
\frac{e^{in\frac{(\vec x-\vec x')^2}{2(\tilde t_2- t_1 - i\epsilon) }}}{(\tilde t_2-t_1-i\epsilon)^{\Delta_+}}
\end{equation}
Notice that $\hat {\cal O}^\dag$ always stands to the right of $\hat {\cal O}$ (acting first), because
$t_1$ is always the earlier contour time.
\subsection{Thermal Correlator}
Finally, we compute a thermal correlator by taking the time direction to be compact of period $\beta$.
\myfig{thermal-contour.eps}{7.5}{Thermal contour. Points with a circle are identified.}
To compute the thermal time-ordered correlator and Wightman function, we
consider the thermal contour shown in Fig. \ref{fig:thermal-contour.eps}, where $t = 0$ and $t =
-i\beta$ are identified. We place a $\delta$-function
source at $t_1 = \hat t_1, \vec x = 0$. Note that in contrast to the previous discussions, here there is no $M_0$ component of the contour. It is convenient in this context to write the general solution along $M_1$ in the form
\begin{equation}\label{phi1T}
\phi_1 = \frac{2 e^{in\xi}z^{1+d/2}}{\Gamma(\nu)}\int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t_1 - \hat t_1)+ i\vec k\cdot\vec x }\Big{(} A(\omega,\vec k) \left(\frac{q_\epsilon}{2}\right)^\nu K_\nu(q_\epsilon z) + B(\omega,\vec k)\left(\frac{q_{-\epsilon}}{2}\right)^\nu K_\nu(q_{-\epsilon} z)\Big{)}.
\end{equation}
where $q_{-\epsilon} = \bar q_\epsilon = \sqrt{-2\omega n + \vec
k^2 + i\epsilon}$.
In order that this correspond to a $\delta$-function source for $z\to 0$, we must have $A + B = 1$. (Furthermore, the case $B=-A$ corresponds to a normalizable mode.) Note that because of this condition, although $A$ and $B$ are not necessarily analytic functions, their sum is analytic: any pole in $A$ is accompanied by a pole in $B$ with opposite residue, and these contributions cancel in the limit
$\epsilon \to 0$. In (\ref{phi1T}), the first term has support for $t_1>\hat t_1$, while the second has support for $t_1<\hat t_1$.
The matching conditions at the $(M_1,M_2)$ and $(M_2,M_3)$ intersections imply that
\begin{eqnarray}
\phi_2 &=&\frac{2\pi i e^{in\xi}z^{1+d/2}}{\Gamma(\nu)}
\int\frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (2T - t_2 -\hat t_1) + i\vec k\cdot\vec x }
A(\omega,\vec k) \left(\frac{|q|}{2}\right)^\nu J_\nu(|q| z)\theta(-q^2)\label{phi2T}\\
\phi_3 &=&\frac{2\pi i e^{in\xi}z^{1+d/2}}{\Gamma(\nu)}
\int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-\omega (\tau_3 - i\hat t_1) + i\vec k\cdot\vec x }
A(\omega,\vec k) \left(\frac{|q|}{2}\right)^\nu J_\nu(|q| z)\theta(-q^2)\label{phi3T}
\end{eqnarray}
The thermal condition $\phi_1(t_1 = 0) = \phi_3(\tau_3 = \beta)$
along with $A+B=1$
then gives
\begin{equation}
A = \frac{1}{1- e^{-\beta \omega}}, \hspace{20pt} B = \frac{1}{1 -e^{+\beta \omega}}.
\end{equation}
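One checks directly that the constraint is satisfied and that $B$ is (minus) a Bose--Einstein occupation factor:
\begin{equation}
A + B = \frac{e^{\beta\omega}}{e^{\beta\omega}-1} - \frac{1}{e^{\beta\omega}-1} = 1, \qquad B = -\frac{1}{e^{\beta\omega}-1} = -N(\omega),
\end{equation}
consistent with the identification $N=-B$ made below.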
As usual, the time-ordered propagator is the coefficient of
$z^{\Delta_+}$ in the small $z$ expansion of $\phi_1$ (without the
$e^{in\xi}$ factor). Hence we get\footnote{For integer $\nu$, there is an extra logarithmic factor, namely
$q_{\pm\epsilon}^{2\nu}$ is replaced by $q^{2\nu}_{\pm\epsilon}
\ln{q_{\pm\epsilon}^2}$.}
\begin{equation}\label{time-orderedT}
\langle T\Big{(}\hat O(x) \hat O^\dag(x') \Big{)}\rangle \sim \int
\frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t - t') + i\vec k\cdot(\vec x - \vec x') }
\Big{(} \frac{(-2\omega n + \vec k^2 - i\epsilon)^{\nu}}{1-e^{-\beta \omega}}
+ \frac{(-2\omega n + \vec k^2 +i\epsilon)^{\nu}}{1 - e^{\beta \omega}} \Big{)}.
\end{equation}
Note that this has the expected form for a thermal correlator\cite{Skenderis:2008dg}
\begin{equation}
\langle T\Big{(}\hat O(x) \hat O^\dag(x') \Big{)}\rangle = -N(\omega)\Delta_A(\omega,\vec k)+(1+N(\omega))\Delta_R(\omega,\vec k)
\end{equation}
In the present notation, $N=-B$. We can also write this as the zero temperature result plus a finite temperature piece:
\begin{equation}
\langle T\Big{(}\hat O(x) \hat O^\dag(x') \Big{)}\rangle\sim \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t - t') + i\vec k\cdot(\vec x - \vec x') }\left[ q_\epsilon^{2\nu} - \frac{1}{1-e^{\beta\omega}}(q_\epsilon^{2\nu}-q_{-\epsilon}^{2\nu})\right]
\end{equation}
The Wightman function can also be read off from $\phi_2$
\begin{align}
\langle \hat {\cal O}(x) \hat {\cal O}^\dag(x') \rangle \sim i\pi \int \frac{d\omega}{2\pi} \frac{d^d k}{(2\pi)^d}\ e^{-i\omega (t - t' - i\epsilon) + i\vec k\cdot(\vec x
-\vec x')} \frac{(2\omega n - \vec k^2)^\nu}{1- e^{-\beta \omega}}
\theta(2\omega n - \vec k^2)
\end{align}
\section{Introduction}
With a binary fraction of $\sim$ 100\%, the presence of a companion plays a crucial role in the evolution of massive stars \citep[e.g., ][]{Sana2011,Duchene2013, Moe2017}. Throughout their lives, it is expected that approximately 70\% of massive stars will interact with a companion \citep[e.g., ][]{Sana2012}, of which 40\% (24\% of all massive stars) will evolve through an overcontact phase \citep[e.g., ][]{Pols1994, Wellstein2001, deMink2007}. Despite this, very few massive overcontact systems are known \citep[see e.g., ][]{Leung1978, Popper1978, Hilditch2005, Penny2008, Lorenzo2014, Lorenzo2017, Almeida2015, Martins2017, Mahy2020a, Janssens2021}.
Despite the rarity of these systems, the overcontact phase can be of crucial importance in the evolution of massive binary systems. The unique geometry and strong binary interactions during this phase make the internal processes difficult to accurately constrain \citep[see e.g., ][]{Fabry2022}. Depending on the treatment of these internal processes and the rate of mass transfer as a binary system first comes into contact, systems evolving through this phase can have drastically different end products. For example, objects such as magnetic massive stars \citep{Schneider2019}, Be stars \citep{Shao2014}, Luminous Blue Variables \citep{Justham2014, Smith2018}, blue stragglers \citep{Eggen1989,Mateo1990} and peculiar Type-II supernovae like SN-1987A \citep{Podsiadlowski1992, Menon2017, Urushibata2018} have all been postulated to be the direct result of massive binary mergers. Alternatively, if the conditions are right (i.e. efficient internal mixing), theoretical studies predict that overcontact systems may be able to avoid merging while on the main sequence, instead forming double black hole binary systems and eventually gravitational wave sources via the chemically homogeneous evolution pathway \citep{deMink2016, Mandel2016, Marchant2016, duBuisson2020, Riley2021}.
An important question when considering the future evolution of a massive overcontact binary system is whether it is evolving on a nuclear timescale, implying that the system is relatively stable, or on a thermal timescale, implying that the system is unstable and will most likely either merge or separate \citep{Pols1994}. Due to their extremely short-lived nature, observing a thermal-timescale overcontact system is expected to be very unlikely, so it is often assumed that the known massive overcontact systems are evolving on the nuclear timescale. Theoretical studies focused on stable massive overcontact binaries indicate that these systems should very quickly equalize in mass, and then continue to evolve on a nuclear timescale \citep{Marchant2016, Menon2021}. Observationally, however, most of the known massive overcontact binaries are found in unequal mass systems.
The discrepancy between the observed and expected mass ratios, in combination with the lower than expected number of known massive overcontact binaries when compared with predictions from population synthesis studies \citep[e.g.][]{Langer2020, Menon2021}, leads to several interesting open questions. Is the contact phase less stable and therefore shorter lived than we expect? Are we preferentially observing systems before they equalize in mass, or is our prediction that these systems equalize flawed? By investigating how the period changes over several years, we can begin to answer some of these questions.
In this paper, we combine archival photometric data sets obtained over a long period of time to investigate the period stability of known massive overcontact systems. By determining how quickly the orbital period is changing, we can determine whether these systems are evolving via the nuclear or the thermal time scale. Further, we can determine whether these systems are in the process of equalizing in mass or if they are evolving as long-lived but unequal mass overcontact binaries. In Sect. \ref{Sample}, we discuss our sample selection and the available photometry for each source as well as our data reduction techniques (when applicable). In Sect. \ref{Methods} we detail our period determination procedure and how we calculate the period stability. We present our results in Sect. \ref{Results}, and we discuss the implications of our findings in Sect. \ref{Discussion}. Finally, Sect. \ref{Conclusions} summarizes our findings and discusses future prospects.
\section{Sample and Observations}\label{Sample}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{data_overview.pdf}
\caption{Overview of the photometric data available for each object in our sample. Each colored shaded region corresponds to a different instrument or data set. Red indicates Hipparcos, blue indicates INTEGRAL-OMC, green indicates OGLE, black indicates ASAS, brown indicates ANDICAM, yellow indicates data from \citet{Lorenzo2014} and finally, data from TESS are indicated in purple. Note that these ranges indicate the date ranges of the respective data sets, some of which are sporadic and without a regular cadence.}
\label{fig:data_overview}
\end{figure*}
\begin{table*}
\caption{Selected orbital parameters for each of the systems in our sample. The last column indicates the reference from which the orbital solutions were derived. Note that while the other parameters come directly from each of these papers, the fillout factors were calculated using Eq. \ref{eq:fillout_factor} as described in Sect. \ref{Sample}.}
\centering
\begin{tabular}{ccccccc}
\hline\hline
 & $P_\mathrm{orb}$ & $M_1$ & $M_2$ & $q$ ($M_2/M_1$) & $f$ & reference \\
& [d] & [M$_{\odot}$] & [M$_{\odot}$] & & & \\
\hline
LSS 3074 & 2.1852 & 17.2 $\pm$ 1.4 & 14.8 $\pm$ 1.1 & 0.86 $\pm$ 0.04 & 1.05 & \citet{Raucq2017}\\
MY Cam & 1.1754514 & 37.7 $\pm$ 1.6 & 31.6 $\pm$ 1.4 & 0.84 $\pm$ 0.03 & 1.01 & \citet{Lorenzo2014}\\
SMC 108086 & 0.8830987 & 16.9 $\pm$ 1.2 & 14.3 $\pm$ 1.7 & 0.85 $\pm$ 0.06 & 1.70 & \citet{Hilditch2005}\\
TU Mus & 1.387282 & 16.7 $\pm$ 0.4 & 10.4 $\pm$ 0.4 & 0.623 $\pm$ 0.009 & 1.12 & \citet{Penny2008}\\
V382 Cyg & 1.885545 & 26.1 $\pm$ 0.4 & 19.0 $\pm$ 0.3 & 0.727 $\pm$ 0.005 & 1.10 & \citet{Martins2017}\\
VFTS 352 & 1.1241452 & 28.9 $\pm$ 0.3 & 28.6 $\pm$ 0.3 & 0.99 $\pm$ 0.10 & 1.28 & \citet{Almeida2015}\\
\hline
\end{tabular}
\label{table:orb_sol}
\end{table*}
Since the goal of this investigation is to characterize the period change in massive overcontact systems, we select our sample based on a set of criteria designed to ensure that we remove as many biases as possible. These are detailed below:
\begin{itemize}
\item The optimal solution for the system must be an overcontact configuration and further, this must have been determined via a combined photometric and radial velocity fit. Ensuring that the system is in an overcontact configuration is of the utmost importance since semidetached and detached systems with ellipsoidal deformations will have different period evolutions and will thus probe different physical effects than those that dominate during the overcontact phase.
\item The system must not be in a confirmed triple or higher order multiple system unless we can ensure that the additional components are far enough from the binary such that they have a negligible effect on the dynamics of the system \citep[see e.g., ][]{Toonen2016}. The presence of a nearby third object ($P_\mathrm{out} \lesssim 10$ yr for massive overcontact systems) is known to alter the period and orbital parameters of the inner binary system via von Zeipel-Kozai-Lidov oscillations \citep[vZKL; ][]{vonZeipel1910, Kozai1962, Lidov1962}. These perturbations could bias the period variation measurements and for this reason, these systems are excluded from our sample.
\item If the photometric data of the system are contaminated with other periodic signals, the signature of the binary must be the dominant signal.
\item Both of the system's components must be main sequence O-type stars. This criterion is meant to ensure that the sample is as complete as possible in the given spectral range, while also limiting the sample size to a manageable amount.
\end{itemize}
With the above criteria, our final sample consists of six objects that are spread over different metallicity regimes, including the Milky Way and the Large and Small Magellanic Clouds. These systems and their available photometric data are discussed in detail below. An overview of the photometric data used for each target as well as the time bases that the different datasets cover is presented in Fig. \ref{fig:data_overview}. Additionally, the parameters most relevant to this study, including the periods, mass ratios and fillout factors, are summarized in Table \ref{table:orb_sol}.
For the purposes of this study, we define the primary as the currently more massive component and the mass ratio ($q$) as the mass of the secondary over the mass of the primary such that $q \leq 1$. The fillout factor, which is a measure of the degree to which a system is overfilling its Roche lobes, has several different definitions in the literature. Here, we adopt the definition of the fillout factor $f$ from \citet{Mochnacki1972}, which states:
\begin{equation}\label{eq:fillout_factor}
f = \frac{\Omega_{n,1} - \Omega_n}{\Omega_{n,1} - \Omega_{n,2}} + 1,
\end{equation}
where $\Omega_{n,1}$ and $\Omega_{n,2}$ denote the potential of the surface passing through L1 and L2, respectively, and $\Omega_n$ indicates the measured surface potential of the system. In this definition, an overcontact system has a fillout factor $1 < f < 2$, with higher fillout factors corresponding to systems in deeper contact. Since the degree of deformation for the systems in our sample is not presented in a consistent way throughout the literature, we compute the fillout factor according to the above definition for each object in our sample to ensure homogeneity.
\subsection{LSS 3074}
LSS 3074 was initially characterized as a contact system by \citet{Raucq2017} and is located in the Milky Way. With a fillout factor of 1.05, the system is just barely in contact; however, the photometric analysis strongly favors a contact configuration over a semidetached configuration. The period was measured to be 2.1852 days, making it the longest period system in our sample. This, in combination with its masses of 17.2 and 14.8 M$_{\odot}$, implies that it may be slightly more evolved than the rest of our sample. While the spectral types of both components appear to be solidly in the O-type regime, the anomalous combination of certain spectral features did not allow \citet{Raucq2017} to firmly determine a spectral type for each component.
The photometric data set for LSS 3074 consists of data from the All Sky Automated Survey (ASAS), data from A Novel Dual Imaging Camera (ANDICAM) and two sectors of data from the Transiting Exoplanet Survey Satellite (TESS). The data from ANDICAM were collected between March and May of 2001 and were observed in the Johnson B-, V-, R- and I-bands \citep{Raucq2017}. The ASAS data were collected sporadically over a $\sim$ 9 year period between 2000 and 2009 and were observed in the V-band \citep{Pojmanski1997, Pojmanski2002, Pojmanski2003, Pojmanski2004, Pojmanski2005b, Pojmanski2005a}. Since LSS 3074 is a southern object, it was observed during the first and third years of the TESS mission with data in sectors 11 and 38, respectively \citep{Ricker2015}. It should be noted that there are also data from the International Gamma-Ray Astrophysics Laboratory Optical Monitoring Camera (INTEGRAL-OMC) available for the target; however, the quality of the light curve was not good enough to allow us to detect a statistically significant peak near the orbital frequency, so we do not include it in this analysis.
\subsection{MY Cam}
MY Cam is located in the Milky Way and was first characterized as a contact system by \citet{Lorenzo2014}. With component masses of 37.7 and 31.6 M$_{\odot}$ and spectral types of O5.5 and O7, respectively, it is the most massive overcontact system currently known. Its period was measured to be $\sim$ 1.175 days and it has a mass ratio of 0.84. Of all of the systems in our sample, MY Cam has the lowest measured fillout factor at only 1.01, meaning that it just barely qualifies as an overcontact system.
The photometric data set for MY Cam consists of data from INTEGRAL-OMC and TESS as well as data from two private telescopes. The INTEGRAL-OMC data were observed in the Johnson V-band and were collected sporadically over an $\sim$ 18 year time frame between 2003 and 2021 \citep{Alfonso-Garzon2012}. Unfortunately, only one sector of TESS data is available, which was observed in sector 19 during the second year of the TESS mission. In addition to these, photometric data were collected from two private telescopes during a 6 month period in 2008. These two telescopes were a Meade LX200 and a Vixen VISAC and observed in the Johnson R-band \citep{Lorenzo2014}. Since the telescope and instrument names were not provided, we refer to this dataset as M\&V henceforth, reflecting the telescope models from which the data were collected.
\subsection{OGLE SMC-SC10 108086}
OGLE SMC-SC10 108086 (SMC 108086 henceforth) was first characterized as a contact system by \citet{Hilditch2005}, and as its name suggests, it is located in the Small Magellanic Cloud (SMC). The primary and secondary components have spectral types of O9 and O9.5 respectively and their locations on the Hertzsprung-Russell diagram (HRD) indicate that they are very close to the Zero-Age Main Sequence \citep{Abdul-Masih2021}. With a fillout factor of 1.7 and a period of around 0.88 days \citep{Pawlak2016}, it is both the deepest massive overcontact system currently known and the shortest period system in our sample. This, in combination with its mass ratio of 0.85, makes it an ideal test case for this investigation.
The photometric dataset for SMC 108086 consists of both Optical Gravitational Lensing Experiment (OGLE) and TESS data. As part of the OGLE II, III and IV campaigns, it was observed sporadically over a total time span of $\sim$ 16 years \citep{Udalski1997, Udalski2008, Udalski2015, Szymanski2005}. While only I-band data are available for OGLE II, it was observed in both the I- and V-bands during OGLE III and IV. Being in the southern hemisphere, it was observed during the first and third years of TESS with a total of 4 sectors of data available (sectors 1, 2, 27, and 28).
\subsection{TU Mus}
Along with V382 Cyg, TU Mus was one of the first massive overcontact systems identified and is located in the Milky Way. It was originally characterized as a contact system by \citet{Andersen1975}, and has been studied extensively since then \citep[e.g., ][]{Stickland1995, Terrell2003, Linder2007, Qian2007, Penny2008}. It has a period of around 1.387 days and a fillout factor of 1.12, and with a mass ratio of 0.623, it is the most unequal mass system in our sample. While it is universally agreed upon that the primary is an O-type star, there is some ambiguity in the literature as to the status of the secondary; some sources claim that it is a late O-type star \citep[e.g., ][]{Terrell2003, Penny2008} while others claim that its spectral type is early B \citep[e.g., ][]{Sota2014, MaizApellaniz2016}. It is also important to note that \citet{Qian2007} found evidence of a third object gravitationally bound to the system, but given its long period ($\sim$47 years) and low component mass, it is expected to have a negligible effect on the dynamics of the inner contact system. Based on the parameters of the system, the vZKL oscillations are expected to operate on timescales of $\sim$0.9 Myr \citep[see Eq. 24 in][]{Toonen2016}.
The photometric dataset for TU Mus consists of data from Hipparcos, ASAS, INTEGRAL-OMC and TESS. The Hipparcos data \citep{Perryman1997} were collected between December 1989 and November 1992, and were observed in the Hipparcos passband ($Hp$). The ASAS data were collected sporadically over a $\sim$ 9 year period between December 2000 and December 2009 and were observed in the V-band. The INTEGRAL-OMC data were collected in the Johnson V-band between 2003 and 2021. Finally, there are four sectors of TESS data available, two sectors (11 and 12) in the first year and two sectors (37 and 38) in the third year of the TESS campaign.
\subsection{V382 Cyg}
V382 Cyg was first identified and characterized in the late 1970s \citep{Cester1978, Popper1978} and has been the subject of numerous studies since then \citep[e.g., ][]{Popper1991, Harries1997, Burkholder1997, Degirmenci1999, Qian2007, Yasarsoy2013}. Located in the Milky Way, the primary and secondary components have spectral types of O6.5 and O6, respectively. Recently, \citet{Martins2017} reanalyzed the system and updated the orbital parameters, reporting an orbital period of $\sim$ 1.89 days with a fillout factor of 1.10 and a mass ratio of 0.727. Despite its low fillout factor, recent spectroscopic observations of this system indicate potentially high levels of mixing between the two components, giving further evidence that the system is indeed in a contact configuration \citep{Abdul-Masih2021}. As with TU Mus, \citet{Qian2007} found evidence that V382 Cyg has a tertiary component, but its period and mass suggest that it is likely to have a negligible effect on the dynamics of the contact system, with a vZKL oscillation timescale of $\sim$ 0.8 Myr. This was later confirmed by \citet{Yasarsoy2013}, who updated the tertiary period to be $\sim$ 43 years.
The photometric data set for V382 Cyg comprises data from Hipparcos, INTEGRAL-OMC and TESS. The data from the Hipparcos Catalog were collected between October 1989 and February 1993 and were observed in the Hipparcos passband. The data from the INTEGRAL-OMC, on the other hand, were observed sporadically over a $\sim$18 year time frame between 2002 and 2019 and were observed in the Johnson V-band. In addition to these, V382 Cyg was observed by TESS in sectors 14 and 15 during the second year of the TESS mission.
\subsection{VFTS 352}
VFTS 352 is located in the Large Magellanic Cloud (LMC henceforth) and was first characterized by \citet{Almeida2015}. The nearly twin components ($q=0.99$) have masses of $\sim$29 M$_{\odot}$\ and have spectral types of O4.5 and O5.5, making it the earliest overcontact system currently known \citep{Walborn2014, Almeida2015, Almeida2017, Mahy2020a, Mahy2020b}. Its high component masses, short period ($\sim$1.124 days) and relatively high fillout factor (1.28) make it a promising candidate for a gravitational wave progenitor \citep{deMink2016, Mandel2016, Marchant2016, Abdul-Masih2019, Abdul-Masih2020b, Abdul-Masih2021}.
The photometric data set for VFTS 352 comprises data from the OGLE-III and -IV campaigns as well as TESS. The data from OGLE III and IV were collected sporadically over a total time span of $\sim$ 13 years between 2001 and 2014 in both the I- and V-bands. Given its location in the LMC, VFTS 352 fell in TESS's continuous viewing zone, meaning that it was observed for the entirety of the first and third years.
\subsection{Rejected systems}
Several other O-type overcontact systems are known, but these were not included in our sample for various reasons. LY Aur is a known triple with vZKL oscillation timescales on the order of $\sim$0.2 Myr \citep{Stickland1994, Zhao2014}. Given the short oscillation timescale, we reject it from our sample.
V729 Cyg is a long-period ($\sim$ 6.6 days), evolved overcontact system that is no longer on the main sequence, so it is not included \citep{Antokhina2016}.
OGLE-SMC-ELC-4690 is thought to be a contact system, but no combined photometric and radial velocity fit has been performed on the object. Furthermore, it has a known triple companion on a relatively close orbit \citep{Zasche2017}.
BAT 99-126 is a higher order system that contains an O-type contact system; however, the orbital configuration of the system is not known, so it is rejected \citep{Janssens2021}.
HD 64315 is a quadruple system containing two pairs of close binaries, one of which is in a contact configuration. Unfortunately, the separation between the two pairs of binaries is not known so we do not include it in our sample \citep{Lorenzo2017}.
Finally, UW CMa appears to be a contact system, but the light curve has some unexplained features that make the fitting unreliable. So far, no reliable orbital solution has been found \citep{Leung1978,Antokhina2011}.
\subsection{Photometric data preparation}
While most of the photometric data used in this investigation were already reduced, some needed to be cleaned. Specifically, in the case where quality flags were provided, we removed all data points that had bad quality flags following the individual recommendations of each data set. For the data sets without quality flags, we removed obvious outliers.
In the case of TESS, only some of the objects in our sample had associated reduced light curves. While TESS is a nearly all-sky survey, only some of the many stars observed have been reduced with the official TESS pipeline \citep[SPOC; ][]{Jenkins2016}. Of the six stars in our sample, only V382 Cyg and TU Mus have SPOC light curves, so for these objects we use the available light curves (see Fig. \ref{fig:LC} for an example of the TESS light curve for V382 Cyg). For the four remaining sources, we utilize \textsc{lightkurve} \citep{LightkurveCollaboration2018} to aid in the extraction.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{V382_Cyg_lc_tess.pdf}
\caption{Portion of the TESS light curve associated with V382 Cyg.}
\label{fig:LC}
\end{figure}
\textsc{lightkurve} is a Python package designed for the retrieval and extraction of Kepler, K2 and TESS light curves. From the full frame image, we first created a $9 \times 9$ pixel cutout centered on the source in question \citep{Brasseur2019}. We then created a mask, which only includes the central pixel of the $9 \times 9$ cutout, and generated a light curve from this mask. We chose to use only the central pixel in all cases to remain consistent between objects and sectors and to minimize the chances of contamination. VFTS 352 and SMC 108086 are located in crowded fields and given the TESS pixel size, eliminating contamination entirely is not possible. Since no additional periodicities were found in the light curves of these objects, and since we are only concerned with the period, the presence of third light in these objects is not problematic for our specific science case. Once the light curves were extracted, we removed NaNs and outliers, and we detrended the resulting light curve using the \textsc{lightkurve} flatten function. In some cases, there were trends at the beginning or end of the sectors as well as just before and after the mid-sector downlinks. In these cases, we removed the spurious points.
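The extraction step can be summarized with the following sketch, which operates on a synthetic $9 \times 9$ pixel cube standing in for a real TESS cutout (all numbers are invented; the actual analysis used \textsc{lightkurve}, whose flatten function applies a Savitzky-Golay filter rather than the simple polynomial used here).

```python
import numpy as np

# Illustrative central-pixel aperture photometry on a synthetic pixel cube.
rng = np.random.default_rng(0)
n_cad, ny, nx = 500, 9, 9
t = np.arange(n_cad) * 10.0 / (60.0 * 24.0)             # 10-min cadence [d]
signal = 1.0 + 0.05 * np.cos(4.0 * np.pi * t / 1.8855)  # eclipse-like variation

cube = rng.normal(100.0, 0.5, size=(n_cad, ny, nx))     # background pixels
cube[:, 4, 4] = 1000.0 * signal + rng.normal(0.0, 1.0, n_cad)  # target pixel

mask = np.zeros((ny, nx), dtype=bool)
mask[4, 4] = True                         # aperture mask: central pixel only
lc = cube[:, mask].sum(axis=1)            # simple aperture photometry

# crude detrend: divide out a low-order polynomial fit to the raw flux
lc_flat = lc / np.polyval(np.polyfit(t, lc, 2), t)
```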
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{V382_Cyg_fourier_tess.pdf}
\caption{Fourier spectrum of the TESS light curve associated with V382 Cyg. The inset shows a zoom in around the dominant frequency. The other peaks visible in the periodogram correspond to harmonics of the fundamental frequency.}
\label{fig:FT}
\end{figure}
\section{Methods}\label{Methods}
\subsection{Period determination with \textsc{PERIOD04}}
In order to accurately determine the orbital period ($P_\textrm{orb}$) from each photometric data set, we used the software package \textsc{PERIOD04} \citep{Lenz2005}. This tool, based on classical Fourier analysis techniques, is especially dedicated to the statistical analysis of large astronomical data sets containing gaps. Using \textsc{PERIOD04}, we computed the frequency spectrum of each light curve (see Fig. \ref{fig:FT} for an example of the Fourier spectrum of the TESS light curve for V382 Cyg) and identified the dominant periodicity in each data set. Since the orbital periods of our targets are known (see Table \ref{table:orb_sol}), we could easily determine whether the dominant frequencies detected by the software corresponded to the true periods, a fraction or multiple of these, or a different periodicity present in the data. When the dominant frequency was not associated with the known period, we pre-whitened its contribution from the original data and continued extracting and pre-whitening frequencies until we could measure the period.
The uncertainties associated with each measured frequency were calculated by means of Monte Carlo simulations computed with \textsc{PERIOD04}. For each data set, we generated 1000 simulated time series with the times of observation matching those from the real data and the magnitudes (or intensities) were calculated from the magnitudes predicted by the best fit plus Gaussian noise. For every time string, a least-squares calculation was performed and the frequency uncertainty was calculated from the distribution of the Monte Carlo results.
The frequency peaks associated with the orbital periods were easily identified in all data sets, with signal-to-noise ratios ranging between $\sim$5, for the shortest and most sparsely covered data sets, and $\sim$40, for the frequency spectra of the TESS light curves. For all data sets and all targets, the dominant frequencies that we found corresponded to double the true orbital frequencies, which is expected given the symmetry shown by overcontact binary light curves (see Fig. \ref{fig:LC}).
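The underlying frequency-extraction logic can be illustrated with a generic Lomb-Scargle periodogram (our analysis used \textsc{PERIOD04}; the light curve below is synthetic), including the factor-of-two correction between the dominant and orbital frequencies.

```python
import numpy as np
from scipy.signal import lombscargle

# Recover the dominant frequency of an unevenly sampled light curve and
# halve it: the near-symmetric eclipses of an overcontact binary put the
# strongest signal at twice the orbital frequency. All inputs are synthetic.
rng = np.random.default_rng(1)
p_orb = 1.8855                                   # "true" orbital period [d]
t = np.sort(rng.uniform(0.0, 200.0, 1000))       # sparse, irregular sampling
y = np.cos(2.0 * np.pi * (2.0 / p_orb) * t) + rng.normal(0.0, 0.3, t.size)
y -= y.mean()                                    # lombscargle expects zero mean

freqs = np.linspace(0.05, 5.0, 50_000)           # search grid [cycles/day]
power = lombscargle(t, y, 2.0 * np.pi * freqs)   # lombscargle takes rad/day
f_dom = freqs[np.argmax(power)]                  # dominant frequency
p_est = 2.0 / f_dom                              # estimated orbital period [d]
```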
Since our data sets vary widely in both cadence and time base, we treat some data sets slightly differently than others. While the period determination process used is the same, some are split into smaller data subsets in order to avoid period smearing. For example, due to the long time base ($\sim$20 years) and sporadic nature of the observations, we split the INTEGRAL-OMC data sets in half and analyze each independently. Similarly, we treat data from OGLE II, III and IV separately and determine independent periods for each. Finally, due to the biennial nature of the TESS mission, the TESS data set is divided by year.
\subsection{Determination of the change in period ($\dot{P}$)}
Once the periods associated with each dataset were determined, we fit a linear regression through the data for each object to determine the overall change in period ($\dot{P}$). In some cases, multiple filters or apertures were observed simultaneously for a given data set (e.g. ASAS and ANDICAM), so to avoid unfairly weighting these data sets, we only include the aperture or filter that returned the lowest sigma from the period determination step in the linear fit. In the case of OGLE, since the I- and V-band observations were not taken simultaneously, we include both as distinct data sets when available.
To avoid correlations between the two free parameters in the linear regression (namely the slope and the y-intercept), we offset the times such that a time of 0 corresponds to the midpoint between the first and last central Barycentric Julian Date (BJD) for each object. Here we define the central BJD of each data set as the midpoint of the observations. We optimize the two free parameters using the \texttt{curve\_fit} function of the \textsc{SciPy} package, which uses non-linear least squares \citep{Virtanen2020}.
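The fit can be sketched as follows, using the rounded V382 Cyg values from Table \ref{table:periods} as inputs; because of the rounding, the result only approximates the values quoted in Sect. \ref{Results}.

```python
import numpy as np
from scipy.optimize import curve_fit

# Straight-line fit through the per-dataset period measurements, with times
# shifted so t = 0 falls midway between the first and last central BJD,
# decorrelating the slope and the intercept.
def line(t, p0, pdot):
    return p0 + pdot * t

t_bjd = np.array([8452.0, 14163.0, 17261.0, 18710.0])         # central BJDs
periods = np.array([162910.0, 162909.2, 162909.3, 162911.0])  # periods [s]
sigma = np.array([3.0, 0.6, 0.5, 2.0])                        # 1-sigma [s]

t_yr = (t_bjd - 0.5 * (t_bjd[0] + t_bjd[-1])) / 365.25  # years from midpoint
popt, pcov = curve_fit(line, t_yr, periods, sigma=sigma, absolute_sigma=True)
p_mid, pdot = popt                      # period at midpoint [s], Pdot [s/yr]
pdot_err = np.sqrt(pcov[1, 1])
# the slope comes out consistent with zero, as found for V382 Cyg
```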
\section{Results}\label{Results}
Table \ref{table:periods} includes the central BJDs and orbital periods determined for each target from each independent data set together with their uncertainties. Additionally, it lists the $\dot{P}$ and corresponding errors as well as the $P/ |\dot{P}|$ for each object in our sample. A graphical representation of our results for V382 Cyg is shown in Fig. \ref{fig:V382_Cyg_pdot}, which presents the measured periods for each of its associated data sets as well as the linear fit through these defining the $\dot{P}$. A similar figure for each of the other objects in our sample can be found in Appendix \ref{appendix1}.
In general, the measured periods were well constrained with small error bars (on the order of 1 second or less) and agreed with one another within a few seconds for each object, with two notable exceptions. In the case of LSS 3074, the errors on the period measurements were larger than those of the rest of the sample by more than an order of magnitude, which in turn led to a larger error on the derived $\dot{P}$. The other notable exception is SMC 108086, which showed a small but statistically significant downward trend over the time frame of the observations.
The measured $\dot{P}$ values for each object were all on the order of 0.1 seconds per year or less, with the exception of LSS 3074, which was about an order of magnitude higher. That being said, SMC 108086 was the only object in our sample whose $\dot{P}$ measurement was not consistent with 0 within error. Calculating $P/|\dot{P}|$, we find that most of our sample has period variation timescales of $\sim$1 Myr or larger, while LSS 3074 shows a variation timescale closer to 0.3 Myr. These values indicate that all objects in our sample are evolving on the nuclear timescale.
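As a quick unit check (not an independent measurement), note that with $P$ in seconds and $\dot{P}$ in seconds per year, $P/|\dot{P}|$ comes out directly in years; the rounded V382 Cyg values from Table \ref{table:periods} reproduce the quoted timescale to within rounding.

```python
# P in seconds divided by Pdot in seconds per year gives years directly.
def period_change_timescale_myr(p_sec, pdot_sec_per_yr):
    return p_sec / abs(pdot_sec_per_yr) / 1.0e6

# e.g. V382 Cyg: P ~ 162909 s, Pdot ~ 0.01 s/yr  ->  ~16 Myr
print(period_change_timescale_myr(162909.0, 0.01))
```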
Several previous works have computed the period changes for some of the objects in our sample using various methods, and in general, we find a very good agreement between our measurements and previous measurements. In the case of V382 Cyg, there are a few independent period change measurements available in the literature \citep{Degirmenci1999,Qian2007, Yasarsoy2013}, and all indicate a period increase of between $\sim$ 0.03 and 0.04 seconds per year, which agrees with our measurement within error. For VFTS 352 the period change was never directly measured, however \citet{Almeida2015} reports a peak to peak period difference of $\sim2$ seconds over a 12.5 year time frame, which corresponds to an upper limit of $|\dot{P}|\leq$ 0.16 seconds per year. This value is in good agreement with the upper limit that we measure of 0.15 seconds per year. Finally, TU Mus has one period change measurement in the literature from \citet{Qian2007}, who measured $\dot{P} = 0.035$ seconds per year, which agrees nicely with our measurement within error.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{V382_Cyg_pdot.pdf}
\caption{Measured period as a function of time for V382 Cyg. The period is given in seconds and the period associated with the first observation, rounded to the nearest second, is subtracted. A best fit line is plotted in black and its associated uncertainties are represented with the shaded region.}
\label{fig:V382_Cyg_pdot}
\end{figure}
\begin{table*}
\caption{Measured periods for each of the data subsets for each system in the sample and the resulting period change and period stability.}
\centering
\setlength{\extrarowheight}{6pt}
\begin{tabular}{ccccccc}
\hline\hline
Object & Source & Central BJD & \multicolumn{2}{c}{Period} & $\dot{P}$ & $P/ |\dot{P}|$\\
& & (BJD - 2440000) & [d] & [s] & [s yr$^{-1}$] & [Myr] \\
\hline
\multirow{4}{*}{LSS 3074} & ANDICAM (V) & 12015 & 2.1844 $\pm$ 0.0006 & 188730 $\pm$ 50 & \multirow{4}{*}{-0.7 $\pm$ 2.4} & \multirow{4}{*}{$0.27^{+\infty}_{-0.21}$}\\
& ASAS (Ap3) & 13483 & 2.185090 $\pm$ 0.000014 & 188791.8 $\pm$ 1.2 & & \\
& TESS (yr 1) & 18610 & 2.1850 $\pm$ 0.0003 & 188780 $\pm$ 30 & & \\
& TESS (yr 3) & 19347 & 2.1834 $\pm$ 0.0013 & 188650 $\pm$ 120 & & \\
\hline
\multirow{4}{*}{MY Cam} & M\&V & 14662 & 1.175476 $\pm$ 0.000004 & 101561.1 $\pm$ 0.3 & \multirow{4}{*}{-0.1 $\pm$ 0.6} & \multirow{4}{*}{$1.0^{+\infty}_{-0.8}$}\\
& OMC (1/2) & 14458 & 1.175427 $\pm$ 0.000006 & 101556.9 $\pm$ 0.5 & & \\
& OMC (2/2) & 17770 & 1.175441 $\pm$ 0.000012 & 101558.1 $\pm$ 1.1 & & \\
& TESS (yr 2) & 18828 & 1.17543 $\pm$ 0.00011 & 101557. $\pm$ 9. & & \\
\hline
\multirow{7}{*}{SMC 108086} & OGLE II (I) & 11250 & 0.883097 $\pm$ 0.000003 & 76299.6 $\pm$ 0.3 & \multirow{7}{*}{-0.11 $\pm$ 0.08} & \multirow{7}{*}{$0.7^{+2.1}_{-0.3}$}\\
& OGLE III (I) & 13529 & 0.883102 $\pm$ 0.000001 & 76300.04 $\pm$ 0.08 & & \\
& OGLE III (V) & 14140 & 0.883084 $\pm$ 0.000003 & 76298.4 $\pm$ 0.2 & & \\
& OGLE IV (I) & 16018 & 0.883089 $\pm$ 0.000003 & 76298.9 $\pm$ 0.2 & & \\
& OGLE IV (V) & 16000 & 0.883103 $\pm$ 0.000014 & 76300.1 $\pm$ 1.2 & & \\
& TESS (yr 1) & 18353 & 0.88306 $\pm$ 0.00002 & 76296. $\pm$ 2. & & \\
& TESS (yr 3) & 19060 & 0.88301 $\pm$ 0.00002 & 76292.4 $\pm$ 1.8 & & \\
\hline
\multirow{6}{*}{TU Mus} & Hipparcos & 8411 & 1.387287 $\pm$ 0.000017 & 119861.6 $\pm$ 1.5 & \multirow{6}{*}{-0.005 $\pm$ 0.085} & \multirow{6}{*}{$25.4^{+\infty}_{-24.1}$}\\
& OMC (1/2) & 14443 & 1.387260 $\pm$ 0.000005 & 119859.3 $\pm$ 0.5 & & \\
& OMC (2/2) & 17941 & 1.387290 $\pm$ 0.000005 & 119861.9 $\pm$ 0.5 & & \\
& ASAS (Ap3) & 13527 & 1.387287 $\pm$ 0.000002 & 119861.6 $\pm$ 0.2 & & \\
& TESS (yr 1) & 18596 & 1.387271 $\pm$ 0.000016 & 119860.3 $\pm$ 1.4 & & \\
& TESS (yr 3) & 19333 & 1.387279 $\pm$ 0.000017 & 119860.9 $\pm$ 1.5 & & \\
\hline
\multirow{4}{*}{V382 Cyg} & Hipparcos & 8452 & 1.88553 $\pm$ 0.00003 & 162910. $\pm$ 3. & \multirow{4}{*}{0.01 $\pm$ 0.03} & \multirow{4}{*}{$16.5^{+\infty}_{-12.5}$}\\
& OMC (1/2) & 14163 & 1.885523 $\pm$ 0.000007 & 162909.2 $\pm$ 0.6 & & \\
& OMC (2/2) & 17261 & 1.885525 $\pm$ 0.000005 & 162909.3 $\pm$ 0.5 & & \\
& TESS & 18710 & 1.88554 $\pm$ 0.00003 & 162911. $\pm$ 2. & & \\
\hline
\multirow{6}{*}{VFTS 352} & OGLE III (I) & 13569 & 1.124167 $\pm$ 0.000001 & 97128.05 $\pm$ 0.10 & \multirow{6}{*}{-0.05 $\pm$ 0.10} & \multirow{6}{*}{$1.9^{+\infty}_{-1.2}$}\\
& OGLE III (V) & 13952 & 1.124154 $\pm$ 0.000005 & 97126.9 $\pm$ 0.5 & & \\
& OGLE IV (I) & 15988 & 1.124151 $\pm$ 0.000001 & 97126.62 $\pm$ 0.12 & & \\
& OGLE IV (V) & 15963 & 1.124162 $\pm$ 0.000003 & 97127.6 $\pm$ 0.3 & & \\
& TESS (yr 1) & 18489 & 1.124195 $\pm$ 0.000005 & 97130.4 $\pm$ 0.4 & & \\
& TESS (yr 3) & 19211 & 1.124172 $\pm$ 0.000006 & 97128.4 $\pm$ 0.5 & & \\
\hline
\end{tabular}
\label{table:periods}
\end{table*}
\section{Discussion}\label{Discussion}
In order to assess our theoretical understanding of the past and future evolution of massive overcontact systems, we compare our observations with population synthesis results adapted from \citet{Menon2021}. This population synthesis was originally computed from a grid of binary models corresponding to the metallicity of the LMC. The original parameter space of the models spans an initial total mass of $20-80$\,M$_{\odot}$, an initial period of $P_\textrm{i} = 0.6-2$\,days, and an initial mass ratio of $q_\textrm{i}=0.6-1$. Given that the current work focuses on O+O overcontact systems, we only consider models that have current primary and secondary masses $\geq14$\,M$_{\odot}$ to compute the theoretical distribution of the observed parameters, namely $P_\textrm{orb}$, $q$ and $\dot{P}$. The reader is referred to \citet{Menon2021} for a more detailed description of the population synthesis computations.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{ppdot_vs_q_full_square_rot_ax_errorbars.pdf}
\caption{Normalized theoretical probability distribution of the $P / |\dot{P}|$ as a function of the mass ratio based on models from \citet{Menon2021}. The background color represents the probability of finding a system with the given combination of parameters. Lighter colors represent lower probabilities while darker colors represent higher probabilities. Each of the four panels represents a different period bin, which is indicated in the upper left corner. The locations of the observed overcontact systems are indicated with black dots and labeled. Error bars are also plotted for each system and when applicable, arrows are used to indicate that the value does not have an upper limit.}
\label{fig:q_vs_PPdot}
\end{figure*}
In general, we find that the models are able to reproduce the orbital parameters of the observed systems. However, based on Fig. \ref{fig:q_vs_PPdot}, it is clear that the observed systems do not follow the expected distribution as determined via population synthesis. Almost all of the systems in the sample fall in low probability regions of the parameter space, indicating that these combinations of parameters are expected to be either very short-lived or rare.
A notable feature of the population synthesis results, which can be seen in Fig. \ref{fig:q_vs_PPdot} and the left panel of Fig. \ref{fig:P_i_comparison}, is that the theoretical distribution for $P / |\dot{P}|$ peaks at around 100 Myr, which is longer than the expected main-sequence lifetime for stars in this mass range. Since most of our measured $\dot{P}$ values are consistent with zero, we are unable to rule out this possibility; however, it is unlikely that these theoretical timescales are reliable. The large $P/|\dot{P}|$ values from the models are likely due to the way in which mass transfer is implemented during the contact phase in \texttt{MESA} \citep{Paxton2015,Marchant2016}. The mass transfer rate during the contact phase slows down to the order of 10$^{-7}$\,M$_{\odot}$/yr as soon as $q$ becomes close to 1, after which the mass ratio asymptotically approaches $q = 1$ until the system finally merges \citep{Menon2021}. This causes the models to spend the majority of their main-sequence lives with mass ratios close to 1, as reflected in the theoretical distributions. The observations, on the other hand, do not seem to support this, as the mass ratios are fairly well distributed between 0.6 and 1.
Our observed mass ratio distribution is consistent with findings for lower mass contact systems as well, where low mass ratio contact systems are common \citep[see e.g., ][ and references therein]{Yang2015, Qian2020}. In studies of low mass convective core contact systems, observations have shown that the period stabilities are comparable to the values that we find here. Further, several systems have period changes suggesting that they are evolving towards a lower mass ratio rather than to 1 \citep{Yang2015}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.45\linewidth]{2dhist_P_Pdot_0.0_errs.pdf}
\includegraphics[width=0.45\linewidth]{2dhist_P_Pdot_1.2_errs.pdf}
\caption{Same as Fig. \ref{fig:q_vs_PPdot} but for different initial period ranges: $P_\textrm{i} = 0.6-2.0$\,days and $P_\textrm{i} = 1.2-2.0$\,days for the left and right panels, respectively. Additionally, 1D histograms corresponding to each of the axes are plotted.}
\label{fig:P_i_comparison}
\end{figure*}
Among the O+O models, we find that the main source of the q = 1 contact binaries is models with initial periods $P_\textrm{i} \leq 1.2$\,days. If we only consider models with initial periods larger than 1.2\,days, while the peak of the distribution still lies at q = 1, the distribution flattens considerably over the $q$ dimension, and we begin to see a clear correlation between the mass ratio and the period stability (see Fig. \ref{fig:P_i_comparison}). Interestingly, however, this correlation does not appear to be present in the observed distribution, suggesting that these systems may not equalize on the timescales that the models predict. That being said, these findings may suggest that the observed overcontact binaries are originating from systems with longer initial periods \citep[in line with findings from ][]{Ramirez-Tannus2017, Ramirez-Tannus2021, Sana2017}; however, a more dedicated theoretical investigation is needed to confirm this hypothesis.
While there appears to be a definite discrepancy between the observed population and the one predicted from population synthesis, there are several factors that should be considered before drawing conclusions:
First, it should be noted that the models from \citet{Menon2021} are calculated assuming LMC metallicity, while most of our sample is Galactic. This difference in metallicity could affect the periods and period stabilities as massive stars at higher metallicities tend to have slightly larger radii and stronger winds at the same evolutionary stage, which may lead to shorter overall period stabilities.
Additionally, the population synthesis results assume systems that have an initial period of two days or less and assume that all mass transfer is conservative. Given that higher initial periods seem to allow for a more even q distribution, including initial periods of greater than two days in the population synthesis could allow for a better agreement between the population synthesis results and the observations.
Furthermore, the population synthesis results assume the systems have an initial mass ratio of greater than 0.6. As discussed in \citet{Menon2021}, the likelihood of a system coming into contact, as well as the duration of the contact phase are strongly correlated with the initial mass ratio, implying that these systems would represent a small minority of the currently observable contact systems. That being said, the inclusion of systems with lower initial mass ratios may allow for a marginally better agreement between the population synthesis results and the observations.
An additional factor to consider is that the binary models do not include energy transfer. Considering the mass-radius relationship of single stars, as well as the strict relationship on their radii when a system is in contact, stable overcontact systems with a mass ratio away from unity would not be expected to exist theoretically \citep{Kuiper1941}. However, as energy transfer is expected to occur in overcontact layers, the mass-radius relationship becomes dependent on the mass ratio and separation of the system, potentially allowing for stable solutions to exist \citep[see e.g., ][]{shu1976}. A detailed analysis on the impact of energy transfer on populations of massive overcontact binaries has, however, not been done yet.
Finally, the implementation of the contact scheme itself in \texttt{MESA}, leads to the binary model spending an inordinately large amount of its contact lifetime close to a mass ratio of q = 1. This may indicate the requirement to improve the current contact scheme used in our models. While each of these assumptions will surely affect the final distribution, it is unlikely that the changes would be significant enough to rectify the discrepancy between the observations and the theoretical predictions. This could however account for the mass ratio gap that is seen in the bottom right panel of Fig. \ref{fig:q_vs_PPdot}, and could perhaps allow the models to reproduce the location of LSS 3074.
One additional point to consider involves the comparison of the observed period stability with the theoretical values. As discussed in \citet{King2021}, the measured $P / |\dot{P}|$ may be misleading on small time scales as changes in period can be caused by variations in the flow or temporary digressions from synchronicity. Over the long term, these fluctuations would average out, allowing a more robust comparison with theoretical models. It should be noted, however, that \citet{King2021} and studies like it \citep[see e.g.][]{Pringle1975} focus on ultraluminous X-ray (ULX) sources, where the primary stars are overflowing through L1, transferring mass to their companions. It is unlikely that overcontact systems would suffer from the same level of period variations as ULX sources given that overcontact systems are expected to be in hydrostatic equilibrium and rotating synchronously. Nevertheless, comparing the $P / |\dot{P}|$ of the complete sample of O+O overcontact systems as a whole instead of individual sources allows us to circumvent this potential issue.
\section{Conclusions}\label{Conclusions}
We have performed a period stability study of known O+O type overcontact systems. Using archival photometric data and the software package PERIOD04, we calculated the periods of the systems over a time span of tens of years. For each system in our sample, we determined the rate at which the period is changing via a linear regression through the period measurements of each data subset. We find that all systems in our sample show period changes consistent with 0 with the exception of SMC 108086, which shows a slight but non-negligible negative period change. These results indicate that all of the systems in our sample have periods that are stable on the nuclear timescale. Furthermore, we find no correlation between the mass ratio and the period stability, implying that these systems will continue to evolve as unequal mass overcontact binaries.
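As an illustration of this procedure, the slope of a straight-line fit through period measurements directly yields $\dot{P}$, and hence the stability timescale $P/|\dot{P}|$. The sketch below uses synthetic numbers and hypothetical variable names, not our actual pipeline or measurements:

```python
# Sketch: estimate dP/dt (dimensionless, days per day) and the stability
# timescale P/|Pdot| from period measurements via a least-squares line fit.
# All numbers below are synthetic, for illustration only.

def period_change_rate(t_days, p_days):
    """Return the least-squares slope dP/dt and the mean period (days)."""
    n = len(t_days)
    t_mean = sum(t_days) / n
    p_mean = sum(p_days) / n
    num = sum((t - t_mean) * (p - p_mean) for t, p in zip(t_days, p_days))
    den = sum((t - t_mean) ** 2 for t in t_days)
    return num / den, p_mean

# Synthetic measurements: a 1.1242-day period drifting by 1e-9 days per day.
epochs = [0.0, 4000.0, 8000.0, 12000.0]        # days
periods = [1.1242 + 1e-9 * t for t in epochs]  # days

pdot, p0 = period_change_rate(epochs, periods)
stability_yr = (p0 / abs(pdot)) / 365.25       # P/|Pdot| in years
```

When the fitted slope is consistent with zero within its uncertainty, only a lower limit on $P/|\dot{P}|$ can be quoted, which is why several systems in Fig.~\ref{fig:q_vs_PPdot} carry arrows instead of upper error bars.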
Comparing our results with population synthesis simulations, we find discrepancies between the predicted and observed distributions. While the population synthesis simulations predict that the overwhelming majority of overcontact systems should be found in equal-mass systems, the mass ratios of the observed systems are fairly evenly distributed between $q = $ 0.6 and 1. This discrepancy is marginally lessened by removing the shortest period systems in the population synthesis simulations, suggesting that the observed population of overcontact systems may have originated from binaries with longer initial periods. A more in-depth theoretical investigation is needed to confirm this, however. That being said, without a larger sample size, it is difficult to draw strong conclusions, highlighting the need for a dedicated effort to search for and characterize currently undiscovered massive overcontact systems.
\begin{acknowledgements}
This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by NASA’s Science Mission directorate.
This research made use of Lightkurve, a Python package for Kepler and TESS data analysis \citep{LightkurveCollaboration2018}.
AM would like to thank the Alexander von Humboldt foundation for supporting this project.
L.M. thanks the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) for their support in the framework of the PRODEX Programme.
PM acknowledges support from the FWO junior postdoctoral fellowship No. 12ZY520N.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
In the \emph{nucleated instability\/} (also called core
instability) hypothesis of giant planet
formation, a critical mass for static core envelope
protoplanets has been found. \citet{mizuno} determined
the critical mass of the core to be about $12 \,M_\oplus$
($M_\oplus=5.975 \times 10^{27}\,\mathrm{g}$ is the Earth mass), which
is independent of the outer boundary
conditions and therefore independent of the location in the
solar nebula. This critical value for the core mass corresponds
closely to the cores of today's giant planets.
Although no hydrodynamical study has been available, many workers
conjectured that a collapse or rapid contraction will ensue
after accumulating the critical mass. The main motivation for
this article
is to investigate the stability of the static envelope at the
critical mass. With this aim the local, linear stability of static
radiative gas spheres is investigated on the basis of Baker's
(\citeyear{baker}) standard one-zone model.
Phenomena similar to the ones described above for giant planet
formation have been found in hydrodynamical models concerning
star formation where protostellar cores explode
(Tscharnuter \citeyear{tscharnuter}, Balluch \citeyear{balluch}),
whereas earlier studies found quasi-steady collapse flows. The
similarities in the (micro)physics, i.e., constitutive relations of
protostellar cores and protogiant planets serve as a further
motivation for this study.
\section{Baker's standard one-zone model}
\begin{figure*}
\centering
\caption{Adiabatic exponent $\Gamma_1$.
$\Gamma_1$ is plotted as a function of
$\lg$ internal energy $\mathrm{[erg\,g^{-1}]}$ and $\lg$
density $\mathrm{[g\,cm^{-3}]}$.}
\label{FigGam}%
\end{figure*}
In this section the one-zone model of \citet{baker},
originally used to study the Cephe{\"{\i}}d pulsation mechanism, will
be briefly reviewed. The resulting stability criteria will be
rewritten in terms of local state variables, local timescales and
constitutive relations.
\citet{baker} investigates the stability of thin layers in
self-gravitating,
spherical gas clouds with the following properties:
\begin{itemize}
\item hydrostatic equilibrium,
\item thermal equilibrium,
\item energy transport by grey radiation diffusion.
\end{itemize}
For the one-zone-model Baker obtains necessary conditions
for dynamical, secular and vibrational (or pulsational)
stability (Eqs.\ (34a,\,b,\,c) in Baker \citeyear{baker}). Using Baker's
notation:
\[
\begin{array}{lp{0.8\linewidth}}
M_{r} & mass internal to the radius $r$ \\
m & mass of the zone \\
r_0 & unperturbed zone radius \\
\rho_0 & unperturbed density in the zone \\
T_0 & unperturbed temperature in the zone \\
L_{r0} & unperturbed luminosity \\
E_{\mathrm{th}} & thermal energy of the zone
\end{array}
\]
\noindent
and with the definitions of the \emph{local cooling time\/}
(see Fig.~\ref{FigGam})
\begin{equation}
\tau_{\mathrm{co}} = \frac{E_{\mathrm{th}}}{L_{r0}} \,,
\end{equation}
and the \emph{local free-fall time}
\begin{equation}
\tau_{\mathrm{ff}} =
\sqrt{ \frac{3 \pi}{32 G} \frac{4\pi r_0^3}{3 M_{\mathrm{r}}}
}\,,
\end{equation}
Baker's $K$ and $\sigma_0$ have the following form:
\begin{eqnarray}
\sigma_0 & = & \frac{\pi}{\sqrt{8}}
\frac{1}{ \tau_{\mathrm{ff}}} \\
K & = & \frac{\sqrt{32}}{\pi} \frac{1}{\delta}
\frac{ \tau_{\mathrm{ff}} }
{ \tau_{\mathrm{co}} }\,;
\end{eqnarray}
where $ E_{\mathrm{th}} \approx m (P_0/{\rho_0})$ has been used and
\begin{equation}
\delta = - \left(
\frac{ \partial \ln \rho }{ \partial \ln T }
\right)_P
\end{equation}
is a thermodynamical quantity which is of order $1$ and equal to $1$
for nonreacting mixtures of classical perfect gases. The physical
meaning of $ \sigma_0 $ and $K$ is clearly visible in the equations
above. $\sigma_0$ represents a frequency of the order one per
free-fall time. $K$ is proportional to the ratio of the free-fall
time and the cooling time. Substituting into Baker's criteria, using
thermodynamic identities and definitions of thermodynamic quantities,
\begin{displaymath}
\Gamma_1 = \left( \frac{ \partial \ln P}{ \partial\ln \rho}
\right)_{S} \, , \;
\chi^{}_\rho = \left( \frac{ \partial \ln P}{ \partial\ln \rho}
\right)_{T} \, , \;
\kappa^{}_{P} = \left( \frac{ \partial \ln \kappa}{ \partial\ln P}
\right)_{T}
\end{displaymath}
\begin{displaymath}
\nabla_{\mathrm{ad}} = \left( \frac{ \partial \ln T}
{ \partial\ln P} \right)_{S} \, , \;
\chi^{}_T = \left( \frac{ \partial \ln P}
{ \partial\ln T} \right)_{\rho} \, , \;
\kappa^{}_{T} = \left( \frac{ \partial \ln \kappa}
{ \partial\ln T} \right)_{P}
\end{displaymath}
one obtains, after some pages of algebra, the conditions for
\emph{stability\/} given
below:
\begin{eqnarray}
\frac{\pi^2}{8} \frac{1}{\tau_{\mathrm{ff}}^2}
( 3 \Gamma_1 - 4 )
& > & 0 \label{ZSDynSta} \\
\frac{\pi^2}{\tau_{\mathrm{co}}
\tau_{\mathrm{ff}}^2}
\Gamma_1 \nabla_{\mathrm{ad}}
\left[ \frac{ 1- 3/4 \chi^{}_\rho }{ \chi^{}_T }
( \kappa^{}_T - 4 )
+ \kappa^{}_P + 1
\right]
& > & 0 \label{ZSSecSta} \\
\frac{\pi^2}{4} \frac{3}{\tau_{ \mathrm{co} }
\tau_{ \mathrm{ff} }^2
}
\Gamma_1^2 \, \nabla_{\mathrm{ad}} \left[
4 \nabla_{\mathrm{ad}}
- ( \nabla_{\mathrm{ad}} \kappa^{}_T
+ \kappa^{}_P
)
- \frac{4}{3 \Gamma_1}
\right]
& > & 0 \label{ZSVibSta}
\end{eqnarray}
For a physical discussion of the stability criteria see \citet{baker} or \citet{cox}.
We observe that these criteria for dynamical, secular and
vibrational stability, respectively, can be factorized into
\begin{enumerate}
\item a factor containing local timescales only,
\item a factor containing only constitutive relations and
their derivatives.
\end{enumerate}
The first factors, depending on only timescales, are positive
by definition. The signs of the left hand sides of the
inequalities~(\ref{ZSDynSta}), (\ref{ZSSecSta}) and (\ref{ZSVibSta})
therefore depend exclusively on the second factors containing
the constitutive relations. Since they depend only
on state variables, the stability criteria themselves are \emph{
functions of the thermodynamic state in the local zone}. The
one-zone stability can therefore be determined
from a simple equation of state, given for example, as a function
of density and
temperature. Once the microphysics, i.e.\ the thermodynamics
and opacities (see Table~\ref{KapSou}), are specified (in practice
by specifying a chemical composition) the one-zone stability can
be inferred if the thermodynamic state is specified.
The zone -- or in
other words the layer -- will be stable or unstable in
whatever object it is imbedded as long as it satisfies the
one-zone-model assumptions. Only the specific growth rates
(depending upon the time scales) will be different for layers
in different objects.
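For concreteness, the two local timescales defined above are easy to evaluate numerically. The sketch below implements $\tau_{\mathrm{ff}}$, $\tau_{\mathrm{co}}$ and Baker's $K$ (with $\delta = 1$); the solar-order input values are illustrative assumptions, not quantities from this article:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tau_ff(r0, m_r):
    """Local free-fall time: sqrt( 3*pi/(32 G) * (4*pi*r0**3)/(3*M_r) ), in s."""
    return math.sqrt((3 * math.pi / (32 * G)) * (4 * math.pi * r0**3) / (3 * m_r))

def tau_co(e_th, l_r0):
    """Local cooling time: thermal energy of the zone over its luminosity."""
    return e_th / l_r0

# Illustrative solar-order values (assumed): radius, mass, luminosity, and a
# thermal energy of order G M^2 / R for the whole configuration.
r0, m_r, l_r0 = 6.957e8, 1.989e30, 3.828e26   # SI units
e_th = 5.0e41                                  # J, order-of-magnitude estimate

t_ff = tau_ff(r0, m_r)                         # ~ half an hour
t_co = tau_co(e_th, l_r0)                      # ~ tens of Myr
K = (math.sqrt(32) / math.pi) * t_ff / t_co    # dimensionless, delta = 1
```

For such a slowly cooling configuration $K \ll 1$, i.e.\ the free-fall time is vastly shorter than the cooling time.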
\begin{table}
\caption[]{Opacity sources.}
\label{KapSou}
$$
\begin{array}{p{0.5\linewidth}l}
\hline
\noalign{\smallskip}
Source & T / {[\mathrm{K}]} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Yorke 1979, Yorke 1980a & T \leq 1700 \\
Kr\"ugel 1971 & 1700 \leq T \leq 5000 \\
Cox \& Stewart 1969 & 5000 \leq T \\
\noalign{\smallskip}
\hline
\end{array}
$$
\end{table}
We will now write down the sign (and therefore stability)
determining parts of the left-hand sides of the inequalities
(\ref{ZSDynSta}), (\ref{ZSSecSta}) and (\ref{ZSVibSta}) and thereby
obtain \emph{stability equations of state}.
The sign determining part of inequality~(\ref{ZSDynSta}) is
$3\Gamma_1 - 4$ and it reduces to the
criterion for dynamical stability
\begin{equation}
\Gamma_1 > \frac{4}{3}\,\cdot
\end{equation}
Stability of the thermodynamical equilibrium demands
\begin{equation}
\chi^{}_\rho > 0, \;\; c_v > 0\, ,
\end{equation}
and
\begin{equation}
\chi^{}_T > 0
\end{equation}
holds for a wide range of physical situations.
With
\begin{eqnarray}
\Gamma_3 - 1 = \frac{P}{\rho T} \frac{\chi^{}_T}{c_v}&>&0\\
\Gamma_1 = \chi_\rho^{} + \chi_T^{} (\Gamma_3 -1)&>&0\\
\nabla_{\mathrm{ad}} = \frac{\Gamma_3 - 1}{\Gamma_1} &>&0
\end{eqnarray}
we find the sign determining terms in inequalities~(\ref{ZSSecSta})
and (\ref{ZSVibSta}) respectively and obtain the following form
of the criteria for dynamical, secular and vibrational
\emph{stability}, respectively:
\begin{eqnarray}
3 \Gamma_1 - 4 =: S_{\mathrm{dyn}} > & 0 & \label{DynSta} \\
\frac{ 1- 3/4 \chi^{}_\rho }{ \chi^{}_T } ( \kappa^{}_T - 4 )
+ \kappa^{}_P + 1 =: S_{\mathrm{sec}} > & 0 & \label{SecSta} \\
4 \nabla_{\mathrm{ad}} - (\nabla_{\mathrm{ad}} \kappa^{}_T
+ \kappa^{}_P)
- \frac{4}{3 \Gamma_1} =: S_{\mathrm{vib}}
> & 0\,.& \label{VibSta}
\end{eqnarray}
The constitutive relations are to be evaluated for the
unperturbed thermodynamic state (say $(\rho_0, T_0)$) of the zone.
We see that the one-zone stability of the layer depends only on
the constitutive relations $\Gamma_1$,
$\nabla_{\mathrm{ad}}$, $\chi_T^{},\,\chi_\rho^{}$,
$\kappa_P^{},\,\kappa_T^{}$.
These depend only on the unperturbed
thermodynamical state of the layer. Therefore the above relations
define the one-zone-stability equations of state
$S_{\mathrm{dyn}},\,S_{\mathrm{sec}}$
and $S_{\mathrm{vib}}$. See Fig.~\ref{FigVibStab} for a picture of
$S_{\mathrm{vib}}$. Regions of secular instability are
listed in Table~1.
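Because $S_{\mathrm{dyn}}$, $S_{\mathrm{sec}}$ and $S_{\mathrm{vib}}$ depend only on local state quantities, they can be evaluated directly once the constitutive relations are specified. The sketch below is an illustration (not code from this work); the chosen state corresponds to a monatomic ideal gas with an assumed Kramers-like opacity, $\kappa_P = 1$ and $\kappa_T = -4.5$:

```python
def stability_eos(gamma1, nabla_ad, chi_rho, chi_t, kappa_p, kappa_t):
    """Sign-determining parts of Baker's one-zone criteria.

    Positive values mean dynamical, secular and vibrational stability,
    respectively, per the three inequalities above.
    """
    s_dyn = 3.0 * gamma1 - 4.0
    s_sec = (1.0 - 0.75 * chi_rho) / chi_t * (kappa_t - 4.0) + kappa_p + 1.0
    s_vib = 4.0 * nabla_ad - (nabla_ad * kappa_t + kappa_p) - 4.0 / (3.0 * gamma1)
    return s_dyn, s_sec, s_vib

# Monatomic ideal gas: Gamma_1 = 5/3, nabla_ad = 2/5, chi_rho = chi_T = 1;
# Kramers-like opacity (assumed example values): kappa_P = 1, kappa_T = -4.5.
s_dyn, s_sec, s_vib = stability_eos(5.0 / 3.0, 0.4, 1.0, 1.0, 1.0, -4.5)
```

This example state is dynamically and vibrationally stable ($S_{\mathrm{dyn}}, S_{\mathrm{vib}} > 0$) but secularly unstable ($S_{\mathrm{sec}} < 0$).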
\begin{figure}
\centering
\caption{Vibrational stability equation of state
$S_{\mathrm{vib}}(\lg e, \lg \rho)$.
$>0$ means vibrational stability.
}
\label{FigVibStab}
\end{figure}
\section{Conclusions}
\begin{enumerate}
\item The conditions for the stability of static, radiative
layers in gas spheres, as described by Baker's (\citeyear{baker})
standard one-zone model, can be expressed as stability
equations of state. These stability equations of state depend
only on the local thermodynamic state of the layer.
\item If the constitutive relations -- equations of state and
Rosseland mean opacities -- are specified, the stability
equations of state can be evaluated without specifying
properties of the layer.
\item For solar composition gas the $\kappa$-mechanism is
working in the regions of the ice and dust features
in the opacities, the $\mathrm{H}_2$ dissociation and the
combined H, first He ionization zone, as
indicated by vibrational instability. These regions
of instability are much larger in extent and degree of
instability than the second He ionization zone
that drives the Cephe{\"\i}d pulsations.
\end{enumerate}
\begin{acknowledgements}
Part of this work was supported by the German
\emph{Deut\-sche For\-schungs\-ge\-mein\-schaft, DFG\/} project
number Ts~17/2--1.
\end{acknowledgements}
\section{Details of the Experimental System}
\subsection{Experimental Design}
The basic experimental design (see Fig.~2 upper panel of main text for schematic) follows the setup reported in Ref. \cite{Bandi2013} with several design improvements to be elaborated below. It consisted of a quasi two-dimensional chamber constructed from a steel frame of inner dimensions 0.6 m $\times$ 1.1 m $\times$ 0.02 m, with a transparent acrylic bottom plate. A transparent Teflon sheet was glued to the top side of the acrylic bottom plate facing into the chamber to reduce friction between disks and the bottom plate. Whereas our primary measurements are conducted with disks comprised of stress birefringent (photoelastic) polymer, the current experiments were conducted with disks machined out of Lexan Polycarbonate. We do not detail photoelastic measurements here. The top and bottom of each disk was glued with a transparent teflon sheet, but the sides were intentionally machined with high roughness to achieve inter-disk contact friction responsible for friction-induced hysteresis studied here. The machined roughness yielded a static friction coefficient of $\mu = 0.21$ between disk contacts as measured using the method detailed in Ref. \cite{Bandi2013}. The friction between disks and bottom plate was measured to have a static friction coefficient of $\mu = 0.04$.
Our experiments were designed to achieve high precision in translation using feedback control {\it via} capacitive displacement sensors. The setup therefore demanded very high machining tolerance in dimensions of the chamber frame as well as the internal and external movable and immovable boundaries. Since it is impossible to achieve machining tolerance of 100 nm over part lengths of meter dimensions, the frame and boundary parts were machined in individual pieces of 10 cm length on Computer Numerically Controlled mills. These individual parts were then assembled in an interlocking French cleat mechanism with a 37.5$^{\circ}$ ratchet geometry and held by screws to assure structural rigidity and obtain a contiguous structure. The assembled chamber was clamped rigid to an optical table standing on a graded concrete foundation and floated by compressed air to isolate extraneous ground vibrations. A bidisperse set of large (diameter $D_L = 1.5$ cm) and small (diameter $D_S = 1$ cm) disks of thickness 0.975 cm in equal number ratio were placed in the quasi two-dimensional experimental chamber with two opposing movable boundaries (Compression Axis) and two transverse, immovable boundaries (Transverse Axis). Four acoustic transducers, placed asymmetrically as shown in Fig.~2 upper panel of main text (also see Fig.~\ref{MemSIFig1}a), provided spatially homogeneous acoustic excitation to the pack as explained later.
\begin{figure*}
\begin{center}
\includegraphics[width = 7 in]{MemSIFig1.eps}
\end{center}
\caption{(Color online) Compression axis schematic (a) Side view: A linear bearing passes through the interrogation chamber frame and connects an internal boundary to an external one. The internal boundary avoids frictional contact with the acrylic bottom (1 mm gap), but makes contact with, and moves disks. A force sensor (red) located within the external boundary makes contact with the linear bearing rod and is held rigid in place by a screw. A circular polarized DC LED light source illuminates the pack from below. (b) Top view: The primary translation stage moves the outer and inner boundaries. A secondary translation stage is capacitively coupled to the movable boundary through a parallel plate capacitor (light blue). Both primary and secondary stages are motorized by LabView. After adjusting initial capacitor gap to 50 nm, the secondary stage is held stationary while the primary stage displaces boundaries and increases the capacitor gap. A 5 V DC signal across the capacitor is sampled at 10 KHz, constantly monitors displacement and achieves a minimum precise quasi-static step of $\Delta L/2 = 250$ nm per boundary through closed loop LabView control.}
\label{MemSIFig1}
\end{figure*}
An internal steel boundary extended 5 cm into the chamber from each chamber wall, thus providing actual interrogation chamber dimensions of 0.5 m ($L$) $\times$ 1 m ($W$) $\times$ 0.02 m ($D$). The internal boundaries are what we show in the schematic presented in Fig.~2 upper panel of main text marked ``Movable'' and ``Fixed'', and in Fig.~\ref{MemSIFig1}a and b as the ``Internal movable boundary'' for the compression axis perimeter. As schematically shown in Fig.~\ref{MemSIFig1}a, the internal boundaries maintain a 1 mm vertical gap with the bottom acrylic plate, but do contact disks whose height extends to 0.975 cm. Four steel rods connect each internal boundary to an external boundary by passing through high precision linear bearings placed within the steel chamber frame. A precision force sensor was located at the terminating point of each steel rod within the external boundary to measure the boundary pressure of the granular pack. A total of 16 sensors (4 per boundary) provided the boundary pressure readout. Given the absence of frictional contact between the internal boundaries and the acrylic bottom, and the well-lubricated translation provided by the linear bearings, the forces measured by the sensors were predominantly those experienced by the granular pack alone, with no systematic errors in the measured signals, as will be explained in the following.
The external boundaries along the transverse axis were clamped rigid to the floating optical table thereby rendering the transverse axis boundaries immovable. On the other hand, the external boundaries along the compression axis connected to a high precision motorized translation stage to achieve quasi-static compression. A circular DC light source illuminated with cold LEDs was placed under the experimental chamber to provide backlit illumination through the transparent acrylic bottom. Since measurements reported here only concern boundary pressure measurements, and do not involve imaging, the details will be presented elsewhere.
\begin{figure*}
\begin{center}
\includegraphics[width = 7 in]{MemSIFig2.eps}
\end{center}
\caption{(Color online) (a) The force sensor (red) within the external boundary is enclosed in a heat insulating ceramic jacket (yellow). Its nipple makes contact with the linear bearing rod with a ceramic heat insulating disk glued to its end. Nitrogen (N$_2$ gas) at 68K flows through to cool the sensor. (b) Electronic circuit schematic for correlated noise removal has two inputs, one from the DC regulated voltage source which drives the sensor and the sensor output. Noise in sensor output due to voltage driver is spectrally filtered out and fed to LabView data acquisition system. (c) Probability density function (PDF) of force with correlated noise (solid black circle) follows the Generalized Extreme Value (GEV) distribution (solid black line) whereas the noise post cross-correlator stage (solid red squares) is fit very well by a Gaussian (dashed red line). Force fluctuations measured post cross-correlation follows Johnson-Nyquist form and (d) its standard deviation $\sigma_F$ (solid black circle) falls as square root of Temperature (solid black line). The linear fit (dashed red line) is plotted for comparison.}
\label{MemSIFig2}
\end{figure*}
\subsection{Movable Boundary Translation}
The compression axis boundary positions were controlled by very high precision motorized translation stages as schematically shown in Fig.~\ref{MemSIFig1}a and b. A Newport CONEX MFA-CC servo translation stage with 250 mm translational distance and 100 nm minimum step size was employed for automated translation control. We found the CONEX-CC controller usually supplied with the stage did not offer the precision we desired. We therefore constructed a precision displacement sensor in-house to measure quasi-static compressive displacements along the movable boundary. Our choice of in-house design for the displacement sensor was dictated by experimental considerations. We found optical interferometric displacement sensors failed due to unavoidable mechanical vibrations induced during the experimental protocol. Instead we employed a set of two translation stages separated by 25 nm, and a home-built parallel plate capacitor was installed as shown in Fig.~\ref{MemSIFig1}b. The parallel plate capacitor was used as a capacitive displacement sensor and was interfaced with a LabView data acquisition and control system to monitor displacements at 10 kHz sampling frequency and control compression axis quasi-static displacements down to steps of $\Delta L/2 = 250$ nm. Our tests showed the real achievable displacement resolution was 17.6 nm, with an uncontrollable backlash of 2.35 nm introduced through the stage manufacturing process, both being far below our desired displacement resolution of 250 nm.
We employed the following quasi-static compression protocol:\\
1) At the start of each quasi-static step, the secondary stage was moved into position to achieve a parallel plate capacitor gap of 50 nm between the primary and secondary stages.\\
2) Keeping the secondary stage position fixed, the primary stage was translated at a speed of 1 mm/s, with capacitance being monitored at 10 KHz sampling frequency by the LabView system.\\
3) The closed loop control circuit implemented in LabView automatically stopped the primary translation stage once a distance of $\Delta L/2$ was achieved, where the minimum achievable $\Delta L/2$ was 250 nm.\\
4) This quasi-static compressive displacement transforms to a quasi-static step in packing fraction $\Delta \Phi = V/(\Delta L \times W)$, since both opposing movable boundaries were translated simultaneously. This latter point albeit subtle, becomes important in light of the strong protocol dependence observed in frictional measurements.
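The closed-loop logic of steps 1--3 reduces to converting each capacitance sample to a plate gap through the parallel-plate relation $d = \varepsilon_0 A / C$ and halting the stage once the gap has opened by the commanded step. A minimal sketch with hypothetical names and plate dimensions (the actual control loop runs in LabView against hardware):

```python
# Sketch of the capacitive stop condition used in the quasi-static protocol:
# gap d = eps0 * A / C for a parallel plate capacitor; halt once the gap has
# grown by the commanded step Delta L / 2. The plate area is an assumed value.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
PLATE_AREA = 1.0e-4     # m^2, hypothetical 1 cm^2 plates

def gap_from_capacitance(c_farads):
    """Parallel-plate gap implied by a capacitance reading."""
    return EPS0 * PLATE_AREA / c_farads

def step_complete(c_start, c_now, step_m):
    """True once the capacitor gap has opened by at least step_m."""
    return gap_from_capacitance(c_now) - gap_from_capacitance(c_start) >= step_m

# Example: initial 50 nm gap; stop criterion for a Delta L / 2 = 250 nm step.
c_initial = EPS0 * PLATE_AREA / 50e-9   # capacitance at the 50 nm starting gap
c_later = EPS0 * PLATE_AREA / 310e-9    # reading once the gap reaches 310 nm
```

In the real system the loop samples at 10 kHz, so the stage overshoot is bounded by the distance traveled in one sampling interval at the 1 mm/s drive speed.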
\subsection{Boundary Force Measurement}
The boundary pressure was measured with subminiature force sensors (LCMKD-50N, Omegadyne). Each force sensor was placed within the external boundary and held in place by a screw from one end. At the other end, the nipple of the button sensor made contact during compression with the linear bearing rod which terminated within the external boundary. The sensor was encased within a 3D printed (Visijet PXL-Core 3D printer) heat insulating ceramic jacket and a ceramic tip was glued to the terminating end of the linear bearing rod as shown in Fig.~\ref{MemSIFig2}a. A small gap was maintained between the sensor and linear bearing rod through which Nitrogen gas at 68K flowed through a hole drilled in the external boundary to cool down the force sensor.
The raw force sensor output exhibits correlated noise from several sources, including ground loops, capacitive coupling, and correlated noise from the regulated DC voltage circuit that drives the force sensor. After accounting for ground loops and capacitive coupling in the system, a noise cross-correlator circuit designed in-house was employed (Fig.~\ref{MemSIFig2}b). Output from the regulated DC voltage circuit was split, with one line driving the force sensor and the other forming one of two inputs to the cross-correlator circuit. The force sensor output formed the second input to the cross-correlator, which spectrally filtered the noise in force output arising from fluctuations in the DC regulated voltage supply. The output from the cross-correlator was then sent to a LabView data acquisition system for Analog to Digital Conversion (ADC).
Figure~\ref{MemSIFig2}c shows the probability density function (PDF) of the force sensor output with correlated noise (solid black circles), i.e. prior to entering the cross-correlator circuit. It is fit very well by the Generalized Extreme Value (GEV) or Fisher-Tippett distribution (solid black line in Fig.~\ref{MemSIFig2}c) of the form:
\begin{equation}
\Pi(F) = \frac{1}{\sigma_F}t(F)^{\xi + 1}e^{-t(F)}
\end{equation}
where $t(F) = \left(1+\left(\frac{F - \langle F \rangle}{\sigma_F}\right)\xi \right)^{-1/\xi}$ if $\xi \neq 0$ and $t(F) = e^{-(F - \langle F \rangle )/\sigma_F}$ if $\xi = 0$. Here $F$ is measured force in Newtons, $\langle F \rangle$ is mean force time-averaged over the duration of signal acquisition, $\sigma_F$ is the standard deviation, with $\xi$ forming the only fit parameter for the data, which was found to be $\xi = 0.5 \simeq 0$, and accordingly $t(F) = e^{-(F - \langle F \rangle )/\sigma_F}$.
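As a quick numerical illustration of the fit function, the sketch below (plain Python; the function name and parameter values are ours, not from the paper) evaluates the GEV density and checks that for $\xi \to 0$ it reduces continuously to the Gumbel form used above, and that the $\xi = 0$ density is normalized:

```python
import math

def gev_pdf(F, mean_F, sigma_F, xi):
    """GEV density Pi(F) = t(F)^(xi+1) * exp(-t(F)) / sigma_F."""
    z = (F - mean_F) / sigma_F
    if xi == 0.0:
        t = math.exp(-z)                 # Gumbel limit, as in the text
    else:
        base = 1.0 + xi * z
        if base <= 0.0:
            return 0.0                   # outside the distribution's support
        t = base ** (-1.0 / xi)
    return t ** (xi + 1.0) * math.exp(-t) / sigma_F

# The xi = 0 density should integrate to ~1 (Riemann sum over [-10, 30])
dx = 0.01
total = sum(gev_pdf(-10.0 + i * dx, 0.0, 1.0, 0.0) for i in range(4000)) * dx
```

The continuity check at $\xi \to 0$ is the reason a tiny fitted $\xi$ can be replaced by the Gumbel form without changing the fit appreciably.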
On the other hand, the force sensor output post cross-correlator stage (solid red squares in Fig.~\ref{MemSIFig2}c) exhibits nearly Gaussian fluctuations (dashed red curve in Fig.~\ref{MemSIFig2}c). This classical Johnson-Nyquist form for post cross-correlator noise permits one to employ noise reduction by exploiting the fluctuation-dissipation theorem by virtue of the fact that the force sensor output is a voltage which is linearly proportional to the measured force. Just as Johnson-Nyquist noise follows the form $V_{rms} = \sqrt{4k_BTR\Delta f}$ where $k_B$ is Boltzmann constant, $T$ is temperature, $R$ is resistance and $\Delta f$ is the frequency bandwidth, we have for the force measurement:
\begin{equation}
\sigma_F \propto \sqrt{4k_BTR\Delta f}
\end{equation}
Indeed, cryogenically cooling the electronics and force sensors yields a noise reduction with a square-root dependence on temperature $T$, as shown in Fig.~\ref{MemSIFig2}d. There, the standard deviation in force $\sigma_F$ (solid black circles) fell as we cooled the circuits from room temperature ($T = 291$ K) down to $T = 68$ K. The square-root fit with temperature (solid black line) is decidedly better than a linear fit (dashed red line). We found flicker noise if the circuits were cooled below $T = 68$ K. Rather than employ further electronic measures, we instead adopted noise averaging by sampling the force at 1 KHz for a 10 second duration and taking the average. Since uncorrelated noise is expected to average down as $1/\sqrt{N}$, where $N$ is the number of force measurement samples, one achieves a further noise reduction. The final precision we achieved in the force measurement was 5.3 mN.
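The $1/\sqrt{N}$ averaging step can be checked with a few lines of plain Python (a sketch with synthetic Gaussian noise, not the measured data):

```python
import math
import random

random.seed(0)

def std_of_mean(n_samples, n_trials=2000, sigma=1.0):
    """Empirical standard deviation of the mean of n_samples Gaussian draws."""
    means = [sum(random.gauss(0.0, sigma) for _ in range(n_samples)) / n_samples
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / (n_trials - 1))

# Averaging N uncorrelated samples reduces the noise by ~1/sqrt(N):
ratio = std_of_mean(1) / std_of_mean(100)   # expected ~ sqrt(100) = 10
```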
\subsection{Acoustic Perturbation}
Mechanical perturbation of granular media is usually achieved by applying vibrations at the boundaries, but such schemes are not spatially homogeneous because the dissipative collisions among particles cause a gradient in the perturbation magnitude as one moves from the boundary into the system interior. Spatially homogeneous perturbation being necessary for the current experiments, we attached four acoustic transducers (SD1G from Solid Drive) to the bottom of the acrylic plate (see Fig.~\ref{MemSIFig1}a) with an asymmetric placement as shown in Fig.~2 of the main text. With acoustic energy transferred to the acrylic bottom plate, the plate itself acted as the speaker and thus provided a homogeneous perturbation. Each transducer was powered by an amplifier (SD250 from Solid Drive) connected to an independent function generator which output white noise with a frequency cutoff of 15 KHz, thereby providing a reasonable approximation for $\delta$-correlation in time. However, acoustic waves have long correlation lengths in the transmitting medium, as one trivially observes in Chladni patterns; achieving a reasonable approximation for $\delta$-correlation in space is therefore not so straightforward. In a beautiful study, Cadot et al. demonstrated \cite{Cadot2010} nonlinear response when an elastic medium is subjected to random acoustic forcing. In addition to the approximately Gaussian-in-time acoustic excitation, we also scrambled the amplitudes of all four transducers with a fifth function generator that provided band-limited (to 15 KHz) white noise. The four transducer amplitudes were scrambled in a manner such that their sum was always constant at any given instant.
Following the protocol of Cadot et al, we used laser vibrometry (see Fig.~\ref{MemSIFig3}a for schematic) to measure spatial cross-correlations in surface deformations. The cross-correlation function is defined as:
\begin{equation}
X(r) = \frac{\langle h(\vec{r_1},t) h(\vec{r_2},t) \rangle}{\sigma_{h(\vec{r_1})} \sigma_{h(\vec{r_2})}}
\end{equation}
where $h(\vec{r_1},t)$ and $h(\vec{r_2},t)$ are the instantaneous (at time $t$) height variations at positions $\vec{r_1}$ and $\vec{r_2}$, and $\sigma_{h(\vec{r_1})}$ and $\sigma_{h(\vec{r_2})}$ are the standard deviations of the heights at those positions. The height variation $h$ was recorded by high-speed cameras (Phantom v640) that captured laser beams reflected off the acrylic plate. The cross-correlation function $X(r)$ (solid red circles) is plotted versus the distance $r = |\vec{r_1} - \vec{r_2}|$ in mm on a log-linear scale in Fig.~\ref{MemSIFig3}. $X(r)$ exhibits an exponential decay with a decay length of 3.7 mm, obtained from a fit to the data (solid black line in Fig.~\ref{MemSIFig3}). This decay length of 3.7 mm being less than the small disk diameter of 1 cm, we treat this as an approximately $\delta$-correlated perturbation in space.
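The decay length is extracted by a linear fit to $\ln X(r)$ versus $r$; the sketch below reproduces that procedure on synthetic exponential data (the data values are illustrative, not the measurements):

```python
import math

L_true = 3.7                                   # decay length in mm
rs = [0.5 * i for i in range(1, 41)]           # r from 0.5 mm to 20 mm
X = [math.exp(-r / L_true) for r in rs]        # synthetic X(r)

# least-squares slope of ln X(r) vs r gives -1/L
ys = [math.log(x) for x in X]
n = len(rs)
rbar, ybar = sum(rs) / n, sum(ys) / n
slope = sum((r - rbar) * (y - ybar) for r, y in zip(rs, ys)) \
        / sum((r - rbar) ** 2 for r in rs)
L_fit = -1.0 / slope
```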
\begin{figure}
\begin{center}
\includegraphics[width = 3.5 in]{MemSIFig3.eps}
\end{center}
\caption{(Color online) (a) Laser vibrometry schematic: Two laser beams incident onto the bottom acrylic plate under acoustic excitation at two locations separated by distance $r$. Height variations caused by acoustic waves cause perturbations in the reflected laser beams. These perturbations are captured by two high-speed cameras (Phantom v640) at 1500 frames per second, and the images are analysed to obtain the instantaneous spatial cross-correlation $X(r)$. (b) $X(r)$ (solid red circles) versus $r$ in log-linear scale shows exponential decay with a decay length of 3.7 mm obtained from data fit (solid black line).}
\label{MemSIFig3}
\end{figure}
\section{Details of the Numerical Simulations}
The conjugate gradient method is generally used to study frictionless granular materials \cite{97MLGLBW}; when the particles have friction, however, MD simulation is preferred, as it correctly keeps track of both normal and history-dependent tangential forces. We therefore employ MD simulations for the current study. Simulations of uniaxial compression of two-dimensional granular packings are performed using the open source codes LAMMPS \cite{95P} and LIGGGHTS \cite{12KGHAP}. To simulate the experimental system, the particles are taken as bidisperse disks of unit mass with diameters 1 and $1.4$. The particles are placed randomly in a three dimensional box of dimensions $57$ (along $x$), $102$ (along $y$) and $1.4$ (along $z$). Quasistatic compression is implemented by displacing the boundary particles. A side wall made of particles is placed in the direction perpendicular to the compression direction.
The contact forces (both the normal and tangential forces, the latter arising due to friction) are modeled according to the discrete element method (DEM) developed by Cundall and Strack \cite{79CS}. Static friction is implemented by tracking the elastic part of the shear displacement from the time the contact was first formed.
When the disks are compressed they interact via both
normal and tangential forces. Particles $i$ and $j$, at positions ${\B r_i, \B r_j}$ with velocities ${\B v_i, \B v_j}$ and angular velocities ${\B \omega_i, \B \omega_j}$ will experience a relative normal compression on contact given by $\Delta_{ij}=|\B r_{ij}-D_{ij}|$, where $\B r_{ij}$ is the vector joining the centers of mass and $D_{ij}=R_i+R_j$; this gives rise to a normal force $ \B F^{(n)}_{ij} $. The normal force is modeled as a Hertzian contact, whereas the tangential force is given by a Mindlin force \cite{79CS}. Defining $R_{ij}^{-1}\equiv R_i^{-1}+R_j^{-1}$, the force magnitudes are,
\begin{eqnarray}
\B F^{(n)}_{ij}\!&=&\!k_n\Delta_{ij} \B n_{ij}-\frac{\gamma_n}{2} \B {v}_{n_{ij}}\ , \:
\B F^{(t)}_{ij}\!=\!-k_t \B t_{ij}-\frac{\gamma_t}{2} \B {v}_{t_{ij}} \\
k_n &=& k_n^{'}\sqrt{ \Delta_{ij} R_{ij}} \ , \quad
k_t = k_t^{'} \sqrt{ \Delta_{ij} R_{ij}} \\
\gamma_{n} &=& \gamma_{n}^{'} \sqrt{ \Delta_{ij} R_{ij}}\ , \quad
\gamma_{t} = \gamma_{t}^{'} \sqrt{ \Delta_{ij} R_{ij}} \ .
\end{eqnarray}
Here $\Delta_{ij}$ and $\B t_{ij}$ are the normal and tangential displacements, and $\B n_{ij}$ is the normal unit vector. $k_n^{'}$ and $k_t^{'}$ are the spring stiffnesses for the normal and tangential modes of deformation; $\gamma_n^{'}$ and $\gamma_t^{'}$ are the viscoelastic damping constants for normal and tangential deformation.
$\B {v_n}_{ij}$ and $\B {v_t}_{ij}$ are respectively the normal and tangential components of the relative velocity between the two particles. The relative normal and tangential velocities are given by
\begin{eqnarray}
\B {v}_{n_{ij}}&=& (\B {v}_{ij} .\B n_{ij})\B n_{ij} \\
\B {v}_{t_{ij}}&=& \B {v}_{ij}-\B {v}_{n_{ij}} - \frac{1}{2}(\B \omega_i + \B \omega_j)\times \B r_{ij}.
\end{eqnarray}
where $\B {v}_{ij} = \B {v}_{i} - \B {v}_{j}$. The elastic tangential displacement $ \B t_{ij}$ is set to zero when the contact is first made and is calculated using $\frac{d \B t_{ij}}{d t}= \B {v}_{t_{ij}}$; the rigid body rotation around the contact point is also accounted for, to ensure that $ \B t_{ij}$ always remains in the local tangent plane of the contact \cite{01SEGHLP}.
The translational and rotational accelerations of the particles are calculated from Newton's second law; the total forces and torques on particle $i$ are given by
\begin{eqnarray}
\B F^{(tot)}_{i}&=& \sum_{j}\B F^{(n)}_{ij} + \B F^{(t)}_{ij} \\
\B \tau ^{(tot)}_{i}&=& -\frac{1}{2}\sum_{j}\B r^{ij} \times \B F^{(t)}_{ij}.
\end{eqnarray}
The tangential force varies linearly with the relative tangential displacement at the contact point as long as the tangential force does not exceed the Coulomb limit
\begin{equation}
F^{(t)}_{ij} \le \mu F^{(n)}_{ij} \ , \label{Coulomb}
\end{equation}
where $\mu$ is a material dependent coefficient. When this limit is exceeded the contact slips in a dissipative fashion. In our simulations we reset the value of $t_{ij}$ so that $F^{(t)}_{ij} =0.8 \mu F^{(n)}_{ij}$. This choice is somewhat arbitrary, but is recommended on the basis of frictional slip events measured in experiments in the laboratory of J. Fineberg \cite{Fine}. A global damping is implemented to reach static equilibrium in a reasonable amount of time. After each compression step, a relaxation step is added so that the system reaches static equilibrium; the global stress tensor is then measured by averaging the dyadic products between the contact forces and the branch vectors over all the contacts in a given volume,
\begin{equation}
\sigma_{\alpha \beta} =\frac{1}{V}\sum_{j\neq i}\frac{r^{\alpha}_{ij} F^{\beta}_{ij} }{2}
\end{equation}
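A minimal one-component sketch of the slip rule described above (the stiffness and friction values are illustrative; production DEM codes such as LIGGGHTS handle the full vector contact):

```python
def coulomb_cap(F_n, t_el, k_t, mu):
    """Apply the Coulomb limit |F_t| <= mu*F_n to a 1D tangential spring.

    On slip, the elastic displacement t_el is reset so that |F_t| = 0.8*mu*F_n,
    mimicking the reset used in the simulations.
    """
    F_t = -k_t * t_el
    limit = mu * F_n
    if abs(F_t) > limit:
        F_t = 0.8 * limit * (1.0 if F_t > 0 else -1.0)
        t_el = -F_t / k_t          # spring consistent with the capped force
    return F_t, t_el

F_n, k_t, mu = 2.0, 10.0, 0.3
F_t, t_el = coulomb_cap(F_n, t_el=0.5, k_t=k_t, mu=mu)  # initial |F_t| = 5 > mu*F_n
```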
\begin{figure}
\includegraphics[scale=0.25]{MemSIFig4.eps}
\caption{The power law for the decaying areas under the hysteresis loops as measured in the numerical simulation. Here $\mu=0.3$; black dots are data and the red line is the best fitting power law with $\theta=-1.005$.}
\label{powerhighmu}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35]{MemSIFig5.eps}
\caption{Log-log plot of $X_n$ vs. $n$. Here $\mu=0.3$; the black dots are the data, and the blue line is the best fitting scaling law with exponent $-1.005$.}
\label{Xnvsnhighmu}
\end{figure}
The pressure is determined from the trace of the stress tensor and is measured as a function of the packing fraction. After a full compression cycle, the packing is decompressed back to zero pressure, and then the next compression cycle begins. The area between the compression and decompression curves is also calculated as a function of the number of cycles, as is the total energy loss due to sliding events in each cycle. The system size is $N= 4000$ and the data are averaged over ten different initial configurations. We used two different friction coefficients, $0.1$ and $0.3$.
In Fig.~\ref{powerhighmu} we plot the loop area, after subtracting the asymptotic area, as a function of the number of cycles for a friction coefficient of $0.3$; the exponent of the power law remains the same as in the low friction case (see Fig.~4 of the main paper). We also plot $X_n$ as a function of the number of cycles for the same friction coefficient in Fig.~\ref{Xnvsnhighmu}; the measured exponent (see Fig.~5 of the main paper) indicates the universality of the scaling law.
\section{Introduction}
\cleq
The Courant bracket \cite{courant, courant1} represents the generalization of the Lie bracket on spaces of generalized vectors, understood as the direct sum of elements of the tangent bundle and elements of the cotangent bundle. It was first obtained in the algebra of generalized currents in \cite{c}. Generalized currents are arbitrary functionals of the fields, parametrized by a pair of a vector field and a covector field on the target space. Although the Lie bracket satisfies the Jacobi identity, the Courant bracket does not.
In bosonic string theory, the Courant bracket governs both local gauge and general coordinate transformations, and is invariant under T-duality \cite{doucou, dft}. It is a special case of the more general $C$-bracket \cite{siegel1, siegel}. The $C$-bracket is obtained as the T-dual invariant bracket of the symmetry generator algebra when the symmetry parameters depend on both the initial and the T-dual coordinates. It reduces to the Courant bracket once the parameters depend solely on the coordinates of the initial theory.
It is possible to obtain the twisted Courant bracket when the self T-dual generator algebra is considered in the basis obtained from the action of an appropriate $O(D,D)$ transformation \cite{cdual}. The Courant bracket is usually twisted by a 2-form $B$, giving rise to what is known as the twisted Courant bracket \cite{twist}, and by a bi-vector $\theta$, giving rise to the $\theta$-twisted Courant bracket \cite{royt}. In \cite{c, cdual, nick1, nick2}, the former bracket was obtained in the generalized current algebra, and it was shown to be related to the latter by self T-duality \cite{crdual}, where the T-dual of the $B$ field is the bi-vector $\theta$.
The $B$-twisted Courant bracket contains the $H$ flux, while the $\theta$-twisted Courant bracket contains the non-geometric $Q$ and $R$ fluxes. Fluxes are known to play a crucial role in the compactification of additional dimensions in string theory \cite{granaflux}, and non-geometric fluxes can be used to stabilize moduli. In this paper, we are interested in obtaining the Poisson bracket representation of the twisted Courant brackets that contain all fluxes from the generator algebra. Though it is possible to obtain various twists of the $C$-bracket as well \cite{twistC}, we do not deal with them in this paper.
The realization of all fluxes using generalized geometry has already been considered; see \cite{flux1} for a comprehensive review. In \cite{flux2}, one considers the generalized tetrads originating from the generalized metric of the string Hamiltonian. As the Lie algebra of the tetrads originating from the initial metric defines the geometric flux, it was suggested that all the other fluxes can be extracted from the Courant bracket of the generalized tetrads. Different examples of $O(D, D)$ and $O(D) \times O(D)$ transformations of the generalized tetrads lead to Courant bracket algebras with different fluxes as their structure constants.
In \cite{flux3}, one considers the standard Lie algebroid defined with the Lie bracket and the identity map as an anchor on the tangent bundle, as well as the Lie algebroid with the Koszul bracket and the bi-vector $\theta$ as an anchor on the cotangent bundle. The tetrad bases of these Lie algebroids are suitable for defining the geometric $f$ and non-geometric $Q$ fluxes. It was shown that by twisting both of these Lie algebroids by the $H$-flux one can construct the Courant algebroid, which gives rise to all of the fluxes in the Courant bracket algebra. Unlike previous approaches, where generalized fluxes were defined using the Courant bracket algebra, in the current paper we obtain them in the Poisson bracket algebra of the symmetry generator.
Firstly, we consider the symmetry generator of local gauge and global coordinate transformations, defined as a standard inner product in the generalized tangent bundle of a double gauge parameter and a double canonical variable. The $O(D,D)$ group transforms the double canonical variable into some other basis, in terms of which the symmetry generator can be expressed.
We demonstrate how the Poisson bracket algebra of this generator can be used to obtain twist of the Courant bracket by any such transformation. We give a brief summary of how $e^{\hat{B}}$ and $e^{\hat{\theta}}$ produce respectively the $B$-twisted and $\theta$-twisted Courant bracket in the Poisson bracket algebra of generators \cite{cdual}.
Secondly, we consider the matrix $e^{\breve{B}}$, used to twist the Courant bracket simultaneously by a 2-form and a bi-vector. The argument $\breve{B}$ is defined simply as the sum of the arguments $\hat{B}$ and $\hat{\theta}$. Unlike $\hat{B}$ or $\hat{\theta}$, the square of $\breve{B}$ is not zero. The full Taylor series gives rise to hyperbolic functions of the parameter $\alpha^{\mu}_{\ \nu} = 2\kappa \theta^{\mu \rho} B_{\rho \nu}$, the contraction of the 2-form with the bi-vector. We represent the symmetry generator in the basis obtained by acting with the twisting matrix $e^{\breve{B}}$ on the double canonical variable. This generator is manifestly self T-dual and its algebra closes on the Courant bracket twisted by both $B$ and $\theta$.
Instead of computing the $B-\theta$ twisted Courant bracket directly, we introduce a change of basis in which we define auxiliary generators, in order to simplify the calculations. This change of basis is also realized by the action of an element of the $O(D,D)$ group. The structure constants appearing in the Poisson bracket algebra have exactly the same form as the generalized fluxes obtained in other papers \cite{flux1, flux2, flux3}. The expressions for the fluxes are given in terms of the new auxiliary fields $\mathring{B}$ and $\mathring{\theta}$, both being functions of $\alpha^{\mu}_{\ \nu}$.
The algebra of these new auxiliary generators closes on another bracket, which we call the $\mathring{C}$-twisted Courant bracket. We obtain its full Poisson bracket representation and express it in terms of generalized fluxes. We proceed by rewriting it in coordinate-free notation, where many terms are recognized as well known brackets, such as the Koszul or Schouten-Nijenhuis bracket, but some new brackets, which we call star brackets, also appear. These star brackets take the direct sum of the tangent and cotangent bundles as their domain, and as a result give the graph of the bi-vector $\mathring{\theta}$ in the cotangent bundle, i.e. the sub-bundle for which the vector and 1-form components are related as $\xi^\mu = \kappa \mathring{\theta}^{\mu \nu} \lambda_\nu$. We show that they can be defined in terms of projections on isotropic subspaces acting on different twists of the Courant bracket.
Lastly, we return to the previous basis and obtain the full expression for the Courant bracket twisted by both $B$ and $\theta$. It has a similar form as $\mathring{C}$-twisted Courant bracket, but in this case the other brackets contained within it are also twisted. The Courant bracket twisted by both $B$ and $\theta$ and the one twisted by $\mathring{C}$ are directly related by a $O(D,D)$ transformation represented with the block diagonal matrix.
\section{The bosonic string essentials}
\cleq
The canonical Hamiltonian for closed bosonic string, moving in the $D$-dimensional space-time with background characterized by the metric field $G_{\mu \nu}$ and the antisymmetric Kalb-Ramond field $B_{\mu \nu}$ is given by \cite{action, regal}
\begin{equation}
{\cal{H_C}} = \frac{1}{2\kappa} \pi_\mu (G^{-1})^{\mu \nu} \pi_\nu + \frac{\kappa}{2} x^{\prime \mu} G^E_{\mu \nu} x^{\prime \nu} - 2 x^{\prime \mu} B_{\mu \rho} (G^{-1})^{\rho \nu} \pi_\nu \, ,
\end{equation}
where $\pi_\mu$ are canonical momenta conjugate to coordinates $x^\mu$, and
\begin{equation} \label{eq:Gef}
G^E_{\mu \nu} = G_{\mu \nu} - 4 (B G^{-1} B)_{\mu \nu} \,
\end{equation}
is the effective metric. The Hamiltonian can be rewritten in the matrix notation
\begin{equation} \label{eq:Hcmat}
{\cal H}_{C} = \frac{1}{2\kappa} (X^T)^M H_{MN} X^N \, ,
\end{equation}
where $X^M $ is a double canonical variable given by
\begin{equation} \label{eq:Xdouble}
X^M = \begin{pmatrix}
\kappa x^{\prime \mu} \\
\pi_\mu \\
\end{pmatrix}\, ,
\end{equation}
and $H_{MN}$ is the so called generalized metric, given by
\begin{equation} \label{eq:genmet}
H_{MN} =
\begin{pmatrix}
G^E_{\mu \nu} & - 2B_{\mu \rho} (G^{-1})^{\rho \nu} \\
2(G^{-1})^{\mu \rho} B_{\rho \nu} & (G^{-1})^{\mu \nu}
\end{pmatrix} \, ,
\end{equation}
with $M,N \in \{ 0,1\}$. In the context of generalized geometry \cite{gualtieri}, the double canonical variable $X^M$ represents a generalized vector. Generalized vectors are $2D$-component structures that combine vector and 1-form components in a single entity.
The standard T-duality \cite{tdual, tdual1} laws for background fields have been obtained by Buscher \cite{buscher}
\begin{equation}\label{eq:tdbf}
^\star G^{\mu\nu} =
(G_{E}^{-1})^{\mu\nu}, \quad
^\star B^{\mu\nu} =
\frac{\kappa}{2}
{\theta}^{\mu\nu} \, ,
\end{equation}
where $(G_{E}^{-1})^{\mu\nu}$ is the inverse of the effective metric (\ref{eq:Gef}), and $\theta^{\mu \nu}$ is the non-commutativity parameter, given by
\begin{equation} \label{eq:thetadef}
{\theta}^{\mu\nu} = - \frac{2}{\kappa}(G^{-1}_E)^{\mu \rho} B_{\rho \sigma} (G^{-1})^{\sigma \nu} \, .
\end{equation}
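One can verify numerically (a sketch for $D=2$, $\kappa=1$, with illustrative component values of our choosing) that $G^E$ defined in (\ref{eq:Gef}) is symmetric and that $\theta$ defined above is antisymmetric:

```python
kappa = 1.0
G = [[2.0, 0.3], [0.3, 1.0]]        # symmetric metric (illustrative)
B = [[0.0, 0.7], [-0.7, 0.0]]       # antisymmetric Kalb-Ramond field

def matmul(A, Bm):
    return [[sum(A[i][k] * Bm[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

Ginv = inv2(G)
BGB = matmul(matmul(B, Ginv), B)
GE = [[G[i][j] - 4.0 * BGB[i][j] for j in range(2)] for i in range(2)]   # G_E
theta = [[-2.0 / kappa * v for v in row]
         for row in matmul(matmul(inv2(GE), B), Ginv)]                    # theta
```

The symmetry of $G^E$ follows because $(B G^{-1} B)^T = B G^{-1} B$ for antisymmetric $B$, and the antisymmetry of $\theta$ follows from $G_E G^{-1} B = B G^{-1} G_E$.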
T-duality can be realized without changing the phase space, which is called self T-duality \cite{crdual}. It has the same transformation rules for the background fields as T-duality (\ref{eq:tdbf}), with the additional interchange of the coordinate $\sigma$-derivatives $\kappa x^{\prime \mu}$ with the canonical momenta $\pi_\mu$
\begin{equation} \label{eq:xpidual}
\kappa x^{\prime \mu} \cong \pi_\mu \, .
\end{equation}
Since the momenta and winding numbers correspond to $\sigma$-integrals of $\pi_\mu$ and $\kappa x^{\prime \mu}$ respectively, we see that self T-duality, just like standard T-duality, swaps momenta and winding numbers.
\subsection{Symmetry generator}
We consider the symmetry generator that at the same time governs the general coordinate transformations, parametrized by $\xi^\mu$, and the local gauge transformations, parametrized by $\lambda_\mu$. The generator is given by \cite{dualsim}
\begin{equation}\label{eq:gltilde}
G(\xi, \lambda) = \int_0^{2\pi} d\sigma {\cal G} (\xi, \lambda)= \int_0^{2\pi} d\sigma\Big[\xi^\mu\pi_\mu+ \lambda_\mu \kappa x^{\prime\mu} \Big] \, .
\end{equation}
It has been shown that the general coordinate transformations and the local gauge transformations are related by self T-duality \cite{dualsim}, meaning that this generator is self T-dual. If one makes the following change of parameters $\lambda_\mu \to \lambda_\mu + \partial_\mu \varphi$, the generator (\ref{eq:gltilde}) does not change
\begin{equation} \label{eq:reducible}
G (\xi, \lambda + \partial \varphi) = G (\xi, \lambda) + \kappa \int_0^{2\pi}\varphi^\prime d\sigma = G (\xi, \lambda) \, ,
\end{equation}
since the total derivative integral vanishes for the closed string. Therefore, the symmetry is reducible.
Let us introduce the double gauge parameter $\Lambda^M$, as the generalized vector, given by
\begin{equation} \label{eq:Lxi}
\Lambda^M = \begin{pmatrix}
\xi^\mu \\
\lambda_\mu \\
\end{pmatrix} \, ,
\end{equation}
where $\xi^\mu$ represent the vector components, and $\lambda_\mu$ represent the 1-form components. The space of generalized vectors is endowed with the natural inner product
\begin{equation} \label{eq:skalproizvod}
\langle\Lambda_1,\Lambda_2\rangle = (\Lambda_1^T)^M \eta_{MN} \Lambda_2^N \, \Leftrightarrow \langle (\xi_1, \lambda_1),(\xi_2, \lambda_2)\rangle = i_{\xi_1} \lambda_2 + i_{\xi_2} \lambda_1 = \xi_1^\mu \lambda_{2 \mu} + \xi_2^\mu \lambda_{1 \mu} \, ,
\end{equation}
where $i_{\xi}$ is the interior product along the vector field $\xi$, and $\eta_{MN}$ is $O(D,D)$ metric, given by
\begin{equation} \label{eq:Omegadef}
\eta_{MN} =
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} \, .
\end{equation}
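A small numerical check (for $D=3$, with illustrative components) that the pairing with the block off-diagonal $\eta$ reproduces $\xi_1^\mu \lambda_{2\mu} + \xi_2^\mu \lambda_{1\mu}$:

```python
D = 3
N = 2 * D
# eta = [[0, 1], [1, 0]] in D x D blocks
eta = [[1.0 if (j == i + D or i == j + D) else 0.0 for j in range(N)]
       for i in range(N)]

def pair(L1, L2):
    """Lambda1^T eta Lambda2 for 2D-component generalized vectors."""
    return sum(L1[i] * eta[i][j] * L2[j] for i in range(N) for j in range(N))

xi1, lam1 = [1.0, 2.0, 3.0], [0.5, -1.0, 0.0]
xi2, lam2 = [0.0, 1.0, -2.0], [4.0, 0.0, 1.0]
lhs = pair(xi1 + lam1, xi2 + lam2)                       # matrix pairing
rhs = sum(a * b for a, b in zip(xi1, lam2)) + sum(a * b for a, b in zip(xi2, lam1))
```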
Now it is possible to rewrite the generator (\ref{eq:gltilde}) as
\begin{equation} \label{eq:Ggen}
G(\Lambda) = \int d\sigma \langle\Lambda,X\rangle \, .
\end{equation}
In \cite{cdual}, the Poisson bracket algebra of generator (\ref{eq:gltilde}) was obtained in the form
\begin{equation} \label{eq:GGcourant}
\Big\{ G (\Lambda_1), \, G (\Lambda_2) \Big\} = - G \Big([\Lambda_1,\Lambda_2]_{\cal C} \Big) \, ,
\end{equation}
where the standard Poisson bracket relations between coordinates and canonical momenta were assumed
\begin{equation} \label{eq:PBR}
\{ x^{\mu} (\sigma), \pi_\nu (\bar{\sigma}) \} = \delta^\mu_{\ \nu} \delta(\sigma - \bar{\sigma}) \, .
\end{equation}
The bracket $[\Lambda_1,\Lambda_2]_{\cal C}$ is the Courant bracket \cite{courant}, defined by
\begin{equation}
[\Lambda_1,\Lambda_2]_{\cal C} = \Lambda \Leftrightarrow [(\xi_1, \lambda_1), (\xi_2, \lambda_2)]_{\cal C} = (\xi,\lambda) \, ,
\end{equation}
where
\begin{equation} \label{eq:xicou}
\xi^\mu = \xi_1^\nu \partial_\nu \xi_2^\mu - \xi_2^\nu \partial_\nu \xi_1^\mu \, , \notag
\end{equation}
and
\begin{equation} \label{eq:Lcou}
\lambda_\mu = \xi_1^\nu (\partial_\nu \lambda_{2 \mu} - \partial_\mu \lambda_{2 \nu}) - \xi_2^\nu (\partial_\nu \lambda_{1 \mu} - \partial_\mu \lambda_{1 \nu})+\frac{1}{2} \partial_\mu (\xi_1 \lambda_2- \xi_2 \lambda_1 ) \, .
\end{equation}
It is the generalization of the Lie bracket on spaces of generalized vectors.
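The component formulas (\ref{eq:xicou}) and (\ref{eq:Lcou}) are manifestly antisymmetric under the exchange $1 \leftrightarrow 2$; the finite-difference sketch below (for $D=2$, with arbitrary test parameter fields of our choosing) checks this numerically at a sample point:

```python
import math

EPS = 1e-6

def d(f, x, mu):
    """Central finite-difference partial derivative of f at point x."""
    xp, xm = list(x), list(x)
    xp[mu] += EPS
    xm[mu] -= EPS
    return (f(xp) - f(xm)) / (2.0 * EPS)

def courant(xi1, lam1, xi2, lam2, x, D=2):
    """Components of [(xi1, lam1), (xi2, lam2)]_C at the point x."""
    xi = [sum(xi1[n](x) * d(xi2[m], x, n) - xi2[n](x) * d(xi1[m], x, n)
              for n in range(D)) for m in range(D)]
    lam = []
    for m in range(D):
        v = 0.0
        for n in range(D):
            v += xi1[n](x) * (d(lam2[m], x, n) - d(lam2[n], x, m))
            v -= xi2[n](x) * (d(lam1[m], x, n) - d(lam1[n], x, m))
            v += 0.5 * d(lambda y, n=n: xi1[n](y) * lam2[n](y)
                                       - xi2[n](y) * lam1[n](y), x, m)
        lam.append(v)
    return xi + lam

xi1 = [lambda x: x[0] * x[1], lambda x: x[1]]
lam1 = [lambda x: math.sin(x[0]), lambda x: x[0] + x[1]]
xi2 = [lambda x: x[1] ** 2, lambda x: 1.0]
lam2 = [lambda x: x[0] * x[0], lambda x: math.cos(x[1])]

pt = [0.3, 0.7]
fwd = courant(xi1, lam1, xi2, lam2, pt)
rev = courant(xi2, lam2, xi1, lam1, pt)
```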
\section{$O(D,D)$ group}
Consider the orthogonal transformation ${\cal O}$, i.e. the transformation that preserves the inner product $(\ref{eq:skalproizvod})$
\begin{equation} \label{eq:condort}
\langle {\cal O} \Lambda_1, {\cal O} \Lambda_2 \rangle = \langle \Lambda_1, \Lambda_2 \rangle \Leftrightarrow ({\cal O}\Lambda_1)^T\ \eta\ ({\cal O}\Lambda_2) = \Lambda^T_1\ \eta\ \Lambda_2 \, ,
\end{equation}
which is satisfied for the condition
\begin{equation} \label{eq:condorth}
{\cal O}^T\ \eta\ {\cal O} = \eta \, .
\end{equation}
There is a solution for the above equation in the form ${\cal O} = e^T$, see Sec. 2.1 of \cite{gualtieri}, where
\begin{equation}
T =
\begin{pmatrix}
A & \theta \\
B & -A^T
\end{pmatrix} \, ,
\end{equation}
with $\theta : T^\star M \to T M$ and $B: TM \to T^\star M$ being antisymmetric, and $A: TM \to TM $ being an endomorphism. In the general case, $B$ and $\theta$ can be chosen independently, with ${\cal O}$ still satisfying the condition (\ref{eq:condorth}).
Consider now the action of some element of $O(D,D)$ on the double coordinate $X$ (\ref{eq:Xdouble}) and the double gauge parameter $\Lambda$ (\ref{eq:Lxi})
\begin{equation} \label{eq:hatXL}
\hat{X}^M = {\cal O}^M_{\ N}\ X^N \, ,\ \hat{\Lambda}^M = {\cal O}^M_{\ N}\ \Lambda^N \, ,
\end{equation}
and note that the relation (\ref{eq:GGcourant}) can be written as
\begin{equation} \label{eq:twdef1}
\int d\sigma \Big\{ \langle \Lambda_1, X \rangle , \langle \Lambda_2, X \rangle \Big\} = - \int d\sigma \langle [\Lambda_1, \Lambda_2]_{\cal C}, X\rangle \, ,
\end{equation}
and using (\ref{eq:condort}) and (\ref{eq:hatXL}) as
\begin{equation} \label{eq:twdef2}
\int d\sigma \Big\{ \langle \hat{\Lambda}_1, \hat{X} \rangle , \langle \hat{\Lambda}_2, \hat{X} \rangle \Big\} = - \int d\sigma \langle [\Lambda_1, \Lambda_2]_{\cal C}, X\rangle =-\int d\sigma \langle [\hat{\Lambda}_1, \hat{\Lambda}_2]_{{\cal C}_T}, \hat{X} \rangle \, ,
\end{equation}
where we expressed the right hand side in terms of some new bracket $[\hat{\Lambda}_1,\hat{\Lambda}_2]_{{\cal C}_T} $.
Moreover, using (\ref{eq:condort}) and (\ref{eq:hatXL}), the right hand side of (\ref{eq:twdef1}) can be written as
\begin{equation} \label{eq:twdef3}
\langle [\Lambda_1, \Lambda_2]_{\cal C}, X\rangle = \langle [{\cal O}^{-1}\hat{\Lambda}_1, {\cal O}^{-1}\hat{\Lambda}_2]_{\cal C}, {\cal O}^{-1}\hat{X}\rangle = \langle{\cal O} [{\cal O}^{-1}\hat{\Lambda}_1, {\cal O}^{-1}\hat{\Lambda}_2]_{\cal C}, \hat{X}\rangle \, .
\end{equation}
Using (\ref{eq:twdef2}) and (\ref{eq:twdef3}), one obtains
\begin{equation} \label{eq:twdef}
[\hat{\Lambda}_1,\hat{\Lambda}_2]_{{\cal C}_T} = {\cal O} [{\cal O}^{-1}\hat{\Lambda}_1, {\cal O}^{-1}\hat{\Lambda}_2]_{\cal C} = e^T [e^{-T}\hat{\Lambda}_1, e^{-T}\hat{\Lambda}_2]_{\cal C} \, .
\end{equation}
This is the definition of the $T$-twisted Courant bracket. Throughout this paper, we use the notation where $[,]_{\cal C}$ denotes the Courant bracket, while ${\cal C}$ with an additional index denotes the twist of the Courant bracket by the indexed field, e.g. $[,]_{{\cal C}_B}$ is the Courant bracket twisted by $B$.
In a special case, when $A= 0$, $\theta = 0$, the bracket (\ref{eq:twdef}) becomes the Courant bracket twisted by a 2-form $B$ \cite{twist}
\begin{equation} \label{eq:CourantB}
[\Lambda_1, \Lambda_2 ]_{{\cal C}_B} = e^{\hat{B}} [e^{-\hat{B}} \Lambda_1, e^{-\hat{B}}\Lambda_2 ]_{\cal C} \, ,
\end{equation}
where $e^{\hat{B}}$ is the twisting matrix, given by
\begin{equation} \label{eq:ebhat}
e^{\hat{B}} = \begin{pmatrix}
\delta^\mu_\nu & 0 \\
2B_{\mu \nu} & \delta^\nu_\mu
\end{pmatrix}\, , \ \
\hat{B}^M_{\ N} =
\begin{pmatrix}
0 & 0 \\
2B_{\mu \nu} & 0 \\
\end{pmatrix}\, .
\end{equation}
This bracket has been obtained in the algebra of generalized currents \cite{nick1, crdual}.
In case of $A=0$, $B=0$, the bracket (\ref{eq:twdef}) becomes the Courant bracket twisted by a bi-vector $\theta$
\begin{equation} \label{eq:CourantTheta}
[\Lambda_1, \Lambda_2 ]_{{\cal C}_\theta} = e^{\hat{\theta}} [e^{-\hat{\theta}} \Lambda_1,e^{-\hat{\theta}} \Lambda_2 ]_{\cal C} \, ,
\end{equation}
where $e^{\hat{\theta}}$ is the twisting matrix, given by
\begin{equation} \label{eq:enateta}
e^{\hat{\theta}} =
\begin{pmatrix}
\delta^\mu_\nu & \kappa \theta^{\mu \nu} \\
0 & \delta^\nu_\mu
\end{pmatrix} \, , \ \
\hat{ \theta}^M_{\ N} =
\begin{pmatrix}
0 & \kappa \theta^{\mu \nu} \\
0 & 0
\end{pmatrix} \, .
\end{equation}
The $B$-twisted Courant bracket (\ref{eq:CourantB}) and $\theta$-twisted Courant bracket (\ref{eq:CourantTheta}) are related by self T-duality \cite{crdual}. It is easy to demonstrate that both $e^{\hat{B}}$ and $e^{\hat{\theta}}$ satisfy the condition (\ref{eq:condorth}).
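The last statement can indeed be checked numerically; the sketch below (for $D=2$, $\kappa=1$, with illustrative antisymmetric components of our choosing) verifies ${\cal O}^T \eta\, {\cal O} = \eta$ for the two twisting matrices:

```python
D, kappa = 2, 1.0
B = [[0.0, 0.4], [-0.4, 0.0]]
th = [[0.0, -1.3], [1.3, 0.0]]

def block(A11, A12, A21, A22):
    """Assemble a 2D x 2D matrix from four D x D blocks."""
    M = [[0.0] * (2 * D) for _ in range(2 * D)]
    for i in range(D):
        for j in range(D):
            M[i][j], M[i][j + D] = A11[i][j], A12[i][j]
            M[i + D][j], M[i + D][j + D] = A21[i][j], A22[i][j]
    return M

def matmul(A, Bm):
    return [[sum(A[i][k] * Bm[k][j] for k in range(len(Bm)))
             for j in range(len(Bm[0]))] for i in range(len(A))]

I = [[float(i == j) for j in range(D)] for i in range(D)]
Z = [[0.0] * D for _ in range(D)]
eta = block(Z, I, I, Z)
eB = block(I, Z, [[2.0 * b for b in row] for row in B], I)          # e^{B-hat}
eTh = block(I, [[kappa * t for t in row] for row in th], Z, I)      # e^{theta-hat}

def preserves_eta(O):
    OT = [list(r) for r in zip(*O)]
    P = matmul(matmul(OT, eta), O)
    return max(abs(P[i][j] - eta[i][j])
               for i in range(2 * D) for j in range(2 * D)) < 1e-12

ok = preserves_eta(eB) and preserves_eta(eTh)
```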
We can now deduce a simple algorithm for obtaining the Courant bracket twisted by an arbitrary $O(D,D)$ transformation. One rewrites the double symmetry generator $G(\xi,\lambda)$ in the basis obtained by the action of the matrix $e^T$ on the double canonical variable (\ref{eq:Xdouble}). Then, the Poisson bracket algebra of these generators gives rise to the appropriate twist of the Courant bracket. In this paper, we apply this algorithm to obtain the Courant bracket twisted simultaneously by $B$ and $\theta$.
\section{Twisting matrix}
\cleq{}
The transformations $e^{\hat{B}}$ and $e^{\hat{\theta}}$ do not commute. That is why we define the transformation that simultaneously twists the Courant bracket by $B$ and $\theta$ as $e^{\breve{B}}$, where
\begin{equation} \label{eq:breve}
\breve{B} = \hat{B}+\hat{\theta} =
\begin{pmatrix}
0 & \kappa \theta^{\mu \nu} \\
2 B_{\mu \nu} & 0
\end{pmatrix} \, .
\end{equation}
The Courant bracket twisted at the same time both by a 2-form $B$ and by a bi-vector $\theta$ is given by
\begin{equation} \label{eq:CTdef}
[\Lambda_1, \Lambda_2 ]_{{\cal C}_{B\theta}} = e^{\breve{B}} [e^{-\breve{B}}\Lambda_1,e^{-\breve{B}} \Lambda_2 ]_{\cal C} \, .
\end{equation}
The full expression for $e^{\breve{B}}$ can be obtained from the well-known Taylor series expansion of the exponential function
\begin{equation} \label{eq:tayeb}
e^{\breve{B}} = \sum_{n=0}^\infty \frac{\breve{B}^n}{n!} \, .
\end{equation}
The square of the matrix $\breve{B}$ is easily obtained
\begin{equation}
\breve{B}^2 =
2
\begin{pmatrix}
\kappa (\theta B)^{\mu}_{\ \nu} & 0 \\
0 & \kappa (B \theta)^{\ \nu}_{\mu}
\end{pmatrix},
\end{equation}
as well as its cube
\begin{equation}
\breve{B}^3 =
2
\begin{pmatrix}
0 & \kappa^2 ( \theta B \theta)^{\mu \nu} \\
2\kappa (B\theta B)_{\mu \nu} & 0
\end{pmatrix}.
\end{equation}
The higher powers of $\breve{B}$ are given by
\begin{equation} \label{eq:Bna2n}
\breve{B}^{2n} =
\begin{pmatrix}
(\alpha^n)^{\mu}_{\ \nu} & 0 \\
0 & ((\alpha^T)^n)^{\ \nu}_{\mu}
\end{pmatrix} \, ,
\end{equation}
for even degrees, and for odd degrees by
\begin{equation} \label{eq:Bna2n1}
\breve{B}^{2n+1} =
\begin{pmatrix}
0 & \kappa (\alpha^n \theta)^{\mu \nu} \\
2(B \alpha^n )_{\mu \nu} & 0
\end{pmatrix} \, ,
\end{equation}
where we have introduced
\begin{equation} \label{eq:alfadef}
\alpha^{\mu}_{\ \nu} = 2 \kappa \theta^{\mu \rho} B_{\rho \nu}.
\end{equation}
Finally, substituting (\ref{eq:Bna2n}) and (\ref{eq:Bna2n1}) into (\ref{eq:tayeb}), we obtain the twisting matrix
\begin{equation} \label{eq:ebb}
e^{\breve{B}} =
\begin{pmatrix}
{\cal C}^{\mu}_{\ \nu} & \kappa {\cal S}^\mu_{\ \rho} \theta^{\rho \nu} \\
2 B_{\mu \rho} {\cal S}^{\rho}_{\ \nu} & ( {\cal C}^T)^{\ \nu}_{\mu}
\end{pmatrix} \, ,
\end{equation}
with ${\cal S}^\mu_{\ \nu} = \Big(\frac{\sinh{\sqrt{\alpha}}}{\sqrt{\alpha}} \Big)^\mu_{\ \nu}$ and ${\cal C}^\mu_{\ \nu} = \Big( \cosh{\sqrt{\alpha}} \Big)^{\mu}_{\ \nu}$.
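These entries arise from resumming the even and odd parts of the series (\ref{eq:tayeb}) separately, using (\ref{eq:Bna2n}) and (\ref{eq:Bna2n1})
\begin{equation}
\sum_{n=0}^{\infty} \frac{\alpha^n}{(2n)!} = \cosh{\sqrt{\alpha}} \, , \qquad \sum_{n=0}^{\infty} \frac{\alpha^n}{(2n+1)!} = \frac{\sinh{\sqrt{\alpha}}}{\sqrt{\alpha}} \, ,
\end{equation}
so that, for instance, the lower-left block of (\ref{eq:ebb}) is $\sum_{n} 2 (B \alpha^n)_{\mu \nu}/(2n+1)! = 2 (B {\cal S})_{\mu \nu}$.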
Its determinant is given by
\begin{equation}
\det( e^{\breve{B}}) = e^{Tr(\breve{B})} = 1 \, ,
\end{equation}
and the straightforward calculations show that its inverse is given by
\begin{equation} \label{eq:ebmb}
e^{-\breve{B}} =
\begin{pmatrix}
{\cal C}^{\mu}_{\ \nu} & - \kappa{\cal S}^\mu_{\ \rho} \theta^{\rho \nu} \\
-2 B_{\mu \rho} {\cal S}^{\rho}_{\ \nu} & ( {\cal C}^T)^{\ \nu}_{\mu}
\end{pmatrix} \, .
\end{equation}
One easily obtains the relation
\begin{equation}
(e^{\breve{B}})^T\ \eta\ e^{\breve{B}} = \eta \, ,
\end{equation}
therefore the transformation (\ref{eq:ebb}) is indeed an element of $O(D,D)$.
It is worth pointing out some characteristics of the matrix $\alpha^\mu_{\ \nu}$. It is easy to show that $\alpha^\mu_{\ \rho} \theta^{\rho \nu} = \theta^{\mu \rho} (\alpha^T)_\rho^{\ \nu}$ and $B_{\mu \rho} \alpha^\rho_{\ \nu} = (\alpha^T)_\mu^{\ \rho} B_{\rho \nu}$, which is further generalized to
\begin{equation} \label{eq:alphaf}
(f(\alpha))^\mu_{\ \rho} \theta^{\rho \nu} = \theta^{\mu \rho} (f(\alpha^T))_\rho^{\ \nu} \, ,\ \ \ B_{\mu \rho} (f(\alpha))^\rho_{\ \nu} = (f(\alpha^T))_\mu^{\ \rho} B_{\rho \nu} \, ,
\end{equation}
for any analytic function $f(\alpha)$.
Moreover, the well-known hyperbolic identity $\cosh^2 (x) - \sinh^2 (x) = 1$ can also be expressed in terms of the newly defined tensors
\begin{equation} \label{eq:chshid}
({\cal C}^2)^\mu_{\ \nu} - \alpha^\mu_{\ \rho} ({\cal S}^2)^\rho_{\ \nu} = \delta^\mu_\nu \, .
\end{equation}
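As a consistency check, the properties (\ref{eq:alphaf}) and (\ref{eq:chshid}) confirm that (\ref{eq:ebmb}) is indeed the inverse of (\ref{eq:ebb}). For the upper-left block of $e^{\breve{B}} e^{-\breve{B}}$ one finds
\begin{equation}
{\cal C}^{\mu}_{\ \rho} {\cal C}^{\rho}_{\ \nu} - 2 \kappa {\cal S}^{\mu}_{\ \rho} \theta^{\rho \sigma} B_{\sigma \lambda} {\cal S}^{\lambda}_{\ \nu} = ({\cal C}^2)^{\mu}_{\ \nu} - ({\cal S} \alpha {\cal S})^{\mu}_{\ \nu} = ({\cal C}^2)^{\mu}_{\ \nu} - \alpha^{\mu}_{\ \rho} ({\cal S}^2)^{\rho}_{\ \nu} = \delta^{\mu}_{\nu} \, ,
\end{equation}
while the off-diagonal blocks vanish, since ${\cal C} \theta = \theta\, {\cal C}^T$ and the matrices ${\cal C}$ and ${\cal S}$ commute.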
Lastly, the self T-duality relates the matrix $\alpha$ to its transpose $\alpha \cong \alpha^T$, due to (\ref{eq:tdbf}). Consequently, we write the following self T-duality relations
\begin{equation} \label{eq:CSdual}
{\cal C} \cong {\cal C}^T \, , \ \ {\cal S} \cong {\cal S}^T \, .
\end{equation}
\section{Symmetry generator in an appropriate basis}
The direct computation of the bracket (\ref{eq:CTdef}) would be difficult, given the form of the matrix $e^{\breve{B}}$. Therefore, we compute the bracket indirectly, from the Poisson bracket algebra of the symmetry generator (\ref{eq:gltilde}), rewritten in the appropriate basis. As elaborated at the end of Chapter 3, this basis is obtained by the action of the matrix (\ref{eq:ebb}) on the double coordinate (\ref{eq:Xdouble})
\begin{equation} \label{eq:tildeXdef}
\breve{X}^M = (e^{\breve{B}})^M_{\ N} \ X^N =
\begin{pmatrix}
\breve{k}^\mu \\
\breve{\iota}_\mu
\end{pmatrix}
\, ,
\end{equation}
where
\begin{eqnarray} \label{eq:iktilde}
\breve{k}^\mu &=& \kappa {\cal C}^\mu_{\ \nu} x^{\prime \nu} + \kappa ({\cal S} \theta)^{\mu \nu} \pi_\nu \, , \\ \notag
\breve{\iota}_\mu &=& 2 \kappa (B{\cal S})_{\mu \nu} x^{\prime \nu} + ( {\cal C}^T)^{\ \nu}_{\mu} \pi_\nu \, ,
\end{eqnarray}
are new currents. Applying (\ref{eq:tdbf}), (\ref{eq:xpidual}) and (\ref{eq:CSdual}) to currents $\breve{k}^\mu$ and $\breve{\iota}_\mu$ we obtain $\breve{\iota}_\mu$ and $\breve{k}^\mu$ respectively, meaning that these currents are directly related by self T-duality. Multiplying the equation (\ref{eq:tildeXdef}) with the matrix (\ref{eq:ebmb}), we obtain the relations inverse to (\ref{eq:iktilde})
\begin{eqnarray} \label{eq:xpiik}
\kappa x^{\prime \mu} &=& {\cal C}^\mu_{\ \nu} \breve{k}^\nu - \kappa ({\cal S} \theta)^{\mu \nu} \breve{\iota}_\nu \, , \\ \notag
\pi_\mu &=& -2 (B{\cal S})_{\mu \nu} \breve{k}^\nu + ({\cal C}^T)^{\ \nu}_{\mu} \breve{\iota}_\nu \, .
\end{eqnarray}
Applying the transformation (\ref{eq:ebb}) to a double gauge parameter (\ref{eq:Lxi}), we obtain new gauge parameters
\begin{equation} \label{eq:Lxitilde}
\breve{\Lambda}^M =
\begin{pmatrix}
\breve{\xi}^\mu \\
\breve{\lambda}_\mu
\end{pmatrix} =
(e^{\breve{B}})^M_{\ N}\ \Lambda^N =
\begin{pmatrix}
{\cal C}^\mu_{\ \nu} \xi^\nu + \kappa ({\cal S} \theta)^{\mu \nu} \lambda_\nu \\
2 (B{\cal S})_{\mu \nu} \xi^\nu + ({\cal C}^T)^{\ \nu}_{\mu} \lambda_\nu
\end{pmatrix} \, .
\end{equation}
The symmetry generator (\ref{eq:gltilde}) rewritten in a new basis ${\cal G}({\cal C}\xi+\kappa{\cal S}\theta\lambda , 2(B{\cal S}) \xi + {\cal C}^T \lambda)\equiv{\cal \breve{G}} (\breve{\xi} , \breve{\lambda})$ is given by
\begin{equation} \label{eq:gtilde}
\breve{G}(\breve{\Lambda}) = \int d\sigma \langle\breve{\Lambda},\breve{X} \rangle \Leftrightarrow \breve{G} (\breve{\xi}, \breve{\lambda}) =\int d\sigma\Big[\breve{\xi}^\mu \breve{\iota}_\mu+ \breve{\lambda}_\mu \breve{k}^\mu \Big] \, .
\end{equation}
Substituting (\ref{eq:tildeXdef}) and (\ref{eq:Lxitilde}) into (\ref{eq:gtilde}), the symmetry generator in the initial canonical basis (\ref{eq:gltilde}) is obtained. Due to the mutual self T-duality between the basis currents (\ref{eq:iktilde}), this generator is invariant under self T-duality.
Rewriting the equation (\ref{eq:GGcourant}) in terms of the new gauge parameters (\ref{eq:Lxitilde}) in the basis of auxiliary currents (\ref{eq:iktilde}), the Courant bracket twisted by both a 2-form $B_{\mu \nu}$ and a bi-vector $\theta^{\mu \nu}$ is obtained from the algebra of the new generator (\ref{eq:gtilde})
\begin{equation} \label{eq:brbrG}
\Big\{ \breve{G} (\breve{\Lambda}_1), \, \breve{G} (\breve{\Lambda}_2)\Big\} = -\breve{G}\Big( [\breve{\Lambda}_1,\breve{\Lambda}_2]_{{\cal C}_{B\theta}} \Big) \, .
\end{equation}
\subsection{Auxiliary generator}
Let us define a new auxiliary basis, such that both matrices ${\cal C}$ and ${\cal S}$ are absorbed into new fields, giving rise to a generator algebra that is much more readable. Once the algebra in this basis is obtained, a simple change of variables back to the initial ones will provide us with the bracket we need.
Multiplying the second equation of (\ref{eq:iktilde}) with the matrix ${\cal C}^{-1}$, we obtain
\begin{equation} \label{eq:iotaring}
\breve{\iota}_\nu ({\cal C}^{-1})^\nu_{\ \mu}=\pi_\mu + 2 \kappa (B{\cal S} {\cal C}^{-1})_{\mu \nu } x^{\prime \nu}\, ,
\end{equation}
where we have used $(B{\cal S})_{\nu \rho} ({\cal C}^{-1})^\nu_{\ \mu} = - (B {\cal S C}^{-1})_{\rho \mu} = (B {\cal S C}^{-1})_{\mu \rho }$, due to tensor $B{\cal S}$ being antisymmetric, and properties (\ref{eq:alphaf}).
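The antisymmetry of $B {\cal S}$ itself follows from (\ref{eq:alphaf}) and the antisymmetry of $B_{\mu \nu}$
\begin{equation}
(B {\cal S})_{\nu \mu} = B_{\nu \rho} {\cal S}^{\rho}_{\ \mu} = ({\cal S}^T)_{\nu}^{\ \rho} B_{\rho \mu} = - B_{\mu \rho} {\cal S}^{\rho}_{\ \nu} = - (B {\cal S})_{\mu \nu} \, ,
\end{equation}
where $({\cal S}^T)_{\nu}^{\ \rho} = {\cal S}^{\rho}_{\ \nu}$ was used in the last step.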
We denote the result as a new auxiliary current, given by
\begin{equation} \label{eq:imathring}
\mathring{\iota}_\mu = \pi_\mu + 2 \kappa \mathring{B}_{\mu \nu} x^{\prime \nu} \, ,
\end{equation}
where $\mathring{B}$ is an auxiliary B-field, given by
\begin{equation} \label{eq:Bmathring}
\mathring{B}_{\mu \nu} = B_{\mu \rho} {\cal S}^\rho_{\ \sigma} ({\cal C}^{-1})^\sigma_{\ \nu} \, .
\end{equation}
On the other hand, multiplying the first equation of (\ref{eq:iktilde}) with the matrix ${\cal C}$, we obtain
\begin{equation} \label{eq:kapparing}
{\cal C}^\mu_{\ \nu} \breve{k}^\nu = ({\cal C}^2)^\mu_{\ \nu} \kappa x^{\prime \nu} + \kappa ({\cal C} {\cal S} \theta)^{\mu \nu} \pi_\nu \, .
\end{equation}
Substituting (\ref{eq:chshid}) in the previous equation, and keeping in mind that ${\cal C}$, ${\cal S}$ and $\theta$ commute (\ref{eq:alphaf}), we obtain
\begin{equation} \label{eq:ktrveza}
{\cal C}^\mu_{\ \nu} \breve{k}^\nu = \kappa x^{\prime \mu} + \kappa ({\cal C}{\cal S} \theta)^{\rho \nu}(\pi_\nu + 2\kappa (B {\cal S} {\cal C}^{-1} )_{\nu \sigma} x^{\prime \sigma}) \, .
\end{equation}
Using (\ref{eq:imathring}), the result is recognized as a new auxiliary current
\begin{equation} \label{eq:kmathring}
\mathring{k}^\mu = \kappa x^{\prime \mu} + \kappa \mathring{\theta}^{\mu \nu} \mathring{\iota}_\nu \, ,
\end{equation}
where $\mathring{\theta}$ is given by
\begin{equation} \label{eq:thetamathring}
\mathring{\theta}^{\mu \nu} = {\cal C}^\mu_{\ \rho} {\cal S}^\rho_{\ \sigma} \theta^{\sigma \nu} \, .
\end{equation}
There is no explicit dependence on either ${\cal C}$ or ${\cal S}$ in the redefined auxiliary currents, but only on the canonical variables and the new background fields. From (\ref{eq:kmathring}), it is easy to express the coordinate $\sigma$-derivative in the basis of the new auxiliary currents
\begin{equation} \label{eq:xikring}
\kappa x^{\prime \mu} = \mathring{k}^\mu - \kappa \mathring{\theta}^{\mu \nu} \mathring{\iota}_\nu \, .
\end{equation}
The first equation of (\ref{eq:iktilde}) could have been multiplied by ${\cal C}^{-1}$, instead of ${\cal C}$, since that would also produce a current that does not explicitly depend on ${\cal C}$. However, the expression for the coordinate $\sigma$-derivative $\kappa x^{\prime \mu}$ would then explicitly depend on ${\cal C}^2$, while with our choice of basis it does not (\ref{eq:xikring}).
Substituting (\ref{eq:iotaring}) and (\ref{eq:ktrveza}) in the expression for the generator (\ref{eq:gtilde}), we obtain
\begin{equation}
\breve{G} (\breve{\xi}, \breve{\lambda}) = \int d\sigma \Big[ \breve{\lambda}_\mu ({\cal C}^{-1})^\mu_{\ \nu} \mathring{k}^\nu + \breve{\xi}^\mu ({\cal C}^T)_\mu^{\ \nu} \mathring{\iota}_\nu \Big] \, ,
\end{equation}
from which it is easily seen that the generator (\ref{eq:gtilde}) is equal to an auxiliary generator
\begin{equation} \label{eq:Gring}
\mathring{G}(\mathring{\Lambda})= \int d\sigma \langle \mathring{X}, \mathring{\Lambda} \rangle \Leftrightarrow \mathring{G} (\mathring{\xi}, \mathring{\lambda}) = \int d\sigma \Big[ \mathring{\lambda}_\mu \mathring{k}^\mu + \mathring{\xi}^\mu \mathring{\iota}_\mu \Big] \, ,
\end{equation}
provided that
\begin{equation} \label{eq:vezatildaring}
\mathring{\Lambda}^M =
\begin{pmatrix}
\mathring{\xi}^\mu \\
\mathring{\lambda}_\mu
\end{pmatrix} \, ,\ \mathring{\lambda}_\mu = \breve{\lambda}_\nu ({\cal C}^{-1})^\nu_{\ \mu},\ \ \ \mathring{\xi}^\mu = {\cal C}^\mu_{\ \nu} \breve{\xi}^\nu \, ,
\end{equation}
and
\begin{equation} \label{eq:Xring}
\mathring{X}^M = \begin{pmatrix}
\mathring{k}^\mu \\
\mathring{\iota}_\mu
\end{pmatrix} \, .
\end{equation}
Once the algebra of (\ref{eq:Gring}) is known, the algebra of the generator (\ref{eq:gtilde}) can be easily obtained using (\ref{eq:vezatildaring}).
The change of basis to the one suitable for the auxiliary generator (\ref{eq:Gring}) corresponds to the transformation
\begin{equation} \label{eq:Adef}
A^M_{\ N} = \begin{pmatrix}
({\cal C})^\mu_{\ \nu} & 0 \\
0 & (({\cal C}^{-1})^{T})_\mu^{\ \nu}
\end{pmatrix} \, , \mathring{\Lambda}^M = A^M_{\ N}\ \breve{\Lambda}^N \, ,\ \mathring{X}^M = A^M_{\ N}\ \breve{X}^N \, ,
\end{equation}
that can be rewritten as
\begin{equation} \label{eq:AeB}
\mathring{X}^M = (A e^{\breve{B}})^M_{\ N}\ X^N \, ,\ \mathring{\Lambda}^M = (A e^{\breve{B}})^M_{\ N}\ \Lambda^N \, ,
\end{equation}
where (\ref{eq:tildeXdef}) and (\ref{eq:Lxitilde}) were used.
It is easy to show that the transformation $A^M_{\ N}$, and consequently $(A e^{\breve{B}})^M_{\ N}$, is an element of the $O(D,D)$ group
\begin{equation}
A^T \eta\ A = \eta \, , \ (A e^{\breve{B}})^T\ \eta\ (A e^{\breve{B}}) = \eta \, ,
\end{equation}
which means that there is a $\mathring{C}$ for which \cite{gualtieri}
\begin{equation} \label{eq:eCCdef}
e^{\mathring{C}} = A e^{\breve{B}} \, .
\end{equation}
The generator (\ref{eq:Gring}) gives rise to algebra that closes on $\mathring{C}$-twisted Courant bracket
\begin{equation}
\Big\{\mathring{G}(\mathring{\Lambda}_1), \mathring{G}(\mathring{\Lambda}_2) \Big\} = - \mathring{G}\Big( [ \mathring{\Lambda}_1, \mathring{\Lambda}_2]_{{\cal C}_{\mathring{C}}} \Big) \, ,
\end{equation}
where the $\mathring{C}$-twisted Courant bracket is defined by
\begin{equation} \label{eq:mathCdef}
[ \mathring{\Lambda}_1, \mathring{\Lambda}_2]_{{\cal C}_{\mathring{C}}} = e^{\mathring{C}} [e^{-\mathring{C}} \mathring{\Lambda}_1, e^{-\mathring{C}} \mathring{\Lambda}_2]_{\cal C} \, .
\end{equation}
In the next section, we will obtain this bracket by direct computation of the Poisson bracket algebra of the generators.
Lastly, let us briefly comment on reducibility conditions for the $\mathring{C}$-twisted Courant bracket. Since we are working with closed strings, total derivatives vanish when integrated over the worldsheet. Using (\ref{eq:xikring}), we obtain
\begin{equation} \label{eq:ringr}
\int d\sigma \kappa \varphi^\prime = \int d\sigma \kappa x^{\prime \mu} \partial_\mu \varphi = \int d\sigma \Big( \mathring{k}^\mu \partial_\mu \varphi + \kappa \mathring{\iota}_\mu \mathring{\theta}^{\mu \nu} \partial_\nu \varphi \Big) = 0 \, ,
\end{equation}
for any parameter $\varphi$. Hence, the generator (\ref{eq:Gring}) remains invariant under the following change of parameters
\begin{equation}\label{eq:ringred}
\mathring{\xi}^\mu \to \mathring{\xi}^\mu + \kappa \mathring{\theta}^{\mu \nu} \partial_\nu \varphi, \ \ \mathring{\lambda}_\mu \to \mathring{\lambda}_\mu + \partial_\mu \varphi \, .
\end{equation}
These are reducibility conditions (\ref{eq:reducible}) in the basis spanned by $\mathring{k}^\mu$ and $\mathring{\iota}_\mu$.
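Indeed, under the change of parameters (\ref{eq:ringred}), the generator (\ref{eq:Gring}) shifts by exactly the vanishing integral (\ref{eq:ringr})
\begin{equation}
\mathring{G} (\mathring{\xi} + \kappa \mathring{\theta} \partial \varphi, \mathring{\lambda} + \partial \varphi) - \mathring{G} (\mathring{\xi}, \mathring{\lambda}) = \int d\sigma \Big( \mathring{k}^\mu \partial_\mu \varphi + \kappa \mathring{\iota}_\mu \mathring{\theta}^{\mu \nu} \partial_\nu \varphi \Big) = 0 \, .
\end{equation}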
\section{Courant bracket twisted by $\mathring{C}$ from the generator algebra}
\cleq
In order to obtain the Poisson bracket algebra for the generator (\ref{eq:Gring}), let us first calculate the algebra of the basis vectors, using the standard Poisson bracket relations (\ref{eq:PBR}). The algebra of the auxiliary currents $\mathring{\iota}_\mu$ is
\begin{equation} \label{eq:iotaiota}
\{ \mathring{\iota}_\mu (\sigma), \mathring{\iota}_\nu (\bar{\sigma})\} = - 2\mathring{B}_{\mu \nu \rho} \mathring{k}^{\rho} \delta(\sigma-\bar{\sigma})- {\cal \mathring{F}}_{\mu \nu}^{\ \rho}\ \mathring{\iota}_\rho \delta(\sigma-\bar{\sigma}) \, ,
\end{equation}
where $\mathring{B}_{\mu \nu \rho}$ is the generalized H-flux, given by
\begin{equation} \label{eq:calB}
\mathring{B}_{\mu \nu \rho} = \partial_\mu \mathring{B}_{\nu \rho} + \partial_\nu \mathring{B}_{\rho \mu}+\partial_\rho \mathring{B}_{\mu \nu} \, ,
\end{equation}
and ${\cal \mathring{F}}_{\mu \nu}^{\ \rho}$ is the generalized f-flux, given by
\begin{equation} \label{eq:calF}
{\cal \mathring{F}}_{\mu \nu}^{\ \rho} = -2\kappa \mathring{B}_{\mu \nu \sigma} \mathring{\theta}^{\sigma \rho} \, .
\end{equation}
The algebra of currents $\mathring{k}^\mu$ is given by
\begin{eqnarray} \label{eq:kringkring}
\{ \mathring{k}^\mu (\sigma), \mathring{k}^\nu (\bar{\sigma}) \} = - \kappa{\cal \mathring{Q}}_{\rho}^{\ \mu \nu} \mathring{k}^\rho \delta(\sigma-\bar{\sigma}) -\kappa^2{\cal \mathring{R}}^{\mu \nu \rho} \mathring{\iota}_\rho \delta(\sigma-\bar{\sigma}) \, ,
\end{eqnarray}
where
\begin{equation} \label{eq:calQ}
{\cal \mathring{Q}}^{\ \nu \rho}_\mu
= \mathring{Q}^{\ \nu \rho}_\mu + 2\kappa \mathring{\theta}^{\nu\sigma} \mathring{\theta}^{\rho \tau} \mathring{B}_{\mu \sigma \tau} \, , \quad \mathring{Q}^{\ \nu \rho}_\mu = \partial_{\mu} \mathring{\theta}^{\nu \rho}
\end{equation}
and
\begin{equation} \label{eq:calR}
{\cal \mathring{R}}^{\mu \nu \rho} = \mathring{R}^{\mu \nu \rho} +2\kappa \mathring{\theta}^{\mu \lambda} \mathring{\theta}^{\nu \sigma}\mathring{\theta}^{\rho \tau}\mathring{B}_{\lambda \sigma \tau} \, , \quad \mathring{R}^{\mu \nu \rho} = \mathring{\theta}^{\mu \sigma}\partial_\sigma \mathring{\theta}^{\nu \rho} + \mathring{\theta}^{\nu \sigma}\partial_\sigma \mathring{\theta}^{\rho \mu}+ \mathring{\theta}^{\rho \sigma}\partial_\sigma \mathring{\theta}^{\mu \nu} \, .
\end{equation}
The terms in (\ref{eq:kringkring}) containing both $\mathring{\theta}$ and $\mathring{B}$ are a consequence of the non-commutativity of the auxiliary currents $\mathring{\iota}_\mu$. The remaining algebra of the currents $\mathring{k}^\mu$ and $\mathring{\iota}_\mu$ is obtained just as easily
\begin{equation} \label{eq:kringiota}
\{ \mathring{\iota}_\mu (\sigma), \mathring{k}^\nu (\bar{\sigma}) \} = \kappa \delta_\mu^\nu \delta^\prime(\sigma-\bar{\sigma}) + {\cal \mathring{F}}^{\ \nu}_{\mu \rho}\ \mathring{k}^\rho \delta(\sigma-\bar{\sigma}) -\kappa {\cal \mathring{Q}}_{\mu}^{\ \nu \rho} \mathring{\iota}_\rho \delta(\sigma-\bar{\sigma}) \, .
\end{equation}
These basic algebra relations can be summarized in a single relation, in which the structure constants contain all the generalized fluxes
\begin{equation}
\{\mathring{X}^M (\sigma) , \mathring{X}^N (\bar{\sigma}) \} = -\mathring{F}^{MN}_{\ \ \ \ P}\ \mathring{X}^P \delta(\sigma-\bar{\sigma}) + \kappa \eta^{MN} \delta^{\prime}(\sigma-\bar{\sigma}) \, ,
\end{equation}
with
\begin{eqnarray}\label{eq:Fdef}
\mathring{F}^{M N \rho} =
\begin{pmatrix}
\kappa^2 \mathring{{\cal R}}^{\mu \nu \rho} & - \kappa \mathring{{\cal Q}}_\nu^{\ \mu \rho} \\
\kappa \mathring{{\cal Q}}_\mu^{\ \nu \rho} & \mathring{{\cal F}}^{\ \rho}_{\mu \nu} \\
\end{pmatrix} \, , \qquad
\mathring{F}^{M N}{}_\rho =
\begin{pmatrix}
\kappa \mathring{{\cal Q}}_\rho^{\ \mu \nu} & \mathring{{\cal F}}^{\ \mu}_{\nu \rho } \\
-\mathring{{\cal F}}^{\ \nu}_{\mu \rho} & 2 \mathring{B}_{\mu \nu \rho} \\
\end{pmatrix} \, .
\end{eqnarray}
The form of the generalized fluxes is the same as the one already obtained using the tetrad formalism \cite{flux1, flux2, flux3}. In our approach, the generalized fluxes arise in the Poisson bracket algebra solely from the fact that the generalized canonical variable $X^M$ is transformed with an element of the $O(D,D)$ group that twists the Courant bracket by both $B$ and $\theta$ at the same time. Consequently, the fluxes obtained in this paper are functions of the new effective fields $\mathring{B}_{\mu \nu}$ (\ref{eq:Bmathring}) and $\mathring{\theta}^{\mu \nu}$ (\ref{eq:thetamathring}).
We now proceed to obtain the full bracket. Let us rewrite the generator (\ref{eq:Gring}) algebra
\begin{eqnarray} \label{eq:algdeodeo}
&& \Big\{ \mathring{G} (\mathring{\xi}_1, \mathring{\lambda}_1) (\sigma), \, \mathring{G} (\mathring{\xi}_2, \mathring{\lambda}_2)(\bar{\sigma}) \Big\} = \\ \notag
&& \int d\sigma d\bar{\sigma} \Big[\Big\{\mathring{\xi}_1^\mu(\sigma) \mathring{\iota}_\mu (\sigma), \mathring{\xi}_2^\nu(\bar{\sigma}) \mathring{\iota}_\nu (\bar{\sigma}) \Big\} +\Big\{\mathring{\lambda}_{1\mu}(\sigma) \mathring{k}^\mu (\sigma), \mathring{\lambda}_{2\nu}(\bar{\sigma}) \mathring{k}^\nu (\bar{\sigma}) \Big\} \\ \notag
&&+\Big\{\mathring{\xi}^\mu_1(\sigma) \mathring{\iota}_\mu (\sigma), \mathring{\lambda}_{2\nu}(\bar{\sigma}) \mathring{k}^\nu (\bar{\sigma}) \Big\}+\Big\{\mathring{\lambda}_{1\mu}(\sigma) \mathring{k}^\mu (\sigma), \mathring{\xi}_{2}^\nu(\bar{\sigma}) \mathring{\iota}_\nu (\bar{\sigma}) \Big\}
\Big] \, .
\end{eqnarray}
The first term of (\ref{eq:algdeodeo}) is obtained, using (\ref{eq:iotaiota})
\begin{eqnarray} \label{eq:ii}
&&\int d\sigma d\bar{\sigma} \Big\{\mathring{\xi}_1^\mu(\sigma) \mathring{\iota}_\mu (\sigma), \mathring{\xi}_2^\nu(\bar{\sigma}) \mathring{\iota}_\nu (\bar{\sigma}) \Big\} = \\ \notag
&& \int d\sigma \Big[ \mathring{\iota}_\mu \Big( \mathring{\xi}_2^\nu \partial_\nu \mathring{\xi}_1^\mu - \mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu- {\cal \mathring{F}}^{\ \mu}_{\nu \rho}\ \mathring{\xi}_1^\nu \mathring{\xi}_2^\rho \Big) -2 \mathring{B}_{\mu \nu \rho} \mathring{k}^\mu \mathring{\xi}_1^\nu \mathring{\xi}_2^\rho \Big] \, .
\end{eqnarray}
The second term is obtained, using (\ref{eq:kringkring})
\begin{eqnarray} \label{eq:kk}
&&\int d\sigma d\bar{\sigma} \Big\{\mathring{\lambda}_{1\mu}(\sigma) \mathring{k}^\mu (\sigma), \mathring{\lambda}_{2\nu}(\bar{\sigma}) \mathring{k}^\nu (\bar{\sigma}) \Big\} = \\ \notag
&& \int d\sigma \Big[ \mathring{k}^\mu \Big( \kappa\mathring{\theta}^{\nu \rho}(\mathring{\lambda}_{2\nu} \partial_\rho \mathring{\lambda}_{1\mu} - \mathring{\lambda}_{1\nu} \partial_\rho \mathring{\lambda}_{2\mu} )-\kappa {\cal \mathring{Q}}_{\mu}^{\ \nu \rho} \mathring{\lambda}_{1\nu} \mathring{\lambda}_{2\rho} \Big)-\mathring{\iota}_{\mu} \kappa^2 {\cal \mathring{R}}^{\mu \nu \rho} \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \Big] \, .
\end{eqnarray}
The remaining terms are antisymmetric with respect to $1 \leftrightarrow 2, \ \sigma \leftrightarrow \bar{\sigma}$ interchange. Therefore, it is sufficient to calculate only the first term in the last line of (\ref{eq:algdeodeo})
\begin{eqnarray} \label{eq:ik}
&&\int d\sigma d\bar{\sigma} \Big\{\mathring{\xi}^\mu_1(\sigma) \mathring{\iota}_\mu (\sigma), \mathring{\lambda}_{2\nu}(\bar{\sigma}) \mathring{k}^\nu (\bar{\sigma}) \Big\} = \\ \notag
&& \int d\sigma \Big[ \mathring{k}^\mu \Big(- \mathring{\xi}^\nu_1 \partial_\nu \mathring{\lambda}_{2\mu}-{\cal \mathring{F}}^{\ \nu}_{\mu \rho}\ \mathring{\xi}^\rho_1 \mathring{\lambda}_{2\nu} \Big)+\mathring{\iota}_\mu \Big(\kappa (\mathring{\lambda}_{2\nu} \mathring{\theta}^{\nu \rho})\partial_\rho \mathring{\xi}_1^\mu -\kappa{\cal \mathring{Q}}_\rho^{\ \nu \mu} \mathring{\xi}^\rho_1 \mathring{\lambda}_{2\nu} \Big) \Big] \\ \notag
&&+ \int d\sigma d\bar{\sigma} \kappa \mathring{\xi}^\nu_1(\sigma)\mathring{\lambda}_{2\nu}(\bar{\sigma}) \partial_\sigma \delta(\sigma-\bar{\sigma}) \, .
\end{eqnarray}
In order to transform the anomalous part, we note that
\begin{equation} \label{eq:deltapola}
\partial_\sigma \delta(\sigma-\bar{\sigma}) = \frac{1}{2} \partial_\sigma \delta(\sigma-\bar{\sigma}) -\frac{1}{2} \partial_{\bar{\sigma}} \delta(\sigma-\bar{\sigma}) \, ,
\end{equation}
and
\begin{equation} \label{eq:fdelta}
f(\bar{\sigma}) \partial_\sigma \delta(\sigma-\bar{\sigma}) = f(\sigma) \partial_\sigma \delta(\sigma-\bar{\sigma})+f^\prime (\sigma) \delta(\sigma-\bar{\sigma}) \, .
\end{equation}
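The identity (\ref{eq:fdelta}) is obtained by expanding $f(\bar{\sigma})$ around $\sigma$ and using $(\bar{\sigma}-\sigma)\, \partial_\sigma \delta(\sigma-\bar{\sigma}) = \delta(\sigma-\bar{\sigma})$
\begin{equation}
f(\bar{\sigma}) \partial_\sigma \delta(\sigma-\bar{\sigma}) = \Big( f(\sigma) + (\bar{\sigma}-\sigma) f^\prime(\sigma) + \dots \Big) \partial_\sigma \delta(\sigma-\bar{\sigma}) = f(\sigma) \partial_\sigma \delta(\sigma-\bar{\sigma}) + f^\prime(\sigma) \delta(\sigma-\bar{\sigma}) \, ,
\end{equation}
where all higher-order terms vanish, since $(\bar{\sigma}-\sigma)^n \partial_\sigma \delta(\sigma-\bar{\sigma}) = 0$ for $n \geq 2$.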
Applying (\ref{eq:deltapola}) and (\ref{eq:fdelta}) to the last row of (\ref{eq:ik}), we obtain
\begin{eqnarray} \label{eq:anomtrans}
&& \int d\sigma d\bar{\sigma} \kappa \mathring{\xi}^\nu_1(\sigma)\mathring{\lambda}_{2\nu}(\bar{\sigma}) \partial_\sigma \delta(\sigma-\bar{\sigma}) = \frac{1}{2}\int d\sigma \kappa x^{\prime \mu} \Big(\mathring{\xi}^\nu_1 \partial_\mu \mathring{\lambda}_{2 \nu} - \partial_\mu \mathring{\xi}^\nu_1 \mathring{\lambda}_{2 \nu} \Big)\\ \notag
&&+\frac{\kappa}{2} \int d\sigma d\bar{\sigma} \Big( \mathring{\xi}^\nu_1(\sigma) \mathring{\lambda}_{2 \nu}(\sigma) \partial_\sigma \delta(\sigma-\bar{\sigma})-\mathring{\xi}^\nu_1(\bar{\sigma}) \mathring{\lambda}_{2 \nu}(\bar{\sigma}) \partial_{\bar{\sigma}} \delta(\sigma-\bar{\sigma}) \Big)= \\ \notag
&& \frac{1}{2}\int d\sigma \Big[\mathring{k}^\mu \Big(\mathring{\xi}_1^\nu \partial_\mu \mathring{\lambda}_{2\nu}- \partial_\mu \mathring{\xi}^\nu_1 \mathring{\lambda}_{2 \nu} \Big)+ \mathring{\iota}_\mu\kappa \mathring{\theta}^{\mu\rho } \Big( \mathring{\xi}_1^\nu \partial_\rho \mathring{\lambda}_{2\nu}- \partial_\rho \mathring{\xi}^\nu_1 \mathring{\lambda}_{2 \nu} \Big) \Big] \, ,
\end{eqnarray}
where (\ref{eq:xikring}) was used, as well as antisymmetry of $\mathring{\theta}$. Substituting (\ref{eq:anomtrans}) to (\ref{eq:ik}), we obtain
\begin{eqnarray} \label{eq:iksredjeno}
&&\int d\sigma d\bar{\sigma} \Big\{\mathring{\xi}^\mu_1 (\sigma) \mathring{\iota}_\mu (\sigma), \mathring{\lambda}_{2\nu}(\bar{\sigma}) \mathring{k}^\nu (\bar{\sigma}) \Big\} = \\ \notag
&& \int d\sigma \Big[ \mathring{k}^\mu \Big(\mathring{\xi}^\nu_1 (\partial_\mu \mathring{\lambda}_{2\nu}-\partial_\nu \mathring{\lambda}_{2\mu})-\frac{1}{2}\partial_\mu (\mathring{\xi}_1 \mathring{\lambda}_2)-{\cal \mathring{F}}^{\ \nu}_{\mu \rho}\ \mathring{\xi}^\rho_1 \mathring{\lambda}_{2\nu} \Big) \\ \notag
&& +\mathring{\iota}_\mu \Big(\kappa (\mathring{\lambda}_{2\nu} \mathring{\theta}^{\nu \rho}) \partial_\rho \mathring{\xi}_1^\mu+ \kappa \mathring{\theta}^{ \mu \rho} \Big( \mathring{\xi}_1^\nu \partial_\rho \mathring{\lambda}_{2\nu}-\frac{1}{2}\partial_\rho (\mathring{\xi}_1 \mathring{\lambda}_2)\Big) -\kappa{\cal \mathring{Q}}^{\ \nu \mu}_{\rho} \mathring{\xi}^\rho_1 \mathring{\lambda}_{2\nu} \Big) \Big] \, . \notag
\end{eqnarray}
Substituting (\ref{eq:ii}), (\ref{eq:kk}) and (\ref{eq:iksredjeno}) into (\ref{eq:algdeodeo}), we write the full algebra of the generator in the form
\begin{equation} \label{GringGring}
\Big\{ \mathring{G} (\mathring{\Lambda}_1), \, \mathring{G} (\mathring{\Lambda}_2) \Big\} = -\mathring{G}(\mathring{\Lambda}) \Leftrightarrow \Big\{ \mathring{G} (\mathring{\xi}_1, \mathring{\lambda}_1), \, \mathring{G} (\mathring{\xi}_2, \mathring{\lambda}_2) \Big\} = -\mathring{G}(\mathring{\xi}, \mathring{\lambda}) \, ,
\end{equation}
where
\begin{eqnarray} \label{eq:ringxi}
\mathring{\xi}^\mu &=&\mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu - \mathring{\xi}_2^\nu \partial_\nu \mathring{\xi}_1^\mu- \kappa \mathring{\theta}^{\mu\rho} \Big(\mathring{\xi}_1^\nu \partial_\rho \mathring{\lambda}_{2\nu} - \mathring{\xi}_2^\nu \partial_\rho \mathring{\lambda}_{1\nu}-\frac{1}{2} \partial_\rho (\mathring{\xi}_1 \mathring{\lambda}_2 - \mathring{\xi}_2 \mathring{\lambda}_1 ) \Big) \\ \notag
&& +\kappa \mathring{\theta}^{\nu \rho} (\mathring{\lambda}_{1\nu} \partial_\rho \mathring{\xi}_2^\mu-\mathring{\lambda}_{2\nu} \partial_\rho \mathring{\xi}_1^\mu) \\ \notag
&& +\kappa^2 {\cal \mathring{R}}^{\mu \nu \rho}\mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} + {\cal \mathring{F}}^{\ \mu}_{\rho \sigma}\ \mathring{\xi}_1^\rho \mathring{\xi}_2^\sigma +\kappa{\cal \mathring{Q}}_\rho^{\ \nu \mu}(\mathring{\xi}_1^{\rho} \mathring{\lambda}_{2\nu}-\mathring{\xi}_2^{\rho} \mathring{\lambda}_{1\nu}) \, ,
\end{eqnarray}
and
\begin{eqnarray} \label{eq:ringLambda}
\mathring{\lambda}_\mu &=& \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2\mu}-\partial_\mu \mathring{\lambda}_{2\nu}) - \mathring{\xi}_2^\nu (\partial_\nu\mathring{\lambda}_{1\mu}-\partial_\mu \mathring{\lambda}_{1\nu})+\frac{1}{2} \partial_\mu (\mathring{\xi}_1 \mathring{\lambda}_2- \mathring{\xi}_2 \mathring{\lambda}_1 ) \\ \notag
&&+\kappa\mathring{\theta}^{\nu \rho}(\mathring{\lambda}_{1\nu} \partial_\rho \mathring{\lambda}_{2\mu} - \mathring{\lambda}_{2\nu} \partial_\rho \mathring{\lambda}_{1\mu} ) \\ \notag
&&+2\mathring{B}_{\mu \nu \rho} \mathring{\xi}_1^\nu \mathring{\xi}_2^\rho+ \kappa{\cal \mathring{Q}}^{\ \nu \rho}_\mu \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} +{\cal \mathring{F}}^{\ \nu}_{\mu \sigma}\ \Big( \mathring{\xi}^\sigma_1 \mathring{\lambda}_{2\nu} - \mathring{\xi}^\sigma_2 \mathring{\lambda}_{1\nu} \Big) \, .
\end{eqnarray}
It is possible to rewrite the previous two equations if we note the following relations between the generalized fluxes
\begin{equation}
{\cal \mathring{R}}^{\mu \nu \rho} = \mathring{R}^{\mu \nu \rho}+ \mathring{\theta}^{\mu \sigma} \mathring{\theta}^{\nu \tau} {\cal \mathring{F}}^{\ \rho}_{\sigma \tau} \, , \quad {\cal \mathring{Q}}_\mu^{\ \nu \rho} =\mathring{Q}_\mu^{\ \nu \rho} + \mathring{\theta}^{\nu \sigma} {\cal \mathring{F}}^{\ \rho}_{\mu \sigma} \, .
\end{equation}
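For instance, the second of these relations is verified directly from (\ref{eq:calF}), (\ref{eq:calQ}) and the antisymmetry of $\mathring{\theta}$
\begin{equation}
\mathring{\theta}^{\nu \sigma} {\cal \mathring{F}}^{\ \rho}_{\mu \sigma} = -2\kappa \mathring{\theta}^{\nu \sigma} \mathring{B}_{\mu \sigma \tau} \mathring{\theta}^{\tau \rho} = 2\kappa \mathring{\theta}^{\nu \sigma} \mathring{\theta}^{\rho \tau} \mathring{B}_{\mu \sigma \tau} = {\cal \mathring{Q}}_{\mu}^{\ \nu \rho} - \mathring{Q}_{\mu}^{\ \nu \rho} \, ,
\end{equation}
and the first relation follows analogously.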
Now we have
\begin{eqnarray} \label{eq:ringxi2}
\mathring{\xi}^\mu &=&\mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu - \mathring{\xi}_2^\nu \partial_\nu \mathring{\xi}_1^\mu \\ \notag
&&+ \kappa \mathring{\theta}^{\mu\rho} \Big(\mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2\rho}-\partial_\rho \mathring{\lambda}_{2\nu}) - \mathring{\xi}_2^\nu(\partial_\nu \mathring{\lambda}_{1\rho}- \partial_\rho \mathring{\lambda}_{1\nu})+\frac{1}{2} \partial_\rho (\mathring{\xi}_1 \mathring{\lambda}_2 - \mathring{\xi}_2 \mathring{\lambda}_1 ) \Big) \\ \notag
&& +\kappa \mathring{\xi}_1^\rho \partial_\rho (\mathring{\lambda}_{2\nu} \mathring{\theta}^{\nu \mu}) -\kappa (\mathring{\lambda}_{2\nu} \mathring{\theta}^{\nu \rho}) \partial_\rho \mathring{\xi}_1^\mu-\kappa \mathring{\xi}_2^\rho \partial_\rho (\mathring{\lambda}_{1\nu} \mathring{\theta}^{\nu \mu})+\kappa(\mathring{\lambda}_{1\nu} \mathring{\theta}^{\nu \rho}) \partial_\rho \mathring{\xi}_2^\mu +\kappa^2 \mathring{R}^{\mu \nu \rho} \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \\ \notag
&& +{\cal \mathring{F}}^{\ \mu}_{\rho \sigma}\ \mathring{\xi}_1^\rho \mathring{\xi}_2^\sigma +\kappa \mathring{\theta}^{\mu \sigma}{\cal \mathring{F}}_{\sigma \rho}^{\ \nu}\ (\mathring{\xi}_1^{\rho} \mathring{\lambda}_{2\nu}-\mathring{\xi}_2^{\rho} \mathring{\lambda}_{1\nu}) + \kappa^2 \mathring{\theta}^{\mu \sigma} \mathring{\theta}^{\nu \tau} {\cal \mathring{F}}^{\ \rho}_{\sigma \tau}\ \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \, ,
\end{eqnarray}
and
\begin{eqnarray} \label{eq:ringLambda2}
\mathring{\lambda}_\mu &=& \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2\mu}-\partial_\mu \mathring{\lambda}_{2\nu}) - \mathring{\xi}_2^\nu (\partial_\nu\mathring{\lambda}_{1\mu}-\partial_\mu \mathring{\lambda}_{1\nu})+\frac{1}{2} \partial_\mu (\mathring{\xi}_1 \mathring{\lambda}_2- \mathring{\xi}_2 \mathring{\lambda}_1 ) \\ \notag
&&+\kappa\mathring{\theta}^{\nu \rho}(\mathring{\lambda}_{1\nu} \partial_\rho \mathring{\lambda}_{2\mu} - \mathring{\lambda}_{2\nu} \partial_\rho \mathring{\lambda}_{1\mu} )+\kappa \mathring{Q}_\mu^{\ \nu \rho} \mathring{\lambda}_{1\nu} \mathring{\lambda}_{2\rho} \\ \notag
&&+2\mathring{B}_{\mu \nu \rho} \mathring{\xi}_1^\nu \mathring{\xi}_2^\rho+ {\cal \mathring{F}}^{\ \nu}_{\mu \sigma}\ \Big( \mathring{\xi}^\sigma_1 \mathring{\lambda}_{2\nu} - \mathring{\xi}^\sigma_2 \mathring{\lambda}_{1\nu} \Big)+ \kappa \mathring{\theta}^{\nu \sigma} {\cal \mathring{F}}_{ \mu \sigma}^{\ \rho}\ \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \, ,
\end{eqnarray}
where the partial integration was used in the equation (\ref{eq:ringxi2}).
The relation (\ref{GringGring}) defines the $\mathring{C}$-twisted Courant bracket
\begin{equation} \label{eq:bracket4}
[\mathring{\Lambda}_1, \mathring{\Lambda}_2]_{{\cal C}_{\mathring{C}}} = \mathring{\Lambda} \Leftrightarrow [(\mathring{\xi}_1,\mathring{\lambda}_1), (\mathring{\xi}_2,\mathring{\lambda}_2) ]_{{\cal C}_{\mathring{C}}} = (\mathring{\xi},\mathring{\lambda}) \, ,
\end{equation}
which is the same bracket as (\ref{eq:mathCdef}). Both (\ref{eq:ringxi}) - (\ref{eq:ringLambda}) and (\ref{eq:ringxi2}) - (\ref{eq:ringLambda2}) express the $\mathring{C}$-twisted Courant bracket. The former shows explicitly how the gauge parameters depend on the generalized fluxes, while in the latter the similarities between the expressions for the two parameters are easier to see.
\subsection{Special cases and relations to other brackets}
Even though the non-commutativity parameter $\theta$ and the Kalb-Ramond field $B$ are not mutually independent, the relation between these fields (\ref{eq:thetadef}) was not used in obtaining the bracket (\ref{eq:bracket4}). Therefore, the results stand even if the bi-vector and the 2-form used for twisting are mutually independent. This will turn out to be convenient for analyzing the origin of the terms appearing in the Courant bracket twisted by $\mathring{C}$.
First, consider the case of a zero bi-vector $\theta^{\mu \nu} = 0$ with the 2-form $B_{\mu \nu}$ arbitrary. Consequently, the parameter $\alpha$ (\ref{eq:alfadef}) is zero, while the hyperbolic functions ${\cal C}$ and ${\cal S}$ are identity matrices. Therefore, the auxiliary fields (\ref{eq:Bmathring}) and (\ref{eq:thetamathring}) simplify in the following way
\begin{equation}
\mathring{B}_{\mu \nu} \to B_{\mu \nu} \, , \ \ \ \mathring{\theta}^{\mu \nu}\to 0 \, ,
\end{equation}
and the twisting matrix $e^{\breve{B}}$ (\ref{eq:ebb}) becomes the matrix $e^{\hat{B}}$ (\ref{eq:ebhat}). The expressions (\ref{eq:ringxi}) and (\ref{eq:ringLambda}) respectively reduce to
\begin{equation} \label{eq:XIB}
\mathring{\xi}^\mu = \mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu - \mathring{\xi}_2^\nu \partial_\nu \mathring{\xi}_1^\mu \, ,
\end{equation} and
\begin{equation} \label{eq:LB}
\mathring{\lambda}_\mu = \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2 \mu} - \partial_\mu \mathring{\lambda}_{2 \nu}) - \mathring{\xi}_2^\nu (\partial_\nu \mathring{\lambda}_{1 \mu} - \partial_\mu \mathring{\lambda}_{1 \nu}) +\frac{1}{2} \partial_\mu (\mathring{\xi}_1 \mathring{\lambda}_2- \mathring{\xi}_2 \mathring{\lambda}_1 )+ 2 B_{\mu \nu \rho} \mathring{\xi}^\nu_1 \mathring{\xi}^\rho_2 \, ,
\end{equation}
where $B_{\mu \nu \rho}$ is the Kalb-Ramond field strength, given by
\begin{equation} \label{eq:bmnr}
B_{\mu \nu \rho} = \partial_\mu B_{\nu \rho} + \partial_\nu B_{\rho \mu} + \partial_\rho B_{\mu \nu} \, .
\end{equation}
The equations (\ref{eq:XIB}) and (\ref{eq:LB}) define exactly the $B$-twisted Courant bracket (\ref{eq:CourantB}) \cite{twist}.
Second, consider the case of a zero 2-form $B_{\mu \nu} = 0$ with the bi-vector $\theta^{\mu \nu}$ arbitrary. Similarly, $\alpha=0$, and ${\cal C}$ and ${\cal S}$ are identity matrices. The auxiliary fields $\mathring{B}_{\mu \nu}$ and $\mathring{\theta}^{\mu \nu}$ are given by
\begin{equation}
\mathring{B}_{\mu \nu} \to 0 \, , \ \ \ \mathring{\theta}^{\mu \nu} \to \theta^{\mu \nu} \, .
\end{equation}
The twisting matrix $e^{\breve{B}}$ becomes the matrix of $\theta$-transformations $e^{\hat{\theta}}$ (\ref{eq:enateta}). The gauge parameters (\ref{eq:ringxi}) and (\ref{eq:ringLambda}) are respectively given by
\begin{align} \label{eq:XIR}
\mathring{\xi}^\mu =&\ \mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu - \mathring{\xi}_2^\nu \partial_\nu\mathring{\xi}_1^\mu + \\ \notag
& +\kappa \theta^{\mu \rho}\Big( \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2 \rho}-\partial_\rho \mathring{\lambda}_{2 \nu}) - \mathring{\xi}_2^\nu ( \partial_\nu \mathring{\lambda}_{1 \rho}-\partial_\rho \mathring{\lambda}_{1 \nu}) +\frac{1}{2} \partial_\rho (\mathring{\xi}_1 \mathring{\lambda}_{2} - \mathring{\xi}_2 \mathring{\lambda}_1) \Big) \\ \notag
& + \kappa \mathring{\xi}_1^\nu \partial_\nu (\mathring{\lambda}_{2 \rho} \theta^{\rho \mu})-\kappa \mathring{\xi}_2^\nu \partial_\nu (\mathring{\lambda}_{1 \rho} \theta^{\rho \mu})+\kappa (\mathring{\lambda}_{1 \nu} \theta^{\nu \rho}) \partial_\rho \mathring{\xi}_2^\mu -\kappa (\mathring{\lambda}_{2 \nu}\theta^{\nu \rho}) \partial_\rho \mathring{\xi}_1^\mu \\ \notag
&+\kappa^2 R^{\mu \nu \rho} \mathring{\lambda}_{1 \nu}\mathring{\lambda}_{2 \rho} \, ,
\end{align}
and
\begin{align} \label{eq:LR}
\mathring{\lambda}_\mu = &\ \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2 \mu} - \partial_\mu \mathring{\lambda}_{2 \nu}) - \mathring{\xi}_2^\nu (\partial_\nu \mathring{\lambda}_{1 \mu} - \partial_\mu \mathring{\lambda}_{1 \nu}) +\frac{1}{2}\partial_\mu(\mathring{\xi}_1 \mathring{\lambda}_2 - \mathring{\xi}_2 \mathring{\lambda}_1) \\ \notag
& + \kappa \theta^{\nu \rho} (\mathring{\lambda}_{1 \nu}\partial_\rho \mathring{\lambda}_{2 \mu}-\mathring{\lambda}_{2 \nu} \partial_\rho \mathring{\lambda}_{1 \mu})+ \kappa \mathring{\lambda}_{1 \rho} \mathring{\lambda}_{2 \nu} Q_\mu^{\ \rho \nu} \, ,
\end{align}
where by $Q_\mu^{\ \nu \rho}$ and $R^{\mu \nu \rho}$ we have denoted the non-geometric fluxes, given by
\begin{equation} \label{eq:nongeomflux}
Q_\mu^{\ \nu \rho} = \partial_\mu \theta^{\nu \rho},\ \ R^{\mu \nu \rho} = \theta^{\mu \sigma} \partial_\sigma \theta^{\nu \rho} + \theta^{\nu \sigma} \partial_\sigma \theta^{\rho \mu} +\theta^{\rho\sigma} \partial_\sigma \theta^{\mu \nu} \, .
\end{equation}
The bracket defined by these relations is the $\theta$-twisted Courant bracket (\ref{eq:CourantTheta}) \cite{cdual}, and it features only the non-geometric fluxes.
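As a quick consistency check (not part of the original derivation), note that the $R$-flux (\ref{eq:nongeomflux}) is completely antisymmetric. For instance, exchanging the first two indices and using the antisymmetry of $\theta$,
\begin{equation}
R^{\nu \mu \rho} = \theta^{\nu \sigma} \partial_\sigma \theta^{\mu \rho} + \theta^{\mu \sigma} \partial_\sigma \theta^{\rho \nu} + \theta^{\rho \sigma} \partial_\sigma \theta^{\nu \mu} = - \theta^{\nu \sigma} \partial_\sigma \theta^{\rho \mu} - \theta^{\mu \sigma} \partial_\sigma \theta^{\nu \rho} - \theta^{\rho \sigma} \partial_\sigma \theta^{\mu \nu} = - R^{\mu \nu \rho} \, ,
\end{equation}
and analogously for the remaining pairs of indices.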
Let us comment on terms in the obtained expressions for gauge parameters (\ref{eq:ringxi2}) and (\ref{eq:ringLambda2}). The first line of (\ref{eq:ringxi2}) appears in the Courant bracket and in all brackets that can be obtained from its twisting by either a 2-form or a bi-vector. The next two lines correspond to the terms appearing in the $\theta$-twisted Courant bracket (\ref{eq:XIR}). The other terms do not appear in either $B$- or $\theta$-twisted Courant bracket.
Similarly, the first line of (\ref{eq:ringLambda2}) appears in the Courant bracket (\ref{eq:Lcou}) and in all other brackets obtained from its twisting, while the terms in the second line appear exclusively in the $\theta$-twisted Courant bracket (\ref{eq:LR}). The first term in the last line appears in the $B$-twisted Courant bracket (\ref{eq:LB}), while the rest are new terms. We see that all the terms that appear in neither of the two brackets are the terms containing the ${\cal \mathring{F}}$ flux.
\subsection{Coordinate free notation}
In order to obtain the formulation of the $\mathring{C}$-twisted Courant bracket in the coordinate free notation, independent of the local coordinate system used on the manifold, let us first provide the definitions of a couple of well-known brackets and derivatives.
The Lie derivative along the vector field $\mathring{\xi}$ is given by
\begin{equation} \label{eq:lieder}
{\cal L}_{\mathring{\xi}} =i_{\mathring{\xi}} d + d i_{\mathring{\xi}} \, ,
\end{equation}
with $i_{\mathring{\xi}}$ being the interior product along the vector field $\mathring{\xi}$ and $d$ the exterior derivative. Using the Lie derivative, one easily defines the Lie bracket
\begin{eqnarray} \label{eq:liebr}
[\mathring{\xi}_1, \mathring{\xi}_2]_L = {\cal L}_{\mathring{\xi}_1} {\mathring{\xi}_2} - {\cal L}_{\mathring{\xi}_2} {\mathring{\xi}_1} \, .
\end{eqnarray}
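In a local coordinate basis, with the standard normalization of the Lie derivative (our assumption for (\ref{eq:liebr})), these definitions give
\begin{equation}
\left( [\mathring{\xi}_1, \mathring{\xi}_2]_L \right)^\mu = \mathring{\xi}_1^\nu \partial_\nu \mathring{\xi}_2^\mu - \mathring{\xi}_2^\nu \partial_\nu \mathring{\xi}_1^\mu \, , \qquad \left( {\cal L}_{\mathring{\xi}} \mathring{\lambda} \right)_\mu = \mathring{\xi}^\nu (\partial_\nu \mathring{\lambda}_\mu - \partial_\mu \mathring{\lambda}_\nu) + \partial_\mu (\mathring{\xi}^\nu \mathring{\lambda}_\nu) \, ,
\end{equation}
so that (\ref{eq:XIB}) is just the Lie bracket, while the combination ${\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 - \frac{1}{2}d(i_{\mathring{\xi}_1}\mathring{\lambda}_2 - i_{\mathring{\xi}_2}\mathring{\lambda}_1)$ reproduces the first three terms of (\ref{eq:LB}).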
The generalization of the Lie bracket to the space of 1-forms is the well-known Koszul bracket \cite{koszul}
\begin{equation} \label{eq:koszul}
[\mathring{\lambda}_1, \mathring{\lambda}_2]_\theta = {\cal{L}}_{\mathring{\theta} \mathring{\lambda}_1 } \mathring{\lambda}_2 - {\cal{L}}_{ \mathring{\theta}\mathring{\lambda}_2} \mathring{\lambda}_1 + d(\mathring{\theta}(\mathring{\lambda}_1, \mathring{\lambda}_2)) \, .
\end{equation}
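Expanding (\ref{eq:koszul}) in local coordinates provides a useful cross-check. Assuming the contractions $(\mathring{\theta}\mathring{\lambda})^\mu = \kappa \mathring{\theta}^{\mu \nu} \mathring{\lambda}_\nu$ and $\mathring{\theta}(\mathring{\lambda}_1, \mathring{\lambda}_2) = \kappa \mathring{\theta}^{\mu \nu} \mathring{\lambda}_{1 \mu} \mathring{\lambda}_{2 \nu}$ (our reading of the notation), all terms in which $\partial_\mu$ acts on the 1-forms cancel, leaving
\begin{equation}
\Big( [\mathring{\lambda}_1, \mathring{\lambda}_2]_{\kappa \mathring{\theta}} \Big)_\mu = - \kappa \mathring{\theta}^{\nu \rho} \big( \mathring{\lambda}_{1 \nu} \partial_\rho \mathring{\lambda}_{2 \mu} - \mathring{\lambda}_{2 \nu} \partial_\rho \mathring{\lambda}_{1 \mu} \big) - \kappa \mathring{Q}_\mu^{\ \nu \rho} \mathring{\lambda}_{1 \nu} \mathring{\lambda}_{2 \rho} \, ,
\end{equation}
which is exactly minus the $\mathring{\theta}$-dependent line of (\ref{eq:ringLambda2}), consistent with the term $-[\mathring{\lambda}_1, \mathring{\lambda}_2]_{\kappa \mathring{\theta}}$ appearing in (\ref{eq:ringRLambda}).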
The expressions (\ref{eq:ringxi}) and (\ref{eq:ringLambda}) in the coordinate free notation are given by
\begin{eqnarray} \label{eq:ringRxi}
\mathring{\xi} &=& [\mathring{\xi}_1,\mathring{\xi}_2]_L - [\mathring{\xi}_2,\mathring{\lambda}_1 \kappa \mathring{\theta}]_L + [\mathring{\xi}_1,\mathring{\lambda}_2 \kappa \mathring{\theta}]_L \\ \notag
&&- \Big({\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 - \frac{1}{2}d(i_{\mathring{\xi}_1}\mathring{\lambda}_2 - i_{\mathring{\xi}_2}\mathring{\lambda}_1)\Big)\kappa \mathring{\theta}\\ \notag
&&+ {\cal \mathring{F}}(\mathring{\xi}_1,\mathring{\xi}_2, .) - \kappa \mathring{\theta}{\cal \mathring{F}}(\mathring{\lambda}_1, ., \mathring{\xi}_2) +\kappa \mathring{\theta}{\cal \mathring{F}} (\mathring{\lambda}_2, ., \mathring{\xi}_1) + {\cal \mathring{R}} (\mathring{\lambda}_1,\mathring{\lambda}_2,.) \, ,
\end{eqnarray}
and
\begin{eqnarray} \label{eq:ringRLambda}
\mathring{\lambda} &=& {\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 - \frac{1}{2}d(i_{\mathring{\xi}_1}\mathring{\lambda}_2 - i_{\mathring{\xi}_2}\mathring{\lambda}_1) - [\mathring{\lambda}_1,\mathring{\lambda}_2]_{\kappa \mathring{\theta}} \\ \notag
&&+ \mathring{H}(\mathring{\xi}_1,\mathring{\xi}_2,.)-{\cal \mathring{F}} (\mathring{\lambda}_1, ., \mathring{\xi}_2) +{\cal \mathring{F}} (\mathring{\lambda}_2, ., \mathring{\xi}_1)+\kappa\mathring{\theta} {\cal \mathring{F}} (\mathring{\lambda}_1, \mathring{\lambda}_2, .) \, ,
\end{eqnarray}
where
\begin{equation} \label{eq:Bnc}
\mathring{H} = 2 d \mathring{B} \, .
\end{equation}
We have denoted the geometric $H$ flux by $\mathring{H}$, so that it is distinguished from the 2-form $\mathring{B}$. In the local basis, the full term containing the $H$-flux is given by
\begin{equation}
\left. \mathring{H}(\mathring{\xi}_1,\mathring{\xi}_2, .) \right|_\mu = 2\mathring{B}_{\mu \nu \rho }\ \mathring{\xi}_1^{\nu} \mathring{\xi}_2^{\rho} \, .
\end{equation}
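Indeed, assuming the standard convention for the exterior derivative of a 2-form, $(d\mathring{B})_{\mu \nu \rho} = \partial_\mu \mathring{B}_{\nu \rho} + \partial_\nu \mathring{B}_{\rho \mu} + \partial_\rho \mathring{B}_{\mu \nu}$, the definition (\ref{eq:Bnc}) reproduces this component expression,
\begin{equation}
\mathring{H}_{\mu \nu \rho} = 2 (d\mathring{B})_{\mu \nu \rho} = 2 \mathring{B}_{\mu \nu \rho} \, ,
\end{equation}
with $\mathring{B}_{\mu \nu \rho}$ being the field strength of the effective field $\mathring{B}_{\mu \nu}$, defined as in (\ref{eq:bmnr}).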
The terms containing the ${\cal \mathring{F}}$ flux are defined similarly
\begin{equation}
\left. {\cal \mathring{F}}(\mathring{\xi}_1,\mathring{\xi}_2, .) \right|^\mu = {\cal \mathring{F}}^{\ \mu}_{\nu \rho}\ \mathring{\xi}_1^{\nu} \mathring{\xi}_2^{\rho} \, ,
\end{equation}
and the non-geometric ${\cal \mathring{R}}$ flux
\begin{equation}
\left. {\cal \mathring{R}} (\mathring{\lambda}_1, \mathring{\lambda}_2,.) \right|^{\mu} = {\cal \mathring{R}}^{\mu \nu \rho} \mathring{\lambda}_{1\nu} \mathring{\lambda}_{2\rho} \, ,
\end{equation}
as well as
\begin{equation}
\left. \mathring{\theta} {\cal \mathring{F}} (\mathring{\lambda}_1, . , \mathring{\xi}_2) \right|^{\mu} = \mathring{\theta}^{\nu \sigma}{\cal \mathring{F}}_{\sigma \rho}^{\ \mu}\ \mathring{\lambda}_{1\nu} \mathring{\xi}_{2}^{\rho} \, .
\end{equation}
It is possible to rewrite the coordinate free notation in terms of the $\mathring{H}$-flux and the $\mathring{\theta}$ bi-vector only. The geometric ${\cal \mathring{F}}$ flux is just the contraction of the $\mathring{H}$-flux with the bi-vector
\begin{equation} \label{eq:Fnc}
{\cal \mathring{F}} = \kappa \mathring{\theta}\ \mathring{H} \, .
\end{equation}
The non-geometric ${\cal \mathring{R}}$ flux can be rewritten as
\begin{equation} \label{eq:Rnc}
{\cal \mathring{R}} = \frac{1}{2} [\mathring{\theta}, \mathring{\theta}]_S+ \wedge^3 (\kappa \mathring{\theta}) \mathring{H} \, ,
\end{equation}
where $\wedge$ is the wedge product, and by $[\mathring{\theta},\mathring{\theta}]_S$ we have denoted the Schouten-Nijenhuis bracket \cite{SNB}, given by
\begin{equation} \label{eq:SNb}
\left. [\mathring{\theta}, \mathring{\theta}]_S \right| ^{\mu \nu \rho} = \epsilon^{\mu \nu \rho}_{\alpha \beta \gamma} \mathring{\theta}^{\sigma \alpha} \partial_\sigma \mathring{\theta}^{\beta \gamma} = 2 \mathring{R}^{\mu \nu \rho} \, ,
\end{equation}
where
\begin{equation}
\epsilon^{\mu \nu \rho}_{\alpha \beta \gamma} =
\begin{vmatrix}
\delta^\mu_\alpha & \delta^\nu_\beta & \delta^\rho_\gamma \\
\delta^\nu_\alpha & \delta^\rho_\beta & \delta^\mu_\gamma \\
\delta^\rho_\alpha & \delta^\mu_\beta & \delta^\nu_\gamma
\end{vmatrix}\, .
\end{equation}
Expressing both the ${\cal \mathring{F}}$ and ${\cal \mathring{R}}$ fluxes in terms of the bi-vector $\mathring{\theta}$ and the 3-form $\mathring{H}$, we obtain
\begin{eqnarray} \label{eq:ringRxi2}
\mathring{\xi} &=& [\mathring{\xi}_1,\mathring{\xi}_2]_L - [\mathring{\xi}_2,\mathring{\lambda}_1 \kappa \mathring{\theta}]_L + [\mathring{\xi}_1,\mathring{\lambda}_2 \kappa \mathring{\theta}]_L \\ \notag
&&- \Big({\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 - \frac{1}{2}d(i_{\mathring{\xi}_1}\mathring{\lambda}_2 - i_{\mathring{\xi}_2}\mathring{\lambda}_1)\Big)\kappa \mathring{\theta}+ \frac{\kappa^2}{2} [\mathring{\theta},\mathring{\theta}]_S (\mathring{\lambda}_1,\mathring{\lambda}_2,.) \\ \notag
&& +\kappa \mathring{\theta} \mathring{H} (., \mathring{\xi}_1,\mathring{\xi}_2) - \wedge^2\kappa\mathring{\theta}\mathring{H}(\mathring{\lambda}_1, ., \mathring{\xi}_2)+\wedge^2 \kappa\mathring{\theta} \mathring{H} (\mathring{\lambda}_2, ., \mathring{\xi}_1)+\wedge^3\kappa \mathring{\theta} \mathring{H} (\mathring{\lambda}_1,\mathring{\lambda}_2,.) \, ,
\end{eqnarray}
and
\begin{eqnarray} \label{eq:ringRLambda2}
\mathring{\lambda} &=& {\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 - \frac{1}{2}d(i_{\mathring{\xi}_1}\mathring{\lambda}_2 - i_{\mathring{\xi}_2}\mathring{\lambda}_1) - [\mathring{\lambda}_1,\mathring{\lambda}_2]_{\kappa \mathring{\theta}} \\ \notag
&&+ \mathring{H}(\mathring{\xi}_1,\mathring{\xi}_2,.) -\kappa\mathring{\theta} \mathring{H}(\mathring{\lambda}_1, ., \mathring{\xi}_2)+\kappa\mathring{\theta}\mathring{H}(\mathring{\lambda}_2, . ,\mathring{\xi}_1)+\wedge^2 \kappa\mathring{\theta} \mathring{H}(\mathring{\lambda}_1, \mathring{\lambda}_2,.) \, .
\end{eqnarray}
The term $\kappa \mathring{\theta} \mathring{H}(. , \mathring{\xi}_1, \mathring{\xi}_2)$ is the wedge product of a bi-vector with a 3-form, contracted with two vectors, given by
\begin{equation}
\Big( \kappa \mathring{\theta} \mathring{H}(. , \mathring{\xi}_1, \mathring{\xi}_2) \Big)^\mu = 2 \kappa \mathring{\theta}^{\mu \nu}\mathring{B}_{\nu \rho \sigma} \mathring{\xi}_{1}^{\rho} \mathring{\xi}_2^\sigma \, ,
\end{equation}
and $\kappa \mathring{\theta}\mathring{H}(\mathring{\lambda}_1,. ,\mathring{\xi}_2) $ is defined similarly, with a 1-form contracted instead of one of the vector fields
\begin{equation}
\Big(\kappa \mathring{\theta}\mathring{H}(\mathring{\lambda}_1,. ,\mathring{\xi}_2) \Big)_\mu = 2 \kappa \mathring{\theta}^{\nu \rho}\mathring{B}_{\rho \mu \sigma } \mathring{\lambda}_{1\nu} \mathring{\xi}_2^\sigma \, .
\end{equation}
The terms like $\wedge^2 \kappa\mathring{\theta} \mathring{H}(\mathring{\lambda}_1, ., \mathring{\xi}_2)$ are the wedge product of two bi-vectors with a 3-form, contracted with the 1-form $\mathring{\lambda}_1$ and the vector $\mathring{\xi}_2$
\begin{equation}
\Big( \wedge^2 \kappa\mathring{\theta}\mathring{H} (\mathring{\lambda}_1, ., \mathring{\xi}_2) \Big)^\mu = 2 \kappa^2 \mathring{\theta}^{\nu \sigma} \mathring{\theta}^{\mu \rho} \ \mathring{B}_{\sigma \rho \tau} \mathring{\lambda}_{1\nu} \mathring{\xi}_2^\tau \, ,
\end{equation}
and similarly when the contraction is done with two 1-forms
\begin{equation}
\Big( \wedge^2 \kappa\mathring{\theta}\mathring{H}(\mathring{\lambda}_1, \mathring{\lambda}_2 , .) \Big)_\mu = 2\kappa^2 \mathring{\theta}^{\tau \rho } \mathring{\theta}^{\nu \sigma } \mathring{B}_{\rho \sigma \mu}\mathring{\lambda}_{1\tau} \mathring{\lambda}_{2\nu} \, .
\end{equation}
Lastly, the term $\wedge^3\kappa \mathring{\theta}\mathring{H}(\mathring{\lambda}_1,\mathring{\lambda}_2,.) $ is obtained by taking the wedge product of three bi-vectors with a 3-form and then contracting it with two 1-forms. It is given by
\begin{equation}
\Big(\wedge^3\kappa \mathring{\theta}\mathring{H}(\mathring{\lambda}_1,\mathring{\lambda}_2,.) \Big)^\mu = 2\kappa^3 \mathring{\theta}^{\nu \sigma }\mathring{\theta}^{ \rho \tau}\mathring{\theta}^{\mu \lambda} \mathring{B}_{\sigma \tau \lambda }\mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \, .
\end{equation}
\section{Star brackets}
\cleq
The expressions for the gauge parameters (\ref{eq:ringRxi}) and (\ref{eq:ringRLambda}) produce some well-known brackets, such as the Lie bracket and the Koszul bracket. The remaining terms can be combined so that they are expressed through some new brackets, acting on pairs of generalized vectors. It turns out that these brackets produce a generalized vector whose vector part $\mathring{\xi}^\mu$ and 1-form part $\mathring{\lambda}_\mu$ are related by $\mathring{\xi}^\mu = \kappa \mathring{\theta}^{\mu \nu} \mathring{\lambda}_\nu$, effectively resulting in the graphs of the bi-vector $\mathring{\theta}$ in the generalized cotangent bundle $T^\star M$, i.e. $\xi = \kappa \theta(.,\lambda)$. The star brackets can be interpreted in terms of projections on isotropic subspaces.
\subsection{$\theta$-star bracket}
Let us first consider the second line of (\ref{eq:ringxi2}) and the first line of (\ref{eq:ringLambda2}). When combined, they define a bracket acting on a pair of generalized vectors
\begin{equation} \label{eq:star2}
[\mathring{\Lambda}_1, \mathring{\Lambda}_2]^\star_{\mathring{\theta}} = \mathring{\Lambda}^\star \Leftrightarrow [ (\mathring{\xi}_1, \mathring{\lambda}_1),(\mathring{\xi}_2, \mathring{\lambda}_2)]^\star_{\mathring{\theta}} = (\mathring{\xi}_\star, \mathring{\lambda}^\star) \, ,
\end{equation}
where
\begin{equation} \label{eq:xistar2}
\mathring{\xi}_\star^\mu = \kappa \mathring{\theta}^{\mu\rho} \Big(\mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2\rho}-\partial_\rho \mathring{\lambda}_{2\nu}) - \mathring{\xi}_2^\nu(\partial_\nu \mathring{\lambda}_{1\rho}- \partial_\rho \mathring{\lambda}_{1\nu})+ \frac{1}{2} \partial_\rho (\mathring{\xi}_1 \mathring{\lambda}_2- \mathring{\xi}_2 \mathring{\lambda}_1 )\Big) \, ,
\end{equation}
and
\begin{equation} \label{eq:Lstar2}
\mathring{\lambda}^\star_\mu = \mathring{\xi}_1^\nu (\partial_\nu \mathring{\lambda}_{2\mu}-\partial_\mu \mathring{\lambda}_{2\nu}) - \mathring{\xi}_2^\nu (\partial_\nu\mathring{\lambda}_{1\mu}-\partial_\mu \mathring{\lambda}_{1\nu})+\frac{1}{2} \partial_\mu (\mathring{\xi}_1 \mathring{\lambda}_2- \mathring{\xi}_2 \mathring{\lambda}_1 ) \, ,
\end{equation}
from which one easily reads the relation
\begin{equation} \label{eq:star2veza}
\mathring{\xi}_\star^\mu= \kappa \mathring{\theta}^{\mu \rho} \mathring{\lambda}_\rho^\star \, .
\end{equation}
In a coordinate free notation, this bracket can be written as
\begin{equation}
[ \mathring{\Lambda}_1,\mathring{\Lambda}_2]^\star_{\mathring{\theta}} = [ (\mathring{\xi}_1, \mathring{\lambda}_1),(\mathring{\xi}_2, \mathring{\lambda}_2)]^\star_{\mathring{\theta}} = \Big(\kappa \mathring{\theta} \Big(.,{\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1\Big), {\cal L}_{\mathring{\xi}_1}\mathring{\lambda}_2 - {\cal L}_{\mathring{\xi}_2}\mathring{\lambda}_1 \Big) \, .
\end{equation}
\subsection{$B\theta$-star bracket}
The remaining terms contain the geometric $\mathring{H}$ and ${\cal \mathring{F}}$ fluxes. Note that they are the only terms that depend on the new effective Kalb-Ramond field $\mathring{B}$. First, we denote the last line of (\ref{eq:ringLambda2}) by
\begin{equation} \label{eq:Lstar1}
\mathring{\lambda}^*_{\mu} = 2\mathring{B}_{\mu \nu \rho} \mathring{\xi}_1^\nu \mathring{\xi}_2^\rho+ {\cal \mathring{F}}^{\ \nu}_{\mu \sigma}\ \Big( \mathring{\xi}^\sigma_1 \mathring{\lambda}_{2\nu} - \mathring{\xi}^\sigma_2 \mathring{\lambda}_{1\nu}\Big)+ \kappa \mathring{\theta}^{\nu \sigma} {\cal \mathring{F}}_{ \mu \sigma}^{\ \rho}\ \mathring{\lambda}_{1\nu}\mathring{\lambda}_{2\rho} \, .
\end{equation}
Second, using the definition of ${\cal \mathring{F}}$ (\ref{eq:calF}) and the fact that $\mathring{\theta}$ is antisymmetric, the last line of (\ref{eq:ringxi2}) can be rewritten as
\begin{eqnarray} \label{eq:xistar1}
\mathring{\xi}_*^{\mu} &=& 2\kappa \mathring{\theta}^{\mu \nu} \mathring{B}_{\nu \rho \sigma} \mathring{\xi}_1^\rho \mathring{\xi}_2^\sigma +\kappa \mathring{\theta}^{\mu \sigma}{\cal \mathring{F}}_{\sigma \rho}^{\ \nu}\ (\mathring{\xi}_1^{\rho} \mathring{\lambda}_{2\nu}-\mathring{\xi}_2^{\rho} \mathring{\lambda}_{1\nu}) +\kappa^2 \mathring{\theta}^{\mu \nu} \mathring{\theta}^{\tau \sigma} {\cal \mathring{F}}^{\ \rho}_{ \nu \sigma}\ \mathring{\lambda}_{1\tau}\mathring{\lambda}_{2\rho} \\ \notag
&=& \kappa \mathring{\theta}^{\mu \nu} \mathring{\lambda}^*_{\nu} \, .
\end{eqnarray}
Now the relations (\ref{eq:Lstar1}) and (\ref{eq:xistar1}) define the $B\theta$-star bracket by
\begin{equation} \label{eq:star1}
[ \mathring{\Lambda}_1,\mathring{\Lambda}_2]^*_{\mathring{B} \mathring{\theta}} = \mathring{\Lambda}^* \Leftrightarrow [ (\mathring{\xi}_1, \mathring{\lambda}_1),(\mathring{\xi}_2, \mathring{\lambda}_2)]^*_{\mathring{B} \mathring{\theta}} = (\mathring{\xi}_*, \mathring{\lambda}^*) \, .
\end{equation}
We can write the full bracket (\ref{eq:bracket4}) as
\begin{eqnarray} \label{eq:RoytZ}
[(\mathring{\xi}_1,\mathring{\lambda}_1),(\mathring{\xi}_2,\mathring{\lambda}_2)]_{{\cal C}_{\mathring{C}}} &=& \Big( [\mathring{\xi}_1,\mathring{\xi}_2]_L - [\mathring{\xi}_2,\mathring{\lambda}_1 \kappa \mathring{\theta}]_L + [\mathring{\xi}_1,\mathring{\lambda}_2 \kappa \mathring{\theta}]_L \\ \notag
&& + \frac{\kappa^2}{2} [\mathring{\theta},\mathring{\theta}]_S (\mathring{\lambda}_1,\mathring{\lambda}_2,.) , - [\mathring{\lambda}_1,\mathring{\lambda}_2]_{\kappa \mathring{\theta}} \Big) \\ \notag
&& + [(\mathring{\xi}_1,\mathring{\lambda}_1),(\mathring{\xi}_2,\mathring{\lambda}_2)]^*_{\mathring{B}, \mathring{\theta}} + [(\mathring{\xi}_1,\mathring{\lambda}_1),(\mathring{\xi}_2,\mathring{\lambda}_2)]^\star_{\mathring{\theta}} \, .
\end{eqnarray}
\subsection{Isotropic subspaces}
In order to give an interpretation to newly obtained starred brackets, it is convenient to consider isotropic subspaces. A subspace $L$ is isotropic if the inner product (\ref{eq:skalproizvod}) of any two generalized vectors from that sub-bundle is zero
\begin{equation} \label{eq:isodef}
\langle \Lambda_1, \Lambda_2 \rangle = 0, \ \ \ \Lambda_1 \, , \Lambda_2 \in L \, .
\end{equation}
From (\ref{eq:skalproizvod}), one easily finds that
\begin{eqnarray}\label{eq:isoth}
\xi_i^\mu = \kappa \, \theta^{\mu \nu} \lambda_{i \nu} \, , \qquad (i=1,2) \qquad \theta^{\mu \nu} = - \theta^{\nu \mu} \, ,
\end{eqnarray}
for any bi-vector $\theta$, and
\begin{eqnarray}\label{eq:isoB}
\lambda_{i \mu} = 2 B_{\mu \nu} \xi_i^\nu \, , \qquad (i=1,2) \qquad B_{\mu \nu} = - B_{\nu \mu} \, ,
\end{eqnarray}
for any 2-form $B$, satisfy the condition (\ref{eq:isodef}).
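As a short explicit check, take the inner product in the form $\langle \Lambda_1, \Lambda_2 \rangle = \xi_1^\mu \lambda_{2 \mu} + \xi_2^\mu \lambda_{1 \mu}$ (our assumption for the normalization of (\ref{eq:skalproizvod})). Substituting (\ref{eq:isoth}) gives
\begin{equation}
\langle \Lambda_1, \Lambda_2 \rangle = \kappa \theta^{\mu \nu} \lambda_{1 \nu} \lambda_{2 \mu} + \kappa \theta^{\mu \nu} \lambda_{2 \nu} \lambda_{1 \mu} = \kappa \big( \theta^{\mu \nu} + \theta^{\nu \mu} \big) \lambda_{1 \nu} \lambda_{2 \mu} = 0 \, ,
\end{equation}
by the antisymmetry of $\theta$; the case (\ref{eq:isoB}) vanishes in the same way by the antisymmetry of $B$.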
Furthermore, it is straightforward to introduce projections on these isotropic subspaces by
\begin{eqnarray}\label{eq:Ithdef}
{\cal I}^\theta (\Lambda^M) = {\cal I}^\theta (\xi^\mu, \lambda_\mu) = (\kappa \, \theta^{\mu \nu} \lambda_\nu , \lambda_\mu ) \, ,
\end{eqnarray}
and
\begin{eqnarray}\label{eq:IBdef}
{\cal I}_B (\Lambda^M) = {\cal I}_B (\xi^\mu, \lambda_\mu) = (\xi^\mu, 2 B_{\mu \nu} \xi^\nu ) \, .
\end{eqnarray}
Now it is easy to give an interpretation to star brackets. The $\theta$-star bracket (\ref{eq:star2}) can be defined as the projection of the Courant bracket (\ref{eq:CourantTheta}) on the isotropic subspace (\ref{eq:Ithdef})
\begin{equation} \label{eq:isostar2}
[\mathring{\Lambda}_1,\mathring{\Lambda}_2]^\star_{\mathring{\theta}} = {\cal I}^{\mathring{\theta}}\Big( [\mathring{\Lambda}_1, \mathring{\Lambda}_2]_{\cal C} \Big) \, .
\end{equation}
Similarly, note that all the terms in (\ref{eq:ringRLambda}) that do not appear in the $\theta$-twisted Courant bracket contribute exactly to the $B\theta$-star bracket. From this, it is easy to obtain the definition of the $B\theta$-star bracket (\ref{eq:star1})
\begin{equation}
[\mathring{\Lambda}_1,\mathring{\Lambda}_2]^*_{\mathring{B}\mathring{\theta}} = {\cal I}^{\mathring{\theta}}\Big( [\mathring{\Lambda}_1, \mathring{\Lambda}_2]_{{\cal C}_{\mathring{C}}} \Big) - {\cal I}^{\mathring{\theta}}\Big( [\mathring{\Lambda}_1, \mathring{\Lambda}_2]_{{\cal C}_{\mathring{\theta}}} \Big) \, .
\end{equation}
\section{Courant bracket twisted by $B$ and $\theta$}
\cleq
Now it is possible to write down the expression for the Courant bracket twisted by $B$ and $\theta$ (\ref{eq:CTdef}), using the expression for the $\mathring{C}$-twisted Courant bracket
\begin{equation} \label{eq:CBCR}
[\breve{\Lambda}_1, \breve{\Lambda}_2]_{{\cal C}_{B\theta}} = A^{-1} [A \breve{\Lambda}_1, A \breve{\Lambda}_2]_{{\cal C}_{\mathring{C}}} \, ,
\end{equation}
where $A$ is defined in (\ref{eq:Adef}). Substituting (\ref{eq:ringRxi}) into (\ref{eq:CBCR}), we obtain
\begin{eqnarray} \label{eq:breveRxi}
\breve{\xi} &=& {\cal C}^{-1}[ {\cal C}\breve{\xi}_1,{\cal C}\breve{\xi}_2]_L - {\cal C}^{-1}[{\cal C}\breve{\xi}_2,\breve{\lambda}_1 \kappa {\cal C}^{-1}\mathring{\theta}]_L +{\cal C}^{-1}[{\cal C}\breve{\xi}_1,\breve{\lambda}_2 \kappa {\cal C}^{-1}\mathring{\theta}]_L \\ \notag
&&- \Big({\cal L}_{{\cal C}{\breve{\xi}_1}}(\breve{\lambda}_2{\cal C}^{-1}) - {\cal L}_{{\cal C}{\breve{\xi}_2}}(\breve{\lambda}_1{\cal C}^{-1}) - \frac{1}{2}d(i_{\breve{\xi}_1}\breve{\lambda}_2 - i_{\breve{\xi}_2}\breve{\lambda}_1)\Big)\kappa \mathring{\theta}{\cal C}^{-1}\\ \notag
&&+ \frac{\kappa^2}{2} {\cal C}^{-1} [\mathring{\theta},\mathring{\theta}]_S (\breve{\lambda}_1 {\cal C}^{-1},\breve{\lambda}_2 {\cal C}^{-1},.) +\kappa {\cal C}^{-1} \mathring{\theta}\mathring{H}(., {\cal C}\breve{\xi}_1, {\cal C}\breve{\xi}_2) \\ \notag
&&-{\cal C}^{-1}\wedge^2 \kappa\mathring{\theta} \mathring{H}(\breve{\lambda}_1 {\cal C}^{-1}, ., {\cal C}\breve{\xi}_2) + {\cal C}^{-1}\wedge^2\kappa\mathring{\theta}\mathring{H}(\breve{\lambda}_2 {\cal C}^{-1}, ., {\cal C}\breve{\xi}_1)\\ \notag
&&+{\cal C}^{-1}\wedge^3\kappa \mathring{\theta}\mathring{H}(\breve{\lambda}_1 {\cal C}^{-1},\breve{\lambda}_2 {\cal C}^{-1},.) \, ,
\end{eqnarray}
and similarly, substituting (\ref{eq:ringRLambda}) into (\ref{eq:CBCR}), we obtain
\begin{eqnarray} \label{eq:breveRLambda}
\breve{\lambda} &=& \Big({\cal L}_{{\cal C}{\breve{\xi}_1}}(\breve{\lambda}_2{\cal C}^{-1}) - {\cal L}_{{\cal C}{\breve{\xi}_2}}(\breve{\lambda}_1{\cal C}^{-1}) - \frac{1}{2}d(i_{\breve{\xi}_1}\breve{\lambda}_2 - i_{\breve{\xi}_2}\breve{\lambda}_1)\Big){\cal C} + \mathring{H}({\cal C}\breve{\xi}_1,{\cal C}\breve{\xi}_2,.){\cal C}\\ \notag
&&- [\breve{\lambda}_1 {\cal C}^{-1},\breve{\lambda}_2{\cal C}^{-1}]_{\kappa \mathring{\theta}}{\cal C}-\kappa\mathring{\theta} \mathring{H}(\breve{\lambda}_1{\cal C}^{-1}, ., {\cal C}\breve{\xi}_2){\cal C}+\kappa \mathring{\theta} \mathring{H}(\breve{\lambda}_2{\cal C}^{-1}, ., {\cal C}\breve{\xi}_1){\cal C} \\ \notag
&&+\wedge^2 \kappa\mathring{\theta} \mathring{H}(\breve{\lambda}_1{\cal C}^{-1}, \breve{\lambda}_2{\cal C}^{-1},.){\cal C} \, ,
\end{eqnarray}
where ${\cal C}^\mu_{\ \nu} = \Big( \cosh{\sqrt{\alpha}} \Big)^{\mu}_{\ \nu}$ and $\breve{\Lambda} = (\breve{\xi}, \breve{\lambda})$ (\ref{eq:Lxitilde}). This is a somewhat cumbersome expression, making it difficult to work with. To simplify it, in accordance with our convention, we define the twisted Lie bracket by
\begin{equation} \label{eq:lietwist}
[\breve{\xi}_1, \breve{\xi}_2]_{L_{\cal C}} = {\cal C}^{-1}[ {\cal C}\breve{\xi}_1,{\cal C}\breve{\xi}_2]_L \, ,
\end{equation}
as well as the twisted Schouten-Nijenhuis bracket
\begin{equation} \label{eq:snbtwist}
\Big( [\breve{\theta}, \breve{\theta}]_{S_{\cal C}} \Big)^{\mu \nu \rho} = ({\cal C}^{-1})^{\mu}_{\ \sigma} ({\cal C}^{-1})^{\nu}_{\ \lambda} ({\cal C}^{-1})^{\rho}_{\ \tau} \Big( [{\cal C}\breve{\theta}, {\cal C} \breve{\theta}]_S \Big)^{\sigma \lambda \tau} \, ,
\end{equation}
and twisted Koszul bracket
\begin{equation} \label{eq:koszulwist}
[\breve{\lambda}_1, \breve{\lambda}_2]_{\theta_{\cal C}} = ({\cal C}^T)^{-1} [ {\cal C}^T \breve{\lambda}_1 , {\cal C}^T \breve{\lambda}_2 ]_{\theta {\cal C}} \, ,
\end{equation}
where the transpose of the matrix is necessary because the Koszul bracket acts on 1-forms. Now, the first three terms of (\ref{eq:breveRxi}) can be written as
\begin{equation} \label{eq:Lietr}
[ \breve{\xi}_1,\breve{\xi}_2]_{L_{\cal C}} - [\breve{\xi}_2,\breve{\lambda}_1 \kappa {\cal C}^{-1} \breve{\theta}]_{L_{\cal C}} +[\breve{\xi}_1,\breve{\lambda}_2 \kappa {\cal C}^{-1}\breve{\theta}]_{L_{\cal C}} \, ,
\end{equation}
where
\begin{equation} \label{eq:brevetheta}
\breve{\theta}^{\mu \nu} = ({\cal C}^{-1})^{\mu}_{\ \rho} \mathring{\theta}^{\rho \nu} = {\cal S}^{\mu}_{\ \rho} \theta^{\rho \nu} \, .
\end{equation}
The second line of (\ref{eq:breveRxi}) and the first line of (\ref{eq:breveRLambda}), originating from the $\mathring{\theta}$-star bracket (\ref{eq:star2}), can be easily combined into
\begin{equation} \label{eq:star2tr}
[ ({\cal C} \breve{\xi}_1, \breve{\lambda}_1{\cal C}^{-1}),({\cal C} \breve{\xi}_2, \breve{\lambda}_2{\cal C}^{-1})]^\star_{{\cal C}^{-1}\breve{\theta}}\ {\cal C} \, .
\end{equation}
The terms originating from the $\mathring{B} \mathring{\theta}$-star bracket (\ref{eq:star1}) are combined into
\begin{equation} \label{eq:star1tr}
[(\breve{\xi}_1,\breve{\lambda}_1),(\breve{\xi}_2,\breve{\lambda}_2)]^*_{\breve{B}, {\cal C}^{-1}\breve{\theta}} \, ,
\end{equation}
where
\begin{equation} \label{eq:breveB}
\breve{B}_{\mu \nu \rho} = \mathring{B}_{\alpha \beta \gamma}{\cal C}^{\alpha}_{\ \mu} {\cal C}^{\beta}_{\ \nu} {\cal C}^{\gamma}_{\ \rho} = \Big( \partial_{\alpha} (B{\cal S}{\cal C}^{-1})_{\beta \gamma}+\partial_{\beta} (B{\cal S}{\cal C}^{-1})_{\gamma \alpha}+\partial_{\gamma} (B{\cal S}{\cal C}^{-1})_{\alpha \beta} \Big){\cal C}^{\alpha}_{\ \mu} {\cal C}^{\beta}_{\ \nu} {\cal C}^{\gamma}_{\ \rho} \, .
\end{equation}
The expression for the Courant bracket twisted by both $B$ and $\theta$ can be written in the form
\begin{eqnarray}
[(\breve{\xi}_1, \breve{\lambda}_1),(\breve{\xi}_2, \breve{\lambda}_2)]_{{\cal C}_{B\theta}} &=& \Big( [ \breve{\xi}_1,\breve{\xi}_2]_{L_{\cal C}} - [\breve{\xi}_2,\breve{\lambda}_1 \kappa {\cal C}^{-1} \breve{\theta}]_{L_{\cal C}} +[\breve{\xi}_1,\breve{\lambda}_2 \kappa {\cal C}^{-1}\breve{\theta}]_{L_{\cal C}} \\ \notag
&&+\frac{\kappa^2}{2}[\breve{\theta}, \breve{\theta}]_{S_{{\cal C}}}(\breve{\lambda}_1,\breve{\lambda}_2 ,.), -[\breve{\lambda}_1, \breve{\lambda}_2]_{\theta_{\cal C}} \Big) \\ \notag
&& +[ ({\cal C} \breve{\xi}_1, \breve{\lambda}_1{\cal C}^{-1}),({\cal C} \breve{\xi}_2, \breve{\lambda}_2{\cal C}^{-1})]^\star_{{\cal C}^{-1}\breve{\theta}}\ {\cal C} + [(\breve{\xi}_1,\breve{\lambda}_1),(\breve{\xi}_2,\breve{\lambda}_2)]^*_{\breve{B}, {\cal C}^{-1}\breve{\theta}} \, .
\end{eqnarray}
When the Courant bracket is twisted by both $B$ and $\theta$, the result is a bracket similar to the $\mathring{C}$-twisted Courant bracket, in which the Lie bracket, the Schouten-Nijenhuis bracket and the Koszul bracket are all twisted as well.
\section{Conclusion}
\cleq
We examined various twists of the Courant bracket that appear in the Poisson bracket algebra of symmetry generators written in a suitable basis, obtained by acting on the double canonical variable (\ref{eq:Xdouble}) with the appropriate elements of the $O(D,D)$ group. In this paper, we considered the transformation that twists the Courant bracket simultaneously by a 2-form $B$ and a bi-vector $\theta$. When these fields are mutually T-dual, the generator obtained by this transformation is invariant under self T-duality.
We obtained the matrix elements of this transformation, denoted $e^{\breve{B}}$ (\ref{eq:ebb}), expressed in terms of hyperbolic functions of the parameter $\alpha$ (\ref{eq:alfadef}). In order to avoid working with such a complicated expression, we considered another $O(D,D)$ transformation $A$ (\ref{eq:Adef}) and introduced a new generator, written in the basis of auxiliary currents $\mathring{\iota}_\mu$ and $\mathring{k}^\mu$. The Poisson bracket algebra of the new generator was obtained, and it gave rise to the $\mathring{C}$-twisted Courant bracket, which contains all of the fluxes.
The generalized fluxes have been obtained using different methods \cite{royt, nick1, nick2,flux1, flux2, flux3}. In our approach, we started from an $O(D,D)$ transformation that twists the Courant bracket simultaneously by a 2-form $B$ and a bi-vector $\theta$, making it manifestly self T-dual. We obtained the expressions for all fluxes, written in terms of the effective fields
\begin{equation} \label{eq:Bmathring1}
\mathring{B}_{\mu \nu} = B_{\mu \rho} \Big( \frac{\tanh{\sqrt{2 \kappa \theta B}}}{\sqrt{2 \kappa \theta B}} \Big)^\rho{}_\nu \, , \qquad
\mathring{\theta}^{\mu \nu} = \Big( \frac{\sinh{2 \sqrt{2 \kappa \theta B}}}{2 \sqrt{2 \kappa \theta B}} \Big)^\mu{}_\sigma \theta^{\sigma \nu} \, .
\end{equation}
The fluxes, as functions of these effective fields, appear naturally in the Poisson bracket algebra of such generators.
A similar bracket was obtained in the algebra of generalized currents in \cite{nick1, nick2} and is sometimes referred to as the Roytenberg bracket \cite{royt}. In that approach, the phase space was changed so that the momentum algebra gives rise to the $H$-flux, after which the generalized currents were defined in terms of the open string fields. The bracket obtained this way corresponds to the Courant bracket that was first twisted by the $B$ field and then by the bi-vector $\theta$. The matrix of that twist is given by
\begin{equation} \label{eq:enaR}
e^R = e^{\hat{\theta}} e^{\hat{B}} = \begin{pmatrix}
\delta^\mu_\nu + \alpha^\mu_{\ \nu} & \kappa \theta^{\mu \nu} \\
2 B_{\mu \nu} & \delta^\nu_\mu
\end{pmatrix}\, .
\end{equation}
In our approach, we obtained the transformation that twists the Courant bracket at the same time by $B$ and $\theta$, resulting in the $\mathring{C}$-twisted Courant bracket. As a consequence, the $\mathring{C}$-twisted Courant bracket is defined in terms of the auxiliary fields $\mathring{B}$ (\ref{eq:Bmathring}) and $\mathring{\theta}$ (\ref{eq:thetamathring}), which are themselves functions of $\alpha$. This is not the case in \cite{nick1, nick2}. The Roytenberg bracket calculated therein can also be obtained following our approach by twisting with the matrix
\begin{equation} \label{eq:eR}
e^C =A e^{\breve{B}} =
\begin{pmatrix}
{\cal C}^2 & \kappa ({\cal CS}\theta) \\
2B{\cal CS} & 1
\end{pmatrix}
\, ,
\end{equation}
demanding that the background fields are infinitesimal, $B \sim \epsilon$, $\theta \sim \epsilon$, and keeping the terms up to $\epsilon^2$. Under these conditions, $e^C$ (\ref{eq:eR}) becomes exactly $e^R$ (\ref{eq:enaR}), and the bracket becomes the Roytenberg bracket.
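To spell this out, note that for $B \sim \epsilon$ and $\theta \sim \epsilon$ the arguments of the hyperbolic functions ${\cal C}$ and ${\cal S}$ are of order $\epsilon$, so that ${\cal C}^2 = 1 + 2\kappa \theta B + O(\epsilon^4)$ and ${\cal C S} = 1 + O(\epsilon^2)$. Keeping terms up to $\epsilon^2$, we obtain
\begin{equation*}
e^C = \begin{pmatrix}
1 + 2\kappa \theta B & \kappa \theta \\
2B & 1
\end{pmatrix} + O(\epsilon^3) \, ,
\end{equation*}
which is exactly $e^R$ (\ref{eq:enaR}), with $\alpha^\mu{}_\nu = 2 \kappa \theta^{\mu \rho} B_{\rho \nu}$ following from the factorization $e^R = e^{\hat{\theta}} e^{\hat{B}}$.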
Analyzing the $\mathring{C}$-twisted Courant bracket, we recognized that certain terms can be seen as new brackets on the space of generalized vectors, which we named star brackets. We demonstrated that they are closely related to projections on isotropic spaces. It is well established that the Courant bracket does not satisfy the Jacobi identity in the general case. The sub-bundles on which the Jacobi identity is satisfied are known as Dirac structures, which, as a necessary condition, must be subsets of isotropic spaces. Therefore, the star brackets might provide future insights into integrability conditions for the $\mathring{C}$-twisted Courant bracket \cite{sorad}.
In the end, we obtained the Courant bracket twisted at the same time by $B$ and $\theta$ by considering the generator in the basis spanned by $\breve{\iota}$ and $\breve{k}$, which is equivalent to undoing the $A$ transformation used to simplify the calculations. With the introduction of the new fields $\breve{B}_{\mu \nu}$ and $\breve{\theta}^{\mu \nu}$, this bracket has a form similar to that of the $\mathring{C}$-twisted Courant bracket, whereby the Lie, Schouten-Nijenhuis and Koszul brackets become their twisted counterparts.
It has already been established that the $B$-twisted and $\theta$-twisted Courant brackets appear in the algebra of generators defined in bases related by self T-duality \cite{crdual}. When the Courant bracket is twisted by both $B$ and $\theta$, it is self T-dual, and as such, it represents the self T-dual extension of the Lie bracket that includes all fluxes. It has already been shown \cite{cdual} how the Hamiltonian can be obtained by acting with $B$-transformations on the diagonal generalized metric. The same method could be replicated with the twisting matrix $e^{\breve{B}}$, which would give rise to a different Hamiltonian, whose further analysis can provide interesting insights into the role that the Courant bracket twisted by both $B$ and $\theta$ plays in understanding T-duality.
\section{Introduction}
\addtocontents{toc}{\SkipTocEntry}
\subsection*{Background}
The study of (topological) full groups in the setting of topological dynamics was initiated by Giordano, Putnam and Skau~\cite{GPS}. This was inspired by the work of Dye~\cite{Dye} in the measurable setting, and by Krieger's study of so-called ample groups on the Cantor space~\cite{Kri}. For Cantor minimal systems, Giordano, Putnam and Skau showed that certain distinguished subgroups of the full group determine completely the orbit equivalence class, the strong orbit equivalence class, and the flip conjugacy class, respectively, of the system. The \emph{full group} of a Cantor system (i.e.\ a $\mathbb{Z}$-action on a Cantor space) consists of all homeomorphisms of the Cantor space which leave the orbits invariant. Roughly speaking, the \emph{topological full group} is the subgroup of the full group consisting of those homeomorphisms which additionally preserve the orbits in a continuous manner. Giordano, Putnam and Skau also connected the dynamics with the theory of $C^*$-algebras, via the crossed product construction and its $K$-theory~\cite{GPS3}. Thus, they exhibited a strong relationship between these, a priori, quite different mathematical structures.
This is but one example of the rich interplay\footnote{This interplay essentially goes all the way back to the inception of the field by Murray and von~Neumann~\cite{MvN}.} between dynamical systems and $C^*$-algebras. Another prominent example of this interplay is the connection between shifts of finite type and Cuntz-Krieger algebras, discovered by Cuntz and Krieger in the early eighties~\cite{CK}. In the setting of irreducible one-sided shifts of finite type, Matsumoto defined the topological full group of such a dynamical system and proved that this group determines the shift up to continuous orbit equivalence, and also the associated Cuntz-Krieger algebra up to diagonal preserving isomorphism~\cite{Mats},~\cite{Mats2}. This paralleled Giordano, Putnam and Skau's results, although the dynamical systems were quite different. For instance, the former has no periodic points whereas the latter has a dense set of periodic points.
Using topological groupoids to model dynamical systems has unified many of these seemingly different connections between dynamics and $C^*$-algebras. Whenever one has a dynamical system of some sort, one may typically associate to it a topological groupoid, and from the groupoid one can construct its groupoid $C^*$-algebra. In many cases, isomorphism of such groupoids corresponds to some suitable notion of continuous orbit equivalence of the dynamical systems, and also to diagonal preserving isomorphism of the groupoid $C^*$-algebras~\cite{MM}, \cite{BCW}, \cite{Li}, \cite{Li2}. That groupoid isomorphism corresponds to diagonal preserving isomorphism of the $C^*$-algebras (in the topologically principal case) is due to the pioneering work of Renault~\cite{Ren2}. This reconstruction result has recently been generalized in~\cite{CRST}, wherein it is also shown that by adding more structure on the groupoids, such as gradings, one can recover stronger types of equivalence of the dynamical systems.
In~\cite{Mat1}, Matui defined the topological full group of an étale groupoid with compact unit space. His definition generalized virtually all the previously given definitions for different kinds of dynamical systems at one fell swoop. Matui realized that homeomorphisms which preserve orbits in a continuous manner are always given by \emph{full bisections} from the associated groupoid. In the subsequent paper~\cite{Mat} Matui proved his remarkable Isomorphism Theorem. Suppressing some assumptions, this theorem says that any two minimal groupoids over a Cantor space are isomorphic, as topological groupoids, if and only if their topological full groups are isomorphic\footnote{Actually, the same is true for several distinguished subgroups of the topological full group as well, such as its commutator subgroup. See \cite{Mat}, \cite{Nek} for details.}, as abstract groups. Matui's Isomorphism Theorem generalized the results of Giordano, Putnam and Skau, and Matsumoto, and others.
The study of topological full groups has also found interesting applications to group theory. Matui's isomorphism theorem means that one can classify the groupoids (and therefore any underlying dynamics, and the $C^*$-algebras) in terms of the topological full group. However, by going the other direction, one can use étale groupoids to distinguish certain discrete groups. Given two discrete groups, say in terms of their generators and relations, it can be hard to tell whether they are isomorphic or not. But if one can realize these groups as topological full groups (or distinguished subgroups) of some groupoids, then one can use the groupoids (i.e.\ the dynamics) to tell the groups apart---as one often has much dynamical information about the groupoids. For instance, this was the strategy used by Brin to show that Thompson's group $V$ is not isomorphic to its two-dimensional analog $2V$~\cite{Brin} (although he did not consider the groupoid explicitly\footnote{We should mention that Brin used a powerful result by Rubin~\cite{Rub2} from which one can deduce parts of Matui's Isomorphism Theorem. See Section~\ref{sec:spatrel} for more on Rubin's theorems.}). A more recent application of this form is by Matte Bon~\cite{MB} who showed that the higher dimensional Thompson group\footnote{It is known that the groups $nV$ are all non-isomorphic~\cite{BL}.} $nV$ embeds into $mV$ if and only if $n \leq m$. Matte Bon's paper also includes a novel approach to Matui's Isomorphism Theorem in terms of a dichotomy for such groupoids.
Another application is that topological full groups have provided new examples of groups with exotic properties. Most notably, topological full groups (or more precisely, their commutator subgroups) of Cantor minimal systems provided the first examples of finitely generated simple groups that are amenable (and infinite)~\cite{JM}. On another note, topological full groups arising from non-amenable groups acting minimally and topologically free on the Cantor space were recently shown to be $C^*$-simple~\cite{BS}.
Topological full groups have also found their way into Lawson's program of non-commutative Stone duality~\cite{Law}. In~\cite{Law2}, the topological full group of an étale groupoid is shown to coincide with the group of units of the so-called Tarski monoid to which the groupoid corresponds under non-commutative Stone duality.
\addtocontents{toc}{\SkipTocEntry}
\subsection*{Our results}
The main goal of the present paper is first and foremost to obtain a generalization of Matui's Isomorphism Theorem~\cite[Theorem 3.10]{Mat}. We provide two results in this direction (see Theorem~\ref{intro:KFgroupoid} and Theorem~\ref{intro:KLCCgroupoid} below). Matui's theorem applies to ample\footnote{I.e.\ étale and the unit space admits a compact open basis.} effective Hausdorff minimal second countable groupoids over (compact) Cantor spaces. We wish to relax the compactness assumption on the unit space, so we need to extend the definition of the topological full group to the locally compact setting. This is done in Definition~\ref{def:tfg}, where we stipulate that the homeomorphisms in the topological full group should be compactly supported (in addition to being induced by bisections). This seems a natural choice, as we then retain the ``finitary'' nature of the elements in the topological full group, as well as the countability of the topological full group (for second countable groupoids). Additionally, most of the arguments from~\cite{Mat} still work with suitable modifications.
For an étale groupoid $\mathcal{G}$ we denote its unit space by $\mathcal{G}^{(0)}$. The topological full group of $\mathcal{G}$ is denoted by $\llbracket \mathcal{G} \rrbracket$. And the commutator subgroup of $\llbracket \mathcal{G} \rrbracket$ is denoted by $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$. The first generalization of the isomorphism theorem relaxes the compactness assumption on~$\mathcal{G}^{(0)}$ and the second countability assumption on $\mathcal{G}$. (See the footnote after Theorem~\ref{thm:KFgroupoid} about the ``missing'' $\llbracket \mathcal{G}_i \rrbracket_0$ in the statement below.)
\begin{introtheorem}[cf.\ Theorem~\ref{thm:KFgroupoid}]\label{intro:KFgroupoid}
Suppose $\mathcal{G}_1$ and $\mathcal{G}_2$ are effective ample minimal Hausdorff groupoids whose unit spaces have no isolated points. Then the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_1 \cong \mathcal{G}_2$ as topological groupoids.
\item $\llbracket \mathcal{G}_1 \rrbracket \cong \llbracket \mathcal{G}_2 \rrbracket$ as abstract groups.
\item $\mathsf{D}(\llbracket \mathcal{G}_1 \rrbracket) \cong \mathsf{D}(\llbracket \mathcal{G}_2 \rrbracket)$ as abstract groups.
\end{enumerate}
\end{introtheorem}
As we'll see shortly, for the class of graph groupoids $\mathcal{G}_E$ arising from (directed) graphs $E$, we can also relax the assumption of minimality. Our second isomorphism theorem replaces the minimality assumption with a much weaker ``mixing property''. However, we were not able to establish rigidity for the commutator subgroup $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$. Additionally, we need to insist that the unit space is second countable. (By a \emph{locally compact Cantor space} we mean either the \emph{compact} Cantor space or the locally compact \emph{non-compact} Cantor space (up to homeomorphism), cf.\ Subsection~2.1.)
\begin{introtheorem}[cf.\ Theorem~\ref{thm:KLCCgroupoid}]\label{intro:KLCCgroupoid}
Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be effective ample Hausdorff groupoids over locally compact Cantor spaces. If, for $i = 1,2$, every $\mathcal{G}_i$-orbit has length at least $3$, and each non-empty clopen subset of $\mathcal{G}_i^{(0)}$ meets some $\mathcal{G}_i$-orbit twice, then the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_1 \cong \mathcal{G}_2$ as topological groupoids.
\item $\llbracket \mathcal{G}_1 \rrbracket \cong \llbracket \mathcal{G}_2 \rrbracket$ as abstract groups.
\end{enumerate}
\end{introtheorem}
As $(1) \Rightarrow (2) \Rightarrow (3)$ in Theorem~\ref{intro:KFgroupoid} and $(1) \Rightarrow (2)$ in Theorem~\ref{intro:KLCCgroupoid} are trivial, there is only one direction to prove. The proof strategy is essentially the same for both of the results. This strategy is summarized in the following diagram\footnote{If $\Gamma \leq \operatorname{Homeo}(X)$ and $\Lambda \leq \operatorname{Homeo}(Y)$ are groups of homeomorphisms, then a \emph{spatial isomorphism} between them is a homeomorphism $\phi \colon X \to Y$ such that $\gamma \mapsto \phi \circ \gamma \circ \phi^{-1}$ for $\gamma \in \Gamma$ defines a group isomorphism from $\Gamma$ onto $\Lambda$.}:
\[ \begin{tikzcd}[arrows=Rightarrow]
\hspace{4.1cm}
\llbracket \mathcal{G}_1 \rrbracket \cong \llbracket \mathcal{G}_2 \rrbracket \text{ \ (abstract isomorphism)} \arrow{d} \\
\hspace{3.9cm}
\left(\llbracket \mathcal{G}_1 \rrbracket, \mathcal{G}_1^{(0)}\right) \cong \left(\llbracket \mathcal{G}_2 \rrbracket, \mathcal{G}_2^{(0)}\right) \text{\ (spatial isomorphism)} \arrow{d}{\text{ Functoriality}} \\
\hspace{0mm}
\operatorname{Germ}\left(\llbracket \mathcal{G}_1 \rrbracket, \mathcal{G}_1^{(0)}\right) \cong \operatorname{Germ}\left(\llbracket \mathcal{G}_2 \rrbracket, \mathcal{G}_2^{(0)}\right) \arrow{d}{\mathcal{G}_i \text{ covered by full bisections}} \\
\hspace{0mm}
\operatorname{Germ}\left(\llbracket \mathcal{G}_1 \rrbracket, \mathcal{G}_1^{(0)}\right) \cong \ \mathcal{G}_1 \cong \mathcal{G}_2 \ \cong\operatorname{Germ}\left(\llbracket \mathcal{G}_2 \rrbracket, \mathcal{G}_2^{(0)}\right)
\end{tikzcd} \]
We follow (and expand upon) Matui's approach to his isomorphism theorem, but we employ two different \emph{spatial realization results} in the first step---which is by far the ``hardest'' step. This then leads to two different isomorphism theorems. The first spatial realization result (see Theorem~\ref{classF}) is based on Matui's approach in~\cite{Mat}, and the other one (see Theorem~\ref{thm:KBfaithful}) is based on Rubin's work in \cite{Rub}. Neither result is stronger than the other, and so they each give rise to isomorphism theorems for different classes of groupoids.
By interpreting Theorem~\ref{intro:KLCCgroupoid} and its assumptions for graph groupoids we obtain Theorem~\ref{intro:KLCCrigid} below. Therein, Condition~(L) is the well-known exit condition of Kumjian, Pask and Raeburn~\cite{KPR}, namely, that every cycle should have an exit. \emph{Condition~(T)} (see Definition~\ref{condT}) essentially means that the graph does not have a component which is a tree. Finally, what we call \emph{degenerate vertices} (see Definition~\ref{def:degenerate}) are the ones giving $\mathcal{G}_E$-orbits of length $1$ or $2$. This theorem may be considered a far-reaching generalization of Matsumoto's result in the case of irreducible one-sided shifts of finite type~\cite{Mats2} (which correspond to finite strongly connected graphs).
\begin{introtheorem}[cf.\ Theorem~\ref{thm:KLCCrigid}]\label{intro:KLCCrigid}
Let $E$ and $F$ be countable graphs satisfying Conditions~(L) and (T), and having no degenerate vertices. Then the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_{E} \cong \mathcal{G}_{F}$ as topological groupoids.
\item $\llbracket \mathcal{G}_{E} \rrbracket \cong \llbracket \mathcal{G}_{F} \rrbracket$ as abstract groups.
\end{enumerate}
\end{introtheorem}
In Corollary~\ref{cor:BCW}, we spell out the induced rigidity result for the associated graph algebras. As alluded to above, we are able to weaken the minimality assumption in Theorem~\ref{intro:KFgroupoid} for the class of graph groupoids. For this class of groupoids we can characterize exactly when the spatial realization result à la Matui applies. This turns out not to require minimality (which essentially corresponds to the graph being strongly connected), but rather certain weaker ``exit and return'' conditions. Each of these three conditions (see Definition~\ref{def:adhoc}) in Theorem~\ref{intro:KFrigid} below can be considered a strengthening of one of the three conditions that characterize when the boundary path space $\partial E$ has no isolated points (Proposition~\ref{prop:DEperfect}). Condition~(K) means that every cycle can be exited, and then returned to. Condition~(W) means that every wandering path can be exited, and then returned to. And Condition~($\infty$) means that every singular vertex can be exited (i.e.\ is not a sink, hence an infinite emitter), and then returned to (along infinitely many of the emitted edges).
\begin{introtheorem}[cf.\ Theorem~\ref{thm:KFrigid}]\label{intro:KFrigid}
Let $E$ and $F$ be graphs with no sinks, and suppose they both satisfy Conditions~(K), (W) and ($\infty$). Then the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_{E} \cong \mathcal{G}_{F}$ as topological groupoids.
\item $\llbracket \mathcal{G}_{E} \rrbracket \cong \llbracket \mathcal{G}_{F} \rrbracket$ as abstract groups.
\item $\mathsf{D}(\llbracket \mathcal{G}_{E} \rrbracket) \cong \mathsf{D}(\llbracket \mathcal{G}_{F} \rrbracket)$ as abstract groups.
\end{enumerate}
\end{introtheorem}
The seminal embedding theorem of Kirchberg~\cite{KP} states that any separable exact (unital) $C^*$-algebra embeds (unitally) into the Cuntz algebra $\mathcal{O}_2$. In particular, this means that any graph $C^*$-algebra $C^*(E)$, where $E$ is a countable graph, embeds into $\mathcal{O}_2$. The latter, being the universal $C^*$-algebra generated by two isometries whose range projections are orthogonal and sum to the identity, can be canonically identified with a graph $C^*$-algebra. Namely, the graph $C^*$-algebra of the graph $E_2$ consisting of a single vertex with two loops.
In~\cite{BS}, Brownlowe and Sørensen show that the Leavitt path algebra $L_R(E)$, over any commutative unital ring $R$, embeds into $L_R(E_2)$---the algebraic analog of $\mathcal{O}_2$. An inspection of their proof reveals that this embedding also maps the canonical diagonal subalgebra $D_R(E)$ into $D_R(E_2)$. As a consequence, Kirchberg's embedding for the graph $C^*$-algebras may then also be taken to be diagonal preserving (with respect to the diagonal\footnote{Technically, this is a Cartan subalgebra in the sense of Renault, not a $C^*$-diagonal in the sense of Kumjian. But it's common to refer to it as ``the diagonal'' in a graph $C^*$-algebra.} in $\mathcal{O}_2$ coming from the identification with $C^*(E_2)$).
At this point, it starts smelling a bit like groupoids might be lurking about. Indeed, inspired by Brownlowe and Sørensen's proof, we are able to prove, using the properties of the Germ-functor (see Section~\ref{sec:SpatG}), that the underlying graph groupoid $\mathcal{G}_E$ embeds into the Cuntz groupoid $\mathcal{G}_{E_2}$ (modulo topological obstructions in the sense of isolated points). We are also able to extend this embedding to all groupoids which are groupoid equivalent (which corresponds to a diagonal preserving Morita equivalence of the ($C^*$-)algebras) to a graph groupoid. To the best of the authors' knowledge, this is the first embedding result of its kind for ample groupoids.
\begin{introtheorem}[cf.\ Theorem~\ref{eqEmbed}]\label{intro:eqEmbed}
Let $\mathcal{H}$ be an effective ample second countable Hausdorff groupoid with $\mathcal{H}^{(0)}$ a locally compact Cantor space. If $\mathcal{H}$ is groupoid equivalent to $\mathcal{G}_E$, for some countable graph $E$ satisfying Condition~(L) and having neither sinks nor semi-tails, then $\mathcal{H}$ embeds into $\mathcal{G}_{E_2}$. Moreover, if $\mathcal{H}^{(0)}$ is compact, then the embedding maps $\mathcal{H}^{(0)}$ onto $E_2^\infty$.
In particular, any graph groupoid $\mathcal{G}_E$, with $E$ as above, embeds into $\mathcal{G}_{E_2}$, and any AF-groupoid embeds into $\mathcal{G}_{E_2}$.
\end{introtheorem}
In Subsection~11.2 we give explicit embeddings of any graph $C^*$-algebra $C^*(E)$ (or Leavitt path algebra $L_R(E)$) as above into $\mathcal{O}_2$, in terms of the generators, which maps the diagonal \emph{onto} the diagonal (and is unital whenever $C^*(E)$ is). We also record a result on diagonal embeddings of AF-algebras in Corollary~\ref{cor:AFemb}.
A consequence of Theorem~\ref{intro:eqEmbed} is that each topological full group $\llbracket \mathcal{G}_E \rrbracket$ embeds into Thompson's group $V$, since the latter is isomorphic to $\llbracket \mathcal{G}_{E_2} \rrbracket$. The Higman-Thompson groups $V_{n,r}$ (where $V = V_{2,1}$) can be realized as topological full groups of graph groupoids of certain strongly connected finite graphs (see Subsection~11.3). Hence, our embedding theorem may be considered a generalization of the well-known embedding of $V_{n,r}$ into $V$. In terms of groups, our embedding also includes all the so-called \emph{LDA-groups} (see Remark~\ref{rem:LDA}).
In~\cite{Mat3}, Matui introduced two conjectures for minimal ample groupoids on the Cantor space. The \emph{HK-conjecture} relates the groupoid homology to the $K$-theory of the groupoid $C^*$-algebra. And the \emph{AH-conjecture} relates the topological full group to the groupoid homology. These conjectures have been verified in several cases~\cite{Mat2}, in particular for (products of) graph groupoids arising from strongly connected finite graphs. For the more general graph groupoids studied in the present paper, the second named author, together with Carlsen, will attack these conjectures in a forthcoming paper. (In the recent preprint~\cite{Ort}, the second named author verifies the HK-conjecture for a class of groupoids which includes the graph groupoids of row-finite graphs.)
\addtocontents{toc}{\SkipTocEntry}
\subsection*{Précis} The structure of the paper is as follows. We recall some basic notions about étale groupoids and (classical) Stone duality in Section~\ref{sec:prelim}. This section also serves the purpose of establishing notation and conventions. In Section~\ref{sec:tfg} we give the definition of the topological full group $\llbracket \mathcal{G} \rrbracket$ of an ample groupoid $\mathcal{G}$ with locally compact unit space $\mathcal{G}^{(0)}$. We also prove some elementary results on the existence of elements in the topological full group with certain properties. Then we move on to study the groupoid of germs $\operatorname{Germ}\left(\Gamma, \mathcal{G}^{(0)} \right)$ associated to a subgroup $\Gamma \leq \llbracket \mathcal{G} \rrbracket$ of the topological full group, in Section~\ref{sec:germ}. We establish that $\operatorname{Germ}\left(\Gamma, \mathcal{G}^{(0)} \right)$ always embeds into $\mathcal{G}$, and that this embedding is an isomorphism as long as $\Gamma$ contains ``enough elements''. In Section~\ref{sec:SpatG} we introduce two categories: $\catname{SpatG}$ and $\catname{Gpoid}$. The former consists of pairs $(\Gamma, X)$ where $X$ is a space and $\Gamma$ is a subgroup of $\operatorname{Homeo}(X)$. The latter consists of certain ample groupoids.
By defining suitable morphisms in these categories and what the germ of a morphism in $\catname{SpatG}$ should be, we establish that the assignment $(\Gamma, X) \mapsto \operatorname{Germ}\left(\Gamma, X \right)$ is functorial.
The topological reconstruction results needed to deduce that an abstract isomorphism of two topological full groups is always spatially implemented are provided in Section~\ref{sec:spatrel}.
In Section~\ref{sec:reggrpd} we prove our two isomorphism theorems, Theorem~\ref{intro:KFgroupoid} and Theorem~\ref{intro:KLCCgroupoid}. This is now mostly a matter of interpreting the spatial realization results from Section~\ref{sec:spatrel} in terms of the groupoid and its topological full group, and then combining this with the results of Section~\ref{sec:germ} and Section~\ref{sec:SpatG}.
Then, in Section~\ref{sec:gg} we begin our in-depth study of graph groupoids $\mathcal{G}_E$ of general graphs $E$. This section is devoted to a thorough introduction of graph terminology and the dynamics that give rise to the graph groupoids. For several of the generic properties a topological groupoid can have, we list their characterizations for graph groupoids in terms of the graphs. We continue in Section~\ref{sec:tfgg} with describing all elements in the topological full group $\llbracket \mathcal{G}_E \rrbracket$ of graph groupoids. To do this we need to specify a new (yet equivalent) basis for the topology on $\mathcal{G}_E$. We then pursue specialized isomorphism theorems for the class of graph groupoids in Section~\ref{sec:isogg}. This yields Theorem~\ref{intro:KLCCrigid} and Theorem~\ref{intro:KFrigid}. At the end of this section we spell out the induced rigidity result for the graph algebras.
In the final section of the paper we employ the machinery from Sections~\ref{sec:germ}, \ref{sec:SpatG} and \ref{sec:tfgg} to obtain our groupoid embedding result; Theorem~\ref{intro:eqEmbed}. The main ingredient of the proof is constructing, for any graph $E$ as in the theorem, an injective local homeomorphism $\phi \colon \partial E \to E_2^\infty$ which induces a spatial embedding of the associated topological full groups. This construction is entirely explicit, and so we also obtain explicit diagonal embeddings of the graph algebras, in terms of their generators. We give several examples illustrating this for infinite graphs~$E$. We also record that $\llbracket \mathcal{G}_E \rrbracket$ always has the Haagerup property. At the end of Section~\ref{sec:embed} we show that any AF-groupoid is groupoid equivalent to a graph groupoid, going via Bratteli diagrams, hence $\mathcal{G}_{E_2}$-embeddable. We then spell out consequences for diagonal embeddings of AF-algebras. Additionally, we remark that transformation groupoids arising from locally compact (non-compact) Cantor minimal systems are AF-groupoids, and hence $\mathcal{G}_{E_2}$-embeddable as well.
\addtocontents{toc}{\SkipTocEntry}
\subsection*{Acknowledgments}
We would like to express our gratitude to Volodymyr Nekrashevych for sharing his private notes on Rubin's theorems. We would also like to thank Hiroki Matui for pointing out Lemma~\ref{lem_haus} to us, as well as for other valuable comments while he visited NTNU in the spring of 2018. The first named author thanks Eric Wofsey for helpful remarks on Stone duality. We also wish to thank Ulrik Enstad and Christian Skau for comments on the first draft of this paper.
\section{Preliminaries}\label{sec:prelim}
We will now recall the basic notions needed throughout the paper, and establish notation and conventions. We denote the natural numbers $\{1, 2, 3, \ldots \}$ by $\mathbb{N}$, and denote $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. If two sets $A,B$ are disjoint we will denote their union by $A \sqcup B$ if we wish to emphasize that they are disjoint. When we write $C = A \sqcup B$ we mean that $C = A \cup B$ \emph{and} that $A$ and $B$ are disjoint sets.
\subsection{Topological notions}
Following \cite{KL}, \cite{Stein} we say that a topological space is \emph{Boolean} if it is Hausdorff and has a basis of compact open sets. (This is also the terminology originally used by Stone~\cite{Stone}.) A \emph{Stone space} is then a compact Boolean space. We say that a topological space is \emph{perfect} if it has no isolated points. By a \emph{locally compact Cantor space} we mean a (non-empty) second countable perfect Boolean space. Up to homeomorphism there are two such spaces: one compact (the Cantor set) and one non-compact (the Cantor set with a point removed). The latter may also be realized as any non-closed open subset of the Cantor set, or as the product of the Cantor set and a countably infinite discrete space.
For a topological space $X$ we denote the group of self-homeomorphisms on $X$ by $\operatorname{Homeo}(X)$. We will occasionally denote $\id_X$ simply by $1$ for brevity. By an \emph{involution} we mean a homeomorphism (or more generally, a group element) $\phi$ with $\phi^2 = 1$. For $\phi \in \operatorname{Homeo}(X)$, we define the \emph{support of $\phi$} to be the (regular) closed set $\overline{\{x\in X \ \vert \ \phi(x)\neq x\}}$, and denote it by $\supp(\phi)$. We also define \[\operatorname{Homeo}_c(X) \coloneqq \{ \phi \in \operatorname{Homeo}(X) \ \vert \ \supp(\phi) \text{ compact open} \}.\]
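A basic example is the following: if $X$ is Hausdorff, $U, V \subseteq X$ are disjoint compact open sets and $h \colon U \to V$ is a homeomorphism, then the map which equals $h$ on $U$, equals $h^{-1}$ on $V$, and fixes the rest of $X$ is an involution belonging to $\operatorname{Homeo}_c(X)$. Its support is exactly the compact open set $U \sqcup V$, since no point of $U \sqcup V$ is fixed ($U$ and $V$ being disjoint).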
When $\Gamma$ is a subgroup of a group $\Gamma'$ we write $\Gamma \leq \Gamma'$. Beware that we will abuse this notation when we write $\Gamma \leq \operatorname{Homeo}_c(X)$ to mean that $\Gamma$ is a subgroup of $\operatorname{Homeo}(X)$ \emph{and} that $\Gamma \subseteq \operatorname{Homeo}_c(X)$. (It is not clear whether $\operatorname{Homeo}_c(X)$ itself is a group.)
\subsection{Stone duality}
We will now briefly recall the basics of (classical) Stone duality needed for Section~\ref{sec:spatrel}. For more details the reader may consult~\cite{Kop}, \cite[Chapter 31]{Frem} (or even the fountainhead~\cite{Stone}, \cite{Doct}). By a \emph{Boolean algebra} we mean a complemented distributive lattice with a top and bottom element. And by a \emph{generalized Boolean algebra} we mean a relatively complemented distributive lattice with a bottom element. For a topological space $X$, we denote the set of clopen subsets of $X$ by $\co(X)$. The set of compact open subsets of $X$ are denoted by $\ck(X)$. Finally, the set of regular open subsets of $X$ are denoted by $\mathcal{R}(X)$.
\begin{example}
Let $X$ be a topological space.
\begin{enumerate}
\item $\co(X)$ is a Boolean algebra under the operations of set-theoretic union, intersection and complement in $X$.
\item $\ck(X)$ is a generalized Boolean algebra in the same way as $\co(X)$, except for admitting only relative (set-theoretic) complements.
\item $\mathcal{R}(X)$ is a Boolean algebra with the following operations. Let $A,B \in \mathcal{R}(X)$. The join of $A$ and $B$ is $\left(\overline{A \cup B}\right)^\circ$, where $\circ$ denotes the interior. The meet of $A$ and $B$ is $A \cap B$. And the complement of $A$ is $\sim A \coloneqq (X \setminus A)^\circ$.
\end{enumerate}
\end{example}
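The join in $\mathcal{R}(X)$ genuinely differs from the set-theoretic union. For instance, in $X = \mathbb{R}$ the intervals $A = (0,1)$ and $B = (1,2)$ are regular open, and their join is $\left(\overline{(0,1) \cup (1,2)}\right)^\circ = (0,2)$, whereas $A \cup B$ itself fails to be regular open. Likewise, $\sim \! A = \big( \mathbb{R} \setminus (0,1) \big)^\circ = (-\infty, 0) \cup (1, \infty)$.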
A crude way of stating Stone duality is to say that every Boolean algebra arises as $\co(X)$ for some Stone space $X$, and that every generalized Boolean algebra arises as $\ck(Y)$ for some Boolean space $Y$. Hence, Stone spaces correspond to Boolean algebras and Boolean spaces correspond to generalized Boolean algebras.
More precisely, it is a duality in the following sense. A continuous map $f \colon X \to Y$ between topological spaces $X$ and $Y$ is \emph{proper} if $f^{-1}(K)$ is compact in $X$ whenever $K$ is a compact subset of $Y$. A map $\psi \colon \mathcal{A} \to \mathcal{B}$ between generalized Boolean algebras $\mathcal{A}$ and $\mathcal{B}$ is a \emph{Boolean homomorphism} if it preserves joins, meets and relative complements. We say that $\psi$ is \emph{proper} if for each $b \in \mathcal{B}$, there exists $a \in \mathcal{A}$ such that $\psi(a) \geq b$. Boolean spaces with proper continuous maps form a category. So do generalized Boolean algebras with proper Boolean homomorphisms. For a proper continuous map $f \colon X \to Y$, let $\ck(f)(A) \coloneqq f^{-1}(A)$ for $A \in \ck(Y)$. This makes $\ck(-)$ a contravariant functor from the category of Boolean spaces to the category of generalized Boolean algebras (with maps as above).
For a generalized Boolean algebra $\mathcal{A}$, let $\mathbb{S}(\mathcal{A})$ denote the set of ultrafilters in $\mathcal{A}$. For each $a \in \mathcal{A}$, let $\mathbb{S}(a) \coloneqq \{ \alpha \in \mathbb{S}(\mathcal{A}) \mid a \in \alpha \}$. Equipping $\mathbb{S}(\mathcal{A})$ with the topology generated by the (compact open) cylinder sets $\mathbb{S}(a)$ turns it into a Boolean space. For a proper Boolean homomorphism $\psi \colon \mathcal{A} \to \mathcal{B}$ and an ultrafilter $\beta \in \mathbb{S}(\mathcal{B})$, let $\mathbb{S}(\psi)(\beta) \coloneqq \{ \psi^{-1}(b) \mid b \in \beta \}$. This makes $\mathbb{S}(-)$ a contravariant functor in the other direction, and we refer to it as the \emph{Stone functor}. \emph{Stone duality} asserts that the contravariant functors $\ck(-)$ and $\mathbb{S}(-)$ implement a dual equivalence. In other words, the category of Boolean spaces is dually equivalent to the category of generalized Boolean algebras. It is more common to state Stone duality in terms of Stone spaces and Boolean algebras. This is just the restriction of the duality above to the aforementioned sub-categories.
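For an elementary illustration of the Stone functor, consider the generalized Boolean algebra $\mathcal{A}$ of finite subsets of $\mathbb{N}$. Every ultrafilter in $\mathcal{A}$ is principal, that is, of the form
\[\alpha_n = \{ a \in \mathcal{A} \mid n \in a \}\]
for some $n \in \mathbb{N}$, so $\mathbb{S}(\mathcal{A})$ is homeomorphic to $\mathbb{N}$ with the discrete topology, and $\ck(\mathbb{S}(\mathcal{A}))$ recovers the finite subsets of $\mathbb{N}$. Adjoining a top element, i.e.\ passing to the Boolean algebra of finite and cofinite subsets of $\mathbb{N}$, adds exactly one further ultrafilter (consisting of the cofinite sets), and the Stone space becomes the one-point compactification $\mathbb{N} \cup \{\infty\}$.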
For a generalized Boolean algebra $\mathcal{A}$, we let $\aut(\mathcal{A})$ denote the group of isomorphisms from $\mathcal{A}$ to $\mathcal{A}$.
\subsection{Étale groupoids}
The standard references for étale groupoids (and their $C^*$-algebras) are Renault's thesis~\cite{Ren2} and Paterson's book~\cite{Pat}. See also the excellent lecture notes by Sims~\cite{Sims}. A \emph{groupoid} is a small category of isomorphisms, that is, a set $\mathcal{G}$ (the morphisms, or arrows in the category) equipped with a partially defined multiplication $(g_1, g_2) \mapsto g_1 \cdot g_2$ for a distinguished subset $\mathcal{G}^{(2)} \subseteq \mathcal{G} \times \mathcal{G}$, and an everywhere defined involution $g \mapsto g^{-1}$ satisfying the following axioms:
\begin{enumerate}
\item If $g_1g_2$ and $(g_1g_2)g_3$ are defined, then $g_2g_3$ and $g_1(g_2g_3)$ are defined and $(g_1g_2)g_3=g_1(g_2g_3)$,
\item The products $gg^{-1}$ and $g^{-1}g$ are always defined. If $g_1g_2$ is defined, then $g_1=g_1g_2g_2^{-1}$ and $g_2=g_1^{-1}g_1g_2$.
\end{enumerate}
A \emph{topological groupoid} is a groupoid equipped with a topology making the operations of multiplication and taking inverse continuous. The elements of the form $gg^{-1}$ are called \emph{units}. We denote the set of units of a groupoid $\mathcal{G}$ by $\mathcal{G}^{(0)}$, and refer to this as the \emph{unit space}. We think of the unit space as a topological space equipped with the relative topology from $\mathcal{G}$. The \emph{source} and \emph{range} maps are
$$s(g) \coloneqq g^{-1}g\qquad\text{and}\qquad r(g) \coloneqq gg^{-1}$$
for $g\in \mathcal{G}$. These maps are necessarily continuous when $\mathcal{G}$ is a topological groupoid. We implicitly assume that all unit spaces appearing are of infinite cardinality (in order to avoid some degenerate cases). An \emph{étale} groupoid is a topological groupoid where the range map (and necessarily also the source map) is a local homeomorphism (as a map from $\mathcal{G}$ to $\mathcal{G}$). The unit space $\mathcal{G}^{(0)}$ of an étale groupoid is always an open subset of $\mathcal{G}$. An \emph{ample} groupoid is an étale groupoid whose unit space is a Boolean space.
It is quite common for operator algebraists to restrict to Hausdorff groupoids. One reason for this is that a topological groupoid is Hausdorff if and only if the unit space is a closed subset of the groupoid. In the end our main results will only apply to groupoids that are Hausdorff, but some of the theory applies when $\mathcal{G}$ is merely ample (and effective). For as long as the unit space $\mathcal{G}^{(0)}$ is Hausdorff the groupoid will be locally Hausdorff. We shall therefore clearly indicate whenever we actually need the groupoid to be Hausdorff for some result to hold.
Two units $x,y\in \mathcal{G}^{(0)}$ belong to the same \emph{$\mathcal{G}$-orbit} if there exists $g\in \mathcal{G}$ such that $s(g)=x$ and $r(g)=y$. We denote by $\operatorname{Orb}_{\mathcal{G}}(x)$ the $\mathcal{G}$-orbit of $x$. When every $\mathcal{G}$-orbit is dense in $\mathcal{G}^{(0)}$, $\mathcal{G}$ is called \emph{minimal}. In the special case that there is just one orbit, we call $\mathcal{G}$ \emph{transitive}. A subset $A \subseteq \mathcal{G}^{(0)}$ is called \emph{$\mathcal{G}$-full} if $r(s^{-1}(A)) = \mathcal{G}^{(0)}$, in other words if $A$ meets every $\mathcal{G}$-orbit. For an open subset $A \subseteq \mathcal{G}^{(0)}$ we denote by $\mathcal{G}_{|A}$ the subgroupoid $\{g\in \mathcal{G} \mid s(g), r(g)\in A \}$, called the \emph{restriction} of $\mathcal{G}$ to $A$. When $\mathcal{G}$ is étale, the restriction $\mathcal{G}_{|A}$ is an open étale subgroupoid. The \emph{isotropy group} of a unit $x\in \mathcal{G}^{(0)}$ is the group $\mathcal{G}_x^x \coloneqq \{g\in \mathcal{G} \mid s(g)=r(g)=x\}$, and the \emph{isotropy bundle} is
\[\mathcal{G}' \coloneqq \{g\in \mathcal{G} \mid s(g)=r(g)\} = \bigsqcup_{x \in \mathcal{G}^{(0)}} \mathcal{G}_x^x.\]
A groupoid $\mathcal{G}$ is said to be \emph{principal} if all isotropy groups are trivial, or equivalently, $\mathcal{G}' = \mathcal{G}^{(0)}$. Any principal groupoid can be identified with an equivalence relation on its unit space $\mathcal{G}^{(0)}$, but the topology need not be the relative topology from $\mathcal{G}^{(0)} \times \mathcal{G}^{(0)}$. We say that $\mathcal{G}$ is \emph{effective} if the interior of $\mathcal{G}'$ equals $\mathcal{G}^{(0)}$. We call $\mathcal{G}$ \emph{topologically principal} if the set of points in $\mathcal{G}^{(0)}$ with trivial isotropy group is dense in $\mathcal{G}^{(0)}$.
\begin{remark}\label{rem:effective}
We should point out that the condition we are calling \emph{effective} often goes under the name \emph{essentially principal} (or even topologically principal) elsewhere in the literature. In general, topologically principal implies effective. However, for most groupoids considered by operator algebraists the two notions are in fact equivalent (see \cite[Proposition 3.1]{Ren}), so often these names all mean the same thing. In particular, this is the case for second countable locally compact Hausdorff étale groupoids.
\end{remark}
\begin{definition}
Let $\mathcal{G}$ be an étale groupoid. A \emph{bisection} is an open subset $U\subseteq \mathcal{G}$ such that $s$ and $r$ are both injective when restricted to $U$. A bisection $U$ is called \emph{full} if $s(U)=r(U)=\mathcal{G}^{(0)}$.
\end{definition}
When $U$ is a bisection in $\mathcal{G}$, then $s_{\vert U} \colon U \to s(U)$ is a homeomorphism, and similarly for the range map. An étale groupoid can thus be characterized by admitting a topological basis consisting of bisections, and an ample groupoid as one with a basis of compact bisections. In particular, ample groupoids are locally compact, and if $\mathcal{G}$ is Hausdorff and ample, then $\mathcal{G}$ is also a Boolean space. One of the most basic classes of examples of étale groupoids is the following, arising from group actions.
\begin{example}
Let $\Gamma$ be a discrete group acting by homeomorphisms on a topological space $X$. The associated \emph{transformation groupoid} is \[\Gamma \ltimes X \coloneqq \Gamma \times X\] with product according to~${(\tau, \gamma(x)) \cdot (\gamma,x) = (\tau \gamma, x)}$ (and undefined otherwise), and inverse $(\gamma,x)^{-1} = (\gamma^{-1}, \gamma(x))$. Identifying the unit space $\left(\Gamma \ltimes X \right)^{(0)} = \{1\} \times X$ with $X$ in the obvious way we have $s((\gamma, x)) = x$ and $r((\gamma, x)) = \gamma(x)$. Equipping $\Gamma \ltimes X$ with the product topology makes it an étale groupoid (essentially because $\Gamma$ is discrete), and a basis of bisections is given by the cylinder sets \[Z(\gamma, U) \coloneqq \{(\gamma,x) \ \vert \ x\in U\}\]
indexed over $\gamma \in \Gamma$ and open subsets $U \subseteq X$. The identification of $X$ with the unit space as above is compatible with this topology. In particular $\Gamma \ltimes X$ is Hausdorff and ample exactly when $X$ is Boolean, and second countable when $\Gamma$ is countable and $X$ is second countable. The transformation groupoid is effective if and only if every non-trivial group element has support equal to $X$. In the second countable setting, this coincides with the action being topologically principal (meaning that the set of points that are fixed only by the identity element of the group forms a dense subset of $X$). The groupoid orbit $\operatorname{Orb}_{\Gamma \ltimes X}(x)$ of a point $x \in X$ coincides with the orbit under the action, i.e.\ $\operatorname{Orb}_{\Gamma \ltimes X}(x) = \{\gamma(x) \ \vert \ \gamma \in \Gamma \} = \operatorname{Orb}_{\Gamma}(x)$.
\end{example}
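One checks directly from the definitions above that the cylinder sets are indeed bisections, with $s(Z(\gamma, U)) = U$ and $r(Z(\gamma, U)) = \gamma(U)$, and that they interact with the groupoid operations according to
\[Z(\tau, V) \cdot Z(\gamma, U) = Z(\tau\gamma, U \cap \gamma^{-1}(V)) \qquad\text{and}\qquad Z(\gamma, U)^{-1} = Z(\gamma^{-1}, \gamma(U)),\]
since the product $(\tau, y) \cdot (\gamma, x)$ is defined precisely when $y = \gamma(x)$.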
A \emph{groupoid homomorphism} between two groupoids $\mathcal{G}$ and $\mathcal{H}$ is a map $\Phi \colon \mathcal{G} \to \mathcal{H}$ such that $(\Phi(g), \Phi(g')) \in \mathcal{H}^{(2)}$ whenever $(g, g') \in \mathcal{G}^{(2)}$, and moreover $\Phi(g) \cdot \Phi(g') = \Phi(g\cdot g')$. It follows that $\Phi(g^{-1}) = \Phi(g)^{-1}$ for all $g \in \mathcal{G}$, that $\Phi$ commutes with the source and range maps, and that $\Phi\left(\mathcal{G}^{(0)}\right) \subseteq \mathcal{H}^{(0)}$. If $\Phi$ is a bijection, then $\Phi^{-1}$ is a groupoid homomorphism and we call $\Phi$ an \emph{algebraic isomorphism}. For étale groupoids $\mathcal{G}$ and $\mathcal{H}$ an \emph{étale homomorphism} is a groupoid homomorphism $\Phi \colon \mathcal{G} \to \mathcal{H}$ which is also a local homeomorphism. It is a fact that a groupoid homomorphism $\Phi \colon \mathcal{G} \to \mathcal{H}$ between étale groupoids is a local homeomorphism if and only if the restriction $\Phi^{(0)} \colon \mathcal{G}^{(0)} \to \mathcal{H}^{(0)}$ to the unit spaces is a local homeomorphism. By an \emph{isomorphism} of topological (or étale) groupoids we mean an algebraic isomorphism which is also a homeomorphism. So a bijective étale homomorphism is an isomorphism of étale groupoids. Note that if $\Phi \colon \mathcal{G} \to \mathcal{H}$ is an étale homomorphism, then the image $\Phi(\mathcal{G})$ is an open étale subgroupoid of $\mathcal{H}$.
\section{The topological full group}\label{sec:tfg}
In this section we will expand Matui's definition of the topological full group of an ample groupoid from the compact to the locally compact case, and establish some elementary properties.
To each bisection $U\subseteq \mathcal{G}$ in an étale groupoid we associate a homeomorphism \[\pi_U \colon s(U)\to r(U)\] given by $r_{\vert U} \circ (s_{\vert U})^{-1}$. This means that for each $g \in U,$ $\pi_U$ maps $s(g)$ to $r(g)$. Whenever $U$ is a full bisection, $\pi_U$ is a homeomorphism of $\mathcal{G}^{(0)}$. We now show that the (partial) homeomorphism $\pi_U$ determines the bisection $U$, when the groupoid is effective and Hausdorff.
\begin{lemma}\label{lemma:homeoBis}
Let $\mathcal{G}$ be an effective ample Hausdorff groupoid and let $U,V \subseteq \mathcal{G}$ be bisections with $s(U) = s(V)$ and $r(U) = r(V)$. If $\pi_U = \pi_V$, then $U = V$.
\end{lemma}
\begin{proof}
That $\pi_U = \pi_V$ means that for each $x \in s(U)$, the unique elements $g \in U, h \in V$ with $s(g) = x = s(h)$ also satisfy $r(g) = r(h)$. This implies that $V^{-1}U \subseteq \mathcal{G}'$. As $\mathcal{G}$ is Hausdorff, $\mathcal{G}^{(0)}$ is closed, and therefore $V^{-1}U \cap \left( \mathcal{G} \setminus \mathcal{G}^{(0)}\right)$ is an open subset of $\mathcal{G}' \setminus \mathcal{G}^{(0)}$. But since $\mathcal{G}$ is effective this set must be empty. This entails that $V^{-1}U \subseteq \mathcal{G}^{(0)}$, and hence $U = V$.
\end{proof}
\begin{definition}\label{def:tfg}
Let $\mathcal{G}$ be an effective ample groupoid. The \emph{topological full group} of $\mathcal{G}$, denoted $\llbracket \mathcal{G} \rrbracket$, is the subgroup of $\operatorname{Homeo}\left(\mathcal{G}^{(0)}\right)$ consisting of all homeomorphisms of the form $\pi_U$, where $U$ is a full bisection in $\mathcal{G}$ such that $\supp(\pi_U)$ is compact. We will denote by $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$ its commutator subgroup.
\end{definition}
In the topological full group, composition and inversion of the homeomorphisms correspond to multiplication and inversion of the bisections, viz.:
\begin{itemize}
\item $\pi_{\mathcal{G}^{(0)}} = \id_{\mathcal{G}^{(0)}} = 1$
\item $\pi_U \circ \pi_V = \pi_{UV}$
\item $\left(\pi_U\right)^{-1} = \pi_{U^{-1}}$
\end{itemize}
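For instance, the second identity can be verified as follows. Given $x \in s(UV)$, let $v \in V$ be the unique element with $s(v) = x$, and let $u \in U$ be the unique element with $s(u) = r(v)$. Then $uv$ is the unique element of $UV$ with source $x$, whence
\[\pi_{UV}(x) = r(uv) = r(u) = \pi_U(r(v)) = \pi_U(\pi_V(x)).\]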
\begin{remark}\label{rem:matui}
It is clear that when the unit space is compact, this definition coincides with Matui's \cite[Definition 2.3]{Mat1}---which again generalizes the definitions given in \cite{GPS} and \cite{Mats}, for Cantor dynamical systems and one-sided shifts of finite type, respectively, to étale groupoids. Moreover, in \cite{Mat4} Matui defined six different full groups associated with a minimal homeomorphism $\phi$ of a locally compact Cantor space. The smallest one of these, denoted $\tau[\phi]_c$ in \cite{Mat4}, equals the topological full group (as in Definition~\ref{def:tfg}) of the associated transformation groupoid.
\end{remark}
\begin{remark}
After the completion of this work, we were made aware of Matte Bon's preprint~\cite{MB} where he defines the topological full group of an arbitrary étale groupoid $\mathcal{G}$ as the group of all full bisections $U \subseteq \mathcal{G}$ such that $U \setminus \mathcal{G}^{(0)}$ is compact. For effective groupoids, this agrees with Definition~\ref{def:tfg}, modulo identifying a full bisection with its associated homeomorphism. For not necessarily effective groupoids it is arguably better to define the topological full group in terms of the bisections themselves, for then one does not ``lose'' the information contained in the (non-trivial) isotropy (but also to separate the group from its canonical---no longer faithful---action on the unit space). This is done in e.g.\ \cite{Nek} and \cite{BrixS}. However, the approach taken in this paper---in particular in Section~\ref{sec:spatrel}---is based on working with subgroups of the homeomorphism group of a space (i.e.\ faithful group actions), which is why we have defined $\llbracket \mathcal{G} \rrbracket$ as we have.
\end{remark}
\begin{remark}
We emphasize that the topological full group $\llbracket \mathcal{G} \rrbracket$ is viewed as a \emph{discrete} group. The term \emph{topological} is historical, and refers to the fact that the homeomorphisms in the topological full group preserve orbits in a ``continuous way'', as opposed to the full groups, which appeared first---in the measurable setting---c.f.\ \cite[page 2]{GPS}.
\end{remark}
For descriptions of the topological full group in certain classes of examples, see Proposition~\ref{prop:bis}, Remark~\ref{AFfullgroups} and Remark~\ref{LCCMfullgroups}. See also \cite{Mat2} for a survey of topological full groups of étale groupoids with compact unit space. By virtue of the groupoid being effective, the support of a homeomorphism in the topological full group is in fact open as well. Matui's proof of this fact for compact unit spaces carries over verbatim to our setting.
\begin{lemma}[{c.f. \cite[Lemma 2.2]{Mat}}]\label{lem_clopen}
Let $\mathcal{G}$ be an effective ample groupoid. Then $\supp(\pi_U) = s(U \setminus \mathcal{G}^{(0)})$ for each $\pi_U \in \llbracket \mathcal{G} \rrbracket$. In particular, $\supp(\pi_U)$ is a compact open subset of $\mathcal{G}^{(0)}$.
\end{lemma}
We now present a few basic results on the existence of elements in the topological full group. They will be used in later sections to construct elements in the topological full group having certain properties.
\begin{lemma}\label{lem:extendBisection}
Let $\mathcal{G}$ be an effective ample groupoid, and let $\pi_U \in \llbracket \mathcal{G} \rrbracket$. Then we have a decomposition
\[U = U^\perp \sqcup \left( \mathcal{G}^{(0)} \setminus \supp(\pi_U) \right),\]
where $U^\perp$ is a compact bisection with $s(U^\perp) = r(U^\perp) = \supp(\pi_U)$.
Conversely, any compact bisection $V \subseteq \mathcal{G}$ with $s(V) = r(V)$ defines an element $\pi_{\tilde{V}} \in \llbracket \mathcal{G} \rrbracket$ with $\supp(\pi_{\tilde{V}}) \subseteq s(V)$ by setting $\tilde{V} = V \sqcup \left( \mathcal{G}^{(0)} \setminus s(V) \right)$.
\end{lemma}
\begin{proof}
It is clear that $\supp(\pi_U)$ is invariant under $\pi_U$. Hence we may simply put $U^\perp = U \cap s^{-1}(\supp(\pi_U)) = \left(s_{\vert U}\right)^{-1}(\supp(\pi_U))$. The second statement is obvious.
\end{proof}
\begin{lemma}\label{lemma:bisectionInvolution}
Let $\mathcal{G}$ be an effective ample groupoid. Any compact bisection $V \subseteq \mathcal{G}$ with $s(V) \cap r(V) = \emptyset$ defines an involutive element $\pi_{\hat{V}} \in \llbracket \mathcal{G} \rrbracket$ with $\supp(\pi_{\hat{V}}) \subseteq s(V) \cup r(V)$ by setting $\hat{V} = V \sqcup V^{-1} \sqcup \left( \mathcal{G}^{(0)} \setminus \left( s(V) \cup r(V) \right) \right)$.
\end{lemma}
\begin{proof}
Immediate.
\end{proof}
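For completeness, let us spell out the computation behind Lemma~\ref{lemma:bisectionInvolution}. Since $s(V) \cap r(V) = \emptyset$, no element of $V$ is composable with another element of $V$, nor with any unit in $\mathcal{G}^{(0)} \setminus \left( s(V) \cup r(V) \right)$, and as $V$ is a bisection the only nonempty products among the pieces of $\hat{V}$ are $V^{-1}V = s(V)$, $VV^{-1} = r(V)$ and the products of units. Hence
\[\hat{V}\hat{V} = s(V) \sqcup r(V) \sqcup \left( \mathcal{G}^{(0)} \setminus \left( s(V) \cup r(V) \right) \right) = \mathcal{G}^{(0)},\]
so $\pi_{\hat{V}}$ is an involution with support contained in $s(V) \cup r(V)$.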
\begin{lemma}\label{lemma:bisectionExistence}
Let $\mathcal{G}$ be an effective ample groupoid. If $g \in \mathcal{G} \setminus \mathcal{G}'$, that is $s(g) \neq r(g)$, then there is a (nontrivial) bisection $U \subseteq \mathcal{G}$ containing $g$ with $\pi_U \in \llbracket \mathcal{G} \rrbracket$.
Furthermore, for any open set $A \subseteq \mathcal{G}^{(0)}$ containing both $s(g)$ and $r(g)$, $U$ can be chosen so that $\supp(\pi_U) \subseteq A$.
\end{lemma}
\begin{proof}
As $\mathcal{G}$ is ample there is a compact bisection $W$ containing $g$. Let $B_1$, $B_2$ be disjoint open neighbourhoods of $s(g)$, $r(g)$ respectively in $\mathcal{G}^{(0)}$. By intersecting we may take ${B_1 \subseteq s(W) \cap A}$ and ${B_2 \subseteq r(W) \cap A}$. By continuity of $s$ and $r$ there are compact open sets $W_1, W_2 \subseteq W$, both containing $g$, such that $s(W_1) \subseteq B_1$ and $r(W_2) \subseteq B_2$. Then $V = W_1 \cap W_2$ is a compact bisection containing $g$ with $s(V) \cap r(V) = \emptyset$ and $s(V) \cup r(V) \subseteq A$. Hence $U = \hat{V}$ (as in Lemma~\ref{lemma:bisectionInvolution}) is the desired full bisection.
\end{proof}
\begin{remark}
Note that the element $\pi_U \in \llbracket \mathcal{G} \rrbracket$ from Lemma~\ref{lemma:bisectionExistence} is also an involution, by virtue of Lemma~\ref{lemma:bisectionInvolution}.
\end{remark}
\begin{remark}
In the non-compact case we may view the topological full group as a direct limit of topological full groups of groupoids over \emph{compact spaces} as follows. Consider $\ck\left(\mathcal{G}^{(0)}\right)$ as a directed set (ordered by inclusion). Then given $A,B\in \ck\left(\mathcal{G}^{(0)}\right)$ with $A\subseteq B$ we define the group homomorphism $\iota_{A,B} \colon \llbracket \mathcal{G}_{|A} \rrbracket \to \llbracket \mathcal{G}_{|B} \rrbracket$ by $\pi_U\mapsto \pi_{\tilde{U}}$ where $\tilde{U} = U \sqcup (B\setminus A)$. Then we have that \[\llbracket \mathcal{G} \rrbracket \cong \lim\limits_{\rightarrow}(\llbracket \mathcal{G}_{|A} \rrbracket, \iota).\]
\end{remark}
\section{The groupoid of germs}\label{sec:germ}
We are now going to adapt the notions of \cite[Section 3]{Ren} to the (special) case of groups, rather than inverse semigroups, to fit the framework of the topological full group and its subgroups, rather than the \emph{pseudogroup} studied in \cite{Ren}. Our goal is to reconstruct the groupoid as a so-called \emph{groupoid of germs}, which is a quotient of a transformation groupoid.
Recall that two homeomorphisms $\gamma, \tau \colon X \to X$ have the same \emph{germ} at a point $x \in X$ if there is a neighbourhood $U$ of $x$ such that ${\gamma}_{|U} = {\tau}_{|U}$.
\begin{definition}
Let $X$ be a locally compact Hausdorff space and let $\Gamma \leq \operatorname{Homeo}(X)$. The \emph{groupoid of germs} of $(\Gamma,X)$ is
\[\operatorname{Germ}(\Gamma,X) \coloneqq \left(\Gamma \ltimes X\right) / \sim \]
where $(\gamma, x) \sim (\tau, y)$ iff $x=y$ and $\gamma, \tau$ have the same germ at $x$.
\end{definition}
Denote the equivalence class of $(\gamma, x) \in \Gamma \ltimes X$ under $\sim$ by $[\gamma, x]$. It is straightforward to check that the groupoid operations of the transformation groupoid are well-defined on representatives of the equivalence classes in the groupoid of germs (and that they are continuous). The bisections
\[Z[\gamma, A] \coloneqq \{[\gamma,x] \ \vert \ x\in A\},\] for $\gamma \in \Gamma$ and $A \subseteq X$ open, form a basis for the quotient topology. The unit space of $\operatorname{Germ}(\Gamma,X)$ is also identified with $X$ in the obvious way. Hence the groupoid $\operatorname{Germ}(\Gamma,X)$ is étale (and ample when $X$ is Boolean), and it is furthermore always effective (as any group element acting identically on an open set is identified with the identity at each point of this open set). Hausdorffness of the groupoid however, is no longer guaranteed, but it can be characterized as follows.
\begin{lemma}\label{lem_haus}
Let $X$ be a locally compact Hausdorff space and let $\Gamma \leq \operatorname{Homeo}(X)$. Then the groupoid of germs $\operatorname{Germ}(\Gamma,X)$ is Hausdorff if and only if $\supp(\gamma)$ is clopen in $X$ for every $\gamma \in \Gamma$.
\end{lemma}
\begin{proof}
Since $X$ is Hausdorff, any two groupoid elements $[\gamma, x], [\tau, y] \in \operatorname{Germ}(\Gamma,X)$ with distinct sources (i.e.\ $x \neq y$) can always be separated by open sets. We only have to worry about separating elements in the same isotropy group, and it suffices to be able to separate the unit from any other element. Also note that $[\gamma, x] \neq [1, x]$ if and only if $x \in \supp(\gamma)$.
Now assume that all the supports are clopen. If $[\gamma, x] \neq [1, x]$, then by the observation above, $Z[\gamma, \supp(\gamma)]$ and $Z[1, \supp(\gamma)]$ are disjoint open neighbourhoods of these elements. To separate $[\gamma, x]$ from $[\tau, x]$ (when these are distinct), note that $[\gamma, x][\tau, x]^{-1} = [\gamma \tau^{-1}, \tau(x)] \neq [1, \tau(x)]$. Hence $\tau(x) \in \supp(\gamma \tau^{-1})$, so by the argument above $Z[\gamma \tau^{-1}, A]$ and $Z[1, A]$, with $A = \supp(\gamma \tau^{-1})$, separate $[\gamma \tau^{-1}, \tau(x)]$ from $[1, \tau(x)]$. It follows that $Z[\gamma, \tau^{-1}(A)]$ and $Z[\tau, \tau^{-1}(A)]$ separate $[\gamma, x]$ and $[\tau, x]$.
Conversely, suppose there is a $\gamma\in\Gamma$ such that $\supp(\gamma)$ is not clopen, that is (since supports are always closed), not open. Let $x$ be any point on the boundary of $\supp(\gamma)$. Then $\gamma(x) = x$, but $[\gamma, x] \neq [1, x]$, and these two groupoid elements cannot be separated by open sets. To see this take any two basic neighbourhoods $Z[\gamma, A], Z[1, B]$ where $A,B$ are open neighbourhoods of $x$ in $X$. They both contain the basic set $Z[1, C]$, where $C = (A \cap B) \setminus \supp(\gamma)$ is nonempty (as $x$ lies on the boundary of $\supp(\gamma)$) and $\gamma$ acts identically on $C$.
\end{proof}
In the sequel we shall restrict our attention to groups of homeomorphisms which have open, as well as compact, support.
\begin{definition}
Let $X$ be a locally compact Hausdorff space and let $\Gamma\leq \operatorname{Homeo}_c(X)$. We say that a homeomorphism $\varphi\in \operatorname{Homeo}_c(X)$ \emph{locally belongs to $\Gamma$} if for every $x\in X$, there exists an open neighbourhood $U$ of $x$ and $\gamma\in \Gamma$ such that $\varphi_{|U}=\gamma_{|U}$. The group $\Gamma$ is called \emph{locally closed} if whenever $\varphi\in \operatorname{Homeo}_c(X)$ locally belongs to $\Gamma$, then $\varphi\in \Gamma$.
\end{definition}
Let us verify that topological full groups are always locally closed.
\begin{proposition}\label{prop_TFGample}
Let $\mathcal{G}$ be an effective ample groupoid. Then $\llbracket \mathcal{G} \rrbracket \leq \operatorname{Homeo}_c\left(\mathcal{G}^{(0)}\right)$ is locally closed.
\end{proposition}
\begin{proof}
Let $\varphi \in \operatorname{Homeo}_c(\mathcal{G}^{(0)})$ locally belong to $\llbracket \mathcal{G} \rrbracket$. Then, since $\supp(\varphi)$ is compact open, we can find finitely many open sets $A_i \subseteq \supp(\varphi)$, covering $\supp(\varphi)$, such that $\varphi_{\vert A_i} = (\pi_{U_i})_{\vert A_i}$, where $\pi_{U_i} \in \llbracket \mathcal{G} \rrbracket$. Since $\mathcal{G}^{(0)}$ is Boolean we may assume that the $A_i$'s are clopen and disjoint. We then have a clopen partition $\supp(\varphi) = A_1 \sqcup A_2 \sqcup \cdots \sqcup A_n$, and $\varphi$ restricts to a self-homeomorphism of $\supp(\varphi)$ which on each region $A_i$ equals $\pi_{U_i}$. It follows that the set $V = \bigcup_{i=1}^n V_i$, where $V_i = \left(s_{\vert U_i} \right)^{-1}(A_i)$, is a compact bisection in $\mathcal{G}$ with $s(V) = \supp(\varphi) = r(V)$. And then $\varphi = \pi_{\tilde{V}} \in \llbracket \mathcal{G} \rrbracket$, where $\tilde{V}$ is as in Lemma~\ref{lem:extendBisection}.
\end{proof}
Given a group $\Gamma\leq \operatorname{Homeo}_c(X)$ we denote by $\langle \Gamma \rangle$ the set of $\varphi \in \operatorname{Homeo}_c(X)$ which locally belong to $\Gamma$. Clearly $\langle \Gamma \rangle$ is a locally closed group in $\operatorname{Homeo}_c(X)$ and $\Gamma \leq \langle \Gamma \rangle$. As the groupoid of germs is defined in the same local terms as the local closure we have a canonical isomorphism $\operatorname{Germ}(\langle \Gamma \rangle, X) \cong \operatorname{Germ}(\Gamma,X)$. From this we obtain the analog of \cite[Proposition 3.2]{Ren}, namely that the topological full group of a groupoid of germs equals the local closure of the group we started with.
\begin{proposition}\label{prop_ample}
Let $X$ be a Boolean space and let $\Gamma \leq \operatorname{Homeo}_c(X)$. Then we have that $\llbracket \operatorname{Germ}(\Gamma,X) \rrbracket \cong \langle \Gamma \rangle$.
\end{proposition}
\begin{proof}
Since $\operatorname{Germ}(\Gamma,X) \cong \operatorname{Germ}(\langle \Gamma \rangle, X)$, it suffices to show that $\llbracket \operatorname{Germ}(\langle \Gamma \rangle, X) \rrbracket = \langle \Gamma \rangle$. For each $\varphi \in \langle \Gamma \rangle$ the full bisection $Z[\varphi, X] = U_\varphi$ in $\operatorname{Germ}(\langle \Gamma \rangle, X)$ satisfies $\pi_{U_\varphi} = \varphi$. And since $\varphi$ has compact support it belongs to $\llbracket \operatorname{Germ}(\langle \Gamma \rangle, X) \rrbracket$.
For the reverse inclusion, take any $\pi_U \in \llbracket \operatorname{Germ}(\langle \Gamma \rangle, X) \rrbracket$. Recall that the support of $\pi_U$ is open, as well as compact, since any groupoid of germs is effective (c.f.\ Lemma~\ref{lem_clopen}). To see that $\pi_U$ locally belongs to $\Gamma$ take any $x \in X$, and let $[\varphi, x]$ be the unique element in $U$ whose source is $x$. Since $U$ is open there is a basic set $Z[\varphi, A] \subseteq U$, where $A$ is an open neighbourhood of $x$ in $X$. As $\varphi \in \langle \Gamma \rangle$ there is an open neighbourhood $B$ of $x$ and an element $\gamma \in \Gamma$ with $\varphi_{\vert B} = \gamma_{\vert B}$. By intersecting with $A$ we may assume that $B \subseteq A$. Now observe that $(\pi_U)_{\vert B} = \varphi_{\vert B} = \gamma_{\vert B}$, and we are done.
\end{proof}
As topological full groups are locally closed (Proposition~\ref{prop_TFGample}) we obtain the following immediate corollary.
\begin{corollary}\label{cor:tfgGerm}
Let $\mathcal{G}$ be an effective ample groupoid. Then $\llbracket \operatorname{Germ}(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)}) \rrbracket \cong \llbracket \mathcal{G} \rrbracket$.
\end{corollary}
The preceding results show that a locally closed group $\Gamma \leq \operatorname{Homeo}_c(X)$ can be reconstructed from its associated groupoid of germs $\operatorname{Germ}(\Gamma,X)$, namely as the topological full group of this groupoid. We now turn to the question of how an ample groupoid $\mathcal{G}$ relates to the groupoid of germs, $\operatorname{Germ}(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$, determined by its topological full group. We will see that these are isomorphic under a mild condition on the groupoid---namely that the groupoid can be covered by bisections as in the following definition.
\begin{definition}
Let $\mathcal{G}$ be an effective ample groupoid. We say that a subgroup $\Gamma \leq \llbracket \mathcal{G} \rrbracket$ \emph{covers $\mathcal{G}$} if for each $g \in \mathcal{G}$ there exists $\pi_U \in \Gamma$ such that $g \in U$.
\end{definition}
Note that if $\Gamma \leq \llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$, then so does any group $\Gamma'$ in between, i.e.\ $\Gamma \leq \Gamma' \leq \llbracket \mathcal{G} \rrbracket$, and in particular $\llbracket \mathcal{G} \rrbracket$ itself covers $\mathcal{G}$. Sufficient conditions on the orbits of $\mathcal{G}$ for $\llbracket \mathcal{G} \rrbracket$, or the commutator $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$, to cover $\mathcal{G}$ are given by the following result (which is the analog of~\cite[Lemma~3.7]{Mat}).
\begin{lemma}\label{ex_cover}
Let $\mathcal{G}$ be an effective ample groupoid.
\begin{enumerate}
\item If $\vert \operatorname{Orb}_\mathcal{G}(x) \vert \geq 2$ for every $x\in \mathcal{G}^{(0)}$, then $\llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$.
\item If $\vert \operatorname{Orb}_\mathcal{G}(x) \vert \geq 3$ for every $x\in \mathcal{G}^{(0)}$, then $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$ covers $\mathcal{G}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) First consider $g \in \mathcal{G} \setminus \mathcal{G}'$. Then Lemma~\ref{lemma:bisectionExistence} immediately gives a $\pi_U \in \llbracket \mathcal{G} \rrbracket$ with $g \in U$. Next, suppose $s(g) = r(g) = x$. By assumption there is a point $y$ different from $x$ in $\operatorname{Orb}_\mathcal{G}(x)$. This means that there is some $h \in \mathcal{G}$ with $s(h) = x \neq y = r(h)$. Then $h^{-1}$ is composable with $g$ and $gh^{-1} \in \mathcal{G} \setminus \mathcal{G}'$. Applying Lemma~\ref{lemma:bisectionExistence} to both $gh^{-1}$ and $h$ we get $\pi_{U_1}, \pi_{U_2} \in \llbracket \mathcal{G} \rrbracket$ with $gh^{-1} \in U_1$ and $h \in U_2$. Since $\pi_{U_1U_2} \in \llbracket \mathcal{G} \rrbracket$ and $g \in U_1U_2$ we see that $\llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$.
(2) As in the previous part we first consider $g \in \mathcal{G} \setminus \mathcal{G}'$. By assumption there is a third (distinct) point $y$ in the same orbit as $s(g)$ and $r(g)$. Therefore there is an element $h \in \mathcal{G}$ with $s(h) = y$ and $r(h) = s(g)$. Lemma~\ref{lemma:bisectionExistence} gives involutions $\pi_U, \pi_V \in \llbracket \mathcal{G} \rrbracket$ such that $g \in U$ and $h \in V$. We may also arrange so that $y \notin \supp(\pi_U)$ by the second part of Lemma~\ref{lemma:bisectionExistence}. Then
\[ [\pi_U, \pi_V] = \pi_U \pi_V (\pi_U)^{-1} (\pi_V)^{-1} = \pi_{(UV)^2} \in \mathsf{D}(\llbracket \mathcal{G} \rrbracket), \]
and we claim that $g$ belongs to the associated full bisection $(UV)^2$. To see that this is the case, note that $y \in U$ since $y \notin \supp(\pi_U)$. Thus we have $g = g\cdot h \cdot y \cdot h^{-1} \in UVUV$ as $s(h) = y$.
Finally, for the case $s(g) = r(g)$ we proceed similarly to part (1). We take $h \in \mathcal{G}$ with $s(h) = s(g)$ and $r(h) \neq s(g)$ and apply the preceding paragraph to $gh^{-1}$ and $h$, which both belong to $\mathcal{G} \setminus \mathcal{G}'$. Multiplying the bisections thus obtained gives the desired bisection containing $g$. This completes the proof.
\end{proof}
The conditions in Lemma~\ref{ex_cover} are not necessary (c.f.\ Example~\ref{ex:1orbitCover}), but they are typically easy to check in specific examples. Note that for minimal groupoids all orbits are in particular infinite, so the covering as above is automatic. We are now ready to give the main result on how a groupoid $\mathcal{G}$ can be reconstructed from the germs of $\llbracket \mathcal{G} \rrbracket$. It is the analog of~\cite[Proposition~3.2]{Ren}.
\begin{proposition}\label{prop_germs}
Let $\mathcal{G}$ be an effective ample Hausdorff groupoid and let $\Gamma \leq \llbracket \mathcal{G} \rrbracket$. Then there is an injective étale homomorphism
\[\iota \colon \operatorname{Germ}\left(\Gamma, \mathcal{G}^{(0)} \right) \hookrightarrow \mathcal{G},\] given by $\iota([\pi_U,x]) = (s_{\vert U})^{-1}(x)$ for ${[\pi_U,x] \in \operatorname{Germ}\left(\Gamma, \mathcal{G}^{(0)} \right)}$.
Furthermore, $\iota$ is surjective, and hence an isomorphism, if and only if $\Gamma$ covers $\mathcal{G}$.
\end{proposition}
\begin{proof}
We first have to verify that $\iota$ is well-defined. Let $x \in \mathcal{G}^{(0)}$ and suppose that $\pi_U, \pi_V \in \Gamma$ have the same germ over $x$. Let $A$ be an open neighbourhood of $x$ on which $\pi_U$ and $\pi_V$ agree. Then
\[\pi_{UA} = \left(\pi_U\right)_{|A} = \left(\pi_V\right)_{|A} = \pi_{VA},\] so by Lemma~\ref{lemma:homeoBis} we have $UA = VA$. This means that the unique groupoid elements in $U$ and $V$ that have source equal to $x$ coincide, so $\iota$ is well-defined.
To see that $\iota$ is a groupoid homomorphism recall that $([\pi_V,y],[\pi_U,x])$ is a composable pair iff $\pi_U(x) = y$. Suppose this is the case and let $g \in U$ be the element with $s(g) = x$, and let $h \in V$ be the element with $s(h) = y$. As $r(g) = \pi_U(x) = y = s(h)$ we have $(h,g) \in \mathcal{G}^{(2)}$ and
\[\iota([\pi_V,y] \cdot [\pi_U,x]) = \iota([\pi_{VU},x]) = hg,\] since $hg \in VU$ and $s(hg) = x$.
Now note that $\iota(x) = x$ for $x \in \mathcal{G}^{(0)}$ (under the identification of the unit space of the groupoid of germs). So $\iota^{(0)} = \id_{\mathcal{G}^{(0)}}$ is a (local) homeomorphism, hence $\iota$ is an étale homomorphism.
To see that $\iota$ is injective note first that $\iota([\pi_U,x]) \neq \iota([\pi_V,y])$ if $x \neq y$, since $\iota^{(0)}$ is the identity. Suppose now that $\iota([\pi_U,x]) = \iota([\pi_V,x])$ for some $\pi_U, \pi_V \in \Gamma$. This means that there is a groupoid element $g \in U \cap V$ with $s(g) = x$. Thus $B = s(U \cap V)$ is an open neighbourhood of $x$ in $\mathcal{G}^{(0)}$ and clearly $\left(\pi_U\right)_{|B} = \left(\pi_V\right)_{|B}$, which means that $[\pi_U,x] = [\pi_V,x]$.
Finally, $\iota$ is surjective precisely when every $g \in \mathcal{G}$ lies in some full bisection $U$ with $\pi_U \in \Gamma$, which is exactly the statement that $\Gamma$ covers $\mathcal{G}$.
\end{proof}
\begin{remark}
When the map $\iota$ in the previous proposition is an isomorphism the inverse is given by $\iota^{-1}(g) = [\pi_U,s(g)]$, where $U$ is any full bisection such that $\pi_U \in \Gamma$ and $g \in U$.
\end{remark}
\begin{remark}
Let $\mathcal{G}$ be an effective ample Hausdorff groupoid. Combining Propositions~\ref{prop_germs} and~\ref{prop_ample} we see that for each locally closed subgroup $\Gamma \leq \llbracket \mathcal{G} \rrbracket$, there is an open étale subgroupoid $\mathcal{H}_\Gamma \subseteq \mathcal{G}$ such that $\llbracket \mathcal{H}_\Gamma \rrbracket \cong \Gamma$, namely $\mathcal{H}_\Gamma = \operatorname{Germ} \left(\Gamma, \mathcal{G}^{(0)}\right)$.
\end{remark}
Since we are really interested in knowing when $\mathcal{G}$ is isomorphic to $\operatorname{Germ}\left(\Gamma, \mathcal{G}^{(0)} \right)$ (particularly for the case $\Gamma = \llbracket \mathcal{G} \rrbracket$) it is natural to ask whether they could be isomorphic even if the canonical map $\iota$ fails to be an isomorphism. We will see shortly that this is not possible. For $\Gamma \leq \operatorname{Homeo}_c(X)$ with $X$ Boolean we have seen that $\Gamma \leq \langle \Gamma \rangle \cong \llbracket \operatorname{Germ}(\Gamma,X) \rrbracket$. Identifying the latter two we see that $\Gamma$ covers $\operatorname{Germ}(\Gamma,X)$ since $[\gamma, x] \in Z[\gamma, X]$ and $\pi_{Z[\gamma, X]} = \gamma \in \Gamma$ for each $[\gamma, x] \in \operatorname{Germ}(\Gamma,X)$.
\begin{corollary}\label{cor:necessaryCover}
Let $\mathcal{G}$ be an effective ample Hausdorff groupoid. Then $\mathcal{G} \cong \operatorname{Germ}\left(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)} \right)$ as étale groupoids if and only if $\llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$.
\end{corollary}
\begin{proof}
Suppose $\Phi \colon \mathcal{G} \to \operatorname{Germ}\left(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)} \right)$ is an isomorphism. Then $\Phi$ induces an isomorphism between the topological full groups by $\pi_U \mapsto \pi_{\Phi(U)}$ for $\pi_U \in \llbracket \mathcal{G} \rrbracket$. Let $g \in \mathcal{G}$ be given. As $\llbracket \mathcal{G} \rrbracket$ covers $\operatorname{Germ}\left(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)} \right)$ there is a full bisection $V$ containing $\Phi(g)$ such that $\pi_V \in \llbracket \operatorname{Germ}(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)}) \rrbracket = \llbracket \mathcal{G} \rrbracket$. Then $\Phi^{-1}(V)$ is a full bisection in $\mathcal{G}$ containing $g$ with $\pi_{\Phi^{-1}(V)} \in \llbracket \mathcal{G} \rrbracket$. Hence $\llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$.
\end{proof}
\section{The category of spatial groups}\label{sec:SpatG}
By introducing suitable categories we will see that the assignment of the groupoid of germs is indeed functorial. We first define the category of \emph{spatial groups}, denoted $\catname{SpatG}$, to consist of:
\begin{description}
\item[Objects] Pairs $(\Gamma,X)$ where $X$ is a Boolean space and $\Gamma \leq \operatorname{Homeo}_c(X)$.
\item[Morphisms] A morphism from $(\Gamma_1,X_1)$ to $(\Gamma_2,X_2)$ is a local homeomorphism $\phi \colon X_1\to X_2$ satisfying $\phi \circ \Gamma_1\subseteq \Gamma_2\circ \phi$.
\end{description}
We shall sometimes refer to a pair $(\Gamma, X)$ as a \emph{space-group pair}. Observe that an isomorphism in the category $\catname{SpatG}$ is a homeomorphism $\phi$ such that $\phi \circ \Gamma_1\circ \phi^{-1}=\Gamma_2$. We call such an isomorphism a \emph{spatial isomorphism} (as it is a group isomorphism implemented by a homeomorphism).
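For completeness, here is a short verification (our addition, spelling out a routine check) that an invertible morphism satisfies the stated equality:

```latex
% If \phi is invertible in \catname{SpatG}, then both \phi and \phi^{-1} are
% morphisms, i.e. \phi \circ \Gamma_1 \subseteq \Gamma_2 \circ \phi and
% \phi^{-1} \circ \Gamma_2 \subseteq \Gamma_1 \circ \phi^{-1}. The first
% inclusion gives \phi \circ \Gamma_1 \circ \phi^{-1} \subseteq \Gamma_2,
% while the second gives the reverse inclusion:
\[
\Gamma_2 = \phi \circ \left(\phi^{-1} \circ \Gamma_2 \circ \phi\right) \circ \phi^{-1}
\subseteq \phi \circ \Gamma_1 \circ \phi^{-1},
\]
% hence \phi \circ \Gamma_1 \circ \phi^{-1} = \Gamma_2.
```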
\begin{remark}
We are, of course, ``up to notation'', studying faithful group actions, but we prefer to concretely realize the acting groups as subgroups of $\operatorname{Homeo}(X)$. Also note that the term \emph{faithful} will be reserved for something else in Section~\ref{sec:spatrel} (c.f.\ Definition~\ref{def:faithful}).
\end{remark}
Secondly, we define the category $\catname{Gpoid}$ to consist of:
\begin{description}
\item[Objects] Ample effective Hausdorff groupoids.
\item[Morphisms] Étale homomorphisms.
\end{description}
\begin{remark}
The choice of morphisms in $\catname{SpatG}$ is made so that they induce étale homomorphisms between the groupoids of germs. As for the morphisms in $\catname{Gpoid}$, there are several reasons for stipulating that they should be étale homomorphisms (rather than merely continuous groupoid homomorphisms). First of all, since all the structure maps in an étale groupoid are local homeomorphisms, it is reasonable to prescribe that maps between étale groupoids should be as well. Moreover, the image under an étale homomorphism is always an open étale subgroupoid of the codomain. An important consequence of this is that an injective étale homomorphism induces (diagonal preserving) injective $*$-homomorphisms between both the full and reduced groupoid $C^*$-algebras, respectively (and also between the Steinberg algebras), c.f.\ \cite[page 113]{BNRSW} and \cite[Proposition 1.9]{Phil}, whereas the groupoid $C^*$-algebra construction is not functorial in general.
\end{remark}
It is straightforward to check that $\catname{SpatG}$ and $\catname{Gpoid}$ indeed are categories. We will now define a functor from $\catname{SpatG}$ to $\catname{Gpoid}$, which on objects is the groupoid of germs. Let $\phi$ be a spatial morphism between the space-group pairs $(\Gamma_1,X_1)$ and $(\Gamma_2,X_2)$. Given $[\gamma,x] \in \operatorname{Germ}(\Gamma_1,X_1)$ there is a $\gamma' \in \Gamma_2$ with $\phi \circ \gamma = \gamma' \circ \phi$. We then propose to define a map $\operatorname{Germ}(\phi)$ from $\operatorname{Germ}(\Gamma_1,X_1)$ to $\operatorname{Germ}(\Gamma_2,X_2)$ by setting $\operatorname{Germ}(\phi)([\gamma,x]) = [\gamma',\phi(x)]$.
\begin{proposition}\label{prop:functorial}
The mapping $\operatorname{Germ}(\phi)$ described above is a well-defined étale homomorphism, and $\operatorname{Germ}(-) \colon \catname{SpatG} \to \catname{Gpoid}$ is a (covariant) functor.
\end{proposition}
\begin{proof}
Let $\phi \colon (\Gamma_1,X_1) \to (\Gamma_2,X_2)$ be a spatial morphism. We first verify that $\operatorname{Germ}(\phi)$ is well-defined. Given $[\gamma,x] \in \operatorname{Germ}(\Gamma_1,X_1)$, suppose $\gamma', \gamma'' \in \Gamma_2$ satisfy $\phi \circ \gamma = \gamma' \circ \phi = \gamma'' \circ \phi$. Then $\gamma'$ and $\gamma''$ agree on $\phi(X_1)$, which is an open neighbourhood of $\phi(x)$, hence $[\gamma',\phi(x)] = [\gamma'',\phi(x)]$. So the choice of $\gamma'$ does not matter. As for the choice of $\gamma$, suppose $\tau \in \Gamma_1$ has the same germ over $x$ as $\gamma$, i.e.\ $\gamma_{|A} = \tau_{|A}$ for some open neighbourhood $A$ of $x$ in $X_1$. Let $\tau' \in \Gamma_2$ satisfy $\phi \circ \tau = \tau' \circ \phi$. Then \[\gamma' \circ \phi_{|A} = \phi \circ \gamma_{|A} = \phi \circ \tau_{|A} = \tau' \circ \phi_{|A}.\]
This means that $\gamma'_{|\phi(A)} = \tau'_{|\phi(A)}$, hence $[\gamma',\phi(x)] = [\tau',\phi(x)]$. This shows that $\operatorname{Germ}(\phi)$ is well-defined.
Observe that the restriction to the unit spaces is just $\operatorname{Germ}(\phi)^{(0)} = \phi \colon X_1 \to X_2$. From this we obtain
\[s\left(\operatorname{Germ}(\phi)([\gamma,x])\right) = \phi(x) = \operatorname{Germ}(\phi)\left(s([\gamma,x])\right), \] and
\[r\left(\operatorname{Germ}(\phi)([\gamma,x])\right) = \gamma' \circ \phi(x) = \phi \circ \gamma(x) = \operatorname{Germ}(\phi)\left(r([\gamma,x])\right). \]
This means that $\operatorname{Germ}(\phi)$ takes composable pairs to composable pairs. As for preserving the product itself, we check that
\begin{align*}
\operatorname{Germ}(\phi)([\tau,\gamma(x)]) \cdot \operatorname{Germ}(\phi)([\gamma,x]) &= [\tau',\phi \gamma(x)] \cdot [\gamma',\phi(x)] = [\tau' \gamma', \phi(x)] \\
&= \operatorname{Germ}(\phi)([\tau \gamma,x]), \text{ since } \phi \circ \tau\gamma = \tau'\gamma' \circ \phi.
\end{align*}
As $\operatorname{Germ}(\phi)^{(0)} = \phi$ is a local homeomorphism, we have shown that $\operatorname{Germ}(\phi)$ is an étale homomorphism. Computations similar to those above show that $\operatorname{Germ}(-)$ sends identity morphisms to identity morphisms and preserves composition of morphisms.
\end{proof}
We record some consequences of this functoriality.
\begin{corollary}\label{corol_inj_groupoid}
Let $\phi \colon (\Gamma_1,X_1)\to(\Gamma_2,X_2)$ be a spatial morphism in $\catname{SpatG}$.
\begin{enumerate}
\item If $\phi$ is a spatial isomorphism, then $\operatorname{Germ}(\phi) \colon \operatorname{Germ}(\Gamma_1,X_1)\to \operatorname{Germ}(\Gamma_2,X_2)$ is an isomorphism of étale groupoids.
\item $\operatorname{Germ}(\phi)^{(0)} = \phi$, in particular $\operatorname{Germ}(\phi)$ maps $X_1$ onto $X_2$ if and only if $\phi$ is surjective.
\item If $\phi \colon X_1\to X_2$ is injective, then $\operatorname{Germ}(\phi) \colon \operatorname{Germ}(\Gamma_1,X_1)\to \operatorname{Germ}(\Gamma_2,X_2)$ is also injective.
\item If $\phi \colon X_1\to X_2$ is surjective and $\phi \circ \Gamma_1 = \Gamma_2 \circ \phi$, then $\operatorname{Germ}(\phi) \colon \operatorname{Germ}(\Gamma_1,X_1)\to \operatorname{Germ}(\Gamma_2,X_2)$ is also surjective.
\end{enumerate}
\end{corollary}
\begin{proof}
Statement (1) follows immediately from functoriality, and statement (2) was observed in the proof of Proposition~\ref{prop:functorial}.
(3) Assume that $\phi \colon X_1\to X_2$ is injective. Then clearly $\operatorname{Germ}(\phi)$ maps elements with distinct sources to distinct elements. So suppose \[[\gamma',\phi(x)] = \operatorname{Germ}(\phi)([\gamma,x]) = \operatorname{Germ}(\phi)([\tau,x]) = [\tau',\phi(x)]. \]
Then $\gamma'_{|A} = \tau'_{|A}$ for some open neighbourhood $A$ of $\phi(x)$ in $X_2$. As $\phi \circ \gamma = \gamma' \circ \phi$ and $\phi \circ \tau = \tau' \circ \phi$ we have that $\phi \circ \gamma$ and $\phi \circ \tau$ agree on $\phi^{-1}(A)$. The injectivity of $\phi$ now implies that $\gamma$ and $\tau$ agree on $\phi^{-1}(A)$, which is an open neighbourhood of $x$, hence $[\gamma,x] = [\tau,x]$ and $\operatorname{Germ}(\phi)$ is injective.
(4) Suppose $\phi \colon X_1\to X_2$ is surjective and that $\phi \circ \Gamma_1 = \Gamma_2 \circ \phi$. Given $[\tau, y] \in \operatorname{Germ}(\Gamma_2, X_2)$, pick any $x \in X_1$ with $\phi(x) = y$. By assumption there is some $\gamma \in \Gamma_1$ with $\phi \circ \gamma = \tau \circ \phi$, and then $\operatorname{Germ}(\phi)([\gamma, x]) = [\tau, y]$.
\end{proof}
\begin{remark}
It is natural to ask whether a spatial morphism $\phi \colon (\Gamma_1,X_1)\to(\Gamma_2,X_2)$ induces an (algebraic) group homomorphism from $\Gamma_1$ to $\Gamma_2$. This is not so clear in general, but at least if $\phi \colon X_1\to X_2$ is injective and $\Gamma_2$ is locally closed, then one can define an injective group homomorphism $f_\phi \colon \Gamma_1 \to \Gamma_2$ in the following way. First observe that given $\gamma \in \Gamma_1$, there is a $\gamma_2 \in \Gamma_2$ with $\phi \circ\gamma=\gamma_2\circ \phi$, and then $\gamma_2(\phi(X_1)) = \phi(X_1)$ and $\supp((\gamma_2)_{|\phi(X_1)})= \phi(\supp(\gamma))$. Given another $\gamma_3 \in \Gamma_2$ with $\phi \circ\gamma=\gamma_3\circ \phi$ we have \[(\gamma_2)_{|\phi(X_1)}=(\gamma_3)_{|\phi(X_1)}\in \operatorname{Homeo}_c(\phi(X_1)).\] So we can define $f_\phi(\gamma) = \gamma'$ to be the homeomorphism $\gamma'$ of $X_2$ given by \[(\gamma')_{|\phi(X_1)}= (\gamma_2)_{|\phi(X_1)} \quad \text{and} \quad (\gamma')_{| X_2 \setminus \phi(X_1)} = \id_{X_2 \setminus \phi(X_1)}. \]
The homeomorphism $\gamma'$ belongs to $\Gamma_2$ because $\Gamma_2$ is locally closed. It is straightforward to check that $f_\phi$ is an injective group homomorphism, and also that $\supp(f_\phi(\gamma))= \phi(\supp(\gamma))$ for every $\gamma \in\Gamma_1$. If $\phi$ is a spatial isomorphism, then $f_\phi$ is a group isomorphism and $f_\phi(\gamma) = \phi \circ \gamma \circ \phi^{-1}$ for each $\gamma \in \Gamma_1$.
\end{remark}
\begin{remark}
Viewing the functor $\operatorname{Germ}$ as a ``free'' functor turning a space-group pair into an effective ample Hausdorff groupoid (in the ``most efficient'' way), one could ask for a ``forgetful'' functor in the opposite direction. Proposition~\ref{prop_ample} suggests that this functor should be \[\llbracket - \rrbracket \colon \catname{Gpoid} \to \catname{SpatG} \quad \text{assigning} \quad \mathcal{G} \mapsto \left(\llbracket \mathcal{G} \rrbracket,\mathcal{G}^{(0)}\right). \]
The natural choice of mapping on the morphisms is for an étale homomorphism $\Phi \colon \mathcal{G} \to \mathcal{H}$ to let \[ \llbracket \Phi \rrbracket \coloneqq \Phi^{(0)} \colon \left(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)} \right) \to \left(\llbracket \mathcal{H} \rrbracket, \mathcal{H}^{(0)} \right), \]
i.e.\ restriction to the unit space. Unfortunately, this \emph{fails} to be a morphism in $\catname{SpatG}$ in general. For injective étale homomorphisms though, the restriction to the unit spaces does yield an injective spatial morphism.
\end{remark}
\section{Spatial realization theorems}\label{sec:spatrel}
\subsection{Faithful classes of spatial groups}
In this section we shall study reconstruction of topological spaces from subgroups of their homeomorphism group in the sense of the following definition.
\begin{definition}\label{def:faithful}
A class $K$ of space-group pairs is called \emph{faithful} if every group isomorphism $\Phi \colon \Gamma_1 \to \Gamma_2$, where $(\Gamma_1,X_1), (\Gamma_2,X_2) \in K$, is \emph{spatially implemented}, that is, there is a homeomorphism $\phi \colon X_1 \to X_2$ such that $\Phi(\gamma)=\phi \circ \gamma \circ \phi^{-1}$ for every $\gamma\in \Gamma_1$.
\end{definition}
We stress the fact that the isomorphisms $\Phi$ considered in the preceding definition are, a priori, \emph{abstract} group isomorphisms. They only ``see'' the algebraic structure of the $\Gamma_i$'s, not the canonical action on the underlying space. We may rephrase faithfulness to saying that ``every group isomorphism is a spatial isomorphism'' (à la $\catname{SpatG}$). In relation to the previous section we obtain the following from Corollary \ref{corol_inj_groupoid}.
\begin{proposition}\label{faithGerm}
Suppose $K$ is a faithful class of space-group pairs from $\catname{SpatG}$. If $(\Gamma_1,X_1)$ and $(\Gamma_2,X_2)$ belong to $K$ and $\Gamma_1\cong \Gamma_2$ as abstract groups, then $\operatorname{Germ}(\Gamma_1,X_1)\cong \operatorname{Germ}(\Gamma_2,X_2)$ as topological groupoids.
\end{proposition}
In conjunction with Proposition~\ref{prop_germs} this will allow us to deduce that in many cases, the topological full group of an ample groupoid, considered as an abstract group, is a complete invariant for the isomorphism class of the groupoid. This will be done in the next section. The rest of this section will be devoted to proving two faithfulness results. The first one is a straightforward extension of Matui's spatial realization result \cite[Theorem 3.5]{Mat} to our locally compact setting (Theorem~\ref{classF}). This result will not only apply to the topological full group, but also to any subgroup containing the commutator. The second result we present (Theorem~\ref{thm:KBfaithful}) has more relaxed assumptions on the ``mixing properties'' of the action, but we were not able to apply it to the commutator subgroup of the topological full group.
\subsection{The class $K^F$}
We now present the main definition from \cite[Section 3]{Mat}, adapted to our setting.
\begin{definition}\label{def_F}
We define the class \emph{$K^F$} to consist of all space-group pairs $(\Gamma,X) \in \catname{SpatG}$ which satisfy the following conditions:
\begin{enumerate}
\item[(F1)] For any $x\in X$ and any clopen neighbourhood $A\subset X$ of $x$, there exists an involution $\alpha\in\Gamma\setminus\{1\}$ such that $x\in \supp(\alpha)$ and $\supp(\alpha)\subseteq A$.
\item[(F2)] For any involution $\alpha\in\Gamma\setminus\{1\}$, and any non-empty clopen set $A\subseteq \supp(\alpha)$, there exists $\beta\in\Gamma\setminus\{1\}$ such that $\supp(\beta)\subseteq A\cup\alpha(A)$ and $\alpha(x)=\beta(x)$ for every $x\in \supp(\beta)$.
\item[(F3)] For any non-empty clopen set $A\subseteq X$, there exists $\alpha\in\Gamma$ such that $\supp(\alpha)\subseteq A$ and $\alpha^2\neq 1$.
\end{enumerate}
\end{definition}
\begin{remark}
In \cite[Definition 3.1]{Mat} there is also a condition (F0), which says that the support of any involution is clopen. This is already implicit in the definition above, since all supports of elements in $\Gamma$ are assumed compact and open. We also remark that in Definition~\ref{def_F} there is no countability assumption on the spaces. Moreover, note that condition (F1) (and also (F3)) implies that $X$ cannot have isolated points.
\end{remark}
\begin{remark}
The notation $K^F$ to denote a class of space-group pairs is in the same style as Rubin uses in his paper~\cite{Rub}. Elsewhere in the literature, in particular \cite{Mat} and \cite{GPS}, groups $\Gamma$ with $(\Gamma, X) \in K^F$ are called groups of \emph{class F} (and $X$ is assumed to be a (compact) Cantor space).
\end{remark}
The following theorem is a simple extension of Matui's spatial realization theorem \cite[Theorem~3.5]{Mat}.
\begin{theorem}\label{classF}
The class $K^F$ is faithful.
\end{theorem}
\begin{proof}
By closely inspecting the proof of \cite[Theorem~3.5]{Mat} and the three lemmas preceding it, we observe that the compactness of the spaces is not needed until the proof of \cite[Theorem~3.5]{Mat} itself. The preceding lemmas are completely algebraic. The compactness is used only to guarantee that a certain intersection of supports is non-empty---by appealing to the finite intersection property. However, since all supports in our setting are already compact (by assumption) the conclusion that the intersection is non-empty still holds. Therefore, the same proof goes through.
\end{proof}
\begin{remark}
We remark that Matui's proof is quite similar to the approach used by Bezuglyi and Medynets in \cite{BezMed}, which again builds on Fremlin's book \cite[Section 384]{Frem}.
\end{remark}
\subsection{The class $K^{LCC}$}
We now turn to obtaining the second spatial realization result, which is more general than Theorem~\ref{classF} in the sense that it applies to a seemingly larger class of topological full groups, although it does not directly imply Theorem~\ref{classF}. We will of course still need the groups $\Gamma$ to be very ``rich'' in order to recover the action on the space $X$, but we do not focus solely on involutive group elements as in $K^F$.
Some of the (many) results from Rubin's remarkable paper~\cite{Rub} will form the backbone of our reconstruction result. In that paper, Rubin establishes the faithfulness of several wide classes of space-group pairs. However, each of the classes considered requires a quite different proof. We essentially reprove Rubin's result on $0$-dimensional spaces, but we obtain a slightly different statement. Our proof is also more straightforward, since we aim for a less general setting, namely the unit spaces of ample groupoids.
\subsubsection{Reconstructing the Boolean algebra $\mathcal{R}(X)$}
The main theorem from Section~$2$ of Rubin's paper (given below as Theorem~\ref{reconstructRX}) gives general conditions for when the abstract isomorphism class of a group $\Gamma \leq \operatorname{Homeo}(X)$ determines the Boolean algebra $\mathcal{R}(X)$ and the induced action by $\Gamma$ on it ($\Gamma$ acts naturally on the regular sets in the space $X$ by taking the image under each homeomorphism). Then in~\cite[Section 3]{Rub}, Rubin defines several classes of space-group pairs and proves, in a case-by-case manner, that the space $X$, and the action by $\Gamma$ on it, are recovered from the action of $\Gamma$ on $\mathcal{R}(X)$. Let us begin with some terminology (adapted from \cite{Rub}).
\begin{definition}
Let $(\Gamma,X)$ be a space-group pair.
\begin{enumerate}
\item We say that $(\Gamma,X)$ is \emph{locally moving} if for every nonempty open subset $A \subseteq X$ there exists $\gamma\in \Gamma\setminus\{1\}$ with $\supp(\gamma)\subseteq A$.
\item An open set $B \subseteq X$ is called \emph{flexible} if for every pair of open subsets $C_1,C_2\subseteq B$, if there exists $\gamma \in \Gamma$ such that $\gamma(C_1)\cap C_2\neq\emptyset$, then there exists $\tau \in\Gamma$ such that $\tau(C_1)\cap C_2\neq \emptyset$ and $\supp(\tau)\subseteq B$.
\item We say that $(\Gamma,X)$ is \emph{locally flexible} if every non-empty open subset $A$ contains a non-empty open flexible subset $B \subseteq A$.
\end{enumerate}
\end{definition}
\begin{remark}
Note that if $(\Gamma,X)$ is locally moving, then the space $X$ has no isolated points.
\end{remark}
\begin{remark}
In \cite{Rub}, ``locally moving'' goes by the name ``regionally disrigid'', whilst the former terminology is from a later paper of Rubin \cite{Rub2}.
\end{remark}
We now state a version of the main result from Section~$2$ in \cite{Rub}.
\begin{theorem}[cf.\ {\cite[Theorem 0.2, Theorem 2.14(a)]{Rub}}] \label{reconstructRX}
Suppose $(\Gamma_1,X_1), (\Gamma_2,X_2) \in \catname{SpatG}$ are locally moving and locally flexible. If $\Phi \colon \Gamma_1 \to \Gamma_2$ is an isomorphism of groups, then there exists an isomorphism $\psi \colon \mathcal{R}(X_1) \to \mathcal{R}(X_2)$ of Boolean algebras such that $\psi(g(A)) = \Phi(g)(\psi(A))$ for each $A \in \mathcal{R}(X_1)$ and $g \in \Gamma_1$.
\end{theorem}
If we think of $g$ and $\Phi(g)$ as elements in $\aut(\mathcal{R}(X_1))$ and $\aut(\mathcal{R}(X_2))$ respectively, then we can rewrite the conclusion in the preceding theorem as
\[\Phi(g) = \psi \circ g \circ \psi^{-1}. \] Thus, Theorem~\ref{reconstructRX} says that any group isomorphism between $\Gamma_1$ and $\Gamma_2$ is actually induced by an isomorphism of the Boolean algebras of regular open sets of the underlying spaces. One could also view it as an equivariance of the group actions on the regular sets.
\begin{remark}
We remark that what Rubin proves in {\cite[Theorem 2.14(a)]{Rub}} is a somewhat stronger statement than the one we gave above. He shows that if $(\Gamma,X)$ is locally moving and locally flexible, then starting from $\Gamma$ alone, one can canonically reconstruct the Boolean algebra $\mathcal{R}(X)$ (up to isomorphism) using only group theoretic constructions. Moreover, one obtains a natural action by $\Gamma$ on this Boolean algebra which is conjugate to the action by $\Gamma$ on $\mathcal{R}(X)$. The strategy of the proof is to model a regular set $A \in \mathcal{R}(X)$ by its rigid stabilizer $Q(A) \coloneqq \{\gamma \in \Gamma \ \vert \ \supp(\gamma) \subseteq A \}$, and then to describe the Boolean operations in $\mathcal{R}(X)$ group theoretically in terms of the subgroups $Q(A)$. Finally, one shows that there are enough regular sets $A$ for which subgroups of the form $Q(A)$ can be detected inside $\Gamma$ to generate the whole of $\mathcal{R}(X)$.
\end{remark}
\subsubsection{Reconstructing the space $X$}
We now turn to reconstructing $X$ (and the original action by $\Gamma$) from its Boolean algebra of regular sets. The strategy is first to impose conditions making it possible to detect clopenness, and then to characterize the compact open sets among the clopen sets, which allows us to recover $X$ via Stone duality.
\begin{definition}\label{recognizable}
Let $(\Gamma,X)$ be a space-group pair. A clopen set $A \subseteq X$ is called \emph{recognizable by $\Gamma$} if it satisfies:
\begin{enumerate}
\item For every $\gamma \in \Gamma$ with $\gamma(A) = A$ the homeomorphism $\tau$ given by
\[\tau(x) =
\begin{cases}
\gamma(x) & x \in A,\\
x & \text{otherwise,}
\end{cases}\]
belongs to $\Gamma$.
\item For every $\gamma \in \Gamma$ with $\gamma(A) \cap A = \emptyset$ the involution $\alpha$ given by
\[\alpha(x) =
\begin{cases}
\gamma(x) & x \in A,\\
\gamma^{-1}(x) & x \in \gamma(A), \\
x & \text{otherwise,}
\end{cases}\]
belongs to $\Gamma$.
\end{enumerate}
\end{definition}
We shall see later that in our setting of topological full groups, all clopen subsets of the unit space are recognizable. Whenever this is the case, it is possible to characterize when a regular set is closed (i.e.\ clopen) using the following Boolean algebra notion.
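To indicate why recognizability holds for topological full groups, here is a sketch (our addition, anticipating the later argument, and assuming the convention that elements of $\llbracket \mathcal{G} \rrbracket$ come from full bisections $U$ with $U \setminus \mathcal{G}^{(0)}$ compact):

```latex
% Sketch (ours): let A \subseteq \mathcal{G}^{(0)} be clopen and \pi_U \in \llbracket \mathcal{G} \rrbracket.
% (1) If \pi_U(A) = A, then the full bisection
\[ V = UA \cup \left(\mathcal{G}^{(0)} \setminus A\right) \]
% satisfies \pi_V = \tau: it acts as \pi_U on A and as the identity elsewhere.
% (2) If \pi_U(A) \cap A = \emptyset, then the full bisection
\[ W = UA \cup (UA)^{-1} \cup \left(\mathcal{G}^{(0)} \setminus \left(A \cup \pi_U(A)\right)\right) \]
% satisfies \pi_W = \alpha: it swaps A and \pi_U(A) via \pi_U and \pi_U^{-1}, fixing the rest.
% In both cases V \setminus \mathcal{G}^{(0)} and W \setminus \mathcal{G}^{(0)} are closed subsets of
% the compact set (U \setminus \mathcal{G}^{(0)}) \cup (U^{-1} \setminus \mathcal{G}^{(0)}),
% so \pi_V and \pi_W indeed belong to \llbracket \mathcal{G} \rrbracket.
```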
\begin{definition}
Let $(\Gamma,X)$ be a space-group pair, and let $A \in \mathcal{R}(X)$ be a regular open set. We say that $A$ is \emph{weakly clopen} if for every $\gamma \in \Gamma$ with $\gamma\left(A \cap \gamma(A)\right) = A \cap \gamma(A)$, there exists an element $\gamma' \in \Gamma$ satisfying
\begin{enumerate}
\item $\gamma'(B) = \gamma(B)$ for each $B \subseteq A \cap \gamma(A)$,
\item $\gamma'(B) = B$ for each $B \subseteq \ \sim\left(A \cap \gamma(A)\right)$.
\end{enumerate}
\end{definition}
Note that the notion of being weakly clopen is formulated solely in terms of the action by $\Gamma$ on the Boolean algebra $\mathcal{R}(X)$. As the next result shows, under suitable hypotheses being weakly clopen is the same as being clopen.
\begin{lemma}\label{clopenLemma}
Let $(\Gamma,X) \in \catname{SpatG}$. Assume that every clopen subset of $X$ is recognizable by $\Gamma$, and that the $\Gamma$-orbit of each point contains at least $3$ points. Then a regular open set $A \in \mathcal{R}(X)$ is clopen if and only if both $A$ and $\sim A$ are weakly clopen.
\end{lemma}
\begin{proof}
This is a special case of \cite[Lemma 3.45]{Rub}, where the dense subset $R$ is taken to be all of $\mathcal{R}(X)$. The assumptions 3.V.1 (a), (b), (c) and 3.V.2 (a), (b) preceding \cite[Lemma 3.45]{Rub} follow from those above. In particular, what Rubin calls ``recognizably clopen'' coincides with (2) in Definition~\ref{recognizable}, and ``strongly recognizably clopen'' is slightly weaker than (1) in Definition~\ref{recognizable} (together with (2)).
\end{proof}
In order to invoke Stone duality for Boolean spaces we need to recover the generalized Boolean algebra of compact open sets. The previous lemma gives us the clopen sets, and from these we obtain the compact open ones as follows.
\begin{lemma}\label{lem:countCompact}
Let $X$ be a second countable Boolean space. Then $X$ is compact if and only if $\co(X)$ is countable.
\end{lemma}
\begin{proof}
If $X$ is compact, then $\co(X) = \ck(X)$, and any second countable space has countably many compact open subsets.
Suppose $X$ is non-compact. Let $\{K_n \}_{n=1}^\infty$ be a countable basis for $X$ consisting of compact open sets. Now form the compact open sets $C_k = \cup_{n=1}^k K_n$. As $X$ is not compact, we have $C_k \neq X$ for each $k$. Also, $C_k \subseteq C_{k+1}$ and the $C_k$'s cover $X$. By passing to a subsequence, if necessary, we may assume that $C_k \subsetneq C_{k+1}$ for each $k$. Finally, let $D_k = C_{k+1} \setminus C_k$. Then the $D_k$'s are pairwise disjoint nonempty compact open sets. We claim that for each subset $S \subseteq \mathbb{N}$, the set $\cup_{k \in S} D_k$ is clopen; this produces uncountably many distinct clopen sets. The union is clearly open, and it is closed because for each $C_m$ the intersection $C_m \cap (\cup_{k \in S} D_k)$ is a finite union of the $D_k$'s, hence closed, and the $C_m$'s form an open cover of $X$.
\end{proof}
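As a concrete illustration (our own example, not from the text), the lemma can be tested on the simplest non-compact Boolean space:

```latex
% Example (ours): X = \mathbb{N} with the discrete topology is a second countable,
% non-compact Boolean space in which every subset is clopen. Hence
\[
\co(\mathbb{N}) = 2^{\mathbb{N}} \ \text{(uncountable)},
\qquad
\ck(\mathbb{N}) = \{ F \subseteq \mathbb{N} \mid F \text{ finite} \} \ \text{(countable)},
\]
% in agreement with Lemma~\ref{lem:countCompact}: non-compactness is witnessed by
% the uncountability of \co(X).
```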
\begin{corollary}\label{compactOpenLemma}
Let $X$ be a second countable Boolean space, and let $A \in \co(X)$ be a clopen set. Then $A$ is compact if and only if the set $\{B \in \co(X) \ \vert \ B \subseteq A\}$ is countable.
\end{corollary}
\begin{proof}
The set $\{B \in \co(X) \ \vert \ B \subseteq A\}$ coincides with $\co(A)$ when viewing $A$ as a subspace of $X$. The result now follows from Lemma~\ref{lem:countCompact}.
\end{proof}
This shows that in the generalized Boolean algebra $\co(X)$ compactness is characterized by having only countably many elements below. We are now ready to give the main result of this section.
\begin{definition}\label{def_KB}
We define the class \emph{$K^{LCC}$} to consist of all space-group pairs $(\Gamma,X) \in \catname{SpatG}$ which satisfy the following conditions:
\begin{enumerate}[label=(K\arabic*)]
\item $X$ is a locally compact Cantor space.
\item $(\Gamma,X)$ is locally moving.
\item $(\Gamma,X)$ is locally flexible.
\item Every clopen subset of $X$ is recognizable by $\Gamma$.
\item The $\Gamma$-orbit of each point contains at least $3$ points.
\end{enumerate}
\end{definition}
\begin{theorem}\label{thm:KBfaithful}
The class $K^{LCC}$ is faithful.
\end{theorem}
\begin{proof}
Suppose we have two space-group pairs $(\Gamma_1,X_1)$, $(\Gamma_2,X_2) \in K^{LCC}$ and a group isomorphism $\Phi \colon \Gamma_1 \to \Gamma_2$. Invoking Theorem~\ref{reconstructRX} yields an isomorphism $\psi \colon \mathcal{R}(X_1) \to \mathcal{R}(X_2)$ of Boolean algebras such that $\psi(g(A)) = \Phi(g)(\psi(A))$ for each $A \in \mathcal{R}(X_1)$ and $g \in \Gamma_1$. We first argue that $\psi(\co(X_1)) = \co(X_2)$, and then that $\psi(\ck(X_1)) = \ck(X_2)$.
First of all, note that both $\co(X_i)$ and $\ck(X_i)$ are invariant under $\Gamma_i$ ($i = 1,2$). Lemma~\ref{clopenLemma} characterizes clopenness of regular sets in $X_i$ solely in terms of the (induced) actions by $\Gamma_i$ on $\mathcal{R}(X_i)$. Since $\psi$ is an equivariant Boolean algebra isomorphism, it follows that $\psi(\co(X_1)) = \co(X_2)$. Next, Corollary~\ref{compactOpenLemma} characterizes compactness of a clopen set in terms of a countability condition in the generalized Boolean algebra $\co(X_i)$. Clearly, this is then also preserved by $\psi$. Consequently, $\psi$ restricts to an equivariant isomorphism of the generalized Boolean algebras $\ck(X_1)$ and $\ck(X_2)$.
By applying the Stone functor to the generalized Boolean algebra isomorphism \[\psi \colon \ck(X_1) \to \ck(X_2)\] we obtain a homeomorphism \[\mathbb{S}(\psi) \colon \mathbb{S}(\ck(X_2)) \to \mathbb{S}(\ck(X_1))\] of the spaces of ultrafilters. The groups $\Gamma_i$ act naturally on $\mathbb{S}(\ck(X_i))$ by $g \cdot \alpha = \{g(K) \ \vert \ K \in \alpha \}$ for an ultrafilter $\alpha \in \mathbb{S}(\ck(X_i))$. Finally, let $\phi \colon X_1 \to X_2$ be the homeomorphism given by the composition
\[\begin{tikzcd}
X_1 \arrow{r}{\Omega_{X_1}} & \mathbb{S}(\ck(X_1)) \arrow{r}{\mathbb{S}(\psi)^{-1}} & \mathbb{S}(\ck(X_2)) \arrow{r}{\Omega_{X_2}^{-1}} & X_2,
\end{tikzcd} \]
where $\Omega_{X_i}$ is the canonical homeomorphism mapping a point to its compact open neighbourhood ultrafilter. It is now easy to check that the original group isomorphism $\Phi$ is spatially implemented by $\phi$, i.e.\ that $\Phi(g) = \phi \circ g \circ \phi^{-1}$ for each $g \in \Gamma_1$. This completes the proof.
\end{proof}
\begin{remark}
We would like to remark that Medynets has already obtained this result for transformation groupoids associated with $\mathbb{Z}$-actions on the Cantor space \cite[Remark 3]{Med}. The transformation groupoids arising there give rise to topological full groups in $K^{LCC}$.
\end{remark}
\section{Isomorphism theorems for ample groupoids}\label{sec:reggrpd}
In this section we shall use the topological reconstruction results of the previous section to obtain spatial realization results for the topological full group. As corollaries we are able to reconstruct certain ample groupoids from their topological full group. The two faithful classes considered in the previous section allow us to lift an abstract group isomorphism of (subgroups of) the topological full groups to a spatial one. This in turn yields an isomorphism of the associated groupoids of germs (c.f.\ Corollary~\ref{corol_inj_groupoid}). In order to conclude that the groupoids themselves are isomorphic we need, by Proposition~\ref{prop_germs} and Corollary~\ref{cor:necessaryCover}, to assume that the subgroups in question cover the groupoids. As we saw in Lemma~\ref{ex_cover}, if every $\mathcal{G}$-orbit has length at least $2$, or respectively $3$, then $\llbracket \mathcal{G} \rrbracket$, or respectively any $\Gamma$ with $\mathsf{D}(\llbracket \mathcal{G} \rrbracket) \leq \Gamma \leq \llbracket \mathcal{G} \rrbracket$, covers $\mathcal{G}$.
Let us first state a reconstruction result building on the class $K^F$.
\begin{proposition}[{c.f. \cite[Proposition 3.6]{Mat}}]\label{KFprop}
Let $\mathcal{G}$ be an effective ample Hausdorff groupoid whose unit space has no isolated points. If $\mathcal{G}$ is minimal and $\Gamma$ is any subgroup of $\llbracket \mathcal{G} \rrbracket$ containing $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$, then $\left(\Gamma, \mathcal{G}^{(0)} \right) \in K^F$.
\end{proposition}
\begin{proof}
The proof of~\cite[Proposition 3.6]{Mat} goes through verbatim in this slightly more general setting. It makes heavy use of the minimality of $\mathcal{G}$, combining this with Lemma~\ref{lemma:bisectionInvolution} to find the desired elements in $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$.
\end{proof}
This yields the following minor extension of~\cite[Theorems 3.9 \& 3.10]{Mat}.\footnote{In~\cite[Theorem 3.10]{Mat} the kernel of the so-called \emph{index map} also appears (as $\llbracket \mathcal{G} \rrbracket_0$). We could equally well have included it in Theorem~\ref{thm:KFgroupoid} since it is a distinguished subgroup lying between $\llbracket \mathcal{G} \rrbracket$ and $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$.}
\begin{theorem}\label{thm:KFgroupoid}
Let $\mathcal{G}_1, \mathcal{G}_2$ be effective ample minimal Hausdorff groupoids whose unit spaces have no isolated points. Suppose $\Gamma_1, \Gamma_2$ are subgroups with $\mathsf{D}(\llbracket \mathcal{G}_i \rrbracket) \leq \Gamma_i \leq \llbracket \mathcal{G}_i \rrbracket$. If $\Gamma_1 \cong \Gamma_2$ as abstract groups, then $\mathcal{G}_1 \cong \mathcal{G}_2$ as topological groupoids. In particular, the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_1 \cong \mathcal{G}_2$ as topological groupoids.
\item $\llbracket \mathcal{G}_1 \rrbracket \cong \llbracket \mathcal{G}_2 \rrbracket$ as abstract groups.
\item $\mathsf{D}(\llbracket \mathcal{G}_1 \rrbracket) \cong \mathsf{D}(\llbracket \mathcal{G}_2 \rrbracket)$ as abstract groups.
\end{enumerate}
\end{theorem}
\begin{proof}
Clearly every $\mathcal{G}_i$-orbit is infinite, for $i = 1,2$. Thus the result follows from combining Proposition~\ref{KFprop}, Theorem~\ref{classF}, Proposition~\ref{faithGerm}, Lemma~\ref{ex_cover} and Proposition~\ref{prop_germs}.
\end{proof}
\begin{remark}
For transformation groupoids arising from locally compact Cantor minimal systems, a variant of this result appears in~\cite[Theorem 4.13 (vi)]{Mat4}. See also Remark~\ref{rem:matui}.
\end{remark}
In the next section we will see examples of non-minimal graph groupoids where (the commutator of) the topological full group belongs to $K^F$. It would be interesting to find generic conditions on a general ample groupoid $\mathcal{G}$, weaker than minimality, ensuring that $(\llbracket \mathcal{G} \rrbracket,\mathcal{G}^{(0)})$, or $(\mathsf{D}(\llbracket \mathcal{G} \rrbracket),\mathcal{G}^{(0)})$, belongs to $K^F$.
Our next goal is to analyze the conditions in the definition of the class $K^{LCC}$, when the space-group pair under consideration is the topological full group together with the unit space of an ample groupoid. We begin by showing that the orbits in the groupoid coincide with the orbits by the action of the topological full group.
\begin{lemma}\label{lemma:orbit}
Let $\mathcal{G}$ be an effective ample groupoid and let $x \in \mathcal{G}^{(0)}$. Then we have \[\operatorname{Orb}_{\llbracket \mathcal{G} \rrbracket}(x) = \operatorname{Orb}_\mathcal{G}(x).\]
\end{lemma}
\begin{proof}
From the definition of the topological full group it is obvious that $\operatorname{Orb}_{\llbracket \mathcal{G} \rrbracket}(x) \subseteq \operatorname{Orb}_\mathcal{G}(x)$. For the reverse inclusion, suppose $y \in \operatorname{Orb}_\mathcal{G}(x)$ is distinct from $x$, and let $\gamma \in \mathcal{G}$ be an arrow from $x$ to $y$. Applying Lemma~\ref{lemma:bisectionExistence} to $\gamma$ we obtain an element $\pi_U \in \llbracket \mathcal{G} \rrbracket$ with $\pi_U(x) = y$. Thus $y \in \operatorname{Orb}_{\llbracket \mathcal{G} \rrbracket}(x)$.
\end{proof}
In other words, when the space-group pair is $(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$, condition (K5) of Definition~\ref{def_KB} is equivalent to saying that every $\mathcal{G}$-orbit has length at least $3$ (which in turn implies that $\llbracket \mathcal{G} \rrbracket$ covers $\mathcal{G}$). Next we show that conditions (K3) and (K4) of Definition~\ref{def_KB} are always satisfied for topological full groups. In fact, $(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$ is even ``globally'' flexible.
\begin{lemma}
Let $\mathcal{G}$ be an effective ample groupoid. Then every open subset of $\mathcal{G}^{(0)}$ is flexible with respect to $\llbracket \mathcal{G} \rrbracket$. In particular, $(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$ is locally flexible.
\end{lemma}
\begin{proof}
Let $A$ be a non-empty open subset of $\mathcal{G}^{(0)}$, and let $B_1, B_2$ be two open subsets of $A$. We may assume that these are disjoint, for otherwise the identity homeomorphism trivially witnesses flexibility. Suppose $\pi_U \in \llbracket \mathcal{G} \rrbracket$ satisfies $\pi_U(B_1) \cap B_2 \neq \emptyset$. Then there is a $g \in U$ with $s(g) \in B_1$ and $r(g) \in B_2$. Lemma~\ref{lemma:bisectionExistence} applied to $g$ and $B_1 \sqcup B_2$ produces an element $\pi_V \in \llbracket \mathcal{G} \rrbracket$ with $\supp(\pi_V) \subseteq B_1 \sqcup B_2 \subseteq A$ and $\pi_V(B_1) \cap B_2 \neq \emptyset$. This shows that $A$ is flexible.
\end{proof}
\begin{lemma}
Let $\mathcal{G}$ be an effective ample groupoid. Then every clopen subset of $\mathcal{G}^{(0)}$ is recognizable by $\llbracket \mathcal{G} \rrbracket$.
\end{lemma}
\begin{proof}
Let $A \subseteq \mathcal{G}^{(0)}$ be clopen.
(1) Suppose $\pi_U \in \llbracket \mathcal{G} \rrbracket$ satisfies $\pi_U(A) = A$. Then $V = s_{\vert U}^{-1}(A) \subseteq U$ is a clopen bisection with $s(V) = r(V) = A$. Then $\tilde{V}$ as in Lemma~\ref{lem:extendBisection} is a full bisection with $\supp(\pi_{\tilde{V}}) \subseteq \supp(\pi_U)$, hence $\pi_{\tilde{V}} \in \llbracket \mathcal{G} \rrbracket$. The homeomorphism $\pi_{\tilde{V}}$ is the one from condition (1) of Definition~\ref{recognizable}.
(2) Suppose now that $\pi_U \in \llbracket \mathcal{G} \rrbracket$ satisfies $\pi_U(A) \cap A = \emptyset$. Again we set $V = s_{\vert U}^{-1}(A)$. Then $s(V) \cap r(V) = A \cap \pi_U(A) = \emptyset$. The full bisection $\hat{V}$ as in Lemma~\ref{lemma:bisectionInvolution} also has compact support since $\supp(\pi_{\hat{V}}) \subseteq \supp(\pi_U)$, and so $\pi_{\hat{V}} \in \llbracket \mathcal{G} \rrbracket$. The involution $\pi_{\hat{V}}$ is the one from condition (2) of Definition~\ref{recognizable}.
\end{proof}
Inspired by \cite[Proposition 2.2]{Med} we arrive at the following characterization of $(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$ being locally moving in terms of the groupoid $\mathcal{G}$.
\begin{proposition}\label{prop:LMChar}
Let $\mathcal{G}$ be an effective ample groupoid. The following are equivalent:
\begin{enumerate}
\item $(\llbracket \mathcal{G} \rrbracket, \mathcal{G}^{(0)})$ is locally moving.
\item Every non-empty clopen subset of $\mathcal{G}^{(0)}$ meets some $\mathcal{G}$-orbit twice.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $A$ be a non-empty clopen subset of $\mathcal{G}^{(0)}$. We will prove that $A$ meets some $\mathcal{G}$-orbit twice if and only if there is an element $\pi_U \in \llbracket \mathcal{G} \rrbracket \setminus \{1\}$ with $\supp(\pi_U) \subseteq A$. If $\emptyset \neq \supp(\pi_U) \subseteq A$, then, since both sets are clopen, there is an $x \in A$ with $x \neq \pi_U(x) \in A$. In other words, $\vert A \cap \operatorname{Orb}_\mathcal{G}(x) \vert \geq 2$. Conversely, if $\vert A \cap \operatorname{Orb}_\mathcal{G}(x) \vert \geq 2$ holds for some $x \in \mathcal{G}^{(0)}$, then there is a $g \in \mathcal{G} \setminus \mathcal{G}'$ such that $s(g)$ and $r(g)$ both belong to $A$. Now Lemma~\ref{lemma:bisectionExistence} gives us a nontrivial group element in $\llbracket \mathcal{G} \rrbracket$ supported on $A$. As the clopens form a base for the topology on $\mathcal{G}^{(0)}$ we are done.
\end{proof}
Putting it all together, we arrive at the second main result of this section.
\begin{theorem}\label{thm:KLCCgroupoid}
Let $\mathcal{G}_1, \mathcal{G}_2$ be effective ample Hausdorff groupoids over locally compact Cantor spaces. If, for $i = 1,2$, every $\mathcal{G}_i$-orbit has length at least $3$, and each clopen subset of $\mathcal{G}_i^{(0)}$ meets some $\mathcal{G}_i$-orbit twice, then any isomorphism between $\llbracket \mathcal{G}_1 \rrbracket$ and $\llbracket \mathcal{G}_2 \rrbracket$ is spatial. In particular, the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_1 \cong \mathcal{G}_2$ as topological groupoids.
\item $\llbracket \mathcal{G}_1 \rrbracket \cong \llbracket \mathcal{G}_2 \rrbracket$ as abstract groups.
\end{enumerate}
\end{theorem}
The preceding result applies in particular to groupoids which in addition are minimal (for then the two orbit conditions in the preceding theorem are trivially satisfied). We make the following definitions in an attempt to give some generic conditions --- weaker than minimality --- guaranteeing condition (2) in Proposition~\ref{prop:LMChar}.
\begin{definition}[{c.f. \cite[page 8]{Nek}}]
An ample groupoid $\mathcal{G}$ is called \emph{locally minimal} if there exists a basis for $\mathcal{G}^{(0)}$ consisting of clopen sets $A$ such that $\mathcal{G}_A$ is minimal.
\end{definition}
\begin{definition}
An ample groupoid $\mathcal{G}$ is called \emph{densely minimal} if for every non-empty open subset $A$ of $\mathcal{G}^{(0)}$ there exists a non-empty clopen subset $B \subseteq A$ such that $\mathcal{G}_B$ is minimal.
\end{definition}
We clearly have the following implications:
\[\text{minimal} \Rightarrow \text{locally minimal} \Rightarrow \text{densely minimal} \Rightarrow \text{all clopens meet some orbit twice}.\]
We will give examples of densely minimal groupoids, which are not minimal, in the next section (Example~\ref{ex:1orbitCover}), as well as groupoids where every clopen meets some orbit twice, but which are not densely minimal (Remark~\ref{rem:notDM}).
\begin{remark}
It would be desirable to also obtain a spatial realization result for the commutator subgroup $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$ in terms of the class $K^{LCC}$. Unfortunately we were not able to show that the commutator satisfies condition (K4). Nor were we able to find some other condition characterizing clopenness in terms of the commutator. It seems that the class $K^F$ is better suited to deal with subgroups between $\llbracket \mathcal{G} \rrbracket$ and $\mathsf{D}(\llbracket \mathcal{G} \rrbracket)$, but for the commutator to be in $K^F$ one must have ``more conditions'' on the groupoid.
\end{remark}
\section{Graph groupoids}\label{sec:gg}
We are now going to apply the theory from the preceding sections to obtain specialized results for graph groupoids. We begin by recalling the relevant terminology of graphs and their associated groupoids (as they appear in the literature on graph $C^*$-algebras). This material is standard and may be found in many other papers, e.g.~\cite{BCW}, \cite{KPRR}.
\subsection{Graph terminology}
By a \emph{graph} we shall always mean a directed graph, that is, a quadruple ${E=(E^0,E^1,r,s)}$, where $E^0, E^1$ are (non-empty) sets and $r,s \colon E^1 \to E^0$ are maps. The elements in $E^0$ and $E^1$ are called \emph{vertices} and \emph{edges}, respectively, while the maps $r$ and $s$ are called the \emph{range} and \emph{source} map, respectively.\footnote{Although the notation collides with the range and source maps in a groupoid, both conventions are well established. In the sequel it will always be clear from context whether we mean the source/range in a graph or in a groupoid.} We say that $E$ is \emph{countable} if $E^0$ and $E^1$ both are countable sets, and similarly that $E$ is \emph{finite} if $E^0$ and $E^1$ are finite.
A \emph{path} in $E$ is a sequence of edges $\mu=e_1 e_2 \ldots e_n$ such that $r(e_i)=s(e_{i+1})$ for $1\leq i\leq n-1$. The \emph{length} of $\mu$ is $\left| \mu \right| \coloneqq n$. The set of paths of length $n$ is denoted $E^n$. The vertices, $E^0$, are considered trivial paths of length $0$. The set of all finite paths is denoted $E^* \coloneqq \bigcup_{n=0}^\infty E^n$. The range and source maps extend to $E^*$ by $r(\mu) \coloneqq r(e_n)$ and $s(\mu) \coloneqq s(e_1)$. For $v \in E^0$, $s(v) = r(v) = v$. Given another path $\lambda = f_1 \ldots f_m$ with $s(\lambda) = r(\mu)$ we denote the concatenated path $e_1 \ldots e_n f_1 \ldots f_m$ by $\mu \lambda$. By convention $s(\mu) \mu = \mu = \mu \ r(\mu)$ for each $\mu \in E^*$. Given two paths $\mu, \mu' \in E^*$ we write $\mu < \mu'$ if there exists a path $\lambda$ with $\left| \lambda \right| \geq 1$ such that $\mu' = \mu \lambda$. Writing $\mu \leq \mu'$ allows for $\mu = \mu'$. We say that $\mu$ and $\mu'$ are \emph{disjoint} if $\mu \nleq \mu'$ and $\mu' \nleq \mu$, i.e.\ neither is a subpath of the other.
A \emph{cycle} is a nontrivial path $\mu$ (i.e.\ $\left| \mu \right| \geq 1$) with $r(\mu) = s(\mu)$, and we say that $\mu$ is \emph{based} at $s(\mu)$. We also say that the vertex $s(\mu)$ \emph{supports} the cycle $\mu$. By a \emph{loop} we mean a cycle of length $1$. Beware that some authors use the term loop to denote what we here call cycles. When $\mu$ is a cycle, $\mu^k$ denotes the cycle $\mu \mu \ldots \mu$, where $\mu$ is repeated $k$ times. A cycle $\mu = e_1 \ldots e_n$ is called a \emph{return path} if $r(e_i) \neq r(\mu)$ for all $i < n$. This simply means that $\mu$ does not pass through $s(\mu)$ multiple times. An \emph{exit} for a path $\mu = e_1 \ldots e_n$ is an edge $e$ such that $s(e) = s(e_i)$ and $e \neq e_i$ for some $1 \leq i \leq n$.
For $v,w\in E^0$ we set $vE^n \coloneqq \{ \mu \in E^n \mid s(\mu) = v\}$, $E^n w \coloneqq \{ \mu \in E^n \mid r(\mu) = w\}$ and $vE^n w \coloneqq vE^n \cap E^n w.$ A vertex $v \in E^0$ is called a \emph{sink} if $vE^1 = \emptyset$, and a \emph{source} if $E^1 v = \emptyset$. Further, $v$ is called an \emph{infinite emitter} if $vE^1$ is an infinite set. The set of \emph{regular} vertices is $E^0_{\text{reg}} \coloneqq \{ v \in E^0 \mid 0 < | vE^1 | < \infty \}$, and the set of \emph{singular} vertices is $E^0_{\text{sing}} \coloneqq E^0\setminus E^0_{\text{reg}}$. In other words, sinks and infinite emitters are singular vertices, while all other vertices are regular. We equip the vertex set $E^0$ with a preorder $\geq$ by defining $v \geq w$ if and only if $vE^* w \neq \emptyset$, i.e.\ there is a path from $v$ to $w$. The graph $E$ is called \emph{strongly connected} if for each pair of vertices $v,w \in E^0$ we have $v \geq w$.
A graph $E$ is said to satisfy \emph{Condition (L)} if every cycle in $E$ has an exit. The graph $E$ satisfies \emph{Condition (K)} if for every vertex $v \in E^0$, either there is no return path based at $v$ or there are at least two distinct return paths based at $v$. We say that $E$ satisfies \emph{Condition (I)} if for every vertex $v \in E^0$, there exists a vertex $w \in E^0$ supporting at least two distinct return paths and $v \geq w$. In general, Conditions (K) and (I) both imply (L), while (K) and (I) are not comparable. For graphs with finitely many vertices and no sinks, Condition (I) is equivalent to Condition~(L).
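To illustrate these conditions, consider the graph consisting of a single vertex $v$ and a single loop $e$. The cycle $e$ has no exit, so this graph fails Condition~(L), and therefore also Conditions~(K) and~(I). Adjoining a second loop $f$ at $v$ remedies this: every cycle now has an exit, and $e$ and $f$ are two distinct return paths based at $v$, so all three conditions hold. Note also that in the latter graph the cycle $ee$ is not a return path, since it passes through its base vertex $v$ before terminating.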
\subsection{The boundary path space}
An \emph{infinite path} in a graph $E$ is an infinite sequence of edges $x = e_1 e_2 e_3 \ldots$ such that $r(e_i)=s(e_{i+1})$ for all $i \in \mathbb{N}$. We set $s(x) \coloneqq s(e_1)$ and $\left| x \right| \coloneqq \infty$. The set of all infinite paths in $E$ is denoted $E^\infty$. Given a finite path $\mu = f_1 \ldots f_n \in E^*$ and an infinite path $x = e_1 e_2 e_3 \ldots \in E^\infty$ such that $r(\mu) = s(x)$ we denote the infinite path $f_1 \ldots f_n e_1 e_2 e_3 \ldots$ by $\mu x$. For natural numbers $m < n$ we denote the finite path $e_m e_{m+1} \ldots e_n$ by $x_{[m,n]}$, and we denote the infinite path $e_m e_{m+1} e_{m+2} \ldots$ by $x_{[m,\infty)}$. Given a cycle $\lambda \in E^*$ we denote the infinite path $\lambda \lambda \lambda \ldots$ by $\lambda^\infty$. An infinite path of the form $\mu \lambda^\infty$, where $\lambda$ is a cycle with $s(\lambda) = r(\mu)$, is called \emph{eventually periodic}. An infinite path $e_1 e_2 \ldots \in E^\infty$ is \emph{wandering} if the set $\{ i \in \mathbb{N} \mid s(e_i) = v \}$ is finite for each $v \in E^0$. Note that there are no wandering infinite paths in a graph with finitely many vertices. We call a wandering infinite path $e_1 e_2 \ldots \in E^\infty$ a \emph{semi-tail}\footnote{A \emph{tail} is a path with $s(e_i)E^1 = \{e_i\} = E^1r(e_i)$ for all $i$, c.f.\ \cite{BPRS}.} if $s(e_i)E^1 = \{e_i\}$ for each~{$i \in \mathbb{N}$}. The graph $E$ is called \emph{cofinal} if for every vertex $v \in E^0$ and for every infinite path $e_1 e_2 \ldots \in E^\infty$, there exists $n \in \mathbb{N}$ such that $v \geq s(e_n)$.
The \emph{boundary path space} of $E$ is
\[\partial E \coloneqq E^\infty \cup \{\mu\in E^* \mid r(\mu)\in E^0_{\text{sing}}\},\]
whose topology will be specified shortly. Note that if $v \in E^0$ is a singular vertex, then $v \in \partial E$. For any vertex $v \in E^0$ we let $v \partial E \coloneqq \{x \in \partial E \mid s(x) = v \}$ and $v E^\infty \coloneqq \{x \in E^\infty \mid s(x) = v \}$. The \emph{cylinder set} of a finite path $\mu \in E^*$ is $Z(\mu) \coloneqq \{ \mu x \mid x \in r(\mu) \partial E \}$. Given a finite subset $F \subseteq r(\mu)E^1$, we define $Z(\mu \setminus F) \coloneqq Z(\mu) \setminus \left( \bigcup_{e\in F} Z(\mu e) \right)$. Note that two finite paths are disjoint if and only if their cylinder sets are disjoint. A basis for the topology on the boundary path space $\partial E$ is given by $\left\{Z(\mu \setminus F) \mid \mu \in E^*, F \subseteq_{\text{finite}} r(\mu)E^1 \right\}$, c.f.~\cite{Web}. Each basic set $Z(\mu \setminus F)$ is compact open and these sets separate points, so $\partial E$ is a Boolean space. Moreover, each open set in $\partial E$ is a disjoint union of basic sets $Z(\mu \setminus F)$ (\cite[Lemma 2.1]{BCW}). The boundary path space $\partial E$ is second countable exactly when $E$ is countable, and it is compact if and only if $E^0$ is finite. When it comes to (topologically) isolated points, these are classified as follows.
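As a simple illustration of this topology, suppose $v \in E^0$ is an infinite emitter and enumerate $vE^1 = \{e_1, e_2, e_3, \ldots\}$. The trivial path $v$ belongs to $\partial E$, and the basic sets containing it are precisely those of the form $Z(v \setminus F)$ with $F \subseteq_{\text{finite}} vE^1$. Hence the sets $Z(v \setminus \{e_1, \ldots, e_n\})$, $n \in \mathbb{N}$, form a neighbourhood basis at $v$, and any sequence of boundary paths $x_n \in Z(e_n)$ converges to $v$.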
\begin{proposition}[{\cite[Proposition 3.1]{CW}}]\label{prop:DEperfect}
Let $E$ be a graph.
\begin{enumerate}
\item If $v \in E^0$ is a sink, then any finite path $\mu \in E^*$ with $r(\mu) = v$ is an isolated point in $\partial E$.
\item If $x = \mu \lambda^\infty \in E^\infty$ is eventually periodic, then $x$ is an isolated point if and only if the cycle $\lambda$ has no exit.
\item If $x = e_1 e_2 \ldots \in E^\infty$ is wandering, then $x$ is an isolated point if and only if for some $n \in \mathbb{N}$, $e_n e_{n+1} \ldots$ is a semi-tail.
\end{enumerate}
These are the only isolated points in $\partial E$.
\end{proposition}
For each $n \in \mathbb{N}$ we set $\partial E^{\geq n} \coloneqq \{ x \in \partial E \mid \left| x \right| \geq n \}$ and $\partial E^{n} \coloneqq \{ x \in \partial E \mid \left| x \right| = n \}$. Each $\partial E^{\geq n}$ is an open subset of $\partial E$. The \emph{shift map} on $E$ is the map $\sigma_E \colon \partial E^{\geq 1} \to \partial E$ given by $\sigma_E(e_1 e_2 e_3 \ldots) = e_2 e_3 e_4 \ldots$ for $e_1 e_2 e_3 \ldots \in \partial E^{\geq 2}$ and $\sigma_E(e) = r(e)$ for $e \in \partial E^{1}$. In other words, $\sigma_E(x) = x_{[2, \infty)}$. We have that
\[ \sigma_E \left( \partial E^{\geq 1} \right) = \{ x \in \partial E \mid E^1 s(x) \neq \emptyset \} = \partial E \setminus \left( \cup_{E^1 v \neq \emptyset} Z(v) \right), \]
which is an open set, and we see that $\osh$ is surjective if and only if $E$ has no sources. We let $\osh^n \colon \partial E^{\geq n}\to\partial E$ be the $n$-fold composition of $\osh[E]$ with itself, and we set $\osh^0 = \id_{\OSS}$. Each $\osh^n$ is then a local homeomorphism between open subsets of $\partial E$. Note that an infinite path $x \in E^\infty$ is eventually periodic if and only if there are distinct numbers $m,n \in \mathbb{N}_0$ such that $\osh^m(x) = \osh^n(x)$.
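To spell out the last observation: if $x = \mu \lambda^\infty$ is eventually periodic with $m = \left| \mu \right|$ and $k = \left| \lambda \right| \geq 1$, then
\[ \osh^m(x) = \lambda^\infty = \osh^{m+k}(x), \]
so the distinct numbers $m$ and $m+k$ witness the criterion. Conversely, if $\osh^m(x) = \osh^n(x)$ for some $m < n$, then $x_{[m+1,\infty)} = x_{[n+1,\infty)}$, so $\lambda \coloneqq x_{[m+1,n]}$ is a cycle and $x = x_{[1,m]} \lambda^\infty$ (where $x_{[1,m]}$ is understood to be the trivial path $s(x)$ when $m = 0$).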
\subsection{Graph groupoids and their properties}
The \emph{graph groupoid} of a graph $E$ is the Renault-Deaconu groupoid (\cite{Dea}, \cite{Ren3}) of the dynamical system $(\partial E, \osh)$, that is
\[\mathcal{G}_E \coloneqq \{(x,m-n,y) \mid m,n \in \mathbb{N}_0, x \in \partial E^{\geq m}, y \in \partial E^{\geq n}, \osh^m(x) = \osh^n(y) \}. \]
The groupoid structure is given by $(x,k,y) \cdot (y,l,z) \coloneqq (x,k+l,z)$ (and undefined otherwise), $(x,k,y)^{-1} \coloneqq (y,-k,x)$. The unit space is $\mathcal{G}^{(0)}_E = \{ (x,0,x) \mid x \in \partial E \}$, which we will identify with $\partial E$ via $(x,0,x) \leftrightarrow x$. Then $s(x,k,y) = y$ and $r(x,k,y) = x$. We equip $\mathcal{G}_E$ with the topology generated by the basic sets
\[Z(U,m,n,V) \coloneqq \{(x,m-n,y) \mid x\in U, y\in V, \osh[E]^m(x)=\osh[E]^n(y)\},\]
where $U \subseteq \partial E^{\geq m}$ and $V \subseteq \partial E^{\geq n}$ are open sets such that $\left(\sigma_E^m \right)_{| U}$ and $\left(\sigma_E^n \right)_{| V}$ are injective, and $\osh[E]^m(U)=\osh[E]^n(V)$. This makes $\mathcal{G}_E$ an étale groupoid, and the identification of the unit space with $\partial E$ is compatible with the topology on $\partial E$. Note however, that this topology is finer than the relative topology from $\partial E \times \mathbb{Z} \times \partial E$. According to~\cite[page 394]{BCW} the family
\begin{equation}\label{eq:basis}
\left \{ Z(U, \left| \mu \right|,\left| \lambda \right|, V) \mid \osh^{\left| \mu \right|}(U) = \osh^{\left| \lambda \right|}(V) \right \},
\end{equation}
parametrized over all $\mu, \lambda \in E^*$ with $r(\mu) = r(\lambda)$, $U \subseteq Z(\mu)$ compact open, $V \subseteq Z(\lambda)$ compact open, is also a basis for the same topology. Each set $Z(U, \left| \mu \right|,\left| \lambda \right|, V)$ is a compact open bisection, and they separate the elements of $\mathcal{G}_E$, so $\mathcal{G}_E$ is an ample Hausdorff groupoid. The family in \eqref{eq:basis} is countable precisely when $E$ is countable, and so the graph groupoid $\mathcal{G}_E$ is second countable exactly when $E$ is countable.
For a boundary path $x \in \partial E$, the isotropy group of $(x,0,x) \in \mathcal{G}_E^{(0)}$ is nontrivial if and only if $x$ is eventually periodic (and infinite). For graph groupoids, effectiveness always coincides with topological principality, which again is well-known to coincide with the graph satisfying Condition~(L).
\begin{proposition}[c.f.\ {\cite[Proposition 2.3]{BCW}}]
Let $E$ be a graph. The following are equivalent:
\begin{enumerate}
\item The groupoid $\mathcal{G}_E$ is effective.
\item The groupoid $\mathcal{G}_E$ is topologically principal.
\item The set of infinite paths which are not eventually periodic form a dense subset of the boundary path space $\partial E$.
\item The graph $E$ satisfies Condition (L).
\end{enumerate}
\end{proposition}
\begin{proof}
The equivalence of (2), (3) and (4) is proved in~\cite[Proposition 2.3]{BCW}. The proof does not rely on the countability of the graph. As it is always the case that (2) implies (1) (c.f.\ Remark~\ref{rem:effective}), we only have to show that (1) implies (4). To that end, assume that $E$ does not satisfy Condition~(L). Then there is a cycle $\lambda \in E^*$ with no exit, and $\lambda^\infty$ is an isolated point in $\partial E$. But then the bisection \[Z\left(Z\left(\lambda^2\right), 2\vert \lambda \vert, \vert \lambda \vert, Z(\lambda)\right) = \{ \left(\lambda^\infty, \vert \lambda \vert, \lambda^\infty \right) \}\] is an open subset of $\mathcal{G}_E \setminus \mathcal{G}_E^{(0)}$, and hence $\mathcal{G}_E$ is not effective.
\end{proof}
We end this subsection by giving a characterization of minimality for graph groupoids. Let $E$ be a graph. Two infinite paths $x, y \in E^\infty$ are called \emph{tail equivalent} if there are natural numbers $k,l$ such that $x_{[k, \infty)} = y_{[l, \infty)}$. Similarly, two finite paths $\mu, \lambda \in E^*$ are \emph{tail equivalent} if $r(\mu) = r(\lambda)$. From the definition of $\mathcal{G}_E$ one sees that two boundary paths belong to the same $\mathcal{G}_E$-orbit if and only if they are tail equivalent. By combining~\cite[Theorem 5.1]{BCFS} with~\cite[Corollary 2.15]{DT} we arrive at the following result---of which we provide a self-contained proof.
\begin{proposition}
Let $E$ be a graph. Then the following are equivalent:
\begin{enumerate}
\item The groupoid $\mathcal{G}_E$ is minimal.
\item The graph $E$ is cofinal and for each $v \in E^0_\text{sing}, \ w \in E^0$, we have $w \geq v$.
\end{enumerate}
\end{proposition}
\begin{proof}
If $E$ has a sink $w \in E^0_{\text{sing}}$, then one immediately deduces from either statement that $E$ cannot have any other singular vertices, nor any infinite paths. Consequently \[\partial E = \operatorname{Orb}_{\mathcal{G}_E}(w) = \{ \mu \in E^* \mid r(\mu) = w \},\]
and $\mathcal{G}_E$ becomes a discrete transitive groupoid. So \emph{(1)} and \emph{(2)} are clearly equivalent in this case. For the remainder of the proof we assume that $E$ has no sinks.
Assume that \emph{(2)} holds. Let $x \in E^\infty$ and let $\lambda \in E^*$. By cofinality, there is a path $\lambda'$ from $r(\lambda)$ to $s(x_n)$ for some $n \in \mathbb{N}$. The infinite path $\lambda \lambda' x_n x_{n+1} \ldots$ then belongs to both $Z(\lambda)$ and $\operatorname{Orb}_{\mathcal{G}_E}(x)$. Hence the latter is dense in $\partial E$ (since every open set contains a cylinder set when there are no sinks). Next, suppose $\mu \in \partial E \cap E^*$ with $r(\mu)$ an infinite emitter. By assumption there is a path $\lambda''$ from $r(\lambda)$ to $r(\mu)$, and then $\lambda \lambda'' \in Z(\lambda) \cap \operatorname{Orb}_{\mathcal{G}_E}(\mu)$. This shows that $\mathcal{G}_E$ is minimal.
Assume now that $\mathcal{G}_E$ is minimal. To see that $E$ is cofinal, let $x \in E^\infty$ and $v \in E^0$ be given. By minimality there is a $y \in E^\infty$ tail equivalent to $x$ such that $y \in Z(v)$. This implies that $v$ can reach $x$. As for the second part of \emph{(2)}, let $v \in E^0_{\text{sing}}$ and $w \in E^0$ be given. Again by minimality there is a $\lambda \in E^* \cap Z(w)$ tail equivalent to $v$, but this is just a path from $w$ to $v$, so $w \geq v$.
\end{proof}
\begin{remark}
The notion of cofinality is slightly weaker than strong connectedness. But for finite graphs with no sinks and no sources, cofinality coincides with strong connectedness. In fact, this is also true for infinite graphs which additionally have no \emph{semi-heads} (the direction-reversed notion of a semi-tail). We also remark that for cofinal graphs, Condition~(L) coincides with Condition~(K).
\end{remark}
\section{Topological full groups of graph groupoids}\label{sec:tfgg}
We are now going to describe and analyze the topological full group of a graph groupoid. We begin by specifying yet another (equivalent) basis for $\mathcal{G}_E$, allowing us in turn to describe bisections combinatorially in terms of the graph.
For two finite paths $\mu, \lambda \in E^*$ with $r(\mu) = r(\lambda) = v$ we define
\[Z(\mu, \lambda) \coloneqq Z\left(Z(\mu),\left| \mu \right|,\left| \lambda \right|,Z(\lambda) \right).\]
More generally, given a finite subset $F \subseteq v E^1$ as well, we define
\[Z(\mu, F, \lambda) \coloneqq Z\left(Z(\mu \setminus F),\left| \mu \right|,\left| \lambda \right|,Z(\lambda \setminus F) \right).\]
Each $Z(\mu, F, \lambda)$ is a compact open bisection in $\mathcal{G}_E$, and we will see shortly that they also form a basis. Observe that if $v \in E^0_{\text{reg}}$, then $Z(\mu,F,\lambda)=\bigsqcup_{e \in v E^1\setminus F}Z(\mu e,\lambda e)$, and this is a finite union.
\begin{lemma}\label{lem_disj}
Let $E$ be a graph. Let $\mu, \mu',\lambda,\lambda' \in E^*$ with $r(\mu)=r(\lambda)=v$, $r(\mu')=r(\lambda')=v'$ and let $F \subseteq_{\text{finite}} v E^1$, $F' \subseteq_{\text{finite}} v'E^1$. Then $Z(\mu,F,\lambda) \cap Z(\mu',F',\lambda')$ equals either
\begin{enumerate}
\item $\emptyset$, or
\item $Z(\mu,F,\lambda)$, or
\item $Z(\mu',F',\lambda')$, or
\item $Z(\mu,F\cup F',\lambda)$, in which case $\mu=\mu'$, $\lambda=\lambda'$ and \[Z(\mu,F,\lambda) \cup Z(\mu',F',\lambda') = Z(\mu,F \cap F',\lambda).\]
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose $Z(\mu,F,\lambda)\cap Z(\mu',F',\lambda') \neq \emptyset$. Then we must have $\left| \mu \right| - \left| \lambda \right| = \left| \mu' \right| - \left| \lambda' \right|$, $Z(\mu \setminus F)\cap Z(\mu' \setminus F') \neq \emptyset$ and $Z(\lambda \setminus F)\cap Z(\lambda' \setminus F') \neq \emptyset$. Since
\[ Z(\mu \setminus F) \bigcap Z(\mu' \setminus F')= \left\lbrace \begin{array}{ll}
Z(\mu\setminus (F\cup F')) & \text{if }\mu=\mu', \\
Z(\mu \setminus F) & \text{if }\mu' < \mu \text{ and } \mu_{ \left| \mu' \right| +1} \notin F', \\
Z(\mu' \setminus F') & \text{if }\mu <\mu' \text{ and } \mu'_{ \left| \mu \right| +1} \notin F, \\
\emptyset & \text{otherwise},\end{array} \right. \]
we may suppose without loss of generality that $\mu \leq \mu'$. The equality $\left| \mu \right| - \left| \lambda \right| = \left| \mu' \right| - \left| \lambda' \right|$ then forces $\lambda \leq \lambda'$ as well. If $\mu = \mu'$, then we must also have $\lambda = \lambda'$ and it is easy to see that \emph{(4)} holds in this case.
Next, suppose $\mu < \mu'$, which forces $\lambda < \lambda'$. As the intersections above are non-empty we have $Z(\mu' \setminus F') \subseteq Z(\mu \setminus F)$ and $Z(\lambda' \setminus F') \subseteq Z(\lambda \setminus F)$. It follows from this that $Z(\mu',F',\lambda') \subseteq Z(\mu,F,\lambda)$, and we are done.
\end{proof}
\begin{lemma}
The family $\left\{ Z(\mu, F, \lambda) \mid \mu, \lambda \in E^*, r(\mu) = r(\lambda), F \subseteq_{\text{finite}} r(\mu)E^1 \right\}$ forms a basis for $\mathcal{G}_E$.
\end{lemma}
\begin{proof}
It suffices to write each basic set $Z(U, \left| \mu \right|,\left| \lambda \right|, V)$, where $\mu, \lambda \in E^*$ with $r(\mu) = r(\lambda)$, $U \subseteq Z(\mu)$ compact open, $V \subseteq Z(\lambda)$ compact open and $\osh^{\left| \mu \right|}(U) = \osh^{\left| \lambda \right|}(V)$, as a union of $Z(\mu', F', \lambda')$'s. Given such a basic set $Z(U, \left| \mu \right|,\left| \lambda \right|, V)$, we can then write
\[ \osh^{\left| \mu \right|}(U) = \osh^{\left| \lambda \right|}(V) = \bigsqcup_{i=1}^k Z(\eta_i \setminus F_i), \]
for some $\eta_i \in E^*$, $F_i \subseteq_{\text{finite}} r(\eta_i)E^1$, since the former two are compact open subsets of $\partial E$. It follows that
\[U = \bigsqcup_{i=1}^k Z(\mu \eta_i \setminus F_i) \quad \text{and} \quad V = \bigsqcup_{i=1}^k Z( \lambda \eta_i \setminus F_i). \]
Hence
\[Z(U, \left| \mu \right|,\left| \lambda \right|, V) = \bigsqcup_{i=1}^k Z(\mu \eta_i, F_i, \lambda \eta_i). \]
\end{proof}
Using the basis above, we may concretely describe bisections in $\mathcal{G}_E$ as follows.
\begin{lemma}\label{lem_bis}
Let $E$ be a graph, and let $U \subseteq \mathcal{G}_E$ be a compact bisection with $s(U) = r(U)$. Then $U$ is of the form
\[U = \bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i), \]
where $\mu_i, \lambda_i \in E^*$ with $r(\mu_i) = r(\lambda_i)$, $F_i \subseteq_{\text{finite}} r(\mu_i) E^1$ and
\[s(U) =\bigsqcup_{i=1}^k Z(\lambda_i \setminus F_i) = \bigsqcup_{i=1}^k Z(\mu_i \setminus F_i). \]
\end{lemma}
\begin{proof}
Since $U$ is a compact open subset of $\mathcal{G}_E$ we may, by the preceding two lemmas, write $U$ as a finite disjoint union of basic sets $Z(\mu, F, \lambda)$'s, say $U = \bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i)$. As $r$ and $s$ are injective on $U$ they preserve disjoint unions, so we have
\begin{align*}
&s(U) = s\left(\bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i) \right) = \bigsqcup_{i=1}^k s\left(Z(\mu_i, F_i, \lambda_i)\right) = \bigsqcup_{i=1}^k Z(\lambda_i \setminus F_i) \\
= \ &r(U) = r\left(\bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i) \right) = \bigsqcup_{i=1}^k r\left(Z(\mu_i, F_i, \lambda_i)\right) = \bigsqcup_{i=1}^k Z(\mu_i \setminus F_i).
\end{align*}
\end{proof}
In conjunction with Lemma~\ref{lem:extendBisection} we get that the elements in $\llbracket \mathcal{G}_E \rrbracket$ for an effective graph groupoid (i.e.\ $E$ satisfying Condition (L)) may be described as follows.
\begin{proposition}\label{prop:bis}
Let $E$ be a graph satisfying Condition (L). If $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$, then the full bisection $U$ can be written as
\[U = \left( \bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i) \right) \bigsqcup \left( \partial E \setminus \supp(\pi_U) \right), \]
where $\mu_i, \lambda_i \in E^*$ with $r(\mu_i) = r(\lambda_i)$, $F_i \subsetneq_{\text{finite}} r(\mu_i) E^1$ and
\[ \supp(\pi_U) = \bigsqcup_{i=1}^k Z(\lambda_i \setminus F_i) = \bigsqcup_{i=1}^k Z(\mu_i \setminus F_i). \]
Moreover, $\mu_1, \ldots, \mu_k$ are pairwise disjoint, $\lambda_1, \ldots, \lambda_k$ are pairwise disjoint, and $\mu_i \neq \lambda_i$ for each $i$.
The homeomorphism $\pi_U \colon \partial E \to \partial E$ is given by $x = \lambda_i z \longmapsto \mu_i z$ for $x \in Z(\lambda_i \setminus F_i)$ and $x \longmapsto x$ otherwise.
\end{proposition}
\begin{remark}
The elements in $\llbracket \mathcal{G}_E \rrbracket$ may alternatively be described in more dynamical terms via the orbits of the shift map. From~\cite[Proposition 3.3]{BCW} one deduces that a homeomorphism $\alpha \in \operatorname{Homeo}(\partial E)$ belongs to $\llbracket \mathcal{G}_E \rrbracket$ if and only if there are compactly supported continuous functions $m,n \colon \partial E \to \mathbb{N}_0$ such that $\osh^{m(x)}(x) = \osh^{n(x)}(\alpha(x))$ for all $x \in \partial E$. This parallels Matui's definition for locally compact Cantor minimal systems mentioned in Remark~\ref{rem:matui}, and Matsumoto's definition in~\cite{Mats}.
\end{remark}
Having completely described the topological full group of a graph groupoid, we provide an example showing that the assumption on the orbits in Lemma~\ref{ex_cover} is not necessary. On the other hand, we also give an example showing that the statement is generally false without said assumption. These moreover serve as examples of densely minimal groupoids which are not minimal.
\begin{example}\label{ex:1orbitCover}
Consider the following graph:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-2,0) {$E$};
\node[vertex] (a) at (0,0) [label=above:$v$] {};
\node[vertex] (b) at (2,0) [label=right:$w$] {};
\path (a) edge[thick, loop, min distance = 10mm, looseness = 10, out = 135, in = 225,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[left] {$e$} (a)
edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$f$} (b)
(b) edge[thick,loop, min distance = 10mm, looseness = 20, out = 225, in = 315, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[below] {$g_2$} (b)
(b) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$g_1$} (b);
\end{tikzpicture} \]
The graph $E$ satisfies Condition~(L), but is not cofinal, so $\mathcal{G}_E$ is effective, but not minimal. We claim that $\mathcal{G}_E$ is densely minimal. To see this, note that any non-empty open subset of $E^\infty$ must contain a cylinder set $Z(\mu)$ with $r(\mu) = w$, and the restriction of $\mathcal{G}_E$ to $Z(\mu)$ is minimal.
Observe that the orbit of $e^\infty$ has length $1$, i.e.\ $\operatorname{Orb}_{\mathcal{G}_E}(e^\infty) = \{e^\infty\}$. However, the topological full group $\llbracket \mathcal{G}_E \rrbracket$ still covers $\mathcal{G}_E$. For instance, the isotropy element $(e^\infty,1,e^\infty)$ belongs to the full bisection
\[ U = Z(e^2,e) \bigsqcup Z(ef,g_1g_2) \bigsqcup Z(g_1,g_1g_1) \bigsqcup Z(f,f) \bigsqcup Z(g_2,g_2). \]
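One can verify directly that $U$ is full: applying $s$ and $r$ termwise (so $s(Z(\mu,\lambda)) = Z(\lambda)$ and $r(Z(\mu,\lambda)) = Z(\mu)$) gives
\[ s(U) = Z(e) \sqcup Z(g_1g_2) \sqcup Z(g_1g_1) \sqcup Z(f) \sqcup Z(g_2) = \partial E \]
and
\[ r(U) = Z(e^2) \sqcup Z(ef) \sqcup Z(g_1) \sqcup Z(f) \sqcup Z(g_2) = \partial E, \]
using the decompositions $Z(e) = Z(e^2) \sqcup Z(ef)$ and $Z(g_1) = Z(g_1g_1) \sqcup Z(g_1g_2)$.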
Similar bisections can be found for any integer $k$ instead of $1$.
\end{example}
\begin{example}
Consider the following graph:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-2,0) {$F$};
\node[vertex] (a) at (0,0) [label=above:$v$] {};
\node[vertex] (c) at (2,0) [label=below:$w$] {};
\node[vertex] (b) at (4,0) [label=right:$u$] {};
\path (a) edge[thick, loop, min distance = 10mm, looseness = 10, out = 135, in = 225,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[left] {$e$} (a)
edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$f$} (c)
(b) edge[thick,loop, min distance = 10mm, looseness = 20, out = 225, in = 315, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[below] {$g_2$} (b)
(b) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$g_1$} (b)
(c) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$i$} (b)
edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$h$} (c);
\end{tikzpicture} \]
As in the previous example, $e^\infty$ has a singleton orbit. However, in this case $\llbracket \mathcal{G}_F \rrbracket$ does not cover $\mathcal{G}_F$, for there is no full bisection containing the element $(e^\infty,1,e^\infty)$. If $U$ is a bisection containing $(e^\infty,1,e^\infty)$, then $U$ must contain a bisection of the form $Z(e^{k+1}, e^k)$. Now since $Z(e^k) = Z(e^{k+1}) \sqcup Z(e^k f)$, it will be impossible to enlarge $U$ to a full bisection. By adding disjoint $Z(\mu, \lambda)$'s to write $U$ as in Proposition~\ref{prop:bis} one will always have one more $\mu$ ending in $w$ than $\lambda$'s. See also~\cite[Example 3.5]{BrixS} for the same phenomenon in a restricted transformation groupoid.
\end{example}
\section{Isomorphism theorems for graph groupoids}\label{sec:isogg}
In this section we will obtain specialized isomorphism theorems for graph groupoids. We will determine exactly when the topological full group of a graph groupoid belongs to $K^F$, and the conditions for this turn out to be weaker than minimality. We will also determine, in terms of the graph, exactly when it belongs to $K^{LCC}$. From this we obtain two isomorphism theorems for graph groupoids.
\subsection{The class $K^F$} We are now going to give necessary and sufficient conditions for when $(\Gamma, \partial E)$ belongs to $K^F$ --- for a graph $E$, and a subgroup $\Gamma \leq \llbracket \mathcal{G}_E \rrbracket$ containing $\mathsf{D}(\llbracket \mathcal{G}_E \rrbracket)$. Of the three conditions (F1), (F2) and (F3) in Definition~\ref{def_F}, (F1) is the ``hardest'' one. This is essentially because we need to produce elements in the topological full group with support containing a given point $x \in \partial E$, but also contained in a given neighbourhood of $x$. In the other two conditions we can get away with simply choosing a ``small enough'' support. As both conditions (F1) and (F3) fail in the presence of isolated points, we will only consider graphs that have no sinks, no semi-tails, and satisfy Condition~(L). We will see that Condition~(K) will be necessary for (F1) to hold for periodic points\footnote{That is, $x = \lambda^\infty$ for some cycle $\lambda$.}. The two conditions in Definition~\ref{def:adhoc} below are needed to ensure that (F1) holds for wandering infinite paths, and for finite boundary paths, respectively. For notational convenience we make the following ad-hoc definitions.
\begin{definition}\label{def:adhoc}
Let $E$ be a graph.
\begin{enumerate}
\item We say that $E$ satisfies \emph{Condition~(W)} if for every wandering infinite path $x \in E^\infty$, we have $\vert s(x) E^* r(x_n) \vert \geq 2$ for some $n \in \mathbb{N}$.
\item We say that $E$ satisfies \emph{Condition~($\infty$)} if for every infinite emitter $v \in E^0$, the set $\{ e \in v E^1 \mid r(e) \geq v \}$ is infinite.
\end{enumerate}
\end{definition}
The three conditions (K), (W) and ($\infty$) can be thought of as strengthenings of each of the three criteria for the boundary path space $\partial E$ being perfect (Proposition~\ref{prop:DEperfect}). The latter three criteria can informally be described as ``can exit'', whereas the former three can be described as ``can exit \emph{and} return''. More specifically, Condition~(L) means that one can exit every cycle, whereas Condition~(K) means that one can also return back to the cycle. That $E$ has no semi-tails means that every wandering infinite path has an exit, and Condition~(W) means that one can return to the same infinite path again. That $E$ has no sinks can be reformulated as saying that every singular vertex has an exit (and hence infinitely many), whereas Condition~($\infty$) says that one can also return to the same vertex (from infinitely many of these exits). Note that Condition~($\infty$) holds in particular if every infinite emitter supports infinitely many loops. Also note that if $\vert s(x) E^* r(x_n) \vert \geq 2$ for some $n \in \mathbb{N}$, then the same is true for each $m \geq n$. We now make two elementary observations needed in the proof of the next proposition.
\begin{lemma}\label{lem:cycles}
Let $E$ be a graph.
\begin{enumerate}
\item If $\mu \in E^*$ is a cycle and $E$ satisfies Condition~(K), then there are infinitely many cycles $\lambda_1, \lambda_2, \ldots$ based at $s(\mu)$ such that $\mu, \lambda_1, \lambda_2, \ldots$ are mutually disjoint.
\item If $x = x_1 x_2 \ldots \in E^\infty$ is a wandering infinite path and $E$ satisfies Condition~(W), then for each $N \in \mathbb{N}$ there is an $n \in \mathbb{N}$ and paths $\mu_1, \ldots, \mu_N$ from $s(x)$ to $r(x_n)$ such that $x_{[1,n]}, \mu_1, \ldots, \mu_N$ are mutually disjoint.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first part, let $\tau_1$ and $\tau_2$ be two distinct return paths based at $s(\mu)$. As distinct return paths are disjoint we must have that $\mu$ is disjoint from one of them, say $\tau_1$. And then $\mu, \tau_1 \mu, \tau_1^2 \mu, \tau_1^3 \mu, \ldots$ are all disjoint.
We argue inductively for the second part. Let $n_1 \in \mathbb{N}$ be such that $\vert s(x) E^* r(x_{n_1}) \vert \geq 2$, and put $v = r(x_{n_1})$. Since $x$ is wandering we can let $m_1 \geq n_1$ be the largest index such that $r(x_{m_1}) = v$, so that $x$ never returns to $v$ after the $m_1$'th edge. Let $\mu$ be a path in $s(x) E^* r(x_{m_1})$ distinct from $x_{[1,m_1]}$. If $x_{[1,m_1]}$ and $\mu$ are disjoint, then we are done with the base case. If not, then either $x_{[1,m_1]} < \mu$ or $x_{[1,m_1]} > \mu$. In the former case we have that $\mu = x_{[1,m_1]} \rho$, where $\rho$ is a cycle based at $v$. As $x$ does not return to $v$ again we must have that $x_{[m_1 +1, m_1 + \vert \rho \vert]} \neq \rho$, and then $x_{[1, m_1 + \vert \rho \vert]}$ is disjoint from $\mu_1 \coloneqq \mu x_{[m_1 +1, m_1 + \vert \rho \vert]} = x_{[1,m_1]} \rho x_{[m_1 +1, m_1 + \vert \rho \vert]}$. In the latter case, $\mu = x_{[1,k]}$ for some $k < m_1$ and $x_{[k+1, m_1]}$ is a cycle, and then the previous argument applied to $x_{[1,m_1]}$ and $\mu' = x_{[1,k]} \left(x_{[k+1, m_1]}\right)^2$ shows that the statement holds for $N=1$.
Applying the above to the tail $x_{[m_1 + 1, \infty)}$, which is again a wandering infinite path, we get an index $m_2 > m_1$ and a path $\mu_2$ from $r(x_{m_1})$ to $r(x_{m_2})$ disjoint from $x_{[m_1 + 1, m_2]}$. By concatenating $x_{[1,m_1]}$ and $\mu_1$ with $x_{[m_1 +1,m_2]}$ and $\mu_2$ we obtain three paths from $s(x)$ to $r(x_{m_2})$ that are mutually disjoint, as well as disjoint from $x_{[1,m_2]}$. By continuing in this manner one sees that the result is true for all $N \in \mathbb{N}$.
\end{proof}
\begin{proposition}\label{prop:KFgraph}
Let $E$ be a graph with no sinks and let $\Gamma \leq \llbracket \mathcal{G}_E \rrbracket$ be a subgroup containing $\mathsf{D}(\llbracket \mathcal{G}_E \rrbracket)$. Then $(\Gamma, \partial E)$ belongs to $K^F$ if and only if $E$ satisfies Conditions~(K), (W) and ($\infty$).
\end{proposition}
\begin{proof}
This proof is inspired by Matui's proof of~\cite[Proposition 3.6]{Mat}. We employ similar tricks in this more concrete, yet non-minimal context. We will first show that (F2) and (F3) hold when $E$ satisfies Conditions~(K) and (W). Then we will show, in turn, that all three conditions are necessary and sufficient for (F1) to hold at certain boundary paths.
Suppose $E$ satisfies Conditions~(K) and (W) (in addition to having no sinks). We verify (F3) first. Let $A$ be any non-empty clopen subset of $\partial E$. There is then a path $\eta$ such that $Z(\eta) \subseteq A$. Now there are two possibilities, either $r(\eta)$ connects to a cycle, or $r(\eta) E^\infty$ consists only of wandering paths. In the first case we may assume, by extending $\eta$, that $r(\eta)$ supports a cycle. By Lemma~\ref{lem:cycles} we can find three disjoint cycles $\lambda_1, \lambda_2, \lambda_3$ based at $r(\eta)$. Define $V = Z(\eta \lambda_1, \eta \lambda_2)$, $W = Z(\eta \lambda_2, \eta \lambda_3)$ and $\alpha = [\pi_{\hat{V}}, \pi_{\hat{W}}]$ (as in Lemma~\ref{lemma:bisectionInvolution}). Then $\alpha \in \Gamma \setminus \{1\}$ has order $3$ and $\supp(\alpha) \subseteq Z(\eta) \subseteq A$. In the case that $r(\eta) E^\infty$ consists only of wandering paths we may find, again by Lemma~\ref{lem:cycles}, three disjoint paths $\lambda_1, \lambda_2, \lambda_3$ starting at $r(\eta)$, and such that $r(\lambda_1) = r(\lambda_2) = r(\lambda_3)$. Defining $\alpha$ as above shows that (F3) holds in this case as well.
Next we verify (F2). To that end, let $\alpha \in \Gamma \setminus \{1\}$ with $\alpha^2 = 1$ and a clopen set $\emptyset \subsetneq A \subseteq \supp(\alpha)$ be given. We have $\alpha = \pi_U$ with
\[U = \left( \bigsqcup_{i=1}^k Z(\mu_i, F_i, \lambda_i) \right) \bigsqcup \left( \partial E \setminus \supp(\pi_U) \right) \]
as in Proposition~\ref{prop:bis}. Arguing as above, we can find a path $\eta$ such that $Z(\eta) \subseteq A \cap Z(\lambda_j \setminus F_j)$ for some $1 \leq j \leq k$, and two disjoint paths $\tau_1, \tau_2$ with $s(\tau_1) = s(\tau_2) = r(\eta)$ and $r(\tau_1) = r(\tau_2)$. As $\lambda_j \leq \eta$ we can write $\eta = \lambda_j \rho$ for some path $\rho$ whose first edge does not belong to $F_{j}$. Define the bisections
\[V = Z(\lambda_{j} \rho \tau_1, \lambda_{j} \rho \tau_2) \bigsqcup Z(\mu_{j} \rho \tau_1, \mu_{j} \rho \tau_2) \] and
\[W = Z(\mu_{j} \rho \tau_1, \lambda_{j} \rho \tau_1). \]
Put $\beta = [\pi_{\hat{V}}, \pi_{\hat{W}}]$. As $\alpha$ is an involution we have that $\alpha(\lambda_{j} z) = \mu_{j} z$ for $\lambda_{j} z \in Z(\lambda_{j} \setminus F_{j})$ and vice versa. Now observe that $\beta \in \Gamma$,
\begin{align*}
\supp(\beta) &= Z(\lambda_j \rho \tau_1) \sqcup Z(\lambda_j \rho \tau_2) \sqcup Z(\mu_j \rho \tau_1) \sqcup Z(\mu_j \rho \tau_2) \\
& \subseteq Z(\eta) \cup \alpha(Z(\eta)) \subseteq A \cup \alpha(A),
\end{align*}
and that $\alpha$ and $\beta$ agree on $\supp(\beta)$ (as they both swap the initial paths $\lambda_j$ and $\mu_j$).
Assume now that $E$ merely has no sinks, no semi-tails and satisfies Condition~(L). We will show that (F1) holds if and only if $E$ satisfies Conditions~(K), (W) and ($\infty$). Let $x \in \partial E$ and $A$ a clopen neighbourhood of $x$ be given. We further divide this part into three cases, each one yielding the necessity of one of the three conditions.
\textbf{Condition~(K)}: Assume $E$ satisfies Condition~(K), and suppose $x = x_1 x_2 \ldots \in E^\infty$ is an infinite non-wandering path. For $m \in \mathbb{N}$ large enough, we have that $Z(x_{[1,m]}) \subseteq A$. As $x$ contains infinitely many cycles we can, by possibly choosing $m$ larger, assume that $x_{[m+1,n]}$ is a return path at $r(x_m)$ for some $n > m$. Using Lemma~\ref{lem:cycles} we can find three mutually disjoint cycles $\lambda_1, \lambda_2, \lambda_3$ based at $r(x_m)$ which are also disjoint from $x_{[m+1,n]}$. Let $\mu_i = x_{[1,m]} \lambda_i$ for $i = 1,2,3$ and let $\mu_4 = x_{[1,n]}$. Define
\[V = Z(\mu_1, \mu_2) \bigsqcup Z(\mu_3, \mu_4) \] and
\[W = Z(\mu_1, \mu_3). \]
Then $\alpha = [\pi_{\hat{V}}, \pi_{\hat{W}}] \in \Gamma$ satisfies $\supp(\alpha) = \bigsqcup_{i=1}^4 Z(\mu_i) \subseteq Z(x_{[1,m]}) \subseteq A$, $\alpha^2 = 1$ and $x \in Z(\mu_4) \subseteq \supp(\alpha)$ as desired.
To see that Condition~(K) is necessary, suppose that $E$ does not satisfy it. Then there is a vertex $v \in E^0$ supporting a unique return path, say $\tau$. We may assume that $\tau$ has an exit $f$ with $s(f) = v$. Consider $x = \tau^\infty$ and its neighbourhood $A = Z(\tau)$. We claim that (F1) fails for this pair. To see this, suppose $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$ satisfies $\tau^\infty \in \supp(\pi_U) \subseteq Z(\tau)$. By Proposition~\ref{prop:bis} we can find $Z(\mu, \lambda) \subseteq U$ with $r(\mu) = r(\lambda)$, $\mu \neq \lambda$ and $\tau^\infty \in Z(\lambda)$, which means that $\lambda \leq \tau^k$ for some $k \geq 1$. By possibly extending $\mu$ and $\lambda$ we may assume that $\lambda = \tau^k$. We also have $Z(\mu) \subseteq Z(\tau)$, i.e.\ $\tau \leq \mu$, and $r(\mu) = r(\lambda) = v$. But since $\tau$ is the only return path based at $v$ we must have $\mu = \tau^l$ for some $l \neq k$ as $\mu \neq \lambda$. Let $z \in r(f) \partial E$. Then $(\pi_U)^2\left(\tau^{2k}fz\right) = \tau^{2l}fz \neq \tau^{2k}fz$, hence $\pi_U$ is not an involution, and therefore $(\Gamma, \partial E)$ does not satisfy (F1).
\textbf{Condition~(W)}: Assume $E$ satisfies Condition~(W), and suppose $x = x_1 x_2 \ldots \in E^\infty$ is an infinite wandering path. Choose $m$ large enough so that $Z(x_{[1,m]}) \subseteq A$. Applying Lemma~\ref{lem:cycles} to the wandering tail $x_{[m+1, \infty)}$, there is an $n > m$ and three paths $\lambda_1, \lambda_2, \lambda_3$ from $r(x_m)$ to $r(x_n)$ such that $\lambda_1, \lambda_2, \lambda_3, x_{[m+1,n]}$ are mutually disjoint. Setting $\mu_i = x_{[1,m]} \lambda_i$ for $i = 1,2,3$ and $\mu_4 = x_{[1,n]}$, and defining $\alpha$ in the same way as in the case of Condition~(K) above gives the desired element in $\Gamma$.
To see that Condition~(W) is necessary, suppose that there is an infinite wandering path $x = x_1 x_2 \ldots \in E^\infty$ such that $\vert s(x) E^* r(x_n) \vert = 1$ for all $n \in \mathbb{N}$. We claim that (F1) fails for $A = Z(x_1)$. Indeed, suppose $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$ satisfies $x \in \supp(\pi_U) \subseteq Z(x_1)$. By Proposition~\ref{prop:bis} we can find $Z(\mu, \lambda) \subseteq U$ with $r(\mu) = r(\lambda)$, $\mu \neq \lambda$ and $x \in Z(\lambda)$, which implies that $\lambda = x_{[1,m]}$ for some $m \geq 1$. But as $Z(\mu) \subseteq Z(x_1)$ we have that $s(\mu) = s(x)$ and $r(\mu) = r(x_m)$. It now follows that $\mu = \lambda$ since $\vert s(x) E^* r(x_m) \vert = 1$. This contradiction shows that there is not even an element $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$ such that $x \in \supp(\pi_U) \subseteq Z(x_1)$.
\textbf{Condition~($\infty$)}: Assume $E$ satisfies Condition~($\infty$), and suppose $x = x_1 \ldots x_m \in E^*$ is a finite boundary path. Then for some $F \subseteq_{\text{finite}} r(x)E^1$ we have $Z(x \setminus F) \subseteq A$. By Condition~($\infty$) we can find three distinct edges $e_1, e_2, e_3 \in r(x)E^1 \setminus F$, and three (necessarily disjoint) cycles $\tau_1, \tau_2, \tau_3$ based at $r(x)$ such that $e_i \leq \tau_i$ for $i = 1,2,3$. Let $F' = F \sqcup \{e_1, e_2, e_3\}$. Now define
\[V = Z(x \tau_1,F', x) \bigsqcup Z(x \tau_2,F', x \tau_3) \] and
\[W = Z(x \tau_1,F', x \tau_2). \]
Then $\alpha = [\pi_{\hat{V}}, \pi_{\hat{W}}] \in \Gamma$ satisfies $\supp(\alpha) = Z(x \setminus F') \sqcup \bigsqcup_{i=1}^3 Z(x \tau_i \setminus F') \subseteq Z(x \setminus F) \subseteq A$, $\alpha^2 = 1$ and $x \in Z(x \setminus F') \subseteq \supp(\alpha)$.
Finally, if $E$ does not satisfy Condition~($\infty$), then there is an infinite emitter $v \in E^0$ such that the set $F = \{ e \in v E^1 \mid r(e) \geq v \}$ is finite. And then (F1) fails for $x = v$ and $A = Z(v \setminus F)$ as there is no element $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$ whose support is contained in $Z(v \setminus F)$ and contains $v$.
The argument for this is essentially the same as in the necessity of Condition~(W) above.
\end{proof}
\begin{remark}
From the proof of Proposition~\ref{prop:KFgraph} we see that for a graph groupoid $\mathcal{G}_E$, the topological full group $\llbracket \mathcal{G}_E \rrbracket$ together with the boundary path space $\partial E$ belongs to the class $K^F$ if and only if its commutator subgroup $\mathsf{D}(\llbracket \mathcal{G}_E \rrbracket)$ does. This is not a general phenomenon, as one cannot in general deduce that one of them belongs to $K^F$ if the other does. It is clear that (F1) and (F3) in Definition~\ref{def_F} pass to supergroups, but (F2) need not do so. It is even more peculiar that the properties (F1), (F2), (F3) pass down from $\llbracket \mathcal{G}_E \rrbracket$ to its commutator subgroup. This phenomenon seems to be an artifact of the combinatorial nature of the topological full group of a graph groupoid, and so it might also hold for other concrete classes of groupoids.
\end{remark}
\subsection{The class $K^{LCC}$} Our next objective is to perform a similar analysis of when $(\llbracket \mathcal{G}_E \rrbracket, \partial E)$ for a graph $E$ belongs to $K^{LCC}$. In this case the ``mixing conditions'' will be weaker than for $K^F$ (c.f.\ Proposition~\ref{prop:KFgraph}), but we are only able to prove membership for the topological full group itself --- no proper subgroups. As in the case of $K^F$ we need to stipulate that the boundary path space $\partial E$ has no isolated points (c.f.\ condition (K1) in Definition~\ref{def_KB}), but also that the graphs are countable (again for condition (K1)). By the results in Section~\ref{sec:reggrpd} we only have to determine when every clopen subset of $\partial E$ meets some orbit twice, and when all orbits have length at least $3$. We shall soon see that the former property is characterized by excluding certain ``tree-like'' components in the graph $E$, which we make precise in the following definition.
\begin{definition}\label{condT}
We say that a graph $E$ satisfies \emph{Condition~(T)} if for every vertex $v \in E^0$, there exists a vertex $w \in E^0$ such that $\vert v E^* w \vert \geq 2$.
\end{definition}
Note first of all that Condition~(T) implies that there are no sinks and no semi-tails. It does not, however, imply Condition~(L), as one can traverse a cycle twice to get two different paths. Observe that Conditions~(K) and (W) imply Conditions~(L) and (T). Condition~(T) is a fairly weak condition: it is in fact satisfied by all graphs having finitely many vertices and no sinks, and more generally by any graph in which every vertex connects to a cycle. The archetypical examples of graphs not satisfying Condition~(T) are trees, or more generally graphs containing such components.
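For instance, the following finite tree fails Condition~(T) at every vertex, since there is at most one path between any pair of its vertices:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node[vertex] (a) at (0,0) [label=left:$v$] {};
\node[vertex] (b) at (2,1) [label=right:$w$] {};
\node[vertex] (c) at (2,-1) [label=right:$u$] {};
\path (a) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] (b)
(a) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] (c);
\end{tikzpicture} \]
Here $w$ and $u$ are moreover sinks, in line with the fact that Condition~(T) implies the absence of sinks.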
As for when $\mathcal{G}_E$ can have such ``short orbits'' one finds, by merely exhausting all possibilities, that this happens exactly if one or more of the following kinds of vertices are present in the graph $E$.
\begin{definition}\label{def:degenerate}
Let $E$ be a graph. We say that a vertex $v \in E^0$ is \emph{degenerate} if it is one of the following types:
\begin{enumerate}
\item \textbf{``1-loop-source''}: $E^1 v = \{e\}$ where $e$ is a loop.
\item \textbf{``1 source to 1-loop-source''}: $E^1 v = \{e,f \}$ where $e$ is a loop and $s(f)$ is a source.
\item \textbf{``2-loop-source''}: There is another vertex $w \in E^0$ distinct from $v$ such that $E^1 v = \{e\} = w E^1 v$ and $E^1 w = \{f\} = v E^1 w$.
\item \textbf{``Infinite source''}: $v E^1$ is infinite and $E^1 v$ is empty.
\item \textbf{``1 source to singular''}: $v$ is singular and $E^1 v = \{f\}$ where $s(f)$ is a source.
\item \textbf{``Stranded''}: $vE^1$ and $E^1 v$ are both empty.
\end{enumerate}
\end{definition}
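For instance, a vertex $v$ of type \emph{(1)} yields an orbit of length $1$: the only edge ending in $v$ is the loop $e$, so any boundary path tail-equivalent to $e^\infty$ must equal $e^\infty$, whence $\operatorname{Orb}_{\mathcal{G}_E}(e^\infty) = \{e^\infty\}$. The vertex $v$ in Example~\ref{ex:1orbitCover} is of this type.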
\begin{proposition}\label{prop:KLCCgraph}
Let $E$ be a graph.
\begin{enumerate}
\item Every non-empty clopen subset of $\partial E$ meets some $\mathcal{G}_E$-orbit twice if and only if $E$ satisfies Conditions~(L) and (T).
\item $\left| \operatorname{Orb}_{\mathcal{G}_E}(x) \right| \geq 3$ for all $x \in \partial E$ if and only if $E$ has no degenerate vertices.
\end{enumerate}
\end{proposition}
\begin{proof}
We prove part \emph{(1)} first. Suppose $E$ satisfies Conditions~(L) and (T). Let $A$ be a non-empty clopen subset of $\partial E$. Then there is a path $\mu \in E^*$ such that $Z(\mu) \subseteq A$. Suppose first that $r(\mu)$ connects to a cycle. Let $\lambda$ be such a cycle and let $\rho$ be a path from $r(\mu)$ to $s(\lambda)$. We may assume that $\lambda$ has an exit $f$ with $s(f) = s(\lambda)$. Let $x \in r(f)E^\infty$. Then $\mu \rho f x$ and $\mu \rho \lambda f x$ are two distinct tail-equivalent boundary paths in $A$. If, on the other hand, $r(\mu)$ does not connect to a cycle, then $r(\mu) E^\infty$ consists only of wandering paths that visit each vertex at most once. Let $w \in E^0$ be a vertex such that there are two distinct paths $\rho_1, \rho_2$ from $r(\mu)$ to $w$. Again letting $x \in wE^\infty$ be arbitrary we have that $\mu \rho_1 x$ and $\mu \rho_2 x$ are two distinct tail-equivalent boundary paths in $A$.
To see that Conditions~(L) and (T) are both necessary, note first that if $E$ does not satisfy Condition~(L), then $\partial E$ has an isolated point, and an open singleton surely cannot meet any orbit twice. Assume instead that $E$ fails to satisfy Condition~(T), and let $v \in E^0$ be a vertex such that there is either no path or a unique path from $v$ to any vertex in $E$. We claim that the cylinder set $Z(v)$ meets each orbit at most once. We first consider a finite boundary path $\mu$ beginning in $v$ (if such a path exists). Then $r(\mu)$ is a singular vertex and \[ \operatorname{Orb}_{\mathcal{G}_E}(\mu) \cap Z(v) = \{ \lambda \in E^* \mid s(\lambda) = v, \ r(\lambda) = r(\mu) \} = v E^* r(\mu) = \{\mu\}, \] as desired. Similarly, if $x \in v E^\infty$ and $y \in \operatorname{Orb}_{\mathcal{G}_E}(x) \cap Z(v)$, then there are $k,l \in \mathbb{N}$ such that $x_{[k, \infty)} = y_{[l, \infty)}$. In particular $x_{[1, k-1]}$ and $y_{[1, l-1]}$ are finite paths from $v$ to $s(x_k) = s(y_l)$, hence these are equal and it follows then that $x = y$. Thus $\operatorname{Orb}_{\mathcal{G}_E}(x) \cap Z(v) = \{x\}$. This proves the first part of the proposition.
For the second part, simply note that an orbit of length $1$ can only occur if there are degenerate vertices of type (1), (4), or (6) as in Definition~\ref{def:degenerate} (the corresponding orbits of length $1$ being $\{e^\infty\}$, $\{v\}$, $\{v\}$, respectively). And that an orbit of length $2$ can only occur if there are degenerate vertices of type (2), (3), or (5) (the corresponding orbits of length $2$ being $\{e^\infty, f e^\infty \}$, $\{ (ef)^\infty, (fe)^\infty\}$, $\{v, f\}$, respectively).
\end{proof}
\begin{remark}\label{rem:notDM}
By an argument as in Example~\ref{ex:1orbitCover} one deduces that if a graph $E$ satisfies Condition~(I), then the graph groupoid $\mathcal{G}_E$ is densely minimal. However, statement \emph{(1)} in Proposition~\ref{prop:KLCCgraph} is strictly weaker than $\mathcal{G}_E$ being densely minimal. It is easy to cook up examples of infinite graphs satisfying Conditions~(L) and (T), but whose graph groupoids are not densely minimal. For example:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-4,0) {$E$};
\node[vertex] (a) at (0,0) {};
\node[vertex] (c) at (2,0) {};
\node[vertex] (b) at (4,0) {};
\node (e) at (6,0) {$\cdots$};
\node (f) at (-2,0) {$\cdots$};
\path (f) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] (a)
(a) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] (a)
edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] (c)
(b) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] (b) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] (e)
(c) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] (b)
edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] (c);
\end{tikzpicture} \]
\end{remark}
\subsection{Isomorphism theorems} Recall that all orbits having length at least $3$ is sufficient for the commutator subgroup of the topological full group to cover the groupoid (Lemma~\ref{ex_cover}). This in turn means that the groupoid can be recovered as the groupoid of germs of any subgroup between the topological full group and its commutator. We will now combine Propositions~\ref{prop:KFgraph} and \ref{prop:KLCCgraph} with the results from Section~\ref{sec:reggrpd} to obtain rigidity results for the topological full group of graph groupoids.
We begin by observing that the conditions on the graph for membership in $K^F$ actually imply that all orbits are infinite.
\begin{lemma}\label{KForbits}
Let $E$ be a graph with no sinks and suppose $E$ satisfies Conditions~(K) and ($\infty$). Then $\operatorname{Orb}_{\mathcal{G}_E}(x)$ is infinite for each $x \in \partial E$. In particular, $E$ has no degenerate vertices.
\end{lemma}
\begin{proof}
We first consider the $\mathcal{G}_E$-orbits of finite boundary paths. Suppose $v \in E^0$ is an infinite emitter. Condition~($\infty$) implies that there are infinitely many distinct return paths at $v$, hence $\operatorname{Orb}_{\mathcal{G}_E}(\mu)$ is infinite for each $\mu \in \partial E \cap E^*$.
Next, let $x \in E^\infty$ be an infinite path. If $x$ is eventually periodic, then $x = \mu \lambda^\infty$ for some finite path $\mu$ and some cycle $\lambda$. Lemma~\ref{lem:cycles} gives a sequence of mutually disjoint cycles $\tau_1, \tau_2, \ldots$ based at $s(\lambda)$. And then $\{ \tau_1 \lambda^\infty, \tau_2 \lambda^\infty, \ldots \}$ is an infinite subset of $\operatorname{Orb}_{\mathcal{G}_E}(x)$. If $x$ is not eventually periodic, then $\{ x, x_{[2, \infty]}, x_{[3, \infty]}, \ldots \}$ is an infinite subset of $\operatorname{Orb}_{\mathcal{G}_E}(x)$.
\end{proof}
In terms of the class $K^F$ we obtain the following rigidity result, which relaxes the assumptions in Theorem~\ref{thm:KFgroupoid} for graph groupoids.
\begin{theorem}\label{thm:KFrigid}
Let $E$ and $F$ be graphs with no sinks, and suppose they both satisfy Conditions~(K), (W) and ($\infty$). Suppose $\Gamma, \Lambda$ are subgroups with $\mathsf{D}(\llbracket \mathcal{G}_{E} \rrbracket) \leq \Gamma \leq \llbracket \mathcal{G}_{E} \rrbracket$ and $\mathsf{D}(\llbracket \mathcal{G}_{F} \rrbracket) \leq \Lambda \leq \llbracket \mathcal{G}_{F} \rrbracket$. If $\Gamma \cong \Lambda$ as abstract groups, then $\mathcal{G}_{E} \cong \mathcal{G}_{F}$ as topological groupoids. In particular, the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_{E} \cong \mathcal{G}_{F}$ as topological groupoids.
\item $\llbracket \mathcal{G}_{E} \rrbracket \cong \llbracket \mathcal{G}_{F} \rrbracket$ as abstract groups.
\item $\mathsf{D}(\llbracket \mathcal{G}_{E} \rrbracket) \cong \mathsf{D}(\llbracket \mathcal{G}_{F} \rrbracket)$ as abstract groups.
\end{enumerate}
\end{theorem}
\begin{proof}
Combine Proposition~\ref{prop:KFgraph}, Theorem~\ref{classF}, Proposition~\ref{faithGerm}, Lemma~\ref{KForbits}, Lemma~\ref{ex_cover} and Proposition~\ref{prop_germs}.
\end{proof}
The preceding result covers---in particular---all finite graphs that have no sinks and satisfy Condition~(K).
As for rigidity in terms of $K^{LCC}$, we combine Proposition~\ref{prop:KLCCgraph} with Theorem~\ref{thm:KLCCgroupoid} to get the following result.
\begin{theorem}\label{thm:KLCCrigid}
Let $E$ and $F$ be countable graphs satisfying Conditions~(L) and (T), and having no degenerate vertices. Then the following are equivalent:
\begin{enumerate}
\item $\mathcal{G}_{E} \cong \mathcal{G}_{F}$ as topological groupoids.
\item $\llbracket \mathcal{G}_{E} \rrbracket \cong \llbracket \mathcal{G}_{F} \rrbracket$ as abstract groups.
\end{enumerate}
\end{theorem}
This result covers---in particular---all finite graphs that have no degenerate vertices and which satisfy Condition~(L).
\begin{remark}
In~\cite{Mats2}, Matsumoto established a version of Theorem~\ref{thm:KLCCrigid} for finite graphs which are strongly connected (and satisfy Condition~(L), or equivalently (K)). At about the same time, Matui announced the results of~\cite{Mat}, and the spatial realization theorem therein applies to the enlarged class of graphs which have finitely many vertices, no sinks, are cofinal, satisfy Condition~(K), and for which every vertex can reach every infinite emitter.
\end{remark}
Combining Theorem~\ref{thm:KLCCrigid} with the main result of~\cite{BCW} and~\cite[Corollary 4.2]{CR} we obtain Corollary~\ref{cor:BCW} below, which ties in many of the mathematical structures associated to (directed) graphs. For background on graph $C^*$-algebras, see~\cite{Rae}\footnote{Beware that the convention for paths in graphs in Raeburn's book is different from the one used in this paper.}, and for Leavitt path algebras, see~\cite{AAM}.
\begin{corollary}\label{cor:BCW}
Let $E$ and $F$ be countable graphs satisfying Conditions~(L) and (T), and having no degenerate vertices. Let $R$ be an integral domain. Then the following are equivalent:
\begin{enumerate}
\item The graph groupoids $\mathcal{G}_{E}$ and $\mathcal{G}_{F}$ are isomorphic as topological groupoids.
\item There is an isomorphism of the graph $C^*$-algebras $C^*(E)$ and $C^*(F)$ which maps the diagonal $\mathcal{D}(E)$ onto $\mathcal{D}(F)$.
\item There is an isomorphism of the Leavitt path algebras $L_R(E)$ and $L_R(F)$ which maps the diagonal $D_R(E)$ onto $D_R(F)$.
\item The pseudogroups $\mathcal{P}_E$ and $\mathcal{P}_F$ are spatially isomorphic.
\item The graphs $E$ and $F$ are orbit equivalent\footnote{Coincides (in this case) with Li's notion of \emph{continuous orbit equivalence} for the partial dynamical systems associated to the graphs, c.f.\ \cite{Li}.}.
\item The topological full groups $\llbracket \mathcal{G}_{E} \rrbracket$ and $\llbracket \mathcal{G}_{F} \rrbracket$ are isomorphic as abstract groups.
\end{enumerate}
\end{corollary}
\begin{remark}
We remark that in Corollary~\ref{cor:BCW} statements (1), (2) and (3) are always equivalent, statements (4) and (5) are always equivalent, and they are implied by (1), (2) or (3). Furthermore, if the graphs satisfy Condition~(L), then statements (1)--(5) are equivalent. Additionally, the equivalence of (1) and (2) has recently been shown in greater generality~\cite{CRST}. The same is true for (1) and (3) by recent work of Steinberg~\cite{Stein}, even with weaker hypotheses on the ring $R$.
\end{remark}
\section{Embedding theorems}\label{sec:embed}
In this final section we will show that several classes of groupoids embed into a certain fixed graph groupoid---namely the groupoid whose graph consists of a single vertex and two edges.
\subsection{Embedding graph groupoids}
Let $E_2$ denote the graph with a single vertex $v$, and two edges $a$ and $b$:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-2.5,0) {$E_2 $};
\node[vertex] (a) at (0,0) [label=above:$v$] {};
\path (a) edge[thick, loop, min distance = 20mm, looseness = 10, out = 135, in = 225,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[left] {$a$} (a)
edge[thick, loop, min distance = 20mm, looseness = 10, out = 45, in = 315,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[right] {$b$} (a);
\end{tikzpicture} \]
In \cite{BS}, Brownlowe and Sørensen proved an algebraic analog of Kirchberg's Embedding Theorem (see \cite{KP}) for Leavitt path algebras. They showed that for any countable graph $E$, and for any unital commutative ring $R$, the Leavitt path algebra $L_R(E)$ embeds (unitally whenever it makes sense) into $L_R(E_2)$. By inspecting their proof one finds that this embedding is also \emph{diagonal-preserving}, i.e.\ that the canonical diagonal $D_R(E)$ is mapped into $D_R(E_2)$. A special case of Kirchberg's Embedding Theorem says that any graph $C^*$-algebra, $C^*(E)$, embeds into the Cuntz algebra $\mathcal{O}_2$, which is canonically isomorphic to $C^*(E_2)$ (and $C^*_r\left(\mathcal{G}_{E_2}\right)$). We denote the canonical diagonal subalgebra in $\mathcal{O}_2$ by $\mathcal{D}_2$. A priori, Kirchberg's embedding is analytic in nature, but Brownlowe and Sørensen's results show that in the case of graph $C^*$-algebras, algebraic embeddings exist. Both graph $C^*$-algebras and Leavitt path algebras have the same underlying groupoid models (being canonically isomorphic to the groupoid $C^*$-algebra, and the Steinberg $R$-algebra ($A_R(\mathcal{G}_E)$) of $\mathcal{G}_E$, respectively). And generally, isomorphisms of the graph groupoids correspond to diagonal-preserving isomorphisms of the algebras. Thus one could wonder whether there is an embedding of the underlying graph groupoids. We will show that this is indeed the case, modulo topological obstructions. Our proof is inspired by~{\cite[Proposition 5.1]{BS}} (and the examples following it).
\begin{lemma}\label{lem:emb}
Let $E$ be a countable graph with no sinks, no semi-tails, and suppose $E$ satisfies Condition~(L). Then there exists an injective local homeomorphism $\phi \colon \partial E \to E^\infty_2$ such that \[\phi \circ \llbracket \mathcal{G}_E \rrbracket \subseteq \llbracket \mathcal{G}_{E_2} \rrbracket \circ \phi.\]
If $E^0$ is finite, then $\phi$ is surjective (hence a homeomorphism), and if $E^0$ is infinite, then $\phi(\partial E) = E_2^\infty \setminus \{a^\infty\}$. In particular, there exists an injective étale homomorphism $\Phi \colon \operatorname{Germ}(\llbracket \mathcal{G}_E \rrbracket, \partial E) \to \mathcal{G}_{E_2}$.
\end{lemma}
\begin{proof}
For transparency we first treat the case when $E^0$ is finite. The infinite case requires only a minor tweak. Let $n = | E^0 |$. Label the vertices and edges of $E$ (arbitrarily) as \[E^0 = \{w_1, w_2, \ldots, w_n\} \quad \text{and} \quad w_i E^1 = \{ e_{i,j} \mid 1 \leq j \leq k(i) \} \text{ for each } 1 \leq i \leq n, \]
where $k(i) = | s^{-1}(w_i) |$. When $w_i$ is an infinite emitter, $k(i) = \infty$, and we let $j$ range over $\mathbb{N}$. For each pair $j,i$ with $j \in \mathbb{N}, \ i \in \mathbb{N} \cup \{\infty\}$ and $j \leq i$ we define a finite path $\alpha_{j,i} \in E_2^*$ as follows: $\alpha_{1,1} \coloneqq v$ and for $i \geq 2$
\[ \alpha_{j,i} \coloneqq \left\lbrace\begin{matrix}
b & \text{ if } j=1, \\
a^{j-1}b & \text{\qquad if } 1 < j < i, \\
a^{j-1} & \text{ if } j=i.
\end{matrix} \right. \]
Observe that for each fixed $i \in \mathbb{N}$, the set $\{ Z(\alpha_{j,i}) \mid 1 \leq j \leq i \}$ forms a partition of $E_2^\infty$. For $i = \infty$, $\{ Z(\alpha_{j,i} ) \mid 1 \leq j < \infty \}$ forms a partition of $E_2^\infty \setminus \{a^\infty\}$.
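For instance, for $i = 3$ the relevant words are $\alpha_{1,3} = b$, $\alpha_{2,3} = ab$ and $\alpha_{3,3} = a^2$, and every infinite path in $E_2$ begins with exactly one of them:
\[ E_2^\infty = Z(b) \sqcup Z(ab) \sqcup Z(a^2). \]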
We now define the map $\phi \colon \partial E \to E_2^\infty$ as follows. For an infinite path $x=e_{i_1,j_1}e_{i_2,j_2} \ldots \in E^\infty$ we set
\[\phi(x) = \alpha_{i_1,n} \alpha_{j_1,k(i_1)} \alpha_{j_2,k(i_2)} \ldots \ . \]
If $w_i \in E^0$ is an infinite emitter, then
\[ \phi(w_i)= \alpha_{i,n} a^\infty. \]
For notational convenience, we define
\[ \phi^*(\mu) \coloneqq \alpha_{i_1,n} \alpha_{j_1,k(i_1)} \alpha_{j_2,k(i_2)} \ldots \alpha_{j_m,k(i_m)} \in E_2^* \]
for each finite path $\mu = e_{i_1,j_1} e_{i_2,j_2} \ldots e_{i_m,j_m} \in E^*$. Finally, if $\mu$ is a finite boundary path, then
\[ \phi(\mu) = \phi^*(\mu) a^\infty. \]
Recall that $v \alpha = \alpha = \alpha v$ for each $\alpha \in E_2^*$. A priori, $\phi(x)$ could be a finite path in $E_2$. We argue that this is not the case. For a finite path $\mu \in E^*$, $\phi(\mu)$ is clearly infinite. For an infinite path $x = e_{i_1,j_1}e_{i_2,j_2} \ldots$, $\phi(x)$ is finite if and only if for some $M \in \mathbb{N}$, $\alpha_{j_m,k(i_m)} = v$ for all $m > M$, that is $k(i_m) = 1$ and $j_m = 1$. This means that $e_{i_{M+1},j_{M+1}}e_{i_{M+2},j_{M+2}} \ldots$ is either a semi-tail, or an eventually periodic point whose cycle has no exit. But there are by assumption no such paths in $E$. So we conclude that $\phi$ is well-defined.
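This is where the standing assumptions on $E$ enter. If, say, $E$ consisted of a single vertex with one loop $e$ (a cycle without exit, excluded by Condition~(L)), then $n = 1$ and $k(1) = 1$, so every block of $\phi(e^\infty)$ would equal $\alpha_{1,1} = v$, and $\phi(e^\infty)$ would collapse to the finite path $v$.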
Using the fact that $\{ Z(\alpha_{j,i})\}$ for fixed $i$ forms a partition of $E_2^\infty$, or $E_2^\infty \setminus \{a^\infty\}$, one easily sees that $\phi$ is a bijection. As for continuity, we define $F_{i,l} \coloneqq \{e_{i,1}, e_{i,2}, \ldots, e_{i,l}\} \subseteq w_iE^1$ for $1 \leq l < k(i) +1$. Let $\mu = e_{i_1,j_1} e_{i_2,j_2} \ldots e_{i_m,j_m} \in E^*$ and suppose $r(\mu) = w_i$. Observe that
\[\phi(Z(\mu)) = Z(\phi^*(\mu))\] and
\[\phi(Z(\mu \setminus F_{i,l})) = Z(\phi^*(\mu) a^l).\]
For arbitrary $F = \{e_{i,j_1}, \ldots, e_{i,j_m}\}$ we have
\begin{equation}\label{eq:1}
Z(\mu \setminus F) = Z(\mu \setminus F_{i,j_m +1}) \sqcup \bigsqcup_{j \in J_F} Z(\mu e_{i,j}),
\end{equation}
where $J_F$ is the set of $j$'s with $1 \leq j \leq j_m$ and $e_{i,j} \notin F$. Thus $\phi$ is an open map. Conversely, we have that for $\beta \in E_2^*$
\[ \phi^{-1}(Z(\beta)) = \left( \bigcup_{\beta \leq \phi^*(\mu)} Z(\mu) \right) \bigcup \left( \bigcup_{l=1}^\infty \bigcup_{\beta \leq \phi^*(\lambda) a^l} Z(\lambda \setminus F_{r(\lambda),l}) \right), \]
(and these unions may actually be taken to be finite). Hence $\phi$ is a homeomorphism.
To see that $\phi \circ \llbracket \mathcal{G}_E \rrbracket \circ \phi^{-1}\subseteq \llbracket \mathcal{G}_{E_2} \rrbracket$, let $\mu, \lambda \in E^*$ with $r(\mu) = r(\lambda) = w_i$ be given, and let $1 \leq l < k(i) +1$. Observe that
\[\phi \circ \pi_{Z(\mu, \lambda)} \circ \phi^{-1} = \pi_{Z(\phi^*(\mu), \phi^*(\lambda))} \colon Z(\phi^*(\lambda)) \to Z(\phi^*(\mu)), \]
and
\[\phi \circ \pi_{Z(\mu, F_{i,l} ,\lambda)} \circ \phi^{-1} = \pi_{Z(\phi^*(\mu)a^l, \phi^*(\lambda)a^l)} \colon Z(\phi^*(\lambda) a^l) \to Z(\phi^*(\mu)a^l), \]
as partial homeomorphisms. Utilizing a similar decomposition as in Equation~(\ref{eq:1}) for $Z(\mu, F, \lambda)$ for arbitrary $F$, together with the description of elements in $\llbracket \mathcal{G}_E \rrbracket$ from Proposition~\ref{prop:bis}, we see that for each $\pi_U \in \llbracket \mathcal{G}_E \rrbracket$, $\phi \circ \pi_U \circ \phi^{-1}$ belongs to $\llbracket \mathcal{G}_{E_2} \rrbracket$.
In the case that $E^0$ is infinite, all the arguments above still go through, with the minor adjustment that the first word in $\phi(x)$ is $\alpha_{i_1, \infty}$. This word always ends with a $b$, so we see that $\phi$ becomes a homeomorphism from $\partial E$ onto $E_2^\infty \setminus \{a^\infty\}$.
The final statement follows from Corollary~\ref{corol_inj_groupoid} and Proposition~\ref{prop_germs} ($\llbracket \mathcal{G}_{E_2} \rrbracket$ covers $\mathcal{G}_{E_2}$ since the groupoid is minimal).
\end{proof}
\begin{remark}
The local homeomorphism $\phi$ constructed in the preceding proof depends on the choice of labeling of the graph. And there are of course many ways to label a graph, but each one gives a local homeomorphism $\phi$ with the desired properties.
\end{remark}
In order to conclude that $\mathcal{G}_E$ embeds into $\mathcal{G}_{E_2}$ it seems like we have to assume that $\llbracket \mathcal{G}_E \rrbracket$ covers $\mathcal{G}_E$ (as this is not always the case). However, in the proof of Lemma~\ref{lem:emb} we are really showing that $\phi \circ \mathcal{P}_c(\mathcal{G}_E) \subseteq \mathcal{P}_c(\mathcal{G}_{E_2}) \circ \phi$, where $\mathcal{P}_c(\mathcal{G})$ denotes the inverse semigroup of partial homeomorphisms $\pi_U \colon s(U) \to r(U)$ coming from compact bisections $U \subseteq \mathcal{G}$. It is a sub-inverse semigroup of Renault's pseudogroup as in \cite{Ren2}, \cite{BCW} (when $\mathcal{G}$ is effective). The constructions in Sections~\ref{sec:germ} and~\ref{sec:SpatG} apply more or less verbatim to $\mathcal{P}_c(\mathcal{G})$. The crucial difference is that $\mathcal{P}_c(\mathcal{G})$ always covers $\mathcal{G}$, when $\mathcal{G}$ is ample. Thus, the analogs of Corollary~\ref{corol_inj_groupoid} and Proposition~\ref{prop_germs} for $\mathcal{P}_c(\mathcal{G})$ applied to $\phi$ induce the desired embedding of the graph groupoids, which we record in the following theorem.
\begin{theorem}\label{thm:emb}
Let $E$ be a countable graph satisfying Condition~(L) and having no sinks nor semi-tails. Then there is an embedding of étale groupoids $\Phi \colon \mathcal{G}_E \hookrightarrow \mathcal{G}_{E_2}$. If $E^0$ is finite, then $\Phi$ maps $\partial E$ onto $E_2^\infty$.
\end{theorem}
\begin{remark}
Theorem~\ref{thm:emb} is optimal in the sense that there is no embedding if one relaxes the assumptions on $E$. For if $\partial E$ has isolated points, then there is no local homeomorphism from $\partial E$ to $E_2^\infty$, as the latter has no isolated points. And if $E$ is uncountable, then there is no embedding either, for then $\partial E$ is not second countable, while $E_2^\infty$ is. Similarly, $\partial E$ cannot map onto $E_2^\infty$ if $E^0$ is infinite, for then the former is not compact.
\end{remark}
\subsection{Diagonal embeddings of graph algebras}
From Theorem~\ref{thm:emb} we recover Brownlowe and Sørensen's embedding theorem (for the slightly smaller class of graphs $E$ with $\partial E$ having no isolated points). However, we get the additional conclusion that when $E^0$ is finite (i.e.\ the algebras are unital), the embedding can be chosen to not only be unital, but also to map the diagonal \emph{onto} the diagonal.
\begin{corollary}\label{cor:graphAlgEmb}
Let $E$ be a countable graph with no sinks, no semi-tails, and satisfying Condition~(L).
\begin{enumerate}
\item There is an injective $*$-homomorphism $\psi \colon C^*(E) \to \mathcal{O}_2$ such that $\psi(\mathcal{D}(E)) \subseteq \mathcal{D}_2$. If $E^0$ is finite, then $\psi$ is unital and $\psi(\mathcal{D}(E)) = \mathcal{D}_2$.
\item For any commutative unital ring $R$, there is an injective $*$-algebra homomorphism $\rho \colon L_R(E) \to L_R(E_2)$ such that $\rho(D_R(E)) \subseteq D_R(E_2)$. If $E^0$ is finite, then $\rho$ is unital and $\rho(D_R(E)) = D_R(E_2)$.
\end{enumerate}
\end{corollary}
\begin{remark}\label{rem:emb}
For each labeling of a graph $E$ as in the proof of Lemma~\ref{lem:emb}, one obtains explicit embeddings of both the graph $C^*$-algebras and the Leavitt path algebras, in terms of their generators. This is done by expanding the scheme in~\cite[Proposition 5.1]{BS}. The canonical isomorphism between both $C^*(E)$ and $C^*(\mathcal{G}_E)$, and $L_R(E)$ and $A_R(\mathcal{G}_E)$ is given by $p_v \leftrightarrow 1_{Z(v)}$ for $v \in E^0$ (vertex projections) and $s_e \leftrightarrow 1_{Z(e, r(e))}$ for $e \in E^1$ (edge partial isometries). Denote the generators in $\mathcal{O}_2$ and $L_R(E_2)$ by $s_a$ and $s_b$. Given a labeling $E^0 = \{w_1, w_2, w_3 \ldots\}$ and $E^1 = \{ e_{i,j} \mid 1 \leq i \leq n, \ 1 \leq j \leq k(i) \}$, the embedding of the algebras induced by $\phi$ as in Lemma~\ref{lem:emb} is given by
\[p_{w_i} \longmapsto s_{\phi^*(w_i)} \left(s_{\phi^*(w_i)}\right)^*, \qquad s_{e_{i,j}} \longmapsto s_{\phi^*(e_{i,j})}\left(s_{\phi^*(r(e_{i,j}))}\right)^*,\]
where $\phi^*(\mu) \in \{a,b\}^*$ is as in the proof of Lemma~\ref{lem:emb} (recall that for $\mu = e_1 e_2 \cdots e_n \in E^*$, $s_{\mu} \coloneqq s_{e_1} s_{e_2} \cdots s_{e_n}$).
\end{remark}
\begin{remark}
In the case that $E$ has infinitely many vertices, the image of the diagonals in Corollary~\ref{cor:graphAlgEmb} can be described as follows:
\[ \psi(\mathcal{D}(E)) = \overline{\spn \{ s_\alpha s_\alpha^* \mid \alpha \in E_2^* \setminus \{a, a^2, a^3, \ldots\} \}}, \]
and
\[ \rho(D_R(E)) = \spn_R \{ s_\alpha s_\alpha^* \mid \alpha \in E_2^* \setminus \{a, a^2, a^3, \ldots\} \}. \]
\end{remark}
For examples of explicit embeddings for finite graphs satisfying Condition~(L) (possibly even having sinks), see Section~5 of~\cite{BS}. As for infinite graphs, we provide a few examples.
\begin{example}
Consider the following graph, whose graph $C^*$-algebra is the Cuntz algebra $\mathcal{O}_\infty$:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}, implies/.style={double,double equal sign distance,-implies}]
\node at (-2,0) {$E_\infty$};
\node[vertex] (a) at (0,0) [label=below:$w$] {};
\path (a) edge[implies, thick, loop, min distance = 20mm, looseness = 10, out = 45, in = 135] node[above] {$e_j$} (a);
\end{tikzpicture} \]
The double arrow indicates infinitely many edges, i.e.\ $E^1 = \{ e_1, e_2, e_3, \ldots \}$. For simplicity denote the edge isometries by $s_j$ for $j \in \mathbb{N}$. We label $w = w_1$ and $e_j = e_{1,j}$. Following the recipe in Remark~\ref{rem:emb} we obtain a unital embedding of $\mathcal{O}_\infty$ into $\mathcal{O}_2$ (or of $L_R(E_\infty)$ into $L_R(E_2)$) which maps the diagonal onto the diagonal, in terms of generators as follows:
\[p_w = 1_{\mathcal{O}_\infty} \longmapsto 1_{\mathcal{O}_2} = p_v, \qquad s_j \longmapsto s_{a^{j-1}b}. \]
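One can check directly that these images satisfy the defining relations of $\mathcal{O}_\infty$: writing $t_j \coloneqq s_{a^{j-1}b}$, each $t_j$ is an isometry in $\mathcal{O}_2$, and since no word $a^{j-1}b$ is a prefix of another, the range projections are mutually orthogonal, i.e.\
\[ t_j^* t_j = 1_{\mathcal{O}_2} \quad \text{and} \quad t_j t_j^* \cdot t_k t_k^* = 0 \ \text{ for } j \neq k. \]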
\end{example}
\begin{example}
Next, consider the following graph:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}, implies/.style={double,double equal sign distance,-implies}]
\node at (-2,0) {$E$};
\node[vertex] (a) at (0,0) [label=below:$w_1$] {};
\node[vertex] (b) at (2,0) [label=below:$w_2$] {};
\path (a) edge[implies, thick, loop, min distance = 20mm, looseness = 10, out = 45, in = 135] node[above] {$e_j$} (a) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$h$} (b)
(b) edge[implies, thick, loop, min distance = 20mm, looseness = 10, out = 45, in = 135] node[above] {$f_j$} (b);
\end{tikzpicture} \]
By labeling the edges as $h = e_{1,1}, \ e_j = e_{1,j+1}, \ f_j = e_{2,j}$ we get the following unital diagonal embedding:
\begin{align*}
&p_{w_1} \longmapsto s_b s_b^*, \qquad p_{w_2} \longmapsto s_a s_a^*, \\ &s_h \longmapsto s_{bb}s_a^*, \qquad s_{e_j} \longmapsto s_{ba^jb}s_b^*, \qquad s_{f_j} \longmapsto s_{a^jb}s_a^*.
\end{align*}
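As a sanity check, the image of $s_h$ behaves as an edge partial isometry from $w_1$ to $w_2$ should:
\[ \left(s_{bb}s_a^*\right)^* s_{bb}s_a^* = s_a s_a^* = p_{w_2} = p_{r(h)}, \qquad s_{bb}s_a^* \left(s_{bb}s_a^*\right)^* = s_{bb}s_{bb}^* \leq s_b s_b^* = p_{w_1} = p_{s(h)}. \]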
\end{example}
\begin{example}
Finally, let us look at a graph with infinitely many vertices:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-4,0) {$F$};
\node[vertex] (a) at (-2,0) [label=below:$w_1$] {};
\node[vertex] (b) at (0,0) [label=below:$w_2$] {};
\node[vertex] (c) at (2,0) [label=below:$w_3$] {};
\node[vertex] (d) at (4,0)[label=below:$w_4$] {};
\node[vertex] (e) at (6,0)[label=below:$w_5$] {};
\node (f) at (8,0) {$\cdots$};
\path (a) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$f_1$} (a)
edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$e_1$} (b)
(b) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$e_2$} (c)
(c) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$e_3$} (d)
edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$f_3$} (c)
(d) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$e_4$} (e)
(e) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$e_5$} (f) edge[thick,loop, min distance = 10mm, looseness = 20, out = 45, in = 135, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$f_5$} (e);
\end{tikzpicture} \]
We label the edges as $e_j = e_{j,1}$ for $j \in \mathbb{N}$, and $f_j = e_{j,2}$ for $j$ odd. The induced diagonal preserving embedding is then given on the generators as follows:
\begin{align*}
&p_{w_i} \longmapsto s_{a^{i-1}b} \left(s_{a^{i-1}b} \right)^*, \qquad s_{f_j} \longmapsto s_{a^{j-1}ba} \left(s_{a^{j-1}b} \right)^* \ (j \text{ odd}),\\
& s_{e_j} \longmapsto \left\lbrace\begin{matrix}
s_{a^{j-1}b} \left(s_{a^{j}b} \right)^* & j \text{ even}, \\
s_{a^{j-1}b^2} \left(s_{a^{j}b} \right)^* & j \text{ odd}.
\end{matrix} \right.
\end{align*}
\end{example}
\subsection{Analytic properties of $\llbracket \mathcal{G}_E \rrbracket$}
Before generalizing the groupoid embedding theorem to a larger class of groupoids in the next subsection, we take a brief pause to discuss some analytic properties of the topological full groups $\llbracket \mathcal{G}_E \rrbracket$ for graphs $E$ as in Lemma~\ref{lem:emb}. First of all, $\llbracket \mathcal{G}_E \rrbracket$ is generally not amenable, as it often contains free products~\cite[Proposition 4.10]{Mat}.
Let $E_n$ for $n \geq 2$ denote the graph consisting of a single vertex and $n$ edges. And more generally, for $r \in \mathbb{N}$, let $E_{n,r}$ be the graph with $r$ vertices $w_1, w_2, \ldots, w_r$ and $n+r-1$ edges $e_1, \ldots, e_n, f_1, \ldots, f_{r-1}$ such that $s(e_i) = w_1, r(e_i) = w_r$ for each $1 \leq i \leq n$ and $s(f_i) = w_{i+1}, r(f_i) = w_i$ for each $1 \leq i \leq r-1$. As seen in \cite[Section 6]{Mat}, the topological full group $\llbracket \mathcal{G}_{E_{n,r}} \rrbracket $ is isomorphic to the \emph{Higman-Thompson group} $V_{n,r}$. In particular, $\llbracket \mathcal{G}_{E_2} \rrbracket \cong V_{2,1} = V$ (Thompson's group $V$).
As Lemma~\ref{lem:emb} in particular induces an algebraic embedding of the topological full groups, we have that $\llbracket \mathcal{G}_E \rrbracket$ embeds into $V$. Thus, Lemma~\ref{lem:emb} may be considered a generalization of the well-known embedding of $V_{n,r}$ into $V$. As $V$ has the \emph{Haagerup property}~\cite{Far}, we deduce that $\llbracket \mathcal{G}_E \rrbracket$ does as well.
\begin{corollary}
Let $E$ be a countable graph with no sinks, no semi-tails, and suppose $E$ satisfies Condition~(L). Then the topological full group $\llbracket \mathcal{G}_E \rrbracket$ has the Haagerup property.
\end{corollary}
\begin{remark}
For finite, strongly connected graphs, this was proved directly, using so-called \emph{zipper actions}, by Matui in~\cite{Mat}. Later, in~\cite{Mat3} Matui proved that for any finite, strongly connected graph $E$, $\llbracket \mathcal{G}_E \rrbracket$ embeds into $\llbracket \mathcal{G}_{E_2} \rrbracket$. In fact, he proved even more, namely that $\mathcal{G}_{E_2}$ could be replaced by any groupoid with similar properties (see~\cite[Proposition 5.14]{Mat3} for details). By our results, one may relax the conditions on $E$ considerably in Matui's embedding result.
\end{remark}
\subsection{Embedding equivalent groupoids}
We are now going to expand on the embedding theorem for graph groupoids to include all groupoids that are merely groupoid equivalent to a graph groupoid. To accomplish this we will make use of the fundamental results by Carlsen, Ruiz and Sims in~\cite{CRS}. Following their notation, let $\mathcal{R}$ denote the countably infinite discrete full equivalence relation, that is $\mathcal{R} = \mathbb{N} \times \mathbb{N}$ equipped with the discrete topology, and product and inverse given by $(k,m) \cdot (m,n) \coloneqq (k,n)$ and $(m,n)^{-1} \coloneqq (n,m)$. We refer to $\mathcal{G} \times \mathcal{R}$ as the \emph{stabilization} of the groupoid $\mathcal{G}$. For a graph $E$, let $SE$ denote the graph obtained from $E$ by adding a \emph{head} at every vertex---see the example below (see also~\cite{Tomf}). It is shown in~\cite{CRS} that $\mathcal{G}_{E} \times \mathcal{R} \cong \mathcal{G}_{S E}$ for any graph $E$.
\begin{example}
The stabilized graph of $E_2$ is the following graph:
\[ \begin{tikzpicture}[vertex/.style={circle, draw = black, fill = black, inner sep=0pt,minimum size=5pt}]
\node at (-4,0) {$SE_2$};
\node[vertex] (a) at (0,0) [label=above:$w_2$] {};
\node[vertex] (c) at (2,0) [label=above:$w_1$] {};
\node[vertex] (b) at (4,0) [label=right:$v$] {};
\node (f) at (-2,0) {$\cdots$};
\path (f) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$c_3$} (a)
(a) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$c_2$} (c)
(b) edge[thick, loop, min distance = 20mm, looseness = 10, out = 225, in = 315,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[below] {$a$} (b)
edge[thick, loop, min distance = 20mm, looseness = 10, out = 135, in = 45,decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate}] node[above] {$b$} (b)
(c) edge[thick, decoration={markings, mark=at position 0.99 with {\arrow{triangle 45}}}, postaction={decorate} ] node[above] {$c_1$} (b);
\end{tikzpicture} \]
\end{example}
Let us first just say a few words on necessary conditions for an étale groupoid $\mathcal{H}$ to be embeddable into $\mathcal{G}_{E_2}$. First of all, it is clearly necessary that $\mathcal{H}$ is ample, Hausdorff and second countable, since $\mathcal{G}_{E_2}$ is. As we observed for the graph groupoids, it is also necessary that $\mathcal{H}^{(0)}$ has no isolated points, and hence that $\mathcal{H}^{(0)}$ is a locally compact Cantor space. Furthermore, since subgroupoids of effective groupoids are effective, it is also necessary that $\mathcal{H}$ be effective. As a final observation in this regard, any embedding $\Phi \colon \mathcal{H} \hookrightarrow \mathcal{G}_{E_2}$ induces an embedding of the isotropy bundles $\mathcal{H}' \hookrightarrow \mathcal{G}_{E_2}'$, meaning that $\Phi$ restricts to an embedding of the isotropy group $\mathcal{H}_y^y$ into $\left(\mathcal{G}_{E_2}\right)_{\Phi(y)}^{\Phi(y)}$ for each $y \in \mathcal{H}^{(0)}$. Now recall that for any graph groupoid $\mathcal{G}_E$ the isotropy groups are
\[\left(\mathcal{G}_E\right)_x^x \cong \left\lbrace\begin{matrix}
\mathbb{Z} & \text{ if } x \text{ is eventually periodic}, \\
0 & \text{ otherwise}.
\end{matrix} \right. \]
Thus, a final necessary condition for embeddability is that the isotropy bundle of $\mathcal{H}$ consists only of the groups $0$ and $\mathbb{Z}$. This rules out, for instance, (most) products of graph groupoids, since they typically have isotropy groups that are free abelian of rank up to the number of factors in the product. Note, however, that taking the product with a principal groupoid does no harm in this regard. As we will see shortly, taking the product with $\mathcal{R}$ actually does not affect anything.
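For instance, since $a^\infty$ is eventually periodic, the unit $(a^\infty, a^\infty)$ in the product groupoid $\mathcal{G}_{E_2} \times \mathcal{G}_{E_2}$ has isotropy group
\[ \left(\mathcal{G}_{E_2} \times \mathcal{G}_{E_2}\right)_{(a^\infty, a^\infty)}^{(a^\infty, a^\infty)} \cong \mathbb{Z} \times \mathbb{Z}, \]
which embeds into neither $0$ nor $\mathbb{Z}$, so this product does not embed into $\mathcal{G}_{E_2}$.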
Let us now move towards the actual embedding result. We begin by noting that stabilizing the groupoid doesn't affect embeddability into $\mathcal{G}_{E_2}$.
\begin{proposition}\label{stabEmb}
Let $\mathcal{H}$ be an effective ample second countable Hausdorff groupoid with $\mathcal{H}^{(0)}$ a locally compact Cantor space. Then $\mathcal{H}$ embeds into $\mathcal{G}_{E_2}$ if and only if the stabilized groupoid $\mathcal{H} \times \mathcal{R}$ embeds into $\mathcal{G}_{E_2}$.
\end{proposition}
\begin{proof}
The ``if'' statement is trivial, as a groupoid always embeds into its stabilization. Suppose $\Phi \colon \mathcal{H} \to \mathcal{G}_{E_2}$ is an injective étale homomorphism. Then $\Phi \times \id \colon \mathcal{H} \times \mathcal{R} \to \mathcal{G}_{E_2} \times \mathcal{R}$ is an injective étale homomorphism as well. By~\cite[Lemma 4.1]{CRS} we have $\mathcal{G}_{E_2} \times \mathcal{R} \cong \mathcal{G}_{S E_2}$, and $S E_2$ is a countable graph satisfying Condition~(L) with no sinks nor semi-tails. So by Theorem~\ref{thm:emb}, $\mathcal{G}_{S E_2}$ embeds into $\mathcal{G}_{E_2}$. Thus $\mathcal{H} \times \mathcal{R}$ embeds into $\mathcal{G}_{E_2}$.
\end{proof}
The next lemma shows that any étale embedding of a groupoid $\mathcal{H}$, with compact unit space, into $\mathcal{G}_{E_2}$ can be ``twisted'' into an embedding that hits the whole unit space of $\mathcal{G}_{E_2}$.
\begin{lemma}\label{unitalEmb}
Let $\mathcal{H}$ be an effective ample second countable Hausdorff groupoid with $\mathcal{H}^{(0)}$ a Cantor space. If $\mathcal{H}$ embeds into $\mathcal{G}_{E_2}$, then there exists an embedding $\Phi \colon \mathcal{H} \hookrightarrow \mathcal{G}_{E_2}$ such that $\Phi\left(\mathcal{H}^{(0)}\right) = E_2^\infty$.
\end{lemma}
\begin{proof}
Let $\Psi \colon \mathcal{H} \to \mathcal{G}_{E_2}$ be an injective étale homomorphism and let $Y = \Psi\left(\mathcal{H}^{(0)}\right)$. Then $Y$ is a compact open (hence clopen) subset of $E_2^\infty$. We claim that there exists a compact open bisection $U \subseteq \mathcal{G}_{E_2}$ such that $s(U) = Y$ and $r(U) = E_2^\infty$. The claim follows from~\cite[Theorem~6.4]{Mat} and~\cite[Example~3.3~(3)]{Mat2} by identifying $\mathcal{G}_{E_2}$ with the \emph{SFT-groupoid}\footnote{See~\cite[Example~2.5]{Mat2}.} of the matrix $A = [2]$. Now define $\Phi(h) = U \cdot \Psi(h) \cdot U^{-1}$ for $h \in \mathcal{H}$. Then $\Phi$ is an injective étale homomorphism and \[\Phi\left(\mathcal{H}^{(0)}\right) = U Y U^{-1} = U U^{-1} = r(U) = E_2^\infty,\]
as desired.
\end{proof}
We now state the most general version of our embedding theorem.
\begin{theorem}\label{eqEmbed}
Let $\mathcal{H}$ be an effective ample second countable Hausdorff groupoid with $\mathcal{H}^{(0)}$ a locally compact Cantor space. If $\mathcal{H}$ is groupoid equivalent to $\mathcal{G}_E$, for some countable graph $E$ satisfying Condition~(L) and having no sinks nor semi-tails, then $\mathcal{H}$ embeds into $\mathcal{G}_{E_2}$. Moreover, if $\mathcal{H}^{(0)}$ is compact, then the embedding maps $\mathcal{H}^{(0)}$ onto $E_2^\infty$.
\end{theorem}
\begin{proof}
Suppose $\mathcal{H}$ is groupoid equivalent to $\mathcal{G}_E$ as above. Then by~\cite[Theorem 3.2]{CRS} we have $\mathcal{H} \times \mathcal{R} \cong \mathcal{G}_E \times \mathcal{R}$. By Theorem~\ref{thm:emb} and Proposition~\ref{stabEmb}, $\mathcal{G}_E \times \mathcal{R}$ embeds into $\mathcal{G}_{E_2}$, hence so do $\mathcal{H} \times \mathcal{R}$ and $\mathcal{H}$. The second statement follows from Lemma~\ref{unitalEmb}.
\end{proof}
\subsection{Embedding AF-groupoids} A well-studied class of groupoids satisfying the hypothesis of Theorem~\ref{eqEmbed}, yet conceptually different from graph groupoids, are the \emph{AF-groupoids}. See~\cite{GPS2} (wherein they are dubbed \emph{AF-equivalence relations}). Let $\mathcal{G}$ be an ample Hausdorff second countable groupoid with $\mathcal{G}^{(0)}$ a locally compact Cantor space. Then $\mathcal{G}$ is called an \emph{AF-groupoid} if there exists an increasing sequence $\mathcal{K}_1 \subseteq \mathcal{K}_2 \subseteq \ldots \subseteq \mathcal{G}$ of clopen subgroupoids such that
\begin{itemize}
\item $\mathcal{K}_n$ is principal for each $n \in \mathbb{N}$.
\item $\mathcal{K}_n^{(0)} = \mathcal{G}^{(0)}$ for each $n \in \mathbb{N}$.
\item $\mathcal{K}_n \setminus \mathcal{G}^{(0)}$ is compact for each $n \in \mathbb{N}$.
\item $\bigcup_{n=1}^\infty \mathcal{K}_n = \mathcal{G}$.
\end{itemize}
This entails that $\mathcal{G}$ is principal.
\begin{remark}
The terminology AF-groupoid is due to Renault~\cite{Ren}, and is also used by Matui in~\cite{Mat1} and \cite{Mat2}. Note however that Matui only considers the case of a compact unit space therein.
\end{remark}
In the next example we explain how \emph{Bratteli diagrams} give rise to AF-groupoids.
\begin{example}[{\cite[Example 2.7(ii)]{GPS2}}]\label{ex:BD}
A \emph{Bratteli diagram} $B$ is a directed graph\footnote{This notation is inconsistent with what we have been using for directed graphs so far. But since Bratteli diagrams are very special kinds of graphs we have chosen to use the well-established notation from the literature. In this way we can, albeit somewhat artificially, distinguish a Bratteli diagram from its underlying graph.} whose vertex set $V$ and edge set $E$ can be written as countable disjoint unions of non-empty finite sets
\begin{equation}\label{eq:BD}
V = V_0 \sqcup V_1 \sqcup V_2 \sqcup \ldots \quad \text{and} \quad E = E_1 \sqcup E_2 \sqcup E_3 \sqcup \ldots
\end{equation}
such that the source and range maps satisfy $s(E_n) = V_{n-1}$ and $r(E_n) \subseteq V_n$. In particular, there are no sinks in $B$. Let $S_B \subseteq V$ denote the set of sources in $B$. Then $V_0 \subseteq S_B$. We call $B$ a \emph{standard} Bratteli diagram if there is only one source in $B$, i.e.\ $V_0 = \{v_0\}$. We say that $B$ is \emph{simple} if for every $n$ and every vertex $v \in V_n$, there is an $m > n$ such that there is a path from $v$ to every vertex in $V_m$. The partitions of the vertices and edges (into \emph{levels} as in Equation \eqref{eq:BD}) are considered part of the data of the Bratteli diagram $B$. We let $E_B$ denote the underlying graph where we ``forget'' about the partitions.
For a source $v \in S_B \cap V_n$ on level $n$ we let $X_v$ denote the set of infinite paths starting in $v$, that is
\[X_v \coloneqq \{ e_{n+1} e_{n+2} e_{n+3} \ldots \mid s(e_{n+1}) = v, \, e_{n+k} \in E_{n+k}, \, s(e_{n+k}) = r(e_{n+k-1}), \, k > 1 \}.\]
The \emph{path space} of $B$ is
\[X_B \coloneqq \bigsqcup_{v \in S_B} X_v \]
whose topology is given by the basis of \emph{cylinder sets} \[C(\mu) \coloneqq \{ e_{n+1} e_{n+2} \ldots \in X_{s(\mu)} \mid e_{n+1} \ldots e_{n+ \vert \mu \vert} = \mu \}\] where $\mu$ is a finite path such that $s(\mu) = v$ for some source $v \in S_B \cap V_n$. The path space $X_B$ is Boolean, and it is compact if and only if $S_B$ is finite. Further, $X_B$ is perfect if and only if $E_B$ has no semi-tails. Two infinite paths in $X_B$ are \emph{tail-equivalent} if they agree from some level on. With this equivalence relation as the starting point, let for each $N \in \mathbb{N}$ \[\mathcal{P}_N \coloneqq \{ (x,y) \in X_B \times X_B \mid s(x) \in V_m, s(y) \in V_n, m,n \leq N, x_k = y_k \text{ for all } k > N \}.\]
That is, $\mathcal{P}_N$ consists of all pairs of infinite paths which start no later than level $N$ and agree from level $N$ onwards. Equipping $\mathcal{P}_N$ with the relative topology from $X_B \times X_B$ makes $\mathcal{P}_N$ a compact principal ample Hausdorff groupoid whose unit space is identified with $\bigsqcup_{n=0}^N \bigsqcup_{v \in S_B \cap V_n} X_v$.
We define the \emph{groupoid of the Bratteli diagram $B$} as the increasing union \[\mathcal{G}_B \coloneqq \bigcup_{N=1}^\infty \mathcal{P}_N\] equipped with the inductive limit topology. For two finite paths $\mu, \lambda$ with $s(\mu), s(\lambda) \in S_B$ and $r(\mu) = r(\lambda)$ we define \[C(\mu, \lambda) \coloneqq \left\{ (x,y) \in C(\mu) \times C(\lambda) \mid x_{\left[\vert \mu \vert +1, \infty\right)} = y_{\left[\vert \lambda \vert +1, \infty\right)} \right\}. \]
A straightforward computation shows that the family of $C(\mu, \lambda)$'s forms a compact open basis for the inductive limit topology on $\mathcal{G}_B$. We identify $\mathcal{G}_B^{(0)}$ with $X_B$. By setting $\mathcal{K}_n = \mathcal{P}_n \cup X_B$ one sees that $\mathcal{G}_B$ is an AF-groupoid. The groupoid $\mathcal{G}_B$ is minimal if and only if $B$ is a simple Bratteli diagram.
\end{example}
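To make the construction concrete, we record a standard example (well known in the literature, and included here only for illustration): let $B$ be the standard Bratteli diagram with a single vertex $V_n = \{v_n\}$ on each level and exactly two edges from $v_{n-1}$ to $v_n$ for every $n \geq 1$. Then $S_B = \{v_0\}$ and the path space is the Cantor set of infinite binary sequences,
\[X_B \cong \{0,1\}^{\mathbb{N}},\]
while $\mathcal{G}_B$ is the tail-equivalence groupoid on $\{0,1\}^{\mathbb{N}}$: two sequences are related precisely when they agree from some level on. This diagram is clearly simple, so $\mathcal{G}_B$ is minimal, and its reduced groupoid $C^*$-algebra is the CAR algebra $\bigotimes_{n=1}^{\infty} M_2(\mathbb{C})$.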
\begin{remark}
Although the AF-groupoid $\mathcal{G}_B$ is defined in terms of a very special graph, namely the Bratteli diagram $B$, it is generally not isomorphic to a graph groupoid. To see this, recall that $\mathcal{G}_B$ is principal while a graph groupoid $\mathcal{G}_E$ is principal if and only if the graph $E$ has no cycles. If $X_B$ is compact, perfect and infinite (this is essentially stipulating that the Bratteli diagram is standard and ``non-degenerate''), then $\mathcal{G}_B$ cannot be isomorphic to any graph groupoid, for any such $\mathcal{G}_E$ would have a compact unit space, i.e.\ finitely many vertices, no cycles and no sinks. And there are clearly no such graphs.
\end{remark}
Giordano, Putnam and Skau showed that, just as with AF-algebras~\cite{Bra}, every AF-groupoid can be realized by a Bratteli diagram as in Example~\ref{ex:BD}.
\begin{theorem}[{\cite[Theorem 3.9]{GPS2}}]
Let $\mathcal{H}$ be an AF-groupoid. Then there exists a Bratteli diagram $B$ such that $\mathcal{H} \cong \mathcal{G}_B$. If $\mathcal{H}^{(0)}$ is compact, then $B$ can be chosen to be standard.
\end{theorem}
\begin{remark}\label{AFfullgroups}
As another example of a concrete description of the topological full group of an ample groupoid, we remark that Matui computed the topological full group of an AF-groupoid with compact unit space in terms of a defining Bratteli diagram in~\cite[Proposition~3.3]{Mat5}. The topological full group $\llbracket \mathcal{G}_B \rrbracket$, where $B$ is a Bratteli diagram, is the direct limit of the finite groups $\Gamma_N$ for $N \in \mathbb{N}$, where $\Gamma_N \leq \operatorname{Homeo}(X_B)$ consists of all permutations of the finite set of paths from level $V_0$ to $V_N$ such that the permutation preserves the range of these paths (and the action on $X_B$ is by permuting the initial segment of an infinite path). We should also mention that these groups were originally studied by Krieger in~\cite{Kri}, without emphasis on the underlying groupoids.
\end{remark}
By the preceding remark it is clear that the topological full group of any AF-groupoid is a locally finite group. And actually, this characterizes the AF-groupoids. This is somewhat of a folklore result, but a proof is published by Matui in the compact case, and it is not hard to see that his proof extends to locally compact unit spaces as well.
\begin{proposition}[c.f.\ {\cite[Proposition 3.2]{Mat5}}]\label{AFlocfin}
Let $\mathcal{G}$ be an ample principal Hausdorff second countable groupoid with $\mathcal{G}^{(0)}$ a locally compact Cantor space. Then $\llbracket \mathcal{G} \rrbracket$ is locally finite if and only if $\mathcal{G}$ is an AF-groupoid.
\end{proposition}
\begin{remark}\label{rem:LDA}
The commutator subgroups $\mathsf{D}(\mathcal{G})$ for AF-groupoids $\mathcal{G}$ are quite interesting in their own right. In fact, these exhaust\footnote{With the single exception of the infinite finitary alternating group.} the class of so-called \emph{strongly diagonal limits of products of alternating groups} (also called \emph{LDA-groups}), see~\cite{LN} where these are classified using the dimension groups of their Bratteli diagrams. These form a subclass of the locally finite simple groups. By Corollary~\ref{AFembed} below, the LDA-groups all embed into Thompson's group $V$.
\end{remark}
We now demonstrate that every AF-groupoid is groupoid equivalent to a graph groupoid. This is essentially just a reformulation of the main theorem from~\cite{Drin}, wherein it is shown that any AF-algebra can be recovered as a certain \emph{pointed} graph $C^*$-algebra of a defining Bratteli diagram. In contrast, in Proposition~\ref{AFgrpdeq} below we emphasize the groupoids, rather than their $C^*$-algebras. Also, since we use ``unlabeled'' Bratteli diagrams here, as opposed to \emph{labeled Bratteli diagrams} (c.f.~\cite[Section 2]{Drin}), the computations are easier.
\begin{proposition}\label{AFgrpdeq}
Let $B$ be a Bratteli diagram. Then the AF-groupoid $\mathcal{G}_B$ is isomorphic to the restriction of the graph groupoid $\mathcal{G}_{E_B}$ to the open subset $ \bigsqcup_{v \in S_B} Z(v) \subseteq E_B^\infty$. In particular, every AF-groupoid is groupoid equivalent to a graph groupoid.
\end{proposition}
\begin{proof}
Let $A = \bigsqcup_{v \in S_B} Z(v)$. Then
\[\left(\mathcal{G}_{E_B}\right)_{| A} = \{ (x,k,y) \mid s(x), s(y) \in S_B, \sigma_{E_B}(x)^m = \sigma_{E_B}(y)^n, k = m - n \}. \]
Due to the special structure of the graph $E_B$, the lag $k$ in $(x,k,y) \in \left(\mathcal{G}_{E_B} \right)_{| A}$ is uniquely determined by $x$ and $y$. In fact, $k$ is determined by the levels on which $x$ and $y$ start in the Bratteli diagram. Indeed, let $m,n \in \mathbb{N}$ be such that $s(x) \in V_m$ and $s(y) \in V_n$, then $k = n-m$. This means that the map $\Phi \colon \left(\mathcal{G}_{E_B} \right)_{| A} \to \mathcal{G}_B$ defined by $\Phi((x,k,y)) = (x,y)$ is a bijection. It is easy to see that $\Phi$ is also a groupoid homomorphism. Finally, to see that $\Phi$ is a homeomorphism simply note that the family of $Z(\mu, \lambda)$'s, where $\mu, \lambda$ are finite paths with $s(\mu), s(\lambda) \in S_B$ and $r(\mu) = r(\lambda)$, forms a basis for $\left(\mathcal{G}_{E_B}\right)_{| A}$, and that $\Phi(Z(\mu, \lambda)) = C(\mu, \lambda)$. Thus $\left(\mathcal{G}_{E_B}\right)_{| A} \cong \mathcal{G}_B$ as étale groupoids.
We claim that $A$ is a $\mathcal{G}_{E_B}$-full subset of $E_B^\infty$, and then the second statement follows from~\cite[Theorem 3.2]{CRS}. To see this, let $z \in E_B^\infty$ be an infinite path starting anywhere in the Bratteli diagram and simply note that by following $s(z)$ upwards in the Bratteli diagram, one eventually reaches a source $v \in S_B$ such that $v$ connects to $s(z)$. Letting $\mu$ be any path from $v$ to $s(z)$ we have that $z$ belongs to the $\mathcal{G}_{E_B}$-orbit of $\mu z \in A$.
\end{proof}
As a special case of Theorem~\ref{eqEmbed} we obtain the following.
\begin{corollary}\label{AFembed}
Let $\mathcal{G}$ be an AF-groupoid with $\mathcal{G}^{(0)}$ perfect. Then there exists an embedding of étale groupoids $\mathcal{G} \hookrightarrow \mathcal{G}_{E_2}$. If $\mathcal{G}^{(0)}$ is compact, then $\mathcal{G}^{(0)}$ maps onto $E_2^\infty$.
\end{corollary}
From this we obtain an analogue of Corollary~\ref{cor:graphAlgEmb} for AF-algebras and their diagonals. Let $A$ be an AF-algebra. By an \emph{AF Cartan subalgebra} $D \subseteq A$ we mean a Cartan subalgebra arising from the diagonalization method of Str\u atil\u a and Voiculescu~\cite{SV}. See~\cite[Section 4]{Drin} for a description of these diagonals for non-unital AF-algebras. Note that they are also $C^*$-diagonals in the sense of Kumjian~\cite{Kum}. According to~\cite[Subsection 6.2]{Ren2} these are precisely the Cartan pairs arising as $\left(C^*_r\left(\mathcal{G}_B\right), C_0(X_B)\right)$ for a Bratteli diagram $B$.
\begin{corollary}\label{cor:AFemb}
Let $A$ be an (infinite-dimensional) AF-algebra and let $D \subseteq A$ be any AF Cartan subalgebra in $A$ whose spectrum is totally disconnected. Then there exists a $*$-embedding $\psi \colon A \hookrightarrow \mathcal{O}_2$ such that $\psi(D) \subseteq \mathcal{D}_2$. If $A$ is unital, then so is $\psi$, and $\psi(D) = \mathcal{D}_2$.
\end{corollary}
\begin{remark}\label{LCCMfullgroups}
As a final remark, we note that certain transformation groupoids (by virtue of actually being AF-groupoids) also embed into $\mathcal{G}_{E_2}$. Let $X$ be a \emph{non-compact} locally compact Cantor space and let $T$ be a minimal homeomorphism on $X$. It follows from~\cite[Theorem 4.3]{GPS2} that the transformation groupoid $\mathbb{Z} \ltimes_T X$ is an AF-groupoid, and consequently $\mathbb{Z} \ltimes_T X$ embeds into $\mathcal{G}_{E_2}$.
An indirect way of seeing that $\mathbb{Z} \ltimes_T X$ is an AF-groupoid is via Proposition~\ref{AFlocfin}. By realizing the dynamical system $(X,T)$ as a so-called \emph{Bratteli-Vershik system} on a (standard) \emph{almost simple ordered Bratteli diagram} $B = (V,E, \geq)$ c.f.~\cite{Dan}, one easily observes (as Matui did in~\cite{Mat4}) that $\llbracket \mathbb{Z} \ltimes_T X \rrbracket$ is locally finite. This is because each element of $\llbracket \mathbb{Z} \ltimes_T X \rrbracket$ only depends on the initial edges down to level $N$ for some fixed $N$ (determined by the group element), for each infinite path in $X_B$. This actually allows one to describe the topological full group $\llbracket \mathbb{Z} \ltimes_T X \rrbracket$ explicitly in terms of a conjugate Bratteli-Vershik system.
A third way of demonstrating that $\mathbb{Z} \ltimes_T X$ is an AF-groupoid is that one can go from a conjugate Bratteli-Vershik system on an ordered Bratteli diagram $B = (V,E, \geq)$ to an ``unordered'' Bratteli diagram $B'$ such that $\mathbb{Z} \ltimes_T X \cong \mathcal{G}_{B'}$ as étale groupoids. Indeed, let $e_1 e_2 e_3 \ldots \in X_B$ denote the unique maximal and minimal path in $X_B$ (c.f.~\cite{Dan}). By ``forgetting'' the ordering and removing each of the edges $e_n$ for all $n \in \mathbb{N}$, and thereby introducing a source at each $s(e_n)$, one obtains the modified Bratteli diagram $B'$, and it is not hard to see that the AF-groupoid $\mathcal{G}_{B'}$ is isomorphic to $\mathbb{Z} \ltimes_T X$.
\end{remark}
Advances in nanotechnology demand a better understanding of heat transport in nanoscale systems. The increased levels of dissipated power in ever smaller devices make the search for high thermal conductors essential~\cite{Pop06,Vasileska08,Pop10}. On the other hand, thermoelectric energy conversion requires materials with a strongly suppressed thermal conductivity, but still high electronic conduction. In this regard, one of the main goals in thermoelectric research is to find materials with a high figure of merit $ZT=S^2\sigma T / \kappa$, which assesses the thermoelectric efficiency of a system~\cite{Goldsmid10}. Here $S$ stands for the Seebeck coefficient, and $\sigma$ and $\kappa$ are the electric and thermal conductivities at a temperature $T$, respectively~\cite{Villagonzalo1999}. Both electrons and lattice vibrations contribute to the heat current and consequently $\kappa=\kappa_\mathrm{el} + \kappa_\mathrm{lat}$. Therefore, it is necessary to minimize both contributions to $\kappa$ while keeping $\sigma$ and $S$ high. Unfortunately, parameters $\kappa$, $\sigma$ and $S$ cannot be adjusted independently in most bulk materials. For instance, the ratio $\sigma/\kappa_\mathrm{el}$ in metals is determined from the Wiedemann-Franz law~\cite{Franz1853}. Hence reducing the lattice thermal conductivity $\kappa_\mathrm{lat}$ by increasing phonon scattering is one of the most promising routes to improve thermoelectric materials.
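The interplay between the parameters entering $ZT$ is easy to make concrete with a short numerical sketch. The material values below are rough, illustrative numbers of the kind quoted for good bulk thermoelectrics; they are hypothetical and are not taken from any measurement discussed in this work.

```python
def figure_of_merit(S, sigma, kappa_el, kappa_lat, T):
    """ZT = S^2 * sigma * T / (kappa_el + kappa_lat)."""
    return S**2 * sigma * T / (kappa_el + kappa_lat)

# Illustrative (hypothetical) parameters, roughly at the scale of a
# good bulk thermoelectric material:
S = 200e-6          # Seebeck coefficient, V/K
sigma = 1.0e5       # electrical conductivity, S/m
kappa_el = 0.5      # electronic thermal conductivity, W/(m K)
kappa_lat = 1.0     # lattice thermal conductivity, W/(m K)
T = 300.0           # temperature, K

print(round(figure_of_merit(S, sigma, kappa_el, kappa_lat, T), 3))      # 0.8
print(round(figure_of_merit(S, sigma, kappa_el, kappa_lat / 2, T), 3))  # 1.2
```

Note how halving $\kappa_\mathrm{lat}$ alone, with $S$, $\sigma$ and $\kappa_\mathrm{el}$ untouched, raises $ZT$ from $0.8$ to $1.2$, which is precisely why suppressing the lattice contribution is such an attractive route.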
Several works have demonstrated theoretically~\cite{Hicks93,Khitun00,Balandin03,Sadeghi15} and experimentally~\cite{Venkata01,Harman02,Hochbaum08,Boukai08} that nanometer-sized objects exhibit thermoelectric efficiency unachievable with bulk materials. In particular, quantum effects allow thermoelectric devices to overcome the limitations arising from the classical Wiedemann-Franz law: nanodevices with sharp resonances in the electron transmission (such as Fano lineshapes) are in principle ideal candidates for highly efficient waste heat-to-electricity converters because the ratio $\sigma/\kappa_\mathrm{el}$ increases well above the Wiedemann-Franz limit~\cite{Mahan96,GomezSilva12,Zheng12,Garcia13,Fu15,SaizBretin16,Wang16}. Thus, ballistic electrons in nanodevices pave a possible way to achieve large $ZT$ and consequently more efficient thermoelectric devices as refrigerators and generators~\cite{Koumoto13}.
Graphene nanoribbons (GNRs) can behave as single-channel room-tem\-per\-a\-ture ballistic electrical conductors on a length scale greater than ten microns~\cite{Baringhaus14}. Recent advances in nanotechnology enable the fabrication of devices based on GNRs, such as quantum nanorings~\cite{Russo08,Smirnov12,Schelter12,Cabosart14,Samal15}, that can show ballistic transport and consequently are good candidates to exploit quantum effects even at room temperature. Although ballistic electron transport yields higher values of both $\sigma$ and $\kappa_\mathrm{el}$, it turns out that the ratio $S^2\sigma/\kappa_\mathrm{el}$ becomes largely enhanced in quantum nanorings due to the occurrence of Fano anti-resonances~\cite{SaizBretin15}. However, graphene occupies a unique place amongst materials in terms of its thermal properties because it possesses one of the highest lattice thermal conductivities. A high value of $\kappa_\mathrm{lat}$ is undesirable for thermoelectric applications but it can be greatly reduced in GNRs by rough edges~\cite{Savin10}, hydrogen-passivation~\cite{Hu10} and patterning ~\cite{Mazzamuto11,Li14,Zhang12,Chen10,Xu10}. Since graphene is envisioned as a material of choice for a variety of applications in future electronics, understanding how heat is carried in different graphene nanostructures is crucial. Among these structures, graphene nanorings stand out because of the straightforward way in which they exploit quantum interference effects. These effects could be used for designing new quantum interferometers~\cite{Wu10,Munarriz11,Mrenka16,Sousa17} or spintronic devices \cite{Munarriz12,Farghadan13}. Recently, we demonstrated theoretically that graphene nanorings might be useful as thermoelectric devices too~\cite{SaizBretin15}. We found that quantum interference effects lead to large $S$ and hence high $ZT$ when the heat current is solely due to electrons. 
Yet, lattice heat conduction, which is expected to be the most important contribution to heat transport in carbon materials due to the strong covalent $sp^{2}$ bonding, had not been studied in graphene nanorings.
In this work, we address the contribution of the atomic lattice to heat transport in armchair GNRs and nanorings by using non-equilibrium molecular dynamics (NEMD) simulations as implemented in the LAMMPS Molecular Dynamics Simulator~\cite{LAMMPS}. NEMD simulations provide a direct method to calculate the lattice thermal conductivity. To this end, a heat flow through the system under study establishes a temperature gradient across the system and Fourier's law brings about an estimate of the lattice thermal conductivity. We compare the thermal conductivities of GNRs and rectangular graphene nanorings of widths up to $\SI{6}{\nano\meter}$ over a wide range of temperature. Our study proves that the lattice thermal conductivity $\kappa_\mathrm{lat}$ is greatly reduced in nanorings as compared to GNRs due to higher scattering of lattice vibrations at bends. We also demonstrate that edge disorder has a weaker impact on the heat current in nanorings as compared to GNRs. Similarly, we find that the effects of hydrogen-saturated edges can be safely neglected in these nanostructures.
\section{Model and methodology}
In our study we focus on two different types of graphene nanostructures connected to leads. The first system under consideration is a uniform rectangular armchair GNR of width $W$, as seen in the top panel of Figure~\ref{fig1}. We only consider armchair GNRs since many studies have shown them to have a lower thermal conductivity than zig-zag GNRs \cite{Guo09,Hu09,Evans10}, thus being more suitable for thermoelectric applications. The second kind of nanostructures are graphene nanorings. For constructing the latter, a rectangular graphene ring is inserted between two armchair nanoribbons of width $W$, as depicted in the lower panel of Figure~\ref{fig1}. The existence of bends along with the presence of two types of edges (zigzag and armchair) give rise to stronger scattering of lattice vibrations which is expected to reduce the lattice thermal conductivity, as compared with uniform GNRs. We will show later that this is actually the case.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\linewidth]{figure1.jpg}
\end{center}
\caption{Schematic view of the analysed structures. The top panel shows a GNR of width $W$. The bottom panel displays a ``square'' graphene nanoring. The red (blue) area represents the hot (cold) contact where an amount of heat $\Delta \varepsilon$ is introduced (removed) at every time step of the NEMD simulation.}
\label{fig1}
\end{figure}
The lattice thermal conductivity is calculated using NEMD simulations~\cite{LAMMPS,Plimpton95} in which the classical trajectories of Lagrangian particles (in our case, carbon atoms) are obtained by numerically solving Newton's equations of motion. Starting from initial positions and velocities, the atoms interact through forces derived from an empirical interatomic potential. At each time step, the force acting on each atom is obtained, and then positions and velocities are updated. This is an excellent approximation for a wide range of materials; only when we consider light atoms or vibrational motion with a frequency $\omega$ such that $\hbar\omega > k_B T$ should we worry about quantum effects \cite{Frenkel96}. As a general rule, the classical treatment is valid if the interparticle distance is much larger than the thermal de Broglie wavelength $\Lambda_\mathrm{th}=h/\sqrt{2\pi m k_B T}$. This is indeed the scenario in our simulations since the distance between C atoms is about $\SI{0.142}{\nano\metre}$ while $\Lambda_\mathrm{th}\simeq\SI{0.030}{\nano\metre}$ at room temperature and $\Lambda_\mathrm{th}\simeq\SI{0.050}{\nano\metre}$ at $T=\SI{100}{\kelvin}$.
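The classical-validity estimate above is straightforward to reproduce. The following short sketch (our own check, not part of the simulation input) evaluates $\Lambda_\mathrm{th}$ for a carbon atom from the formula quoted in the text, using standard values of the physical constants.

```python
import math

h = 6.62607e-34              # Planck constant, J s
kB = 1.38065e-23             # Boltzmann constant, J/K
m_C = 12.011 * 1.66054e-27   # carbon atomic mass, kg

def lambda_th_nm(T):
    """Thermal de Broglie wavelength h / sqrt(2 pi m kB T), in nm."""
    return h / math.sqrt(2.0 * math.pi * m_C * kB * T) * 1e9

# Both values are far below the 0.142 nm carbon-carbon distance,
# so the classical treatment is justified:
print(round(lambda_th_nm(300.0), 3))   # 0.029
print(round(lambda_th_nm(100.0), 3))   # 0.05
```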
NEMD simulations provide a direct method to calculate the lattice thermal conductivity by applying a perturbation to the system and measuring the response. The most suitable choices of perturbation are either imposing a thermal gradient $\nabla T$ across the system or introducing a heat flow $\vc{J}$. Throughout this work we consider the latter case, that is, a heat flow is introduced and the subsequent thermal gradient is calculated (further details about this choice will be given in the next section)~\cite{Lukes00,Pei11}. Then, Fourier's law is applied to obtain the lattice thermal conductivity $\kappa$ (because we only address the lattice contribution to the thermal conductivity we omit the subscript hereafter, unless otherwise stated)
\begin{equation}
\vc{J}=\kappa\, \nabla T\ .
\label{fourier}
\end{equation}
The small size of the systems under study poses a question about the validity of Fourier's law at the nanoscale, but a comparison with more elaborate approaches, such as the phonon Boltzmann transport equation, is beyond the scope of the present manuscript. Nevertheless, previous studies on steady-state thermal transport in nanostructures concluded that Fourier's law is essentially exact in the diffusive and ballistic limits (see Ref.~\cite{Kaiser17} and references therein for further details).
In this work, a time step of $\SI{0.5}{\femto\second}$ is used in the simulations and carbon-carbon interactions are described by the adaptive intermolecular reactive bond order (AIREBO) potential~\cite{Stuart00}, which depends not only on the distance between atoms but also on the local atomic environment, and therefore implicitly contains many-body information. This potential has already been successfully employed to study thermal and mechanical properties of graphene \cite{Ng12,Pei11,Yang13}. The initial configuration is first equilibrated at a temperature $T$ (typically the mean target temperature) for $\num{2}\times\num{10}^6$ time steps by keeping the two outermost rows of atoms at each end fixed (gray lines in Figure~\ref{fig1}) while applying a Nos\'e-Hoover thermostat to the rest of the atoms, which are free to move in three dimensions. Then, we introduce a heat flow by adding at each time step a small amount of energy ($+\Delta \varepsilon$) into the hot contact and removing the same amount of energy ($-\Delta \varepsilon$) from the cold contact. This energy addition (subtraction) is done by rescaling the velocity vectors at both contacts. In order to avoid non-linear temperature profiles when $\Delta \varepsilon$ is too large or temperature fluctuations when $\Delta \varepsilon$ is too small, we adjust $\Delta \varepsilon$ so that $\Delta T = T_\mathrm{max}-T_\mathrm{min}\simeq 0.2 T$. Since the value of $\Delta \varepsilon$ is unknown beforehand, it is found by performing iterative simulations and adjusting $\Delta \varepsilon$ at each iteration step to obtain the target $\Delta T/T$ ratio. The system is then switched to the constant volume and constant energy ensemble and we run at least $\num{10}^{7}$ time steps to allow the system to attain a steady state. Once it is reached, the atomic velocities are sampled and time-averaged over $10^7$ time steps.
Then, the system is divided into slices and the temperature within each slice is obtained from the expression $T=(M/3Nk_B)\sum_i \big(\langle v_{i,x}^2 \rangle + \langle v_{i,y}^2 \rangle + \langle v_{i,z}^2 \rangle\big)$, where $M$ is the mass of the carbon atom, $N$ is the number of atoms in each slab, $k_B$ denotes Boltzmann's constant and $\langle v_{i,\mu}^2 \rangle$ stands for the time average of the squared $\mu$-component of the velocity of atom $i$. The temperature gradient $\partial T/\partial x$ is determined from a linear fit and the heat current, defined as the amount of energy transferred per unit time and cross-sectional area, is calculated as
\begin{equation}
J=\frac{1}{Wd}\,\frac{\Delta \varepsilon}{\Delta t}\ ,
\end{equation}
where $\Delta t$ is the time step and $d$ the graphene thickness, taken approximately as the diameter of a carbon atom $d=\SI{0.144}{\nano\meter}$ \cite{Guo09}. Finally, the lattice thermal conductivity is obtained using Fourier's law (equation \ref{fourier}).
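The post-processing just described (a linear fit of the slab temperatures followed by Fourier's law) can be sketched as follows. The temperature profile and energy increment below are synthetic stand-ins chosen only to exercise the formulas; they are not output of the actual simulations.

```python
def linear_fit_slope(x, y):
    """Least-squares slope of y(x)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    num = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    den = sum((xi - xm) ** 2 for xi in x)
    return num / den

def kappa_from_profile(x_m, T_K, dE_J, dt_s, W_m, d_m):
    """kappa = J / |dT/dx| with J = dE / (dt * W * d), all in SI units."""
    dTdx = abs(linear_fit_slope(x_m, T_K))   # K/m
    J = dE_J / (dt_s * W_m * d_m)            # W/m^2
    return J / dTdx                          # W/(m K)

# Synthetic linear profile: 330 K -> 270 K across L_S = 25.4 nm,
# i.e. dT = 0.2 * 300 K as prescribed in the protocol above.
L = 25.4e-9
x = [i * L / 9 for i in range(10)]
T = [330.0 - 60.0 * xi / L for xi in x]

kappa = kappa_from_profile(x, T, dE_J=1.3e-22, dt_s=0.5e-15,
                           W_m=2.5e-9, d_m=0.144e-9)
print(round(kappa, 1))   # ~305.7
```

With these stand-in numbers the sketch returns a conductivity of order $10^2\,$W/(m\,K); the hypothetical energy increment was picked merely to give a physically reasonable magnitude.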
We also tried an alternative method for calculating the conductivity, which consists in maintaining a fixed temperature difference between the contacts and measuring the resulting energy flux. In this case the temperature profiles can be highly nonlinear, with abrupt temperature changes at the contacts (resulting from a mismatch between the dispersion relations of the fixed-temperature regions and the rest of the system~\cite{Shiomi14}). Avoiding these non-physical temperature kinks requires larger contacts, which increases the simulation time considerably, and for this reason the approach was abandoned.
\section{Influence of the contacts}
In this section we analyze the impact of the contact size on our results; in our simulations we found that it needs to be chosen carefully to obtain meaningful values. To this end, we consider an armchair GNR with fixed length $L_S$ and width $W$ and vary the contact length $L_C$ (see Figure~\ref{fig1} for a schematic view). We first plot the temperature profile across the system for a fixed value of $L_C$. Figure~\ref{fig2}(a) shows that it is linear and smooth across the interface between the system and the contacts.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\linewidth]{figure2.jpg}
\end{center}
\caption{(a)~Temperature profile and linear fit for an armchair graphene nanoribbon of width $W=\SI{1.2}{\nano\meter}$ and $L_S=L_C=\SI{8.4}{\nano\meter}$. The gray areas represent the contact regions of length $L_C$. (b)~Lattice thermal conductivity $\kappa$ as a function of the ratio $L_C/L_S$ for the above mentioned nanoribbons.}
\label{fig2}
\end{figure}
Next, we calculate $\kappa$ as a function of the contact length $L_C$, as displayed in Figure~\ref{fig2}(b). We observe that $\kappa$ increases with $L_C$ and tends to saturate as $L_C/L_S$ becomes larger. Figure~\ref{fig2}(a) shows that the thermal gradient spans not only the ``device'' part of the system (the central part of length $L_S$) but also part of the contacts. Hence the effective length of the system is larger than $L_S$ and varies with $L_C$. Therefore, the thermal conductivity is also contact-size dependent because $\kappa$ is a length-dependent quantity in nanometer-sized graphene nanoribbons \cite{Guo09,Park13}. This occurs because the contacts (heat source and heat sink) cannot be considered as isothermal classical boundary conditions. Although the average temperature remains constant, there is a temperature gradient within the contacts. They are part of the system, so that the vibrational modes are determined by the dimensions of the whole structure and not only by the size of the intermediate zone~\cite{Chantrenne04}. To avoid any such length dependence, the lengths of the system and the contacts (cold and hot regions) are fixed throughout this work at $L_C = \SI{12.6}{\nano\meter}$ and $L_S = \SI{25.4}{\nano\meter}$.
\section{Thermal conductivity of graphene nanorings}
\subsection{Role of dimensions}
First, we fix $T=\SI{300}{\kelvin}$ and study how the thermal conductivity is affected by the width of the nanoribbons. Figure~\ref{fig3}(a) shows the thermal conductivity for GNRs and two types of nanorings for widths $W$ up to $\SI{6}{\nano\meter}$. We refer to these as \emph{symmetric} or \emph{asymmetric} nanorings depending on the connection between the nanoring and the nanoribbons forming the leads (see Figure~\ref{fig3}). Our results show that $\kappa$ monotonically increases with the width $W$, both for nanoribbons and nanorings. This is in agreement with previous results~\cite{Guo09,Evans10}, where the same trend was found for armchair nanoribbons at room temperature. It can be understood as follows. Wider nanoribbons have a larger number of vibrational modes while the number of edge-localized modes does not change. Thus, the edge effect decreases and $\kappa$ increases with $W$. At a threshold width, larger than the ones considered in this work, $\kappa$ will reach the value of graphene ($2000$--$4000\,$W/mK~\cite{Pop12}) and then stay constant due to intermode scattering arising in the anharmonic lattice. Although $\kappa$ also increases with $W$ for nanorings, it remains lower than for nanoribbons for all widths $W$ considered here. We attribute this to the mix of different edge types, both armchair and zig-zag, which leads to a mismatch of the vibrational modes~\cite{Mazzamuto11}, and to scattering at the bends~\cite{Li14,Yang13}. We note that introducing further asymmetries in the rings, such as different widths of the ring arms, is expected to increase the scattering and reduce the conductivity even further.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\linewidth]{figure3.jpg}
\end{center}
\caption{(a)~Lattice thermal conductivity as a function of width $W$ for armchair ribbons and rings. (b)~Ratio $\kappa_\mathrm{ring}/\kappa_\mathrm{ribbon}$ as a function of the nanoribbon width $W$.}
\label{fig3}
\end{figure}
Although absolute values of $\kappa$ obtained by NEMD simulations depend on the choice of interatomic potential, boundary conditions, simulated system dimensions and chosen method of imposing heat flux and temperature gradient, our results remain relevant because we are addressing the relative reduction of the thermal conductivity due to the nanostructuring. In the lower panel of Figure~\ref{fig3} the ratio $\kappa_\mathrm{ring}/\kappa_\mathrm{ribbon}$ is plotted, where $\kappa_\mathrm{ribbon}$ is the thermal conductivity of nanoribbons and $\kappa_\mathrm{ring}$ indicates the lattice thermal conductivity of the corresponding symmetric/asymmetric ring of the same nanoribbon width. Our results show that symmetric nanorings reduce the thermal conductivity more efficiently, reaching only $60\%$ of the value for the nanoribbon of the same width for widths up to $\SI{3.5}{\nano\meter}$. This can be explained by the existence of more bends in symmetric rings, leading to stronger scattering and suppression of the thermal conductivity. As the nanoribbon width decreases, the mismatch of vibrational modes at different regions becomes more important and the suppression of $\kappa$ is stronger.
\subsection{Effect of temperature}
In this section we analyze the temperature dependence of $\kappa$. We take $W = \SI{2.5}{\nano\meter}$ as a typical value of the nanoribbon width and vary the temperature from $\SI{100}{\kelvin}$ up to $\SI{1000}{\kelvin}$. Classical NEMD simulations are considered valid near and above the Debye temperature ($T_\mathrm{D} \approx \SI{322}{\kelvin}$ for graphene nanoribbons \cite{Hu09}), where all vibrational modes are fully excited. At lower temperatures, quantum effects cannot be neglected. To mitigate this limitation, a quantum correction was developed following the scheme~\cite{Hu09, Hu09_2}
\begin{equation}
T_\mathrm{MD}=\frac{T_\mathrm{D}}{3}+\frac{2T_\mathrm{Q}^3}{T_\mathrm{D}^2}\int_{0}^{T_\mathrm{D}/T_\mathrm{Q}}\frac{x^2}{e^x-1}\,dx\ ,
\end{equation}
where $T_\mathrm{MD}$, $T_\mathrm{D}$ and $T_\mathrm{Q}$ are the simulation temperature, Debye temperature and quantum-corrected temperature, respectively. The corrected thermal conductivity, $\kappa_\mathrm{D}=\kappa\, dT_\mathrm{MD}/dT_\mathrm{Q}$, is obtained by equating the heat fluxes obtained from Fourier's law in the classical (non-corrected) and quantum systems. This correction has been implemented in many works, including those studying graphene~\cite{Hu09,Hu09_2}. However, other works have questioned the validity of these quantum corrections~\cite{Turney09}. For this reason, in Figure~\ref{fig4} we plot both the quantum-corrected and uncorrected values of the thermal conductivity. There is no quantum correction available at low temperatures (shadowed areas in Figure~\ref{fig4})~\cite{Hu09_2}. Our results show that for the three considered graphene structures, $\kappa$ first increases very quickly with $T$ until it reaches a maximum value and then it slowly decreases. When no quantum correction is considered, the maximum $\kappa$ value is reached at lower temperatures. We also find that low temperatures favor a reduction of the thermal conductivity in rings, both in symmetric and asymmetric configurations (see lower panel of Figure~\ref{fig4}). As before, the reduction is stronger for symmetric rings. For these rings, $\kappa$ is only about $40\%$ of the conductivity of the corresponding nanoribbon for temperatures near $\SI{100}{\kelvin}$, and even for temperatures as high as $\SI{1000}{\kelvin}$ symmetric rings cause a significant decrease of the thermal conductivity ($\kappa_\mathrm{ring}<0.8\,\kappa_\mathrm{ribbon}$).
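As an aside, the correction is easy to evaluate numerically. The sketch below computes $T_\mathrm{MD}(T_\mathrm{Q})$ from the integral above by the trapezoid rule, together with the factor $dT_\mathrm{MD}/dT_\mathrm{Q}$ entering $\kappa_\mathrm{D}$; the grid size and finite-difference step are arbitrary choices.

```python
import numpy as np

T_D = 322.0   # Debye temperature of graphene nanoribbons (K), from Ref. [Hu09]

def t_md(t_q, n=20000):
    """Simulation temperature T_MD for a given quantum temperature T_Q,
    evaluating the Debye-type integral of the correction scheme numerically."""
    x = np.linspace(1e-8, T_D / t_q, n)
    y = x**2 / np.expm1(x)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
    return T_D / 3.0 + 2.0 * t_q**3 / T_D**2 * integral

def correction_factor(t_q, h=0.1):
    """dT_MD/dT_Q, the factor multiplying kappa in kappa_D = kappa dT_MD/dT_Q."""
    return (t_md(t_q + h) - t_md(t_q - h)) / (2.0 * h)

print(round(t_md(1000.0), 1))          # approaches T_Q from above at high T
print(round(correction_factor(300.0), 3))
```

At high $T_\mathrm{Q}$ the mapping reduces to $T_\mathrm{MD}\simeq T_\mathrm{Q}+T_\mathrm{D}^2/(24\,T_\mathrm{Q})$, so the correction factor tends to unity, as expected.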
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\linewidth]{figure4.jpg}
\end{center}
\caption{(a)~Lattice thermal conductivity as a function of temperature for armchair nanoribbons and nanorings with $W=2.5$~nm. Solid lines represent the quantum corrected value of the conductivity for each structure. There is no quantum correction available for temperatures within the shadowed area. (b)~Ratio $\kappa_\mathrm{ring}/\kappa_\mathrm{ribbon}$ as a function of temperature $T$.}
\label{fig4}
\end{figure}
\subsection{Edge disorder and functionalization}
Thus far, we have considered ideal nanostructures, while imperfections or disorder can clearly affect the thermoelectric response. There exist different sources of disorder, such as charged impurities in the substrate, native defects and imperfections of the edges. The impact of the latter on the lattice thermal conductivity will probably be rather significant, especially in nanoscale systems. Following Refs.~\cite{Munarriz11,Munarriz12}, in order to estimate a possible impact of the edge disorder on the heat current, we consider disordered samples in which we delete carbon atoms from the edges. To do so, carbon atoms are randomly removed from the zigzag edges with some given probability $p$. To avoid dangling atoms in the armchair edges, pairs of neighbor atoms are removed with the same probability. We find that the thermal conductivity is almost independent of the exact position of the removed atoms as long as the probability $p$ remains the same. Even so, results are averaged over five realizations of the disordered sample.
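The removal protocol just described can be sketched as follows; the atom labels are purely illustrative stand-ins (a real implementation would act on the atom lists of the MD input file).

```python
import random

def remove_edge_atoms(zigzag_edge, armchair_pairs, p, seed=0):
    """Toy sketch of the edge-disorder protocol: each zigzag edge atom is
    deleted with probability p, while armchair edge atoms are deleted in
    neighbor pairs with the same probability to avoid dangling atoms."""
    rng = random.Random(seed)
    kept_zz = [atom for atom in zigzag_edge if rng.random() >= p]
    kept_ac = [pair for pair in armchair_pairs if rng.random() >= p]
    return kept_zz, kept_ac

zz = list(range(100))                          # 100 zigzag edge atoms (labels)
ac = [(i, i + 1) for i in range(0, 100, 2)]    # 50 armchair neighbor pairs
kept_zz, kept_ac = remove_edge_atoms(zz, ac, p=0.2)
print(len(kept_zz), len(kept_ac))
```

Averaging over several seeds corresponds to the five disorder realizations used in the text.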
Figure~\ref{fig5}(a) shows the thermal conductivity of the nanoribbons and nanorings as a function of the probability $p$ when $W = \SI{2.5}{\nano\meter}$ and $T = \SI{300}{\kelvin}$. All thermal conductivities are normalized to the thermal conductivity of the corresponding perfect nanostructure, denoted by $\kappa_0$. Our results show that rough edges decrease $\kappa$ in the three studied nanostructures. This is in good agreement with the results reported in Refs.~\cite{Savin10,Evans10}, where it was found that rough edges can cause a suppression of the thermal conductivity in nanoribbons due to the scattering of vibrational modes. Furthermore, Figure \ref{fig5}(a) shows that nanoribbons are the most affected, while symmetric rings are the least affected. It is worth mentioning that $\kappa_\mathrm{ring}(p) < \kappa_\mathrm{ribbon}(p)$ continues to be true in spite of the already marked decrease observed for $\kappa_\mathrm{ribbon}(p)$ when increasing $p$. Figure \ref{fig5}(b) demonstrates that the thermal conductivity in symmetric rings is about $70\%$ of the corresponding ribbon conductivity for $p=20\%$ (which is the largest probability of removal considered in this work).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.7\linewidth]{figure5.jpg}
\end{center}
\caption{(a)~Lattice thermal conductivity as a function of the probability of removal ($p$) for $W=2.5$~nm and $T=300$~K. All thermal conductivities are normalized by the thermal conductivity of the corresponding perfect ribbon or ring ($\kappa_0$). (b)~Ratio $\kappa_\mathrm{ring}/\kappa_\mathrm{ribbon}$ as a function of the probability of removal ($p$).}
\label{fig5}
\end{figure}
One major issue regarding heat transport in narrow GNRs is to elucidate to what extent chemical functionalization would affect the results. Among various functional groups, hydrogen has attracted considerable interest in recent years (see Ref.~\cite{Pei11} and references therein). In order to answer this question, we have conducted further NEMD simulations of GNRs and rings with edges saturated by hydrogen atoms. We found that the thermal conductivity slightly decreases as compared to open GNRs and rings of the same size. This finding seems consistent with NEMD simulations carried out in hydrogenated GNRs~\cite{Pei11} and GNRs with hydrogen termination~\cite{Evans10}. The decrease of the thermal conductivity is larger in rings than in GNRs because of the larger edge length of the former. However, in both cases the reduction of the thermal conductivity is not significant and the effects of hydrogen-saturated edges can be safely neglected.
\section{Conclusions}
We have studied the lattice thermal conductivity of graphene nanorings and nanoribbons by means of NEMD. We found a significant reduction of the thermal conductivity $\kappa_\mathrm{lat}$ in symmetric rings, especially for narrow widths and low temperatures, as compared to uniform GNRs. Even at temperatures as high as $\SI{1000}{\kelvin}$, our results show a substantial decrease of $\kappa_\mathrm{lat}$. The impact of rough edges on heat transport was also addressed and we concluded that it is higher in nanoribbons. Nevertheless, the thermal conductivity of disordered nanorings is considerably smaller than that of nanoribbons at the same magnitude of disorder. Therefore, nanorings present two main advantages for exploiting their thermoelectric properties as compared to nanoribbons. First, quantum interference effects enhance the ratio $\sigma/\kappa_\mathrm{el}$ and make it much larger than the universal value predicted by the Wiedemann-Franz law in ohmic metals. Second, the scattering of vibrational modes at bends yields a strong reduction of the lattice thermal conductivity $\kappa_\mathrm{lat}$, which can probably be further decreased in the case of more irregular rings having arms of different widths or lengths.
\section*{Acknowledgments}
The authors are grateful to R.\ Brito for helpful discussions. M.\ S.-B.\ thanks the Theoretical Physics Group of the University of Warwick for the warm hospitality. Work at Madrid has been supported by MINECO under grants MAT2013-46308, MAT2016-75955 and MAT2016-63955-R. Calculations were performed at the \textit{Cluster de C\'{a}lculo de Alta Capacidad para T\'{e}cnicas F\'{\i}sicas}, funded by the Universidad Complutense and the EU under the FEDER program. UK research data statement: All data accompanying this publication are directly available within the publication.
\section{Introduction}
The discovery of accelerating expansion of the universe by
the observations of Type Ia supernovae in 1998 \cite{acc1,acc2} motivated the search for dark energy
and modified gravity. The gravitational force decays at scales larger than $m^{-1}$ if the graviton has
a mass $m$, so massive gravity may be used to explain the cosmic acceleration. Naively, the mass of the graviton
should be very small so that gravity remains approximately a long-range force; therefore,
it is expected that the graviton mass is of the order of the Hubble scale, $m\sim H_0$.
Dvali, Gabadadze and Porrati proposed that general relativity is modified
at the cosmological scale \cite{dgp}. In this model, there is a continuous tower of massive
gravitons. The first attempt at a theory of gravity with a massive graviton
was made by Fierz and Pauli \cite{pauli}. However, the linear theory with the Fierz-Pauli mass is in
contradiction with solar system tests \cite{vdvz1,vdvz2}. Recently, de Rham, Gabadadze and Tolley
introduced a nonlinear theory of massive gravity \cite{massgrav} that is free from Bouldware-Deser ghost \cite{bdghost,hassan}.
The cosmological solutions for massive gravity were sought in \cite{amico,koyama1,koyama2,gumruk11,gratia12,kobayashi12,volkov12,mukohyama,tolley,langlois12,vonStrauss:2011mq,Berezhiani:2011mt,Akrami:2012vf,Koyama:2011wx,Motohashi:2012jd,Tasinato:2012ze}.
The first homogenous and isotropic solution was found for spatially open universe in \cite{gumruk11} and the massive graviton
term is equivalent to a cosmological constant. The same solutions were then found for spatially open and closed universe
in \cite{gratia12,kobayashi12}. In addition to the equivalent cosmological constant solution, more general cosmological solutions were also found in \cite{tolley,langlois12}
by taking the de Sitter metric as the reference metric.
We follow the method of \cite{tolley,langlois12} and propose a new approach to find more general cosmological solutions.
\section{massive gravity}
The theory of massive gravity is based on the following action \cite{massgrav}
\begin{equation}
\label{action}
S=\frac{M_{pl}^2}{2}\int d^4x\sqrt{-g}(R+m_g^2\mathcal{U})+S_m,
\end{equation}
where $m_g$ is the mass of the graviton, the mass term
\begin{equation}
\label{massv}
\mathcal{U}=\mathcal{U}_2+\alpha_3\mathcal{U}_3+\alpha_4\mathcal{U}_4,
\end{equation}
\begin{gather}
\label{massv2}
\mathcal{U}_2=[\mathcal{K}]^2-[\mathcal{K}^2],\\
\label{massv3}
\mathcal{U}_3=[\mathcal{K}]^3-3[\mathcal{K}][\mathcal{K}^2]+2[\mathcal{K}^3],\\
\label{massv4}
\mathcal{U}_4=[\mathcal{K}]^4-6[\mathcal{K}]^2[\mathcal{K}^2]+8[\mathcal{K}^3][\mathcal{K}]-6[\mathcal{K}^4],
\end{gather}
and
\begin{equation}
\label{tensor1}
\mathcal{K}^\mu_\nu=\delta^\mu_\nu-(\sqrt{\Sigma})^\mu_\nu.
\end{equation}
The tensor $\Sigma_{\mu\nu}$ is defined by four St\"{u}ckelberg fields $\phi^a$ as
\begin{equation}
\label{stuckphi}
\Sigma_{\mu\nu}=\partial_\mu\phi^a\partial_\nu\phi^b\eta_{ab}.
\end{equation}
The reference metric $\eta_{ab}$ is usually taken as the Minkowski one.
The cosmological solution was first found in \cite{gumruk11} for an open universe and the mass term
behaves like an effective cosmological constant with
\begin{equation}
\label{efflambda}
\begin{split}
\Lambda_{eff}=&-m^2_g\left(1+3\alpha_3\pm\sqrt{1+3\alpha_3+9\alpha_3^2-12\alpha_4}\right)\times\\
&\frac{(1+9\alpha^2_3-24\alpha_4\pm(1+3\alpha_3)\sqrt{1+3\alpha_3+9\alpha_3^2-12\alpha_4})}{9(\alpha_3+4\alpha_4)^2}.
\end{split}
\end{equation}
The same solution was then found in \cite{gratia12} for a flat universe
by considering an arbitrary spatially isotropic metric and a spherically symmetric ansatz for the St\"{u}ckelberg fields.
In \cite{Motohashi:2012jd}, the authors obtained the solution by assuming isotropic forms for both the physical and reference metrics.
For a general case with positive, negative and
zero curvature, Kobayashi {\it et al.} found the same solution with $\Lambda_{eff}=m^2_g/\alpha$ for the particular choices
of parameters $\alpha_3$ and $\alpha_4$ \cite{kobayashi12},
\begin{equation}
\label{1parm}
\alpha_3=\frac{1}{3}(\alpha-1),\quad \alpha_4=\frac{1}{12}(\alpha^2-\alpha+1).
\end{equation}
In \cite{tolley,langlois12}, the authors assumed that the spacetime metric takes the form
\begin{equation}
\label{frwmetric}
ds^2=g_{\mu\nu}dx^\mu dx^\nu=-N^2(t)dt^2+a^2(t)\gamma_{ij}(x)dx^idx^j,
\end{equation}
with the spatial metric
$$\gamma_{ij}(x)dx^idx^j=\frac{dr^2}{1-kr^2}+r^2(d\theta^2+\sin^2\theta d\phi^2),$$
and generalized the reference metric from Minkowski metric to de Sitter metric,
\begin{equation}
\label{refmetric}
\eta_{ab}d\phi^a d\phi^b=-dT^2+b_k^2(T)\gamma_{ij}dX^idX^j,
\end{equation}
where the St\"{u}ckelberg fields are assumed to be $\phi^0=T=f(t)$, $\phi^i=X^i=x^i$,
so that the tensor $\Sigma_{\mu\nu}$ takes the homogeneous and isotropic form,
\begin{equation}
\label{stuckphi1}
\Sigma_{\mu\nu}={\rm Diag}\{-\dot{f}^2,\ b^2_k[f(t)]\gamma_{ij}\},
\end{equation}
and the functions $b_k(T)$ are
$$b_0(T)=e^{H_c T},\quad b_{-1}(T)=H_c^{-1}\sinh(H_cT),\quad b_1(T)=H_c^{-1}\cosh(H_cT),$$
They then found three branches of cosmological solutions; two of them correspond to the effective cosmological constant (\ref{efflambda})
and exist for spatially flat, open and closed cases. They also found a new solution \cite{tolley,langlois12}
\begin{equation}
\label{masssol1}
\frac{db_k[f]}{df}=\frac{\dot a}{N}.
\end{equation}
For the flat case, $k=0$, substituting the de Sitter function $b_0[f(t)]=e^{H_c f(t)}$ into equation (\ref{masssol1}),
we obtain the effective energy density
and pressure for the massive graviton,
\begin{gather}
\label{effrho1}
\rho_g=-m_g^2M_{pl}^2\left(1-\frac{H}{H_c}\right)\left[3(\alpha_3+4\alpha_4)\frac{H^2}{H_c^2}
-3(1+5\alpha_3+8\alpha_4)\frac{H}{H_c}+6+12\alpha_3+12\alpha_4\right],\\
\label{effp1}
\begin{split}
p_g=&m_g^2M_{pl}^2\left[-3(\alpha_3+4\alpha_4)\frac{H^3}{H^3_c}\left(1+\frac{\dot H}{H^2}\right)
+6+12\alpha_3+12\alpha_4\right.\\
&\left.-(3+9\alpha_3+12\alpha_4)\frac{H}{H_c}\left(3+\frac{\dot H}{H^2}\right)
+(1+6\alpha_3+12\alpha_4)\frac{H^2}{H^2_c}\left(3+2\frac{\dot H}{H^2}\right)\right].
\end{split}
\end{gather}
So when $H=H_c$, $\rho_g=0$.
The Friedmann equations are
\begin{gather}
\label{frweq1}
H^2+\frac{k}{a^2}=\frac{1}{3 M_{pl}^2}(\rho_m+\rho_g),\\
\label{frweq2}
2\dot H+3H^2+\frac{k}{a^2}=-\frac{1}{M_{pl}^2}(p_m+p_g).
\end{gather}
For the flat case, substituting equations (\ref{effrho1}) and (\ref{effp1}) into Friedmann equations (\ref{frweq1}) and (\ref{frweq2}),
we get \cite{tolley,langlois12}
\begin{equation}
\label{frweq4}
\begin{split}
\frac{m_g^2}{H_0^2}\frac{H(z)}{H_c}\left[-(\alpha_3+4\alpha_4)\frac{H^2(z)}{H_c^2}
+(1+6\alpha_3+12\alpha_4)\frac{H(z)}{H_c}-3(1+3\alpha_3+4\alpha_4)\right]\\
=-E^2(z)+\Omega_m(1+z)^{3(1+w_m)}-2\frac{m_g^2}{H_0^2}(1+2\alpha_3+2\alpha_4).
\end{split}
\end{equation}
\begin{equation}
\label{acceq4}
\begin{split}
\frac{\dot H}{H^2}\left\{-2E^2(z)+\frac{m_g^2}{H_c^2}E(z)\left[3(1+3\alpha_3+4\alpha_4)\frac{H_c}{H_0}-2(1+6\alpha_3+12\alpha_4)E(z)\right.\right.\\
\left.\left.+3(\alpha_3+4\alpha_4)\frac{H_0}{H_c}E^2(z)\right]\right\}=3\Omega_{m}(1+w_m)(1+z)^{3(1+w_m)},
\end{split}
\end{equation}
where $E(z)=H(z)/H_0$. The effective equation of state $w_g=p_g/\rho_g$ for the massive graviton is
\begin{equation}
\label{weff4}
w_g=-1-\frac{2E^2(z)\dot{H}/H^2+3\Omega_m(1+w)(1+z)^{3(1+w_m)}}{3[E^2(z)-\Omega_m(1+z)^{3(1+w_m)}]}.
\end{equation}
Since $\rho_g=0$ when $H(z)=H_c$, if $H_c=H_0$ we find that $\Omega_m=1$, which
is inconsistent with current observations; therefore $H_c\neq H_0$. If $H_c<H_0$,
then we cannot recover the standard cosmology $H^2\sim \rho$ in the past unless we fine-tune the value
of $m_g^2/H_0^2$ to be very small.
From equation (\ref{frweq4}), we see that the standard cosmology is recovered when $H(z)\ll H_c$ and $m_g\ll H_0<H_c$.
At very early times, $H(z)>H_c$, the universe evolves according to $H^3 \sim \rho$. If it was radiation dominated in
the very early times, then
the universe evolves faster as $a(t)\sim t^{3/4}$ instead of $t^{1/2}$.
For the special case $\alpha_3=\alpha_4=0$, Friedmann equation is simplified to
\begin{equation}
\label{frweq6}
\left(1+\frac{m_g^2}{H_c^2}\right)E^2(z)-3\frac{m_g^2}{H_cH_0}E(z)+2\frac{m_g^2}{H_0^2}=\Omega_m(1+z)^{3(1+w_m)}.
\end{equation}
At $z=0$, $E(z)=1$, we get
\begin{equation}
\label{conseq1}
\frac{m_g^2}{H_0^2}=-\frac{1-\Omega_m}{(H_0/H_c-2)(H_0/H_c-1)}.
\end{equation}
As discussed above, $H_c/H_0>1$, so $m_g^2$ must be negative for this special case. The sign of $m_g^2$ is
not important because we can always redefine the potential term so that the graviton mass is positive.
Without loss of generality, we assume that $m_g^2=-\beta_1 H_0^2$, and $H_c=\beta_2 H_0$ with $\beta_2>1$.
For the special case $\alpha_3=\alpha_4=0$, equation (\ref{conseq1}) gives
\begin{equation}
\label{conseq3}
\beta_1=\frac{(1-\Omega_m)\beta_2^2}{(2\beta_2-1)(\beta_2-1)}.
\end{equation}
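As a quick numerical check of equation (\ref{conseq3}), take for illustration $\Omega_m=0.27$ and $\beta_2=2.69$ (the best-fit values quoted below); the derived $\beta_1$ then satisfies the $z=0$ Friedmann constraint (\ref{frweq6}) identically.

```python
# Illustrative check of Eq. (conseq3): for assumed Omega_m and beta_2, the
# derived beta_1 makes the z = 0 Friedmann constraint (frweq6) hold exactly.
Omega_m, beta2 = 0.27, 2.69
beta1 = (1 - Omega_m) * beta2**2 / ((2 * beta2 - 1) * (beta2 - 1))

# Eq. (frweq6) at z = 0 (E = 1), with m_g^2 = -beta1 H_0^2 and H_c = beta2 H_0:
lhs = (1 - beta1 / beta2**2) + 3 * beta1 / beta2 - 2 * beta1
print(round(beta1, 2), abs(lhs - Omega_m) < 1e-12)
```

The resulting $\beta_1 \approx 0.71$ matches the best-fit value quoted later in the text.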
In this case, we have only two free parameters $\Omega_m$ and $\beta_2$, and equation (\ref{acceq4}) gives
\begin{equation}
\label{acceq5}
\frac{\dot H}{H^2}=-\frac{3\Omega_m(1+w_m)(1+z)^{3(1+w_m)}}{2\left(1-\frac{\beta_1}{\beta_2^2}\right)E^2(z)+3\frac{\beta_1}{\beta_2}E(z)}.
\end{equation}
If $\beta_2\gg 1$, then $\beta_1=(1-\Omega_m)/2$, and the model becomes the $\Lambda$CDM model.
This is shown in Fig. \ref{fig1} for $\beta_2=20.1$ and $\Omega_m=0.3$.
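Equation (\ref{frweq6}) is quadratic in $E(z)$, so the expansion history can be obtained in closed form. The sketch below solves for the branch with $E(0)=1$ and compares with the $\Lambda$CDM limit at large $\beta_2$; the parameter values are illustrative.

```python
import math

def E_of_z(z, Omega_m=0.3, beta2=20.1, w_m=0.0):
    """Solve the quadratic Friedmann constraint (frweq6) for E(z) = H/H_0
    in the alpha_3 = alpha_4 = 0 case, picking the branch with E(0) = 1."""
    beta1 = (1 - Omega_m) * beta2**2 / ((2 * beta2 - 1) * (beta2 - 1))
    A = 1 - beta1 / beta2**2
    B = 3 * beta1 / beta2
    C = -2 * beta1 - Omega_m * (1 + z)**(3 * (1 + w_m))
    return (-B + math.sqrt(B**2 - 4 * A * C)) / (2 * A)

# For beta2 >> 1 the model should approach LambdaCDM, E^2 = Om(1+z)^3 + 1 - Om:
E_lcdm = math.sqrt(0.3 * (1 + 1.0)**3 + 0.7)
print(round(E_of_z(1.0), 3), round(E_lcdm, 3))
```

With $\beta_2=20.1$ the two expansion histories already agree to within about a percent at $z=1$, consistent with the $\Lambda$CDM degeneracy discussed above.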
Fitting this model to the three year Supernova Legacy Survey (SNLS3) sample of 472 SNe Ia data with systematic errors \cite{snls3},
and the baryon acoustic oscillation (BAO) measurements from the 6dFGS \cite{6dfgs}, the distribution of galaxies \cite{wjp} and the WiggleZ dark
energy survey \cite{wigglez}, we find the best fit values are $\Omega_m=0.27$, $\beta_2=2.69$, $\beta_1=0.71$ with $\chi^2=421.4$.
As discussed above, when $\beta_2\gg 1$, the model becomes the $\Lambda$CDM model which is independent of the value of $\beta_2$,
so $\beta_2$ cannot be constrained from above by the observational data, and we get $\beta_2\ge 1.83$ at $2\sigma$ confidence level.
More detailed observational constraints are presented in \cite{Gong:2012ny}.
\begin{figure}[htp]
\centerline{\includegraphics[width=4in]{magrqzwz.eps}}
\caption{The evolution of the deceleration parameter $q(z)$ and the effective equation of state $w_g$ of massive graviton.
The blue lines are for the model with de Sitter metric as the reference metric and the black lines are
for the model taking $b_k(f)$ as power law form. } \label{fig1}
\end{figure}
\section{General cosmological solutions}
In summary, starting with the homogeneous and isotropic metric (\ref{frwmetric}) and tensor (\ref{stuckphi1}),
we obtain the Friedmann equations (\ref{frweq1}) and (\ref{frweq2})
with the effective energy density and pressure for the massive graviton,
\begin{gather}
\label{effrho2}
\rho_g=\frac{m_g^2M^2_{pl}}{a^3}(b_k[f]-a)\{6(1+2\alpha_3+2\alpha_4)a^2
-(3+15\alpha_3+24\alpha_4)a b_k[f]+3(\alpha_3+4\alpha_4)b_k[f]^2\},\\
\label{effp2}
\begin{split}
p_g=&\frac{m_g^2M^2_{pl}}{a^2}\{[6+12\alpha_3+12\alpha_4-(3+9\alpha_3+12\alpha_4){\dot f}]a^2
-2[3+9\alpha_3+12\alpha_4\\
&-(1+6\alpha_3+12\alpha_4){\dot f}]a b_k[f]
+[1+6\alpha_3+12\alpha_4-3(\alpha_3+4\alpha_4){\dot f}]b_k^2[f]\},
\end{split}
\end{gather}
and the equation of motion for the function $f(t)$ which leads to the three branches of solutions
\begin{gather}
\label{masssol4}
b_k[f(t)]=\frac{(1+6\alpha_3+12\alpha_4\pm\sqrt{1+3\alpha_3+9\alpha_3^2-12\alpha_4})}{3(\alpha_3+4\alpha_4)}a(t),\\
\label{masssol5}
\frac{db_k[f]}{df}=\frac{\dot a}{N}.
\end{gather}
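As a consistency check, one can verify numerically that substituting the flat-space solution $b=aH/H_c$ of equation (\ref{masssol5}) (obtained with $b_0[f]=e^{H_c f}$ and $N=1$) into the general density (\ref{effrho2}) reproduces equation (\ref{effrho1}) for arbitrary parameters; units with $m_g=M_{pl}=1$ are used.

```python
from random import Random

rng = Random(1)

def rho_general(a, b, a3, a4):
    """Eq. (effrho2) with m_g = M_pl = 1."""
    return (b - a) / a**3 * (6 * (1 + 2*a3 + 2*a4) * a**2
                             - (3 + 15*a3 + 24*a4) * a * b
                             + 3 * (a3 + 4*a4) * b**2)

def rho_flat(H_over_Hc, a3, a4):
    """Eq. (effrho1) with m_g = M_pl = 1."""
    x = H_over_Hc
    return -(1 - x) * (3 * (a3 + 4*a4) * x**2
                       - 3 * (1 + 5*a3 + 8*a4) * x
                       + 6 + 12*a3 + 12*a4)

# b = a H / H_c solves db/df = \dot a for b_0[f] = exp(H_c f) in the flat case;
# substituting it into (effrho2) must reproduce (effrho1) for any parameters.
for _ in range(5):
    a, x, a3, a4 = (rng.uniform(0.1, 2.0) for _ in range(4))
    assert abs(rho_general(a, a * x, a3, a4) - rho_flat(x, a3, a4)) < 1e-10
print("consistent")
```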
When we take the solution (\ref{masssol4}), we get the effective cosmological constant solution (\ref{efflambda})
independent of the choice of spatial curvature $k$. For $k=-1$, the solution was obtained in \cite{gumruk11} by taking
the reference metric $\eta_{ab}$ as Minkowski and the same homogeneous and isotropic tensor $\Sigma_{\mu\nu}$ (\ref{stuckphi1}) with $b_k[f(t)]=f(t)$. The same solution (\ref{efflambda}) was obtained in \cite{kobayashi12} for all values of $k$
for the particular parameters (\ref{1parm}), but the tensor $\Sigma_{\mu\nu}$ is not homogeneous and isotropic.
For the flat case $k=0$, Gratia, Hu and Wyman obtained the same cosmological constant solution (\ref{efflambda}) by using
another inhomogeneous and anisotropic tensor $\Sigma_{\mu\nu}$ \cite{gratia12}. Motohashi and Suyama obtained the cosmological
constant solution for the $k=0$ case with isotropic forms for both the physical and reference metrics \cite{Motohashi:2012jd}.
However, the cosmological constant solution (\ref{efflambda}) is just the consequence of the equation of motion of
the function $f(t)$ once we assume the homogeneous and isotropic form for the metric (\ref{frwmetric}) and the tensor $\Sigma_{\mu\nu}$ (\ref{stuckphi1}).
Since the cosmological constant solution (\ref{efflambda}) was obtained by different methods for different special cases, this suggests that
this solution exists for the general case. The method proposed in \cite{tolley,langlois12} not only gives the solution for the general case,
but also gives additional new dynamic solutions.
This suggests that the solution (\ref{masssol5}) should be quite general even without the assumption of the reference metric $\eta_{ab}$ as de Sitter.
Therefore, we propose that more solutions can be found with equations (\ref{frwmetric}) and (\ref{stuckphi1})
by assuming a more general form of $b_k[f(t)]$. Note that the specific form of $\Sigma_{\mu\nu}$ in equation (\ref{stuckphi1})
may be obtained from Minkowski, de Sitter, or isotropic reference metrics.
Following the above argument, we assume a power-law form $b_k[f(t)]=(H_c f(t))^{\gamma/(\gamma-1)}$ with
$\gamma>1$,
then the solution to equation (\ref{masssol5}) is
\begin{equation}
\label{masssoln}
b_k[f(t)]=\left(\frac{a(\gamma-1)H}{\gamma H_c}\right)^\gamma,
\end{equation}
and the effective energy density becomes
\begin{equation}
\label{rhogn}
\begin{split}
\rho_g=&3m_g^2M_{pl}^2\left[3(1+3\alpha_3+4\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^\gamma a^{\gamma-1}\left(\frac{H}{H_c}\right)^\gamma\right.\\
&-(1+6\alpha_3+12\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^{2\gamma} a^{2\gamma-2}\left(\frac{H}{H_c}\right)^{2\gamma}\\
&\left.-2(1+2\alpha_3+2\alpha_4)+(\alpha_3+4\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^{3\gamma} a^{3\gamma-3}\left(\frac{H}{H_c}\right)^{3\gamma}\right].
\end{split}
\end{equation}
The effective pressure of massive graviton is
\begin{equation}
\label{pgn}
\begin{split}
p_g=&m_g^2M_{pl}^2\left\{6(1+2\alpha_3+2\alpha_4)-3(1+3\alpha_3+4\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^\gamma a^{\gamma-1}\left(\frac{H}{H_c}\right)^\gamma
\left[2+\gamma\left(1+\frac{\dot H}{H^2}\right)\right]\right.\\
&+(1+6\alpha_3+12\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^{2\gamma} a^{2\gamma-2}\left(\frac{H}{H_c}\right)^{2\gamma}
\left[1+2\gamma\left(1+\frac{\dot H}{H^2}\right)\right]\\
&\left.-3\gamma(\alpha_3+4\alpha_4)\left(\frac{\gamma-1}{\gamma}\right)^{3\gamma} a^{3\gamma-3}\left(\frac{H}{H_c}\right)^{3\gamma}
\left(1+\frac{\dot H}{H^2}\right)\right\}.
\end{split}
\end{equation}
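The claimed solution (\ref{masssoln}) can be checked numerically: inverting it for $f$ and evaluating $db/df$ by finite differences should reproduce $\dot a/N = aH$. The sample values of $a$, $H$, $H_c$ and $\gamma$ below are arbitrary.

```python
def b_of_f(f, Hc, g):
    """Assumed power-law form b[f] = (H_c f)**(g/(g-1))."""
    return (Hc * f)**(g / (g - 1))

def f_claimed(a, H, Hc, g):
    """Invert Eq. (masssoln): b = (a (g-1) H / (g H_c))**g, H_c f = b**((g-1)/g)."""
    b = (a * (g - 1) * H / (g * Hc))**g
    return b**((g - 1) / g) / Hc

# Check that db/df = a H (i.e. \dot a / N with N = 1) holds on the claimed
# solution, for arbitrary sample values of a, H, H_c and gamma.
a, H, Hc, g = 1.7, 0.9, 0.3, 2.5
f0 = f_claimed(a, H, Hc, g)
h = 1e-6
db_df = (b_of_f(f0 + h, Hc, g) - b_of_f(f0 - h, Hc, g)) / (2 * h)
print(abs(db_df - a * H) < 1e-5)
```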
Again, when the Hubble parameter is in the range $H_0<H(z)<H_c$, the standard cosmology is recovered. To have a long history
of matter and radiation domination, we require $H_0\ll H_c$.
For the special case $\alpha_3=\alpha_4=0$, Friedmann equations are
\begin{equation}
\label{frweqn1}
E^2(z)-\frac{\beta_1}{\beta_2^{2\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^{2\gamma} a^{2\gamma-2}E^{2\gamma}(z)-2\beta_1+
3\frac{\beta_1}{\beta_2^{\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^\gamma a^{\gamma-1}E^\gamma(z)=\Omega_m(1+z)^{3(1+w_m)},
\end{equation}
\begin{equation}
\label{acceqn1}
\frac{\dot H}{H}=-1+\frac{\frac{\beta_1}{\beta_2^{2\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^{2\gamma}a^{2\gamma-2}E^{2\gamma}
-6\frac{\beta_1}{\beta_2^{\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^{\gamma}a^{\gamma-1}E^{\gamma}-E^2+6\beta_1}{
-2\gamma\frac{\beta_1}{\beta_2^{2\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^{2\gamma}a^{2\gamma-2}E^{2\gamma}+3\gamma
\frac{\beta_1}{\beta_2^{\gamma}}\left(\frac{\gamma-1}{\gamma}\right)^{\gamma}a^{\gamma-1}E^{\gamma}+2E^2},
\end{equation}
with
\begin{equation}
\label{constreqn1}
\beta_1=\frac{(1-\Omega_m)\beta_2^{2\gamma}}{\left[\left(\frac{\gamma-1}{\gamma}\right)^\gamma -\beta_2^\gamma\right]\left[\left(\frac{\gamma-1}{\gamma}\right)^\gamma -2\beta_2^\gamma\right]}.
\end{equation}
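As before, equation (\ref{constreqn1}) follows from evaluating the Friedmann constraint (\ref{frweqn1}) at $z=0$ ($a=1$, $E=1$); a quick numerical check with illustrative parameter values:

```python
# Illustrative parameters (assumed values, not fits):
Omega_m, beta2, gamma = 0.3, 20.1, 2.0
c = ((gamma - 1) / gamma)**gamma     # shorthand for ((gamma-1)/gamma)^gamma
beta1 = (1 - Omega_m) * beta2**(2 * gamma) / (
    (c - beta2**gamma) * (c - 2 * beta2**gamma))

# Friedmann constraint (frweqn1) at z = 0 (a = 1, E = 1):
lhs = (1 - beta1 * c**2 / beta2**(2 * gamma)
       - 2 * beta1 + 3 * beta1 * c / beta2**gamma)
print(round(beta1, 3), abs(lhs - Omega_m) < 1e-9)
```

For $\beta_2\gg 1$ the derived $\beta_1$ is already close to the limiting value $(1-\Omega_m)/2$, as stated below.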
In this case, we have three free parameters $\Omega_m$, $\gamma$ and $\beta_2$. Again if $\beta_2\gg 1$,
$\beta_1=(1-\Omega_m)/2$, the model is equivalent to the $\Lambda$CDM model and is independent of the values of $\beta_2$ and $\gamma$.
This is shown in Fig. \ref{fig1} for $\beta_2=20.1$ and $\Omega_m=0.3$.
\section{Conclusions}
In conclusion, more general cosmological solutions which are consistent with
the observational data can be found by taking homogeneous and isotropic form
for both the metric $g_{\mu\nu}$ and the tensor $\Sigma_{\mu\nu}$ without specifying the form of reference metric,
even though the tensor $\Sigma_{\mu\nu}$ may be obtained with Minkowski, de Sitter or isotropic reference metrics.
In addition to the cosmological constant solution, much richer dynamics can be found in these solutions.
The mass of the graviton is of the order of $[(1-\Omega_m)/2]^{1/2}H_0$.
\begin{acknowledgments}
This work was partially supported by
the National Basic Science Program (Project 973) of China under
grant No. 2010CB833004, the National Nature Science Foundation of China under grant Nos. 10935013 and 11175270,
and the Fundamental Research Funds for the Central Universities.
\end{acknowledgments}
\section*{Abstract}
Wikipedia (WP), as a collaborative, dynamical system of humans, is an appropriate subject of social studies. Each
single action of the members of this society, i.e.\ editors, is well recorded and accessible. Using the cumulative data of 34
Wikipedias in different languages, we try to characterize and find the universalities and differences in the temporal activity patterns
of editors. Based on these data, we estimate the geographical distribution of the editors of each WP across the globe. Furthermore, we also
clarify the differences among different groups of WPs, which originate from the variance in cultural and social
features of the communities of editors.
\section*{Introduction}\label{sec:intro}
Relying on the data gathered by recently developed information and communication technologies (ICT), studies on social
systems have entered a new era, in which one is able to track and analyze the behavior of a large number of individuals and the
interactions between them in detail. Among all examples, recent investigations based on cell phone records (calls \cite{karsai2011}
and text messages \cite{wu2010}) and web-based societies and media (web-pages \cite{huberman1999}, movie, news and status sharing sites e.g.,
YouTube.com \cite{szabo2010},
digg.com \cite{wu2007} and twitter.com \cite{huberman2009}) have opened very interesting insights into features of collective
and cooperative dynamics of human systems.
Wikipedia (WP), a free, web-based encyclopedia, which is entirely written and edited by volunteers from all around the world, has also attracted
the attention of many researchers recently \cite{voss2005,wilkinson2007,ortega2008,ratkiewicz2010} (for a recent review, see \cite{park2011}). To study WP, understand and model
its evolution\cite{voss2005}, coverage\cite{holloway2007}, conflicts or editorial wars\cite{sumi2011a,sumi2011b}, user reputation\cite{javanmardi2010}
and many other issues, we should obtain basic information about the community of its editors, i.e.,
their age, education level, nationality, individual editorial patterns, fields of interest and many other aspects.
Yet, there have been few systematic and
unbiased studies in this direction. The main barrier here is privacy, which prohibits any attempt to obtain personal data of committed editors.
There are two ways of contributing to Wikipedia. The first way is editing as an unregistered user; in this case all the edits are
identified by the IP address of the editor, and therefore it becomes easy to locate the editor and collect some geographical
information about him/her. But most of the editors take the second way, which is editing under a registered user name; this hides the
real-world identity and IP address of the editors and therefore is a much more secure way of contributing. Moreover, contributions of
such serious editors are identified and unified under one single nickname, irrespective of which IP address they use to connect to the
network, and can be counted as a measure of
maturity in the promotion processes. Cohen has extracted geographical data from
IP addresses of unregistered editors of the English WP, integrated them over time and concluded that about $80\%$ of edits on the English WP originate
from a few English-speaking countries with a high Internet penetration rate, i.e., $60\%$ from the USA, $12\%$ from the UK,
$7\%$ from Canada and $5\%$ from Australia \cite{cohen2010}.
However, contributions from unregistered editors are
limited to less than 10 percent for many WPs (see Table~\ref{tab:stat}). Moreover, the rather small sample of unregistered users
does not represent the features of average users, as will be discussed later. Therefore, indirect methods to
locate editors or to obtain
any kind of information about the community are highly desirable.
One of our aims is to show that using the temporal patterns of WP users, conclusions about the geographical distribution of (registered) editors can be drawn.
Recently much effort has been devoted to describing and understanding the extreme temporal inhomogeneity of human activities, represented by
the burstiness of activities and the fat-tailed distribution of time intervals between
events \cite{barabasi2005}. While the circadian and other periodic characteristics of temporal patterns of human activities cannot account for the whole richness of bursty behavior \cite{jo2011}, they remain important for understanding the entire dynamics of the systems. These regularities
are induced by circadian and seasonal cycles of nature \cite{panda2002} on the one hand and by cultural aspects on the other. Consequently,
studies on diurnal patterns of Internet traffic have brought interesting information about individual habits of Internet usage
in different societies \cite{spennemann2006,spennemann2007}.
In this paper our focus is on such cyclic behavior, while investigations on other aspects of temporal inhomogeneities like short time bursty behavior and
inter-event interval distributions are reported elsewhere \cite{yasseri-po}.
West et~al.\ have tried to make use of the diurnal characteristics of edits to detect vandalism and destructive edits \cite{west2010}. Their study was
again restricted to tracking positive and negative edits from
unregistered editors, for which they found that most of the ``offending edits'' are
committed during working hours and on working days rather than at night and on
weekends.
In the admin-ship of Wikipedia it is also becoming common to use the personal temporal fingerprint of editors as a side-tool to
detect and prevent sock-puppetry,\footnote{WP editors are generally expected to edit using only one account. Sock puppetry is the use of
multiple accounts to deceive other editors, disrupt discussions, distort consensus,
avoid sanctions, etc., which is forbidden according to WP rules.}
although this can only be done with full respect for the privacy policies of the Wikimedia
Foundation.\footnote{\url{http://wikimediafoundation.org/wiki/Privacy_policy}}
In this work, we first characterize the circadian pattern of edits on Wikipedia
by analyzing massive data from 34
WPs; we then introduce a novel method to estimate the geographical distribution of
the editors of large international WPs, e.g., English, simple English, Spanish, etc.
Furthermore, we analyze the temporal behavior of editors on longer time scales, i.e., weekly patterns, and report on significant differences
between various societies.
\section*{Methods}\label{sec:data}
This work is carried out on 34 WPs selected from the largest ones with
respect to the number of articles, i.e., those with more than 100,000 articles.\footnote{The two Wikipedias of Volap\"{u}k and
Waray-Waray are excluded from the list due to their small numbers of speakers and Wikipedians and considering that many of their articles are robotically generated.
The simple English Wikipedia is also included in the list, although it contains only around 70,000 articles.}
Within the sample, the total numbers of edits and editors vary between 3~M and 455~M and between 46~k and 14~M, respectively.
In Table~\ref{tab:stat} some statistics about the WPs under investigation are reported.
We considered every single edit performed on each WP and, using the timestamps assigned to the edits,
calculated the overall activity of users as a function of the time of day and the day of the week. To examine the universality
of circadian activity patterns among editors of all the different languages, we assumed a local time offset for
each language. Clearly, some languages are not spoken only in one country or one time zone, e.g.,
Spanish, Arabic, etc., whereas others are very localized in a specific time zone, e.g., Italian, Hungarian, etc.
For the first sort of languages, we initially considered the time offset of the best-known origin of the language.
For the special cases of the English and simple English Wikipedias, we initially considered an offset corresponding to
US Central Time. In the ninth column of Table~\ref{tab:stat}, the time offset assigned to each language is reported.
Note that, due to the lack of information such as the IP addresses of users, these initial assumptions about the origin of edits and the
corresponding time offsets cannot be improved at this step.
It is one of our goals to implement a method, based on the average behavior of WP editors, which is able to determine the percentage of contributions coming from different geographic units. This method will be described in the next section together with the empirical observations.
\section*{Results}\label{sec:results}
\subsection*{Circadian patterns}\label{subsec:circadian}
We calculated the normalized number of edits for each of the 34 WPs in consecutive one-hour time windows over
the 24 hours of the day. The relative activity level of each time window is calculated by dividing the number of edits within the time window by the
total number of edits.
This way the circadian activity patterns are created, as
depicted in Figure~\ref{fig:daily}(a). Most WPs show a universal pattern:
a minimum of activity at around 6~A.M., followed by a rapid increase up to noon. The activity shows a slight increase
until around 9~P.M., when it starts to decrease during the night.
Qualitatively similar shapes are observed for other kinds of human activities, e.g., cell phone calls and text messages \cite{jo2011}, and
Internet instant messaging \cite{pozdnoukhov2010}.
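The binning procedure described above can be sketched in a few lines; the edit timestamps and the UTC offset used here are hypothetical placeholders, not data from the study.

```python
from collections import Counter
from datetime import datetime, timedelta

def circadian_pattern(timestamps, utc_offset_hours):
    """Fraction of edits falling into each of the 24 one-hour bins,
    after shifting UTC timestamps to an assumed local time."""
    shifted = [t + timedelta(hours=utc_offset_hours) for t in timestamps]
    counts = Counter(t.hour for t in shifted)
    total = len(timestamps)
    return [counts.get(h, 0) / total for h in range(24)]

# Hypothetical edit timestamps (UTC) for illustration only.
edits = [datetime(2011, 5, 1, 12, 30), datetime(2011, 5, 1, 13, 5),
         datetime(2011, 5, 2, 12, 45), datetime(2011, 5, 2, 3, 10)]
pattern = circadian_pattern(edits, utc_offset_hours=1)
```

Shifting by the assumed offset before binning is what makes the patterns of different languages comparable.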
\subsubsection*{Deviations}\label{subsubsec:dev}
Among all 34 investigated WPs, there are four which significantly deviate from all the others with respect to the circadian patterns.
In Figure~\ref{fig:daily}(c) and (d) the diurnal activity of these four outliers, the Spanish, Portuguese,
English and simple English WPs, is shown. In the case of Spanish and Portuguese, the main difference
from the rest of the WPs is a slight shift to the right (later times). Bearing in mind that Spain and Portugal
both use local times with a larger offset than other countries at the same longitude,
this comes as no surprise. Besides that, the rather large number of speakers from Latin America not only
favors this shift but also flattens the overall amplitude of the diurnal pattern (this
will be discussed later in more detail). Finally, the cultural features of those two countries
might contribute to this observation.
In the case of the English and simple English WPs, for simplicity, we assumed the reference to be UTC-6 (which corresponds to the Central Time Zone of the US). Naturally
the deviations from the universal pattern are very strong, indicating the complex origin of the English WP. We will come back to this point later.
To better illustrate the deviation from an average circadian pattern, we calculated the weighted average of the curves in Figure~\ref{fig:daily}(a),
where each WP's pattern is weighted by its total number of edits. The average curve is depicted in Figure~\ref{fig:daily}(b). Now we can calculate
the difference of each WP from this average pattern, $\mathcal{D}(t)$, at different times of the day $t$. According to the shape of $\mathcal{D}(t)$ and by
maximizing the cross-correlation coefficient, almost all WPs
can be grouped into 4 categories, as in Figure~\ref{fig:dev}. Two of these categories, Figure~\ref{fig:dev}(a) and (c), consist of WPs that have
less activity during the night compared to the average pattern.
These WPs are all in European languages that are spoken in single, localized regions, and therefore the
minimum of activity of their editors is deeper than for the others. In Figure~\ref{fig:dev}(b), a category consisting of
Asian languages is shown. These WPs are more active during the night and less active during working hours compared to the average.
In the last category, shown in
Figure~\ref{fig:dev}(d), a higher activity during the night and a lower activity during working hours is a clear sign of an extended distribution of contributors
across different time zones. Arabic, Persian and Chinese belong to this category, in addition to Spanish, Portuguese, English and simple English (not shown).
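A minimal sketch of how the weighted average pattern and the deviation $\mathcal{D}(t)$ can be computed; the two example patterns and edit counts below are hypothetical.

```python
def average_pattern(patterns, weights):
    """Average of per-WP circadian patterns, weighted by total edit counts."""
    total = sum(weights)
    return [sum(w * p[h] for p, w in zip(patterns, weights)) / total
            for h in range(24)]

def deviation(pattern, avg):
    """D(t): difference between one WP's pattern and the weighted average."""
    return [p - a for p, a in zip(pattern, avg)]

# Two hypothetical normalized patterns: one flat, one concentrated on daytime.
flat = [1.0 / 24] * 24
day_heavy = [1.0 / 12 if 8 <= h < 20 else 0.0 for h in range(24)]
avg = average_pattern([flat, day_heavy], weights=[100, 300])
d = deviation(flat, avg)
```

Since both patterns are normalized, the deviation curve always sums to zero; its sign at each hour tells whether a WP is above or below the average activity.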
Another way to look at the locality of the languages is to quantify the {\it sleep depth}.
The sleep depth is defined as the
difference between the maximum and the minimum of the activity of the users of each language and
may be taken as a measure of the locality of the global distribution of the
editors of the corresponding language. In the last column of Table~\ref{tab:stat}, the calculated depth values are
reported. These values vary from 2.3 for simple English to 5.6 for Italian. Among the WPs with a small sleep
depth are Arabic, Indonesian, Persian and English. The average sleep depth for the category of Fig.~\ref{fig:dev}(d) is $2.8$ with a standard deviation of $0.4$.
Among the languages with a large sleep depth are Italian, Hungarian, Polish, Catalan, and Dutch. These are all languages that are
mostly spoken in a narrow area of the world and therefore are very localized in time zones.
The average sleep depth for the category of Fig.~\ref{fig:dev}(c) is $4.9$ with a standard deviation of $0.4$.
It is also worth mentioning that although Spanish and Portuguese are both widely spoken across different areas and time zones,
the sleep depths of both lie in the middle range (4.4 and 4.2, respectively). For a more precise interpretation we estimate the
share of editors from different areas for each WP in the next section.
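The sleep depth defined above amounts to a single expression; the example activity curve (in percent of daily edits per hour) is hypothetical.

```python
def sleep_depth(pattern):
    """Difference between the maximum and minimum of an activity curve;
    larger values indicate a community localized in fewer time zones."""
    return max(pattern) - min(pattern)

# Hypothetical percentage-activity curve of a strongly localized community:
# low activity for six night hours, high activity otherwise.
localized = [1.0] * 6 + [6.0] * 18
```

For a community spread evenly over all time zones the curve flattens and the depth approaches zero.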
\subsubsection*{Geographical distribution of editors}\label{subsubsec:spat}
As mentioned above, due to privacy policy issues, there is no access to locating information of registered editors, such as IP addresses. There
are, however, studies considering only contributions by unregistered users, which give a very rough image of the real distribution
of editors around the globe \cite{cohen2010}.
We aim at a better method by decomposing the overall activity pattern of each WP into basic elements, which are assumed to be representative of
contributions purely originating from a certain time zone. For this purpose, we averaged the activity patterns of the 10 WPs with the deepest sleep to obtain
a smooth curve which has the features of the collective activity of users in synchrony (hereafter called the standard curve $\mathcal{S}(t)$).
In the next step, we assume that the activity pattern $\mathcal{A}(t)$ of a WP with a wide spatial distribution of editors can be
simulated by a superposition of $N$ standard
curves with different time shifts $\tau_i$ and different weights $w_i$ for $i=1$ to $N$,
\begin{equation}
\mathcal{A}(t) = \sum\limits_{i=1}^N w_i \mathcal{S}(t-\Delta \tau_i)
\end{equation}
where $\Delta \tau_i$ is the difference between $\tau_i$ and the assumed time offset of the language (see Table~\ref{tab:stat}).
In general, one could minimize the error of the simulated activity pattern for each WP over $N=24$ different offsets and find the optimal weighting.
Clearly, the weights are proportional to the volume of contributions from each time zone. Following this outline we performed the optimization, but in a more
supervised manner. We restricted $N$ to the number of different time zones that are relevant candidates for being an origin of contributions; e.g., we excluded
the time zones of uninhabited areas of the Earth. Furthermore, to reduce the complexity of the calculations and also to avoid multiple solutions, we reduced $N$ to
the number of areas with a considerable number of speakers of the language. In many cases, by a superposition of $N$ between 3 and 6 standard curves,
we could fit the empirical data with a high correlation coefficient between the simulated and empirical data sets (see Figure~\ref{fig:dc}),
whereas taking larger $N$ does not decrease the error
and only leads to more zero $w_i$'s. Finally, by a proper combination of demographic information and optimization techniques, we
estimated the shares of different regions for 9 different WPs. These estimations are summarized in Figure~\ref{fig:estimate}.
Though in some cases the error function is rather flat around its minimum, leading to a relatively large tolerance in the calculated weights, the existence of multiple separate minima is
prevented by applying the demographic restrictions.
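The decomposition of Eq.~(1) can be illustrated with a small grid search over the weights of cyclically shifted standard curves. The standard curve, the candidate offsets and the step size here are hypothetical; the paper instead uses a supervised optimization with demographic constraints.

```python
from itertools import product

def shift(curve, d):
    """S(t - d) for a 24-point curve with cyclic (hourly) time."""
    return [curve[(t - d) % 24] for t in range(24)]

def fit_weights(activity, standard, offsets, step=0.05):
    """Grid search over weights summing to 1 that minimizes the squared
    error between the activity pattern and the superposition of Eq. (1)."""
    basis = [shift(standard, d) for d in offsets]
    grid = [i * step for i in range(round(1 / step) + 1)]
    best_err, best_w = float("inf"), None
    for ws in product(grid, repeat=len(offsets) - 1):
        if sum(ws) > 1:
            continue
        w = list(ws) + [1 - sum(ws)]
        err = sum((activity[t] - sum(wi * b[t] for wi, b in zip(w, basis))) ** 2
                  for t in range(24))
        if err < best_err:
            best_err, best_w = err, w
    return best_w, best_err

# Hypothetical standard curve (triangular, peaked in the afternoon), normalized.
raw = [max(0.0, 1.0 - abs(h - 14) / 10.0) for h in range(24)]
standard = [v / sum(raw) for v in raw]
# Synthetic activity: 60% local editors, 40% editors seven hours to the west.
activity = [0.6 * a + 0.4 * b
            for a, b in zip(shift(standard, 0), shift(standard, -7))]
w, err = fit_weights(activity, standard, offsets=[0, -7])
```

On this synthetic input the grid search recovers the generating weights; on real data the restriction of the candidate offsets plays the role of the demographic constraints mentioned above.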
\subsection*{Weekly patterns}\label{subsec:weekly}
We also considered the activity of editors during the week and its dependence on the day of the week. These results are shown in
Figure~\ref{fig:weekly}. According to the weekly pattern of activity, we could categorize 28 out of 34 WPs into 4 different
classes belonging to two main categories of ``working days'' and ``weekend'' activity. In the upper-left panel of
Figure~\ref{fig:weekly}, those WPs are shown which have the highest activity of editors during the working days of the week.
Among them are English, simple English, German, Spanish, Portuguese and Italian. In the rest of the WPs, a large part of the edits
is done during weekends. In the class of the Polish, Dutch, Korean and Japanese WPs (upper-right panel of Figure~\ref{fig:weekly}),
equal activities are shown on Saturdays and Sundays, whereas in the class of the Danish, Swedish, Norwegian and Finnish WPs, editors
have very low activity on Saturdays. The last class of the ``weekend'' WPs consists of the Arabic and Persian WPs, in which Fridays
are also active days in addition to Saturdays and Sundays. The latter is no surprise, considering that Friday is a public
holiday in all of the countries of origin, along with Saturday in most of them.
\section*{Discussion}\label{sec:discussion}
The novel approach to the collective characteristics of the community of editors of WPs described above enables us, for the first time, to shed light
on less studied aspects of Wikipedia. Based on the reported results, many basic questions and concerns about the whole Wikimedia project
can be investigated. Knowing the spatial distribution of the editors of a certain WP would be a reliable basis for explaining, to a good extent, specific
biases in WP articles, heterogeneous topical coverage, and the origins of conflicts and editorial wars. In addition, these results
raise new questions and puzzles as well.
Considering the large population of
English speakers in North America compared to Europe, and the fact that the Internet
is most developed in North America, the estimated share of only about one half for North America in the English WP is a puzzle, which definitely
needs further multidisciplinary studies. In the case of the simple English WP, the European share is even larger and the share of the Far East is increased, which is not surprising, since this WP is
meant to be of use to non-native speakers (though not necessarily written by them). Note that the previous results of
\cite{cohen2010} and \cite{west2010} are partially supported by the results reported here.
For instance, a share of less than $10\%$
for Australian editors in the
English WP is reported in both articles. Unfortunately, there is no explicit focus on the contributions from European countries in the mentioned works, and
it seems that the large amount of effort by European editors was overlooked. We have, however, repeated the measurements on the
IP addresses of unregistered users more generally for different WPs, by following every single edit of this type to locate
the editor. First, we constructed the ``precise'' activity pattern of unregistered users, as shown in
Fig.~\ref{fig:daily}(b). The activity pattern of unregistered users clearly has a deeper minimum at night and a higher maximum
during working hours, compared to most of the other curves. Unregistered users contribute to WP occasionally and mostly only
with a few edits from the same IP address. To be actively editing even at night, one must be extremely committed to WP;
therefore
the deep sleep of the activity curve of unregistered users comes as no surprise. We believe the sample of unregistered users
is more representative of the activity of WP readers, who edit rarely, as they notice the need for tiny modifications here and there
while reading the articles, than of committed users, who basically write the main body of the articles.
The percentage of contributions by
unregistered users is measured and reported in Table~\ref{tab:stat} for all 34 WPs. This value varies between 4 for the Slovenian
and 37 for the Japanese WP. We compared the results for the
geographical distribution of editors obtained
by locating them via their IP addresses to the previous results described above and observed that both methods mainly
give similar results for the WPs with a rather large share of unregistered users, whereas they deviate for WPs with a
small share of unregistered users. Finally, one should consider the fact that committed users sometimes edit without
using their registered user name, to vandalize or to edit specific controversial articles without leaving any trace which
may cause trouble
for the original user name. In such cases, most of the time an ``open proxy'' (with an arbitrary IP address)
is used to hide the real IP address of the editor.
This makes the analysis based on IP addresses even less reliable.
Another interesting part of the results concerns the Persian WP. Although more than $70 \%$ of native Persian speakers
live in Iran and the rest in closely neighboring countries, the corresponding WP appears in the top list of
WPs with a small sleep depth. In addition, the estimated share of edits from Iran is only
about $45 \%$. This could be due to the following facts. 1) Strong restrictions on Internet usage have been applied by the Iranian government over the years
as a consequence of socio-political issues,\footnote{Wikipedia is one of the few remaining
unbanned Web~2.0 web sites currently in Iran.}
which makes it difficult to contribute to WP using Iran-based ISPs. 2) Iran has a high rate of emigration of
students and scholars. This has led to the formation of large intellectual communities outside Iran, which might be
responsible for a considerable amount of edits in the Persian WP.
The low level of contribution to the French WP by North American editors and to the Arabic WP by Egyptian editors could have roots
in the differences between the spoken dialects and the standard languages.
Though both languages (French and Arabic) are among the official languages in the mentioned regions, it seems that the divergence
between dialects plays an important role in suppressing contributions to WP. It should be mentioned that there is a separate WP in the local dialect of Egypt
(the {\it Egyptian Arabic Wikipedia}), and that there has recently been an unsuccessful effort to launch a {\it Canadian French Wikipedia}. Therefore we think that
the estimations of contributions could be of interest for the WP community too and could inform the decision-making process for a new WP in a local dialect.
Clearly, the presented method also has its limitations. For instance, accessing information about the distribution of editors at different
latitudes is impossible by considering only the time stamps. Moreover, the resolution of the regional estimations is not very high. Because of many
factors, e.g., the application of summer time in many countries, the method cannot claim a resolution higher than a one-hour stripe. For example,
in the case of the English WP, the supervised optimization results in a ratio of 3 for the weight of GMT+1 over GMT+0, corresponding to Central European and Western European times. But
because of the mentioned reasons, a distinction between the shares of very close time zones is not justifiable. Moreover, in some cases the error of the
simulated activity pattern is not very sensitive to changes in the weights of spatially close offsets. However, all the
results presented above are precise
up to the last significant digit.
Putting the deviations from the average daily activity and the weekly activity for all WPs side by side, one is able to draw very clear conclusions. For example, the daily
pattern of Asian languages (e.g., Japanese, Chinese and Korean) shows higher activity during evenings and nights, along with a high level of activity at weekends.
This can be partly related to the length of working hours in the corresponding countries. This general image, which also holds partially for Turkey, Russia
and Israel,
could
be closely related to the long average working hours (more than 40 hours per week in all the mentioned cases\footnote{According to the dataset of
{\it The Organization for Economic Co-operation and Development}: \url{http://stats.oecd.org}}) in those countries. Furthermore, among European countries
we also see the same tendency: in the countries with rather longer working times, edits are mostly done later in the evenings.
The same analysis has been done for seasonal patterns to extract the effects of changes in daylight timing, but the large fluctuations
in the average behavior make it very difficult to draw relevant conclusions. The only significant large-scale seasonal pattern is the reduction of
activity for many WPs as the New Year holidays approach.
In conclusion, based on a dataset of time-stamped edits on different Wikipedias, we studied the diurnal and weekly patterns of the activity of
editors. We could see a universal circadian pattern for all WPs, which has its minimum at dawn and its maximum in the late afternoon
and early evening. Based on this investigation, we also argued that, using a weighted mixture of contributions from different time zones and an optimization procedure, we can estimate the different contributions to a WP.
In particular, we observe that
a considerably large part of the edits on the English and simple English
WPs originates from Europe, and the share of North America is below expectations. The same type of analysis was also performed
for other WPs in different languages.
In contrast to the diurnal pattern, which is universal to a great extent, the weekly activity patterns of WPs show remarkable differences. We could, however, identify two main categories, namely ``weekend'' and ``working day'' active WPs. Further studies are needed to explain these observations in detail and relate
them to cultural and social differences.
\section*{Acknowledgments}
The project ICTeCollective acknowledges the financial support of the Future and Emerging Technologies (FET) program within the Seventh Framework Program for Research of the European Commission, under FET-Open grant number 238597. JK and TY thank the FiDiPro program of TEKES for partial support. We also thank Wikimedia Deutschland e.V. for providing us with the data through the Wikimedia toolserver platform.
Activation functions play a key role during the training process of neural networks, and considerable attention has been paid to exploring standard activation functions over the past years. In particular, with the remarkable development of Deep Neural Networks (DNN) in various computer vision applications, such as image classification \citep{he2016deep,krizhevsky2012imagenet,tan2017photograph}, image segmentation \citep{chen2017deeplab}, object detection \citep{girshick2014rich,jiang2016speed,he2015delving}, image enhancement \citep{lin2018image,tang2018joint}, image retrieval \citep{yu2014click,yu2016deep} and tracking \citep{wu2016regional}, the Rectified Linear Unit (ReLU) \citep{Nair2010} has become extremely popular in the deep learning community in recent years. Owing to the significant improvements brought by ReLU in deep neural networks, several extended versions have sprung up. For instance, Leaky ReLU (LReLU) \citep{maas2013rectifier} replaces the negative part of ReLU with a non-zero slope, while Exponential Linear Units (ELUs) \citep{clevert2015fast} tend to converge faster and produce more accurate results. All these extended versions achieve certain improvements in their respective fields.
However, there is hardly a generally accepted rule-of-thumb for the choice of activation functions, owing to the fact that it solely depends on the problem at hand. Even the most popular and commonly used activation function, ReLU, is not suitable for all datasets and network architectures. Therefore, adaptive activation functions have drawn more and more attention in recent years. For example, Maxout \citep{goodfellow2013maxout} can approximate any convex function by selecting the maximum output value of multiple linear activation functions, but a large number of extra parameters are introduced, which causes large memory consumption and high computation cost. In the Parametric Rectified Linear Unit (PReLU) \citep{he2015delving}, the slope of the negative part is learned from data rather than being a pre-defined fixed value; thus PReLU theoretically has all the advantages of ReLU and effectively avoids the dying ReLU problem. In practice, however, it has not been fully confirmed that PReLU always surpasses ReLU. In 2017, an activation function with the property of ``self-normalization'' was proposed, named SELU \citep{klambauer2017self}; it can avoid the problem of vanishing and exploding gradients, thereby enabling feedforward neural networks to obtain beyond-state-of-the-art performance. However, the effectiveness of SELU in Convolutional Neural Networks (CNN) has not been confirmed. In the same year, Swish \citep{ramachandran2017searching}, with complex characteristics such as unboundedness above, smoothness and non-monotonicity, was shown to perform better than ReLU on many deep models.
Although the existing adaptive activation functions are relatively more flexible than traditional activation functions owing to their adaptability, and have already achieved great improvements, they are limited to specific application scenarios, and there are still many problems to be solved, such as low generalization capability and poor precision. For example, their performance often depends on specific network models and data sets. In this work, a novel methodology is proposed to explore optimal activation functions with more flexibility and adaptability, only by adding a few additional parameters to traditional activation functions such as Sigmoid, Tanh and ReLU. The proposed methodology can avoid local minima and accelerate convergence by introducing only very few parameters into the fixed activation functions, thereby increasing the precision, reducing the training cost and improving the generalization performance.
The primary contributions of our work are summarized as follows:
\begin{itemize}
\item A novel methodology is proposed to customize activation functions with more flexibility and adaptability for various layers, only by introducing very few parameters to the traditional activation functions such as Sigmoid, Tanh, and ReLU.
\item A theoretical analysis of the acceleration of convergence and the improvement of performance is presented, taking an activation function of one layer as an example without loss of generality, and an experimental study is performed by comparing the weight increments between two successive epochs in different layers during training for the proposed AReLU and ReLU on CIFAR100 based on VGGNet.
\item The proposed \textit{AReLU} is a generalized form of the \textit{ReLU-based} versions, while \textit{ReLU} and \textit{PReLU} are special cases of the proposed \textit{AReLU}.
\end{itemize}
The rest of the paper is organized as follows. Section 2 introduces the related work, and the proposed methodology is presented in Section 3. Section 4 presents the analysis of our methodology. Section 5 details the experimental results for comparison and validation. Section 6 concludes the paper.
\section{Related work}
Over the last few decades, a large variety of activation functions has been proposed in the artificial neural network community. According to whether the parameters or shape of an activation function are learnable or variable during the training phase, activation functions can be divided into two categories: fixed activation functions and adaptive activation functions.
\subsection{Fixed activation functions}
Fixed activation functions are those whose parameters or shapes cannot be modified during the training phase (shown in Fig. \ref{fig:fig1}), and the most common fixed activation functions fall into three categories: Logistic (Sigmoid), Hyperbolic Tangent (Tanh) and Rectified Linear Activation (ReLU).
\paragraph{Sigmoid}
The Sigmoid function is a common S-shaped function or S-shaped growth curve, and the term is normally used to refer specifically to the logistic function. It maps any real value to the range [0,1], so that its output can be interpreted as a probability. It is defined as follows:
\begin{equation}
Sigmoid(x)=\sigma(x)=\frac{1}{1+e^{-x}}
\end{equation}
It is differentiable, and the derivative is derived as follows:
\begin{equation}
\sigma'(x)=\sigma(x)(1-\sigma(x))
\end{equation}
Note that the gradient $\sigma'(x) \to 0$ as $\sigma(x) \to 0$ or $\sigma(x) \to 1$, meaning that, when the output of the Sigmoid saturates for large positive or negative inputs (i.e., the curve becomes parallel to the $x$-axis, as shown in Fig. \ref{fig:fig1}), the gradients are almost zero. Due to the zero gradient, the weights are no longer updated and the network does not learn, so the neuron dies, which causes the vanishing gradient problem. Besides, Sigmoid outputs are not zero-centered, which can indirectly introduce undesirable zig-zagging dynamics in the gradient updates for the weights.
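Equations (1) and (2) and the saturation effect can be checked numerically; this is a plain sketch, not code from the paper.

```python
import math

def sigmoid(x):
    """Eq. (1): the logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Eq. (2): sigma'(x) = sigma(x) * (1 - sigma(x)), maximal (0.25) at x = 0."""
    s = sigmoid(x)
    return s * (1.0 - s)
```

For $|x|$ around 10 the gradient is already below $10^{-4}$, which is the numerical face of the vanishing gradient problem described above.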
\paragraph{Tanh}
The Tanh function, i.e., the hyperbolic tangent function, graphically looks very similar to the Sigmoid. Actually, the Tanh is simply a scaled and shifted Sigmoid, such that its outputs range from -1 to 1. It is defined as follows:
\begin{equation}
Tanh(x)=\frac{sinh(x)}{cosh(x)}=\frac{e^x-e^{-x}}{e^x+e^{-x}}=2\sigma(2x)- 1
\end{equation}
Like the Sigmoid, the Tanh is also affected by the vanishing gradient problem. But unlike the Sigmoid, its output is zero-centered: negative inputs are mapped to strongly negative values and zero inputs are mapped near zero. Therefore, the non-linearity of Tanh is always preferred to that of the Sigmoid, and it has been widely used in deep learning and machine learning, especially in two-class classification scenarios.
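The identity $Tanh(x) = 2\sigma(2x) - 1$ from Eq. (3) can be verified directly against the library implementation:

```python
import math

def tanh_via_sigmoid(x):
    """Tanh expressed through the Sigmoid, as in Eq. (3)."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0
```

The zero-centered output (the function maps 0 to exactly 0) is what distinguishes it from the Sigmoid.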
\paragraph{ReLU}
ReLU is a very simple and efficient activation function that has been widely used in almost all deep learning domains, especially in CNNs. It is defined as
\begin{equation}
ReLU(x)=max(0,x)
\end{equation}
Owing to its simpler mathematical operations, ReLU is far more computationally efficient than Tanh and Sigmoid. Besides, ReLU avoids the saturation problem in the positive region. For negative inputs, the output contains true zero values (called a sparse representation), which accelerates learning and simplifies the model in representation learning; however, the corresponding weights and biases are not updated owing to the zero gradient during the backpropagation process, thereby causing the dying ReLU problem.
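A minimal sketch of Eq. (4) together with its (sub)gradient shows where the dying ReLU problem comes from:

```python
def relu(x):
    """Eq. (4): max(0, x)."""
    return max(0.0, x)

def relu_grad(x):
    """Subgradient of ReLU: zero for all negative inputs, so the
    corresponding weights receive no update during backpropagation."""
    return 1.0 if x > 0 else 0.0
```

A neuron whose pre-activations stay negative for all training inputs therefore never changes again: its gradient is identically zero.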
\begin{figure*}[!t]
\centering
\includegraphics[width=1.6 in]{./Figure_1a.eps}
\includegraphics[width=1.6 in]{./Figure_1b.eps}
\includegraphics[width=1.6 in]{./Figure_1c.eps}
\caption{An illustration of fixed activation functions with a fixed shape.}
\label{fig:fig1}
\end{figure*}
\subsection{Adaptive activation functions}
Adaptive activation functions refer primarily to functions whose parameters or shapes are trained and learned along with the other parameters in neural networks (shown in Fig. \ref{fig:fig2}), thereby adapting to the training data. In other words, the main idea of this kind of function is to search for a good function shape using the knowledge given by the training data. For example, PReLU \citep{he2015delving} replaces the fixed slope $\alpha$ of LReLU \citep{maas2013rectifier} with a trainable parameter $\alpha_i$ in the negative region, whereas Swish \citep{ramachandran2017searching} is a recently proposed activation function that is unbounded above, bounded below, smooth, and non-monotonic, and it can be loosely viewed as a bridging function between the linear function and the ReLU function. Other similar activation functions like FReLU \citep{qiu2018frelu} and PELU \citep{trottier2017parametric} have achieved performance improvements in specific tasks.
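The two adaptive functions mentioned above can be sketched as follows; the slope `alpha` and the `beta` scaling in Swish are illustrative choices, not values from the cited papers.

```python
import math

def prelu(x, alpha):
    """PReLU: identity for x > 0 and a learnable slope alpha for x <= 0."""
    return x if x > 0 else alpha * x

def swish(x, beta=1.0):
    """Swish: x * sigmoid(beta * x); smooth and non-monotonic below zero."""
    return x / (1.0 + math.exp(-beta * x))
```

With `alpha = 0` PReLU reduces to ReLU; for large positive inputs Swish approaches the identity, while for large negative inputs it approaches zero.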
Although existing adaptive activation functions have been shown to improve network performance significantly, thanks to properties such as non-saturation, flexibility and adaptivity, exploring optimal and appropriate activation functions remains an open field of research, and there is still room for improvement in various scenarios, especially for complex datasets and different models.
\begin{figure*}[!t]
\centering
\includegraphics[width=1.5 in]{./Figure_2a.eps}
\includegraphics[width=1.5 in]{./Figure_2b.eps}
\includegraphics[width=1.5 in]{./Figure_2c.eps}
\includegraphics[width=1.5 in]{./Figure_2d.eps}
\caption{An illustration of various adaptive activation functions. The shapes of activation functions can be controlled and adjusted by some parameters, which are trained along with other parameters in the neural networks.}
\label{fig:fig2}
\end{figure*}
\section{Methodology}
The training of neural networks is essentially a non-convex optimization problem, in which the optimal weight parameters are searched for with the back-propagation algorithm, while the activation function determines the functional subspace that is explored. Adaptive activation functions adapt themselves to the network inputs: they learn the parameters of an affine transformation applied to a given input, thereby increasing the flexibility and representation ability of network models.
In this work, we construct a new parameter-learning method for each layer by introducing only a few parameters into a fixed activation function. The general form in the $i^{th}$ layer is defined as follows
\begin{equation}
f_A(a_i,b_i,c_i,d_i,z)=b_i f(a_i z+c_i)+d_i
\end{equation}
where $f(\cdot)$ represents a traditional (fixed) activation function, and $a_i$, $b_i$, $c_i$ and $d_i$ are four learnable parameters in the $i^{th}$ layer; they adapt to different tasks according to the complexity of the input data, which helps to avoid falling into local minima. $z$ denotes the weighted sum of the inputs, including the bias term, defined as
\begin{equation}
z = w x+b
\end{equation}
where $w$ and $b$ denote the weights and bias, respectively, and $x$ is an input vector.
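A minimal NumPy sketch of this general form (the helper name \texttt{f\_adaptive} and the sample values are ours, for illustration only; in the actual networks the four parameters are learned, not passed in):

```python
import numpy as np

def f_adaptive(f, a, b, c, d, z):
    """General adaptive form b * f(a*z + c) + d wrapped around a fixed activation f."""
    return b * f(a * z + c) + d

# z = w*x + b_bias, the usual pre-activation
w, b_bias = 0.8, 0.1
x = np.array([-1.0, 0.0, 2.0])
z = w * x + b_bias

# With a = b = 1 and c = d = 0 the adaptive form reduces to the fixed activation
out = f_adaptive(np.tanh, 1.0, 1.0, 0.0, 0.0, z)
assert np.allclose(out, np.tanh(z))
```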
In practice, the proposed adaptive activation function is very simple: it is composed of two embedded linear equations, namely an internal linear equation
\begin{equation}f_{in}(a_i,c_i,z) = a_i z+c_i \end{equation}
and an external linear equation
\begin{equation}f_{ex}(b_i,d_i) = b_i f_{in}+d_i \end{equation}
Therefore Equation (5) is rewritten as
\begin{equation}
f_A(a_i,b_i,c_i,d_i,z)=f_{ex}(b_i,d_i)=b_i f_{in}(a_i,c_i,z)+d_i
\end{equation}
In the following sections, the effectiveness and advantages of the proposed methodology are verified by taking common fixed activation functions, namely \textit{Sigmoid}, \textit{Tanh} and \textit{ReLU}, as baselines; the corresponding adaptive activation functions are named \textit{ASigmoid}, \textit{ATanh} and \textit{AReLU}, respectively. According to Equation (5), these functions are defined as
\begin{equation}ASigmoid: f_{Asigmoid}=b_i Sigmoid(a_i z+c_i)+d_i\end{equation}
\begin{equation}ATanh: f_{Atanh}=b_i tanh(a_i z+c_i)+d_i\end{equation}
\begin{equation}AReLU: f_{Arelu}=maximum(a_i z+c_i, b_i z+d_i)\end{equation}
In ASigmoid and ATanh, $a_i$ and $c_i$ scale and shift the inputs of Sigmoid and Tanh, respectively, while $b_i$ and $d_i$ simultaneously scale and shift the outputs.
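As an illustrative sketch (not the authors' implementation; the parameters are passed explicitly here rather than learned), the three functions can be written in NumPy as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def asigmoid(z, a, b, c, d):
    """ASigmoid: b * sigmoid(a*z + c) + d."""
    return b * sigmoid(a * z + c) + d

def atanh_act(z, a, b, c, d):
    """ATanh: b * tanh(a*z + c) + d."""
    return b * np.tanh(a * z + c) + d

def arelu(z, a, b, c, d):
    """AReLU: element-wise maximum of two learned linear pieces."""
    return np.maximum(a * z + c, b * z + d)
```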
Notably, when $a_i=1$ and $b_i=c_i=d_i=0$, the negative part of AReLU has zero slope while the slope of the positive part is fixed to one. In this case, AReLU degenerates to the standard ReLU, given as
\begin{equation}ReLU: f_{ReLU}=maximum(0, z)\end{equation}
Furthermore, when $b_i=1$ and $c_i=d_i=0$, the slope $a_i$ of the negative part is adjustable, which means that this parameter is learned from the data rather than pre-defined. Under these conditions, AReLU reduces to PReLU, given as
\begin{equation}PReLU: f_{Prelu}=maximum(a z, z)\end{equation}
Therefore, \textit{AReLU} is a generalized form of the \textit{ReLU-based} versions, while \textit{ReLU} and \textit{PReLU} are the special cases of the proposed \textit{AReLU}.
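These special cases can be checked numerically; a small sketch, with an arbitrary illustrative slope for the PReLU case:

```python
import numpy as np

def arelu(z, a, b, c, d):
    """AReLU: element-wise maximum of two linear pieces."""
    return np.maximum(a * z + c, b * z + d)

z = np.linspace(-3.0, 3.0, 7)

# a=1, b=c=d=0  ->  standard ReLU: max(z, 0)
assert np.allclose(arelu(z, 1.0, 0.0, 0.0, 0.0), np.maximum(0.0, z))

# b=1, c=d=0    ->  PReLU with slope a in the negative region: max(a*z, z)
alpha = 0.25  # illustrative slope value
assert np.allclose(arelu(z, alpha, 1.0, 0.0, 0.0), np.maximum(alpha * z, z))
```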
From the above, it can be seen that our method adds only four parameters per layer; for a network with $i$ layers, $4i$ parameters are added in total. This number of parameters and the associated computation are negligible compared with the entire network model.
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{./Figure_3.eps}
\caption{The shapes of ASigmoid (green), ATanh (orange) and AReLU (blue) at different layers during the training process on CIFAR100 based on VGG.}
\label{fig:fig3}
\end{figure*}
\section{Analysis}
A convex loss function $L(\cdot)$ is defined over the linear weighted combination of each activation function applied to an input, and the optimal weights are found by adopting a suitable optimization strategy based on the back-propagation algorithm. Thus, the training process of the network model is essentially an iterative optimization of the weight parameters that minimizes the loss function $L(\cdot)$ in the functional subspace.
\subsection{Theoretical Analysis}
To facilitate the analysis, we take the activation function of a single layer as an example, without loss of generality. Suppose that a neural network with a traditional activation function is given as
\begin{equation}y = f(z)\end{equation}
For the update process of weights, the partial derivative chain is defined as follows.
\begin{equation}\frac{\partial L}{\partial w}=\frac{\partial L}{\partial y} \frac{\partial y}{\partial z}\frac{\partial z}{\partial w}\end{equation}
\begin{equation}\frac{\partial y}{\partial z}=f'(z)\end{equation}
Meanwhile, the weight is updated as follows.
\begin{equation}\widetilde{w}=w-\eta\frac{\partial L}{\partial\omega}\end{equation}
where $\eta$ is the learning rate. Substituting Equations (16) and (17) into (18), we obtain the weight-update equation for a common activation function as follows.
\begin{equation}\widetilde{w}=w-\eta f'(z) \frac{\partial L}{\partial y}\frac{\partial z}{\partial w}\end{equation}
Considering that the proposed adaptive activation functions consist of two linear equations, for simplicity we consider only the internal linear function in a given layer and omit its intercept term, given as
\begin{equation}\widetilde{y} = f(a z)\end{equation}
where $\widetilde{y}$ is the output of the adaptive activation function. The parameter $a$ is a generalized scaling of the inputs in any layer, and it can be interpreted as fine-tuning the learning rate so as to speed up the update of the weights; the corresponding derivation is given as follows.
For the output $\widetilde{y}$, the partial derivative is given as
\begin{equation}\frac{\partial \widetilde{y}}{\partial z}=a f'(z)\end{equation}
With Equations (16), (18) and (21), the update process of the weight is given as
\begin{equation}\widetilde{w}_1=w-\eta a f'(z) \frac{\partial L}{\partial \widetilde{y}}\frac{\partial z}{\partial w}\end{equation}
By comparison between Equations (19) and (22), the learning rate of the adaptive activation function can be written as:
\begin{equation}\widetilde{\eta}=\eta a\end{equation}
From Equation (23), the learning rate $\widetilde{\eta}$ can be adaptively adjusted through the parameter $a$. The optimization of $a$ in the network proceeds in the same way as for the weight $w$, and the update of $a$ is obtained with the chain rule.
\begin{equation}\frac{\partial L}{\partial a}=\frac{\partial L}{\partial \widetilde{y}} \frac{\partial \widetilde{y}}{\partial a}\end{equation}
\begin{equation}\frac{\partial \widetilde{y}}{\partial a}=z f'(z) \end{equation}
Combining Equations (24) and (25) gives
\begin{equation}\widetilde{a}=a-\eta z f'(z) \frac{\partial L}{\partial \widetilde{y}}\end{equation}
Substituting Equation (6) into (26) gives
\begin{equation}\widetilde{a}=a-\eta w x f'(z) \frac{\partial L}{\partial \widetilde{y}}\end{equation}
Therefore, the adaptive activation function achieves rapid convergence through an adaptive learning rate: the weight $w$ and the parameter $a$ are adjusted jointly to speed up learning in neural networks and lead to higher classification precision.
Besides, the internal linear equation also has its own intercept, which contributes to tuning the parameters from another, orthogonal direction during training, thereby helping to avoid local extrema.
Similarly, the external linear equation has the same effect of accelerating convergence and improving performance. More importantly, since the internal equation is embedded within the external one, the optimization can move toward a globally optimal solution more efficiently from all directions.
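As a quick numerical sanity check of the derivation above (a minimal NumPy sketch; the values of $z$, $\eta$, $a$ and the loss gradients are arbitrary illustrative numbers), one can verify the derivative of Equation (25) at $a=1$ and the effective learning rate $\widetilde{\eta}=\eta a$ of Equation (23):

```python
import numpy as np

f = np.tanh
f_prime = lambda u: 1.0 - np.tanh(u) ** 2

# --- Equation (25): d/da f(a*z) = z * f'(a*z); at a = 1 this equals z * f'(z) ---
z, a, eps = 0.7, 1.0, 1e-6
numeric = (f((a + eps) * z) - f((a - eps) * z)) / (2 * eps)  # central difference
analytic = z * f_prime(a * z)
assert abs(numeric - analytic) < 1e-8

# --- Equation (23): the adaptive weight step is a times the fixed-function step,
# --- i.e. an effective learning rate eta * a ---
eta, dL_dy, dz_dw = 0.01, 0.3, 1.5
step_fixed = eta * f_prime(z) * dL_dy * dz_dw         # Equation (19)
a = 2.0
step_adaptive = eta * a * f_prime(z) * dL_dy * dz_dw  # Equation (22)
assert np.isclose(step_adaptive, a * step_fixed)
```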
\subsection{Experimental Analysis}
Since each layer has its own independent activation function, the optimal parameters of each layer can be obtained by learning from the complexity of its input data, and their values vary with the input characteristics. Therefore, the obtained weights are optimal, and the resulting activation functions differ across layers. Fig. \ref{fig:fig3} visualizes different layers during training on CIFAR100 based on VGG; the different shapes of ASigmoid, ATanh and AReLU at different layers indicate that these functions learn the optimal parameters from the inputs of their respective layers, which enhances the fitting capability and accuracy of the networks.
Moreover, compared with traditional adaptive activation functions, the two embedded linear equations with intercepts can accelerate the weight adjustment. For further verification, the curves of the weight increments $\Delta w$ between two successive epochs in various layers are visualized along the training process (shown in Fig. \ref{fig:fig4}). The results clearly show that the amplitudes of the increments $\Delta w$ when using the proposed AReLU are much larger than those of the traditional ReLU in the early training stages, after which the increments of the two methods converge; this means the proposed method provides faster weight updates than the traditional one. Consequently, the proposed methods can greatly improve the convergence speed and reduce the computational burden. Meanwhile, the large amplitudes of the increments also help to avoid falling into a local optimum when training neural networks with gradient-based learning and backpropagation.
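The increment tracking itself is straightforward; a hypothetical sketch of recording $\Delta w$ between successive epochs (with a made-up stand-in gradient in place of real backpropagation):

```python
import numpy as np

# Hypothetical sketch: track the weight increment between successive epochs
# for one layer, mirroring the Delta-w curves in Fig. 4.
rng = np.random.default_rng(0)
w = rng.normal(size=(4,))
prev_w = w.copy()
increments = []
for epoch in range(5):
    grad = 0.5 * w          # stand-in for a real backprop gradient
    w = w - 0.1 * grad      # plain SGD step
    increments.append(np.abs(w - prev_w).max())
    prev_w = w.copy()

# with this toy gradient the increment amplitudes shrink as training converges
assert increments[0] > increments[-1]
```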
\begin{figure*}[!t]
\centering
\includegraphics[width=6.3in]{./Figure_4.eps}
\caption{The change curves of the weight increments $\Delta w$ between two successive epochs in different layers during the training process by using AReLU and ReLU on CIFAR100 based on VGGNet. Columns 1, 2 and 3 represent the $1^{st}$, $5^{th}$ and $9^{th}$ layer, respectively. Rows 1-4 illustrate four different weights when using AReLU and ReLU.}
\label{fig:fig4}
\end{figure*}
\section{Experiments}
In this section, a series of experiments is implemented to verify and evaluate the effectiveness of the proposed methodology based on the three baseline activation functions Sigmoid, Tanh and ReLU. Considering that ReLU is the most common activation function in neural networks and has many derivatives, some typical ones such as LReLU and PReLU are selected for comparison to highlight the effectiveness of the parameterization method; Swish, as an outstanding activation function, is used to benchmark the state-of-the-art performance of the proposed adaptive activation functions. First, comparison experiments between the proposed functions and their corresponding baselines are conducted using stochastic gradient descent (SGD) \citep{cramer1946mathematical} on CIFAR10 and CIFAR100 with different network models, namely AlexNet \citep{krizhevsky2012imagenet}, VGG \citep{simonyan2014very}, GoogleNet \citep{szegedy2015going}, ResNet \citep{he2016deep} and DenseNet \citep{huang2017densely}. Then, experiments are implemented to further verify the validity and suitability under various optimization strategies, such as SGD, Momentum \citep{qian1999momentum}, AdaGrad \citep{duchi2011adaptive}, AdaDelta \citep{zeiler2012adadelta} and ADAM \citep{kingma2014adam}. Finally, a series of comparison experiments is conducted to further verify the effectiveness, suitability and generalization ability on more complicated datasets such as miniImageNet \citep{Oriol2016}, PASCAL VOC \citep{VOC2012} and COCO \citep{COCO2014}.
\subsection{Experimental setup}
We test the proposed adaptive activation functions on CIFAR10 and CIFAR100 based on AlexNet, VGGNet, GoogleNet, ResNet and DenseNet. The detailed experimental setup is illustrated in Fig. \ref{fig:fig5}.
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{./Figure_5.eps}
\caption{The network architectures of AlexNet, VGGNet, GoogleNet, ResNet and DenseNet on CIFAR10 and CIFAR100.}
\label{fig:fig5}
\end{figure*}
Note that the Dense block consists of $\{\stackrel{bn} \longrightarrow conv(1, 1, 2k) \stackrel{drop0.8, bn}\longrightarrow conv(3, 3, 2k)\stackrel{drop0.2}\longrightarrow \}\times 6$, $\times 12$, $\times 48$, $\times 32$, respectively, and the transition layer (shown in Fig. \ref{fig:fig5}) corresponds to the sequence $conv(1, 1) - avg\_pool(2, 2).$ Moreover, the growth rate is $k=24$ for all.
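For clarity, the layer sequence of one dense block can be written out as a configuration sketch (the names and dictionary layout are ours, and reading $drop0.8$/$drop0.2$ as the two dropout settings is an assumption):

```python
# Hypothetical spec of the dense-block unit described above:
# bn -> 1x1 conv with 2k filters -> drop 0.8 -> bn -> 3x3 conv with 2k filters -> drop 0.2,
# repeated 6/12/48/32 times in the four blocks, with growth rate k = 24.
def dense_block_spec(k=24, repeats=6):
    unit = [
        ("batch_norm", {}),
        ("conv", {"kernel": (1, 1), "filters": 2 * k}),
        ("dropout", {"rate": 0.8}),
        ("batch_norm", {}),
        ("conv", {"kernel": (3, 3), "filters": 2 * k}),
        ("dropout", {"rate": 0.2}),
    ]
    return unit * repeats

spec = dense_block_spec(k=24, repeats=6)
assert len(spec) == 36  # 6 layers per unit, repeated 6 times
```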
\subsection{CIFAR10}
In these experiments, the proposed adaptive activation functions (10)$\sim$(12) are applied to the CIFAR10 dataset based on the models shown in Fig. \ref{fig:fig5}. All trainings are run for no fewer than 80 epochs with a batch size of 64 and without data augmentation, using SGD with a stepped learning-rate schedule of 0.001, 0.0001 and 0.00001 over the course of training.
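The stepped schedule can be sketched as follows (the phase boundaries are not stated in the text; an even three-way split over 80 epochs is assumed here for illustration):

```python
def lr_schedule(epoch, total_epochs=80):
    """Stepped schedule from the text: 0.001 -> 0.0001 -> 0.00001.

    The even split into three phases is an assumption; the paper only
    gives the three learning-rate values, not the switch points.
    """
    if epoch < total_epochs // 3:
        return 1e-3
    elif epoch < 2 * total_epochs // 3:
        return 1e-4
    return 1e-5

assert lr_schedule(0) == 1e-3
assert lr_schedule(40) == 1e-4
assert lr_schedule(79) == 1e-5
```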
Fig. \ref{fig:fig6} shows the convergence curves (top row) and the area enclosed by the convergence curves (bottom row) during training. Obviously, the smaller the area, the faster the convergence, and the results clearly show that the proposed activation functions surpass the corresponding baselines and LReLU across the network models in terms of convergence speed. Among them, AReLU achieves the fastest convergence; on DenseNet, ResNet and VGG16 in particular, it converges much faster than the other activation functions. Tables~\ref{tab:table1} and \ref{tab:table2}\footnote{Indicates that the activation function does not converge in this mode.} show the quantified precision results, where the proposed methodology has an overall advantage. Table~\ref{tab:table1} compares the proposed methods with their corresponding baselines. The proposed methodology can be effectively applied to the classic fixed activation functions and surpasses the corresponding baselines on most network models. For instance, ASigmoid surpasses its baseline Sigmoid on all models, while ATanh obtains better precision than Tanh on VGGNet, ResNet and DenseNet, and the precision of ATanh on AlexNet and GoogleNet is only slightly lower than that of Tanh. Table~\ref{tab:table2} compares AReLU with other adaptive activation functions. Except for PReLU on AlexNet and VGGNet, AReLU achieves higher precision than the other adaptive functions on the various models. Note that some traditional adaptive functions such as PELU and FReLU are not suitable for some network models owing to a lack of convergence during training.
In contrast, the proposed methodology applies to various deep learning models and generalizes better than traditional methodologies.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.24]{./Figures_6.eps}
\caption{The top row represents the loss-epoch convergence curves of various activation functions on CIFAR10 using the SGD optimizer during training on various models: AlexNet, GoogleNet, VGGNet, ResNet and DenseNet. The bottom row illustrates the area enclosed by the corresponding convergence curves in the top row; a smaller area means faster convergence.}
\label{fig:fig6}
\end{figure*}
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The classification precision of various fixed activation functions for different models on CIFAR10.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Methods & AlexNet & VGGNet & GoogleNet & ResNet & DenseNet \\
\midrule
Sigmoid & 0.8141$\pm$1.52e-3 & 0.8680$\pm$1.17e-3 & 0.8027$\pm$1.13e-3 & 0.8079$\pm$1.25e-3 & 0.7611$\pm$3.86e-3 \\
ASigmoid(ours) & \myfontb{0.8469$\pm$1.68e-3} & \myfontb{0.8819$\pm$2.28e-4} & \myfontb{0.8517$\pm$4.07e-4} & \myfontb{0.8087$\pm$1.36e-3} & \myfontb{0.9076$\pm$8.16e-4} \\
\hline
Tanh & \myfontb{0.8043$\pm$9.68e-4} & 0.8841$\pm$7.72e-4 & \myfontb{0.8433$\pm$6.06e-4} & 0.5784$\pm$9.75e-4 & 0.8966$\pm$1.99e-5 \\
ATanh(ours) & 0.7900$\pm$5.31e-4 & \myfontb{0.8904$\pm$3.05e-4} & 0.8342$\pm$7.99e-4 & \myfontb{0.6014$\pm$8.11e-4} & \myfontb{0.9163$\pm$1.87e-4} \\
\hline
ReLU & \myfontb{0.8351$\pm$3.95e-4} & 0.8800$\pm$1.01e-4 & 0.8733$\pm$4.36e-5 & 0.6641$\pm$2.59e-3 & 0.9242$\pm$1.10e-4 \\
LReLU\citep{maas2013rectifier} & 0.8255$\pm$1.66e-3 & \myfontb{0.9253$\pm$4.68e-5} & 0.8742$\pm$9.31e-4 & 0.6698$\pm$1.27e-3 & 0.9289$\pm$2.37e-5 \\
AReLU(ours) & 0.8331$\pm$2.48e-4 & 0.8328$\pm$1.26e-3 & \myfontb{0.8773$\pm$1.83e-3} & \myfontb{0.9230$\pm$5.51e-4} & \myfontb{0.9538$\pm$5.58e-4} \\
\bottomrule
\end{tabular}
}
\label{tab:table1}
\end{table*}
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The classification precision of various adaptive activation functions for different models on CIFAR10. The top results are highlighted in black bold and the second-best results in blue.}
\centering
\begin{threeparttable}
\resizebox{\textwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Methods & AlexNet & VGGNet & GoogleNet & ResNet & DenseNet \\
\midrule
PReLU\citep{he2015delving} & \myfontb{0.8558$\pm$4.61e-4} & \myfontb{0.9344$\pm$5.89e-5} & 0.8551$\pm$1.79e-4 & 0.6522$\pm$3.20e-4 & 0.9231$\pm$1.40e-4 \\
Swish\citep{ramachandran2017searching} & 0.7557$\pm$2.20e-3 &\textcolor{blue}{0.9171$\pm7.98$e-4} &\textcolor{blue}{0.8710$\pm$3.25e-3}& 0.6446$\pm$3.39e-4 & \textcolor{blue}{0.9276$\pm$1.87e-4} \\
PELU\citep{trottier2017parametric} & $-$\tnote{1} & $-$ & 0.8128$\pm$4.45e-4 & 0.7891$\pm$3.91e-4 & $-$ \\
FReLU\citep{qiu2018frelu} & $-$ & 0.8558$\pm$2.44e-4 & 0.8694$\pm$5.21e-3 &\textcolor{blue}{0.8726$\pm$3.71e-4} & 0.8214$\pm$4.11e-4 \\
AReLU(ours) & \textcolor{blue}{0.8331$\pm$2.48e-4}& 0.8328$\pm$1.26e-3 & \myfontb{0.8773$\pm$1.83e-3} & \myfontb{0.9230$\pm$5.51e-4} & \myfontb{0.9538$\pm$5.58e-4} \\
\bottomrule
\end{tabular}
}
\end{threeparttable}
\label{tab:table2}
\end{table*}
\subsection{CIFAR100}
To further verify the validity and applicability, CIFAR100 is selected for training based on several typical network models, namely AlexNet, DenseNet (shown in Fig. \ref{fig:fig5}) and VGG-v, where VGG-v is an extended version of VGGNet. The models are trained for 150 epochs with a batch size of 250 and a stepped learning rate of 0.001, 0.0001 and 0.00001 over the course of training.
Table~\ref{tab:table3} compares the classification results of the proposed methods with their corresponding baselines on AlexNet, VGG-v and DenseNet. Except for AReLU on AlexNet, the proposed methods surpass their respective baseline functions. Table~\ref{tab:table4}\footnote{Indicates that the activation function does not converge in this method.} compares AReLU with the other adaptive functions. From the results, AReLU achieves the best precision on all three network models. As before, PELU and FReLU fail to converge to the desired loss values.
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The classification precision of various fixed activation functions for different models on CIFAR100.}
\centering
\begin{tabular}{lcccc}
\toprule
Methods & AlexNet & VGG-v & DenseNet \\
\midrule
Sigmoid & 0.4662$\pm$2.49e-3 & 0.5545$\pm$5.89e-4 & 0.2561$\pm$1.23e-3 \\
ASigmoid(ours) & \myfontb{0.5312$\pm$8.66e-4} & \myfontb{0.6578$\pm$7.56e-4} & \myfontb{0.6103$\pm$3.15e-3} \\
\hline
Tanh & 0.5058$\pm$6.06e-4 & 0.6007$\pm$6.90e-4 & 0.5960$\pm$2.53e-3 \\
ATanh(ours) & \myfontb{0.5236$\pm$1.98e-4} & \myfontb{0.6166$\pm$2.41e-4} & \myfontb{0.6734$\pm$3.98e-4} \\
\hline
ReLU &\myfontb{0.5701$\pm$1.18e-4} & 0.6972$\pm$4.99e-4 & 0.5616$\pm$1.09e-3 \\
LReLU\citep{maas2013rectifier} & 0.5500$\pm$1.09e-3 & 0.6991$\pm$5.53e-4 & 0.6914$\pm$5.12e-4 \\
AReLU(ours) & 0.5647$\pm$1.71e-4 & \myfontb{0.7005$\pm$1.04e-3} & \myfontb{0.7081$\pm$1.64e-3} \\
\bottomrule
\end{tabular}
\label{tab:table3}
\end{table*}
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The classification precision of various adaptive activation functions for different models on CIFAR100.}
\centering
\begin{threeparttable}
\begin{tabular}{lcccc}
\toprule
Methods & AlexNet & VGG-v & DenseNet \\
\midrule
PReLU\citep{he2015delving} &0.5325$\pm$5.01e-4 & 0.6838$\pm$3.84e-5 & 0.5781$\pm$8.87e-3 \\
Swish\citep{ramachandran2017searching} &0.5519$\pm$1.16e-3 & 0.6729$\pm$8.32e-5 & 0.7079$\pm$3.01e-3 \\
PELU\citep{trottier2017parametric} & $-$\tnote{1} & $-$ & $-$ \\
FReLU\citep{qiu2018frelu} & $-$ & 0.1534$\pm2.17e-4$ & 0.1325$\pm$2.16e-4 \\
AReLU(ours) & \myfontb{0.5647$\pm$1.71e-4} & \myfontb{0.7005$\pm$1.04e-3} & \myfontb{0.7081$\pm$1.64e-3} \\
\bottomrule
\end{tabular}
\end{threeparttable}
\label{tab:table4}
\end{table*}
\subsection{Validity and practicability in various optimization strategies}
Gradient descent algorithms are often used as black-box optimizers in neural networks, and different optimization strategies have a great influence on the performance of activation functions in practice. Therefore, to further verify the validity and practicability under various optimization strategies, the best-performing AReLU is selected as the activation function for GoogleNet and ResNet, and a series of comparison experiments is conducted on CIFAR10 with various optimizers: SGD, Momentum, AdaGrad, AdaDelta and ADAM.
Fig. \ref{fig:fig7} shows the convergence curves of the various activation functions under the various optimization strategies on GoogleNet and ResNet, respectively. The results show that AReLU converges faster than all other activation functions on GoogleNet. On ResNet, AReLU also has an overall convergence advantage, especially with the AdaGrad and AdaDelta optimizers. These results indicate that the proposed AReLU accelerates convergence, thereby reducing the training cost.
Table~\ref{tab:table5} further reveals that the proposed AReLU achieves better overall performance than the other activation functions across optimization strategies and network models. Except against ReLU with the Momentum optimizer on GoogleNet and Swish with the ADAM optimizer on ResNet, where AReLU is only slightly worse, the proposed AReLU surpasses the other activation functions with all optimization strategies on both network models, often by a large margin. Notably, AReLU with SGD generally achieves the best precision among these combinations of activation functions and optimization strategies on both models. Fig. \ref{fig:fig8} illustrates the convergence of the proposed AReLU with the various optimizers; the results show that SGD converges faster than all other optimizers on both models, especially on ResNet.
The above results show that the proposed adaptive activation functions have faster convergence and higher precision than traditional activation functions, suggesting that the proposed methodology can avoid local minima and accelerate convergence, thereby increasing precision, reducing training cost and improving generalization.
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{./Figure_7a.eps}
\includegraphics[width=6in]{./Figure_7b.eps}
\caption{The convergence curves by using various optimization strategies on GoogleNet (top row) and ResNet (bottom row).}
\label{fig:fig7}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{./Figure_8.eps}
\caption{The convergence curves of AReLU by using various optimization strategies on GoogleNet and ResNet.}
\label{fig:fig8}
\end{figure*}
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Classification precision comparisons between various activation functions by using different optimization strategies and models on CIFAR10. The top results are highlighted in black bold and the second-best results in blue.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
Models & Methods & SGD & Momentum & AdaGrad & AdaDelta & ADAM \\
\midrule
&ReLU & 0.8733$\pm$4.36e-5 &\myfontb{0.8951$\pm$7.30e-5} &0.6065$\pm$1.53e-4 &0.5686$\pm$3.31e-4 &\textcolor{blue}{0.8797$\pm$1.08e-3} \\
&LReLU\citep{maas2013rectifier} &\textcolor{blue}{0.8742$\pm$9.31e-4} & 0.8498$\pm$4.20e-4 &0.6623$\pm$3.22e-3 &\textcolor{blue}{0.6521$\pm$1.92e-3} &0.8654$\pm$8.78e-5\\
GoogleNet &PReLU\citep{he2015delving} &0.8551$\pm$1.79e-4 &0.8138$\pm$2.14e-4 &0.6278$\pm$8.44e-6 &0.5367$\pm$1.45e-3 &0.8177$\pm$2.34e-6\\
&Swish\citep{ramachandran2017searching} &0.8710$\pm$3.25e-3 &\textcolor{blue}{0.8772$\pm$1.39e-3} &\textcolor{blue}{0.6747$\pm$1.58e-3} &0.6139$\pm$2.66e-4 &0.8743$\pm$1.92e-3 \\
&AReLU(ours) &\myfontb{0.8773$\pm$1.83e-3} & 0.8651$\pm$8.44e-5 &\myfontb{0.7408$\pm$6.38e-3} &\myfontb{0.7221$\pm$2.57e-3} &\myfontb{0.8910$\pm$3.37e-4} \\
\hline
&ReLU & 0.6641$\pm$2.59e-3 &0.6274$\pm$1.99e-3 & 0.4253$\pm$7.38e-4 &0.3553$\pm$1.57e-3 &0.8519$\pm$6.20e-4\\
&LReLU\citep{maas2013rectifier} &\textcolor{blue}{0.6698$\pm$1.27e-3} &\textcolor{blue}{0.6635$\pm$1.40e-3} &\textcolor{blue}{0.5162$\pm$1.61e-4} &\textcolor{blue}{0.4046$\pm$3.87e-3} &0.8065$\pm$1.07e-3\\
ResNet & PReLU\citep{he2015delving} & 0.6522$\pm$3.20e-4&0.6325$\pm$2.11e-4 & 0.3493$\pm$2.07e-3 & 0.2846$\pm$4.28e-3 &0.7889$\pm$8.08e-4\\
&Swish\citep{ramachandran2017searching} & 0.6446$\pm$3.39e-4 &0.6099$\pm$7.02e-5 & 0.4032$\pm$3.41e-3 &0.3112$\pm$3.31e-3 &\myfontb{0.8909$\pm$6.84e-5} \\
&AReLU(ours) & \myfontb{0.9230$\pm$5.51e-4} &\myfontb{0.7862$\pm$1.38e-3} & \myfontb{0.7341$\pm$2.92e-3}&\myfontb{0.7155$\pm$5.56e-4}&\textcolor{blue}{0.8805$\pm$2.31e-4} \\
\bottomrule
\end{tabular}
}
\label{tab:table5}
\end{table*}
\subsection{More complicated datasets}
Two more complicated datasets, miniImageNet \citep{Oriol2016} and PASCAL VOC \citep{VOC2012}, are used to further test the validity of the proposed methodology based on ResNet50. Table~\ref{tab:table6}\footnote{Indicates that the activation function does not converge in this method.} compares the classification precision of AReLU with other adaptive functions. The results indicate that AReLU and PReLU obtain the best classification precision on PASCAL VOC and miniImageNet, respectively; overall, their performance is nearly equal on the two datasets. Note that PELU and FReLU fail to converge, indicating how complicated and challenging these two datasets are. \par
All the above experiments address classification tasks across various models, datasets and methods, where the proposed adaptive activation functions obtain the best overall classification performance. To test the validity and practicability in other deep learning tasks, PASCAL VOC is used for object detection by adopting Faster RCNN \citep{RenHGS15} and YOLOv2 \citep{redmon2016yolo9000} with the proposed AReLU. Besides, the more complicated detection dataset COCO \citep{COCO2014} is selected to further verify the effectiveness by employing FCOS \citep{tian2021fcos}. Table~\ref{tab:table7}\footnote{Activation function does not converge in this model.} compares the detection precision of the various adaptive functions. From the results, the proposed AReLU achieves nearly the best detection performance in terms of AP50, AP75 and mAP among the adaptive functions across methods and datasets, demonstrating its validity and practicability. \par
The series of comparison experiments shows that the proposed methods achieve better performance across scenarios: datasets, network models, optimization methods and deep learning tasks. The most significant reason is that our methodology has an internal and external bilinear structure in which the internal function is embedded within the external one, enabling the optimization to move toward a globally optimal solution more efficiently from all directions, thereby accelerating convergence and improving performance. More importantly, the proposed methodology adds only a small number of parameters (four per layer), which is negligible compared with the millions of parameters in the entire network model; hence the amount of computation and the risk of over-fitting increase only marginally.
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The classification precision of various activation functions on miniImageNet and PASCAL VOC based on ResNet50. The top results are highlighted in black bold and the second-best results in blue.}
\centering
\begin{threeparttable}
\begin{tabular}{lcc}
\hline
Methods & miniImageNet & PASCAL VOC \\
\midrule
ReLU & 0.7562$\pm$4.60e-3 & 0.5933$\pm$5.70e-3 \\
PReLU\citep{he2015delving} &\myfontb{0.7813$\pm$1.20e-3} & \textcolor{blue}{0.6066$\pm$2.00e-4} \\
Swish\citep{ramachandran2017searching} &0.7134$\pm$2.50e-3 & 0.5154$\pm$2.30e-3 \\
PELU\citep{trottier2017parametric} & $-$\tnote{1} & $-$ \\
FReLU\citep{qiu2018frelu} & $-$ & $-$ \\
AReLU(ours) & \textcolor{blue}{0.7751$\pm$1.40e-3} & \myfontb{0.6092$\pm$3.02e-3} \\
\hline
\end{tabular}
\end{threeparttable}
\label{tab:table6}
\end{table*}
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The detection precision of various activation functions for different methods on various datasets. The top results are highlighted in black bold and the second-best results in blue.}
\centering
\begin{threeparttable}
\resizebox{\textwidth}{!}{
\begin{tabular}{lccc|ccc|ccc}
\hline
\multicolumn{1}{c}{\multirow{2}{1.2cm}{Methods}} & \multicolumn{3}{c|}{Faster-RCNN (PASCAL VOC)} & \multicolumn{3}{c|}{YOLOv2 (PASCAL VOC)} & \multicolumn{3}{c}{FCOS (COCO)}\\
\cline{2-10}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{AP50} &\multicolumn{1}{c}{AP75} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{AP50} & \multicolumn{1}{c}{AP75} &\multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{AP50} & \multicolumn{1}{c}{AP75} & \multicolumn{1}{c}{mAP}\\
\midrule
ReLU &\textcolor{blue}{0.8501} & 0.6266 & 0.7661 &0.5924 & 0.1459 & 0.3970 &0.4974 & 0.3426 & 0.3232\\
PReLU\citep{he2015delving} & $-$\tnote{1} & $-$ & $-$ & 0.1777 & 0.0176 & 0.0931 & 0.4943 & 0.3399 & 0.3214\\
Swish\citep{ramachandran2017searching} & 0.7111 & 0.4981 & 0.6658 &\textcolor{blue}{0.6250} &\textcolor{blue}{0.2432} &\textcolor{blue}{0.4621} & 0.4923 &0.3418 & 0.3220 \\
PELU\citep{trottier2017parametric} & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
FReLU\citep{qiu2018frelu} & 0.8486 & \myfontb{0.6273} &\textcolor{blue}{0.7669} & 0.2604 & 0.0627 & 0.1670 &\myfontb{0.5030} &\textcolor{blue}{0.3473} &\textcolor{blue}{0.3274} \\
AReLU(ours) &\myfontb{0.8530} &\textcolor{blue}{0.6271} & \myfontb{0.7675} & \myfontb{0.6384} & \myfontb{0.2459} & \myfontb{0.4687} & \textcolor{blue}{0.5009} & \myfontb{0.3490} & \myfontb{0.3279}\\
\hline
\end{tabular}
}
\end{threeparttable}
\label{tab:table7}
\end{table*}
\section{Conclusions}
In this work, a novel methodology is proposed to adaptively customize activation functions for different layers; it helps to avoid local minima and accelerate convergence, thereby increasing precision, reducing training cost and improving generalization performance. In this methodology, a small number of parameters are introduced into traditional activation functions such as Sigmoid, Tanh and ReLU, and theoretical and experimental analyses of the accelerated convergence and improved performance are presented. To verify the effectiveness of the proposed methodology, a series of experiments are conducted on CIFAR10, CIFAR100, miniImageNet, PASCAL VOC and COCO, employing various network models such as VGGNet, GoogleNet, ResNet and DenseNet, various optimization strategies such as SGD, Momentum, AdaGrad, AdaDelta and ADAM, and various tasks such as classification and detection. The results show that the proposed methodology is simple yet delivers significant gains in convergence speed, precision and generalization, and that it surpasses popular alternatives such as ReLU and Swish in overall performance in almost all experiments.
\section{Acknowledgments}
The authors would like to express their appreciation to the referees for their helpful comments and suggestions.
This work was supported in part by Zhejiang Provincial Natural Science Foundation of China (Grant No. LGF20H180002 and GF22F037921), and in part by National Natural Science Foundation of China (Grant No. 61802347, 61801428 and 61972354), and the National Key Research and Development Program of China (Grant No. 2018YFB1305202).
\bibliographystyle{unsrtnat}
\section{Introduction}
Reinforcement Learning (RL) provides a general framework for autonomous agents to learn complex behavior, adapt to changing environments, and generalize to unseen tasks and environments with little human interference or engineering effort.
However, RL in high-dimensional state spaces generally suffers from a difficult exploration problem, making learning prohibitively slow and sample-inefficient for many real-world tasks with sparse rewards.
A possible strategy to increase the sample efficiency of RL algorithms is reward shaping \citep{mataric1994reward,randlov1998learning}, in particular potential-based reward shaping (PB-RS) \citep{ng1999policy}.
Reward shaping provides a dense reward signal to the RL agent, enabling it to converge faster to the optimal policy.
In robotics tasks, approximate domain knowledge is often available and can be used by a planning algorithm to generate approximate plans.
Here, the resulting plan can be provided to the RL agent using plan-based reward shaping \citep{grzes2008plan,brys2015reinforcement}.
Thus, plan-based reward shaping offers a natural way to combine the efficiency of planning with the flexibility of RL.
We analyze the use of plan-based reward shaping for RL.
The key novelty is that we theoretically introduce Final-Volume-Preserving Reward Shaping (FV-RS), a superset of PB-RS.
Intuitively speaking, FV-RS allows for shaping rewards that convey the information encoded in the shaping reward in a more direct way than PB-RS, since the value of following a policy is not only determined by the shaping reward at the end of the trajectory, but can also depend on all intermediate states.
While FV-RS inevitably relaxes the optimality guarantees provided by PB-RS, we show in the experiments that FV-RS can significantly improve sample efficiency beyond PB-RS, e.g.\ allowing RL agents to learn simulated 10-dimensional continuous robotic manipulation tasks after ca.\ 300 rollout episodes.
We argue that the strict notion of optimality in PB-RS is not necessary in many robotics applications, while on the other hand relaxing PB-RS to FV-RS facilitates speeding up the learning process.
Using FV-RS could be a better trade-off between optimality and sample efficiency in many domains.
The contributions of this work are:
\begin{itemize}
\item We introduce FV-RS as a new class of reward shaping for RL methods in general.
\item We propose to specifically use FV-RS for plan-based reward shaping.
\item We show that compared to no RS and plan-based PB-RS, plan-based FV-RS significantly increases the sample efficiency in several robotic manipulation tasks.
\end{itemize}
\section{Related Work}
\subsection{Sparse Rewards and Reward Shaping}
In many real-world RL settings, the agent is only given sparse rewards, exacerbating the exploration problem.
There exist several approaches in the literature to overcome this issue.
These include mechanisms of intrinsic motivation and curiosity \citep{barto2004intrinsically,oudeyer2007intrinsic,schembri2007evolving}, which provide the agent with additional intrinsic rewards for events that are novel, salient, or particularly useful for the learning process.
In reward optimization \citep{sorg2010reward, sequeira2011emotion, sequeira2014learning}, the reward function itself is being optimized to allow for efficient learning.
Similarly, reward shaping \citep{mataric1994reward,randlov1998learning} is a technique to give the agent additional rewards in order to guide it during training.
In PB-RS \citep{ng1999policy,wiewiora2003potential,wiewiora2003principled,devlin2012dynamic}, this is done in a way that ensures that the resulting optimal policy is the same with and without shaping.
\citet{ng1999policy} also showed that the reverse statement holds as well; PB-RS is the only type of modification to the reward function that can guarantee such an invariance if no other assumptions about the Markov Decision Process (MDP) are made.
In this work, we introduce Final-Volume-Preserving Reward Shaping (FV-RS), a subclass of reward shaping that is broader than PB-RS and not necessarily potential-based, and therefore is not guaranteed to leave the optimal policy invariant.
However, FV-RS still guarantees the invariance of the asymptotic state of the MDP under optimal control.
In the experiments section, we show that this relaxed notion of reward shaping allows us to substantially improve the sample efficiency during training.
\subsection{Demonstration- and Plan-Based Reward Shaping}
Learning from Demonstration (LfD) aims at creating a behavioral policy from expert demonstrations.
Existing approaches differ considerably in how the demonstration examples are collected and how the policy is derived from this \citep{argall2009survey,ravichandar2020recent}.
The HAT algorithm \citep{taylor2011integrating} introduces an intermediate policy summarization step, in which the demonstrated data is translated into an approximate policy that is then used to bias exploration in a final RL stage.
In \citet{hester2017deep}, the policy is simultaneously trained on expert data and collected data, using a combination of supervised and temporal difference losses.
In \citet{salimans2018learning}, the RL agent is reset at the start of each episode to a state taken from a single demonstration. Other approaches \citep{thomaz2006reinforcement,knox2010combining} rely on interactive human feedback during the training process.
At the intersection of RL and LfD, reward shaping offers a natural way to include expert demonstrations or plans of the correct behavior into an RL training process.
Prior work in this area includes using abstract plan knowledge represented in the form of STRIPS operators to create a potential function for PB-RS \citep{grzes2008plan,efthymiadis2016overcoming,devlin2016plan}, which has been applied to the strategy game of Starcraft \citep{efthymiadis2013using}.
\citet{brys2015reinforcement} use expert demonstrations to directly construct a Gaussian potential function, and in \citet{suay2016learning}, this is extended to include multiple demonstrations that are translated into a potential function using Inverse Reinforcement Learning as an intermediate step.
In this work, we use a planned sequence in state-space to construct a shaping function similar to \citet{brys2015reinforcement}, but in contrast to the aforementioned work, we do not use this shaping function as a potential function for PB-RS.
Instead, we use it directly as a reward function for FV-RS.
We show that this significantly improves the sample efficiency during training.
\section{Background}
\subsection{Markov Decision Processes and Reinforcement Learning}
We consider decision problems that can be described as discrete-time MDPs \citep{bellman1957markovian} $\langle {\mathbb{S}}, {\mathbb{A}}, T, \gamma, R \rangle$. Here, ${\mathbb{S}}$ is the set of all possible states, and ${\mathbb{A}}$ is the set of all possible actions.
$T: {\mathbb{S}} \times {\mathbb{A}} \times {\mathbb{S}} \rightarrow [0,1]$ describes the dynamics of the system; $T(s'|s,a)$ is the probability (density) of the next state being $s'$, provided that the current state is $s$ and the action taken is $a$. After the transition, the agent is given a reward $R(s,a,s')$. The discount factor $\gamma \in [0,1)$ trades off immediate and future reward.
The goal of RL is to learn an optimal policy $\pi^*: {\mathbb{S}} \times {\mathbb{A}} \rightarrow [0,1] $ that maximizes the expected discounted return, i.e.
\begin{align}
\label{eq:rl_problem}
\pi^* = \argmax_{\pi} \sum_{t=0}^\infty \gamma^t \mathbb{E}_{s_{t+1} \sim T(\cdot\mid s_t,a_t),\, a_{t} \sim \pi(\cdot\mid s_t), s_0 \sim p_0}\left[ R(s_t,a_t,s_{t+1}) \right] \text{,}
\end{align}
where $p_0$ is the initial-state distribution,
from collected transition and reward data $D = \{(s_i,a_i,r_i,s_i')\}_{i=0}^n$.
The $Q$-function of a given policy $\pi$ is the expected return for choosing action $a_0$ in state $s_0$, and following $\pi$ thereafter, i.e.
\begin{align}
Q^\pi(s_0,a_0) = \mathbb{E}_{s_{t+1} \sim T(\cdot\mid s_t,a_t),\, a_{t} \sim \pi(\cdot\mid s_t)} \left[ \sum_{t=0}^\infty \gamma^t R(s_t,a_t,s_{t+1}) \right] \quad \text{.}
\end{align}
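As a minimal illustration (not part of the paper's method), the discounted return inside these expectations can be estimated by simple Monte Carlo averaging over sampled reward sequences; the function names below are ours:

```python
def discounted_return(rewards, gamma):
    """Discounted sum sum_t gamma**t * r_t of one sampled reward sequence."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def mc_q_estimate(reward_sequences, gamma):
    """Monte Carlo estimate of Q^pi(s0, a0): average discounted return over
    rollouts that all start with (s0, a0) and follow pi afterwards."""
    returns = [discounted_return(rs, gamma) for rs in reward_sequences]
    return sum(returns) / len(returns)
```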
There exists a range of RL algorithms to solve \eqref{eq:rl_problem}.
Our reward-shaping approach only modifies the reward and therefore can be combined with any RL algorithm.
In this work, we are interested in applications in robotics, where both ${\mathbb{S}}$ and ${\mathbb{A}}$ are typically continuous.
A popular algorithm in this case is Deep Deterministic Policy Gradient (DDPG) \citep{lillicrap2015continuous}, which will be used for most of the robotic manipulation examples in this work.
An exception to this are the examples in appendix \ref{sec:non_ddpg_examples}, which use Proximal Policy Optimization (PPO) \citep{schulman2017proximal}.
\subsection{Potential-Based Reward Shaping}
In many RL problems, rewards are sparse, making it harder for the RL algorithm to converge to the optimal policy. One possibility to alleviate this problem is to modify the reward $R$ of the original MDP with a shaping reward $F$.
\begin{align}
\tilde{R}(s,a,s') = R(s,a,s') + F(s,a,s')
\end{align}
In general, the optimal policy $\tilde{\pi}^*$ of the MDP $\tilde{M}$ with the shaped reward $\tilde{R}$ is different from the optimal policy $\pi^*$ of the MDP $M$ with the original reward $R$.
\citet{ng1999policy} showed however that $\tilde{\pi}^* \equiv \pi^*$ if and only if $F(s,a,s')$ is derived from a potential function $\Phi: {\mathbb{S}} \rightarrow \mathbb{R}$:
\begin{align}
F(s,a,s') = \gamma \Phi(s') - \Phi(s)
\end{align}
This proof was extended \citep{wiewiora2003principled} to potential functions of both actions and states, provided that the next action taken, $a'$, is known already.
A further generalization to time-dependent potential functions was introduced in \citet{devlin2012dynamic}.
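A key consequence of the potential-based form is that the discounted shaping bonus telescopes along any state sequence to $\gamma^T \Phi(s_T) - \Phi(s_0)$, so intermediate states contribute nothing; this is precisely the property that FV-RS will relax. A minimal sketch (illustrative only, state-only potentials):

```python
def pb_shaping(phi, gamma):
    """F(s, s') = gamma * phi(s') - phi(s): the potential-based shaping term."""
    return lambda s, s_next: gamma * phi(s_next) - phi(s)

def shaped_bonus(states, phi, gamma):
    """Discounted sum of shaping terms along a state sequence; telescopes
    to gamma**T * phi(s_T) - phi(s_0), independent of intermediate states."""
    F = pb_shaping(phi, gamma)
    return sum(gamma ** t * F(s, s_next)
               for t, (s, s_next) in enumerate(zip(states, states[1:])))
```

In particular, a trajectory that returns to its starting state accumulates zero shaped bonus, whatever path it took.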
\section{Final-Volume-Preserving Reward Shaping}
\label{sec:cp_rs}
In the following, we introduce the notion of the optimal final volume, which will then allow us to introduce Final-Volume-Preserving Reward Shaping (FV-RS).
FV-RS does not guarantee the invariance of the optimal policy like PB-RS does, but instead provides a less restrictive guarantee of preserved long-term behavior.
\subsection{Optimal Final Volume}
\begin{definition}
\label{def:convergence_under_optimal_control}
Let $M$ be an MDP with state-space ${\mathbb{S}}$.
If the set ${\mathbb{G}} = \cap_{{\mathbb{G}}_a \in \mathcal{A}} {\mathbb{G}}_a$ is non-empty, we call ${\mathbb{G}}$ the \textbf{optimal final volume of M}.
Here, $\mathcal{A}$ is the set of non-empty sets ${\mathbb{G}}_a \subseteq {\mathbb{S}}$ such that
\begin{align}
\exists t_0>0: \quad P_{s_t \sim q_{\pi^*}}(s_t \in {\mathbb{G}}_a) = 1 \quad \forall t\geq t_0 \quad \text{.}
\end{align}
\end{definition}
Thus, the optimal final volume of an MDP $M$ is the minimal subset of the state space ${\mathbb{S}}$ of $M$ in which the state lies with probability $1$ under optimal control after a finite time.
Notice that in order to keep the notation more concise, we use the strict requirement $P_{s_t \sim q_{\pi^*}}(s_t \in {\mathbb{G}}) = 1$ instead of e.g. $P_{s_t \sim q_{\pi^*}}(s_t \in {\mathbb{G}}) \geq 1- \epsilon$ in definition \ref{def:convergence_under_optimal_control}.
For many stochastic MDPs, the stationary distribution under the optimal policy has nonzero density over the entire state space ${\mathbb{S}}$. This means that strictly speaking, our definition would result in the trivial optimal final volume ${\mathbb{G}} = {\mathbb{S}}$ for these MDPs.
In practice however, many interesting MDPs are such that we can find a volume ${\mathbb{G}}$ for which it is so unlikely that the optimal agent will leave it after some finite time has passed that we can readily assume that this probability is $0$ for all practical purposes.
\subsection{Compatible MDPs and Final-Volume-Preserving Reward Shaping}
\begin{definition}
Let $M = \langle {\mathbb{S}}, {\mathbb{A}}, T, \gamma, R \rangle$ and $\tilde{M} = \langle {\mathbb{S}}, {\mathbb{A}}, T, \tilde{\gamma}, \tilde{R} \rangle$ be MDPs that are identical apart from different reward functions $R$ and $\tilde{R}$, as well as different discount factors $\gamma$ and $\tilde{\gamma}$.
Let both $M$ and $\tilde{M}$ have optimal final volumes ${\mathbb{G}}$ and $\tilde{{\mathbb{G}}}$, respectively.
We call $\tilde{M}$ \textbf{a compatible MDP to M} iff $\tilde{{\mathbb{G}}} \subseteq {\mathbb{G}}$,
and we write $\tilde{M} \subseteq M$.
If this is the case, we also call $\tilde{R}$ a \textbf{final-volume-preserving} reward shaping function.
\end{definition}
If $\tilde{M} \subseteq M$, this means that although the rewards and discounts $(R, \gamma)$ and $(\tilde{R}, \tilde{\gamma})$ are different and, in contrast to PB-RS, in general result in different optimal policies, they are similar in the sense that after a finite time, the state of $\tilde{M}$ under optimal control will be inside the optimal final volume ${\mathbb{G}}$ of $M$.
In other words: in the long run, behaving optimally in $\tilde{M}$ will have the same ``result'' as behaving optimally in $M$.
Since PB-RS leaves the optimal policy invariant, $M$ and $\tilde{M}$ are also compatible if $\tilde{M}$ is the PB-RS counterpart to $M$.
Thus, the notion of compatibility is less restrictive than the notion of an invariant optimal policy; FV-RS is a superset of PB-RS.
\subsection{FV-RS}
We now proceed to describe specific recipes to find compatible counterparts for MDPs with sparse reward.
\begin{theorem}
\label{theorem:cp_reward_function}
Let $M$ be an MDP with state space ${\mathbb{S}}$ and with the sparse reward function
\begin{align}
R(s,a,s') = 1 \quad \text{if}\ s' \in {\mathbb{B}} \text{;} \quad R(s,a,s') = 0 \quad \text{else,}
\end{align}
where ${\mathbb{B}}\subseteq {\mathbb{S}}$ is non-empty. Let $M$ have the optimal final volume ${\mathbb{G}}$, and let ${\mathbb{G}}\subseteq{\mathbb{B}}$. Let $\tilde{R}(s,a,s')$ be a reward function that fulfills
\begin{align}
\tilde{R}(s,a,s') = 1 \, \ \quad \text{if}\ s' \in {\mathbb{B}} \text{;} \quad \tilde{R}(s,a,s') \leq \Delta \quad \text{else.}
\label{eq:reward_conditions}
\end{align}
Then, for every $0<\Delta<1$ there exists a discount factor $0<\tilde{\gamma}<1$ such that the MDP $\tilde{M}$ corresponding to $\tilde{R}$ and $\tilde{\gamma}$ is compatible with $M$.
\end{theorem}
\begin{proof}
See appendix \ref{sec:cp_reward_function_proof}.
\end{proof}
This theorem gives us a worst-case bound on the reward function:
Independently of how small the difference $1- \Delta$ between the reward in ${\mathbb{B}}$ and elsewhere is, as long as the conditions in \eqref{eq:reward_conditions} are fulfilled, we can select a sufficiently large $\tilde{\gamma} < 1$ that renders $\tilde{M}$ a compatible MDP to $M$.
\begin{corollary}
\label{cor:gamma_lower_bound}
This lower bound on $\tilde{\gamma}$ is $\tilde{\gamma} > \Delta^{1/(t_0-1)}$.
\end{corollary}
\begin{proof}
Follows directly from the proof of theorem \ref{theorem:cp_reward_function}.
\end{proof}
Even if $t_0$ is unknown but finite, $\tilde{M}\subseteq M$ if $\tilde{\gamma}$ is chosen large enough. Note that so far, we have imposed no restrictions on $\tilde{R}(s,a,s')$ for $s' \notin {\mathbb{B}}$, other than $\tilde{R}(s,a,s') \leq \Delta$.
In that sense, the statements above represent an ``upper bound on the lower bound of $\tilde{\gamma}$'' if we can not assume any additional structure for $\tilde{R}$.
However, if we put more restrictions on $\tilde{R}$, this worst-case bound can be relaxed.
As an example, we discuss the case of a reward function that the agent can follow in a step-wise manner in appendix \ref{sec:step_wise_reward_function}.
There it is shown that, depending on the step width $w$, the lower bound $\tilde{\gamma} > \Delta^{1/(t_0-1)}$ can be relaxed to $\tilde{\gamma} > \Delta^{1/(w-1)}$. In the important case that the agent can follow the reward function monotonically (i.e.\ $w=1$), this lower bound becomes $0$.
For more details and a precise definition of the step width $w$, please refer to appendix \ref{sec:step_wise_reward_function}.
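As a small numeric illustration of corollary \ref{cor:gamma_lower_bound} (the values of $\Delta$ and $t_0$ below are chosen arbitrarily):

```python
def gamma_lower_bound(delta, t0):
    """Worst-case lower bound on the shaped discount factor from
    Corollary 1: gamma_tilde > delta ** (1 / (t0 - 1))."""
    return delta ** (1.0 / (t0 - 1))

# With Delta = 0.5 and t0 = 11, any gamma_tilde above ~0.933 suffices;
# a smaller Delta (shaping rewards further below the goal reward) relaxes
# the bound. For a monotonically followable reward (w = 1), the bound
# vanishes entirely, as discussed above.
```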
\subsection{Plan-Based FV-RS}
\label{sec:plan_based_mdp}
We now describe how to construct a plan-based FV-RS shaping reward function from a plan, given as a sequence in state space.
The goal is to use the plan to create a continuous reward function that gives dense feedback to the policy to guide it along the plan.
\begin{theorem}
\label{theorem:cp_plan_based}
Let $M$ be an MDP with metric state-space $({\mathbb{S}},d)$ with metric $d: {\mathbb{S}} \times {\mathbb{S}} \rightarrow \mathbb{R}$ and with the reward function
\begin{align}
\label{eq:starting_mdp_reward_function}
R(s,a,s') = 1 \quad & \text{if}\ s' \in {\mathbb{B}} \text{;} \quad
R(s,a,s') = 0 \quad \text{else,}
\end{align}
where ${\mathbb{B}}\subseteq {\mathbb{S}}$ is non-empty. Let $M$ have the optimal final volume ${\mathbb{G}}$, and let ${\mathbb{G}} \subseteq {\mathbb{B}}$. From a planned sequence $(p_0, p_1, ..., p_{L-1})$ in ${\mathbb{S}}$ with $p_{L-1} \in {\mathbb{B}}$, we can construct the reward function
\begin{align}
\tilde{R}(s,a,s') = 1 \quad \text{if}\ s' \in {\mathbb{B}} \text{;} \quad
\tilde{R}(s,a,s') = (1-\Delta) \frac{k(s')+1}{L} \ g(d(p_{k(s')},s'))\quad \text{else}\text{,}
\end{align}
where $k(s')=\argmin_i(d(p_i,s'))$ and $g: \mathbb{R}^{0+} \rightarrow (0,1]$ is strictly monotonically decreasing, where $g(0) = 1$. Then, for every $0<\Delta<1$ there exists $0<\tilde{\gamma}<1$ such that the MDP $\tilde{M}$ corresponding to $\tilde{R}$ and $\tilde{\gamma}$ is a compatible MDP to $M$.
\end{theorem}
\begin{proof}
Special case of theorem \ref{theorem:cp_reward_function}.
\end{proof}
This translates the plan into a continuous reward function that leads towards the target area ${\mathbb{B}}$. The corresponding MDP $\tilde{M}$ is guaranteed to result in the same optimal final volume as $M$.
Thus, we established the machinery to translate any MDP $M$ with the sparse reward function in \eqref{eq:starting_mdp_reward_function} into its plan-based FV-RS counterpart MDP $\tilde{M}$.
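The construction of theorem \ref{theorem:cp_plan_based} can be sketched directly in code. The sketch below assumes a Euclidean metric and the Gaussian choice $g(x) = \exp(-x^2/(2\sigma^2))$ (the same family used later in the experiments); the function names and default parameter values are illustrative, not from the paper:

```python
import math

def plan_based_fv_reward(plan, in_B, delta=0.5, sigma=0.2):
    """Shaped reward of Theorem 2 for a plan p_0, ..., p_{L-1}.

    `in_B(s)` tests membership in the goal region B; outside B the reward
    is (1 - delta) * (k + 1) / L * g(d(p_k, s)), with k the index of the
    nearest plan point."""
    L = len(plan)

    def dist(p, s):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, s)))

    def reward(s_next):
        if in_B(s_next):
            return 1.0
        k = min(range(L), key=lambda i: dist(plan[i], s_next))
        return (1.0 - delta) * (k + 1) / L * math.exp(
            -dist(plan[k], s_next) ** 2 / (2 * sigma ** 2))

    return reward
```

By construction, the reward increases as the agent advances along the plan or moves closer to it, and equals $1$ inside ${\mathbb{B}}$; with $\Delta \geq 0.5$, the non-goal reward never exceeds $\Delta$, as required by \eqref{eq:reward_conditions}.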
\section{Experiments}
We demonstrate the efficiency of using FV-RS for plan-based reward shaping using several examples.
We start with a simple discrete toy example to illustrate the difference between PB-RS and non-potential-based FV-RS.
We then compare PB-RS and FV-RS using a realistic 10-dimensional simulated robotic pushing example.
This is one of several robotic pushing and pick-and-place examples we investigated.
All results can be found in appendix \ref{sec:robotic_man_more_examples}.
Videos of all manipulation examples can be found in the supplementary material.
\subsection{Discrete Toy Example}
\label{sec:toy_example}
We start by considering a simple discrete example as shown in \figref{fig:discrete_example_setup}.
\begin{figure}
\centering
\subfloat[Setup\label{fig:discrete_example_setup}]{%
\includegraphics[width=0.5\linewidth]{figs/basic_example_setup.png}}
\\
\hspace{0.05\linewidth}
\subfloat[Reward functions\label{fig:discrete_example_reward}]{%
\includegraphics[width=0.4\linewidth]{figs/basic_example_reward_functions.pdf}}
\hfill
\subfloat[Q-functions for $\pi(s)=\texttt{right}$\label{fig:discrete_example_q_functions}]{%
\includegraphics[width=0.4\linewidth]{figs/basic_example_q_functions.pdf}}
\caption{The toy example described in \secref{sec:toy_example}.
(a) The agent starts in the middle, and can move to the right and to the left.
The goal is to grab the flag when at the goal position.
The agent collects data during two episodes indicated by the arrows.
(b) The reward functions without shaping, with PB-RS, and with FV-RS.
(c) The resulting Q-functions for policy $\pi(s)=\texttt{right}$, based on the data from the two episodes.}
\hspace{0.05\linewidth}
\label{fig:toy_example}
\end{figure}
The agent starts in the middle and can choose between the actions ${\mathbb{A}} = \{\texttt{left}, \texttt{right}, \texttt{grab} \}$, where $\texttt{grab}$ does not change the state, but allows the agent to grab the flag provided that it is already at the position of the flag.
The reward function is shown in \figref{fig:discrete_example_reward}.
Without shaping, reward $1$ is only given if the flag is captured. We also define the potential-based reward function
\begin{align}
\tilde{R}_\text{PB}(s,a,s') = R(s,a,s') + \tilde{\gamma}\Phi(s') - \Phi(s) \text{;} \qquad
\Phi(s) = \begin{cases}
0.5 \quad & \text{if}\ s = \mathrm{\texttt{finish}} \\
0 & \text{else}
\end{cases} \quad \text{,}
\end{align}
where the shaping potential $\Phi(s)$ assigns value to being at the correct position, even if the flag is not captured. Similarly, we define the reward function
\begin{align}
\tilde{R}_\text{FV}(s,a,s') = \begin{cases}
1 \quad & \text{if}\ s' = \mathrm{\texttt{finish}} \text{ and } a=\texttt{grab} \\
0.5 \quad & \text{if}\ s' = \mathrm{\texttt{finish}} \text{ and } a\neq\texttt{grab} \\
0 & \text{else}
\end{cases} \quad \text{,}
\end{align}
which is not potential-based but final-volume-preserving for the discount factor $\gamma=\tilde{\gamma}=0.7$ of the original MDP.
The agent collects data during two rollout episodes starting in the middle: in one episode it always chooses to move \texttt{right} until the end of the episode, and in the other it always moves \texttt{left}.
Assume that this data is used in an actor-critic setup, where the actor policy is not converged yet and therefore always chooses the action \texttt{right}.
For this given policy and given data, the resulting $Q$-functions are shown in \figref{fig:discrete_example_q_functions}.
Without reward shaping, there is no reward signal collected during the rollouts, and therefore naturally, the $Q$-function contains no information.
With PB-RS, the reward for moving to the correct position is exactly canceled out by the discounted penalty for moving away from it again.
As a result, the values of the $Q$-function at the starting position contain no information about which action is preferable after these two training rollouts.
With FV-RS, the shaping reward is not canceled out and propagates all the way back to the starting position. In this case, the $Q$-function provides the agent with the information that moving to the right is preferable if it finds itself at the starting position.
With PB-RS, the shaped return that is assigned to following a certain policy only depends on the discounted difference of the shaping potential at the final and initial states of the recorded episode \citep{grzes2017reward}.
For non potential-based shaping reward functions like $\tilde{R}_\text{FV}$ however, the shaped return depends on all intermediate rewards on the way there as well.
In that sense, FV-RS allows for shaped reward functions that propagate the reward information of intermediate states faster than PB-RS reward functions.
To use a physical analogy, PB-RS is analogous to a conservative force field, in which the potential energy of a particle only depends on its current position.
FV-RS that is not potential-based is like a non-conservative (e.g. friction) force, for which the energy of a particle is not only a function of its position, but a function of the entire past trajectory.
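This cancellation can be checked numerically. The sketch below assumes, purely for illustration, that the finish lies three steps to the right of the start and that choosing \texttt{right} at the finish leaves the agent there; the flag is never grabbed, so the unshaped reward is $0$ throughout:

```python
gamma = 0.7
finish = 3
# Trajectory: always move right from the start (position 0), then keep
# choosing `right` at the finish.
states = [min(t, finish) for t in range(200)]

phi = lambda s: 0.5 if s == finish else 0.0          # PB-RS potential
pb = sum(gamma ** t * (gamma * phi(s_next) - phi(s))
         for t, (s, s_next) in enumerate(zip(states, states[1:])))

# FV-RS pays 0.5 for every non-grab step that ends at the finish.
fv = sum(gamma ** t * 0.5
         for t, s_next in enumerate(states[1:]) if s_next == finish)

# pb telescopes to ~0: the bonus for reaching the finish is cancelled by
# the per-step penalty for remaining there, while fv keeps rewarding the
# agent for staying at the finish.
```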
\subsection{Robotic Pushing Task}
\label{sec:pushing_task}
In this example, we test the efficiency of FV-RS in a simulated robotic pushing task with realistic physics.
The task is very similar to the \texttt{FetchPush-v1} environment in the OpenAI Gym \citep{brockman2016openai}, but is implemented using open-source software.
Specifically, we use the NVIDIA PhysX engine \citep{physxengine} to simulate a box of size $0.4 \times 0.4 \times 0.2$ lying on a table of size $3 \times 3$, as shown in \figref{fig:pushing_task_setup}.
\begin{figure}[h]
\centering
\subfloat[\label{fig:pushing_task_setup}]{%
\includegraphics[width=0.64\linewidth]{figs/pushing_setup_annotated.png}}
\subfloat[\label{fig:pushing_plan}]{%
\includegraphics[width=0.36\linewidth]{figs/pushing_plan.pdf}}
\hfill
\subfloat[\label{fig:pushing_success}]{%
\includegraphics[width=0.5\linewidth]{figs/pushing_001_largew_success_ratio.pdf}}
\subfloat[\label{fig:pushing_training}]{%
\includegraphics[width=0.5\linewidth]{figs/pushing_001_largew_top_quartile.pdf}}
\caption{Robotic pushing task.
(a) A box of size $0.4 \times 0.4 \times 0.2$ (dark gray) lying on a table of size $3 \times 3$ (light gray) is supposed to be pushed to the green position by the spherical end effector (dark gray).
(b) Top view of the table with 2-D projection of the planned 10-D trajectory (x and y coordinates of the end effector).
The planned trajectory is not executable in the noisy environment.
(c) With FV-RS, some agents are consistently successful after around $300$ training rollout episodes.
Without reward shaping or with PB-RS, none of the agents were able to consistently reach the goal in this experiment.
(d) Training progress of the top quartile of agents out of each category.
}
\label{fig:pushing_experiment}
\end{figure}
The agent controls the $3$-dimensional velocity of a spherical end effector of radius $0.06$.
The maximum velocity in any direction is $0.1$ per time step.
The actual movement of the end effector is noisy; in $10\%$ of the time steps, the end effector moves randomly.
Using quaternion representation, the state space ${\mathbb{S}}$ has $7+3 = 10$ degrees of freedom.
The goal is to push the box to within a distance of $0.1$ of the goal position shown in green, with any orientation.
Let ${\mathbb{B}}$ be the set of states that fulfill this requirement.
The unshaped reward $R(s,a,s')$ is $1$ if $s'\in {\mathbb{B}}$ and is $0$ elsewhere.
The plan $(p_0, p_1, ..., p_{L-1})$ is given as a sequence of states, as shown in \figref{fig:pushing_plan}.
The planned sequence was created manually using a noise-free model, and therefore cannot be followed directly in the realistic simulation.
Instead, it is used to create a plan-based shaped reward from it.
Following the suggestion of theorem \ref{theorem:cp_plan_based}, we specifically compare the reward functions
\begin{align}
\label{eq:pushing_cp_and_potential}
\tilde{R}_\text{FV}(s,a,s') =
\begin{cases}
1 \quad & \text{if}\ s' \in {\mathbb{B}} \\
\Phi_\text{FV}(s') & \text{else}
\end{cases}
\text{;}
\quad
\tilde{R}_\text{PB}(s,a,s') = R(s,a,s') + \left(\tilde{\gamma} \Phi_\text{PB}(s') - \Phi_\text{PB}(s) \right)\text{.}
\end{align}
If $\Phi_\text{FV}$ is chosen suitably following theorem \ref{theorem:cp_plan_based}, $\tilde{R}_\text{FV}$ is final-volume-preserving with respect to reward function $R$ of the original MDP.
The potential-based function $\tilde{R}_\text{PB}$ acts as the baseline.
Notice that $\Phi_\text{FV}$ and $\Phi_\text{PB}$ are chosen independently in order to facilitate a fair comparison of FV-RS and PB-RS.
We discuss and compare different choices both for $\Phi_\text{FV}$ and $\Phi_\text{PB}$ in appendix \ref{sec:phi_selection}.
There we compare several Gaussian plan-based functions, similar to the ones used in \citet{brys2015reinforcement}, as well as a simpler function that only depends on the distance of the box to the target.
Our analysis reveals that using the Gaussian plan-based function
\begin{align}
\label{eq:cp_potential_function_winner}
\Phi_\text{FV}(s) = \frac{1}{2} \frac{k(s)+1}{L} \exp\left(-\frac{d^2(s,p_{k(s)})}{2 \sigma^2} \right)
\end{align}
with $\sigma=0.2$ results in a robust performance for FV-RS.
Here, $k(s) = \argmin_i(d(p_i,s))$, and $d(\cdot,\cdot)$ is the Euclidean distance ignoring the coordinates corresponding to the orientation of the box.
The value of this function increases if the agent moves further along the planned path or closer towards the planned path.
Similarly, our analysis also shows that using $\Phi_\text{PB}(s) = K \Phi_\text{FV}(s)$ with $K=50$ results in a robust performance for PB-RS.
We would like to emphasize that we did not impose that $\Phi_\text{PB}(s)$ is a multiple of $\Phi_\text{FV}(s)$, rather our analysis revealed that this is the most suitable choice. For more details, please refer to appendix \ref{sec:phi_selection}.
We apply DDPG \citep{lillicrap2015continuous} for the examples presented throughout this paper, with the exception of the example presented in appendix \ref{sec:non_ddpg_examples} using PPO \citep{schulman2017proximal}.
We use $\gamma = \tilde{\gamma} = 0.9$ as discount factor.
The agent collects data in rollout episodes of random length sampled uniformly between $0$ and $300$.
Each of these episodes starts at the same initial position indicated in \figref{fig:pushing_task_setup}; we do not assume that the system can be reset to arbitrary states.
The exploration policy acts $\epsilon$-greedily with respect to the current actor, where $\epsilon=0.2$.
After every rollout, actor and critic are updated using the replay buffer.
Both actor and critic are implemented as neural networks in Tensorflow \citep{abadi2016}.
Implementation details can be found in the supplementary code material.
The data reported in \figref{fig:pushing_experiment} is obtained as follows:
For each of the shaping strategies $s \in \{\text{No RS}, \text{PB-RS}, \text{FV-RS}\}$, multiple agents $a \in \{1,...,A\}$ are trained independently from scratch.
After $N$ rollout episodes, multiple test episodes $m \in \{1,...,M\}$ are run for $300$ time steps or until the goal is reached. $d_{am}^{(s)}(N)$ is the distance of the box to the goal at the end of such a test episode.
We classify an agent as being consistently successful once $\bar{d}_a^{(s)} = 1/M \sum_m d_{am}^{(s)}(N) < 0.1$.
We emphasize that this is a rather strict criterion.
Over $M$ test rollouts, a consistently successful agent must achieve an \textit{average} asymptotic distance to the goal that is within the margin of $0.1$, meaning that essentially all test rollouts have to be within the margin.
We use $A=40$ and $M=30$ in all experiments reported.
The training of an agent $a$ is halted once it is consistently successful.
We report data from all agents in \figref{fig:pushing_success} and focus on the best performing quartile of agents out of each category in \figref{fig:pushing_training}.
The performance of agents is ranked by how quickly they become consistently successful, or by how close to the goal they get in case they never become consistently successful.
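The consistent-success criterion above can be sketched in a few lines (the distance values are hypothetical; only the margin $0.1$ and $M=30$ follow the text):

```python
# Sketch of the consistent-success criterion: an agent is consistently
# successful once the average final distance of the box to the goal over
# M test rollouts is below 0.1. The distance values here are hypothetical.

def consistently_successful(distances, margin=0.1):
    """True if the mean final distance over test rollouts is within the margin."""
    return sum(distances) / len(distances) < margin

# M = 30 test rollouts; essentially all must end within the margin,
# since a single large distance can push the mean above 0.1.
M = 30
good = [0.05] * M                      # every rollout close to the goal
mixed = [0.05] * (M - 1) + [2.0]       # one failed rollout

print(consistently_successful(good))   # True
print(consistently_successful(mixed))  # False: mean = 0.115 > 0.1
```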
FV-RS helps to learn the task with significantly higher sample efficiency than PB-RS and without reward shaping.
Some agents (see \figref{fig:pushing_success}) are consistently successful after ca.\ $300$ rollout episodes (corresponding to ca.\ \num{45000} MDP transitions) using FV-RS.
In contrast, none of the agents using PB-RS or no reward shaping were able to consistently reach the goal after $1500$ training rollouts in this example.
Apart from this pushing example, we also investigated other robotic pushing or pick-and-place settings in appendix \ref{sec:robotic_man_more_examples}.
Furthermore, we compare FV-RS and PB-RS using PPO instead of DDPG in appendix \ref{sec:non_ddpg_examples}.
While DDPG is off-policy and deterministic, PPO is on-policy and stochastic.
Consistently across all experiments, we find a significant advantage of using FV-RS over PB-RS, which qualitatively confirms the findings discussed here.
Videos of this and other manipulation examples can also be found in the supplementary material.
\section{Discussion and Conclusions}
We introduced FV-RS, a subclass of reward shaping that relaxes the invariance guarantee of PB-RS to a guarantee of preserved long-time behavior.
We illustrated that since FV-RS is not necessarily potential-based, it can convey the information provided by the shaping reward in a more direct way than PB-RS.
Using FV-RS, the value of following a policy does not necessarily depend on the discounted final reward only, but can also depend on all intermediate rewards.
Thus, FV-RS is analogous to a non-conservative force (e.g. friction), and PB-RS to a conservative force (e.g. gravity).
We proposed to use FV-RS in order to increase the sample efficiency of plan-informed RL algorithms.
We demonstrated in the experiments that, compared to plan-based PB-RS, plan-based FV-RS can increase the sample efficiency of RL significantly.
This observation is consistent across multiple manipulation examples and RL algorithms.
In all manipulation examples tested, there were agents that were consistently successful after ca.\ 300 rollout episodes using FV-RS and DDPG.
An important limitation of plan-based reward shaping in general arises if the plan used for the shaping is globally wrong.
In such a case, the shaped reward would be of no use for the exploration and potentially misleading.
Here, combining FV-RS with other exploration strategies could possibly remedy this limitation.
Another limitation of the presented method is that it is currently formulated for a goal-driven sparse-reward setting.
We restricted the present paper to the discussion of this class of MDPs since it is often encountered e.g.\ in robotic manipulation tasks.
The direct application of RL still faces major obstacles here, making reward shaping an important tool.
Since reward shaping only modifies the reward itself, our approach makes no other assumptions about the RL algorithm used. It is orthogonal to and can be combined with other techniques used in RL, including popular methods for sparse-reward settings like Hindsight Experience Replay \citep{andrychowicz2017}.
The same holds true for methods in LfD that also exploit task information provided in the form of an approximate demonstration or plan.
Plan-based FV-RS only relies on a relatively simple representation of the approximate plan in the form of a sequence in state space.
In addition, our method does not require the system to be reset to arbitrary states, as is the case e.g.\ in \citet{salimans2018learning}.
Both these aspects make it a practical choice for real-world robotic applications.
In many robotic manipulation settings, an approximate planner for the task is available, but the resulting plan can not be directly executed on the real robot.
Plan-based FV-RS could facilitate this \textit{sim-2-real transfer} in an efficient and principled way using RL.
Trading strict optimality guarantees for increased sample efficiency (while still guaranteeing preserved long-time behavior) could be beneficial in these cases.
\subsubsection*{Acknowledgments}
We thank the anonymous reviewers for their insightful suggestions and comments.
We thank the MPI for Intelligent Systems for the Max Planck Fellowship funding.
\section{Introduction}
The measurement of large-scale galaxy clustering has been an
important probe for constraining cosmological models. Large
scale structure (LSS) measurements have made remarkable progress
through observational efforts such as 2dFGRS and the Sloan Digital Sky
Survey (SDSS), which have provided accurate measurements of the
galaxy power spectrum and placed robust constraints on cosmological
parameters \cite{Tegmark:2006az,Cole:2005sx}.
The LAMOST \cite{lamost} project is a 4m ground-based quasi-meridian
reflecting Schmidt telescope. It has a 5 degree field
of view and may accommodate as many as 4000 optical fibers, so that
light from 4000 celestial objects can be fed into a number of
spectrographs simultaneously. The telescope will thus possess
the highest spectrum acquisition rate in the world. A
spectroscopic survey, which contains information about the radial
positions of galaxies, can probe the 3D distribution of galaxies
effectively. In this paper, we study the sensitivity of LAMOST to
the determination of cosmological parameters with the simulated
galaxy power spectrum. In our analysis, we also consider the
simulated observations for the future CMB and SN Ia measurements
from PLANCK and the 5-year SNLS, which will presumably be conducted
during the same time period as the LAMOST survey. Our results indicate
that LAMOST has promising potential for probing
cosmological parameters, especially for constraining the EoS of
dark energy and the neutrino mass.
The paper is organized as follows: in Section II we describe the
fitting method and the simulation technique; in Section III we
present the results and discussions; the last section contains a
summary.
\section{Methodology}
In this section, we introduce the method and the fitting procedure.
For the dynamical dark energy model, we choose the parametrization
given by \cite{Linderpara}:
\begin{equation}
\label{EoS} w(a) = w_{0} + w_{a}(1-a)~,
\end{equation}
where $a=1/(1+z)$ is the scale factor and $w_{a}=-dw/da$
characterizes the ``running" of the EoS (Run w henceforth). For the
$\Lambda$CDM model, $w_0=-1$ and $w_a=0$.
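As a quick numerical illustration of Eq.~(\ref{EoS}) (a sketch with illustrative parameter values, not results from the paper):

```python
# Sketch: the parametrization w(a) = w0 + wa*(1 - a), with a = 1/(1+z).
# The values of w0, wa below are illustrative only.

def w_of_z(z, w0, wa):
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

# LambdaCDM: w0 = -1, wa = 0, so w(z) = -1 at every redshift.
print(w_of_z(0.0, -1.0, 0.0))   # -1.0
print(w_of_z(2.0, -1.0, 0.0))   # -1.0

# A running model: at z = 1, a = 0.5, so w = w0 + 0.5*wa.
print(w_of_z(1.0, -1.1, 0.4))   # -0.9
```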
When using the MCMC global fitting strategy to constrain
cosmological parameters, dark energy perturbations should be taken
into account properly, especially for models with a time-evolving
EoS of dark energy. This issue has been recognized by many
researchers, including the WMAP group
\cite{Zhao:2005vj,globf05,wmap3,pertother}. However, when the
parameterized EoS crosses $-1$, one cannot handle the dark energy
perturbations based on quintessence, phantom, k-essence or other
non-crossing models. By virtue of the quintom model \cite{Quintom}, the
perturbations at the crossing points are continuous. Thus we have
proposed a technique to treat dark energy perturbations in the
whole parameter space.
In this study, we have modified the publicly available Markov Chain
Monte Carlo package CosmoMC \cite{CosmoMC} to include the dark energy
perturbations. To handle a parametrized EoS that crosses $-1$, we first
introduce a small positive constant $\epsilon$
to divide the full range of allowed values of the EoS $w$ into
three parts: 1) $ w
> -1 + \epsilon$; 2) $-1 + \epsilon \geq w \geq-1 - \epsilon$; and
3) $w < -1 -\epsilon $.
Working in the conformal Newtonian gauge, the perturbations of DE
can be described by \begin{eqnarray}
\dot\delta&=&-(1+w)(\theta-3\dot{\Phi})
-3\mathcal{H}(c_{s}^2-w)\delta~~, \label{dotdelta}\\
\dot\theta&=&-\mathcal{H}(1-3w)\theta-\frac{\dot{w}}{1+w}\theta
+k^{2}(\frac{c_{s}^2\delta}{{1+w}}+ \Psi)~~ . \label{dottheta}
\end{eqnarray}
Neglecting the entropy perturbation, in regions 1) and 3) the
EoS does not cross $-1$ and the perturbations are well defined by
solving Eqs.~(\ref{dotdelta},\ref{dottheta}). In case 2), the
perturbation of the energy density $\delta$, the divergence of the velocity
$\theta$, and their derivatives are finite and continuous for realistic
quintom DE models. However, for the perturbations of the
parametrizations there is clearly a divergence. In our study,
for such a regime we match the
perturbations in region 2) to regions 1) and 3) at the boundary
and set
\begin{equation}\label{dotx}
\dot{\delta}=0 ~~,~~\dot{\theta}=0 .
\end{equation}
In our numerical calculations we limit the range to be $|\Delta w =
\epsilon |<10^{-5}$ and find our method to be a very good
approximation to the multi-field quintom. More detailed treatments
can be found in Ref.\cite{Zhao:2005vj,globf05}.
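The bookkeeping of the three regions can be sketched as follows (only the division by $\epsilon$ follows the text; the full perturbation evolution of Eqs.~(\ref{dotdelta},\ref{dottheta}) is not reproduced here):

```python
# Sketch: classify w into the three regions used for the dark energy
# perturbations. In region 2 (the crossing band |w + 1| <= eps) the
# perturbation derivatives are frozen (dot(delta) = dot(theta) = 0)
# and matched to regions 1 and 3 at the boundary.

EPS = 1e-5  # the small constant epsilon used in the text

def region(w, eps=EPS):
    if w > -1.0 + eps:
        return 1          # quintessence-like side, w > -1
    if w < -1.0 - eps:
        return 3          # phantom-like side, w < -1
    return 2              # crossing band: freeze perturbation derivatives

print(region(-0.9))       # 1
print(region(-1.1))       # 3
print(region(-1.0))       # 2
```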
Furthermore, we assume purely adiabatic initial conditions and a
flat universe.
The parameter space we begin with for the numerical
calculation is:
\begin{equation}
\label{para1} {\bf P} \equiv \left(\omega_{b}, \omega_{c},
\Theta_{s}, \tau, w_{0}, w_{a}, n_{s}, \ln(10^{10}A_{s})
\right)~,
\end{equation}
where $\omega_{b}\equiv\Omega_{b}h^{2}$ and
$\omega_{c}\equiv\Omega_{c}h^{2}$ with $\Omega_b$ and $\Omega_c$
being the baryon and cold dark matter densities relative to the
critical density, respectively, $\Theta_{s}$ is the ratio
(multiplied by 100) of the sound horizon at decoupling to the
angular diameter distance to the last scattering surface, and $\tau$
is the optical depth.
In Eq.(\ref{para1}), $A_{s}$ and $n_{s}$ characterize the power
spectrum of primordial scalar perturbations. For the pivot scale of
the primordial spectrum we set $k_{\ast} = 0.05$Mpc$^{-1}$.
In our calculations, we take the total likelihood to be the
products of the separate likelihoods (${\bf \cal{L}}_i$) of CMB,
LSS and SNIa. Defining $\chi_{L,i}^2 \equiv -2 \log {\bf
\cal{L}}_i$, we then have \begin{equation}\label{chi2} \chi^2_{L,total} =
\chi^2_{L,CMB} + \chi^2_{L,LSS} + \chi^2_{L,SNIa}~. \end{equation} If the
likelihood function is Gaussian, $\chi^2_{L}$ coincides with the
usual definition of $\chi^2$ up to an additive constant
corresponding to the logarithm of the normalization factor of
${\cal L}$.
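The combination of likelihoods in Eq.~(\ref{chi2}) amounts to the following trivial computation (the numbers are hypothetical):

```python
import math

# Sketch: the total chi^2 is the sum of the individual chi^2 values,
# corresponding to multiplying independent likelihoods
# L_total = L_CMB * L_LSS * L_SNIa. Numbers are hypothetical.

def chi2_total(chi2_cmb, chi2_lss, chi2_snia):
    return chi2_cmb + chi2_lss + chi2_snia

def likelihood(chi2):
    # chi2 = -2 ln L  =>  L = exp(-chi2 / 2), up to normalization
    return math.exp(-0.5 * chi2)

c = chi2_total(10.0, 4.0, 6.0)
print(c)                                   # 20.0
# the product of separate likelihoods equals the likelihood of the sum
lhs = likelihood(10.0) * likelihood(4.0) * likelihood(6.0)
print(abs(lhs - likelihood(c)) < 1e-15)    # True
```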
The data used for current constraints include the three-year WMAP
(WMAP3)\footnote {With the newly released 5-year WMAP
data\cite{wmap5,komastue}, we have checked that the new data will
not significantly change the results. } Temperature-Temperature (TT)
and Temperature-Polarization (TE) power spectrum
\cite{wmap3:2006:1,wmap3:2006:2,wmap3:2006:3,wmap3:2006:4} as well
as the smaller scale experiments, including Boomerang-2K2
\cite{MacTavish:2005yk}, CBI \cite{Readhead:2004gy}, VSA
\cite{Dickinson:2004yr} and ACBAR \cite{Kuo:2002ua}, the SDSS
luminous red galaxy (LRG) sample \cite{Tegmark:2006az} and 2dFGRS
\cite{Cole:2005sx}, and the recently released ESSENCE data (192 samples)
\cite{Miknaitis:2007jd,Davis:2007na}. For the LSS power spectrum, we
only use the data in the linear regime up to $k \sim 0.1 h
Mpc^{-1}$. In the calculation of the likelihood from SNIa we have
marginalized over the nuisance parameter \cite{DiPietro:2002cz}.
Furthermore, we make use of the Hubble Space Telescope (HST)
measurement of the Hubble parameter $H_{0}\equiv
100$h~km~s$^{-1}$~Mpc$^{-1}$ \cite{Hubble} by multiplying the
likelihood by a Gaussian likelihood function centered around
$h=0.72$ and with a standard deviation $\sigma=0.08$. We also impose
a weak Gaussian prior on the baryon density
$\Omega_{b}h^{2}=0.022\pm0.002$ (1 $\sigma$) from the Big Bang
Nucleosynthesis \cite{BBN}, and a cosmic age tophat prior as 10 Gyr
$< t_0 <$ 20 Gyr.
For the future data, we consider the measurements of LSS from
LAMOST, the CMB from PLANCK \cite{PLANCK} and the SN Ia from 5-year
SNLS\cite{SNLS}.
For the simulation of LAMOST, we mainly simulate the galaxy power
spectrum. We consider two sources of statistical errors on the power
spectrum measurements: the sample variance and the shot noise, which
are due, respectively, to the limited number of independent wavenumbers
sampled from a finite survey volume and to the imperfect sampling of
fluctuations by the finite number of galaxies \cite{9304022}: \begin{equation}
\label{eqn:dPK} (\frac{\sigma_P}{P})^2 = 2\times \frac{(2
\pi)^3}{V}\times \frac{1}{4 \pi k^2 \Delta k}\times (1+
\frac{1}{\bar{n}P})^2~, \end{equation} where $V$ is the survey volume and
$\bar{n}$ is the mean galaxy density. From a conservative
estimation, the redshift distribution of the main sample of
LAMOST lies between 0 and 0.6 with a mean redshift around 0.2. For
simplicity, in our study we simulate the power spectrum of the
galaxies at $z=0.2$. The survey
area is $15000$ $deg^2$ and the total number of galaxies within the
survey volume is $10^7$ \cite{lamost}. We only consider the linear
regime, namely the maximum $k$ we consider is $k \sim 0.1$ $h$
Mpc$^{-1}$. The galaxy power spectrum $P(k)$ in Eq.~(\ref{eqn:dPK})
is \begin{equation} \label{bias}P(k)=b^2 p_m(k),\end{equation} where
$p_m(k)$ is the linear matter power spectrum. We take the bias $b$
to be a constant, $b=1$, when simulating the data; when using the
galaxy power spectrum to constrain cosmological parameters, we treat
$b$ as a free parameter and marginalize over it.
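The fractional error of Eq.~(\ref{eqn:dPK}) can be evaluated as in the following sketch (all survey numbers here are illustrative, not the actual LAMOST values):

```python
import math

# Sketch of Eq. (7): fractional error on the band power P(k) from
# sample variance and shot noise. V is the survey volume, nbar the
# mean galaxy density, dk the bin width. All numbers are illustrative.

def frac_error_P(k, dk, V, nbar, P):
    n_modes_inv = 2.0 * (2.0 * math.pi) ** 3 / (V * 4.0 * math.pi * k**2 * dk)
    return math.sqrt(n_modes_inv) * (1.0 + 1.0 / (nbar * P))

# Larger volume -> smaller sample variance; denser sampling -> smaller
# shot-noise correction, since 1 + 1/(nbar*P) -> 1.
e1 = frac_error_P(k=0.1, dk=0.01, V=1.0e9, nbar=1.0e-3, P=1.0e4)
e2 = frac_error_P(k=0.1, dk=0.01, V=8.0e9, nbar=1.0e-3, P=1.0e4)
print(e1 > e2)          # True: 8x the volume cuts the error by sqrt(8)
```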
For the simulation with PLANCK, we follow the method given in our
previous paper \cite{07081111}. We mock the CMB TT, EE and TE
power spectra by assuming a certain fiducial cosmological
model. For the detailed techniques, please see our previous
companion paper \cite{07081111}. We have also simulated $500$ SN
Ia according to the forecast distribution of the SNLS
\cite{future-snls}. For the error, we follow Ref.~\cite{kim},
which takes the magnitude dispersion to be $0.15$ and the systematic
error $\sigma_{sys}=0.02\times z/1.7$. The total error for each
data point is given by:
\begin{equation}
\sigma_{maga}(z_i)=\sqrt{\sigma^2_{sys}(z_i)+\frac{0.15^2}{n_i}}~,\label{snap}
\end{equation}
where $n_i$ is the number of supernovae in the $i$-th redshift bin.
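The per-bin error of Eq.~(\ref{snap}) can be sketched as follows (the bin counts used are hypothetical):

```python
import math

# Sketch of Eq. (9): the magnitude error per redshift bin combines the
# intrinsic dispersion 0.15, reduced by the number n_i of supernovae in
# the bin, with the systematic floor 0.02 * z / 1.7. Bin counts used
# below are hypothetical.

def sigma_mag(z, n_in_bin):
    sigma_sys = 0.02 * z / 1.7
    return math.sqrt(sigma_sys**2 + 0.15**2 / n_in_bin)

# With many supernovae per bin the statistical term shrinks, and the
# error approaches the systematic floor at that redshift.
print(sigma_mag(0.5, 25))        # ~0.031
print(sigma_mag(1.0, 10**6))     # close to the floor 0.02/1.7
```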
As pointed out in our previous works
\cite{Xia:2006cr,Xia:2006wd,Zhao:2006qg}, the cosmological
parameters are strongly affected by the dark energy model, due to
the degeneracies between the EoS of DE and the other parameters.
Therefore, in this paper we choose two fiducial
models with different dark energy properties: the $\Lambda$CDM model
(fiducial model I henceforth) and a dynamical dark energy model (fiducial
model II henceforth) with time-evolving EoS. The parameters of the
two fiducial models are obtained from the current
observational data.
\section{Results and Discussions}
\begin{table}{\footnotesize
TABLE I. Constraints on cosmological parameters from the current
observations and the future simulations. For the current
constraints we show the mean values with $1\sigma$ errors (Mean)
together with the best fit results. For the future mocked data we list the
standard deviation (SD) of these parameters with fiducial model I (FMI)
and fiducial model II (FMII). In order to highlight
the contribution from LAMOST, we compare the results with/without
LAMOST.
\begin{center}
\begin{tabular}{|c|cccc|cccc|}
\hline
&\multicolumn{2}{c|}{Current for
$\Lambda$CDM}&\multicolumn{2}{c|}{~Future~(SD with
FMI)}&\multicolumn{2}{c|}{Current for Run
w}&\multicolumn{2}{c|}{~Future~(SD
with FMII)}\\
\cline{2-9}
&\multicolumn{1}{c|}{~Best~Fit~}&\multicolumn{1}{c|}{~~~~Mean~~~~}&\multicolumn{1}{c|}{PLANCK$+$
SNLS}&\multicolumn{1}{c|}{PLANCK$+$
SNLS}&\multicolumn{1}{c|}{~Best~Fit~}&\multicolumn{1}{c|}{~~~~Mean~~~~}
&\multicolumn{1}{c|}{PLANCK$+$SNLS}&\multicolumn{1}{c|}{PLANCK$+$
SNLS} \\
&\multicolumn{1}{c|}{}&\multicolumn{1}{c|}{}&\multicolumn{1}{c|}{}&$+$LAMOST&\multicolumn{1}{c|}{}&\multicolumn{1}{c|}{}&\multicolumn{1}{c|}{}&$+$LAMOST\\
\hline
$w_0$&$-1$&$-1$&$0.118$&$0.100$&$-1.16$&$-1.03^{+0.15}_{-0.15}$&$0.0899$&$0.0598$\\
\hline
$w_a$&$0$&$0$&$0.522$&$0.417$&$0.968$&$0.405^{+0.562}_{-0.587}$&$0.332$&$0.162$\\
\hline
$~\Omega_{\Lambda}~$&$0.760$&$0.762^{+0.015}_{-0.015}$&$0.0115$&$0.00547$&$0.756$&$0.760^{+0.017}_{-0.018}$&$0.0125$&$0.00460$\\
\hline
$H_0$&$73.1$&$73.3^{+1.6}_{-1.7}$&$1.594$&$0.828$&$70.3$&$71.2^{+2.3}_{-2.3}$&$1.840$&$0.673$\\
\hline
$\sigma_8$&$0.769$&$0.755\pm0.031$&$0.0223$&$0.0174$&$0.634$&$0.675\pm0.068$&$0.0299$&$0.0220$\\
\hline
\end{tabular}
\end{center}}
\end{table}
In Table I, we present the numerical results of the constraints on
the cosmological parameters from the current data and the error
forecasts from the simulated data. To show the importance of LAMOST,
we compare two sets of results, one from PLANCK $+$ SNLS, the
other from PLANCK $+$ SNLS $+$ LAMOST. The matter power
spectrum is directly related to the horizon size at matter-radiation
equality; its measurement therefore yields an accurate
determination of $\Omega_m h$. On the other hand, there are
degeneracies between $\Omega_m h$ and the other cosmological
parameters, {\it e.g.} $\Omega_{\Lambda}$, $H_0$, $w_0$, $w_a$ and
so on; hence a tight constraint on $\Omega_m h$ helps
to break these degeneracies and improves the constraints on these
cosmological parameters. For example, in Table I one can see that the
constraints on $\Omega_{\Lambda}$ and $H_0$ are tightened considerably by
including LAMOST. From Figure 1, one can also see that the constraint on
the age of the universe shrinks markedly; this is because the
age is directly related to the Hubble constant and $\Omega_m$.
In figure \ref{fig1}, we plot the $2$-D cross correlations and $1$-D
probability distributions of some of the basic cosmological
parameters in Eq.(\ref{para1}), as well as some of the derived
parameters. The black solid lines are given by fitting the
simulated PLANCK and SNLS data, and the red solid lines are obtained by
including the simulated LAMOST data. From the comparison, we find
that LAMOST has promising potential for constraining cosmological
parameters, such as the EoS of dark energy, the dark energy density
$\Omega_{\Lambda}$, the age of the universe, $\sigma_8$ and the
Hubble constant.
In order to see explicitly the effect of LAMOST on dark energy
constraints, in figure \ref{fig2} we plot the $2\sigma$
confidence level contours in the $(w_0, w_a)$ plane. The black solid line
is given by the current constraints, and the red solid line is
given by fitting the simulated PLANCK and 5-year SNLS
data with fiducial model II, while the red dashed line is given by
including the simulated LAMOST data. This comparison shows clearly that
LAMOST will contribute significantly to tightening the constraints
on the EoS of dark energy. Numerically, we find that the best fit model
for the current data is a dynamical quintom model with
EoS crossing $-1$; however, the cosmological constant lies within
the $1\sigma$ confidence level. The future PLANCK measurement and
5-year SNLS SN Ia data will be able to distinguish the cosmological
constant from the dynamical model at the $2\sigma$ confidence level,
while LAMOST can improve this sensitivity significantly, to
$3.3\sigma$. The blue solid and blue dashed lines show the
comparison between the results with and without LAMOST for
fiducial model I.
On the other hand, for the parameters related to inflation
models, such as $n_s$ and $A_s$, the constraints come mainly from
PLANCK, as pointed out in our previous paper \cite{07081111}. Adding
LAMOST does not further tighten the constraints. We have also performed
another analysis with the additional parameters $\alpha_s$ and $r$
and obtained a similar conclusion, where $\alpha_s$ characterizes
the running of the primordial power spectrum index and $r$ is the
ratio of tensor to scalar perturbations.
Now we study the cosmological constraint on the neutrino mass by
adding a new parameter $f_{\nu}$ to Eq.(\ref{para1}). The
parameter $f_{\nu}$ is the neutrino fraction of the dark matter at present,
namely,
\begin{equation}
f_{\nu}\equiv\frac{\rho_{\nu}}{\rho_{DM}}=\frac{\Sigma
m_{\nu}}{93.105~eV~\Omega_ch^2}~,
\end{equation}
where $\Sigma m_{\nu}$ is the sum of the neutrino masses. In this
study, the mocked data we use are generated by assuming massless
neutrinos, {\it i.e.} $f_{\nu} = 0$, in the fiducial models.
Consequently, the constraints on $f_{\nu}$ should be regarded as the
upper limits on the neutrino mass to which the future observations will
be sensitive. It is well known that massive neutrinos modify
the shape and amplitude of the matter power spectrum, as well as the
epoch of matter-radiation equality and the angular diameter distance to the
last scattering surface. Thus they leave imprints on the
observations of CMB and LSS. In Table II, we provide the constraints
on the neutrino mass from the current observations and the future
simulated data. For the current data\footnote{Usually Lyman-$\alpha$
data give a stringent constraint on the neutrino mass; however,
their systematics are currently quite unclear. To be conservative, we have
not included them in our global analysis \cite{silk,komastue}.},
within the framework of the $\Lambda$CDM model, we get $\Sigma
m_{\nu}<0.958~eV~(95\%)$, which is consistent with the result in
Ref.\cite{Tegmark:2006az}. For the time-evolving EoS of dark energy
model, this limit is relaxed to $\Sigma m_{\nu}<1.59~eV~(95\%)$, due
to the degeneracy between the dark energy parameters and the
neutrino mass \cite{Xia:2006wd,Hannestad:2005gj}.
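The conversion between $\Sigma m_{\nu}$ and $f_{\nu}$ defined above is a one-line computation (the value of $\Omega_c h^2$ used below is illustrative):

```python
# Sketch: the neutrino dark matter fraction
#   f_nu = (sum m_nu) / (93.105 eV * Omega_c h^2),
# and the inverse used to translate a limit on f_nu into a mass bound.
# The value of Omega_c h^2 below is illustrative.

def f_nu(sum_mnu_eV, omega_c_h2):
    return sum_mnu_eV / (93.105 * omega_c_h2)

def sum_mnu(f, omega_c_h2):
    return f * 93.105 * omega_c_h2

oc = 0.105                       # illustrative Omega_c h^2
f = f_nu(0.958, oc)              # current 2-sigma LCDM bound from the text
print(f > 0)                     # True
# round trip: converting back recovers the mass bound
print(abs(sum_mnu(f, oc) - 0.958) < 1e-12)   # True
```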
\begin{table}
TABLE II. Constraints on neutrino mass from the current observations
and the future simulations. We have shown the $2\sigma$ upper
limits. In order to highlight the contribution from LAMOST, we
compare the results with/without LAMOST. \iffalse Note that when we
perform the analysis for the future surveys, the parameter space is
given by Eq. (\ref{para1}) plus $f_{\nu}$ for the two sets of
fiducial models.\fi
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Current}&\multicolumn{3}{c|}{Future}\\
\cline{3-5}
\multicolumn{2}{|c|}{}&&PLANCK+SNLS&PLANCK+SNLS+LAMOST\\
\hline
$\Lambda$CDM&$<$0.958eV&FMI&$<$0.957eV&$<$0.377eV\\
Run w&$<$1.59eV&FMII&$<$0.915eV&$<$0.346eV \\ \hline
\hline
\end{tabular}
\end{center}
\end{table}
With the simulated data, in figure \ref{fig4} we illustrate the one-dimensional
probability distribution of the total neutrino mass
$\sum m_{\nu}$. The black solid line is given by the current
constraints, the red solid line is given by fitting the
simulated PLANCK $+$ SNLS data with fiducial model II, and the red dashed
line is given by including the simulated LAMOST data. The blue solid line
and blue dashed line are the results obtained with fiducial
model I. Our results show that LAMOST can provide a more
stringent constraint on the neutrino mass. For example, the
$2\sigma$ neutrino mass limit changes from $0.957$ eV to $0.377$
eV upon including the simulated LAMOST data with fiducial model
I.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{LAMOST_A2A3_tri9.eps}
\vskip-1.3cm \vspace{10mm}\caption{One-dimensional distributions and
two-dimensional $68\%$ and $95\%$ limits on the cosmological
parameters. The black solid lines are obtained with the simulated
PLANCK $+$ SN Ia data and the red solid lines with PLANCK $+$ SN Ia
$+$ LAMOST.\label{fig1}}
\end{center}
\end{figure}
\begin{figure}[htbp]
\vspace{3mm}
\begin{center}
\includegraphics[scale=0.5]{DE_2D2.eps}
\vskip-1.3cm \vspace{10mm}\caption{2-D joint $68\%$ and $95\%$
confidence regions of the parameters $w_0$ and $w_a$ for a flat
universe. The black solid line is given by the current
constraints, the red solid line comes from the simulated data of
PLANCK and 5-year SNLS with fiducial model II, while the red dashed
line is obtained by adding the simulated LAMOST data. The blue solid
line and blue dashed line are the results for fiducial model
I with/without LAMOST respectively.\label{fig2}}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.45]{Neu.eps}
\vskip-1.3cm \vspace{10mm}\caption{One-dimensional constraints on
the neutrino mass. The black solid line is given by the current
data, the red solid line is given by fitting the simulated
PLANCK $+$ SNLS data with fiducial model II, and the red dashed line is
given by including the simulated LAMOST data. The blue solid line and
blue dashed line are the results obtained with fiducial model I.
\label{fig4}}
\end{center}
\end{figure}
\section{Summary}
In this paper, we have studied the sensitivity of the LAMOST project to
the determination of cosmological parameters. With the simulated
$3$D matter power spectrum of LAMOST, in combination with the future
PLANCK data and 5-year SNLS data, we have obtained constraints
on the various parameters by employing the MCMC method. Our results
show the potential of LAMOST for constraining cosmological
parameters, especially the EoS of dark energy and the neutrino
mass.
We have performed our analysis assuming a flat universe; however, if we take
the curvature $\Omega_k$ into consideration, namely if we allow
$\Omega_k$ to vary in the global analysis, the basic conclusions do not
change. We again find strong potential for the future
LAMOST data in determining cosmological parameters, although the
contours of each cosmological parameter are enlarged
due to the additional degree of freedom. More relevant discussion
can be found in our previous paper \cite{Zhao:2006qg}, in which we
implemented the global fitting with observational data for a
non-flat universe.
{\it Acknowledgement. --- }
Our MCMC chains were finished in the Shuguang 4000A of the Shanghai
Supercomputer Center (SSC). We thank Xuelei Chen, Bo Qin, Charling
Tao, Lifan Wang, Pengjie Zhang, Yong-Heng Zhao, Zong-Hong Zhu,
Tao-Tao Qiu and Lei Sun for discussions. This work is supported in
part by China postdoctoral science foundation, National Science
Foundation of China under Grant No. 10533010, and the 973 program
No.2007CB815401, and by the Key Grant Project of Chinese Ministry of
Education (No. 305001).
\section{General theory}
Consider $(M, \omega)$ --- a compact simply connected symplectic manifold of dimension $2n$, endowed with a symplectic form of integer type, $[\omega] \in H^2(M, \mathbb{Z})$.
Then there exists prequantization data --- a pair $(L, a)$, where $L \to M$ is a hermitian line bundle and $a \in {\cal A}_h(L)$ is a hermitian connection such that the curvature form $F_a = 2 \pi i \omega$
(thus the first Chern class $c_1(L) = [\omega]$).
An $n$-dimensional submanifold $S \subset M$ is called {\it lagrangian} iff the restriction $\omega|_S$ identically vanishes; $S$ is called a {\it Bohr - Sommerfeld} lagrangian (or BS for short)
iff the restriction $(L, a)|_S$ admits a covariantly constant section $\sigma_S \in \Gamma (L|_S)$, defined up to $\mathbb{C}^*$. For any chosen smooth section
$\alpha \in \Gamma(M, L)$ we say that $S \subset M$ is a special with respect to $\alpha$ Bohr - Sommerfeld lagrangian cycle (or $\alpha$ - SBS for short) iff it is
a Bohr - Sommerfeld lagrangian and the restriction
$\alpha|_S = e^{i c} f \sigma_S$, where $c$ is a real constant and $f$ is a strictly positive real function on $S$. In the present paper we consider compact orientable lagrangian submanifolds
only.
It was already shown that the definition above can be reformulated in terms of calibrated lagrangian geometry. For any smooth section $\alpha \in \Gamma (M, L)$ we define
the complex valued 1-form
$$
\rho_{\alpha} = \frac{<\nabla_a \alpha, \alpha>}{<\alpha, \alpha>} \in \Omega^1_{\mathbb{C}}(M \backslash D_{\alpha})
$$
where $D_{\alpha} = \{ \alpha = 0 \} \subset M$ is the zero set of $\alpha$. This form satisfies the following properties: its real part is exact, being $d ({\rm ln} \vert \alpha \vert)$,
and its imaginary part is a canonical 1-form on the complement $M \backslash D_{\alpha}$, since $d ({\rm Im} \rho_{\alpha}) = 2 \pi \omega$.
In these terms an $n$-dimensional submanifold $S \subset M$ is an $\alpha$ - SBS lagrangian if and only if the restriction ${\rm Im} \rho_{\alpha}|_S$
identically vanishes (the proof and details can be found in [1]).
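For concreteness, the exactness of the real part can be checked directly (a sketch of the computation from [1], using only the compatibility of $a$ with the hermitian structure): since $d <\alpha, \alpha> = <\nabla_a \alpha, \alpha> + <\alpha, \nabla_a \alpha> = 2 {\rm Re} <\nabla_a \alpha, \alpha>$, one gets
$$
{\rm Re}\, \rho_{\alpha} = \frac{{\rm Re} <\nabla_a \alpha, \alpha>}{<\alpha, \alpha>} = \frac{1}{2} \frac{d <\alpha, \alpha>}{<\alpha, \alpha>} = d ({\rm ln} \vert \alpha \vert),
$$
which is the exactness stated above.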
Using this ``calibrated reformulation'' of the definition, one proves that a Weinstein neighborhood ${\cal O}(S_0)$ of an $\alpha$ - SBS lagrangian submanifold $S_0$ cannot contain any other
$\alpha$ - SBS lagrangian submanifold of the same type. It follows that a fixed $\alpha$ admits a discrete set of $\alpha$ - SBS lagrangian submanifolds of the same topological type.
Recall that the situation stated above is the input of the ALAG programme, proposed by A. Tyurin and A. Gorodentsev in [2]: starting with such an $(M, \omega)$ they constructed
a certain moduli space of Bohr - Sommerfeld lagrangian cycles of fixed topological type, denoted ${\cal B}_S$. Such a moduli space is a Frechet smooth infinite dimensional
real manifold, locally modelled by unobstructed isodrastic deformations of BS lagrangian submanifolds. To define ${\cal B}_S = {\cal B}_S({\rm top}\, S, [S])$ one has
to choose the topological type of $S$ and the homology class
$[S] \in H_n(M, \mathbb{Z})$ of the corresponding BS submanifolds. Moreover, the BS level can be shifted up, so one has a series of moduli spaces ${\cal B}_S^k$
(details see in [2]).
Therefore, in the situation presented above, we can consider in the direct product ${\cal B}_S \times \mathbb{P} \Gamma (M, L)$ a certain subset given by the condition: a pair $(S, p)$ belongs to ${\cal U}_{SBS}$
iff $S$ is an $\alpha$ - SBS lagrangian submanifold, where $\alpha$ corresponds to the point $p$ in the projectivized space (of course it is possible to shift the BS level, getting
the corresponding subset in the direct product ${\cal B}_S^k \times \mathbb{P} \Gamma (M, L^k)$, but in the present text we leave aside the variation of the BS level).
This subset ${\cal U}_{SBS}$ was studied in [1]; the main result is that the canonical projection $p: {\cal U}_{SBS} \to \mathbb{P} \Gamma (M, L)$ has discrete fibers,
has non-degenerate differential at smooth points and maps ${\cal U}_{SBS}$ to an open subset of the projective space. As a corollary one establishes that
${\cal U}_{SBS}$ admits a Kahler structure at smooth points. This seems interesting, since we have started from a purely symplectic situation and arrived at
an object of Kahler geometry.
\section{The case of algebraic varieties}
Let $X$ be a compact smooth simply connected algebraic variety which admits an ample line bundle $L$; then it can be regarded as a special case of the situation presented above.
Indeed, fixing an appropriate hermitian structure $h$ on $L$ induces the corresponding Kahler form $\omega$: any holomorphic section $\alpha \in H^0(X, L)$
in the presence of $h$ defines the function $\psi_{\alpha} = - {\rm ln} \vert \alpha \vert_h$ on the complement $X \backslash D_{\alpha}$, which is a Kahler potential; therefore
$\omega$ is given by $ d I d \psi_{\alpha}$, and the ampleness condition ensures that the whole of $X$ is covered by the complements to divisors from the complete linear system $\vert L
\vert = \mathbb{P} H^0(X, L)$, so $\omega$ is globally defined on $X$, see [3].
For a holomorphic section $\alpha$ one has $\nabla_a \alpha = \partial_a \alpha$ and consequently the form $\rho_{\alpha}$ has type (1, 0) with respect to the complex structure.
Then one can deduce that the SBS condition with respect to a holomorphic section is equivalent to the following condition: a lagrangian submanifold $S \subset X$ is
$\alpha$- SBS if and only if it is invariant under
the flow generated by the gradient vector field ${\rm grad} \psi_{\alpha}$ (see [4]).
It is a well known fact of algebraic geometry that the complement $X \backslash D_{\alpha}$ described above is an example of a Stein variety,
and since we would like to study the lagrangian geometry of these complements we must follow the key points of the programme ``From Stein to Weinstein and back'', see [5].
The situation we are studying here must be regarded in the framework of Weinstein manifolds and Weinstein structures, see [5] and [6].
Indeed, the gradient vector field ${\rm grad} \psi_{\alpha}$ is Liouville, while the function $\psi_{\alpha}$ is the second ingredient of the Weinstein structure
(of course, it just reflects the fact that $X \backslash D_{\alpha}$ is Stein).
Since we claim that a lagrangian $S \subset X \backslash D_{\alpha}$ is $\alpha$ - SBS if and only if it is stable with respect to the gradient flow of
$\psi_{\alpha}$, it follows that such an $S$ must be contained in the base set $B_{\alpha} \subset X \backslash D_{\alpha}$, defined as the union of (1) the finite critical points of $\psi_{\alpha}$
and (2) the finite trajectories of the gradient flow. Now we can translate our $\alpha$ - SBS condition into the language of Weinstein manifolds and structures:
a lagrangian submanifold $S \subset X$ is $\alpha$ - SBS iff it is a component of the lagrangian skeleton defined by the Weinstein structure given by $({\rm grad} \psi_{\alpha}, \psi_{\alpha})$
on the complement $X \backslash D_{\alpha}$.
{\bf Remark.} In the previous texts [4] we used the term ``Lagrangian shadow of an ample divisor'' for the lagrangian components of the lagrangian skeleton,
since we would like to emphasize the fact that the corresponding lagrangian components
arise for any ample divisor; in the theory of Weinstein manifolds, which covers a much wider situation than our complements $X \backslash D_{\alpha}$, one speaks about
regular lagrangian submanifolds. Below we use this parallel for the modified definition of the moduli space of special Bohr - Sommerfeld lagrangian cycles.
The old definition (see [4]) which we tried to exploit for the construction of a certain moduli space of SBS lagrangian cycles over algebraic varieties used to be the following.
Take the canonical projection $p: {\cal U}_{SBS} \to \mathbb{P} \Gamma (M, L)$ to the second direct summand from Section 1; in the present situation there is a finite dimensional
projective subspace $\mathbb{P} H^0(X, L) \subset \mathbb{P} \Gamma(X, L)$ which corresponds to holomorphic sections.
Then the preimage ${\cal M}_{SBS} = p^{-1}(\mathbb{P} H^0(X, L))$ must be finite (and we have proved it for smooth Bohr - Sommerfeld
submanifolds in [4]), and we would like to understand it as the ``moduli space''.
But a great problem appears in this case, since the components of the lagrangian skeleton (or the base set) $B_{\alpha}$ are very far from being smooth lagrangian submanifolds (or even cycles);
therefore, strictly speaking, our coarse ``moduli space'' must be empty in many cases, and the framework of algebraic geometry doesn't admit any variational freedom to resolve this trouble.
In the simple case, when $H_n(X \backslash D_{\alpha}, \mathbb{Z}) = \mathbb{Z}$ for a generic smooth element $D_{\alpha}$ of the complete linear system $\vert L \vert$, the moduli space ${\cal M}_{SBS}$
can, however, be correctly defined,
as was done in [4], but in more geometrically interesting cases we face a great problem in this way: we must either present a strong theory of desingularization of
the components of lagrangian shadows, doing it however in concordance with the technical details of ALAG, or find a different definition
of special Bohr - Sommerfeld cycles with respect to holomorphic sections such that these new special submanifolds are automatically smooth.
\section{Desingularizing the definition}
Recall that we study the lagrangian geometry of the complement $X \backslash D$, where $X$ is a compact smooth simply connected algebraic variety and $D$ is an ample divisor;
then we have fixed an appropriate hermitian structure $h$ on the ample line bundle $L \to X$ corresponding to $D$, and get the K\"ahler form $\omega$, such that
the function $\psi_{\alpha} = - {\rm ln} \vert \alpha \vert$ is a K\"ahler potential ($D = \{ \alpha = 0 \} \subset X$).
The K\"ahler potential $\psi_{\alpha}$ defines the structure of a Weinstein domain on $X \backslash D$, given by the 1-form $\lambda = I d \psi_{\alpha}$ and $\psi_{\alpha}$ itself (see [5]);
then we can study {\it exact} compact orientable Lagrangian submanifolds in $X \backslash D_{\alpha}$, that is, Lagrangian submanifolds $S \subset X \backslash D$ such that the restriction
$\lambda|_S$ is an exact form. Remark that any such exact $S$ must be Bohr - Sommerfeld in the whole of $X$ with respect to the corresponding prequantization data. Moreover,
we can introduce a certain condition on Bohr - Sommerfeld lagrangian submanifolds in $X$ which is equivalent to the exactness condition on the complement $X \backslash D_{\alpha}$.
Namely, let $X \supset D$ be as above; the corresponding symplectic form $\omega$ evidently represents the cohomology class Poincare dual to $[D] \in H_{2n-2}(X,
\mathbb{Z})$. Then we say that a lagrangian submanifold $S \subset X$ is {\it D - monotonic} iff $D \cap S = \emptyset$ and for any oriented loop $\gamma \subset S$ and any
compatibly oriented disc $K_{\gamma} \subset X$ bounded by $\gamma$, the topological sum of the intersection points of $D \cap K_{\gamma}$ equals
the symplectic area of $K_{\gamma}$ (note that if $K_{\gamma}$ intersects $D$ non-transversally then we can deform it to have transversal intersection). Of course, this
definition is applicable in a much wider situation than stated above.
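In symbols, the condition of the definition reads
\[
K_{\gamma} \cdot D \;=\; \sum_{x \in K_{\gamma} \cap D} \varepsilon(x) \;=\; \int_{K_{\gamma}} \omega
\qquad \text{for every loop } \gamma \subset S \ \text{and every disc } K_{\gamma}, \ \partial K_{\gamma} = \gamma,
\]
where $\varepsilon(x) = \pm 1$ are the local intersection signs of the (deformed, transversal) disc with $D$.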
Now it is not hard to see that
{\bf Proposition.} {\it A lagrangian submanifold $S \subset X \backslash D$ is exact with respect to $\lambda$ if and only if $S$ is D - monotonic with respect to
$D$.}
In [6] one finds a list of open problems in the theory of Weinstein manifolds, and one of these problems hints at how the definition
of special Bohr - Sommerfeld lagrangian submanifolds can be modified. Namely, Problem 5.1 from [6] asks whether there exist non-regular exact lagrangian submanifolds
in $X \backslash D$. Regularity here means that they appear as components of the lagrangian skeleton of the Weinstein domain; at the same time, in our language regularity means that
they appear as components of the Lagrangian shadow $Sh^{Lag}(D)$. If the answer is negative then we should get desingularizations of the components
of $Sh^{Lag}(D)$ (or of the lagrangian skeleton) given by exact lagrangian submanifolds in the complement $X \backslash D$. Of course, the space of exact lagrangian
submanifolds is too huge for our purposes (finiteness of the moduli space), but we can factorize the space of all exact lagrangian submanifolds modulo
Hamiltonian isotopies.
Even if Problem 5.1 has a positive solution, we introduce the following
{\bf Definition.} {\it In the situation presented above consider the space of pairs $\tilde {\cal M}_{SBS} = \{([S], D)\,|\, S \in {\cal B}_S, D \in \vert L \vert \}$
where $[S]$ is a class of smooth compact orientable exact lagrangian submanifolds in the complement $X \backslash D$ up to Hamiltonian isotopy.}
The space $\tilde{\cal M}_{SBS}$ admits the forgetful map $p: \tilde{\cal M}_{SBS} \to \vert L \vert$, and we can prove that the fibers are discrete
and that the differential of this map is non-degenerate (the arguments are essentially the same as in [1]: we study the local picture over
a Weinstein neighborhood of a fixed D - monotonic $S \subset X$).
In this setup the negative answer to Problem 5.1 from [6] should mean that we have exactly the same spaces: ${\cal M}_{SBS} = \tilde{\cal M}_{SBS}$
when the first space can be correctly defined (e.g. if $H_n(X\backslash D, \mathbb{Z}) = \mathbb{Z}$). At the same time,
even if there are examples of non-regular exact lagrangian submanifolds, we still have some freedom to claim that the identity takes place.
For example, suppose there is an exact lagrangian sphere in a case where no sphere appears as a regular component of the skeleton
(the most plausible example for Problem 5.1); since we have fixed the topological type ${\rm top}\, S$ to construct the moduli space ${\cal B}_S$,
we can take
it different from the spherical one, and then this choice ``kills'' the inappropriate components. On the other hand, if Problem 5.1 has a negative answer
then it ensures that the forgetful map $p$ has finite fibers, so the moduli space admits the structure of a finite covering of an open part
of the projective space.
Let us illustrate the story by an example which has appeared several times in the previous texts, see [arX]. Take $X = \mathbb{C} \mathbb{P}^1$ and
consider $L = {\cal O}(3)$. Let us study the situation for a concrete holomorphic section, e.g. for the section defined by the polynomial $P_3 = z_0^3 -z_1^3$.
It vanishes at the three roots of unity $p_1, p_2, p_3$, which become poles of the function $\psi = - {\rm ln} \vert P_3 \vert$; the latter has
exactly 5 finite critical points --- 2 local minima $m_1 = [1:0], m_2 = [0:1]$, and three saddle points $s_1, s_2, s_3$ at the cube roots of $-1$. The base set
consists of three lines $\gamma_i$, each of which joins $m_1$ and $m_2$ passing through $s_i$. In total we get only non-smooth simple loops in the base set:
each closed loop is formed by two lines $\gamma_i, \gamma_j$, and at the points $m_1, m_2$ the loop has corners. Therefore, if we are looking for the ``old version''
of the moduli space ${\cal M}_{SBS}$, we must specify which singular loops are allowed in our situation. However, in this case the specification can be done:
we may say that a singular loop is allowed if it can be transformed by a small deformation into a smooth Bohr - Sommerfeld loop. Then one gets exactly three
simple elements for the moduli space.
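As a sketch of the critical-point count (in the affine chart $z = z_1/z_0$, with the standard Fubini - Study hermitian structure on ${\cal O}(3)$, so the normalization is convention-dependent), one has
\[
\psi(z) \;=\; -\,{\rm ln}\, \vert 1 - z^3 \vert \;+\; \tfrac{3}{2}\, {\rm ln}\, (1 + \vert z \vert^2),
\qquad
\partial_z \psi \;=\; \frac{3 z^2}{2\,(1 - z^3)} \;+\; \frac{3 \bar{z}}{2\,(1 + \vert z \vert^2)} .
\]
Setting $\partial_z \psi = 0$ gives $z = 0$ (the minimum $m_1$), and on the unit circle, where $\bar{z} = 1/z$, the equation reduces to $z^3 = -1$, recovering the three saddles $s_1, s_2, s_3$; the second minimum $m_2$ is the analogous critical point at $z = \infty$.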
But our new ``desingularized'' definition of the moduli space works much better:
we claim that there are exactly three smooth exact closed loops in the complement $\mathbb{C} \mathbb{P}^1 \backslash \{p_1, p_2, p_3\}$ up to Hamiltonian isotopy.
Indeed, for each zero $p_i$ we can take a smooth loop surrounding $p_i$ only and then ``blow'' it up to bound a disc of symplectic area $\frac{1}{3}$, ---
it is the desired one. Therefore the moduli space of special Bohr - Sommerfeld lagrangian cycles $\tilde {\cal M}_{SBS} (S^1, 0, {\cal O}(3))$
is organized as follows: over a generic point of $\mathbb{P}H^0 (\mathbb{C}\mathbb{P}^1, {\cal O}(3)) \backslash \Sigma$, where $\Sigma$ is the Veronese embedding of $\mathbb{C} \mathbb{P}^1$,
one has three preimages, and ramification appears over the discriminant locus, corresponding to multiple zeros, where the three leaves come to one.
In a sense the presented passage from the components of the skeleton to exact lagrangian submanifolds looks like the standard reduction from $\bar \partial$ - operators to hermitian connections
in the theory of stable holomorphic vector bundles, see [7]. Indeed, since the quotient space of $\bar \partial$ - operators modulo the locally non compact gauge group is
topologically extremely complicated, one realizes the holomorphic structures by the gauge classes of hermitian connections.
The realization of special Bohr - Sommerfeld lagrangian cycles presented here via D - monotonic lagrangian submanifolds modulo Hamiltonian isotopies makes it possible to
realize the following ``mirror symmetry dream'': in [8] it was claimed that lagrangian submanifolds should correspond to vector bundles. This conjectured duality
can be realized using the moduli space of SBS lagrangian submanifolds as follows: consider in our given algebraic variety another Lagrangian submanifold $S_0 \subset X$. Then for any
point of the moduli space $([S], p) \in \tilde {\cal M}_{SBS}$ take the vector space $HF(S_0; S, \mathbb{C})$ of the Floer cohomology of the pair $S_0, S$, where $S$ is a smooth
D - monotonic lagrangian submanifold representing the class $[S]$. Since the Floer cohomology is stable with respect to Hamiltonian isotopies,
the vector space doesn't depend on the particular choice of $S$; moreover, since the moduli space $\tilde {\cal M}_{SBS}$ is locally generated by specified
Hamiltonian isotopies, this implies that globally over $\tilde {\cal M}_{SBS}$ the vector spaces are combined into a complex vector bundle, which
we denote as ${\cal F}_{S_0}$. The smoothness of the representative $S$ is important here.
Thus we get a functor from the space of lagrangian submanifolds in $X$ to the set of complex vector bundles on $\tilde{\cal M}_{SBS}$.
On the other hand, the old definition of ${\cal M}_{SBS}$ as a fair subset of the direct product ${\cal B}_S \times \mathbb{P} H^0(X, L)$ admits
a straightforward introduction of a Riemannian metric on it: to do this one has to add an appropriate orientation of our Bohr - Sommerfeld lagrangian
submanifolds. Indeed, since $X$ after the choice of $\omega$ is automatically endowed with the corresponding Riemannian metric $g$, fixing
an orientation on $S \in {\cal B}_S$ we get the corresponding inner product on the tangent space at each point (and if we don't fix an orientation then we get a conformal structure).
Recall that at a point $S \in {\cal B}_S$ the tangent space is given by $C^{\infty}(S, \mathbb{R})$ modulo constants (see [2]), and
in the presence of the restricted metric $g|_S$ this space can be modelled by smooth functions normalized by $\int_S f d \mu(g|_S) = 0$.
Then the inner product is given just by the integration $\int_S f_1 f_2 d \mu(g|_S)$. Taking the standard Fubini - Study metric on the second direct summand,
we induce a metric on the direct product ${\cal B}_S \times \mathbb{P} H^0(X, L)$, and then restricting it to our subset we get
a Riemannian metric on it (of course, one must check all the details of the construction to ensure that one gets a correctly defined metric).
Now if we are working with the modified definition of the moduli space $\tilde{\cal M}_{SBS}$, the situation turns out to be much more delicate:
how to incorporate the discussed construction into the space of classes? Is it possible? --- the answer is not quite clear.
We continue the work on these problems.
{\bf Acknowledgments.} I would like to cordially thank Ya. Eliashberg, who greatly inspired me in my humble studies.
\bigskip
{\bf Bibliography}
[1] Nik. Tyurin, {\it ``Special Bohr - Sommerfeld lagrangian submanifolds''}, Izvestiya: Mathematics, 2016, 80:6, pp. 274–293;
[2] A. Gorodentsev, A. Tyurin, {\it ``Abelian Lagrangian Algebraic Geometry''}, Izvestiya: Mathematics, 2001, 65:3, pp. 437–467;
[3] P. Griffiths, J. Harris, {\it ``Principles of algebraic geometry''}, NY, Wiley, 1978;
[4] Nik. Tyurin, arXiv:1508.06804, arXiv:1601.05975, arXiv:1609.00633;
[5] K. Cieliebak, Ya. Eliashberg, {\it ``From Stein to Weinstein and Back: Symplectic Geometry of Affine Complex Manifolds''}, Colloquium Publications Vol. 59, AMS (2012);
[6] Ya. Eliashberg, {\it ``Weinstein manifolds revisited''}, arXiv:1707.03442;
[7] S. Donaldson, P. Kronheimer, {\it ``The geometry of four-manifolds''}, Clarendon Press, Oxford, 1990;
[8] A. Tyurin, {\it ``Geometric quantization and Mirror symmetry''}, arXiv:math/9902027v1.
\end{document}
\subsection{Single Ended Event Reconstruction}
In PROSPECT waveform analysis, features from individual PMTs are grouped into multi-segment \emph{clusters} within a 20~ns arrival time window. The paired waveforms of PMTs on opposite ends of a double-ended (DE) segment are stored and analyzed together as a reconstructed \emph{pulse} containing segment-level information.
Single-ended (SE) segments with only one operable PMT also have waveform features stored and reconstructed as pulses, but the position dependence of scintillation light collection means that deposited energy cannot be accurately reconstructed~\cite{PROSPECT:2015rce, PROSPECT:2018hzo, stereo_2019, Neutrino-4}. Previous analyses used information only from DE segments, while here Single-Ended Event Reconstruction (SEER) is added. SE pulses were found to produce well-separated electronic and nuclear recoil PSD distributions across the relevant range of PMT pulse amplitudes, providing a mechanism to suppress background without full deposited-energy information.
The SEER PSD parameter is determined from PMT pulse integrals as described in~\cite{PROSPECT:2020sxr}, providing the mean and width of the electromagnetic and nuclear recoil PSD distributions as a function of SE reconstructed energy (E$_{rec}$) and data collection period for each SE segment.
SEER-determined event information enters into the IBD candidate selection as follows:
(1) if a delayed cluster includes any SE pulses, it is rejected, since neutron capture on $^6$Li is localized in a single segment;
(2) if a prompt cluster contains SE pulses with E$_{rec}<$ 0.8 MeV and a PSD value $3.5$~$\sigma$ above the mean of the electromagnetic PSD distribution, the cluster is rejected, since it likely contains nuclear recoils;
(3) if a prompt cluster contains SE pulses with E$_{rec} > 0.8$~MeV, it is rejected regardless of PSD to provide enhanced screening of $\gamma$-like backgrounds;
(4) IBD candidates within 170~$\mu$s of a cluster containing only high-PSD SE segments are vetoed.
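The four SEER vetoes can be sketched as follows; this is an illustrative sketch only (the pulse fields, data structures, and PSD-band parametrization are simplified assumptions rather than the actual PROSPECT analysis code, and the 170~$\mu$s timing veto of cut (4) is omitted since it requires event-level timing information):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SEPulse:
    e_rec: float         # reconstructed energy in the SE segment (MeV); illustrative field
    psd: float           # SEER PSD value of the pulse
    psd_em_mean: float   # mean of the electromagnetic PSD band at this energy
    psd_em_sigma: float  # width of the electromagnetic PSD band at this energy

def delayed_cluster_passes(n_se_pulses: int) -> bool:
    # Cut (1): the neutron capture on 6Li is localized in a single segment,
    # so a delayed cluster containing any SE pulse is rejected.
    return n_se_pulses == 0

def prompt_cluster_passes(se_pulses: List[SEPulse]) -> bool:
    # Cuts (2) and (3), applied to the SE pulses of a prompt cluster.
    for p in se_pulses:
        if p.e_rec > 0.8:
            return False  # cut (3): reject regardless of PSD
        if p.psd > p.psd_em_mean + 3.5 * p.psd_em_sigma:
            return False  # cut (2): likely nuclear recoil
    return True
```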
The second improvement introduced here is the division of the dataset into multiple time periods with different segment configurations. This Data Splitting (DS) process better utilizes information from earlier periods with more functional PMTs.
The DS criteria were
(1) each period contains one full reactor operational cycle, resulting in five DS periods;
(2) all periods have reactor off data bracketing the included reactor on cycle, with the exception of period 1 since the first data collected by PROSPECT was with the reactor on; and
(3) reactor-off data is divided between adjacent periods approximately equally.
For three of the four period divisions, calibration campaigns provide a convenient break point.
All PMTs that were inoperative at any point during a DS period were excluded. Fig.~\ref{fig:DS} illustrates the time evolution of DE and SE segments.
A reconstructed energy spectrum of IBD events is then formed for every DS period. The total number of IBD events detected is defined as $\text{N}_{\text{IBD}} = \text{N(E)}_{\text{On}} - \text{R} \cdot \text{N(E)}_{\text{Off}}$, where $\text{N}_{\text{On(Off)}}$ corresponds to the detected IBD-like candidates during reactor-on (off) periods with accidental backgrounds subtracted, $\text{E}$ runs over the energy region [0.8, 7.4]~MeV, and $\text{R}$ is the relative on-to-off live-time ratio~\cite{PROSPECT:2020sxr}.
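As a minimal numerical sketch of this definition (the bin contents below are toy numbers, and accidental backgrounds are assumed to be already subtracted from both inputs, as in the text):

```python
def n_ibd_spectrum(n_on, n_off, live_time_ratio):
    # Per-bin reactor-off subtraction: N_IBD(E) = N_On(E) - R * N_Off(E)
    return [on - live_time_ratio * off for on, off in zip(n_on, n_off)]
```

For example, `n_ibd_spectrum([100.0, 50.0], [20.0, 10.0], 1.5)` returns `[70.0, 35.0]`.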
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{./Live_Segment_Evolution.pdf}
\caption{Time evolution of DE and SE segment numbers with reactor-on (RxOn) and reactor-off (RxOff) running, and definition of the 5 DS periods. The number of DE segments can be seen to decrease over time while the sum of DE and SE segments decreases slightly. The DE and SE segment numbers at the end of each DS period define the detector configuration used.}
\label{fig:DS}
\end{figure}
With SEER and DS implemented, the IBD selection was optimized to maximize the effective number of signal IBD events ($N_{\rm eff}$) using 20$\%$ of the full dataset, randomly sampled. $N_{\rm eff} \equiv \sum_i ({\rm N}_{\rm IBD,i} / \sigma_i )^2$, where $\sigma_i$ includes the statistical uncertainty of both signal and background and the sum runs over 0.2~MeV prompt energy bins from 0.8 to 7.4~MeV.
$N_{\rm eff}$ illustrates the statistical significance of the measured IBD signal by incorporating the combined statistical uncertainty of both signal and background, being equal to the number of background-free signal events that yield equivalent statistical uncertainty.
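The figure of merit can be sketched as follows (a toy illustration with hypothetical bin contents):

```python
def n_eff(n_ibd_bins, sigma_bins):
    # N_eff = sum_i (N_IBD,i / sigma_i)^2, where sigma_i is the combined
    # statistical uncertainty of signal and background in prompt-energy bin i.
    return sum((n / s) ** 2 for n, s in zip(n_ibd_bins, sigma_bins))
```

A single background-free bin with $N = 100$ events has $\sigma = 10$, giving $N_{\rm eff} = 100$, which illustrates the interpretation stated above.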
Table~\ref{tab:stats} summarizes the relevant event rates for the previous and current analyses.
The increase in $N_{\rm eff}$ is largely due to the improved background rejection capabilities, as well as an increase in active volume.
\begin{table}[h]
\begin{tabular}{ccccc}
\textbf{Data Set} & \textbf{Rx-On(Off) Days} & $\textbf{N}_{\textbf{IBD}}$ & \textbf{ $\mathbf{N_{\rm \mathbf{eff}}}$} & \textbf{S:CB(AB)} \\
\hline
\hline
Prev. Analysis & 95.65(73.09) & 50560 $\pm$ 406 & 18100 & 1.37(1.78) \\
\hline
This Analysis & 95.62(72.65) & 60650 $\pm$ 338 & 35875 & 3.81(4.25) \\
\hline
Period 1 & 9.54(14.54) & 6245 $\pm$ 101 & 4196 & 3.66(6.10) \\
Period 2 & 22.83(15.71) & 16669 $\pm$ 172 & 10381 & 4.45(4.68)\\
Period 3 & 23.20(16.40) & 14983 $\pm$ 166 & 8953 & 4.00(4.36) \\
Period 4 & 22.29(16.79) & 13400 $\pm$ 161 & 7670 & 3.67(3.35)\\
Period 5 & 17.76(9.21) & 9353 $\pm$ 145 & 4674 & 3.30(2.75) \\
\end{tabular}
\caption{Final IBD event statistics for the previous and current analysis. Reactor On(Off) data taking time is presented in units of calendar days. $N_{\rm IBD}$, $N_{\rm eff}$ and both signal to cosmogenic background (S:CB) and signal to accidental background (S:AB) ratios are calculated over the IBD energy region of [0.8, 7.2]~MeV for previous analysis in \cite{PROSPECT:2020sxr} and [0.8, 7.4]~MeV for the current analysis.}
\label{tab:stats}
\end{table}
The final analysis method introduced in this Letter is a multi-period detector response unfolding.
This procedure enables the combination of spectra measured with varying detector responses in the different DS periods into a single antineutrino energy spectrum that can be compared to reactor models or other experimental measurements.
The PROSPECT $^{235}$U{} spectrum analysis uses the WienerSVD approach to perform the unfolding~\cite{Tang:2017rob}. Descriptions of the PROSPECT unfolding approach can be found in the previous joint analyses with the Daya Bay~\cite{DayaBay:2021owf} and STEREO~\cite{Stereo:2021wfd} collaborations.
The key difference in this analysis is that the separate PROSPECT DS periods are treated as correlated rather than uncorrelated inputs to a joint spectrum.
The unfolding process uses the Huber-Mueller (HM) $^{235}$U{} model~\cite{HM:Huber,HM:Mueller} as the assumed form of the antineutrino spectrum when constructing the Wiener filter.
The input to the analysis from each DS period includes the corresponding IBD prompt spectrum as well as response and covariance matrices. Each of the five prompt spectra spans a range of prompt energy of [0.8, 7.4]~MeV divided into 33 bins of 0.2~MeV width. Both non-fuel and non-equilibrium contributions from the reactor~\cite{PROSPECT:2020vcl} have been subtracted.
Response and covariance matrices are generated for each DS period using a well-benchmarked simulation following the procedure in~\cite{PROSPECT:2020sxr}.
The five prompt DS spectra are combined into a 165-bin joint energy spectrum vector.
Response and covariance matrices for each DS period are combined into their joint counterparts.
The output of the unfolding framework is a 26-bin antineutrino energy spectrum spanning the energy range of [1.8, 8.3]~MeV, with bin widths of 0.25~MeV. \par
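The bin bookkeeping of the joint construction can be sketched as follows (shapes follow the text; the WienerSVD unfolding itself~\cite{Tang:2017rob} is not reproduced here, and plain lists stand in for the actual matrix objects):

```python
def joint_inputs(spectra, responses):
    # spectra: five 33-bin prompt spectra; responses: five 33x26 response
    # matrices (as lists of rows). Returns the 165-bin joint spectrum and
    # the stacked 165x26 joint response matrix.
    joint_spectrum = [x for spec in spectra for x in spec]
    joint_response = [row for resp in responses for row in resp]
    return joint_spectrum, joint_response
```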
Jointly unfolding data from closely related measurements requires consideration of correlated uncertainties between datasets.
Uncertainties considered as period-correlated are the liquid scintillator energy response, smearing of energy resolution due to liquid scintillator degradation \cite{prospect_prd}, optical grid panel thickness, fiducialization along the length of the cell, and data acquisition thresholds.
Systematic uncertainties from background variations and IBD spectrum background subtraction are considered to be uncorrelated between periods.
The energy bin and period uncertainty correlations for each effect are included in a joint covariance matrix as on- and off-diagonal blocks which are produced through the generation and analysis of systematically fluctuated MC datasets~\cite{PROSPECT:2020sxr}. \par
\section{The PROSPECT Experiment}
The PROSPECT antineutrino detector and experimental location at the High Flux Isotope Reactor (HFIR) are described in~\cite{PROSPECT:2018dnc}. HFIR is an 85~MW$_\mathrm{th}$ compact core research reactor that uses 93~\% enriched $^{235}$U{} fuel (HEU).
The combination of HEU fuel and full core replacement every 24-day reactor cycle means that fuel evolution is negligible and over 99~\% of emitted \ensuremath{\overline{\nu}_e}{} are due to $^{235}$U{} fission. The detector is located at surface level with minimal overburden, at an average baseline of 7.9\,m from the reactor core.
The detector comprises a single scintillator tank optically separated into 154 segments (14.5~cm $\times$ 14.5~cm $\times$ 117.6~cm), each read out by two PMTs~\cite{Ashenfelter_2019}.
Approximately 4 tons of $^6$Li-loaded liquid scintillator (LiLS) with good pulse shape discrimination (PSD) properties are used to reject fast neutron recoil backgrounds and identify neutron captures via the $^6\textrm{Li}(n,t)\alpha$ interaction. Neutrinos are detected via Inverse Beta Decay (IBD), in which an \ensuremath{\overline{\nu}_e}{} interacts with a proton in the LiLS, producing a positron and a neutron. IBD events are identified via the spatial-temporal correlation of a prompt electromagnetic deposition ($e^+$ ionization and annihilation) and a delayed neutron capture on $^6$Li ($50~\mu$s mean capture time).
Intrinsic, external, and cosmogenic radiation sources are used to establish the detector’s energy scale, characterize differences in response between segments, and correct for time variations in detector performance~\cite{PROSPECT:2020sxr}. A detailed \textsc{Geant4}~\cite{Mahmood:1027671} Monte Carlo model of the detector is tuned to accurately reproduce calibration-derived energy and segment multiplicity distributions from multiple sources spanning a range of energies between $0.5$-$13.4$~MeV. This tuned simulation model is used to predict the response matrix that connects incident \ensuremath{\overline{\nu}_e}{} energy to the observed prompt energy.
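The role of the response matrix can be illustrated by a simple forward-folding sketch (toy bin counts; in practice the matrices come from the tuned simulation described above):

```python
def forward_fold(response, nu_spectrum):
    # prompt_j = sum_i R[j][i] * nu_i : maps an incident antineutrino-energy
    # spectrum to the predicted prompt-energy spectrum.
    return [sum(r_ji * nu_i for r_ji, nu_i in zip(row, nu_spectrum))
            for row in response]
```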
During operation, a number of PMTs gradually became inoperable because of current instabilities. In addition, there was a gradual decrease in the LiLS light yield.
Previous PROSPECT analyses~\cite{PROSPECT:2018dtt,PROSPECT:2018snc,PROSPECT:2020sxr} used a single detector configuration excluding all segments with any PMT inoperable at any point during data collection, thereby discarding information from earlier time periods where more PMTs were functional.
In this Letter, we introduce improved event reconstruction and analysis techniques that take advantage of multiple detector configurations to more efficiently use information from segments that were fully and/or partially instrumented for part of the data collection period. The resulting dataset yields a significant increase in statistical power thanks to the increased active-detector volume and improved background rejection.
\section{Acknowledgements}
This material is based upon work supported by the following sources: US Department of Energy (DOE) Office of Science, Office of High Energy Physics under Award No. DE-SC0016357 and DE-SC0017660 to Yale University, under Award No. DE-SC0017815 to Drexel University, under Award No. DE-SC0008347 to Illinois Institute of Technology, under Award No. DE-SC0016060 to Temple University, under Award No. DE-SC0010504 to University of Hawaii, under Contract No. DE-SC0012704 to Brookhaven National Laboratory, and under Work Proposal Number SCW1504 to Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and by Oak Ridge National Laboratory under Contract DE-AC05-00OR22725. Additional funding for the experiment was provided by the Heising-Simons Foundation under Award No. \#2016-117 to Yale University.
J.G. is supported through the NSF Graduate Research Fellowship Program. This work was also supported by the Canada First Research Excellence Fund (CFREF), and the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery program under grant \#RGPIN-418579, and Province of Ontario.
We further acknowledge support from Yale University, the Illinois Institute of Technology, Temple University, University of Hawaii, Brookhaven National Laboratory, the Lawrence Livermore National Laboratory LDRD program, the National Institute of Standards and Technology, and Oak Ridge National Laboratory. We gratefully acknowledge the support and hospitality of the High Flux Isotope Reactor and Oak Ridge National Laboratory, managed by UT-Battelle for the U.S. Department of Energy.
\section{Introduction}
\label{S:1}
In biology, the notion that there exists a preferred hierarchy of structure and function among organisms is widely regarded as a fallacy \cite{SA}. Thus, the idea of the devolution of species is considered indiscernible from their evolution. However, in evolutionary computer science, such a hierarchy is necessary and generally exists in the form of a value function. Evolutionary genetic algorithms constitute a class of meta-heuristic approaches that aim at reaching a close-to-optimal solution to a combinatorial optimization problem through the improvement, over time, of the value function associated with a population of sub-optimal candidate solutions. Good evolutionary genetic algorithms are expected to reach a close-to-optimal solution in a reduced number of generations; therefore, in most of these algorithms the value function is improved in each new generation. Moreover, all generations of solutions are feasible and sub-optimal. Therefore, the concern lies in designing a process in which value increases fast enough to reach a satisfying solution in a reduced computation time. However, defining stopping conditions for evolutionary genetic algorithms can be a tedious task \cite{STOP}, such conditions often requiring to be defined on a case by case basis \cite{STOP2}, when they are not arbitrarily set by the available computation time.\\
In numerous combinatorial optimization problems, generating super-optimal unfeasible solutions is a relatively easy task (e.g. through constraints relaxation). We call devolutionary algorithm a computation process in which successive generations of solutions are unfeasible super-optimal solutions and their value typically decreases over time. The goal in such processes is increasing the feasibility of successive generations over time, while trying to limit the decrease in value. Therefore, the main aim in designing devolutionary meta-heuristics lies in reaching an optimal or close-to-optimal solution among the first generations of feasible solutions. \\
In addition to providing natural stopping conditions (i.e. reaching one or a certain number of feasible solutions), there is an intuitive justification for using this devolutionary approach to bypass the issue of premature convergence to local optima that is so prevalent in genetic algorithms \cite{local}. Indeed, in evolutionary algorithms, the absolute (i.e. global) value of the properties possessed by successive generations of solutions that get passed onto offspring remains unknown throughout the process. All generated solutions can only be good relative to the sample of solutions generated thus far, whereas with the devolutionary approach, super-optimal solutions in the initial population are expected to possess absolutely good structural properties for a given problem (e.g. a small number of colors for coloring problems), although they are not adequately adapted to this problem. The computation process is oriented in such a way as to pass on these properties to future generations while trying to improve their adaptability to the problem at hand. One can expect, however, that generating the initial population of solutions would be computationally more demanding in the latter type of process than in the former, although the design of devolutionary algorithms can draw on advances in linear programming and other methods of generating super-optimal solutions. Another apparent limitation of the devolutionary approach is that the premise of using super-optimal solutions only seems applicable to single-objective combinatorial optimization problems, unlike the evolutionary approach, which can be used more generally as a search procedure for a wider variety of tasks, such as multi-objective optimization \cite{111} or machine learning \cite{ML}.\\
This paper is an initial attempt to evaluate the pertinence of developing devolutionary genetic algorithms for hard combinatorial optimization problems. We choose to focus on a class of edge coloring problems known as the Minimum Labeling Steiner Tree (MLST) problem, defined in section \ref{problem}. This variant of the Steiner tree problem presents the advantage of possessing some well-performing exact and heuristic solving methods, a brief review of which can be found in section \ref{review}. However, to the best of our knowledge, linear-programming-based approaches have yet to be tested for the MLST. Thus, there exists a theoretical interest in developing such methods, and the body of existing methods offers good opportunities for gauging their performances. In this regard, we propose an integer linear programming formulation of the problem in section \ref{formulation} and describe the proposed devolutionary genetic algorithm in section \ref{proposed}, which also introduces a new class of valid inequalities that we use to solve relaxations of the MLST problem. We compare this algorithm with the aforementioned exact and heuristic methods in section \ref{computation}. Finally, we draw conclusions regarding the results of this experiment and perspectives for further development of the proposed approach in section \ref{conclusion}.
\section{Problem statement}
\label{problem}
Given a graph with labeled (or colored) edges, one seeks a spanning tree covering a subset of nodes, known as terminals or basic nodes, whose edges have the least number of distinct labels (or colors). Formally, let $G = (V,E,L)$ be a labeled, connected, undirected graph, where $V$ is the set of nodes and $E$ is the set of edges, labeled on the set $L$ of labels (or colors). Let $Q$ be the set of nodes that must be connected in a feasible solution. The objective is to find a spanning tree $T$ of the sub-graph connecting all the terminals $Q$ such that the number of colors used by $T$ is minimized. This problem has numerous real-world applications. For example, a multi-modal transportation network is represented by a graph where each edge is assigned a color, denoting the company managing that edge, and each node represents a different location. It is then desirable to provide a complete service between a basic set of locations, without cycles, using the minimum number of companies, in order to minimize costs.
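Before moving to the formulation, the definition can be checked on toy instances by brute force: the MLST optimum equals the size of the smallest set of labels whose edges leave all terminals in one connected component, since a spanning tree using only those labels can then be extracted. The following Python sketch (purely illustrative, with a hypothetical instance; it is unrelated to the C++ implementation reported later) performs this exponential search with a small union-find structure.

```python
from itertools import combinations

def min_labels_connecting(nodes, edges, terminals):
    """Smallest number of labels whose edges leave all terminals in one
    connected component; a Steiner tree using only those labels then exists."""
    labels = sorted({lab for _, _, lab in edges})
    for k in range(1, len(labels) + 1):
        for subset in combinations(labels, k):
            chosen = set(subset)
            parent = {v: v for v in nodes}        # union-find forest

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]  # path halving
                    v = parent[v]
                return v

            for u, v, lab in edges:
                if lab in chosen:
                    parent[find(u)] = find(v)      # union the two components
            if len({find(t) for t in terminals}) == 1:
                return k, chosen
    return len(labels), set(labels)

# Hypothetical instance: terminals 0 and 3, four colors; no single color suffices.
edges = [(0, 1, "r"), (1, 2, "g"), (2, 3, "g"), (0, 4, "b"), (4, 3, "y")]
k, chosen = min_labels_connecting([0, 1, 2, 3, 4], edges, {0, 3})
```

Such exhaustive enumeration is of course only practical for very small instances, which is precisely why heuristic and hybrid approaches are of interest.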
\section{Related work}
\label{review}
The MLST problem is an extension of the well-studied Steiner tree problem \cite{7}, and of the minimum labeling tree spanning problem \cite{8}, which are known to belong to the class of NP-hard combinatorial optimization problems, and for which (evolutionary) genetic algorithms have proved to be successful meta-heuristic solving approaches \cite{9,10}. Just like the two problems it extends, the MLST problem belongs to the class of NP-hard problems. Thus, its most successful solving approach currently known relies on the use of heuristics and meta-heuristics. The problem was first considered in \cite{1} and a heuristic approach known as the Pilot Method, as well as meta-heuristic approaches, namely variable neighbourhood search \cite{2} and Particle Swarm Optimization \cite{6}, were successfully implemented for its resolution. The Pilot Method was found to obtain the best results compared to some meta-heuristic approaches (Tabu Search, Simulated Annealing, and Variable Neighbourhood Search). One can observe that the existing body of work on the MLST problem is exclusively based on graph-theoretic formulations of the problem. To the best of our knowledge, integer linear programming formulations, and their relaxations, have yet to be explored as a possible framework in which to develop meta-heuristic solving approaches to the problem, despite the fact that this research direction, known as Hybrid Meta-heuristics \cite{11} in the literature, has been fruitful for the Steiner tree problem \cite{12}.\\
The proposed devolutionary approach falls into the wider category of memetic algorithms, as defined in \cite{121} as \textit{``an evolutionary metaheuristic that can be viewed as a hybrid genetic algorithm combined with some kinds of local search''}, and can be more specifically classified as a hybrid nature-inspired algorithm \cite{122}. This type of algorithms have been previously used, with success, for optimization problems \cite{123} in general, and for integer linear programming problems \cite{124} in particular. Thus, a secondary novel aspect of the present research, in addition to devolutionary computation, lies in the use of a hybrid meta-heuristic approach based on an integer linear program formulation of the MLST problem and the introduction of a new class of valid constraints for Steiner problems. A key focus in memetic computing being the inclusion of problem knowledge into the solver technique \cite{125}, we will aim at effectively making use of these constraints in the algorithm, to guide the search procedure and fasten convergence.
\section{Integer Linear Programming Formulation}
\label{formulation}
Similarly to Beasley's formulation of the Steiner tree problem \cite{17}, the MLST problem can be stated as finding a minimum labeling spanning tree $T'$ in a modified network $G' = (V',E',L)$, generated by adding a new node $v'$, and connecting it using ``colorless'' edges to all nodes in $V \backslash Q$ and to an arbitrarily fixed terminal $q_0$, with additional constraints stating that every node in $V \backslash Q$ that is adjacent to $v'$ in $T'$ must be of degree one. As the adjective suggests, we consider ``colorless'' edges to be edges whose labels are not counted when evaluating a tree they are part of. \\
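A minimal sketch of this transformation, assuming a simple edge-list representation (node names are hypothetical, and `None` stands for a colorless label):

```python
def build_modified_network(V, E, Q, q0):
    """Beasley-style transformation: add a node v' and join it by
    colorless edges (label None) to every non-terminal and to the
    arbitrarily fixed terminal q0."""
    assert q0 in Q
    v_prime = "v'"
    colorless = [(v_prime, u, None) for u in V if u not in Q] + [(v_prime, q0, None)]
    return v_prime, list(V) + [v_prime], list(E) + colorless

# Hypothetical instance: 6 nodes, terminals {1, 2}, two labeled edges.
V, Q = [1, 2, 3, 4, 5, 6], {1, 2}
E = [(1, 3, "r"), (3, 2, "g")]
v_prime, V2, E2 = build_modified_network(V, E, Q, q0=1)
```

The transformation adds exactly $|V \backslash Q| + 1$ colorless edges, one per non-terminal plus one to $q_0$.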
Let edge variables $x_e \in \{0, 1\}, \forall e \in E'$ and label variables $y_l \in \{0, 1\}, \forall l \in L$ respectively indicate whether an edge $e$ and a label $l$ are used in a spanning tree corresponding to a solution to the MLST problem. We denote $X$ the solution-vector constituted of variables $x_e, \forall e \in E'$, $\delta(k)\subseteq E'$ the set of edges that are incident to a node $k, \forall k \in V'$ or, by extension, the set of edges that have exactly one endpoint in a subset of nodes, and $l(e)$ the color of edge $e, \forall e \in E$. Note that it is not necessary to model the labels of edges in $E' \backslash E$, i.e. ``colorless'' edges. \\
A generic formulation of the problem is given by the following integer linear program:
\begin{alignat}{4}
&\min \sum \limits_{l \in L} y_l & \\
\text{s.t. }& X\text{ is a spanning tree}& \\
&y_{l(e)} \geq x_e & \forall e \in E \\
&x_{\{v',k\}} + x_{\{k,i\}} \leq 1 & \forall k \in V \backslash Q, i\in V'\backslash \{v'\}: \{k, i\} \in \delta(k) \\
&x_e \in \{0, 1\} & \forall e \in E' \\
&y_l \in \{0, 1\} & \forall l \in L
\end{alignat}
In this linear program, the objective function $(1)$ minimizes the number of required labels. The abstract constraint $(2)$ simply states that the resulting solution $X$ constitutes a Steiner tree. There exist numerous ways to make this constraint explicit. In the following, we assume that it is replaced by the two following types of inequalities, where $E(W)$ is the set of edges with both endpoints in a subset of nodes $W \subset V'$:
\begin{alignat}{3}
\sum \limits_{e \in E'} x_e = n & & \\
\sum \limits_{e \in E':e\in E(W)} x_e & \leq |W|-1 \quad & \emptyset \neq W \subset V'
\end{alignat}
Note that the number of constraints of type $(8)$ is exponential in the number of nodes of the graph. The two main challenges in solving this model are thus the exponential number of constraints and the integer nature of variables.\\
Inequalities $(3)$ ensure that the label variable associated with the label of each edge is equal to 1, if said edge is part of the solution. Inequalities $(4)$ enforce the previously-described degree constraints on nodes from $V \backslash Q$ that are adjacent to $v'$ in $T'$. Finally, constraints $(5)$ and $(6)$ ensure that all variables are binary.\\
The number of constraints in the linear program can be reduced by replacing inequalities $(3)$ with the following, in which $\left\vert{S_l}\right\vert$ denotes the cardinality of the subset $S_l \subset E$ of edges whose label is $l$:
\begin{alignat}{3}
\sum \limits_{e \in E:l(e)=l} x_e \leq \min\{\left\vert{S_l}\right\vert, n-1\} \cdot y_l \quad & \forall l \in L
\end{alignat}
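On binary points, the per-edge constraints (3) and the aggregated constraints (9) cut off exactly the same solutions, which can be checked by exhaustive enumeration on a toy instance. The sketch below (illustrative only; edge and label names are hypothetical) verifies this equivalence; note that the two families generally differ on fractional points, which affects the tightness of the relaxation.

```python
from itertools import product

# Toy instance: edges e0, e1 carry label l0; edge e2 carries label l1.
edges = [("e0", "l0"), ("e1", "l0"), ("e2", "l1")]
labels = ["l0", "l1"]
S = {l: sum(1 for _, le in edges if le == l) for l in labels}
n_minus_1 = 3  # assume n - 1 >= |S_l|, so min() in (9) picks |S_l|

def ok3(x, y):   # per-edge linking constraints (3): y_{l(e)} >= x_e
    return all(y[le] >= x[e] for e, le in edges)

def ok9(x, y):   # aggregated constraints (9)
    return all(sum(x[e] for e, le in edges if le == l)
               <= min(S[l], n_minus_1) * y[l] for l in labels)

feas3, feas9 = set(), set()
for bits in product((0, 1), repeat=5):           # all binary (x, y) points
    x = dict(zip(("e0", "e1", "e2"), bits[:3]))
    y = dict(zip(labels, bits[3:]))
    if ok3(x, y): feas3.add(bits)
    if ok9(x, y): feas9.add(bits)
assert feas3 == feas9   # identical binary feasible sets
```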
\section{Proposed Devolutionary Approach}
\label{proposed}
The proposed devolutionary approach can be outlined as follows:
\begin{enumerate}[label=\Alph*]
\item Generate a population of super-optimal solutions.
\item Evaluate each individual's fitness and determine population's average fitness.
\item Repeat
\begin{itemize}
\item Select best-ranking individuals to reproduce
\item Mate at random
\item Apply crossover operator
\item Apply mutation operator
\item Evaluate each individual's fitness
\item Determine population's average fitness
\end{itemize}
Until the desired number of feasible solutions is reached; once feasible, a solution cannot reproduce anymore.
\item Choose the best feasible solution.
\end{enumerate}
In the case of the MLST problem, the generation of the initial population, the crossover and mutation operators as well as the evaluation of fitness are performed as follows.
\subsection{Generating a population of initial solutions}
This is performed by relaxing integrality constraints in the integer linear programming formulation of the MLST problem and generating a subset of optimal solutions to the relaxed problem, or, equivalently, super-optimal solutions to the MLST problem.\\
As stated before, the number of inequalities of type (8) is too high to include all of them from the start of the optimization process. Both the exact method used for comparison and the meta-heuristic presented here therefore start from a reduced ILP formulation, containing all the presented inequalities except constraints (8). An iterative process then solves this reduced linear program, separates violated constraints, adds them to the problem, and re-solves the enhanced problem. The separation of violated inequalities, which is not the main focus of this paper, is done using a generic separation library under A Branch-And-Cut System (ABACUS) \cite{ABACUS}. \\
Various super-optimal solutions can be generated by varying the choice of the arbitrary terminal to which $v'$ is connected. For some instances, it can also be fruitful to generate this initial population by using a branching procedure on the values of a small subset of binary variables (e.g. a subset of labels).
\subsection{Fitness function}
An exploitable property of the present formulation is that if edge variables $x_e, \forall e \in E'$ take an integer value in a solution, label variables $y_l, \forall l \in L$ would also have an integer value, given the structure of constraints of type (3) or (9) and the fact that objective function (1) minimizes the sum of the latter type of binary variables. Thus, one can focus on increasing the number of edge variables that take an integer value to tend towards feasibility. For this reason, the fitness function evaluates the feasibility of a solution $X$ as the number of edge variables $x_e, \forall e \in E'$ that have an integer value in the solution-vector. It is formally calculated as
$\left\vert\{x_e \in X: x_e \in \{0, 1\} \}\right\vert$. Once a solution in which all edge variables take integer values is reached, this solution would be feasible for the MLST problem.
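This fitness function translates directly into code, assuming the solution-vector is given as a list of floats (the numerical tolerance is an implementation choice, not specified in the text):

```python
def fitness(X, tol=1e-9):
    """Feasibility score: number of edge variables whose value is
    (numerically) 0 or 1 in the solution-vector X."""
    return sum(1 for v in X if min(abs(v), abs(v - 1.0)) <= tol)

X = [1.0, 0.0, 0.5, 0.5, 1.0]   # hypothetical relaxed solution
```

A solution reaching the maximal score, i.e. all edge variables integer, is feasible for the MLST problem.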
\subsection{Crossover and mutation}
In this section we introduce a new class of Chv\'atal-Gomory cutting planes. As opposed to the constraints presented in the previous sections, this class of inequalities is not needed to model the MLST problem as an integer linear program. However, their addition can reduce the solving time, by cutting off parts of the polyhedron defined by inequalities (2) to (9). After proving the validity of these inequalities, we show how they can be used in the crossover operation, in order to speed up the convergence process. \\
The following example shows the type of fractional solutions that can result from relaxing integrality constraints in the previous integer linear program. In Figure \ref{example}, we consider a sub-graph including four terminals denoted $\{t_1,t_2,t_3, t_4\}$ and two Steiner nodes denoted $\{s_1, s_2\}$, to which a node $v'$ is added and connected to all Steiner nodes and to terminal $t_1$, as per the previously described formulation of the problem. \\
An integer solution, in which variables corresponding to edges $\{s_1,v'\},$ $\{v',t_1\},$ $\{t_1,t_2\},$ $\{s_2,t_2\},$ $\{t_2,t_3\}$ and $\{t_3,t_4\}$ take value $1$, while variables corresponding to all other edges of the sub-graph take value $0$, as represented in Figure \ref{example2}, is feasible and represents a Steiner tree in this sub-graph. We modify this solution by reducing the value of the variable corresponding to edge $\{t_1,t_2\}$ to $0.5$ and assigning another $0.5$ to the variable corresponding to edge $\{s_1,s_2\}$, as represented in Figure \ref{example3}. It is easy to see that this results in a fractional solution that would satisfy constraints (2) to (6) and be feasible for the relaxed problem. \\
In order to cut such solutions off from the polyhedron of solutions to the relaxed problem, we describe a set of constraints that can be imposed on a subset of elementary cycles passing through node $v'$ in network $G'$. Node $v'$ being connected to all Steiner nodes and to one terminal, it is either adjacent to two Steiner nodes, or to one Steiner node and one terminal, in any cycle passing through $v'$. We focus on all elementary cycles passing through $v'$ that are of the former type. It should be noted that to each elementary path between two Steiner nodes in $G$ corresponds such a cycle in $G'$. \\
We first consider non-triangle cycles and then treat the case of triangles separately. Let us denote by $C$ a non-triangle cycle in $G'$ passing through $v'$ which contains two edges $\{v',s_i\}$ and $\{v',s_j\}$, where $s_i$ and $s_j $ are two Steiner nodes in $V \backslash Q$, by $x_{\{s_i,s_j\}}$ the variable corresponding to edge $\{s_i,s_j\}$, and by $\left\vert{C}\right\vert \geq 4$ the length (number of edges) of $C$. We introduce the following inequalities:
\begin{alignat}{3}
\sum \limits_{e \in C} x_e \leq \left\vert{C}\right\vert -2 - x_{\{s_i,s_j\}} & \forall C \mbox{ a cycle} : \{v',s_i\},\{v',s_j\} \in C, \{s_i,s_j\} \not \in C, s_i, s_j \in V \backslash Q
\end{alignat}
In Figure \ref{example}, one can observe for instance that cycle $v' s_2 t_2 t_3 t_4 s_1 v'$, of length 6, would result in the following inequality:
$$x_{\{v', s_2\}}+x_{\{s_2, t_2\}}+x_{\{t_2, t_3\}}+x_{\{t_3, t_4\}}+x_{\{t_4, s_1\}}+x_{\{ s_1, v'\}}\leq 4 - x_{\{s_2, s_1\}}$$
\begin{figure}[!htb]
\minipage{0.32\textwidth}
\caption{\label{example} An illustrative graph with four terminals and two Steiner nodes}
\includegraphics[width=\linewidth]{example.png}
\endminipage\hfill
\minipage{0.32\textwidth}
\caption{\label{example2} A feasible integer solution preserved by inequality (10)}
\includegraphics[width=\linewidth]{example2.png}
\endminipage\hfill
\minipage{0.32\textwidth}%
\caption{\label{example3} An unfeasible fractional solution cut off by inequality (10)}
\includegraphics[width=\linewidth]{example3.png}
\endminipage
\end{figure}
When applied to the integer solution represented in Figure \ref{example2}, this inequality would be tightly satisfied (4 $\leq$ 4), while it would not be satisfied (4 $\not\leq$ 3.5) by the fractional solution represented in Figure \ref{example3}.
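These two checks can be reproduced programmatically. The following sketch encodes inequality (10) for the cycle $v' s_2 t_2 t_3 t_4 s_1 v'$ and evaluates it on the integer solution of Figure \ref{example2} and the fractional solution of Figure \ref{example3} (the tuple edge keys are an illustrative encoding):

```python
def cut_10_satisfied(x, cycle_edges, chord, eps=1e-9):
    """Inequality (10): sum_{e in C} x_e <= |C| - 2 - x_{si,sj}."""
    lhs = sum(x.get(e, 0.0) for e in cycle_edges)
    rhs = len(cycle_edges) - 2 - x.get(chord, 0.0)
    return lhs <= rhs + eps

# Cycle v' s2 t2 t3 t4 s1 v' of length 6, with chord {s1, s2}.
C = [("v'", "s2"), ("s2", "t2"), ("t2", "t3"),
     ("t3", "t4"), ("t4", "s1"), ("s1", "v'")]
chord = ("s1", "s2")

# Integer solution of Figure 2 (only value-1 edges listed).
integer_sol = {("s1", "v'"): 1, ("v'", "t1"): 1, ("t1", "t2"): 1,
               ("s2", "t2"): 1, ("t2", "t3"): 1, ("t3", "t4"): 1}
# Fractional solution of Figure 3: 0.5 moved onto the chord {s1, s2}.
fractional_sol = {**integer_sol, ("t1", "t2"): 0.5, ("s1", "s2"): 0.5}
```

Evaluating the cut confirms the text: the integer solution satisfies it tightly ($4 \leq 4$), while the fractional solution violates it ($4 \not\leq 3.5$).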
We extend this result to all solutions in Proposition 1, a proof of which is provided below.
\begin{proposition}
Constraint (10) is a valid inequality for the set of feasible solutions to the MLST problem, defined by constraints (2) to (6).
\end{proposition}
\begin{proof}
We show that constraints (10) would be satisfied by any Steiner tree in $G$. Let $T'$ be a spanning tree in $G'$ that corresponds to a Steiner tree in $G$, that is to say that every node in $V \backslash Q$ that is adjacent to $v'$ is of degree one in $T'$. We consider $C$ to be a cycle of length $\left\vert{C}\right\vert \geq 4$ in $G'$ that contains two edges $\{v',s_i\}$ and $\{v',s_j\}$, with $s_i, s_j \in V \backslash Q$. \\
Since $T'$ is a spanning tree in $G'$, it cannot contain any cycle, thus the number of edges in $C \cap T'$ is at most $\left\vert{C}\right\vert -1$. Therefore $\sum \limits_{e \in C} x_e \leq \left\vert{C}\right\vert -1$ holds. Moreover, the degree constraint on Steiner nodes in $V \backslash Q$ imposes that the degrees of $s_i$ and $s_j$ be equal to one in $T'$, which has two consequences:
\begin{itemize}
\item If $\left\vert{C}\right\vert \geq 4$, the number of edges in $C \cap T'$ cannot be equal to $\left\vert{C}\right\vert -1$, because that would imply that one node among $s_i$ and $s_j$ has a degree at least equal to two in $T'$. Thus $\sum \limits_{e \in C} x_e \leq \left\vert{C}\right\vert -2$ holds.\\
\item If the number of edges in $C \cap T'$ was equal to $\left\vert{C}\right\vert -2$, we show that edge $\{s_i,s_j\}$ would not be part of $T'$, i.e. $x_{\{s_i,s_j\}}=0$. Firstly, edge $\{s_i,s_j\}$ being part of $T'$ would imply that edges $\{v',s_i\}$ and $\{v',s_j\}$ are not part of $T'$, otherwise $T'$ would contain triangle $v' s_i s_j v'$. Thus if edge $\{s_i,s_j\}$ is part of $T'$, then exactly one edge among $\{v',s_i\}$ and $\{v',s_j\}$, and exactly one edge among the two edges in $C\cap (\delta(s_i) \backslash \{v',s_i\}\cup \delta(s_j) \backslash \{v',s_j\})$ (i.e. the edge that comes after $\{v',s_i\}$ or the edge that comes after $\{v',s_j\}$ when going through cycle $C$ from node $v'$) are left out in the construction of $T'$, and these two edges cannot be incident to the same node. Without loss of generality, let us consider that the number of edges in $C \cap T'$ is equal to $\left\vert{C}\right\vert -2$, and that the two edges in $C$ that are not part of $T'$ are $\{v',s_i\}$ and an edge $e_j \in C\cap (\delta(s_j) \backslash \{v',s_j\})$. If edge $\{s_i,s_j\}$ was part of $T'$ then node $s_i$ would have a degree at least equal to two, which violates the degree constraint on this node. Therefore, inequality $ x_{\{s_i,s_j\}} \leq \left\vert{C}\right\vert -2 - \sum \limits_{e \in C} x_e$, forcing $ x_{\{s_i,s_j\}}$ to take value 0 if the number of edges in $C \cap T'$ equals $\left\vert{C}\right\vert -2$, holds, which is an equivalent way to state inequality (10).
\end{itemize}
Therefore, constraint (10) is satisfied by any Steiner tree in $G$. The previous example additionally showed that inequality (10) is not satisfied by some fractional solutions that would otherwise satisfy constraints (2) to (6), and that it is tightly satisfied by some solutions corresponding to Steiner trees. Thus constraint (10) is a valid inequality for the MLST problem.
\end{proof}
In the case of triangles of the form $v' s_i s_j v'$, we should mention that constraints of type (10) would exclude integer solutions where $x_{\{v',s_i\}}=x_{\{s_j, v'\}}=1$ and $x_{\{s_i,s_j\}}=0$, which correspond to Steiner nodes $s_i$ and $s_j$ not being used in the corresponding tree. Indeed, in this case the number of edges in $C \cap T'$ can be equal to $\left\vert{C}\right\vert -1=2$. The following constraint can however be stated:
\begin{alignat}{3}
\frac{x_{\{v',s_i\}} + x_{\{v',s_j\}}}{2} \leq 1 - x_{\{s_i,s_j\}} \quad & \forall C \mbox{ a triangle } v' s_i s_j v': s_i, s_j \in V \backslash Q
\end{alignat}
This constraint, whose validity is easy to verify, is an adaptation of constraints (10) to triangles. It imposes that if edge $\{s_i,s_j\}$ is part of a Steiner tree $T'$, then edges $\{v',s_i\}$ and $\{v',s_j\}$ cannot be part of it.\\
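Inequality (11) can be checked in the same spirit. The sketch below confirms that the triangle constraint allows both colorless edges when the chord is unused (corresponding to Steiner nodes $s_i$ and $s_j$ not being used in the tree), and cuts off any solution using the chord together with a colorless edge:

```python
def cut_11_satisfied(x_vsi, x_vsj, x_sisj, eps=1e-9):
    """Inequality (11): (x_{v',si} + x_{v',sj}) / 2 <= 1 - x_{si,sj}."""
    return (x_vsi + x_vsj) / 2.0 <= 1.0 - x_sisj + eps

# Both colorless edges used, chord unused: allowed (1 <= 1, tight).
allowed = cut_11_satisfied(1, 1, 0)
# Chord used together with a colorless edge: cut off (0.5 <= 0 fails).
cut_off = cut_11_satisfied(1, 0, 1)
```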
Constraints (10) and (11) can be used in the crossover operation. Let $X_1$ and $X_2$ be two fractional solution vectors corresponding to super-optimal solutions, the crossover operation is performed in the following two steps:
\subsubsection{Passing on good features}
The crossover operation first crosses the two sets of colors used by edges incident to each terminal $t_i\in Q$, whose corresponding edge variables take value 1, as well as the two sets of such edges.\\
It should be noted that this terminal-by-terminal crossover is a different operation from only crossing the sets of colors used by common edges or crossing the sets of all colors used in each solution, as illustrated in Figure \ref{example4}, representing two subsets of arcs whose value is 1 in two super-optimal solutions over the same sub-graph. This sub-graph contains two terminals $\{t_1, t_2\}$ and four Steiner nodes $\{s_1, s_2, s_3, s_4\}$, and edges in these two solutions are colored in \textit{red} (R), \textit{green} (G), \textit{blue} (B) or \textit{yellow} (Y).\\
Crossing these two solutions leads to the yellow color being passed on to the offspring, as edge $\{t_1, s_3\}$ uses this color in both solutions, and to the blue color being passed on as well, although it is used in two different edges incident to $s_1$. The red color is not passed on, as it is used in edges adjacent to two different terminals, and neither is the green color, as it is used in only one of the two solutions. Thus, crossing these two solutions leads to the values of variables $y_l , l \in \{\mbox{B}, \mbox{Y}\}$, as well as that of the edge variable corresponding to $\{t_1, s_3\}$, being set to 1 in the offspring.
\begin{figure}[!htb]
\begin{center}
\caption{\label{example4} An illustration for the first phase of the crossover operation}
\includegraphics[width=0.7\linewidth]{example4.png}
\end{center}
\end{figure}
Formally we consider the following set of edge variables for each terminal $t_i \in Q$:
$Z_i=\{e \in \delta(t_i): y_{l(e)}=1, \mbox{ in both }\ X_1 \mbox{ and } X_2\}$ and set $y_{l(e)}=1, \forall e\in Z_i$ as well as $x_e=1, \forall e\in Z_i: x_e=1, \mbox{ in both }\ X_1 \mbox{ and } X_2$. \\
The intuitive idea of this procedure is to pass on the common colors and edges that are used to connect each terminal, in solutions $X_1$ and $X_2$ to their offspring. The second step of the crossover procedure aims at the progressive removal of fractional-valued variables.
\subsubsection{Cutting off bad features}
In this second step, we consider edge variables that have a fractional value in $X_1$ and $X_2$, specifically those corresponding to edges that are incident to one or two Steiner nodes, and impose a constraint of type (10) or (11) over a cycle containing each one of them. This procedure can be formally stated as follows:
$\forall e \in E'$ such that $x_e$ is fractional in $X_1$ or $X_2$:
\begin{itemize}
\item If only one extremity of $e$ is a Steiner node, identify an elementary path between the terminal extremity of $e$ and another Steiner node, using a Depth First Search, and impose a constraint of type (10) over the non-triangle cycle passing through $v'$ thus constituted.
\item If both extremities of $e$ are Steiner nodes, impose a constraint of type (11) over the triangle formed by $v'$ and the two extremities of $e$.
\end{itemize}
Once this procedure is performed, the resulting simplified linear program is solved and new solutions presenting the highest levels of fitness are subjected to the same procedure. Additionally, a mutation operator periodically selects a terminal and allows it to be connected using a previously unused color.
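The elementary-path search used in the first case can be sketched as an iterative depth-first search in $G$ (the graph and node names below are hypothetical; in the procedure, `start` is the terminal extremity of the fractional edge and `targets` are the other Steiner nodes):

```python
def dfs_path(adj, start, targets, banned):
    """Iterative depth-first search for an elementary path from `start`
    to the first reachable node in `targets`, never crossing edge `banned`."""
    stack = [(start, [start])]
    visited = {start}
    while stack:
        node, path = stack.pop()
        if node in targets:
            return path
        for nxt in adj[node]:
            if nxt in visited or {node, nxt} == banned:
                continue
            visited.add(nxt)
            stack.append((nxt, path + [nxt]))
    return None   # no such path: no cycle of this type exists

# Hypothetical search in G (v' excluded): from terminal t3 to a Steiner
# node other than s3, avoiding the fractional edge {s3, t3}.
adj = {"s3": ["t3"], "t3": ["s3", "s2", "t4"], "s2": ["t3"], "t4": ["t3"]}
path = dfs_path(adj, "t3", {"s1", "s2"}, banned={"s3", "t3"})
```

Closing the returned path through $v'$ with the two colorless edges yields the cycle over which a constraint of type (10) is imposed.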
Given a graph $G' = (V',E',L)$, and two fractional solutions $X_1$ and $X_2$, the crossover operation, that would generate an offspring solution $X_3$, can be summarized by Algorithm \ref{cross}.
\begin{algorithm}[t]
\KwData{$G' = (V',E',L)$, $X_1$, $X_2$ }
\KwResult{$X_3$}
\For {all $t_i \in Q$}
{Define $Z_i=\{e \in \delta(t_i): y_{l(e)}=1, \mbox{ in both } X_1 \mbox{ and } X_2\}$\\
\For{all $e\in Z_i$ }
{
\If{$x_e=1, \mbox{ in both } X_1 \mbox{ and } X_2$}
{Set $x_e=1$ in the relaxed linear program\\
Set $y_{l(e)}=1$ in the relaxed linear program\\
}
}
}
\For {all $e \in E'$}
{\If{$x_e \mbox{ is fractional in } X_1 \mbox{ or } X_2$}
{Define $v_1, v_2=\mbox{ the two extremities of } e$\\
\If{$v_1 \in V \backslash Q$ and $v_2 \in V \backslash Q$ }
{Add constraint $\frac{x_{\{v',v_1\}} + x_{\{v',v_2\}}}{2} \leq 1 - x_{\{v_1,v_2\}}$ to the relaxed linear program\\
\textbf{else} \If{$v_1 \in V \backslash Q$ and $v_2 \in Q$}
{\For {all $s \in (V \backslash Q)\backslash \{v_1\}$}
{Define cycle $C=\{v',v_1\}\cup \{e\}\cup DFS(v_2,s)\cup \{s,v'\}$\\
Add constraint $\sum \limits_{a \in C} x_a \leq \left\vert{C}\right\vert -2 - x_{\{v_1,s\}}$ to the relaxed linear program
}
}
}
}
}
Define $X_3=$ Optimal solution of the relaxed linear program\\
\caption{\label{cross}Crossover procedure}
\end{algorithm}
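The first phase of Algorithm \ref{cross} can be sketched as follows, assuming solutions are given as dictionaries mapping edges to values (a simplified reading of the $Z_i$ definition; the instance data is hypothetical):

```python
def cross_phase1(delta, label_of, X1, X2, terminals):
    """First crossover phase (sketch): for each terminal, intersect the
    colors used on its incident value-1 edges in both parents; common
    colors, and common edges carrying them, are fixed to 1 in the offspring."""
    fixed_labels, fixed_edges = set(), set()
    for t in terminals:
        used1 = {label_of[e] for e in delta[t] if X1.get(e, 0) == 1}
        used2 = {label_of[e] for e in delta[t] if X2.get(e, 0) == 1}
        common = used1 & used2
        fixed_labels |= common
        fixed_edges |= {e for e in delta[t]
                        if label_of[e] in common
                        and X1.get(e, 0) == 1 and X2.get(e, 0) == 1}
    return fixed_labels, fixed_edges

# Hypothetical parents around a single terminal t1 (colors Y, R, B):
delta = {"t1": [("t1", "s3"), ("t1", "s1"), ("t1", "s2")]}
label_of = {("t1", "s3"): "Y", ("t1", "s1"): "R", ("t1", "s2"): "B"}
X1 = {("t1", "s3"): 1, ("t1", "s1"): 1}
X2 = {("t1", "s3"): 1, ("t1", "s2"): 1}
labels, edges = cross_phase1(delta, label_of, X1, X2, ["t1"])
```

Here only the yellow color, used at $t_1$ in both parents, and the edge carrying it are fixed in the offspring's program; the second phase then adds cuts of types (10) and (11) before re-solving.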
\subsection{Illustrative example}
To illustrate the functioning of the proposed algorithm, let us consider the graph provided in Figure \ref{toy}, which contains three terminals denoted $\{t_1, t_2, t_3\}$ and three Steiner nodes denoted $\{s_1, s_2, s_3\}$, in addition to node $v'$ that is connected to all Steiner nodes and to terminal $t_2$ using colorless edges (labeled C), as per our formulation. Other edges in the graph can be colored in \textit{red} (R), \textit{green} (G), \textit{blue} (B) or \textit{yellow} (Y). Figure \ref{toy1} and Figure \ref{toy2}, in which edges are labeled according to the non-zero values of their corresponding edge variables in the linear program and omitted if those variables take a zero value, represent two super-optimal fractional solutions generated by solving the relaxed linear program corresponding to this graph. We respectively denote these two solutions $X_1$ and $X_2$.
One can observe that $X_1$ and $X_2$ are both of value $1.5$ and exhibit the typical structure of a super-optimal fractional solution in our formulation. Indeed, some edges are selected and their corresponding label variables $y_l, l \in L$ take value 1 (e.g. the green color in our two super-optimal solutions), while other edges take a fractional value, and thus their corresponding label variables $y_l, l \in L$ take a fractional value. This is the case for the red color in the super-optimal solution in Figure \ref{toy1}, and for the blue color in the super-optimal solution in Figure \ref{toy2}. The goal of the crossover procedure is, simply put, to exploit this common structure of super-optimal solutions, by passing on a subset of the former type of edges, while cutting off the latter type of edges. Thus, as per Algorithm \ref{cross}, crossing $X_1$ and $X_2$ leads to selecting common edges $\{t_1, t_2\}$ and $\{t_3, s_2\}$ that are incident to at least one terminal. Figure \ref{toy3} represents the selected edges after this phase of the crossing procedure is performed. Furthermore, a depth-first search from edge $\{s_3,t_3\}$, whose corresponding edge variable has a fractional value in $X_1$, identifies the cycle $C_1=v' s_3 t_3 s_2 v'$ of length 4. A similar search from edge $\{s_1,t_3\}$, whose corresponding edge variable has a fractional value in $X_2$, identifies the cycle $C_2=v' s_1 t_3 s_2 v'$, also of length 4. Thus, the two following inequalities of type (10) are to be imposed respectively over $C_1$ and $C_2$, in the relaxed linear program:
$$x_{\{v',s_3\}} + x_{\{s_3,t_3\}} + x_{\{t_3,s_2\}} + x_{\{s_2,v'\}} \leq 4 -2 - x_{\{s_3,s_2\}} $$
$$x_{\{v',s_1\}} + x_{\{s_1,t_3\}} + x_{\{t_3,s_2\}} + x_{\{s_2,v'\}} \leq 4 -2 - x_{\{s_1,s_2\}} $$
It is easy to observe that solution $X_1$ does not satisfy the first inequality (1.5 $\not\leq$ 1), while solution $X_2$ does not satisfy the second inequality (2 $\not\leq$ 1.5). Therefore, these two super-optimal fractional solutions are cut off by these two constraints.
Finally, solving the relaxed linear program under these conditions generates the integer solution $X_3$ represented in Figure \ref{toy4}, which for our example, constitutes an optimal Steiner tree of value 2.
\begin{figure}[!htb]
\minipage{0.32\textwidth}
\caption{\label{toy} An illustrative graph with three terminals and three Steiner nodes}
\includegraphics[width=\linewidth]{toy.png}
\endminipage\hfill
\minipage{0.32\textwidth}
\caption{\label{toy1}First super-optimal fractional solution ($X_1$)}
\includegraphics[width=\linewidth]{toy1.png}
\endminipage\hfill
\minipage{0.32\textwidth}%
\caption{\label{toy2}Second super-optimal fractional solution ($X_2$)}
\includegraphics[width=\linewidth]{toy2.png}
\endminipage
\end{figure}
\begin{figure}[!htb]
\minipage{0.05\textwidth}
\mbox{}
\endminipage\hfill
\minipage{0.32\textwidth}%
\caption{\label{toy3} Edges selected by crossing the two super-optimal solutions}
\includegraphics[width=\linewidth]{toy3.png}
\endminipage\hfill
\minipage{0.32\textwidth}%
\caption{\label{toy4} Optimal Steiner tree generated through the crossover operation ($X_3$)}
\includegraphics[width=\linewidth]{toy4.png}
\endminipage\hfill
\minipage{0.05\textwidth}
\mbox{}
\endminipage
\end{figure}
\section{Application and experimental results}
\label{computation}
\subsection{Test instances}
We have conducted preliminary experiments in which different C++ implementations of the proposed generic algorithmic approach for the MLST problem were compared in terms of solution quality and computational running time. The best-performing implementation of the devolutionary genetic algorithm (DGA) in our experiment was compared to an exact branch and bound method (BB) and to the Pilot Method (PM). We considered $20$ different randomly generated data-sets, each containing $5$ instances of the problem, resulting in a total of $100$ instances, with $n \in \{50,\dots,200\}$ nodes, a number of basic nodes $|Q|=0.25\cdot n$, values of $m$ derived from densities $ d \in \{0.25, 0.5, 0.8\}$, and a number of colors $c \in \{0.25\cdot n, 0.50\cdot n, 1.00\cdot n\}$. The three algorithms (DGA, BB and PM) were run once for each instance. All computations have been conducted on an Intel Core i7 processor at $8 \times 2.20$ GHz with 8 GB of RAM. Computation durations for PM and DGA were calculated by recording the time at which the best solution found was first discovered, notwithstanding the additional time in which the algorithms ran without improvement to this solution. The GNU Linear Programming Kit (GLPK) was used for solving integer linear programs and their relaxations.\\
It should be noted that a comparison with an established evolutionary genetic algorithm for the problem at hand would have been informative as well. However, no such method has been previously reported in the literature, to the best of our knowledge. Moreover, developing an ad-hoc evolutionary genetic algorithm in this work, in addition to being a challenging task in its own right, given the sensitivity of solving the MLST problem to the initial random setting of labels, as reported in \cite{1}, would have defeated the purpose of this experiment, which was to test the performances of the devolutionary approach by comparison with established methods for solving the MLST problem.
\subsection{Comparison results}
\begin{table}[!htb]
\centering
\begin{minipage}{.4\linewidth}
\centering
\caption{\label{res1} Average objective function value for $|Q|=0.25\cdot n$}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$n$& $d$ & $c$ & PM & DGA & BB\\
\hline
& & $12$ &2.6 & 2.8 & 2.6\\
& 0.25& $25$ &2.8 & 3 & 2.8\\
& & $50$ & 3.1 & 3.3 & 3.1\\
\cline{2-6}
& & $12$ & 1.5 & 1.25 &1.25 \\
50 & 0.50& $25$ & 1.7 & 1.41 & 1.4\\
& & $50$ &2.2 & 2.2 &2.2 \\
\cline{2-6}
& & $12$ & 1.4 & 1.2 &1.2 \\
& 0.80& $25$ & 1.35 & 1.35 & 1.35\\
& & $50$ & 1.3 & 1.3 & 1.3\\
\hline
& & $25$ & 9.6 & 9.2 &8.5 \\
& 0.25& $50$ & 10.65 & 9.0 & 8.0\\
& & $100$ & 12.3 & 10.5 & 9.2\\
\cline{2-6}
& & $25$ &8.4 & 8.0 & 8.0\\
100 & 0.50 &$50$ &8.9 & 8.5 & 7.8\\
& & $100$ & 10.3 & 9.3 & 9\\
\cline{2-6}
& & $25$ & 8.0 & 6.2 &7.5 \\
& $0.80$ &$50$ & 8.7 & 7.4 & 7.4\\
& & $100$ &9.2 & 9.2 &8.2 \\
\hline
& & $50$ & 11.6 & 11.2 &11.0 \\
& 0.25& $100$ & 18.2 & 13.8 & 12.35\\
& & $200$ & 20.4 & 19.0 & 17.3\\
\cline{2-6}
& & $50$ & 10.5 & 10.2 &8.4 \\
200 & 0.50& $100$ & 12.6 & 11.9 & 10.0\\
& & $200$ &16.8 &15.0 &14.6 \\
\cline{2-6}
& & $50$ &7.6 & 7.8 & 7.8\\
& 0.80& $100$ &8.8 & 8.3 & 8.5\\
& & $200$ & 9.1 & 9.1 & 9.0\\
\hline
\end{tabular}
\end{minipage}%
\hspace{.1\linewidth}
\begin{minipage}{.4\linewidth}
\centering
\caption{\label{res2} Average computation duration (in seconds) for $|Q|=0.25\cdot n$}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$n$& $d$ & $c$ & PM & DGA &BB\\
\hline
& & $50$ & 2 & 3&4.8\\
& 0.25& $100$ & 3 & 3.5&6.3\\
& & $200$ & 5 & 8.2&12\\
\cline{2-6}
& & $50$ & 1.7 &3 & 3.5\\
50 & 0.50& $100$ & 2.3 & 3.2& 4.3\\
& & $200$ & 5.5 &7.5&9 \\
\cline{2-6}
& & $50$ & 1.5 &1.5 & 2.4\\
& 0.80& $100$ & 2 & 3.0& 3.2\\
& & $200$ & 3.2 & 3.5& 5\\
\hline
& & $50$ & 21.5 &112.7 & 162.7\\
& 0.25& $100$ & 18 & 136.1& 229.2\\
& & $200$ & 27 & 173.3& 300.5\\
\cline{2-6}
& & $50$ & 5.6 &7.6& 13.5 \\
100 & 0.50& $100$ & 11.3 & 36.2& 56.3\\
& & $200$ & 15.2 &53.5&89 \\
\cline{2-6}
& & $50$ & 4.4 &4.8 &9\\
& 0.80& $100$ &6.3 & 7.2&13.5\\
& & $200$ & 11.1 & 13.4&20.9\\
\hline
& & $50$ & 32.3 & 132.6&400.8\\
& 0.25& $100$ & 40.7 & 142.6&1014.0\\
& & $200$ & 51.7 & 260.2&Unknown\\
\cline{2-6}
& & $50$ & 12.6 &17.3 & 153.5\\
200 & 0.50& $100$ & 15.5 & 14.5& 204.9\\
& & $200$ & 15.8 &19.1 &314.0\\
\cline{2-6}
& & $50$ & 11.3 &11.9 & 52.5\\
& 0.80& $100$ & 14.2 & 15.1& 68.9\\
& & $200$ & 15.2 & 17.5& 112.8\\
\hline
\end{tabular}
\end{minipage}
\end{table}
Tables \ref{res1} and \ref{res2} present a summary of the computational results we have obtained. When comparing the results of DGA with those of PM, we can conclude that the former generally finds better solutions, with close running times for high-density graphs. The Wilcoxon rank sum tests yield error probabilities of less than 1\% for the hypothesis that the average objective values from DGA are smaller. However, on networks of low density, the devolutionary genetic algorithm, although it still finds higher-quality solutions, seems to under-perform: it requires high computation times that are closer to those of an exact solving approach (BB). This can be explained by the fact that the linear programming formulation we have used does not produce a very tight relaxation for such graphs. We can thus globally conclude that the proposed hybrid meta-heuristic approach presents a good compromise between a heuristic and an exact approach, although its use is not indicated for low-density graphs.
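As an aside, the rank-sum comparison referred to above can be sketched in a few lines of Python. This is an illustrative re-implementation (one-sided, normal approximation, tie correction to the variance omitted), not the exact test procedure used in the experiments, and the sample data in the usage note are hypothetical.

```python
import math

def rank_sum_p(x, y):
    """One-sided Wilcoxon rank-sum test via the normal approximation.

    Returns an approximate p-value for the hypothesis that the values
    in x tend to be smaller than those in y.  Ties receive average
    ranks; the tie correction to the variance is omitted for brevity.
    """
    pooled = list(x) + list(y)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):          # average rank for tied values
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                    # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2            # mean of w under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return 0.5 * (1 + math.erf((w - mu) / (sigma * math.sqrt(2))))
```

For two clearly separated samples the p-value is small, while two identical samples give $p=0.5$.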
\subsubsection{Sensitivity Analysis}
Generating the initial population of super-optimal solutions is, unsurprisingly, the phase of the algorithm that requires the biggest computational effort. The size of the initial population and the tightness of the relaxation are the aspects that seem to have the most influence, both on the quality of the feasible solutions reached and on running time. Since only the former of these two aspects is controllable, the primary adjustment parameter for DGA is the size of the initial population of super-optimal solutions. In Figure \ref{size}, we study the influence of the number of initial super-optimal solutions on the quality of generated solutions and on running time for the same set of $100$ instances. In this graph, the x-axis represents the number of initial solutions, while the y-axis represents the average percentage of variation in the value of the best generated solution, compared to the optimum generated by BB, as well as the average percentage of variation in computation time, compared to the computation time when starting with two super-optimal solutions.
\begin{figure}[!htb]
\begin{center}
\caption{\label{size} Influence of the size of the initial population on value and computation time (averages for $100$ instances)}
\includegraphics[width=0.75\linewidth]{variation.png}
\end{center}
\end{figure}
We can observe that if this size is too small, then the form of the resulting solutions is relatively restricted from the get-go, leading to fast convergence but with a higher probability of converging to a local optimum. If, on the other hand, the size is too large, then a disproportionate amount of computational time is required. We have observed that increasing the initial population size beyond a dozen, while increasing the computational effort, does not significantly improve the quality of the feasible solutions that are eventually generated.
\section{Conclusion}
\label{conclusion}
The preliminary experiments we have performed support the use of devolutionary algorithms for the MLST problem and their development for other NP-hard combinatorial optimization problems. Ongoing investigations consist in evaluating their results for larger instances with low densities. A comparison of different linear programming formulations of the MLST problem, such as the work done in \cite{15} for the Steiner tree problem, and in \cite{16} for the minimum labeling spanning tree problem, is outside the scope of the current research. However, such a study would certainly benefit the design of hybrid meta-heuristic approaches like the one we propose in this paper, in addition to its obvious usefulness for computing lower bounds to the problem in exact solving approaches.
\section{Introduction}
The goal of this paper is to establish new results pertaining to diophantine inequalities involving odd real polynomials and to obtain some applications to combinatorial number theory and ergodic theory.\\
Assume that $v$ is a real polynomial, with $\deg(v)\geq 1$, satisfying $v(0)=0$ and let $\epsilon>0$. Consider the set
\begin{equation}
\mathcal R(v,\epsilon)\index{$\mathcal R(v,\epsilon)$}=\{n\in\mathbb{N}=\{1,2,...\}\,|\,\|v(n)\|<\epsilon\},
\end{equation}
where $\|\cdot\|$\index{$\|\cdot\|$} denotes the distance to the nearest integer.\\
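For readers who wish to experiment, the set $\mathcal R(v,\epsilon)$ is easy to inspect numerically. The following Python sketch (an illustration only, not part of the argument) lists a finite truncation of $\mathcal R(v,\epsilon)$:

```python
def dist(x):
    """Distance from the real number x to the nearest integer."""
    return abs(x - round(x))

def R(v, eps, N):
    """The finite truncation R(v, eps) ∩ {1, ..., N}."""
    return [n for n in range(1, N + 1) if dist(v(n)) < eps]
```

For instance, with $v(n)=n/2$ and $\epsilon=0.1$ one recovers the even numbers.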
It is well known that sets of the form $\mathcal R(v,\epsilon)$ are large in more than one sense. For example, it follows from Weyl's
equidistribution theorem (see \cite{weyl1916Mod1}) that $\mathcal
R(v,\epsilon)$ has positive natural density. One can also show that $\mathcal R(v,\epsilon)$ is syndetic \index{syndetic} (\cite[Theorem 1.21]{FBook}), meaning
that finitely many translations of $\mathcal R(v,\epsilon)$ cover $\mathbb{N}$ (i.e. $\mathcal R(v,\epsilon)$ has ``bounded gaps''). As a matter of fact, the sets $\mathcal R(v,\epsilon)$ possess a stronger property which is called IP$^*$\index{IP$^*$}. A set $E\subseteq \mathbb{N}$ is an IP$^*$ set if it has a non-trivial intersection with every set of the form
$$\text{FS}((n_k)_{k\in\mathbb{N}})\index{FS$((n_k)_{k\in\mathbb{N}})$}=\{n_{k_1}+\cdots +n_{k_m}\,|\,k_1<\cdots<k_m;\text{ }m\in\mathbb{N}\},$$
where $(n_k)_{k\in\mathbb{N}}$ is an arbitrary increasing sequence of natural numbers.\footnote{
Sets of the form $\text{FS}((n_k)_{k\in\mathbb{N}})$ (or, sometimes, supersets of such sets) are called IP sets\index{IP}. IP$^*$ sets form a dual family in the sense of \cite[Chapter 9 ]{FBook}.
}\\
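To make the definition concrete, here is a short Python sketch (illustrative only) computing the finite-sums set generated by a finite tuple; in the definition above the generating sequence is, of course, infinite:

```python
from itertools import combinations

def FS(ns):
    """All sums n_{k_1} + ... + n_{k_m} over non-empty subtuples of
    the finite tuple ns (distinct indices, as in the definition)."""
    return {sum(c) for m in range(1, len(ns) + 1)
            for c in combinations(ns, m)}
```

For instance, $\text{FS}((1,2,4))=\{1,\dots,7\}$, reflecting binary representations.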
One can show with the help of Hindman's theorem\footnote{
Hindman's theorem states that if $E\subseteq \mathbb{N}$ is an IP set and $C_1,...,C_r\subseteq \mathbb{N}$ are such that $E = \bigcup_{j=1}^r C_j$, then there exists $s\in\{1,...,r\}$ such that $C_s$ is an IP set (see \cite{HIPPartitionRegular}).
}
that IP$^*$ sets have the finite intersection property, meaning that if $E_1,...,E_m\subseteq \mathbb{N}$ are IP$^*$ sets, then $\bigcap_{j=1}^mE_j$ is also IP$^*$.\\
When $v$ is linear, $\mathcal R(v,\epsilon)$ has an even stronger property than IP$^*$, namely that of $\Delta^*$\index{$\Delta^*$}.
A set $E\subseteq \mathbb{N}$ is called a $\Delta^*$\index{$\Delta^*$} set if for any increasing sequence $(n_k)_{k\in\mathbb{N}}$, there exist $i<j$ for which
$$n_j-n_i\in E.$$
It is not hard to show that every $\Delta^*$ set is IP$^*$. Moreover, the family of $\Delta^*$ sets strictly contains the family of IP$^*$ sets. For example, the set
$$\mathbb{N}\setminus\{2^j-2^i\,|\,i,j\in\mathbb{N},\,i<j\}$$
is $\Delta^*$ but not IP$^*$ (see \cite[p. 165]{BergelsonErdosDifferences}).\\
One can show, with the help of Ramsey's Theorem, that $\Delta^*$ sets have the finite intersection property (see \cite[p. 179]{FBook}). This implies, in particular, that for any $\alpha_1,...,\alpha_m\in \mathbb{R}$ and any $\epsilon>0$, the set $\bigcap_{j=1}^m\{n\in\mathbb{N}\,|\,\|n\alpha_j\|<\epsilon\}$ is $\Delta^*$.\\
Unfortunately, for polynomials of degree two, the sets $\mathcal R(v,\epsilon)$ are no longer $\Delta^*$ (see, for example, \cite[pp.177-178]{FBook}). One is tempted to conjecture that the $\Delta_2^*$ sets, namely sets intersecting any set of the form
\begin{equation}\label{0.SecondDifferences}
\{(n_{k_4}-n_{k_3})-(n_{k_2}-n_{k_1})\,|\,k_4>k_3>k_2>k_1\},
\end{equation}
could be useful in dealing with polynomials of degree two and the corresponding sets $\mathcal R(v,\epsilon)$.
However, one can show, by using a natural modification of the construction in \cite{FBook}, that there exists $\epsilon>0$ such that for any irrational $\alpha$, the set $\{n\in\mathbb{N}\,|\,\|n^2\alpha\|<\epsilon\}$ is not a $\Delta_2^*$ set.\\
To see this, fix an irrational number $\alpha$ and let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence in $\mathbb{N}$ such that
\begin{equation}\label{0.SquareAndLimitOfSquares}
\lim_{k\rightarrow\infty}\|n_k\alpha\|=0
\text{ and }
\lim_{k\rightarrow\infty}\|n_k^2\alpha-\frac{1}{3}\|=0.\footnote{
The existence of such a sequence $(n_k)_{k\in\mathbb{N}}$ follows from \cite[Theorem 1.011]{HardyLittlewood1914some}. One can also use, for example, the two-dimensional version of Weyl's equidistribution theorem \cite{weyl1916Mod1}. Finally, one could also invoke the fact that the transformation $T:\mathbb T^2\rightarrow \mathbb T^2$ defined by $T(x,y)=(x+\alpha,y+2x+\alpha)$ is minimal. See for example \cite[Lemma 1.25]{FBook}.}
\end{equation}
By passing, if needed, to a subsequence, we can also assume that for any $j,k\in\mathbb{N}$ with $j<k$,
\begin{equation}\label{0.TwoTupleLimit}
\|n_jn_k\alpha\|<\frac{1}{k}.
\end{equation}
So, for any large enough and distinct $j,k\in\mathbb{N}$, we have $\|n_jn_k\alpha\|<\dfrac{\epsilon}{16}$ and $\|n_k^2\alpha-\frac{1}{3}\|<\dfrac{\epsilon}{16}$. It follows by a simple calculation that for large enough $k_4>k_3>k_2>k_1$,
$$\|[(n_{k_4}-n_{k_3})-(n_{k_2}-n_{k_1})]^2\alpha-\frac{4}{3}\|<\epsilon,$$
which implies that the set $\mathcal R(n^2\alpha,\frac{1}{6})$ is not $\Delta_2^*$.\\
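The ``simple calculation'' above rests on expanding $[(n_{k_4}-n_{k_3})-(n_{k_2}-n_{k_1})]^2$ into four squares and six cross products: after multiplying by $\alpha$, each square contributes approximately $\frac{1}{3}$ modulo $1$, while each cross term (with coefficient of absolute value $2$) contributes approximately $0$; since $4+6\cdot 2=16$, per-term errors below $\frac{\epsilon}{16}$ add up to less than $\epsilon$. The following Python snippet is a numerical sanity check of the expansion (not part of the proof):

```python
def expansion_gap(a, b, c, d):
    """|((a-b)-(c-d))^2 - (4 squares + 6 cross terms)|; zero up to
    rounding error, confirming the expansion used in the estimate."""
    lhs = ((a - b) - (c - d)) ** 2
    rhs = (a * a + b * b + c * c + d * d
           - 2 * a * b - 2 * a * c + 2 * a * d
           + 2 * b * c - 2 * b * d - 2 * c * d)
    return abs(lhs - rhs)
```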
It comes as a pleasant surprise that $\Delta_2^*$ sets work well with the sets $\mathcal R(n^3\alpha,\epsilon)$.
\begin{prop}\label{0.CubicCase}
For any real number $\alpha$ and any $\epsilon>0$, the set
$$\mathcal R(n^3\alpha,\epsilon)=\{n\in\mathbb{N}\,|\,\|n^3\alpha\|<\epsilon\}$$
is $\Delta_2^*$.
\end{prop}
It turns out that \cref{0.CubicCase} generalizes nicely to odd real polynomials, namely polynomials of the form
\begin{equation}\label{0.OddPolynomial}
v(x)=\sum_{j=1}^\ell a_j x^{2j-1}.
\end{equation}
(Note that a real polynomial $v$ satisfies $v(-x)=-v(x)$ if and only if $v$ is of the form \eqref{0.OddPolynomial}).\\
Before formulating a generalization of \cref{0.CubicCase} to odd polynomials of arbitrary degree, we have to introduce the family of $\Delta_\ell^*$ sets, $\ell\in\mathbb{N}$.\\
Define the function $\partial:\bigcup_{\ell\in\mathbb{N}}\mathbb{Z}^{2^\ell}\rightarrow\mathbb{Z}$\index{$\partial(m_1,...,m_{2^\ell})$} recursively by the formulas:
\begin{enumerate}
\item $\partial(m_1,m_2)=m_2-m_1$.
\item $\partial(m_1,...,m_{2^\ell})=\partial(m_{2^{\ell-1}+1},...,m_{2^\ell})-\partial(m_1,...,m_{2^{\ell-1}}),$ $\ell>1$.
\end{enumerate}
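The recursion defining $\partial$ translates directly into code. The following Python sketch is purely illustrative ($\partial$ acts on tuples whose length is a power of two):

```python
def partial_diff(ns):
    """The map ∂ on a tuple of length 2^ℓ:
    ∂(m1, m2) = m2 - m1, and for ℓ > 1,
    ∂(m1, ..., m_{2^ℓ}) = ∂(second half) - ∂(first half)."""
    if len(ns) == 2:
        return ns[1] - ns[0]
    half = len(ns) // 2
    return partial_diff(ns[half:]) - partial_diff(ns[:half])
```

For instance, $\partial(1,2,5,3)=\partial(5,3)-\partial(1,2)=-3$.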
Given $\ell\in\mathbb{N}$, we will say that a set $E\subseteq \mathbb{N}$ is $\Delta_\ell^*$\index{$\Delta_\ell^*$} if for any
increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$, there exist $$k_1<k_2<k_3<\cdots<k_{2^\ell}$$ for which
$$\partial (n_{k_1},...,n_{k_{2^\ell}})\in E.\footnote{
For example, a set $E\subseteq \mathbb{N}$ is $\Delta_3^*$ if for any increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$, there exist $k_1<\cdots<k_8$ for which
$$[(n_{k_8}-n_{k_7})-(n_{k_6}-n_{k_5})]-[(n_{k_4}-n_{k_3})-(n_{k_2}-n_{k_1})]\in E.$$
}$$
One can show that for each $\ell\in\mathbb{N}$, $\Delta_\ell^*$ sets have the finite intersection property (See Section \ref{SectionBetaN} for more information on $\Delta_\ell^*$ sets).\footnote{
Note that the notion of $\Delta_1^*$ is the same as the notion of $\Delta^*$\index{$\Delta^*$} defined above.
}\\
We are now in position to state a generalization of \cref{0.CubicCase}.
\begin{thm}\label{0.OddDegreeRecurrence}
For any odd real polynomial $v(x)=\sum_{j=1}^{\ell}a_jx^{2j-1}$ and any $\epsilon>0$, the set $$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$
is $\Delta_\ell^*$.
\end{thm}
\begin{rem}
One can show that for $\ell>1$, the families IP$^*$ and $\Delta_\ell^*$ are, so to speak, in general position. Namely, IP$^*\not\subseteq\Delta_\ell^*$ (see \cref{3.Delta^lRichNoIPs}) and $\Delta_\ell^*\not\subseteq\text{IP}^*$ (see \cref{3.IPWithNoDelta2}).
\end{rem}
The following theorem shows that odd real polynomials are, roughly, the only polynomials for which the sets $\mathcal R(v,\epsilon)$ are always $\Delta_\ell^*$:
\begin{thm}\label{0.CharacterizationOfOddPolynoials}
Let $\ell\in\mathbb{N}$ and let $v(x)$ be a real polynomial. The set $\mathcal R(v,\epsilon)$ is $\Delta_\ell^*$ for any $\epsilon>0$ if and only if there exists a polynomial $w\in\mathbb{Q}[x]$ with $w(0)\in\mathbb{Z}$ and such that $v-w$ is an odd polynomial of degree at most $2\ell-1$.
\end{thm}
There are two basic approaches to the proof of \cref{0.OddDegreeRecurrence}. The first approach is based on the inductive utilization of (the finite) Ramsey Theorem. The second approach uses a special family of ultrafilters in $\beta\mathbb{N}$ which is of interest in its own right and has not been utilized before in a similar context. Each of these approaches has its own pros and cons.\\
The first approach allows us to formulate and prove a finitistic version of \cref{0.OddDegreeRecurrence} (this is a pro), but the proof gets quite cumbersome (this is a con). This approach is carried out in Subsection \ref{SubsectionFinitisticProof}.\\
The second approach, which is implemented in Subsection \ref{SubsectionUltrafilterApproach}, has the advantage of being shorter and much easier to follow. The disadvantage of this approach lies mostly in the fact that some readers may not be familiar with ultrafilters. We remedy this by giving detailed definitions and some of the necessary background in Section \ref{SectionBetaN}.\\
It is worth mentioning that we will also utilize the ultrafilter technique in the proofs of \cref{0.CharacterizationOfOddPolynoials} (see Section \ref{SectionCharPol}) and of a converse to \cref{0.OddDegreeRecurrence} (see Section \ref{SectionAConverseResult}).\\
In Section \ref{SecHilbert}, we deal with applications to unitary actions. In particular, we establish the following result.
\begin{thm}\label{0.CompactHilbert}
Let $U:\mathcal H\rightarrow \mathcal H$ be a unitary operator and let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. The following are equivalent:
\begin{enumerate}[(i)]
\item $U$ has discrete spectrum (i.e. $\mathcal H$ is spanned by eigenvectors of $U$).
\item For any $f\in\mathcal H$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\,\|U^{v(n)}f-f\|_{\mathcal H}<\epsilon\}$$
is $\Delta_\ell^*$.
\end{enumerate}
\end{thm}
\cref{0.CompactHilbert} has the following ergodic-theoretical corollary.
\begin{cor}\label{0.CompactMeasurable}
Let $(X,\mathcal A,\mu)$ be a probability space\footnote {Throughout this paper we will assume that the probability spaces we deal with are standard,
that is, isomorphic mod 0 to a disjoint union of an interval equipped
with the Lebesgue measure and a countable number of atoms.}
and let $T:X\rightarrow X$ be an ergodic invertible probability measure preserving transformation. The following are equivalent:
\begin{enumerate}[(i)]
\item $(X,\mathcal A,\mu, T)$ is isomorphic to a translation on a compact abelian group.
\item For any odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$, any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_\ell^*$.
\item There exists a non-zero odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$ such that for any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_\ell^*$.
\end{enumerate}
\end{cor}
Another application of \cref{0.CompactHilbert} to measure preserving systems requires the introduction of the notion of an \textit{almost} $\Delta_\ell^*$ set, denoted by A-$\Delta_\ell^*$\index{A-$\Delta_\ell^*$}. Given $\ell\in\mathbb{N}$, a set $D\subseteq \mathbb{N}$ is A-$\Delta_\ell^*$ if there exists a set $E\subseteq \mathbb{N}$ with $d^*(E)=0$\index{$d^*(E)$},\footnote{
The upper Banach density of a set $E\subseteq \mathbb{N}$, $d^*(E)$, is defined by $$d^*(E)=\limsup_{N-M\rightarrow\infty}\frac{|E\cap\{M+1,...,N\}|}{N-M},$$
where, for a finite $F\subseteq \mathbb{N}$, $|F|$ denotes the cardinality of $F$.
}
such that $D\cup E$ is $\Delta_\ell^*$.
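Although $d^*$ itself is a limsup over infinitely many windows, the inner quotient in its definition is easy to compute. A small illustrative Python helper (not from the paper):

```python
def window_density(E, M, N):
    """|E ∩ {M+1, ..., N}| / (N - M), the quotient whose limsup over
    windows {M+1, ..., N} defines the upper Banach density d*(E)."""
    return sum(1 for n in range(M + 1, N + 1) if n in E) / (N - M)
```

The even numbers have window densities near $\frac{1}{2}$, while the perfect squares, a set of upper Banach density zero, have window densities tending to $0$.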
\begin{thm}\label{0.AlmostMeasurableCase}
Let $(X,\mathcal A,\mu,T)$ be an invertible probability measure preserving system and let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be an odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. For any $A\in\mathcal A$ and any $\epsilon>0$, the set
\begin{equation}\label{0.SimpleSet}
\mathcal R_A(v,\epsilon)=\{n\in\mathbb{N}\,|\,\mu(A\cap T^{-v(n)}A)>\mu^2(A)-\epsilon\}
\end{equation}
is A-$\Delta_\ell^*$.
\end{thm}
\begin{rem}
It was shown in \cite{BFM} that the ``sets of large returns'' $\mathcal R_A(v,\epsilon)$ have the IP$^*$ property for any polynomial $v$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$ and satisfying $v(0)=0$. It will be shown in Section \ref{SectionNotionsOfLargness} that for each $\ell\in\mathbb{N}$, there exists an IP$^*$ set which is not A-$\Delta_\ell^*$. So, \cref{0.AlmostMeasurableCase} provides new information about sets of large returns when $v$ is an odd polynomial.
\end{rem}
We remark that the quantity $\mu^2(A)$ in \eqref{0.SimpleSet} is optimal (consider any strongly mixing system\footnote
{A probability measure preserving system $(X,\mathcal A,\mu,T)$ is strongly mixing if for any $A,B\in\mathcal A$,
$$\lim_{n\rightarrow\infty}\mu(A\cap T^{-n}B)=\mu(A)\mu(B).$$
}).\\
The following corollary of \cref{0.AlmostMeasurableCase} is a result in additive combinatorics which might be seen as a variant of the Furstenberg-S{\'a}rk{\"o}zy theorem (see \cite{sarkozy1978difference} and \cite[Theorem 3.16]{FBook}).
\begin{cor}\label{0.SarkozyLike}
Let $E\subseteq\mathbb{N}$ be such that $d^*(E)>0$
and let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be an odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. Then the set
$$\{n\in\mathbb{N}\,|\,v(n)\in E-E\}$$
is A-$\Delta_\ell^*$.
\end{cor}
We also have a new characterization of weakly mixing systems\footnote{
A probability measure preserving system $(X,\mathcal A,\mu,T)$ is weakly mixing if for any $A,B\in\mathcal A$,
$$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{j=1}^N|\mu(A\cap T^{-n}B)-\mu(A)\mu(B)|=0.$$
}:
\begin{cor}\label{0.WeaklyMixingCase}
Let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. An invertible probability measure preserving system $(X,\mathcal A,\mu,T)$ is weakly mixing if and only if for any $A,B\in\mathcal A$ and any $\epsilon>0$, the set
$$\mathcal R_{A,B}(v,\epsilon)=\{n\in\mathbb{N}\,|\,|\mu(A\cap T^{-v(n)}B)-\mu(A)\mu(B)|<\epsilon\}$$
is A-$\Delta_\ell^*$.
\end{cor}
In Section \ref{SecExample}, we provide an example of a weakly mixing system $(X,\mathcal A,\mu,T)$ which shows that in the statement of \cref{0.WeaklyMixingCase}, A-$\Delta_\ell^*$ cannot be replaced by $\Delta_\ell^*$.\\
We conclude the introduction by formulating a recent result \cite{BerZelMixingOfAllorders} which demonstrates yet another connection between $\Delta_\ell^*$ sets and ergodic theory.
\begin{thm}[Cf. \cite{KuangYeDeltaMixing}]
Let $(X,\mathcal A,\mu,T)$ be an invertible probability measure preserving system. The following are equivalent:
\begin{enumerate}[(i)]
\item $(X,\mathcal A,\mu, T)$ is strongly mixing.
\item There exists an $\ell\in\mathbb{N}$ such that for any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\mathcal R_A(v,\epsilon)=\{n\in\mathbb{N}\,|\, |\mu(A\cap T^{-n}A)-\mu^2(A)|<\epsilon\}$$
is $\Delta_\ell^*$.
\item For any $\ell\in\mathbb{N}$, any $A_0,...,A_{\ell+1}\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, |\mu(A_0\cap T^{-n}A_1 \cdots\cap T^{-(\ell+1)n}A_{\ell+1})-\prod_{j=0}^{\ell+1}\mu(A_j)|<\epsilon\}$$
is $\Delta_\ell^*$.
\end{enumerate}
\end{thm}
The structure of the paper is as follows. In \Cref{SectionBetaN}, we provide the necessary background on ultrafilters and establish the connection between ultrafilters and $\Delta_\ell^*$ sets. In Section 3, we prove \cref{0.OddDegreeRecurrence} as well as its finitistic version. In \Cref{SectionAConverseResult}, we prove a converse to \cref{0.OddDegreeRecurrence}. In \Cref{SectionCharPol}, we prove \cref{0.CharacterizationOfOddPolynoials}. In \Cref{SecHilbert}, we focus on applications to unitary actions. In \Cref{SecExample}, we provide an example of a weakly mixing system which demonstrates that \cref{0.WeaklyMixingCase} cannot be improved. In \Cref{SectionNotionsOfLargness}, we discuss the relations between the various families of subsets of $\mathbb{N}$ that we deal with throughout this paper.
\section{$\beta\mathbb{N}$ and $\Delta_\ell^*$ sets}\label{SectionBetaN}
In this section we provide some background on the space of ultrafilters $\beta\mathbb{N}$ and connect the notion of $\Delta_\ell^*$ with a natural family in $\beta\mathbb{N}$.\\
Let $p$ be a family of subsets of $\mathbb{N}$. We say that $p$ is a \textbf{filter} if it has the following properties:
\begin{enumerate}[(i)]
\item $\emptyset\not\in p$ and $\mathbb{N}\in p$.
\item If $A,B\in p$, then $A\cap B\in p$.
\item If $A\in p$ and $A\subseteq B$, then $B\in p$.
\end{enumerate}
If, in addition, $p$ satisfies
\begin{enumerate}[(iv)]
\item for any $r\in\mathbb{N}$ and any $C_1,...,C_r\subseteq \mathbb{N}$ such that
$$\bigcup_{s=1}^rC_s\in p,$$
we have that for some $t\in\{1,...,r\}$, $C_t\in p,$
\end{enumerate}
then we say that $p$ is an \textbf{ultrafilter}.
In other words, an ultrafilter is a maximal filter. The set of all ultrafilters on $\mathbb{N}$ is denoted by $\beta\mathbb{N}$.\\
One can introduce a natural topology on $\beta\mathbb{N}$: given $A\subseteq \mathbb{N}$, let $$\overline A=\{p\in\beta\mathbb{N}\,|\,A\in p\}.$$ The family $\{\overline A\,|\,A\subseteq\mathbb{N}\}$ forms a basis for the open sets (and a basis for the closed sets) for this topology. With this topology, $\beta\mathbb{N}$ becomes a compact Hausdorff space. Identifying $n\in\mathbb{N}$ with the \textbf{principal ultrafilter}\index{principal ultrafilter} $\overline n=\{A\subseteq\mathbb{N}\,|\,n\in A\}$ allows us to interpret $\beta\mathbb{N}$ as a representation of the Stone-\v{C}ech compactification of $\mathbb{N}$. We remark in passing that the cardinality of $\beta\mathbb{N}$ is that of $\mathcal P(\mathcal P(\mathbb{N}))$ (and so, $\beta\mathbb{N}$ is a non-metrizable topological space).\\
An alternative way of looking at $\beta\mathbb{N}$ is to identify each ultrafilter $p\in\beta\mathbb{N}$ with a finitely additive, $\{0,1\}$-valued probability measure $\mu_p$ on the power set $\mathcal P (\mathbb{N})$. The measure $\mu_p$ is naturally defined by the condition $\mu_p(A)=1$ if and only if $A\in p$. In this way, we can say that $A\subseteq \mathbb{N}$ is $p$-large whenever $\mu_p(A)=1$ (or equivalently, if $A\in p$).\\
One can naturally extend the operation + from $\mathbb{N}$ to an associative binary operation $+:\beta\mathbb{N}\times\beta\mathbb{N}\rightarrow\beta\mathbb{N}$ by defining $p+q$ to be the unique ultrafilter such that $A\in p+q$ if and only if
\begin{equation}\label{1.SumFormula}
\{n\in\mathbb{N}\,|\,-n+A\in q\}\in p
\end{equation}
(the set $-n+A$ is defined by $m\in(-n+A)$ if and only if $n+m\in A$).\\
With the operation $+$, $\beta\mathbb{N}$ becomes a compact right topological semigroup (meaning that the function $\rho_p:\beta\mathbb{N}\rightarrow\beta\mathbb{N}$, defined by $\rho_p(q)=q+p$ is continuous). \\
In a similar way, one can define $(\beta\mathbb{Z},+)$ (This kind of construction actually works for any discrete semigroup. For more on the Stone-\v{C}ech compactification of a discrete semigroup see \cite{HBook}). Note that $(\beta\mathbb{N},+)$ is a closed sub-semigroup of $(\beta\mathbb{Z},+)$.\\
For each non-principal ultrafilter $p\in\beta\mathbb{N}$, the family of subsets of $\mathbb{N}$
\begin{equation}\label{1.DeltaPDef}
\{A\subseteq\mathbb{N}\,|\,\{n\in\mathbb{N}\,|\,n+A\in p\}\in p\}
\end{equation}
is again a non-principal ultrafilter, which we denote by $-p+p$. Note that the notation $-p+p$ for the ultrafilter defined by \eqref{1.DeltaPDef} has to be taken with a grain of salt. To justify the notation $-p+p$, observe that given a non-principal
ultrafilter $p\in\beta\mathbb{N}$, one can naturally define the ultrafilter $-p\in\mathbb{Z}^*=\beta\mathbb{Z}\setminus\mathbb{Z}$ by the rule
$-A\in p$ if and only if $A\in-p$. Now, it is not hard to check that
$\mathbb{N}^*=\beta\mathbb{N}\setminus\mathbb{N}$ is a left ideal of the semigroup $(\beta\mathbb{Z},+)$, and so, if $p\in\mathbb{N}^*$, then $-p+p\in\mathbb{N}^*$.\\
Let $X$ be a topological space and let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter. Given a sequence $(x_k)_{k\in\mathbb{N}}$ in $X$, we will write
$$\plimgG{p}{n}{\mathbb{N}}x_n=x\index{$\plimgG{p}{n}{\mathbb{N}}x_n$}$$
if for any neighborhood $U$ of $x$
$$\{n\in\mathbb{N}\,|\,x_n\in U\}\in p.$$
It is easy to see that $\plimgG{p}{n}{\mathbb{N}} x_n$ exists and is unique in any compact Hausdorff space.\\
The proof of the following useful lemma is similar to the proof of Theorem 3.8 in \cite{ERTaU}.
\begin{lem}\label{1.LemaDeltaEquality}
Let $X$ be a compact Hausdorff space, let $p\in \beta \mathbb{N}$ be a non-principal ultrafilter and let $(x_k)_{k\in \mathbb{N}}$ be a sequence in $X$. Then
\begin{equation}\label{1.EquationDeltaEquality}
\plimgG{(-p+p)}{n}{\mathbb{N}} x_n=\plimgG{p}{m}{\mathbb{N}} \plimgG{p}{n}{\mathbb{N}} x_{n-m}.
\end{equation}
\end{lem}
\begin{proof}
For a non-empty open set $U\subseteq X$, let
$$A_U=\{n\in \mathbb{N}\,|\,x_n\in U\}.$$
For any $m\in \mathbb{N}$, let
$$B_U(m)=\{n\in \mathbb{N}\,|\,n>m\text{ and }x_{n-m}\in U\}.$$
Note that for each $m\in \mathbb{N}$,
$$B_U(m)=(m+A_U).$$
So, by \eqref{1.DeltaPDef}, $A_U\in -p+p$ if and only if $\{m\in \mathbb{N}\,|\, B_U(m)\in p\}\in p$. Hence,
$$\plimgG{(-p+p)}{n}{\mathbb{N}} x_n=\plimgG{p}{m}{\mathbb{N}} \plimgG{p}{n}{\mathbb{N}} x_{n-m}.$$
\end{proof}
In what follows we will need an extension of \cref{1.LemaDeltaEquality} for "iterated differences" of ultrafilters which are defined for any $\ell\in\mathbb{N}$ and any non-principal ultrafilter $p\in\beta\mathbb{N}$ by
$$p_\ell=-p_{\ell-1}+p_{\ell-1},\index{$p_\ell$}$$
where, by convention, $p_0=p$.\footnote{
Note that for any $\ell\in\mathbb{N}$ and any $t\leq \ell$, $p_\ell=(p_{\ell-t})_{t}$.
}
Before formulating this extension, let us recall the recursive definition of the map $\partial:\bigcup_{\ell\in\mathbb{N}} \mathbb{Z}^{2^\ell}\rightarrow \mathbb{Z}$ which was introduced in the Introduction:
\begin{enumerate}
\item For $(n_1,n_2)\in\mathbb{Z}^2$, $\partial(n_1,n_2)=n_2-n_1$.
\item For each $\ell>1$ and any $(n_1,...,n_{2^\ell})\in\mathbb{Z}^{2^\ell}$, $$\partial(n_1,...,n_{2^\ell})=\partial(n_{2^{\ell-1}+1},...,n_{2^\ell})-\partial(n_1,...,n_{2^{\ell-1}}).$$
\end{enumerate}
For instance, $\partial(1,2)=2-1=1$, $\partial(5,3)=3-5=-2$ and $\partial(1,2,5,3)=\partial(5,3)-\partial(1,2)=-3$.\\
By induction on $\ell\geq 2$, one can show that for any $n_1,...,n_{2^\ell}\in\mathbb{Z}$,
\begin{equation}\label{1.TechnicalIdentity}
\partial(n_1,...,n_{2^\ell})=\partial(\partial(n_1,n_2),...,\partial(n_{2^\ell-1},n_{2^\ell})).
\end{equation}
To verify \eqref{1.TechnicalIdentity}, one just needs to note that for any $\ell\geq 3$,
\begin{multline*}
\partial(\partial(n_1,n_2),...,\partial(n_{2^\ell-1},n_{2^\ell}))\\
=\partial(\partial(n_{2^{\ell-1}+1},n_{2^{\ell-1}+2}),...,\partial(n_{2^\ell-1},n_{2^\ell}))-\partial(\partial(n_1,n_2),...,\partial(n_{2^{\ell-1}-1},n_{2^{\ell-1}})).
\end{multline*}
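Identity \eqref{1.TechnicalIdentity} is also easy to test numerically. The Python sketch below (illustrative; it restates the recursion for $\partial$ so as to be self-contained) compares both sides on sample tuples:

```python
def partial_diff(ns):
    # ∂(n1, n2) = n2 - n1; ∂ of a longer tuple of length 2^ℓ is
    # ∂(second half) - ∂(first half).
    if len(ns) == 2:
        return ns[1] - ns[0]
    half = len(ns) // 2
    return partial_diff(ns[half:]) - partial_diff(ns[:half])

def pairwise_then_partial(ns):
    # Right-hand side of the identity:
    # ∂(∂(n1, n2), ..., ∂(n_{2^ℓ - 1}, n_{2^ℓ})).
    pairs = tuple(ns[i + 1] - ns[i] for i in range(0, len(ns), 2))
    return partial_diff(pairs)
```

Both sides agree on every tuple of length $2^\ell$ with $\ell\geq 2$.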
We are now in a position to formulate the desired extension of \cref{1.LemaDeltaEquality}; its proof is a routine induction using \eqref{1.TechnicalIdentity} and is omitted.
\begin{lem}\label{1.LemaIteratedDifLimit}
Let $X$ be a compact Hausdorff space, let $p\in \beta \mathbb{N}$ be a non-principal ultrafilter and let $(x_k)_{k\in \mathbb{N}}$ be a sequence in $X$. Then for each $\ell\in\mathbb{N}$,
\begin{multline*}
\plimgG{p_\ell}{m}{\mathbb{N}}x_m=\plimgG{p_{\ell-1}}{m_1}{\mathbb{N}}\plimgG{p_{\ell-1}}{m_2}{\mathbb{N}}x_{\partial(m_1,m_2)}\\
=\plimgG{p_{\ell-2}}{m_1}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_2}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_3}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_4}{\mathbb{N}}x_{\partial(\partial(m_1,m_2),\partial(m_3,m_4))}\\
=\plimgG{p_{\ell-2}}{m_1}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_2}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_3}{\mathbb{N}}\plimgG{p_{\ell-2}}{m_4}{\mathbb{N}}x_{\partial(m_1,m_2,m_3,m_4)}\\
=\cdots=\plimgG{p}{m_1}{\mathbb{N}}\cdots\plimgG{p}{m_{2^\ell}}{\mathbb{N}}x_{\partial(m_1,...,m_{2^\ell})}.
\end{multline*}
\end{lem}
Now we turn our attention to $\Delta_\ell$ sets, $\ell\in\mathbb{N}$. When $\ell=1$, $E\subseteq \mathbb{N}$ is a $\Delta_1$ set (or $\Delta$\index{$\Delta$} set for simplicity) if there exists an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ with the property that
$$\{n_j-n_i\,|\,i<j\}\subseteq E.$$
The following result, which establishes the connection between ultrafilters of the form $-p+p$ and $\Delta$ sets, is a version of Lemma 3.12 in \cite{BerHindQuotientSets}.
\begin{prop}\label{1.DeltaIfandonlyIfDifferenceUltra}
Let $A\subseteq\mathbb{N}$. There exists a non-principal ultrafilter $p\in\beta\mathbb{N}$ such that $A\in-p+p$ if and only if there exists an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ such that
$$\{n_j-n_i\,|\,i<j\}\subseteq A.$$
\end{prop}
Given $\ell\in\mathbb{N}$, a set $E\subseteq \mathbb{N}$ is a \textbf{$\Delta_\ell$\index{$\Delta_\ell$} set} if it contains the $\ell$-th differences set generated by an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$. Given a sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{Z}$, the \textbf{$\ell$-th differences set generated by $(n_k)_{k\in\mathbb{N}}$} is the set defined by
\begin{equation}\label{1.DeltaSetDef}
D_\ell((n_k)_{k\in\mathbb{N}})=\{\partial(n_{j_1},...,n_{j_{2^\ell}})\,|\,j_1<\cdots<j_{2^\ell}\}.\index{$D_\ell((n_k)_{k\in\mathbb{N}})$}
\end{equation}
(Note that the class of all $\Delta_1$ sets is exactly the class of all $\Delta$ sets.)\\
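For a finite truncation of the generating sequence, the sets $D_\ell((n_k)_{k\in\mathbb{N}})$ can be enumerated directly. The following Python sketch is illustrative only:

```python
from itertools import combinations

def partial_diff(ns):
    # The map ∂ from the definition of D_ℓ.
    if len(ns) == 2:
        return ns[1] - ns[0]
    half = len(ns) // 2
    return partial_diff(ns[half:]) - partial_diff(ns[:half])

def D(ell, ns):
    """D_ℓ for the finite increasing tuple ns: all values
    ∂(n_{j_1}, ..., n_{j_{2^ℓ}}) with j_1 < ... < j_{2^ℓ}.
    (combinations yields index-increasing subtuples.)"""
    return {partial_diff(c) for c in combinations(ns, 2 ** ell)}
```

For instance, $D_1((1,2,4,8))=\{1,2,3,4,6,7\}$ and $D_2((1,2,4,8))=\{3\}$.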
The following theorem forms a natural extension of \cref{1.DeltaIfandonlyIfDifferenceUltra} to $\Delta_\ell$ sets.
\begin{thm}\label{1.DeltaCharacterization}
Let $d\in\mathbb{N}$ and let $A_0,...,A_d\subseteq \mathbb{N}$. The following are equivalent:
\begin{enumerate}[(i)]
\item There exists a non-principal ultrafilter $p\in \beta \mathbb{N}$ such that for each $\ell\in \{0,...,d\}$, $A_{\ell}\in p_\ell$.
\item There is an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ such that for each $\ell\in\{0,...,d\}$,
$$D_\ell((n_k)_{k\in\mathbb{N}})\subseteq A_\ell,$$
where by convention $D_0((n_k)_{k\in\mathbb{N}})=\{n_k\,|\,k\in\mathbb{N}\}$.
\item There exist infinite sets $I_0,...,I_d\subseteq \mathbb{N}$ such that for each $\ell\in\{0,...,d\}$, $I_{\ell}\subseteq A_\ell$ and if $\ell<d$, we have that for all $n\in I_\ell$, the cardinality of $$I_{\ell}\setminus(n+ I_{\ell+1})$$
is finite\footnote{Note that for $d=1$, (iii) provides an alternative definition of a $\Delta$ set avoiding the use of the ``$-$'' operation. This ``additive'' definition of a $\Delta$ set can be extended to any semigroup (see \cite{BerHindAbundant}).}.
\end{enumerate}
\end{thm}
\begin{proof}
(i)$\implies$(ii): We will consider the compact Hausdorff space $X=\{0,1\}^{\mathbb{N}\cup\{0\}}$ with the continuous map $\sigma:X\rightarrow X$ given by
$$(\sigma(x))(n)=x(n+1).$$
Note that for each $\ell\in\{0,...,d\}$ there exists $x_\ell\in X$ such that
$$\plimgG{ p_\ell}{n}{\mathbb{N}}\sigma^{n} \mathbbm 1_{A_\ell} =x_\ell.$$
So, since for any $n\in A_\ell$,
$$\sigma^n\mathbbm 1_{A_\ell}(0)=\mathbbm 1_{A_\ell}(n)=1$$
and $A_\ell\in p_\ell$, we have that $x_\ell(0)=1$.\\
Now observe that by \cref{1.LemaIteratedDifLimit} we have
\begin{equation*}\label{1.Delta()Limit1}
\plimgG{p}{n_1}{\mathbb{N}}\cdots\plimgG{p}{n_{2^\ell}}{\mathbb{N}} \sigma^{\partial(n_1,...,n_{2^\ell})}\mathbbm 1_{A_\ell}=x_\ell,
\end{equation*}
which implies that for any $\ell\in\{0,...,d\}$ there exists a $B_1^\ell\in p$ such that for any $r\in\{2,...,2^\ell\}$, there exists a set $B_r^\ell(n_1,...,n_{r-1})\in p$ defined recursively for $n_1\in B_1^\ell$, $n_2\in B_2^\ell(n_1)$,..., $n_{r-1}\in B_{r-1}^\ell(n_1,...,n_{r-2})$ with the property that if $n_1\in B_1^\ell$ and for each $r\in\{2,...,2^\ell\}$, $n_r\in B_r^\ell(n_1,...,n_{r-1})$, then
$$\sigma^{\partial(n_1,...,n_{2^\ell})}\mathbbm 1_{A_\ell}(0)=x_{\ell}(0)=1.$$
So, $\partial(n_1,...,n_{2^\ell})\in A_\ell$.\\
With the definition of $B_r^\ell(n_1,...,n_{r-1})$ in mind and adhering to the convention that $\bigcap _{j\in\emptyset}F_j=\mathbb{N}$, we can pick the sequence $(n_k)_{k\in\mathbb{N}}$ inductively as follows:\\
First, pick
\begin{equation}\label{1.ProofLemmaFirstInduction}
n_1\in\bigcap_{\ell=0}^d B_1^\ell\in p.
\end{equation}
For $t\leq 2^d-1$, pick
\begin{equation}\label{1.ProofLemmaFiniteInduction}
n_{t+1}\in\bigcap_{\ell=0}^d\left[B_1^\ell\cap \bigcap_{s\in\{1,...,\min\{2^\ell,t\}\}\setminus\{2^\ell\}}\bigcap_{1\leq j_1<\cdots<j_s\leq t}B_{s+1}^\ell(n_{j_1},...,n_{j_s})\right]\in p.
\end{equation}
Finally, for $t\geq 2^d$, pick
\begin{equation}\label{1.ProofLemmaUnboundedInduction}
n_{t+1}\in\bigcap_{\ell=0}^d\left[B_1^\ell\cap \bigcap_{s\in\{1,...,2^\ell\}\setminus\{2^\ell\}}\bigcap_{1\leq j_1<\cdots<j_s\leq t}B_{s+1}^\ell(n_{j_1},...,n_{j_s})\right]\in p.
\end{equation}
Since the sets in \eqref{1.ProofLemmaFirstInduction}, \eqref{1.ProofLemmaFiniteInduction} and \eqref{1.ProofLemmaUnboundedInduction} are members of $p$ and $p$ is a non-principal ultrafilter, we can assume, without loss of generality, that for each $k\in\mathbb{N}$, $n_{k+1}>n_{k}$, which completes the proof of (i)$\implies$(ii).\\
(ii)$\implies$(iii): Let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence of natural numbers
such that for each $\ell\in\{0,...,d\}$,
$$D_\ell((n_k)_{k\in\mathbb{N}})\subseteq A_\ell.$$
For each $\ell\in \{0,...,d\}$, let
\begin{equation}\label{1.defI_l}
I_{\ell}=D_\ell((n_k)_{k\in\mathbb{N}}).
\end{equation}
Let $\ell\in\{0,...,d-1\}$. It follows from \eqref{1.defI_l} that
$$I_{\ell+1}=\{\partial(n_{j_{2^\ell+1}},...,n_{j_{2^{\ell+1}}})-\partial(n_{j_1},...,n_{j_{2^\ell}})\,|\,j_1<\cdots<j_{2^{\ell+1}}\},$$
so, for any $n\in I_{\ell}$, $$I_{\ell}\setminus(n+ I_{\ell+1})$$
is a finite set. In particular, if $I_\ell$ is infinite, $I_{\ell+1}$ is infinite as well. So, since $(n_k)_{k\in\mathbb{N}}$ is increasing, we have that for each $\ell\in\{0,...,d\}$, $I_\ell$ is infinite. Hence (ii)$\implies$(iii) follows from the observation that, by \eqref{1.defI_l}, for each $\ell\in\{0,...,d\}$,
$$I_\ell=D_\ell((n_k)_{k\in\mathbb{N}})\subseteq A_\ell.$$
(iii)$\implies$(i): First note that since $I_{\ell}\subseteq A_{\ell}$ for any $\ell\in\{0,...,d\}$, it
suffices to show that there exists a non-principal $p\in \beta \mathbb{N}$ with the property that for each $\ell\in\{0,...,d\}$, $I_\ell\in p_\ell$.\\
There exists a non-principal ultrafilter $p\in\beta \mathbb{N}$ such that
$$I_0\in p_0.$$
We claim that for any $\ell\in \{1,...,d\}$, $I_\ell\in p_\ell$. To see this, assume that
$I_{\ell-1}\in p_{\ell-1}$.
Then, by (iii), for each $n\in I_{\ell-1}$ the set $I_{\ell-1}\setminus(n+I_\ell)$ is finite. Since the non-principal ultrafilter $p_{\ell-1}$ contains no finite sets and $I_{\ell-1}\in p_{\ell-1}$, it follows that $n+I_\ell\in p_{\ell-1}$ for every $n\in I_{\ell-1}$, and hence
$$\{n\in \mathbb{N}\,|\,n+ I_{\ell}\in p_{\ell-1}\}\supseteq I_{\ell-1}\in p_{\ell-1}.$$
So, by the definition of $p_{\ell}$, $I_{\ell}\in p_{\ell}$. We are done.
\end{proof}
Recall that, given $\ell\in\mathbb{N}$, a set $E\subseteq \mathbb{N}$
is a $\Delta_\ell^*$\index{$\Delta_\ell^*$} set if for any increasing sequence $(n_k)_{k\in\mathbb{N}}$, there exist $j_1<\cdots<j_{2^\ell}$ for which $\partial(n_{j_1},...,n_{j_{2^\ell}})\in E$. As a corollary to \cref{1.DeltaCharacterization}, we obtain the following characterization of $\Delta_\ell^*$ sets.
\begin{prop}\label{1.WorkingDfn}
Given $\ell\in\mathbb{N}$, a set $E\subseteq\mathbb{N}$ is a $\Delta_\ell^*$ set if and only if $E$ has a non-trivial intersection with any $\Delta_\ell$ set.
\end{prop}
\begin{proof}
If $E$ is a $\Delta_\ell^*$ set, it is clear that it has a non-trivial intersection
with every $\Delta_\ell$ set.\\
Now suppose that $E$ has a non-trivial intersection with any $\Delta_\ell$ set. Let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence in $\mathbb{N}$ and let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter with $\{n_k\,|\,k\in\mathbb{N}\}\in p$. Since $\mathbb{N}\in p_\ell$, \cref{1.DeltaCharacterization} implies that there exists a subsequence $(n_{k_j})_{j\in\mathbb{N}}$ of $(n_k)_{k\in\mathbb{N}}$ for which
$$D_\ell((n_{k_j})_{j\in\mathbb{N}})\subseteq\mathbb{N}.$$
But $D_\ell((n_{k_j})_{j\in\mathbb{N}})$ is itself a $\Delta_\ell$ set. Thus, since $E\cap D_\ell((n_{k_j})_{j\in\mathbb{N}})\neq \emptyset$, there exist $j_1<\cdots<j_{2^\ell}$ for which
$\partial(n_{j_1},...,n_{j_{2^\ell}})\in E$, completing the proof.
\end{proof}
We record for future use two immediate corollaries of \cref{1.DeltaCharacterization} and \cref{1.WorkingDfn}:
\begin{cor}\label{1.DeltaL*characterization} Let $E\subseteq \mathbb{N}$ and let $\ell\in\mathbb{N}$. $E$ is a $\Delta_\ell^*$ set if and only if $E\in p_{\ell}$ for any non-principal ultrafilter $p\in\beta\mathbb{N}$.
\end{cor}
\begin{cor}
For any $N\in\mathbb{N}$ and any $\Delta_\ell^*$ sets $E_1,...,E_N$, the set $E_1\cap\cdots\cap E_N$ is also $\Delta_\ell^*$.
\end{cor}
We conclude this section by noting that if $D$ is a $\Delta_\ell^*$ set, then it is syndetic (i.e. there exist $n_1,...,n_N\in\mathbb{N}$ such that $\mathbb{N}\subseteq \bigcup_{j=1}^N(D-n_j)$).
\begin{lem}\label{1.Delta*IsSyndetic}
For any $\ell\in\mathbb{N}$, any $\Delta_\ell^*$ set is syndetic.
\end{lem}
\begin{proof}
Let $\ell\in\mathbb{N}$. We will show that if $D\subseteq \mathbb{N}$ is not syndetic, then it is not $\Delta_\ell^*$. If $D$ is not syndetic, then there exist increasing sequences of natural numbers $(L_k)_{k\in\mathbb{N}}$ and $(R_k)_{k\in\mathbb{N}}$ such that (1) for each $k\in\mathbb{N}$, $L_k<R_k<L_{k+1}$; (2) $\lim_{k\rightarrow\infty}R_k-L_k=\infty$ and (3) $D\cap \bigcup_{k\in\mathbb{N}}[L_k,R_k]=\emptyset$. Without loss of generality we can assume that for any $k\in\mathbb{N}$
$$\sum_{s=1}^kR_s+L_{k+1}<R_{k+1}.$$
So, for any $k\geq 2^\ell-1$ and any $1\leq j_1<\cdots<j_{2^\ell-1}\leq k$,
\begin{multline*}
L_{k+1}<R_{k+1}-\sum_{s=1}^kR_s<\partial(R_{j_1},...,R_{j_{2^{\ell}-1}},R_{k+1})\\
<R_{k+1}-R_{j_{2^\ell-1}}+\sum_{s=1}^{j_{2^\ell-1}-1}R_s<R_{k+1}-L_{j_{2^\ell-1}}<R_{k+1}.
\end{multline*}
This shows that $$D_\ell((R_k)_{k\in\mathbb{N}})\subseteq\bigcup_{k\in\mathbb{N}}[L_k,R_k].$$
Hence, $D$ is not a $\Delta_\ell^*$ set.
\end{proof}
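The construction in the proof can be checked numerically. The sketch below (our own illustration, not part of the argument) takes $\ell=2$, $R_k=10^k$ and $L_k=\lfloor R_k/2\rfloor$, which satisfy the growth condition $\sum_{s=1}^kR_s+L_{k+1}<R_{k+1}$, and verifies that every element of $D_2((R_k)_{k\in\mathbb{N}})$ falls strictly inside the block determined by its largest index:

```python
from itertools import combinations

# Fast-growing blocks [L_k, R_k] as in the proof: R_k = 10^k, L_k = R_k // 2.
K = 8
R = [10**k for k in range(1, K + 1)]
L = [r // 2 for r in R]

def partial2(n1, n2, n3, n4):
    # The iterated difference ∂ for ℓ = 2: (n4 - n3) - (n2 - n1).
    return (n4 - n3) - (n2 - n1)

def all_in_blocks():
    # Every element of D_2((R_k)) lies in (L_m, R_m), m being its largest index,
    # so a set avoiding all the blocks [L_k, R_k] cannot be Δ_2^*.
    for i, j, k, m in combinations(range(K), 4):
        d = partial2(R[i], R[j], R[k], R[m])
        if not (L[m] < d < R[m]):
            return False
    return True
```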
\section{$\Delta_\ell^*$ sets and diophantine inequalities}
As was mentioned in the introduction, we have two approaches to proving \cref{0.OddDegreeRecurrence}: an ultrafilter approach which is, so to say, soft and clean, and an elementary approach which is based on Ramsey's theorem and which, while being more cumbersome, allows us to obtain somewhat stronger finitistic results. The first approach is implemented in Subsection \ref{SubsectionUltrafilterApproach}, the second in Subsection \ref{SubsectionFinitisticProof}.
\subsection{The ultrafilter approach}\label{SubsectionUltrafilterApproach}
In order to establish the $\Delta_\ell^*$ property of the set
$$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$
we will find it convenient to work with the torus $\mathbb T=\mathbb{R}/\mathbb{Z}$ (which is identified with the unit interval $[0,1]$ with the endpoints glued together). In what follows we will tacitly assume that $v(n)$ is ``reduced'' mod 1 and, while considering limits of the form $\plimgG{p}{n}{\mathbb{N}} v(n)$, will think of the sequence $(v(n))_{n\in\mathbb{N}}\subseteq \mathbb T$ as corresponding to the sequence $v(n)\text{ mod 1}\in [0,1)$. In particular,
$$\plimgG{p}{n}{\mathbb{N}} v(n)=\alpha\text{ if and only if }\plimgG{p}{n}{\mathbb{N}}\|v(n)-\alpha\|=0,$$
where $\|\cdot\|$ denotes the distance to the closest integer.\\
We start by proving \cref{0.CubicCase} from the introduction (which in this section becomes \cref{2.CubicCase}). While this result forms a special case of \cref{0.OddDegreeRecurrence} (=\cref{2.OddDegreeRecurrence} below), we believe that its short proof will help the reader to better understand the underlying ideas.
\begin{prop}\label{2.CubicCase}
For any real number $\alpha$ and any $\epsilon>0$, the set
$$\mathcal R(n^3\alpha,\epsilon)=\{n\in\mathbb{N}\,|\,\|n^3\alpha\|<\epsilon\}$$
is $\Delta_2^*$.
\end{prop}
\begin{proof}
By \cref{1.DeltaL*characterization}, it suffices to show that for any non-principal ultrafilter $p\in\beta\mathbb{N}$,
$$\plimgG{p_2}{n}{\mathbb{N}}n^3\alpha=\plimgG{(-p_1+p_1)}{n}{\mathbb{N}}n^3\alpha=0.$$
As a preliminary result, we will show that for any $\gamma\in\mathbb T$ and any non-principal $p\in\beta\mathbb{N}$, $$\plimgG{(-p+p)}{n}{\mathbb{N}}n\gamma=0.$$
Indeed, let $\plimgG{p}{n}{\mathbb{N}}n\gamma=c$ and $\plimgG{(-p+p)}{n}{\mathbb{N}}n\gamma=\beta$. Then
$$\beta=\plimgG{p}{m}{\mathbb{N}}\plimgG{p}{n}{\mathbb{N}}(n-m)\gamma=\plimgG{p}{m}{\mathbb{N}}(c-m\gamma)=c-c=0.$$
Now, let $p_1=-p+p$, let $\plimgG{p_1}{n}{\mathbb{N}}n^3\alpha=\beta$
and let $\plimgG{p_1}{n}{\mathbb{N}}n^2\alpha=\gamma$.
We have
\begin{multline*}
\plimgG{p_2}{n}{\mathbb{N}}n^3\alpha=\plimgG{(-p_1+p_1)}{n}{\mathbb{N}}n^3\alpha=\plimgG{p_1}{m}{\mathbb{N}}\plimgG{p_1}{n}{\mathbb{N}}(n-m)^3\alpha\\
=\plimgG{p_1}{m}{\mathbb{N}}\plimgG{p_1}{n}{\mathbb{N}}(n^3\alpha-3mn^2\alpha+3m^2n\alpha-m^3\alpha)\\
=\plimgG{p_1}{m}{\mathbb{N}}(\beta-3m\gamma+0-m^3\alpha)
=\beta-0+0-\beta=0.
\end{multline*}
\end{proof}
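As a quick sanity check on the algebra used above, the following sketch (ours; it verifies only the polynomial identity, since the ultrafilter limits themselves are not computable) confirms the expansion behind the telescoping:

```python
# The binomial identity used in the proof of the proposition:
# (n - m)^3 = n^3 - 3mn^2 + 3m^2 n - m^3.
def cubic_expansion_matches(m, n):
    return (n - m) ** 3 == n**3 - 3*m*n**2 + 3*m**2*n - m**3
```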
We proceed now to the proof of the general case.
\begin{thm}\label{2.OddDegreeRecurrence}
For any odd real polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ and any non-principal ultrafilter $p\in\beta\mathbb{N}$,
$$\plimgG{p_\ell}{n}{\mathbb{N}}v(n)=0.$$
Equivalently, for any $\epsilon>0$, the set
$$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$
is $\Delta_\ell^*$.
\end{thm}
\begin{proof}
We first note that it is sufficient to prove \cref{2.OddDegreeRecurrence} for odd polynomials of the special form $v(n)=n^{2\ell-1}\alpha$, where $\alpha\in\mathbb{R}$. Indeed, the general case follows, via the identity
$$\plimgG{p_\ell}{n}{\mathbb{N}}\sum_{j=1}^\ell a_{j}n^{2j-1}=\sum_{j=1}^\ell\plimgG{p_\ell}{n}{\mathbb{N}}a_jn^{2j-1},$$
from the fact that for any non-principal ultrafilter $p$, $p_\ell=(p_{\ell-t})_t$ for any $t\leq \ell$.\\
Let $p$ be a non-principal ultrafilter and let $\alpha$ be a real number. We proceed by induction on $\ell\in\mathbb{N}$. If $\ell=1$, then we have that
$$\plimgG{p_1}{n}{\mathbb{N}}n\alpha=\plimgG{p}{m}{\mathbb{N}}\plimgG{p}{n}{\mathbb{N}}(n-m)\alpha=\plimgG{p}{n}{\mathbb{N}}n\alpha-\plimgG{p}{m}{\mathbb{N}}m\alpha=0.$$
Now let $\ell>1$ and suppose that \cref{2.OddDegreeRecurrence} holds for $t<\ell$. Let $\gamma\in\mathbb T$ be such that
$$\plimgG{p_{\ell-1}}{n}{\mathbb{N}}n^{2\ell-1}\alpha=\gamma.$$
Then
\begin{multline}\label{2.InductiveEquality1}
\plimgG{p_\ell}{n}{\mathbb{N}}n^{2\ell-1}\alpha=\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}(n-m)^{2\ell-1}\alpha=\\
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\left(n^{2\ell-1}\alpha-m^{2\ell-1}\alpha+v_m(n)\alpha+\sum_{j=1}^{\ell-1}(-m)^{2j-1}w_j(n)\alpha\right),
\end{multline}
where for each $m\in\mathbb{N}$,
$$v_m(n)=\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2j)!(2(\ell-j)-1)!}m^{2j}n^{2(\ell-j)-1}$$
and for each $j\in\{1,...,\ell-1\}$,
$$w_j(n)=\frac{(2\ell-1)!}{(2j-1)!(2(\ell-j))!}n^{2(\ell-j)}.$$
Since $v_m$ is an odd polynomial for each $m\in\mathbb{N}$, the inductive hypothesis implies that
$$\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}v_m(n)\alpha=0.$$
It also follows from the inductive hypothesis that for each $j\in\{1,...,\ell-1\}$,
$$\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\left(m^{2j-1}\left(\plimgG{p_{\ell-1}}{n}{\mathbb{N}}w_j(n)\alpha\right)\right)=0.$$
Thus, the right-hand side of \eqref{2.InductiveEquality1} equals
$$\gamma-\gamma+0+0=0,$$
completing the proof.
\end{proof}
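The grouping of the binomial terms into $v_m$ and the $w_j$'s can be verified numerically. The following sketch (ours) transcribes the definitions above and checks the expansion of $(n-m)^{2\ell-1}$, with the odd powers of $m$ written as powers of $-m$ so that all signs are explicit:

```python
from math import factorial

def v(ell, m, n):
    # v_m(n): the terms of (n - m)^(2ℓ-1) containing even powers m^(2j), 1 ≤ j ≤ ℓ-1.
    return sum(factorial(2*ell - 1) // (factorial(2*j) * factorial(2*(ell - j) - 1))
               * m**(2*j) * n**(2*(ell - j) - 1) for j in range(1, ell))

def w(ell, j, n):
    # w_j(n): the coefficient multiplying (-m)^(2j-1), 1 ≤ j ≤ ℓ-1.
    return factorial(2*ell - 1) // (factorial(2*j - 1) * factorial(2*(ell - j))) * n**(2*(ell - j))

def expansion_matches(ell, m, n):
    # (n - m)^(2ℓ-1) = n^(2ℓ-1) - m^(2ℓ-1) + v_m(n) + Σ_j (-m)^(2j-1) w_j(n).
    lhs = (n - m) ** (2*ell - 1)
    rhs = (n**(2*ell - 1) - m**(2*ell - 1) + v(ell, m, n)
           + sum((-m)**(2*j - 1) * w(ell, j, n) for j in range(1, ell)))
    return lhs == rhs
```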
\subsection{The finitistic approach}\label{SubsectionFinitisticProof}
The finitistic approach to the proof of \cref{0.OddDegreeRecurrence} requires the use of the following version of Ramsey's Theorem. For any $k\in\mathbb{N}$ and any set $S$, we denote by $S^{(k)}$ the set of all $k$-element subsets of $S$.
\begin{thm}[Ramsey's Theorem]\label{7.FinitisticRamsey}
Let $\ell,M\in\mathbb{N}$ and $r\geq2^\ell$. There exists a natural number $R=R(\ell,M,r)$, with the following property:\\
For any $M$-partition
$$\{1,...,R\}^{(2^\ell)}= \bigcup _{t=1}^M C_t,$$
one of the $C_t$'s contains $D^{(2^\ell)}$ for some set $D$ with $|D|\geq r$.
\end{thm}
\begin{rem}
Note that, in the above theorem, $\{1,...,R\}^{(2^\ell)}$ can be replaced (when convenient) by the set $\{n_1,...,n_R\}^{(2^\ell)}$, where $(n_j)_{j=1}^R$ is an $R$-element increasing sequence in $\mathbb{N}$.
\end{rem}
Let $\ell\in \mathbb{N}$ and $r\geq 2^\ell$. A set $E\subseteq
\mathbb{N}$ is called a \textbf{$\Delta_{\ell,r}$\index{$\Delta_{\ell,r}$} set} if there exists an $r$-element sequence $(n_k)_{k=1}^r$ in $\mathbb{Z}$ such that
$$D_\ell((n_k)_{k=1}^r)=\{\partial(n_{j_1},...,n_{j_{2^\ell}})\,|\,1\leq j_1<\cdots<j_{2^\ell}\leq r\}\index{$D_\ell((n_k)_{k=1}^r)$}$$
is a subset of $E$.
A set $E\subseteq\mathbb{N}$ is a \textbf{$\Delta_{\ell,0}$\index{$\Delta_{\ell,0}$} set} if it is a $\Delta_{\ell,r}$ set for each $r\geq 2^\ell$.\\
A set $E\subseteq \mathbb{N}$ is a \textbf{$\Delta_{\ell,r}^*$\index{$\Delta_{\ell,r}^*$} set} if it has a non-trivial intersection with any $\Delta_{\ell,r}$ set. Similarly, $E\subseteq \mathbb{N}$ is a \textbf{$\Delta_{\ell,0}^*$\index{$\Delta_{\ell,0}^*$} set} if it has a non-empty intersection with any $\Delta_{\ell,0}$ set.\\
We summarize in the following proposition the relations between the families of sets just introduced. These relations follow directly from \cref{7.FinitisticRamsey}; we omit the proofs.
\begin{prop}\label{7.DeltaRSummary}
Let $\ell,\ell_1,\ell_2\in\mathbb{N}$, let $r\geq2^\ell$, let $r_1\geq 2^{\ell_1}$ and let $N\in\mathbb{N}$. The following statements hold:
\begin{enumerate}[(i)]
\item If $\ell_1\leq \ell_2$ and $R\in\mathbb{N}$ satisfies $R\geq 2^{\ell_2-\ell_1}r_1$, then any $\Delta_{\ell_1,r_1}^*$ set is a $\Delta_{\ell_2,R}^*$ set.
\item A set $E\subseteq \mathbb{N}$ is a $\Delta_{\ell,0}^*$ set if and only if there exists $R\geq 2^\ell$ for which $E$ is a $\Delta_{\ell,R}^*$ set.
\item There exists $R\geq r$ such that for any $\Delta_{\ell,r}^*$ sets $E_1,...,E_N
\subseteq \mathbb{N}$, the set $E_1\cap\cdots\cap E_N$ is $\Delta_{\ell,R}^*$. In particular, $\Delta_{\ell,0}^*$ sets have the finite intersection property.
\end{enumerate}
\end{prop}
\begin{rem}
The finite intersection property of $\Delta_{\ell,0}^*$ (item (iii) in \cref{7.DeltaRSummary}) also follows from a general set-theoretical fact which states that if $\Phi$ is a partition regular family of non-empty subsets of $\mathbb{N}$\footnote{
A family $\Phi$ of non-empty subsets of $\mathbb{N}$ is called partition regular if for any $N\in\mathbb{N}$ and any sets $C_1,...,C_N$ with $\bigcup_{j=1}^NC_j\in \Phi$, one has that some $C_j$ belongs to $\Phi$.
}, then
$$\Phi^*=\{A\subseteq\mathbb{N}\,|\, \forall B\in\Phi,\, A\cap B\neq\emptyset\}$$
has the finite intersection property.
\end{rem}
To visualize the relations between the various classes of sets which were introduced above, let us denote by $\Delta_\ell^*$, $\Delta_{\ell,0}^*$ and $\Delta_{\ell,r}^*$ the families of sets with the corresponding properties. Then for $\ell_1< \ell_2$ and $r_1<r_2$, we have the following diagram of equalities and inclusions:
$$\begin{matrix}
\vdots& & & &\vdots& & & &\vdots& & & &\vdots& &\vdots\\
\rotatebox[origin=c]{-90}{$=$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
\Delta_{\ell_2,2^{\ell_2}}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_2,2^{\ell_2}r_1}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_2,2^{\ell_2}r_2}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_2,0}^*&\subsetneq &\Delta_{\ell_2}^*\\
\rotatebox[origin=c]{-90}{$=$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
\vdots& & & &\vdots& & & &\vdots& & & &\vdots& &\vdots\\
\rotatebox[origin=c]{-90}{$=$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
\Delta_{\ell_1,2^{\ell_1}}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_1,2^{\ell_1}r_1}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_1,2^{\ell_1}r_2}^*&\subseteq&\cdots&\subseteq&\Delta_{\ell_1,0}^*&\subsetneq &\Delta_{\ell_1}^*\\
\rotatebox[origin=c]{-90}{$=$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
\vdots& & & &\vdots& & & &\vdots& & & &\vdots& &\vdots\\
\rotatebox[origin=c]{-90}{$=$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subseteq$}& & & &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
\Delta_{1,2}^*&\subseteq&\cdots&\subseteq&\Delta_{1,2r_1}^*&\subseteq&\cdots&\subseteq&\Delta_{1,2r_2}^*&\subseteq&\cdots&\subseteq&\Delta_{1,0}^*&\subsetneq &\Delta^*.
\end{matrix}
$$
(The strict inclusions appearing in the two right-most columns of the above diagram follow from \cref{2.OddPolynomialCharacterization} and \cref{3.StrictDeltaEllinclusion} below.)\\
Before embarking on the proof of the finitary version of \cref{0.OddDegreeRecurrence}, we will illustrate the main ideas in the special case $v(n)=n^3\alpha$.
\begin{prop}[Finitary version of \cref{0.CubicCase}]\label{7.CubicFinitisticCase}
For any $\epsilon>0$, there exists $r\in\mathbb{N}$ such that for any real $\alpha$, the set
$$\mathcal R(n^3\alpha,\epsilon)=\{n\in\mathbb{N}\,|\,\|n^3\alpha\|<\epsilon\}$$
is $\Delta_{2,r}^*$.
\end{prop}
\begin{proof}
Let $\epsilon>0$ and let $N\in\mathbb{N}$ be such that $\frac{7}{N}<\epsilon$. We will show that for any $R\in\mathbb{N}$ large enough, any $R$-element sequence $(n_j)_{j=1}^R$ in $\mathbb{Z}$ with $D_2((n_j)_{j=1}^R)\subseteq \mathbb{N}$ and any $\alpha\in\mathbb{R}$, there exist $1\leq j_1<\cdots<j_4\leq R$ such that
\begin{equation}\label{7.CubeCloseTo0}
\|(\partial(n_{j_1},...,n_{j_4}))^3\alpha\|<\epsilon.
\end{equation}
Define the sets $\mathcal Q(k_1,k_2,k_3)$ for $k_1,k_2, k_3\in\{0,...,N-1\}$ by
\begin{equation*}\label{7.SmalllCube}
\mathcal Q(k_1,k_2,k_3)=\left\{(\alpha_1,\alpha_2,\alpha_3)\in\mathbb T^3\,\middle|\,\alpha_j\in \left[\frac{k_j}{N},\frac{k_j+1}{N}\right)\text{ for each }j\right\}.
\end{equation*}
Let $R=R(2,N^3,6)$ be as in the statement of \cref{7.FinitisticRamsey}.
Let $(n_j)_{j=1}^R$ be any $R$-element sequence in $\mathbb{Z}$ with $D_2((n_j)_{j=1}^R)\subseteq\mathbb{N}$, let $\alpha\in\mathbb{R}$ and let
$$D=\{((n_{j_2}-n_{j_1})^3\alpha,n_{j_3}(n_{j_2}-n_{j_1})^2\alpha,(n_{j_3}-n_{j_2})^2n_{j_1}\alpha)\in\mathbb T^3\,|\,1\leq j_1<\cdots<j_4\leq R\}.$$
Since
$$D=\bigcup_{k_1,k_2,k_3=0}^{N-1}\left(\mathcal Q(k_1,k_2,k_3)\cap D\right),$$
\cref{7.FinitisticRamsey} implies that there exist $1\leq t_1<\cdots <t_6\leq R$ and $0\leq s_1,s_2,s_3\leq N-1$ for which the set
$$D'=\{((n_{t_{j_2}}-n_{t_{j_1}})^3\alpha,n_{t_{j_3}}(n_{t_{j_2}}-n_{t_{j_1}})^2\alpha,(n_{t_{j_3}}-n_{t_{j_2}})^2n_{t_{j_1}}\alpha)\in\mathbb T^3\,|\,1\leq j_1<\cdots<j_4\leq 6\}$$
is a subset of $\mathcal Q(s_1,s_2,s_3)$.\\
Since $D'\subseteq \mathcal Q(s_1,s_2,s_3)$, we have
$$\|((n_{t_4}-n_{t_3})^3-(n_{t_2}-n_{t_1})^3)\alpha\|<\frac{1}{N},$$
$$\|3((n_{t_4}-n_{t_3})(n_{t_2}-n_{t_1})^2)\alpha\|<\frac{3}{N}$$
and
$$\|3((n_{t_4}-n_{t_3})^2(n_{t_2}-n_{t_1}))\alpha\|<\frac{3}{N};$$
which proves \eqref{7.CubeCloseTo0}.
\end{proof}
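The role of the constant $7/N$ becomes transparent from the identity used implicitly in the last step, with $a=n_{t_4}-n_{t_3}$ and $b=n_{t_2}-n_{t_1}$. A small numerical check of this identity (ours):

```python
# (a - b)^3 = (a^3 - b^3) - 3a^2 b + 3a b^2: the three groups on the right are
# exactly the quantities bounded in the proof by 1/N, 3/N and 3/N, giving 7/N.
def seven_over_N_decomposition(a, b):
    return (a - b) ** 3 == (a**3 - b**3) - 3*a**2*b + 3*a*b**2
```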
We move now to the finitary version of \cref{0.OddDegreeRecurrence}.
\begin{thm}\label{7.FinitisticOddDegreeRecurrence}
Let $\ell\in\mathbb{N}$. Then for any $\epsilon>0$, there exists an $r=r(\epsilon,\ell)\geq2^\ell$ such that for any odd real polynomial $v(x)=\sum_{j=1}^{\ell}a_jx^{2j-1}$, the set $$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$
is a $\Delta_{\ell,r}^*$ set.
\end{thm}
\begin{proof}
By \cref{7.DeltaRSummary}, it suffices to show that for any $\ell\in\mathbb{N}$ and any $\epsilon>0$ there exists an $R\in\mathbb{N}$ such that for any $\alpha\in\mathbb{R}$ the set
$\mathcal R( n^{2\ell-1}\alpha,\epsilon)$
is $\Delta_{\ell,R}^*$.\\
We proceed by induction on $\ell\in\mathbb{N}$. The case $\ell=1$ follows from the pigeonhole principle. Now let $\ell\geq 2$ and suppose that the result holds for any $t<\ell$. Let $\epsilon>0$ and let $R_1\geq 2^{\ell-1}$ be a natural number guaranteeing that for any $\alpha_1,...,\alpha_{\ell-1}\in\mathbb{R}$ and any $R_1$-element sequence $(n_k)_{k=1}^{R_1}$ in $\mathbb{Z}$ with $D_{\ell-1}((n_k)_{k=1}^{R_1})\subseteq \mathbb{N}$, the set
$$\{\partial(n_{k_1},...,n_{k_{2^{\ell-1}}})\,|\,1\leq k_1<\cdots<k_{2^{\ell-1}}\leq R_1\}$$
has a non-empty intersection with the set
$\mathcal R(\sum_{j=1}^{\ell-1}n^{2j-1}\alpha_j,\frac{\epsilon}{5})$.\\
By \cref{7.FinitisticRamsey}, there exists $R\in\mathbb{N}$ such that for any $\alpha\in\mathbb{R}$ and any $R$-element sequence $(n_k)_{k=1}^{R}$ in $\mathbb{Z}$ with $D_\ell((n_k)_{k=1}^R)\subseteq \mathbb{N}$, there exist $\beta_1,\beta_2\in\mathbb T$ and a $2R_1$-element subsequence $(m_s)_{s=1}^{2R_1}$ of $(n_k)_{k=1}^R$ such that for any $1\leq s_1<\cdots<s_{2^{\ell-1}}\leq R_1$ and any $R_1+1\leq t_1<\cdots<t_{2^{\ell-1}}\leq 2R_1$,
\begin{equation}\label{7.NearbyLargestPower}
\|(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2\ell-1}\alpha-(\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2\ell-1}\alpha\|<\frac{\epsilon}{5},
\end{equation}
\begin{multline}\label{7.NearbyPolysLargeOdd}
\|\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2j)!(2(\ell-j)-1)!}(-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2j}(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2(\ell-j)-1}\alpha\\
-\beta_1\|<\frac{\epsilon}{10},
\end{multline}
and
\begin{multline}\label{7.NearbyPolysSmallOdd}
\|\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2(\ell-j)-1)!(2j)!}(-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2(\ell-j)-1}(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2j}\alpha\\
-\beta_2\|<\frac{\epsilon}{10}.
\end{multline}
By our choice of $R_1$, we have that for any $1\leq s_1<\cdots<s_{2^{\ell-1}}\leq R_1$ there exist $R_1+1\leq t_1<\cdots<t_{2^{\ell-1}}\leq 2R_1$ such that
$$\|\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2j)!(2(\ell-j)-1)!}(-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2j}(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2(\ell-j)-1}\alpha\|<\frac{\epsilon}{5},$$
which, together with \eqref{7.NearbyPolysLargeOdd}, implies that
for any $1\leq s_1<\cdots<s_{2^{\ell-1}}\leq R_1$ and any $R_1+1\leq t_1<\cdots<t_{2^{\ell-1}}\leq 2R_1$,
$$\|\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2j)!(2(\ell-j)-1)!}(-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2j}(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2(\ell-j)-1}\alpha\|<\frac{2\epsilon}{5}.$$
Similarly, by \eqref{7.NearbyPolysSmallOdd}, we have that for any $1\leq s_1<\cdots<s_{2^{\ell-1}}\leq R_1$ and any $R_1+1\leq t_1<\cdots<t_{2^{\ell-1}}\leq 2R_1$,
$$\|\sum_{j=1}^{\ell-1}\frac{(2\ell-1)!}{(2(\ell-j)-1)!(2j)!}(-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2(\ell-j)-1}(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}}))^{2j}\alpha\|<\frac{2\epsilon}{5}.$$
So, by \eqref{7.NearbyLargestPower}, we have that for any $1\leq s_1<\cdots<s_{2^{\ell-1}}\leq R_1$ and any $R_1+1\leq t_1<\cdots<t_{2^{\ell-1}}\leq 2R_1$,
$$\|(\partial(m_{t_1},...,m_{t_{2^{\ell-1}}})-\partial(m_{s_1},...,m_{s_{2^{\ell-1}}}))^{2\ell-1}\alpha\|<\epsilon,$$
completing the proof.
\end{proof}
As we mentioned in the Introduction, for any real polynomial $v$ with $v(0)\in\mathbb{Z}$, the set $\mathcal R(v,\epsilon)$ is an IP$^*$ set (see \cite[Theorem 1.21]{FBook}). As a matter of fact, $\mathcal R(v,\epsilon)$ is actually an IP$_r^*$ set for a suitable $r\in\mathbb{N}$. Given $r\in\mathbb{N}$, a set $E\subseteq \mathbb{N}$ is called an \textbf{IP$_r$ set}\index{IP$_r$} if it contains a finite sums set of the form $$\text{FS}((n_k)_{k=1}^r)=\{n_{k_1}+\cdots+n_{k_m}\,|\,1\leq k_1<\cdots<k_m\leq r,\,1\leq m\leq r\},\index{FS$((n_k)_{k=1}^r)$}$$
where $(n_k)_{k=1}^r$ is an $r$-element sequence in $\mathbb{N}$. A set $E\subseteq \mathbb{N}$ is an \textbf{IP$_r^*$\index{IP$_r^*$} set} if it has a non-trivial intersection with any IP$_r$ set. It is of interest to juxtapose \cref{7.FinitisticOddDegreeRecurrence} with the following result.
\begin{thm}[see the proof of Theorem 7.7 in \cite{BerUltraAcrossMath}]\label{3.IP0Returns}
Let $N\in\mathbb{N}$. Then for any $\epsilon>0$ there exists $r=r(N,\epsilon)\in\mathbb{N}$ such that for any real polynomial $v(x)=\sum_{j=1}^{N}a_jx^j$ the set $\mathcal R(v,\epsilon)$
is an IP$_r^*$ set.
\end{thm}
It is natural to ask about the relation between \cref{7.FinitisticOddDegreeRecurrence} and \cref{3.IP0Returns} when $v$ is an odd real polynomial. We will show in Section 8 that the families of sets $\Delta_{\ell,r}^*$ and IP$_r^*$ are, so to say, in general position, and so \cref{7.FinitisticOddDegreeRecurrence} provides a new diophantine approximation result when $v$ is an odd real polynomial.
\section{A converse to \cref{0.OddDegreeRecurrence}}\label{SectionAConverseResult}
In this section we prove the following converse of \cref{0.OddDegreeRecurrence}.
\begin{thm}\label{2.OddPolyDegreeThm}
Let $\ell\in\mathbb{N}$ and let $v(x)$ be an odd real polynomial with irrational leading coefficient. If for each $\epsilon>0$ the set
$$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$
is $\Delta_\ell^*$, then $\deg (v)\leq 2\ell-1$.
\end{thm}
We will derive \cref{2.OddPolyDegreeThm} from the following lemma, which will also be used below in Section 8.
\begin{lem}\label{2.TechnicalLemma}
Let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter, let $\ell\in\mathbb{N}$ and let $\alpha_0,...,\alpha_\ell\in\mathbb T$ be such that for $j\in\{1,...,\ell\}$,
\begin{equation}\label{2.LemmaCondition1}
\plimgG{p}{n}{\mathbb{N}}n\alpha_j=0
\end{equation}
and
\begin{equation}\label{2.LemmaCondition2}
-2^{j-1}\frac{(2j+1)!}{2!(2j-1)!}\plimgG{p}{n}{\mathbb{N}}n^2\alpha_j=\alpha_{j-1}.
\end{equation}
Then
\begin{equation}\label{2.LemmaConsequence}
\plimgG{p_\ell}{n}{\mathbb{N}} n^{2\ell+1}\alpha_\ell=\plimgG{p}{n}{\mathbb{N}}n\alpha_0.
\end{equation}
\end{lem}
\begin{proof}
The proof is by induction on $\ell\in\mathbb{N}$. When $\ell=1$, note that
$$\plimgG{(-p+p)}{n}{\mathbb{N}}n^3\alpha_1=\plimgG{p}{m}{\mathbb{N}}\plimgG{p}{n}{\mathbb{N}}\left((n^3-m^3)\alpha_1+3m^2n\alpha_1-3mn^2\alpha_1\right).$$
So by \eqref{2.LemmaCondition1} and \eqref{2.LemmaCondition2}, we have that
$$\plimgG{(-p+p)}{n}{\mathbb{N}}n^3\alpha_1=\plimgG{p}{m}{\mathbb{N}}m\alpha_0$$
as desired.\\
Let $\ell\geq 2$, let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter and let $\alpha_0,...,\alpha_\ell\in\mathbb T$ satisfy \eqref{2.LemmaCondition1} and \eqref{2.LemmaCondition2}. Suppose that \cref{2.TechnicalLemma} holds for all $\ell_0<\ell$. For any $\alpha\in \mathbb T$ and any $d\in\{1,...,\ell-1\}$, let $q^{(d)}=p_{\ell-d}$ and define $\beta^{(d)}_{d-1},...,\beta^{(d)}_0\in \mathbb T$ by letting $\beta^{(d)}_{d-1}=\alpha$ and setting
$$\beta^{(d)}_{j-1}=-2^{j-1}\frac{(2j+1)!}{2!(2j-1)!}\plimgG{q^{(d)}}{n}{\mathbb{N}}n^2\beta_{j}^{(d)}$$
for each $j\in\{1,...,d-1\}$.\\
Since for any non-principal ultrafilter $q\in\beta\mathbb{N}$ and any $\beta\in\mathbb T$,
$$\plimgG{ q_1}{n}{\mathbb{N}}n\beta=\plimgG{q}{m}{\mathbb{N}}\plimgG{q}{n}{\mathbb{N}}\big((n-m)\beta\big)=0,$$
it follows from the inductive hypothesis that for any $d\in\{1,...,\ell-1\}$,
\begin{equation}\label{2.InductiveCorollary}
\plimgG{p_{\ell-1}}{n}{\mathbb{N}}n^{2d-1}\alpha=\plimgG{q^{(d)}_{d-1}}{n}{\mathbb{N}}n^{2d-1}\beta^{(d)}_{d-1}=\plimgG{q^{(d)}}{n}{\mathbb{N}}n\beta^{(d)}_0=0.
\end{equation}
Note now that for each $j\in\{2,...,\ell\}$,
\begin{equation*}
-2^{j-2}\frac{(2j-1)!}{2!(2j-3)!}\plimgG{p}{n}{\mathbb{N}}\left(n^2\alpha_j\cdot 2^{j-1}\frac{(2j+1)!}{(2j-1)!}\right)=\alpha_{j-1}\cdot 2^{j-2}\frac{(2j-1)!}{(2j-3)!}.
\end{equation*}
Thus, the inductive hypothesis applied to
$$\alpha_{1}\cdot \frac{3!}{1!},...,\alpha_{\ell}\cdot 2^{\ell-1}\frac{(2\ell+1)!}{(2\ell-1)!}$$
implies that
\begin{equation}\label{2.Inductie2}
2^{\ell-1}\frac{(2\ell+1)!}{(2\ell-1)!}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}n^{2\ell-1}\alpha_\ell=\plimgG{p}{n}{\mathbb{N}}\left(n\alpha_1\cdot\frac{3!}{1!}\right)=0.
\end{equation}
So, \eqref{2.InductiveCorollary} and \eqref{2.Inductie2} imply that for any $t\in\{1,...,\ell\}$,
\begin{equation}\label{2.Lemma4(t)}
\plimgG{p_{\ell-1}}{n}{\mathbb{N}}n^{2(\ell-t)+1}\alpha_\ell=0.
\end{equation}
By the Binomial Theorem, the left-hand side of \eqref{2.LemmaConsequence} equals
\begin{equation}\label{2.Lemma3}
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\sum_{j=1}^{2\ell}\frac{(2\ell+1)!}{j!(2\ell+1-j)!}(-m)^jn^{2\ell+1-j}\alpha_\ell,
\end{equation}
which in turn equals
\begin{multline*}
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\sum_{s=1}^{\ell}\frac{(2\ell+1)!}{(2s-1)!(2(\ell-s)+2)!}(-m)^{2s-1}n^{2(\ell-s)+2}\alpha_\ell\\
+\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\sum_{t=1}^{\ell}\frac{(2\ell+1)!}{(2t)!(2(\ell-t)+1)!}(-m)^{2t}n^{2(\ell-t)+1}\alpha_\ell.
\end{multline*}
By \eqref{2.InductiveCorollary},
$$\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\sum_{s=1}^{\ell-1}\frac{(2\ell+1)!}{(2s-1)!(2(\ell-s)+2)!}(-m)^{2s-1}n^{2(\ell-s)+2}\alpha_\ell=0$$
and by \eqref{2.Lemma4(t)},
$$\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\sum_{t=1}^{\ell}\frac{(2\ell+1)!}{(2t)!(2(\ell-t)+1)!}(-m)^{2t}n^{2(\ell-t)+1}\alpha_\ell=0.$$
So, the expression in \eqref{2.Lemma3} equals
\begin{equation*}\label{2.Lemma7}
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\frac{(2\ell+1)!}{2!(2\ell-1)!}(-1)^{2\ell-1}m^{2\ell-1}n^{2}\alpha_\ell.
\end{equation*}
Finally, by \eqref{2.LemmaCondition1} (note that $\plimgG{p}{n}{\mathbb{N}}n\alpha_\ell=0$ implies $\plimgG{p}{n}{\mathbb{N}}nk\alpha_\ell=0$ for every $k\in\mathbb{Z}$, since $\|nk\alpha_\ell\|\leq |k|\cdot\|n\alpha_\ell\|$; this is what makes the mixed products in the expansion of $(\partial(n_1,...,n_{2^{\ell-1}}))^2$ below vanish in the iterated limit), we have that for any $m\in\mathbb{N}$,
\begin{multline*}
\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\frac{(2\ell+1)!}{2!(2\ell-1)!}(-1)^{2\ell-1}m^{2\ell-1}n^{2}\alpha_\ell\\
=\plimgG{p}{n_1}{\mathbb{N}}\cdots\plimgG{p}{n_{2^{\ell-1}}}{\mathbb{N}}\frac{(2\ell+1)!}{2!(2\ell-1)!}(-1)^{2\ell-1}m^{2\ell-1}(\partial(n_1,\cdots,n_{2^{\ell-1}}))^{2}\alpha_\ell\\
=\plimgG{p}{n_1}{\mathbb{N}}\cdots\plimgG{p}{n_{2^{\ell-1}}}{\mathbb{N}}\frac{(2\ell+1)!}{2!(2\ell-1)!}(-1)^{2\ell-1}m^{2\ell-1}(\sum_{j=1}^{2^{\ell-1}}n_j^{2})\alpha_\ell\\
=m^{2\ell-1}\left(-2^{\ell-1}\frac{(2\ell+1)!}{2!(2\ell-1)!}\plimgG{p}{n}{\mathbb{N}}n^2\alpha_\ell\right).
\end{multline*}
Thus, by \eqref{2.LemmaCondition2} and the inductive hypothesis,
\begin{multline*}
\plimgG{p_\ell}{m}{\mathbb{N}}m^{2\ell+1}\alpha_\ell=\plimgG{p_{\ell-1}}{m}{\mathbb{N}}m^{2\ell-1}\left(-2^{\ell-1}\frac{(2\ell+1)!}{2!(2\ell-1)!}\plimgG{p}{n}{\mathbb{N}}n^2\alpha_\ell\right)\\=\plimgG{p_{\ell-1}}{m}{\mathbb{N}}m^{2\ell-1}\alpha_{\ell-1}=\plimgG{p}{m}{\mathbb{N}}m\alpha_0,
\end{multline*}
completing the proof.
\end{proof}
Now we proceed to the proof of \cref{2.OddPolyDegreeThm}.
\begin{proof}[Proof of \cref{2.OddPolyDegreeThm}]
Let $\ell\in\mathbb{N}$ and let $v(x)=\sum_{j=1}^{\ell'}a_jx^{2j-1}$ be an odd polynomial with irrational leading coefficient. In order to prove the contrapositive of \cref{2.OddPolyDegreeThm}, it suffices to show that if $\ell'>\ell$, then there exists a non-principal ultrafilter $p\in \beta\mathbb{N}$ such that
$$\plimgG{p_\ell}{n}{\mathbb{N}}v(n)\neq0.$$
To prove this, suppose that $\ell'>\ell$. Choose $\ell'$ irrational numbers $\alpha_0,...,\alpha_{\ell'-1}$ with $\alpha_{\ell'-1}=a_{\ell'}$ and with the property that $\alpha_0,\alpha_1,...,\alpha_{\ell'-1}$ are rationally independent. By a classical result of Hardy and Littlewood \cite{HardyLittlewood1914some}, the set
$$\{(n\alpha_0,...,n\alpha_{\ell'-1},n^2\alpha_0,...,n^2\alpha_{\ell'-1}) \,|\,n\in\mathbb{N}\}$$
is dense in $\mathbb T^{2\ell'}$ and hence there exists an increasing sequence $(n_k)_{k\in\mathbb{N}}$ such that, in $\mathbb T$,
\begin{enumerate}[(1)]
\item $\lim_{k\rightarrow\infty}n_k\alpha_0=\frac{1}{2}.$
\item For any $j\in\{1,...,\ell'-1\}$,
$$\lim_{k\rightarrow\infty}n_k\alpha_j=0.$$
\item For any $j\in\{1,...,\ell'-1\}$, $$\lim_{k\rightarrow\infty}n^2_k\left(-2^{j-1}\frac{(2j+1)!}{2!(2j-1)!}\alpha_j\right)=\alpha_{j-1}.$$
\end{enumerate}
Let $q\in\beta\mathbb{N}$ be a non-principal ultrafilter with $\{n_k\,|\,k\in\mathbb{N}\}\in q$. By \cref{2.OddDegreeRecurrence},
$$\plimgG{q_{(\ell'-1)}}{n}{\mathbb{N}}v(n)=\plimgG{q_{(\ell'-1)}}{n}{\mathbb{N}}n^{2\ell'-1}\alpha_{\ell'-1}.$$
So, by \cref{2.TechnicalLemma}, we have
$$\plimgG{q_{(\ell'-1)}}{n}{\mathbb{N}}v(n)=\plimgG{q}{n}{\mathbb{N}}n\alpha_0=\frac{1}{2}.$$
Finally, let $t\geq 0$ be such that $t+\ell=\ell'-1$. Letting $p=q_t$, we have
$$q_{(\ell'-1)}=q_{(t+\ell)}=(q_t)_\ell=p_\ell.$$
It follows that $\plimgG{p_\ell}{n}{\mathbb{N}}v(n)=\frac{1}{2}$. We are done.
\end{proof}
\section{Odd polynomials and the combinatorial properties of sets of the form $\mathcal R(v,\epsilon)$}\label{SectionCharPol}
In this section we will show that, roughly speaking, odd real polynomials are the only polynomials $v(x)$ such that for any $\epsilon>0$, the set
$$\mathcal R(v,\epsilon)=\{n\in\mathbb{N}\,|\,\|v(n)\|<\epsilon\}$$ is $\Delta_\ell^*$ for some $\ell\in\mathbb{N}$. More precisely:
\begin{thm}\label{2.OddPolynomialCharacterization}
Let $\ell\in\mathbb{N}$ and let $v(x)$ be a real polynomial. The following are equivalent:
\begin{enumerate}[(i)]
\item There exists a polynomial $w\in\mathbb{Q}[x]$ such that $w(0)\in\mathbb{Z}$ and $v-w$ is an odd polynomial of degree at most $2\ell-1$.
\item For any $\epsilon>0$, there exists $r\in\mathbb{N}$ for which $\mathcal R(v,\epsilon)$
is $\Delta_{\ell,r}^*$.
\item For any $\epsilon>0$, $\mathcal R(v,\epsilon)$ is $\Delta_\ell^*$.
\end{enumerate}
\end{thm}
In order to prove \cref{2.OddPolynomialCharacterization} we will need the following two lemmas.
The first lemma deals with polynomials with rational coefficients and is an easy consequence of the pigeonhole principle. The second, more technical, lemma emphasizes the distinct properties of $\mathcal R(v,\epsilon)$ for even and odd polynomials.
\begin{lem}\label{2.Q[x]Recurrence}
Let $v(x)$ be a polynomial with rational coefficients satisfying $v(0)\in\mathbb{Z}$. Then there exists $r\in\mathbb{N}$ such that for any $\epsilon>0$,
$\mathcal R(v,\epsilon)$
is $\Delta_{1,r}^*$.
Equivalently, for each $a\in\mathbb{N}$, $a\mathbb{N}$ is $\Delta_{1,a+1}^*$.
\end{lem}
\begin{proof}
Let $v(x)=\sum_{j=0}^N\frac{a_j}{b_j}x^j$, where $a_j\in\mathbb{Z}$, $b_j\in\mathbb{N}$ and $b_0=1$. Let $b=\prod_{j=1}^Nb_j$ and let $(n_k)_{k=1}^{b+1}$ be a $(b+1)$-element sequence in $\mathbb{Z}$. By the pigeonhole principle, there exist $s,t\in\{1,...,b+1\}$ with $s<t$ for which $b|(n_t-n_s)$, and hence $v(n_t-n_s)\in\mathbb{Z}$.
Thus,
$$\{n\in\mathbb{N}\,|\,v(n)\in\mathbb{Z}\}$$
is $\Delta_{1,b+1}^*$. Since this set is contained in $\mathcal R(v,\epsilon)$ for every $\epsilon>0$, we are done.
\end{proof}
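To illustrate the lemma in its simplest non-trivial case, take $v(x)=\frac{x}{2}$, so that $b=2$. Among any three integers $n_1,n_2,n_3$, two of them, say $n_s$ and $n_t$ with $s<t$, are congruent modulo $2$, and hence
$$v(n_t-n_s)=\frac{n_t-n_s}{2}\in\mathbb{Z}.$$
This recovers the fact that $2\mathbb{N}$ is $\Delta_{1,3}^*$.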
\begin{lem}\label{2.LackOfRecurrenceForEvenPowers}
Let $v(x)=\sum_{j=0}^{s}a_jx^{2j}$ be a non-zero even polynomial such that each $a_j$ is either zero or irrational. Then there exists an $\epsilon>0$ such that for any $\ell\in\mathbb{N}$, the set
$\mathcal R(v,\epsilon)$
is not $\Delta_\ell^*$.
\end{lem}
\begin{proof}
It suffices to show that there exist a finite set $F\subseteq \mathbb T\setminus\{0\}$ and a non-principal ultrafilter $p\in\beta\mathbb{N}$ such that for any $\ell\in\mathbb{N}$,
\begin{equation}\label{2.EquationToProveInEvenPowers}
\plimgG{p_\ell}{n}{\mathbb{N}}v(n)\in F.
\end{equation}
Indeed, \eqref{2.EquationToProveInEvenPowers} implies that there is an $\epsilon>0$ such that for any $\ell\in\mathbb{N}$, $\mathcal R(v,\epsilon)\not\in p_\ell$. Hence, $\mathcal R(v,\epsilon)$ is not $\Delta_\ell^*$ for any $\ell\in\mathbb{N}$.
We now proceed to prove \eqref{2.EquationToProveInEvenPowers}. Let $s\in\mathbb{N}\cup\{0\}$ be such that $\deg(v)=2s$. If $s=0$, then $v(x)=v(0)\in\mathbb{R}\setminus\mathbb{Q}$, and hence for any $\ell\in\mathbb{N}$ and any non-principal ultrafilter $p\in\beta\mathbb{N}$,
$$\plimgG{p_\ell}{n}{\mathbb{N}}v(n)=v(0)\neq 0.$$
To take care of the case $s\neq 0$, we will first show, by induction on $t\in\mathbb{N}$, that for any irrational numbers $\gamma_1,...,\gamma_t$ there exist rationally independent irrational numbers $\beta_1,...,\beta_r$ with the property that for each $j\in\{1,...,t\}$ there exist $b_1^{(j)},...,b_r^{(j)}\in\mathbb{Z}$ for which
\begin{equation}\label{2.IrrationalLinearCombinations}
\gamma_j=b_1^{(j)}\beta_1+\cdots+b_r^{(j)}\beta_r.
\end{equation}
When $t=1$, there is nothing to prove. Now let $t>1$ and suppose that the result holds for any $t'<t$. Note that either $\gamma_1,...,\gamma_t$ are rationally independent or there exist $c_1,...,c_t\in \mathbb{Z}$, not all zero, for which
$$c_1\gamma_1+\cdots+c_t\gamma_t=0$$
where, without loss of generality, $c_t\neq 0$. In the former case, it is easy to see that \eqref{2.IrrationalLinearCombinations} holds. In the latter case, we have
$$c_1\frac{\gamma_1}{c_t}+\cdots+c_{t-1}\frac{\gamma_{t-1}}{c_t}=-\gamma_t.$$
Thus, by applying the inductive hypothesis to the irrationals $\frac{\gamma_1}{c_t},...,\frac{\gamma_{t-1}}{c_t}$, we can obtain irrational numbers $\beta_1,...,\beta_r$ satisfying \eqref{2.IrrationalLinearCombinations} for $j\in\{1,...,t\}$. This completes the induction.\\
Now, assuming $s\neq 0$, choose rationally independent irrational numbers $\beta_1,...,\beta_r$ such that for each $j\in\{1,...,s\}$,
$$a_j=b_1^{(j)}\beta_1+\cdots+b_r^{(j)}\beta_r,$$
where $b_1^{(j)},...,b_r^{(j)}\in\mathbb{Z}$.\\
By a result by Hardy and Littlewood \cite{HardyLittlewood1914some}, the set
$$\{(n\beta_1,\dots,n\beta_r,n^2\beta_1,...,n^2\beta_r,...,n^{2s}\beta_1,...,n^{2s}\beta_r)\,|\,n\in\mathbb{N}\}$$
is dense in $\mathbb T^{(2s)r}$. Thus, we can find an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ such that for each $t\in\{1,...,r\}$ and $l\in\{1,...,2s-1\}$,
$$\lim_{k\rightarrow\infty}n_k^l\beta_t=0,$$
and if $b_{t}^{(s)}\neq 0$,
$$\lim_{k\rightarrow\infty}n_k^{2s}\beta_t=\frac{\text{sign}(b_t^{(s)})}{\sum_{l=1}^r|b_l^{(s)}|}\frac{1}{3},$$
where $\text{sign}(b_t^{(s)})=\frac{b_t^{(s)}}{|b_t^{(s)}|}$.\\
So for any $\ell\in\mathbb{N}$ and any non-principal ultrafilter $p\in\beta\mathbb{N}$ for which $$\{n_k\,|\,k\in\mathbb{N}\}\in p,$$
we have
\begin{multline*}
\plimgG{p_{\ell}}{m}{\mathbb{N}}v(m)=\plimgG{p}{m_1}{\mathbb{N}}\cdots\plimgG{p}{m_{2^\ell}}{\mathbb{N}}v(\partial(m_1,...,m_{2^\ell}))\\
=\plimgG{p}{m_1}{\mathbb{N}}\cdots\plimgG{p}{m_{2^\ell}}{\mathbb{N}}\sum_{j=1}^{2^\ell}a_sm^{2s}_j+a_0=2^\ell\plimgG{p}{m}{\mathbb{N}}\left(\sum_{t=1}^rb_t^{(s)}\beta_tm^{2s}\right)+a_0=\frac{2^{\ell}}{3}+a_0.
\end{multline*}
So, since $a_0$ is either zero or irrational, the set $\{(\frac{2^\ell}{3}+a_0)\in\mathbb T\,|\,\ell\in\mathbb{N}\}$ consists of exactly two elements of $\mathbb T$, both non-zero, completing the proof.
\end{proof}
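To see why the limit set in the last step of the above proof has exactly two elements, note that $2^\ell\equiv 2\pmod 3$ for odd $\ell$ and $2^\ell\equiv 1\pmod 3$ for even $\ell$, so that in $\mathbb T$,
$$\frac{2^\ell}{3}=\begin{cases}\frac{2}{3},&\ell\text{ odd},\\ \frac{1}{3},&\ell\text{ even}.\end{cases}$$
Thus $\{\frac{2^\ell}{3}+a_0\in\mathbb T\,|\,\ell\in\mathbb{N}\}=\{\frac{1}{3}+a_0,\frac{2}{3}+a_0\}$, and neither element is zero since $a_0$ is either zero or irrational.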
\begin{rem}\label{2.ASingleSequenceForAllL}
Using a method similar to the one used in the proof of \cref{2.LackOfRecurrenceForEvenPowers}, one can actually show that for any $\epsilon\in(0,\frac{1}{3})$ and any real even polynomial $v(x)$ with $v(0)=0$ and with at least one irrational coefficient, there exists an increasing sequence $(n_k)_{k\in\mathbb{N}}$ such that for each $\ell\in\mathbb{N}$,
$$D_\ell((n_k)_{k\in\mathbb{N}})\subseteq\{n\in\mathbb{N}\,|\,\|v(n)\|>\epsilon\}.$$
\end{rem}
\begin{proof}[Proof of \cref{2.OddPolynomialCharacterization}]
(i)$\implies$(ii): Assume that there exists a polynomial $w(x)$ with rational coefficients and $w(0)\in\mathbb{Z}$ such that $v-w$ is a non-zero odd polynomial of degree at most $2\ell-1$ (if $v-w=0$, there is nothing to prove). Let $\epsilon>0$. By \cref{2.Q[x]Recurrence}, there exists $r_1\in\mathbb{N}$ such that $\mathcal R(w,\frac{\epsilon}{2})$ is $\Delta^*_{1,r_1}$. By \cref{7.FinitisticOddDegreeRecurrence}, there
exists $r_2\in\mathbb{N}$ for which the set $\mathcal R(v-w,\frac{\epsilon}{2})$ is $\Delta_{\ell,r_2}^*$. So, since
$$\mathcal R(v,\epsilon)\supseteq\mathcal R(w,\frac{\epsilon}{2})\cap\mathcal R(v-w,\frac{\epsilon}{2}),$$
\cref{7.DeltaRSummary} implies that there exists $R\in\mathbb{N}$ for which $\mathcal R(v,\epsilon)$ is $\Delta_{\ell,R}^*$.\\
(ii)$\implies$(iii): Note that for any $r\in\mathbb{N}$, a $\Delta_{\ell,r}^*$ set is a $\Delta_\ell^*$ set.\\
(iii)$\implies$(i): Let $v_e,v_o,v_r$ be the (unique) real polynomials such that:
\begin{enumerate}[(1)]
\item $v(x)=v_e(x)+v_o(x)+v_r(x)$.
\item Each of the coefficients of $v_e$ and $v_o$ is either zero or an irrational number.
\item $v_e$ is an even polynomial.
\item $v_o$ is an odd polynomial.
\item $v_r\in\mathbb{Q}[x]$.
\end{enumerate}
Let $\ell'\geq \ell$ be such that $\deg(v)\leq2\ell'-1$. Note that it follows from (iii) that for any non-principal ultrafilter $p\in\beta\mathbb{N}$,
$$\plimgG{p_{\ell'}}{n}{\mathbb{N}}v(n)=0.$$
We also have by \cref{2.OddDegreeRecurrence} that
$$\plimgG{p_{\ell'}}{n}{\mathbb{N}}v_o(n)=0$$
and by \cref{2.Q[x]Recurrence} that
$$\plimgG{p_{\ell'}}{n}{\mathbb{N}}v_r(n)=0.$$
So for any non-principal $p\in\beta\mathbb{N}$,
$$\plimgG{p_{\ell'}}{n}{\mathbb{N}}v_e(n)=0.$$
Hence, by \cref{2.LackOfRecurrenceForEvenPowers}, $v_e=0$.\\
Furthermore, by (iii) and \cref{2.Q[x]Recurrence}, we have that for any non-principal ultrafilter $p\in\beta\mathbb{N}$,
$$0=\plimgG{p_\ell}{n}{\mathbb{N}}v(n)=\plimgG{p_\ell}{n}{\mathbb{N}}v_o(n).$$
So by \cref{2.OddPolyDegreeThm}, $\deg(v_o)$ is at most $2\ell-1$.
\end{proof}
\begin{rem}
The natural number $r$ appearing in \cref{7.FinitisticOddDegreeRecurrence}, which guarantees that $\mathcal R(v,\epsilon)$ is a $\Delta_{\ell,r}^*$ set for any polynomial of degree at most $2\ell-1$, while depending on $\epsilon$ and the degree of $v$, does not depend on $v$ itself.
The situation with \cref{2.OddPolynomialCharacterization} is different: the number $r$ appearing in (ii) not only depends on $\epsilon$ and the degree of $v$, but also on $v$ itself. \\
To see this, let $\alpha$ be an irrational number and let $\epsilon\in(0,\frac{1}{3})$. Note that by \cref{2.ASingleSequenceForAllL}, there is an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ with the property that for any $\ell\in\mathbb{N}$, $D_\ell((n_k)_{k\in\mathbb{N}})$ does not intersect the set $R(n^2\alpha,\epsilon)$.\\
Thus, since for each $n\in\mathbb{N}$ the map $x\mapsto nx$ from $\mathbb{R}$ to $\mathbb T$ is continuous, for each $\ell\in\mathbb{N}$ and each $r\geq 2^\ell$, there is an increasing $r$-element subsequence $(n_{k_j})_{j=1}^r$ of $(n_k)_{k\in\mathbb{N}}$ and a rational number $\frac{a}{b}$ close enough to $\alpha$ for which $D_\ell((n_{k_j})_{j=1}^r)\subseteq\mathbb{N}$ and
$$D_\ell((n_{k_j})_{j=1}^r)\cap\mathcal R(n^2\frac{a}{b},\epsilon)=\emptyset.$$
So $\mathcal R(n^2\frac{a}{b},\epsilon)$ is not $\Delta_{\ell,r}^*$.
\end{rem}
\section{Applications to polynomial recurrence}\label{SecHilbert}
The goal of this section is to prove (slightly amplified versions of) Theorems \ref{0.CompactHilbert}, \ref{0.AlmostMeasurableCase} and Corollaries \ref{0.CompactMeasurable}, \ref{0.SarkozyLike} and \ref{0.WeaklyMixingCase}.\\
We start with recalling the classical Koopman-von Neumann decomposition theorem \cite{KoopmanVonNeumannContinuousSpectra}.
\begin{thm}
Given a unitary operator $U:\mathcal H\rightarrow \mathcal H$ one has an orthogonal decomposition
\begin{equation}\label{4.DirectSum}
\mathcal H=\mathcal H_c\oplus \mathcal H_{\text{wm}},
\end{equation}
where the $U$ and $U^{-1}$-invariant subspaces $\mathcal H_c$ and $\mathcal H_{\text{wm}}$ are defined as follows:
\begin{equation}\label{4.Hc}
\mathcal H_c=\overline{\langle\{f\in\mathcal H\,|\,\exists \lambda\in\mathbb T,\;Uf=e^{2\pi i\lambda}f\}\rangle}
\end{equation}
and
\begin{equation}\label{4.Hwm}
\mathcal H_{\text{wm}}=\{f\in\mathcal H\,|\,\lim_{N-M\rightarrow\infty}\frac{1}{N-M}\sum_{n=M+1}^N|\langle U^nf,f\rangle|=0\}.
\end{equation}
\end{thm}
Throughout this section we will be using the fact that for any non-constant polynomial $v$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$,
\begin{equation}\label{4.PolynomialWMSet}
\mathcal H_{\text{wm}}=\{f\in\mathcal H\,|\,\forall g\in\mathcal H,\,\lim_{N-M\rightarrow\infty}\frac{1}{N-M}\sum_{n=M+1}^N|\langle U^{v(n)}f,g\rangle|=0\}.\footnote{This is a special case of \cite[Theorem 3.7]{WMPet}.}
\end{equation}
\begin{thm}[Cf. \cref{0.CompactHilbert}]\label{4.CompactHilbert}
Let $U:\mathcal H\rightarrow \mathcal H$ be a unitary operator and let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. The following are equivalent:
\begin{enumerate}[(i)]
\item $U$ has discrete spectrum (i.e. $\mathcal H$ is spanned by eigenvectors of $U$).
\item For any $f\in\mathcal H$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\,\|U^{v(n)}f-f\|_{\mathcal H}<\epsilon\}$$
is $\Delta_{\ell,0}^*$.
\item For any $f\in\mathcal H$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\,\|U^{v(n)}f-f\|_{\mathcal H}<\epsilon\}$$
is $\Delta_\ell^*$.
\end{enumerate}
\end{thm}
\begin{proof}
(i)$\implies$(ii): Note that $U$ has discrete spectrum if and only if $\mathcal H=\mathcal H_c$. So, by \cref{7.FinitisticOddDegreeRecurrence}, we have that for any $\epsilon>0$, there exists an $r\in\mathbb{N}$ such that for any $f\in\mathcal H$ and $\lambda\in \mathbb T$ with the property that $Uf=e^{2\pi i\lambda}f$, the set
$$\{n\in\mathbb{N}\,|\,\|U^{v(n)}f-f\|_{\mathcal H}<\epsilon\}=\{n\in\mathbb{N}\,|\,\|f\|_{\mathcal H}|e^{2\pi iv(n)\lambda}-1|<\epsilon\}$$
is $\Delta_{\ell,0}^*$. Since $\Delta_{\ell,0}^*$ sets have the finite intersection property, (ii) follows.\\
(ii)$\implies$(iii): Every $\Delta_{\ell,0}^*$ set is a $\Delta_\ell^*$ set by definition.\\
(iii)$\implies$(i): Suppose, by way of contradiction, that $U$ does not have discrete spectrum. Choose $f\in\mathcal H_{\text{wm}}$ such that $f\neq0$. Note that if $D$ is a $\Delta_{\ell}^*$ set, then, by \cref{1.Delta*IsSyndetic}, $D$ is syndetic. Thus, we have by (iii) that
$$\limsup_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^N|\langle U^{v(n)}f,f\rangle|>0,$$
which contradicts \eqref{4.PolynomialWMSet}, completing the proof.
\end{proof}
\begin{cor}[Cf. \cref{0.CompactMeasurable}]\label{4.CompactMeasurePreserving}
Let $(X,\mathcal A,\mu, T)$ be an ergodic invertible probability measure preserving system. The following are equivalent:
\begin{enumerate}[(i)]
\item $(X,\mathcal A,\mu, T)$ is isomorphic to an (ergodic) translation on a compact abelian group.
\item For any odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$, any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_{\ell,0}^*$.
\item For any odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$, any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_\ell^*$.
\item There exists a non-zero odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$ such that for any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_{\ell,0}^*$.
\item There exists a non-zero odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$ such that for any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\{n\in\mathbb{N}\,|\, \mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_\ell^*$.
\end{enumerate}
\end{cor}
\begin{proof}
The equivalence of (i), (ii), (iii), (iv) and (v) follows by applying \cref{4.CompactHilbert} to the unitary operator $U_T$ induced by $T$ on $L^2(\mu)$ via the formula
$$U_Tf=f\circ T.$$
Indeed, all we need to note is that an ergodic invertible probability measure preserving system $(X,\mathcal A,\mu,T)$ is isomorphic to a translation on a compact abelian group if and only if $L^2(\mu)=\mathcal H_c$ (see \cite{neumann1932operatorenmethode} or \cite[Theorem 3.6] {waltersIntroduction}).
\end{proof}
Given $\ell\in\mathbb{N}$, we will say that a set $D\subseteq \mathbb{N}$ is \textbf{A-$\Delta_{\ell,0}^*$}\index{A-$\Delta_{\ell,0}^*$} (or \textbf{almost $\Delta_{\ell,0}^*$}) if there exists a set $E\subseteq \mathbb{N}$ with $d^*(E)=0$,
such that $D\cup E$ is $\Delta_{\ell,0}^*$.
\begin{thm}[Cf. \cref{0.AlmostMeasurableCase}]\label{4.AlmostMeasurableCase}
Let $(X,\mathcal A,\mu,T)$ be an invertible probability measure preserving system and let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be an odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. For any $A\in\mathcal A$ and any $\epsilon>0$,
\begin{equation}\label{4.LargeIntersectionsInLemma}
\mathcal R_A(v,\epsilon)=\{n\in\mathbb{N}\,|\,\mu(A\cap T^{-v(n)}A)>\mu^2(A)-\epsilon\}
\end{equation}
is A-$\Delta_{\ell,0}^*$.
\end{thm}
\begin{proof}
It suffices to show that for each $f\in L^2(\mu)$ and any $\epsilon>0$,
\begin{equation}\label{4.LargeIntersectionProof}
\{n\in\mathbb{N}\,|\,\langle U_T^{v(n)}f, f\rangle>\left(\int_X f\text{d}\mu\right)^2-\epsilon\}
\end{equation}
is A-$\Delta_{\ell,0}^*$.\\
Let $f\in L^2(\mu)$, $f_c\in\mathcal H_c$ and $f_{\text{wm}}\in\mathcal H_{\text{wm}}$ be such that $f=f_c+f_{\text{wm}}$. Since $\mathcal H_c$ and $\mathcal H_{\text{wm}}$ are orthogonal, $U_T$- and $U_T^{-1}$-invariant subspaces of $L^2(\mu)$, we have that $\langle U_T^{v(n)}f,f\rangle =\langle U_T^{v(n)}f_c,f_c\rangle +\langle U_T^{v(n)}f_{\text{wm}},f_{\text{wm}}\rangle$. Now let $\epsilon>0$. Note that by \cref{4.CompactHilbert}, the set
$$\{n\in\mathbb{N}\,|\,|\langle U_T^{v(n)}f_c,f_c\rangle -\|f_c\|_{L^2}^2|<\frac{\epsilon}{2}\}$$
is $\Delta_{\ell,0}^*$. Applying the Cauchy-Schwarz inequality, we get
$$\|f_c\|_{L^2}^2=\langle f_c,f_c\rangle\langle \mathbbm 1_X,\mathbbm 1_X\rangle\geq |\langle f_c,\mathbbm 1_X\rangle|^2=\left(\int_X f_c\text{d}\mu\right)^2=\left(\int_X f\text{d}\mu\right)^2,$$
which implies that the set
\begin{equation*}\label{4.CompactSetInequality}
D=\{n\in\mathbb{N}\,|\,\langle U_T^{v(n)}f_c,f_c\rangle >\left(\int_X f\text{d}\mu\right)^2-\frac{\epsilon}{2}\}
\end{equation*}
is $\Delta_{\ell,0}^*$.\\
On the other hand, it follows from \eqref{4.PolynomialWMSet} that the set
\begin{equation*}\label{4.WeakMixingComponent}
E=\{n\in\mathbb{N}\,|\,|\langle U_T^{v(n)}f_{\text{wm}},f_{\text{wm}}\rangle |\geq \frac{\epsilon}{2}\}
\end{equation*}
has zero upper Banach density. So, since for any $n\in D\setminus E$,
$$\langle U_T^{v(n)}f,f\rangle>\left( \int_X f\text{d}\mu\right)^2-\epsilon,$$
we have that
$$D\subseteq\{n\in\mathbb{N}\,|\,\langle U_T^{v(n)}f,f\rangle > \left(\int_Xf\text{d}\mu\right)^2-\epsilon\}\cup E.$$
Since $D$ is $\Delta_{\ell,0}^*$, the set in \eqref{4.LargeIntersectionProof} is A-$\Delta_{\ell,0}^*$. We are done.
\end{proof}
We remark that the quantity $\mu^2(A)$ in \eqref{4.LargeIntersectionsInLemma} is optimal (consider any strongly mixing system).\\
Similarly to the situation with the sets $\mathcal R(v,\epsilon)$ which was discussed in Subsection 3.2, there is an IP-flavored result dealing with the sets $\mathcal R_A(v,\epsilon)$. We first need to introduce some terminology. A set $E\subseteq \mathbb{N}$ is called an \textbf{IP$_0$\index{IP$_0$} set} if it is an IP$_r$ set for each $r\in\mathbb{N}$. The set $E\subseteq \mathbb{N}$ is an \textbf{IP$_0^*$\index{IP$_0^*$} set} if it has a non-trivial intersection with any IP$_0$ set. Finally, a set $D\subseteq \mathbb{N}$ is called \textbf{A-IP$_0^*$}\index{A-IP$_0^*$} (or almost IP$_0^*$) if there exists a set $E\subseteq \mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is IP$_0^*$.
\begin{thm}[Cf. \cite{AlmostIPBerLeib}, Theorem 1.8, case $k=1$]\label{3.AlmostIPRecurrence}
Let $(X,\mathcal A,\mu,T)$ be an invertible probability measure preserving system and let $v(x)=\sum_{j=1}^Na_jx^j$ be a polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$. Then for any $A\in\mathcal A$ and any $\epsilon>0$, the set
$$\mathcal R_A(v,\epsilon)=\{n\in\mathbb{N}\,|\,\mu(A\cap T^{-v(n)}A)>\mu^2(A)-\epsilon\}$$
is A-IP$^*_0$.
\end{thm}
We will show in Section 8 below (see \cref{3.FirstImportantApplication}) that for each $\ell\in\mathbb{N}$, there exists an $\text{A-IP}_0^*$ set which is not $\text{A-}\Delta_{\ell,0}^*$. Thus, \cref{4.AlmostMeasurableCase} provides new information about sets of the form $\mathcal R_A(v,\epsilon)$.\\
We will give now two corollaries of \cref{4.AlmostMeasurableCase}. The first one is a variant of the Furstenberg-S{\'a}rk{\" o}zy theorem (see \cite{sarkozy1978difference} and \cite[Theorem 3.16]{FBook}). The second provides yet another characterization of weakly mixing systems.
\begin{cor}[Cf. \cref{0.SarkozyLike}]\label{4.SarkozyREsult}
Let $E\subseteq\mathbb{N}$ and assume that $d^*(E)>0$. Then for any odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$, the set
$$\{n\in\mathbb{N}\,|\,v(n)\in E-E\}$$
is A-$\Delta_{\ell,0}^*$.
\end{cor}
\begin{proof}
By Furstenberg's correspondence principle (see \cite[Theorem 1.1]{BerErgodicTheory1985}), there exists an invertible probability measure preserving system $(X,\mathcal A,\mu, T)$ and a set $A\in\mathcal A$ with $\mu(A)=d^*(E)$ such that for all $n\in\mathbb{Z}$,
$$d^*(E\cap (E-n))\geq \mu(A\cap T^{-n}A).$$
\cref{4.AlmostMeasurableCase} implies that the set
$$D=\{n\in\mathbb{N}\,|\,d^*(E\cap (E-v(n)))>0\}$$
is A-$\Delta_{\ell,0}^*$. Since
$$D\subseteq\{n\in\mathbb{N}\,|\, v(n)\in E-E\},$$
we are done.
\end{proof}
\begin{rem}
Actually, Furstenberg's correspondence principle allows us to get a finer result. Namely, given any $\epsilon>0$, any odd polynomial $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$ and any set $E\subseteq \mathbb{N}$ with $d^*(E)>0$,
$$\{n\in\mathbb{N}\,|\,d^*(E\cap (E-v(n)))>(d^*(E))^2-\epsilon\}$$
is A-$\Delta_{\ell,0}^*$.
\end{rem}
\begin{cor}[Cf. \cref{0.WeaklyMixingCase}]\label{4.WeaklyMixingCase}
Let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd polynomial with $v(\mathbb{Z})\subseteq\mathbb{Z}$ and let $(X,\mathcal A,\mu, T)$ be an invertible probability measure preserving system. The following are equivalent:
\begin{enumerate}[(i)]
\item $T$ is weakly mixing.
\item For any $A,B\in\mathcal A$ and any $\epsilon>0$,
$$\mathcal R_{A,B}(v,\epsilon)=\{n\in\mathbb{N}\,|\,|\mu(A\cap T^{-v(n)}B)-\mu(A)\mu(B)|<\epsilon\}$$
is A-$\Delta_{\ell,0}^*$.
\item For any $A,B\in\mathcal A$ and any $\epsilon>0$,
$$\mathcal R_{A,B}(v,\epsilon)=\{n\in\mathbb{N}\,|\,|\mu(A\cap T^{-v(n)}B)-\mu(A)\mu(B)|<\epsilon\}$$
is A-$\Delta_{\ell}^*$.
\end{enumerate}
\end{cor}
\begin{proof}
(i)$\implies$(ii): Since $(X,\mathcal A,\mu, T)$ is weakly mixing, we have that
$$L^2(\mu)=\mathbb C \mathbbm 1_X\oplus \mathcal H_{\text{wm}}.$$
Hence, it follows from \eqref{4.PolynomialWMSet} that for any $\epsilon>0$ and any $f,g\in L^2(\mu)$, the set
$$E=\{n\in\mathbb{N}\,|\,|\langle U_T^{v(n)}f,g\rangle-\int_Xf\text{d}\mu\int_Xg\text{d}\mu|\geq \epsilon\}$$
satisfies $d^*(E)=0$, which, in turn, implies that
$$\{n\in\mathbb{N}\,|\,|\langle U_T^{v(n)}f,g\rangle-\int_Xf\text{d}\mu\int_Xg\text{d}\mu|< \epsilon\}$$
is A-$\Delta_{\ell,0}^*$.\\
(ii)$\implies$(iii): Every A-$\Delta_{\ell,0}^*$ set is an A-$\Delta_\ell^*$ set.\\
(iii)$\implies$(i): Suppose that $(X,\mathcal A,\mu,T)$ is not weakly mixing. It suffices to show that for some $A\in\mathcal A$ and some $\epsilon>0$, the set
$$E_{\epsilon}(A)=\{n\in\mathbb{N}\,|\,|\mu(A\cap T^{-v(n)}A)-\mu^2(A)|<\epsilon\}$$
is not A-$\Delta_\ell^*$.\\
Since $T$ is not weakly mixing, there exists a non-zero function $f\in\mathcal H_c\setminus \mathbb C\mathbbm 1_X$ and $a_1,a_2,b_1,b_2\in\mathbb{R}$ for which the set
$$A=\{x\in X\,|\,a_1\leq\text{Re}(f(x))<a_2\text{ and }b_1\leq\text{Im}(f(x))<b_2\}$$
satisfies $\mu(A)\in(0,1)$.\\
By \cref{4.CompactHilbert}, for any $\epsilon>0$ the set
$$D_\epsilon=\{n\in\mathbb{N}\,|\,\mu(A\cap T^{-v(n)}A)>\mu(A)-\epsilon\}$$
is $\Delta_{\ell}^*$. Since $\mu(A)>\mu^2(A)$, we can find an $\epsilon>0$ such that for any $n\in D_\epsilon$,
$$\mu(A\cap T^{-v(n)}A)>\mu^2(A)+\epsilon.$$
It follows that given $E\subseteq\mathbb{N}$ with $d^*(E)=0$, $(E_{\epsilon}(A)\cup E)\cap D_\epsilon=E\cap D_\epsilon$ and hence
$$d^*((E_{\epsilon}(A)\cup E)\cap D_\epsilon)=0.$$
By noting that the intersection of any two $\Delta_\ell^*$ sets is again a $\Delta_\ell^*$ set and that for any $\Delta_\ell^*$ set $D$, $d^*(D)>0$, we conclude that $E_\epsilon(A)\cup E$ is not $\Delta_\ell^*$. Since $E$ was arbitrary, $E_\epsilon(A)$ is not A-$\Delta_\ell^*$, completing the proof.
\end{proof}
\begin{rem}
In the next section we will show that in the statement of \cref{4.WeaklyMixingCase}, A-$\Delta_{\ell,0}^*$ and A-$\Delta_\ell^*$ can not be replaced by $\Delta_\ell^*$ (see \cref{5.MainResultOfSection7} below).
\end{rem}
\section{\cref{0.WeaklyMixingCase} cannot be improved}\label{SecExample}
Our goal in this section is to prove the following result:
\begin{prop}\label{5.ExampleProposition}
For any odd polynomial $v(x)$ with $v(\mathbb{Z})\subseteq\mathbb{Z}$, there exist a weakly mixing invertible probability measure preserving system $(X,\mathcal A,\mu, T)$, a set $A\in\mathcal A$ with $\mu(A)\in(0,1)$ and a non-principal ultrafilter $p\in\beta\mathbb{N}$ such that for any $\ell\in\mathbb{N}$,
\begin{equation}\label{5.MainObjective}
\plimgG{p_\ell}{n}{\mathbb{N}}\mu(A\cap T^{-v(n)}A)=\mu(A).
\end{equation}
\end{prop}
\begin{rem}\label{5.MainResultOfSection7}
Let the invertible probability measure preserving system $(X,\mathcal A, \mu, T)$ and the set $A\in\mathcal A$ be as in the statement of \cref{5.ExampleProposition}. Then for any small enough $\epsilon>0$,
$$\mathcal R_A(v,\epsilon)=\{n\in\mathbb{N}\,|\,|\mu(A\cap T^{-v(n)}A)-\mu^2(A)|<\epsilon\}$$
is not $\Delta_{\ell}^*$ for any $\ell\in\mathbb{N}$. In particular, in the statement of \cref{4.WeaklyMixingCase}, A-$\Delta_\ell^*$ and A-$\Delta_{\ell,0}^*$ can not be replaced by $\Delta_{\ell}^*$ (or $\Delta_{\ell,0}^*$).
\end{rem}
In the proof of \cref{5.ExampleProposition}, we will be using the fact that for any continuous symmetric probability measure $\gamma$ on $\mathbb T$,\footnote{
A Borel probability measure $\gamma$ on $\mathbb T$ is called symmetric if for any $n\in\mathbb{Z}$
$$\int_\mathbb T e^{2\pi i nx}\text{d}\gamma(x)=\int_\mathbb T e^{-2\pi inx}\text{d}\gamma(x).$$
}
there exists a weakly mixing invertible probability measure preserving system $(X,\mathcal A, \mu, T)$ called a Gaussian system. Such a system has the property that for some $f\in L^2(\mu)$:
\begin{enumerate}[(1)]
\item For any Borel-measurable $B\subseteq \mathbb{R}$,
\begin{equation}\label{5.GaussianDistributioon}
\mu(f^{-1}(B))=\frac{1}{\sqrt{2\pi}}\int_Be^{-\frac{x^2}{2}}\text{d}x
\end{equation}
(i.e. $f$ has a Gaussian distribution with mean 0 and variance 1).
\item For any $n\in\mathbb{Z}$,
\begin{equation}\label{5.InnerProductOff}
\langle U_T^nf,f\rangle=\int_\mathbb T e^{2\pi inx}\text d\gamma(x).
\end{equation}
\end{enumerate}
Note that for such a function $f$, $\|f\|_{L^2}=1$.
For information on Gaussian systems see for example \cite[Chapters 8.2 and 14]{cornfeld1982ergodic}.\\
\begin{proof}[Proof of \cref{5.ExampleProposition}]
Let $v(x)$ be an odd polynomial with $v(\mathbb{Z})\subseteq \mathbb{Z}$. It is not hard to check that $v(x)\in\mathbb{Q}[x]$ and hence there exists an $m\in\mathbb{N}$ for which $mv(x)\in\mathbb{Z}[x]$. Suppose that \cref{5.ExampleProposition} holds for the odd polynomial $mv(x)$. Then there exist a weakly mixing invertible probability measure preserving system $(X,\mathcal A,\mu,T)$ and a set $A\in\mathcal A$ satisfying \eqref{5.MainObjective} with $mv$ in place of $v$. By considering the weakly mixing transformation $T^m$, one sees that \cref{5.ExampleProposition} also holds for $v(x)$. Thus, without loss of generality, we can assume that $v(x)\in\mathbb{Z}[x]$.
For convenience, we will prove \cref{5.ExampleProposition} for $v(x)=x^3$. The proof for a general odd polynomial with integer coefficients can be done similarly.\\
Note that it is enough to show that there exists a continuous symmetric probability measure $\gamma$ on $\mathbb T$ with the property that for some non-principal ultrafilter $p\in\beta\mathbb{N}$ and any $\ell\in\mathbb{N}$,
\begin{equation}\label{5.ConditionWannaShow}
\plimgG{p_\ell}{n}{\mathbb{N}}\int_\mathbb T e^{2\pi i n^3 x}\text d\gamma(x)=1.
\end{equation}
Indeed, if such a probability measure $\gamma$ exists, we would be able to find a Gaussian system $(X,\mathcal A,\mu, T)$ and a function $f\in L^2(\mu)$ satisfying \eqref{5.GaussianDistributioon} and \eqref{5.InnerProductOff}. For such a function and each $\ell\in\mathbb{N}$ we will have that
$$\plimgG{p_\ell}{n}{\mathbb{N}}\langle U_T^{n^3}f,f\rangle=\plimgG{p_\ell}{n}{\mathbb{N}}\int_\mathbb T e^{2\pi i n^3 x}\text d\gamma(x)=1=\|f\|_{L^2}^2.$$
So,
$$\plimgG{p_\ell}{n}{\mathbb{N}}U_T^{n^3}f=f$$
in the norm-topology of $L^2(\mu)$.
Thus, for $A=f^{-1}([-1,1])$, we will have
$$\plimgG{p_\ell}{n}{\mathbb{N}}\mu(A\cap T^{-n^3}A)=\mu(A),$$
which proves \eqref{5.MainObjective}.\\
Note also that, in order to achieve our goal, it is enough to find a not necessarily symmetric continuous Borel probability measure $\rho$ on $\mathbb T$ such that for some non-principal ultrafilter $p\in\beta\mathbb{N}$ and any $\ell\in\mathbb{N}$,
\begin{equation}\label{5.TrueConditionToFind}
\plimgG{p_\ell}{n}{\mathbb{N}}\int_\mathbb T e^{2\pi i n^3 x}\text d\rho(x)=1.
\end{equation}
Indeed, let $\rho$ be such a measure. Define $\tilde\rho$ to be the unique probability measure satisfying
$$\int_\mathbb T e^{2\pi i nx}\text{d}\tilde\rho(x)=\int_{\mathbb T}e^{-2\pi i nx}\text{d}\rho(x)$$
for each $n\in\mathbb{Z}$. Then, the measure $\gamma=\frac{\rho+\tilde\rho}{2}$ is a symmetric continuous Borel probability measure on $\mathbb T$ for which \eqref{5.ConditionWannaShow} holds.\\
Let $\mathcal C=\{0,1\}^\mathbb{N}$ be endowed with the product topology and let $\nu$ be the $(\frac{1}{2},\frac{1}{2})$-Bernoulli product measure on $\mathcal C$. We will now introduce a continuous function $F:\mathcal C\rightarrow \mathbb T$ such that the measure $\rho=\nu\circ F^{-1}$ is a continuous Borel probability measure on $\mathbb T$ satisfying \eqref{5.TrueConditionToFind}.\\
For each $k\in\mathbb{N}$, let $n_k=2^{6^k}$ and let $F:\mathcal C\rightarrow \mathbb T$ be defined by
$$F(\omega)=\sum_{s\in\mathbb{N}}\frac{\omega(s)}{n_s^3},\text{ }\omega\in\mathcal C.$$
Note that $F$ is continuous, injective and for each $\omega\in\mathcal C$,
\begin{multline*}
\limsup_{k\rightarrow\infty}\|n^3_kF(\omega)\|=\limsup_{k\rightarrow\infty}\|n^3_k\sum_{s\in\mathbb{N}}\frac{\omega(s)}{n^3_s}\|=\limsup_{k\rightarrow\infty}\|\sum_{s\in\mathbb{N}}\frac{n^3_k}{n^3_s}\omega(s)\| \\
=\limsup_{k\rightarrow\infty}\|\sum_{s=1}^{k-1}\frac{2^{3\cdot 6^k}}{2^{3\cdot6^s}}\omega(s)+\omega(k)+\sum_{s=k+1}^\infty\frac{2^{3\cdot6^k}}{2^{3\cdot6^s}}\omega(s)\|\\
\leq\limsup_{k\rightarrow\infty}\left(\|\sum_{s=1}^{k-1}2^{3(6^k-6^s)}\omega(s)\|+\|\omega(k)\|+\|\sum_{s=k+1}^\infty\frac{\omega(s)}{2^{3(6^s-6^k)}}\|\right).
\end{multline*}
So, since
$$\sum_{s=1}^{k-1}2^{3(6^k-6^s)}\omega(s)\in \mathbb{Z}$$ and
$$|\sum_{s=k+1}^\infty\frac{\omega(s)}{2^{3(6^s-6^k)}}|\leq \frac{1}{2^{6^k}}\sum_{s\in\mathbb{N}}\frac{1}{2^s}<\frac{1}{2^{6^k}},$$
we have that $\lim_{k\rightarrow\infty}\|n_k^3F(\omega)\|=0$.\\
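As a sanity check (not part of the proof), the estimate $\|n_k^3F(\omega)\|<2^{-6^k}$ can be verified in exact rational arithmetic for a point $\omega\in\mathcal C$ with finitely many nonzero coordinates. The Python sketch below (all names are ours) uses a truncation of $F$:

```python
from fractions import Fraction
from math import floor

def n(k):
    # n_k = 2^(6^k), the sequence used in the construction of F
    return 2 ** (6 ** k)

def F_trunc(omega):
    # F(omega) = sum_{s>=1} omega(s)/n_s^3, truncated to the given coordinates
    return sum(Fraction(bit, n(s) ** 3) for s, bit in enumerate(omega, start=1))

def dist_to_int(x):
    # ||x|| = distance from the rational x to the nearest integer, exact
    f = x - floor(x)
    return min(f, 1 - f)

omega = (1, 0, 1)          # a point of C with omega(s) = 0 for s > 3
x = F_trunc(omega)
for k in (1, 2, 3):
    # the computation above shows ||n_k^3 F(omega)|| < 2^(-6^k)
    assert dist_to_int(n(k) ** 3 * x) < Fraction(1, 2 ** (6 ** k))
```

Since the tail $\sum_{s>k}\omega(s)/n_s^3$ is a finite sum here, `Fraction` keeps every computation exact.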
We also have that for any $M\in\mathbb{Z}$ and any $\omega\in\mathcal C$,
\begin{multline*}
\limsup_{k\rightarrow\infty}\|Mn^2_kF(\omega)\|=\limsup_{k\rightarrow\infty}\|M\sum_{s\in\mathbb{N}}\frac{2^{2\cdot 6^k}}{2^{3\cdot6^s}}\omega(s)\| \\
\leq|M|\limsup_{k\rightarrow\infty}\left(\|\sum_{s=1}^{k-1}2^{2\cdot6^k-3\cdot 6^s}\omega(s)\|+\|\frac{\omega(k)}{2^{6^k}}\|+\|\sum_{s=k+1}^\infty\frac{\omega(s)}{2^{3\cdot 6^s- 2\cdot 6^k}}\|\right)=0,
\end{multline*}
which implies that $\lim_{k\rightarrow \infty}\|Mn_k^2F(\omega)\|=0$.\\
Similarly, for any $M\in\mathbb{Z}$ and any $\omega \in \mathcal C$,
$$\lim_{k\rightarrow\infty}\|Mn_kF(\omega)\|=0.$$
Thus, for any continuous function $g:\mathbb T\rightarrow\mathbb C$ and any $M_1,M_2\in\mathbb{Z}$,
\begin{equation}\label{5.SequentialLimit}
\lim_{k\rightarrow\infty}\int_\mathcal C g(F(\omega))e^{2\pi i(n_k^3+M_1n_k^2+M_2n_k)F(\omega)}\text{d}\nu(\omega)=\int_\mathcal C g(F(\omega))\text{d}\nu(\omega).
\end{equation}
Now let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter with $\{n_k\,|\,k\in\mathbb{N}\}\in p$. We will show that for any $\ell\in\mathbb{N}$, any $M_1,M_2\in\mathbb{Z}$ and any continuous function $g:\mathbb T\rightarrow \mathbb C$, we have
\begin{equation}\label{5.StrongerResultInduction}
\plimgG{p_\ell}{n}{\mathbb{N}}\int_\mathcal C g(F(\omega))e^{2\pi i(n^3+M_1n^2+M_2n)F(\omega)}\text{d}\nu(\omega)=\int_\mathcal C g(F(\omega))\text{d}\nu(\omega).
\end{equation}
We proceed by induction on $\ell\in\mathbb{N}\cup\{0\}$. When $\ell=0$, $p_0=p$ and, by \eqref{5.SequentialLimit}, we have that
$$\plimgG{p}{n}{\mathbb{N}}\int_{\mathcal C}g(F(\omega))e^{2\pi i(n^3+M_1n^2+M_2n)F(\omega)}\text{d}\nu(\omega)=\int_\mathcal Cg(F(\omega))\text{d}\nu(\omega).$$
Next, fix $\ell\in\mathbb{N}$ and suppose that \eqref{5.StrongerResultInduction} holds for $\ell'<\ell$. Then, by \cref{1.LemaDeltaEquality}, we have that the left-hand side of \eqref{5.StrongerResultInduction} equals the expression
\begin{equation*}\label{5.FirstStepInductiveStep}
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\int_\mathcal C g(F(\omega))e^{2\pi i((n-m)^3+M_1(n-m)^2+M_2(n-m))F(\omega)}\text{d}\nu(\omega),
\end{equation*}
which in turn equals
\begin{multline}\label{5.SecondStepInductiveStep}
\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\plimgG{p_{\ell-1}}{n}{\mathbb{N}}\int_\mathcal C g(F(\omega))e^{-2\pi i(m^3-M_1m^2+M_2m)F(\omega)}\\
e^{2\pi i(n^3-(3m-M_1)n^2+(3m^2-2mM_1+M_2)n)F(\omega)}\text{d}\nu(\omega).
\end{multline}
By applying the inductive hypothesis to the function
$$G_m(x)=g(x)e^{-2\pi i(m^3-M_1m^2+M_2m)x},$$
we have that \eqref{5.SecondStepInductiveStep} equals
$$\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\int_\mathcal C G_m(F(\omega))\text{d}\nu(\omega).$$
It follows that
\begin{multline*}
\plimgG{p_\ell}{n}{\mathbb{N}}\int_\mathcal C g(F(\omega))e^{2\pi i(n^3+M_1n^2+M_2n)F(\omega)}\text{d}\nu(\omega)\\
=\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\int_\mathcal C g(F(\omega))e^{-2\pi i(m^3-M_1m^2+M_2m)F(\omega)}\text{d}\nu(\omega)\\
=\plimgG{p_{\ell-1}}{m}{\mathbb{N}}\overline{\int_\mathcal C \overline{g(F(\omega))}e^{2\pi i(m^3-M_1m^2+M_2m)F(\omega)}\text{d}\nu(\omega)}
=\int_\mathcal C g(F(\omega))\text{d}\nu(\omega),
\end{multline*}
completing the induction.\\
Finally, since $\rho=\nu\circ F^{-1}$, we have that for any $\ell\in\mathbb{N}$,
$$\plimgG{p_\ell}{m}{\mathbb{N}}\int_\mathbb T e^{2\pi i m^3x}\text{d}\rho(x)=\plimgG{p_\ell}{m}{\mathbb{N}}\int_\mathcal C e^{2\pi im^3F(\omega)}\text{d}\nu(\omega)=1,$$
showing that \eqref{5.TrueConditionToFind} holds for any non-principal ultrafilter $p\in\beta\mathbb{N}$ for which
$$\{n_k\,|\,k\in\mathbb{N}\}\in p.$$
\end{proof}
\section{Hierarchy of notions of largeness}\label{SectionNotionsOfLargness}
In this section we will review the relations between various notions of largeness which played an instrumental role in the formulations and proofs of the results concerning the sets $\mathcal R(v,\epsilon)$ and $\mathcal R_A(v,\epsilon)$.
In particular, we will supply the proofs of the results mentioned in Subsection 3.2 and Section 6 which juxtapose the $\Delta^*$-flavored theorems \ref{7.FinitisticOddDegreeRecurrence} and \ref{4.AlmostMeasurableCase} with the IP$^*$-flavored theorems \ref{3.IP0Returns} and \ref{3.AlmostIPRecurrence} (see \cref{3.FirstImportantApplication} and \cref{3.SecondImportantApplication} below).
\subsection{Some classes of subsets of $\mathbb{N}$}
In this subsection we review the definitions and properties of the families of sets (such as, say, $\Delta_{\ell,r}$) which appeared earlier in this paper and which were employed in the formulations and proofs of various results dealing with diophantine approximation and recurrence. The material
presented in this subsection will facilitate the discussion in Subsection 8.2, where the relations between these families of sets are discussed and summarized.\\
The following table presents in a compact form the pertinent definitions.\\
\begin{center}
\begin{tabular}{| M{2cm} | M{2.5cm} |m{7cm} |}
\hline
\textbf{Symbol} & \textbf{Parameters} & \textbf{Each member contains a set of the form...} \\
\hline\hline
$\Delta_{\ell,r}$ & $\ell\in\mathbb{N}$, $r\geq 2^\ell$ & $D_\ell((n_k)_{k=1}^r)$, where $(n_k)_{k=1}^r$ is an $r$-element sequence in $\mathbb{Z}$. \\
\hline
$\Delta_{\ell,0}$ & $\ell\in\mathbb{N}$ & $D_\ell((n_k)_{k=1}^r)$ for each $r\geq 2^\ell$. Here, for each $r\geq 2^\ell$, $(n_k)_{k=1}^r$ is an $r$-element sequence in $\mathbb{Z}$. \\
\hline
$\Delta_{\ell}$ & $\ell\in\mathbb{N}$ & $D_\ell((n_k)_{k\in\mathbb{N}})$, where $(n_k)_{k\in\mathbb{N}}$ is an increasing sequence in $\mathbb{N}$. \\
\hline
$\Delta_{\ell,0}$-rich\index{$\Delta_{\ell,0}$-rich} & $\ell\in\mathbb{N}$ & A set $D\subseteq\mathbb{N}$ such that for any $E\subseteq \mathbb{N}$ with $d^*(E)=0$, $D\setminus E$ is a $\Delta_{\ell,0}$ set \\
\hline
$\Delta_\ell$-rich\index{$\Delta_\ell$-rich} & $\ell\in\mathbb{N}$ & A set $D\subseteq\mathbb{N}$ such that for any $E\subseteq \mathbb{N}$ with $d^*(E)=0$, $D\setminus E$ is a $\Delta_\ell$ set \\
\hline
IP$_r$& $r\in\mathbb{N}$ & $\text{FS}((n_k)_{k=1}^r)$, where $(n_k)_{k=1}^r$ is an $r$-element sequence in $\mathbb{N}$.\\
\hline
IP$_0$& -- & $\text{FS}((n_k)_{k=1}^r)$ for each $r\in\mathbb{N}$. Here, for each $r\in\mathbb{N}$, $(n_k)_{k=1}^r$ is an $r$-element sequence in $\mathbb{N}$.\\
\hline
IP\index{IP}& -- & $\text{FS}((n_k)_{k\in\mathbb{N}})$ for some increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$.\\
\hline
IP$_0$-rich\index{IP$_0$-rich}& -- & A set $\Gamma\subseteq \mathbb{N}$ such that for any $E\subseteq\mathbb{N}$ with $d^*(E)=0$, $\Gamma\setminus E$ is an IP$_0$ set.\\
\hline
IP-rich\index{IP-rich}& -- & A set $\Gamma\subseteq \mathbb{N}$ such that for any $E\subseteq\mathbb{N}$ with $d^*(E)=0$, $\Gamma\setminus E$ is an IP set.\\
\hline
\end{tabular}
\end{center}
If $\Phi$ is a family of subsets of $\mathbb{N}$, $\Phi^*$ will usually stand for the family of subsets of $\mathbb{N}$ having a non-trivial intersection with any member of $\Phi$. For instance, IP$^*$ denotes the family of all subsets of $\mathbb{N}$ having a non-trivial intersection with any IP set. When dealing with the families $(\Delta_\ell\text{-rich})^*$, $(\Delta_{\ell,0}\text{-rich})^*$, (IP-rich)$^*$ and (IP$_0$-rich)$^*$ we will find it more convenient to denote these families, correspondingly, by A-$\Delta_\ell^*$, A-$\Delta_{\ell,0}^*$, A-IP$^*$ and A-IP$_0^*$ (where ``A'' stands for ``almost''). The following lemma provides a useful characterization for each of these families.
\begin{lem}\label{3.AlmostAnd*}
Let $D\subseteq \mathbb{N}$ and let $\ell\in\mathbb{N}$. Then:
\begin{enumerate}[(i)]
\item $D$ is almost $\Delta_\ell^*$ (or A-$\Delta_\ell^*$) if and only if there exists a set $E\subseteq\mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is a $\Delta_\ell^*$ set.\index{A-$\Delta_\ell^*$}
\item $D$ is almost $\Delta_{\ell,0}^*$ (or A-$\Delta_{\ell,0}^*$) if and only if there exists a set $E\subseteq\mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is a $\Delta_{\ell,0}^*$ set.\index{A-$\Delta_{\ell,0}^*$}
\item (See \cite{mccutcheonDsetRich}) $D$ is almost IP$^*$ (or A-IP$^*$) if and only if there exists a set $E\subseteq\mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is an IP$^*$ set.\index{A-IP$^*$}
\item $D$ is almost IP$_0^*$ (or A-IP$_0^*$) if and only if there exists a set $E\subseteq\mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is an IP$_0^*$ set. \index{A-IP$_0^*$}
\end{enumerate}
\end{lem}
\begin{proof}
We will only prove (i). The proofs of (ii), (iii) and (iv) are similar.\\
First, suppose that there exists a set $E\subseteq \mathbb{N}$ with $d^*(E)=0$ such that $D\cup E$ is $\Delta_\ell^*$. Let $S$ be a $\Delta_\ell$-rich set. Since $S\setminus E$ is a $\Delta_\ell$ set, we have that
$$\emptyset\neq (D\cup E)\cap (S\setminus E)\subseteq D\cap S.$$
This shows that $D$ has a non-trivial intersection with every $\Delta_\ell$-rich set.\\
For the other direction, suppose that for any $E\subseteq \mathbb{N}$ with $d^*(E)=0$, $D\cup E$ is not $\Delta_\ell^*$. Since for any $E\subseteq \mathbb{N}$, $(\mathbb{N}\setminus D)\setminus E=\mathbb{N}\setminus(D\cup E)$, we have by our assumption that $\mathbb{N}\setminus D$ is a $\Delta_\ell$-rich set. So, $D$ is not A-$\Delta_\ell^*$ (recall that, by definition, A-$\Delta_\ell^*$ denotes the family of subsets of $\mathbb{N}$ having a non-trivial intersection with every $\Delta_\ell$-rich set).
\end{proof}
\subsection{Relations between various notions of largeness}
The following diagram (where $\ell_1,\ell_2\in\mathbb{N}$ and $\ell_1<\ell_2$) presents in a unified way the relations between the various families of sets which were introduced in the previous sections.
$$\begin{matrix}
\text{IP}_0& \not\supsetneq & \Delta_{1}\text{-rich}
& \subsetneq & \Delta_1 & \subsetneq & \Delta_{1,0} & \supsetneq &\text{IP}_0&&\\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$}&& \\
\text{IP}_0& \not\supsetneq & \Delta_{2}\text{-rich}
& \subsetneq & \Delta_2 & \subsetneq & \Delta_{2,0} & \not\supsetneq & \text{IP}&\subsetneq&\text{IP}_0\\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{-90}{$=$}&&\rotatebox[origin=c]{-90}{$=$} \\
\vdots& & \vdots
& & \vdots & & \vdots & & \vdots&&\vdots \\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{-90}{$=$}&&\rotatebox[origin=c]{-90}{$=$} \\
\text{IP}_0& \not\supsetneq & \Delta_{\ell_1}\text{-rich}
& \subsetneq & \Delta_{\ell_1} & \subsetneq & \Delta_{\ell_1,0} & \not\supsetneq & \text{IP}&\subsetneq &\text{IP}_0\\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{-90}{$=$}&&\rotatebox[origin=c]{-90}{$=$} \\
\vdots& & \vdots&
& \vdots & & \vdots & & \vdots&&\vdots \\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{-90}{$=$}&&\rotatebox[origin=c]{-90}{$=$} \\
\text{IP}_0& \not\supsetneq & \Delta_{\ell_2}\text{-rich}
& \subsetneq & \Delta_{\ell_2} & \subsetneq & \Delta_{\ell_2,0} & \not\supsetneq & \text{IP}&\subsetneq &\text{IP}_0\\
\rotatebox[origin=c]{-90}{$=$}& & \rotatebox[origin=c]{90}{$\subsetneq$}
& & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{90}{$\subsetneq$} & & \rotatebox[origin=c]{-90}{$=$}&&\rotatebox[origin=c]{-90}{$=$} \\
\vdots& & \vdots&
& \vdots & & \vdots & & \vdots&&\vdots
\end{matrix}
$$
In what follows we will provide explanations/proofs for the non-obvious inclusions presented in the diagram above.\\
We begin with noting that by \cref{2.OddPolynomialCharacterization} and the definitions of the families $\Delta_\ell$-rich, $\Delta_\ell$ and $\Delta_{\ell,0}$, the following diagram holds for any $\ell_1<\ell_2$:
$$\begin{matrix}
\Delta_{\ell_1}\text{-rich}&\subseteq&\Delta_{\ell_1}&\subseteq&\Delta_{\ell_1,0}\\
\rotatebox[origin=c]{90}{$\subseteq$}& &\rotatebox[origin=c]{90}{$\subsetneq$}& &\rotatebox[origin=c]{90}{$\subsetneq$} \\
\Delta_{\ell_2}\text{-rich}&\subseteq&\Delta_{\ell_2}&\subseteq&\Delta_{\ell_2,0}
\end{matrix}
$$
We will show now that the following relations hold for any $\ell\in\mathbb{N}$:
$$\begin{matrix}
\text{IP}_0&\not\supseteq&\Delta_{\ell}\text{-rich}\\
& &\rotatebox[origin=c]{90}{$\subsetneq$}\\
& &\Delta_{\ell+1}\text{-rich}
\end{matrix}
$$
\begin{lem}\label{3.Delta^lRichNoIPs}
For each $\ell\in\mathbb{N}$, there exists a $\Delta_\ell$-rich set which is neither an $\text{IP}_0$ set nor a $\Delta_{\ell'}$ set for any $\ell'>\ell$.
\end{lem}
\begin{proof}
First we will show that, given $\ell\in\mathbb{N}$ and $D\subseteq \mathbb{N}$ with $d^*(D)=\delta>0$, for any $M\geq 2^\ell$, $n_1,...,n_M\in\mathbb{N}$ and any $E\subseteq\mathbb{N}$ with $d^*(E)=0$, there exists an $n\in D$ for which
\begin{equation}\label{3.DcapEisEmpty}
\{\partial(n_{j_1},...,n_{j_{2^\ell-1}},n)\,|\,1\leq j_{1}<j_2<j_3<\cdots< j_{2^\ell-1}\leq M\}\cap E=\emptyset.
\end{equation}
To prove the contrapositive, note that for any $m_1,m_2,m_3,\cdots,m_{2^\ell}\in\mathbb{Z}$,
\begin{multline*}
\partial(m_1,...,m_{2^\ell})=\partial(m_{2^{\ell}-2^{\ell-1}+1},...,m_{2^\ell})-\partial(m_1,...,m_{2^{\ell}-2^{\ell-1}})\\
=\partial(m_{2^{\ell}-2^{\ell-2}+1},...,m_{2^\ell})-\partial(m_{2^{\ell}-2^{\ell-1}+1},...,m_{2^\ell-2^{\ell-2}})-\partial(m_1,...,m_{2^{\ell}-2^{\ell-1}})\\
=\partial(m_{2^{\ell}-2^{\ell-2}+1},...,m_{2^\ell})-\sum_{t=0}^{1}\partial(m_{2^{\ell}-2^{\ell-t}+1},...,m_{2^{\ell}-2^{\ell-t-1}})\\
=\cdots=m_{2^\ell}-\sum_{t=0}^{\ell-1}\partial(m_{2^{\ell}-2^{\ell-t}+1},...,m_{2^{\ell}-2^{\ell-t-1}}).
\end{multline*}
So if \eqref{3.DcapEisEmpty} does not hold for any $n\in D$, we have
$$D\subseteq\bigcup_{m\in E}\{m+\sum_{t=0}^{\ell-1}\partial(n_{j_{2^{\ell}-2^{\ell-t}+1}},...,n_{j_{2^{\ell}-2^{\ell-t-1}}})\,|\,1\leq j_1<\cdots<j_{2^\ell-1}\leq M\}$$
and hence $d^*(E)\geq \frac{\delta}{M^{2^\ell}}>0$.\\
Now let $\ell\in\mathbb{N}$, let $E\subseteq \mathbb{N}$ with $d^*(E)=0$ and let $\alpha_0,...,\alpha_\ell\in\mathbb T$ be rationally independent irrational numbers. By Weyl's equidistribution theorem \cite{weyl1916Mod1}, the sequence
$$(n\alpha_0,n^2\alpha_0,n\alpha_1,n^2\alpha_1,...,n\alpha_\ell,n^2\alpha_\ell),\text{ }n=1,2,...$$
is uniformly distributed on $\mathbb T^{2(\ell+1)}$. Hence, by \eqref{3.DcapEisEmpty}, we can choose inductively an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ such that
\begin{enumerate}[(1)]
\item $D_\ell((n_k)_{k\in\mathbb{N}})\cap E=\emptyset$.
\item For each $j\in\{1,...,\ell\}$,
$$\lim_{k\rightarrow\infty}n_k\alpha_j=0.$$
\item For each $j\in\{1,...,\ell\}$, $$-2^{j-1}\frac{(2j+1)!}{2!(2j-1)!}\lim_{k\rightarrow\infty}n^2_k\alpha_j=\alpha_{j-1}.$$
\item $\lim_{k\rightarrow\infty}n_k\alpha_0=\frac{1}{2}.$
\end{enumerate}
Let $p\in\beta\mathbb{N}$ be a non-principal ultrafilter with $\{n_k\,|\,k\in\mathbb{N}\}\in p$.
Since $p$ satisfies the hypothesis of \cref{2.TechnicalLemma}, we have
$$\plimgG{p_\ell}{n}{\mathbb{N}}n^{2\ell+1}\alpha_\ell=\plimgG{p}{n}{\mathbb{N}}n\alpha_0=\frac{1}{2}.$$
By \cref{1.DeltaCharacterization} there exists a subsequence $(n_{k_j})_{j\in\mathbb{N}}$ of $(n_k)_{k\in\mathbb{N}}$ such that for each $n\in D_\ell((n_{k_j})_{j\in\mathbb{N}})$,
$$\|n^{2\ell+1}\alpha_\ell-\frac{1}{2}\|<\frac{1}{4}$$
and hence
$$\|n^{2\ell+1}\alpha_\ell\|\geq\frac{1}{4}.$$
So for any $E\subseteq \mathbb{N}$ with $d^*(E)=0$ we can find an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ for which
$$D_\ell((n_k)_{k\in\mathbb{N}})\subseteq \{n\in\mathbb{N}\,|\,\|n^{2\ell+1}\alpha_\ell\|\geq\frac{1}{4}\}\setminus E.$$
Thus, the set
$$\{n\in\mathbb{N}\,|\,\|n^{2\ell+1}\alpha_\ell\|\geq\frac{1}{4}\}$$
is $\Delta_\ell$-rich. However, by \cref{2.OddDegreeRecurrence} and \cref{3.IP0Returns}, it is neither an $\text{IP}_0$ set nor a $\Delta_{\ell'}$ set for any $\ell'>\ell$.
\end{proof}
\begin{rem}
An argument similar to the one used to prove \cref{2.LackOfRecurrenceForEvenPowers} can be utilized to show that for any $\epsilon\in (0,\frac{1}{3})$ and any even polynomial $v(x)$ with no constant term and at least one irrational coefficient, the set
$$\{n\in\mathbb{N}\,|\,\|v(n)\|>\epsilon\}$$ is $\Delta_\ell$-rich for each $\ell\in\mathbb{N}$. On the other hand, by \cref{3.IP0Returns}, this set is not an $\text{IP}_0$ set.
\end{rem}
\begin{question}
In \cite[Section 2]{AlmostIPBerLeib}, it was shown that A-IP$_0^*\not\supsetneq$A-IP$^*$. Hence $\text{IP-rich}\not\supseteq\text{IP}_0\text{-rich}.$
Is it true that for any $\ell\in\mathbb{N}$,
$\Delta_{\ell}\text{-rich}\not\supseteq \Delta_{\ell,0}\text{-rich}$?
\end{question}
\begin{cor}\label{3.FirstImportantApplication}
Let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd real polynomial and let $(X,\mathcal A, \mu, T)$ be an invertible probability measure preserving system. Then:
\begin{enumerate}[(i)]
\item There exist $r\in\mathbb{N}$ and $E\subseteq \mathbb{N}$ such that for any $R\geq r$, $E$ is IP$_R^*$ but not $\Delta_{\ell,R'}^*$ for any $R'\geq 2^\ell$ (i.e. the mere fact that, given $\epsilon$ small enough, there exists $R\geq r$ for which $\mathcal R(v,\epsilon)\in \text{IP}_R^*\setminus \text{IP}_{R-1}^*$,\footnote{
For any $\epsilon>0$, let $r(\epsilon)\in\mathbb{N}$ be the least natural number for which the set $\mathcal R(v,\epsilon)$ is IP$_{r(\epsilon)}^*$ (such a constant is guaranteed to exist by \cref{3.IP0Returns}) and let $r\in\mathbb{N}$. Since $r(\epsilon)\rightarrow\infty$ as $\epsilon\rightarrow 0$, for any $\epsilon$ small enough, $r(\epsilon)\geq r$. Thus, if we let $R=r(\epsilon)$, we have that $R\geq r$ and $\mathcal R(v,\epsilon)\in\text{IP}_{R}^*\setminus \text{IP}_{R-1}^*$.
}
does not imply that there exists $R'\geq 2^\ell$ for which $\mathcal R(v,\epsilon)\in\Delta_{\ell,R'}^*$).
\item There exists a set $E\subseteq \mathbb{N}$ such that $E$ is A-IP$_0^*$ but not A-$\Delta_{\ell,0}^*$ (i.e. if $v(\mathbb{Z})\subseteq\mathbb{Z}$, the mere
fact that $\mathcal R_A(v,\epsilon)$ is A-IP$_0^*$ does not imply that it is A-$\Delta_{\ell,0}^*$).
\end{enumerate}
\end{cor}
\begin{proof}
We will only show (i); the proof of (ii) is similar. For this we will first note that by \cref{3.Delta^lRichNoIPs}, there exists a set $D\subseteq \mathbb{N}$ which is a $\Delta_\ell$-rich set but not an IP$_0$ set. In particular, $D$ is $\Delta_{\ell,0}$ but not IP$_0$. Thus, there exists $r\in\mathbb{N}$ such that for any $R\geq r$, $D$ is not an IP$_R$ set. It follows that for any $R\geq r$, the set $E=\mathbb{N}\setminus D$ is an IP$_R^*$ set but not a $\Delta_{\ell,R'}^*$ set for any $R'\geq 2^\ell$.
\end{proof}
Now we prove that for each $\ell\in\mathbb{N}$,
$$\Delta_\ell\text{-rich}\subsetneq \Delta_\ell.$$
\begin{lem}\label{3.DensityZeroDeltaL}
Let $\ell\in\mathbb{N}$. Any $\Delta_\ell$ set contains a $\Delta_\ell$ set with zero upper Banach density.
\end{lem}
\begin{proof}
Let $D\subseteq\mathbb{N}$ be a $\Delta_\ell$ set and let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence such that $D_\ell((n_k)_{k\in\mathbb{N}})\subseteq D$. Note that for any $s\in\mathbb{N}$ and any subsequence $(m_{k})_{k\in\mathbb{N}}$ of $(n_k)_{k\in\mathbb{N}}$, there exists a further subsequence $(m_{k_j})_{j\in\mathbb{N}}$ of $(m_{k})_{k\in\mathbb{N}}$ for which
$$D_\ell((m_{k_j})_{j\in\mathbb{N}})\subseteq s\mathbb{N}.$$
By using a diagonalization argument we can find a subsequence $(n_{k_j})_{j\in\mathbb{N}}$ such that for each $s\in\mathbb{N}$,
$$D_\ell((n_{k_j})_{j=s}^\infty)\subseteq s^s\mathbb{N}.$$
Without loss of generality we will assume that $(n_{k_j})_{j\in\mathbb{N}}=(n_k)_{k\in\mathbb{N}}$.\\
Given $s\geq 2^\ell$, $t\in\{1,...,2^{\ell}-1\}$ and any $j_1\in\{1,...,s-1\}$,...,$j_t\in\{j_{t-1}+1,...,s+t-2\}$, we define the set
$$A(j_1,...,j_t)=\{\partial(n_{j_1},...,n_{j_{2^\ell}})\,|\,s+t\leq j_{t+1}<\cdots<j_{2^\ell}\}.$$
Associated to $A(j_1,...,j_t)$ there exists a constant $z=z(j_1,...,j_t)\in\mathbb{Z}$ with the property that for any $s+t\leq j_{t+1}<\cdots<j_{2^\ell}$,
$$\partial(n_{j_1},...,n_{j_{2^\ell}})+z=\partial(n_{s},...,n_{s+t-1},n_{j_{t+1}},...,n_{j_{2^\ell}}).$$
It follows that
$$A(j_1,...,j_t)+z(j_1,...,j_t)\subseteq D_\ell((n_k)_{k=s}^\infty).$$
So, since
\begin{multline*}
D_\ell((n_k)_{k\in\mathbb{N}})\setminus D_\ell((n_k)_{k=1}^{s+2^\ell-2})\subseteq D_\ell((n_k)_{k=s}^\infty)\cup\bigcup_{t=1}^{2^\ell-1}\bigcup_{j_1=1}^{s-1}\cdots\bigcup_{j_t=j_{t-1}+1}^{s+t-2}A(j_1,...,j_t),
\end{multline*}
we have that
\begin{multline*}
d^*(D_\ell((n_k)_{k\in\mathbb{N}}))=d^*(D_\ell((n_k)_{k\in\mathbb{N}})\setminus D_{\ell}((n_k)_{k=1}^{s+2^\ell-2}))\\
\leq (s+2^\ell)^{2^\ell} d^*(D_\ell((n_k)_{k=s}^\infty))\leq\frac{(s+2^\ell)^{2^\ell}}{s^s}.
\end{multline*}
Thus, $d^*(D_\ell((n_k)_{k\in\mathbb{N}}))=0,$ completing the proof.
\end{proof}
Next we show that for any $\ell\in\mathbb{N}$, $$\Delta_\ell\subsetneq \Delta_{\ell,0}.$$
\begin{lem}\label{3.StrictDeltaEllinclusion}
There exists a set $E\subseteq\mathbb{N}$ that is a $\Delta_{\ell,0}$ set for each $\ell\in\mathbb{N}$ but is not a $\Delta_\ell$ set for any $\ell\in\mathbb{N}$.
\end{lem}
\begin{proof}
For each $k\in\mathbb{N}$, let $E_k=\{(2k)!,(2k)!\cdot 2,\ldots,(2k)!\cdot k\}$
and let
$$E=\bigcup_{k\in\mathbb{N}}E_k.$$
Let $\ell\in\mathbb{N}$. Since for any $r\geq 2^\ell$, there exists an $R\geq r$ for which $E_{R}$ is a $\Delta_{\ell,r}$ set, $E$ is a $\Delta_{\ell,0}$ set. It only remains to show that $E$ does not contain any $\Delta_1$ set (this will imply that $E$ contains no $\Delta_\ell$ set for any $\ell\in\mathbb{N}$).\\
Given a $\Delta_1$ set $D$ there exists an increasing sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ such that $D_1((n_k)_{k\in\mathbb{N}})\subseteq D$. Note that for such a sequence
\begin{equation}\label{3.LargeGap0NotNormalProof}
(n_{k}-n_{1})-(n_{k}-n_2)=n_2-n_1
\end{equation}
for each $k\geq 3$. Since $\max E_s<\min E_{s+1}-(2s)!$ for any $s\in\mathbb{N}$, we have that for $n,m\in E$ large enough, $|n-m|>n_2-n_1$. It follows from \eqref{3.LargeGap0NotNormalProof} that $D$ cannot be a subset of $E$, which completes the proof.
\end{proof}
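Two ingredients of the proof above can be checked by direct computation: the gap estimate $\max E_s<\min E_{s+1}-(2s)!$, and (in the case $\ell=1$) the fact that each $E_R$ is a $\Delta_{1,R}$ set, since it is exactly the difference set of the arithmetic progression $0,(2R)!,2(2R)!,\dots,R(2R)!$. The following Python sketch (ours, for illustration only) verifies both:

```python
from math import factorial

def E_k(k):
    # E_k = {(2k)!, (2k)! * 2, ..., (2k)! * k}
    return [factorial(2 * k) * j for j in range(1, k + 1)]

# the gap estimate used in the proof: max E_s < min E_{s+1} - (2s)!
for s in range(1, 20):
    assert max(E_k(s)) < min(E_k(s + 1)) - factorial(2 * s)

# E_R equals the set of differences of the progression m_k = k*(2R)!,
# so E_R is a Delta_{1,R} set (the ell = 1 instance of the claim)
R = 5
m = [k * factorial(2 * R) for k in range(R + 1)]
diffs = {m[j] - m[i] for i in range(R + 1) for j in range(i + 1, R + 1)}
assert diffs == set(E_k(R))
```

The differences of the progression are precisely $d\cdot(2R)!$ for $1\leq d\leq R$, which is $E_R$.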
\begin{rem}
The set $E$ in the proof of \cref{3.StrictDeltaEllinclusion} is an IP$_0$ set which is not an IP set. Hence
$$\text{IP}\subsetneq \text{IP}_0.$$
\end{rem}
The next two results show that
$$ \text{IP}_0\subsetneq \Delta_{1,0}.$$
\begin{lem}\label{3.EveryDeltaIsIP}
Any IP$_0$ set is a $\Delta_{1,0}$ set.
\end{lem}
\begin{proof}
The proof is similar to the well-known fact that any IP set is a $\Delta$ set (see for example \cite[Lemma 9.1]{FBook}). We will actually show that any IP$_r$ set contains a $\Delta_{1,r}$ set.\\
Let $r\geq2$ and let $\Gamma$ be an IP$_r$ set containing FS$((n_k)_{k=1}^r)$ for some $n_1,...,n_r\in\mathbb{N}$. For each $k\in\{1,...,r\}$, let $s_k=n_1+n_2+\cdots+n_k$. Then for any $k>l$, $s_k-s_l\in \Gamma$. Thus, $\Gamma$ is a $\Delta_{1,r}$ set.
\end{proof}
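The partial-sum argument in the proof above is easy to test on a concrete example. The sketch below (our choice of generators, purely illustrative) takes the IP$_4$ set generated by $(n_1,\dots,n_4)=(1,5,20,100)$ and checks that every difference of the partial sums $s_k=n_1+\cdots+n_k$ lies in $\text{FS}((n_k)_{k=1}^4)$:

```python
from itertools import combinations

n = [1, 5, 20, 100]
# FS((n_1,...,n_4)): all sums over nonempty subsets of the generators
FS = {sum(c) for r in range(1, len(n) + 1) for c in combinations(n, r)}
# partial sums s_k = n_1 + ... + n_k
s = [sum(n[:k]) for k in range(1, len(n) + 1)]
# every difference s_k - s_l (k > l) equals n_{l+1} + ... + n_k, hence lies in FS
assert all(s[k] - s[l] in FS
           for l in range(len(s)) for k in range(l + 1, len(s)))
```

Here $s=(1,6,26,126)$ and, e.g., $s_4-s_2=120=20+100\in\text{FS}$.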
The result contained in the following short lemma is similar to a remark made in the Introduction to \cite{BergelsonErdosDifferences}.
\begin{lem}\label{3.DeltaWithNoIP}
The set
$$D=D_1((10^k)_{k\in\mathbb{N}})=\{9\sum_{s=i}^j10^s\,|\,0\leq i\leq j\},$$
is a $\Delta$ set but not an IP$_3$ set.
\end{lem}
\begin{proof}
Let $x,y,z\in D$ with $x\leq y\leq z$, and suppose $y+z\in D$. By analysing the decimal expansions of $y$, $z$ and $y+z$, we see that either $x+z\not\in D$ or $x=y$; but in the latter case $x+y\not\in D$. So, $D$ is not an IP$_3$ set.
\end{proof}
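The conclusion of the lemma can also be illustrated by brute force: among the small elements of $D$ (whose decimal expansions are a block of $9$s followed by zeros), no triple $x\leq y\leq z$ has all of $x+y$, $x+z$, $y+z$, $x+y+z$ in $D$. The Python sketch below (ours, and restricted to small generators, so it illustrates rather than proves the lemma) performs this search:

```python
from itertools import combinations_with_replacement

def in_D(m):
    # m is in D iff its decimal expansion is a block of 9s followed by 0s
    s = str(m)
    return s[0] == '9' and set(s) <= {'9', '0'} and '09' not in s

# small elements of D: a block of a nines followed by b zeros
gens = sorted(int('9' * a + '0' * b) for a in range(1, 5) for b in range(4))

# search for x <= y <= z in D with FS((x, y, z)) entirely inside D
found = [(x, y, z)
         for x, y, z in combinations_with_replacement(gens, 3)
         if all(in_D(t) for t in (x + y, x + z, y + z, x + y + z))]
assert found == []
```

Note that pairs do exist (e.g. $9+90=99\in D$, so $D$ is an IP$_2$ set); it is only triples that fail.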
Finally, we prove that for $\ell\geq 2$,
$$\Delta_{\ell,0}\not\supseteq \text{IP}.$$
We will denote by $\mathcal F$ the set of all finite non-empty subsets of $\mathbb{N}$.
\begin{thm}\label{3.IPWithNoDelta2}
Let $(n_k)_{k\in\mathbb{N}}$ be an increasing sequence of natural numbers. Suppose that for any $\alpha,\beta,\gamma\in\mathcal F$,
\begin{equation}\label{3.injectiveCondition}
\sum_{j\in\alpha}n_j+\sum_{j\in\beta}n_j=\sum_{j\in\gamma}n_j\text{ if and only if }\alpha\cup\beta=\gamma\text{ and }\alpha\cap\beta=\emptyset.
\end{equation}
Then for any $\ell\geq 2$, $\text{FS}((n_k)_{k\in\mathbb{N}})$ is not a $\Delta_{\ell,2^{\ell-2}\cdot 14}$ set.\footnote{
Equation \eqref{3.injectiveCondition} holds for any sequence $(n_k)_{k\in\mathbb{N}}$ in $\mathbb{N}$ which satisfies $\frac{n_{k+1}}{n_k}\geq 3$ for every $k\in\mathbb{N}$.
}
\end{thm}
\begin{proof}
By the definition of $\Delta_{\ell,r}$ sets, all we need to show is that given a 14-element sequence $(c_k)_{k=1}^{14}$ in $\mathbb{Z}$ with $D_2((c_k)_{k=1}^{14})\subseteq\mathbb{N}$,
$D_2((c_k)_{k=1}^{14})\not\subseteq \text{FS}((n_k)_{k\in\mathbb{N}})$. Assume for contradiction that
$D_2((c_k)_{k=1}^{14})\subseteq \text{FS}((n_k)_{k\in\mathbb{N}})$. For any
$k_1,k_2\in\{3,...,14\}$ with $k_1 < k_2$, let $\alpha(k_1,k_2)\in\mathcal F$ be such that
\begin{equation}\label{3.SubstractFirstTerms}
(c_{k_2}-c_{k_1})-(c_2-c_1)=\sum_{j\in\alpha(k_1,k_2)}n_j.
\end{equation}
Since $D_2((c_k)_{k=1}^{14})\subseteq \text{FS}((n_k)_{k\in\mathbb{N}})$, for any $k_1,...,k_4$ which satisfy $3\leq k_1<k_2<k_3<k_4\leq 14$, there exists $\alpha\in\mathcal F$ such that
\begin{equation}\label{3.FallingInside}
(c_{k_4}-c_{k_3})-(c_{k_2}-c_{k_1})=\sum_{j\in\alpha}n_j.
\end{equation}
It follows from \eqref{3.SubstractFirstTerms} and \eqref{3.FallingInside} that
\begin{equation}\label{3.GeneralDifference}
\sum_{j\in\alpha(k_3,k_4)}n_j=\sum_{j\in\alpha}n_j+\sum_{j\in\alpha(k_1,k_2)}n_j.
\end{equation}
Thus, we get from \eqref{3.GeneralDifference} and \eqref{3.injectiveCondition} that $$\alpha(k_1,k_2)\subseteq \alpha(k_3,k_4).$$
Consider now the set $A=\{c_5-c_4,c_8-c_7,c_{11}-c_{10},c_{14}-c_{13}\}$ and note that $|A|\geq 3$ (otherwise we would have that $0\in\mathbb{N}$). Hence, at least two of the elements of $A$ are either strictly positive or strictly negative. Without loss of generality, we will assume that $c_5-c_4,c_8-c_7>0$.\\
Let $\lambda_1=\alpha(3,5)\setminus\alpha(3,4)$ and let $\rho_1=\alpha(3,4)\setminus\alpha(3,5)$, then
$$c_5-c_4=(c_5-c_3)-(c_4-c_3)=\sum_{j\in\alpha(3,5)}n_j-\sum_{j\in\alpha(3,4)}n_j=\sum_{j\in\lambda_1}n_j-\sum_{j\in\rho_1}n_j.$$
(Note that a priori $\lambda_1$ or $\rho_1$ could be empty. We follow the convention that $\sum_{j\in\emptyset}n_j=0$.)\\
Since $c_5-c_4>0$, we must have that $\lambda_1\neq\emptyset$. A similar argument shows that if we let $\lambda_2=\alpha(6,8)\setminus\alpha(6,7)$ and $\rho_2=\alpha(6,7)\setminus\alpha(6,8)$, then
$$c_8-c_7=\sum_{j\in\lambda_2}n_j-\sum_{j\in\rho_2}n_j$$
and hence $\lambda_2\neq \emptyset$.\\
Since $\alpha(3,5)\cup\alpha(3,4)\subseteq\alpha(6,8)\cap\alpha(6,7)$, the sets $\lambda_1,\lambda_2,\rho_1,\rho_2$ are pairwise disjoint. Let $\alpha\in\mathcal F$ be such that
$$\sum_{j\in\alpha}n_j=(c_8-c_7)-(c_5-c_4)\in\text{FS}((n_k)_{k\in\mathbb{N}}),$$
then
\begin{equation}\label{3.DisjointSum}
\sum_{j\in\alpha}n_j+\sum_{j\in\lambda_1\cup\rho_2}n_j=\sum_{j\in\lambda_2\cup\rho_1}n_j.
\end{equation}
By noting that $\lambda_1\not\subseteq \lambda_2\cup\rho_1$, we see that \eqref{3.DisjointSum} contradicts \eqref{3.injectiveCondition}. This completes the proof.
\end{proof}
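The footnote to the theorem above asserts that \eqref{3.injectiveCondition} holds whenever $\frac{n_{k+1}}{n_k}\geq 3$. For the extremal case $n_k=3^k$ this is the uniqueness of base-$3$ expansions with digits in $\{0,1,2\}$, and the "if and only if" can be checked exhaustively over a bounded range of indices. The Python sketch below (ours; it tests index sets inside $\{1,\dots,5\}$ only, so it illustrates rather than proves the claim) does this:

```python
from itertools import combinations

# generators with ratio n_{k+1}/n_k = 3, indexed by {0,...,4}
n = [3 ** k for k in range(1, 6)]
idx = range(len(n))
subsets = [frozenset(c) for r in idx for c in combinations(idx, r + 1)]

def S(a):
    # the sum attached to a finite index set
    return sum(n[i] for i in a)

# check the "if and only if" condition for all triples of index sets:
# S(a) + S(b) = S(c) exactly when c is the disjoint union of a and b
for a in subsets:
    for b in subsets:
        for c in subsets:
            assert (S(a) + S(b) == S(c)) == ((a | b == c) and not (a & b))
```

The point is that $S(a)+S(b)=\sum_j c_j3^j$ with digits $c_j\in\{0,1,2\}$, and this expansion is unique, so it can match the $\{0,1\}$-digit expansion of $S(c)$ only when $a$ and $b$ are disjoint with union $c$.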
\begin{cor}\label{3.SecondImportantApplication}
Let $v(x)=\sum_{j=1}^\ell a_jx^{2j-1}$ be a non-zero odd real polynomial. There exist $r\in\mathbb{N}$ and $E\subseteq \mathbb{N}$ such that for any $R\geq r$, $E$ is $\Delta_{\ell,R}^*$ but not IP$_{R'}^*$ for any $R'\in\mathbb{N}$ (i.e. the mere fact that, given $\epsilon$ small enough, there exists $R\geq r$ for which $\mathcal R(v,\epsilon)\in \Delta_{\ell,R}^*\setminus \Delta_{\ell,R-1}^*$,
does not imply that there exists $R'\in\mathbb{N}$ for which $\mathcal R(v,\epsilon)\in\text{IP}_{R'}^*$).
\end{cor}
\begin{proof}
The proof is similar to the proof of \cref{3.FirstImportantApplication}.
\end{proof}
\begin{question}
Is it true that $\text{A-}\Delta_{\ell,0}^*\not\subseteq\text{A-IP}^*$?
\end{question}
\printindex
\section{Introduction}
A \emph{combinatorial neural code} is a set of $0/1$-vectors that is used to model the co-firing patterns of certain neurons in the brain of an animal.
These neurons are called \emph{place cells} and are active when the animal is in a particular region within its environment, called a \emph{place field}, or simply just a \emph{field}.
Here, we are not concerned with timing and spiking of neural activity; we consider only the case where the place cell is considered ``on'' or ``off.''
Recall that a \emph{Venn diagram} is a diagram consisting of regions bounded by $n$ simple, closed curves such that all possible intersections of the curves' interiors appear.
An \emph{Euler diagram} is a generalization of a Venn diagram, where the curves' interiors do not need to intersect in all possible ways.
Consider the following Euler diagram, where $U_i$ refers to the interior of the (innermost) curve in which the label is contained:
\begin{center}
\begin{tikzpicture}
\draw (-3,0.5) -- (3,0.5) -- (3,4.5) -- (-3,4.5) -- cycle;
\draw (-1,2.5) circle [radius=1.5];
\draw (-0.75,2.5) circle [radius=.75];
\draw (1,2.5) circle [radius=1.5];
\node at (-1.9,3.2) {$U_1$};
\node at (-1.1,2.5) {$U_2$};
\node at (2,2.5) {$U_3$};
\end{tikzpicture}
\end{center}
This diagram models the co-firing patterns of three place cells, and the regions in which the place cells are active are inside of the three circles.
The labels $U_1$, $U_2$, and $U_3$ are the interiors of the three curves, and the regions of the diagram can be encoded by triples of $0$s and $1$s, indicating which of the place cells are active.
We use the standard convention of using $0$ to denote an inactive neuron and $1$ to denote an active neuron.
So, for example, the codeword $101$ corresponds to the intersection $U_1 \cap U_2^c \cap U_3$, where $U_2^c$ denotes the complement of $U_2$ in the diagram.
The full neural code is
\begin{equation}\label{ci}
\cC = \{000, 100, 001, 110, 101, 111\}.
\end{equation}
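This code can be recovered computationally from the picture. The Python sketch below (ours, not part of the text) reads the centers and radii of the three circles off the TikZ code, samples a fine grid over the bounding rectangle, and collects the codewords that occur:

```python
# centers and radii of the three circular fields, read off the picture
FIELDS = [((-1.0, 2.5), 1.5),    # U1
          ((-0.75, 2.5), 0.75),  # U2
          ((1.0, 2.5), 1.5)]     # U3

def codeword(x, y):
    # 1 in position i iff (x, y) lies in the open disc U_i
    return ''.join('1' if (x - cx) ** 2 + (y - cy) ** 2 < r * r else '0'
                   for (cx, cy), r in FIELDS)

# sample a grid over the bounding rectangle and collect the codewords
code = set()
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        code.add(codeword(-3 + 6 * i / steps, 0.5 + 4 * j / steps))

assert code == {'000', '100', '001', '110', '101', '111'}
```

In particular the sampling never produces $010$ or $011$: since $U_2$ sits inside $U_1$, any point of $U_2$ already activates the first cell.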
\begin{defn}
A set $S \subseteq \mathbb{R}^n$ is \emph{convex} if, for every $u,v \in S$, the line segment
\[
\{ tu + (1-t)v \mid 0 \leq t \leq 1\}
\]
is contained entirely in $S$.
\end{defn}
Intuitively, convex sets do not have holes or dents; a disc is convex, but a circle is not.
If a code has an Euler diagram consisting of convex sets, then the code is called \emph{convexly realizable}.
More generally, one may ask whether every code is convexly realizable in $\mathbb{R}^n$ for some $n$ rather than just $\mathbb{R}^2$.
The answer is yes \cite{FrankeMuthiah}, but it is much less clear how to determine the smallest $n$ needed.
In this article, we focus on convex realizability in $\mathbb{R}^2$ only, turning neural codes into algebraic objects, with the goal of deducing information about the code via algebraic techniques.
\subsection{Models and results}
An Euler diagram $\mathcal{D}$ is a collection of sets $\mathcal{D} = \{U_1,\dots,U_n\}$, which we refer to as place fields, where each field $ U_i$ is a subset of $ \mathbb{R}^2$. The sets $U_i$ are sufficiently nice, with boundaries $\lambda_i = \partial U_i $ which are piecewise smooth curves. We will label the set of boundary curves $\Lambda = \{\lambda_1,\dots,\lambda_n\}$. Our next step is to label the connected components of $\mathbb{R}^2 \setminus(\lambda_1 \cup \cdots \cup \lambda_n)$. For any codeword $w \in \{0,1\}^{[n]}$,
the associated \emph{zone} is defined as
\[ Z_w = \left( \bigcap_{i:w_i = 1} U_i \right) \cap \left( \bigcap_{i:w_i = 0} U_i^c \right). \]
The code for $\mathcal{D}$ is the set
\[ \cC_{\mathcal{D}} = \{ w \in \{0,1\}^{[n]} : Z_w \neq \emptyset \} \]
and the zones are collected in the set $\mathcal{Z}_{\mathcal{D}} = \{ Z_w : w \in \cC \}$.
\begin{defn}\label{wf}
An Euler diagram is {\it well-formed} if
\begin{enumerate}
\item each curve label is used exactly once,
\item all curves intersect at exactly 0 or 2 points,
\item each point in the plane is passed by at most 2 curves, and
\item each zone is connected.
\end{enumerate}
If $\cC$ is a code with a well-formed Euler diagram, then we call $\cC$ a \emph{well-formed code}.
\end{defn}
Requiring a diagram to be well-formed can be partly thought of as insisting that the curves in the diagram intersect ``generically enough,'' as long as the zones stay connected.
It follows from Definition \ref{wf} that the zones are exactly the connected components of $\mathbb{R}^2 \setminus (\lambda_1\cup \cdots \cup \lambda_n)$.
\subsubsection{Special types of diagrams}
In this work, we are able to draw conclusions from diagrams of limited complexity. This is specified by allowing diagrams of limited {\it depth} or diagrams constructed of {\it zero-} or {\it one-piercings}.
\begin{defn}
Let $\mathcal{D}$ be a well-formed Euler diagram.
A curve $\lambda = \partial U$ is a \emph{$0$-piercing} of $\mathcal{D}$ if, for all $i \in [n]$, $U \subset U_i$, $U \supset U_i$, or $U \cap U_i = \emptyset$.
A curve $\lambda = \partial U$ is a \emph{$1$-piercing} of $\mathcal{D}$ if there is exactly one $j \in [n]$ so that the sets $U\cap \lambda_j $, $U \setminus U_j$, and $U_j \setminus U $ are nonempty.
We say $\mathcal{D}$ is \emph{$0$-inductively pierced} if there exists some labeling of the curves $\lambda_1,\lambda_2,\dots, \lambda_n$ so that for each $k \in [n-1]$, $\lambda_{k+1}$ is a $0$-piercing of the diagram $\mathcal{D}_k = \{U_1,\dots,U_k\}$. Similarly, a diagram $\mathcal{D}$ is \emph{$1$-inductively pierced} if there is a labeling of the curves so that for each $k \in [n-1]$, $\lambda_{k+1}$ is a $0$- or $1$-piercing of the diagram $\mathcal{D}_k$.
\end{defn}
These $0$- and $1$-inductively pierced diagrams are special cases of $k$-inductively pierced diagrams, where $k$ may be any nonnegative integer.
In this paper we focus only on the $k = 0,1$ cases but we refer to \cite{TheoryPiercings} for further background on inductively pierced codes.
We now define the depth of a diagram, which will be an important factor in the results we present.
\begin{defn}
A field $ U \in \mathcal{D} $ is of \emph{depth} $d$ if there are $U_{i_1},\dots,U_{i_d} \in \mathcal{D}$ such that
\[U \subset U_{i_1} \subset \cdots \subset U_{i_d}. \]
The \emph{depth} of the diagram $\mathcal{D}$ is the maximum depth over all fields in the diagram.
\end{defn}
\subsubsection{Algebraic construction}
We will study homomorphisms between polynomial rings generated by variables respectively labeled by the diagram's zones $\mathcal{Z}$ and the diagram's field labels $\Lambda$.
For each $w \in \cC$ we identify $w$ with the map $[n] \to \{0,1\}$ that sends $i$ to $0$ if $w_i = 0$ and sends $i$ to $1$ otherwise.
Additionally, we label a variable associated to the zone $Z_w $ by $ t_{ w^{-1}(1) } $. For each $ \lambda \in \Lambda $ we label a variable associated to the field by $x_\lambda$. We introduce a homomorphism
\[ \pi_{\cC}: \mathbb{C}[ t_{ w^{-1}(1) } : w \in \cC ] \to \mathbb{C}[ x_{ \lambda } : \lambda \in \Lambda ] \]
via the mapping
\[ \pi_{\cC}(t_{w^{-1}(1)}) = x^{w} = \prod_\lambda x_\lambda^{w_\lambda}. \]
The primary object we are interested in is the \emph{toric ideal} $I_{\cC}$, defined as the kernel of the map $\pi_{\cC}$. Hilbert's Basis Theorem guarantees that $I_{\cC}$ is finitely generated. Moreover, it is not hard to see that the ideal is generated by binomials.
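Membership in the kernel is easy to test directly: a binomial $t^a - t^b$ lies in $I_{\cC}$ exactly when both monomials have the same image under $\pi_{\cC}$. The following Python sketch does this for the code \eqref{ci} from the introduction; variable names such as \texttt{t12} abbreviate the supports $w^{-1}(1)$ and, like the function names, are ours.

```python
from collections import Counter

# Nonzero codewords of the code C from the introduction.
CODE = {'t1': (1, 0, 0), 't2': (0, 0, 1), 't12': (1, 1, 0),
        't13': (1, 0, 1), 't123': (1, 1, 1)}

def image(monomial):
    """Exponent vector of the image of t^a, with a given as {variable: power}."""
    total = [0, 0, 0]
    for var, power in monomial.items():
        for i, bit in enumerate(CODE[var]):
            total[i] += power * bit
    return tuple(total)

def in_kernel(mono1, mono2):
    """True when the binomial t^mono1 - t^mono2 lies in the toric ideal."""
    return image(mono1) == image(mono2)

# x1 * (x1 x2 x3) = (x1 x2)(x1 x3), so t1*t123 - t12*t13 is in the kernel:
print(in_kernel(Counter(['t1', 't123']), Counter(['t12', 't13'])))  # True
```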
We will be especially interested in a set of binomial generators with particular properties.
To define the special generating set, we first recall that a \emph{monomial order} on $\mathbb{C}[t_1,\dots,t_m]$ is a total ordering $\prec$ of its monomials such that
\begin{enumerate}
\item $\prec$ respects multiplication: if $u,v,w$ are monomials and $u \prec v$, then $uw \prec vw$, and
\item $\prec$ is a well-ordering: $1 \prec u$ for all monomials $u$.
\end{enumerate}
As a first example, we describe a well-known and computationally-efficient order.
The \emph{graded reverse lexicographic order}, or simply \emph{grevlex} order, on $\mathbb{C}[t_1,\dots,t_m]$ is denoted by $\prec_{\grevlex}$ and sets $t^a \prec_{\grevlex} t^b$ if $\sum a_i < \sum b_i$ or if $\sum a_i = \sum b_i$ and the last nonzero entry of $a-b$ is positive.
While this is not the most intuitive monomial order, it will still be useful to us.
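For concreteness, grevlex comparison of exponent vectors can be implemented in a few lines. The sketch below follows the standard convention (as in the classic computational algebra texts), under which $t^a$ precedes $t^b$ when the degrees agree and the last nonzero entry of $a-b$ is positive; the function name is ours.

```python
def grevlex_less(a, b):
    """True when t^a strictly precedes t^b in grevlex.

    t^a < t^b when deg(a) < deg(b), or when the degrees agree and the
    last nonzero entry of a - b is positive.
    """
    if sum(a) != sum(b):
        return sum(a) < sum(b)
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return ai > bi  # last nonzero entry of a - b is ai - bi here
    return False  # equal exponent vectors are not strictly comparable

# With t1 > t2 > t3:  t3^2 precedes t1*t2, since the last nonzero
# entry of (0,0,2) - (1,1,0) = (-1,-1,2) is positive.
print(grevlex_less((0, 0, 2), (1, 1, 0)))  # True
```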
A second useful class of monomial orders on $\mathbb{C}[t_1,\dots,t_n]$ is the set of weight orders. A \emph{weight order} $\prec_{w,\sigma}$ is determined by a vector $w \in \mathbb{R}^n$ and an existing monomial order $\prec_\sigma$, and sets $t^a \prec_{w,\sigma} t^b$ if and only if either $w\cdot a < w \cdot b$, where $\cdot$ is the dot product, or $w\cdot a = w \cdot b$ and $t^a \prec_{\sigma} t^b$; for this reason, $\prec_\sigma$ is often informally referred to as the ``tie-breaker.''
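A weight-order comparator is equally short. In this sketch (all names are ours) the tie-breaker is pluggable; for simplicity we use plain lexicographic order rather than grevlex as the auxiliary order.

```python
def lex_less(a, b):
    """Lexicographic tie-breaker with t1 > t2 > ... > tn.

    Tuple comparison is exactly lex under this convention: a is smaller
    when its first differing exponent is smaller.
    """
    return a < b

def weight_less(a, b, w, tiebreak):
    """Compare exponent vectors under the weight order determined by w.

    t^a precedes t^b when w.a < w.b; exact ties are delegated to the
    auxiliary order `tiebreak`.
    """
    wa = sum(wi * ai for wi, ai in zip(w, a))
    wb = sum(wi * bi for wi, bi in zip(w, b))
    if wa != wb:
        return wa < wb
    return tiebreak(a, b)

# Weighting the second variable heavily makes t2 outweigh t1^3:
print(weight_less((3, 0), (0, 1), w=(1, 5), tiebreak=lex_less))  # True
```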
Now, given a monomial ordering $\prec$, any nonzero polynomial $f$ has a unique initial term, denoted $\init_{\prec}(f)$. This further leads to the initial ideal of a given ideal $I$, defined as
\[ \init_{\prec}(I) = ( \init_{\prec}(f) : f \in I ). \]
If $I = (g_1,\dots,g_k)$, it is not necessarily true that $(\init_{\prec}(g_1),\dots, \init_{\prec}(g_k))$ equals $\init_{\prec}(I)$.
\begin{defn}
Let $\mathcal{G} = \{g_1,\dots,g_k\}$ be a generating set of an ideal $I$ of $\mathbb{C}[x_1,\dots,x_n]$ and let $\prec$ be a monomial order.
If
\[
\init_{\prec}(I) = (\init_{\prec}(g_1),\dots,\init_{\prec}(g_k))
\]
then we call $\mathcal{G}$ a \emph{Gr\"obner basis} of $I$ with respect to $\prec$.
Moreover, we say $\mathcal{G}$ is \emph{reduced} if the leading coefficient (with respect to $\prec$) of every element is $1$ and if, for every $g,g' \in \mathcal{G}$, $\init_{\prec}(g)$ does not divide any term of $g'$.
\end{defn}
Although there might be many Gr\"obner bases for a given ideal and monomial order, there is a unique reduced Gr\"obner basis. Finally, we say that a Gr\"obner basis $\mathcal{G}$ is a \emph{universal Gr\"obner basis} if it is a Gr\"obner basis with respect to any monomial order. Because $I$ has finitely many initial ideals, we can always construct a finite universal Gr\"obner basis by taking the union of all reduced Gr\"obner bases of $I$. We will call this union {\it the} universal Gr\"obner basis.
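In practice such bases are computed with software. The following sympy sketch recovers generators of the toric ideal of the code \eqref{ci} by elimination: one computes a lex Gr\"obner basis of the ideal $(t_w - x^w)$ with the $x$-variables ordered first and keeps the basis elements free of the $x$-variables. This is only an illustration (all variable names are ours); dedicated computer algebra systems handle larger examples far better.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
t1, t2, t3, t4, t5 = sp.symbols('t1 t2 t3 t4 t5')

# Nonzero codewords 100, 001, 110, 101, 111 as monomials in x1, x2, x3.
images = {t1: x1, t2: x3, t3: x1*x2, t4: x1*x3, t5: x1*x2*x3}
graph_ideal = [t - m for t, m in images.items()]

# Lex with the x-variables first eliminates them; the surviving basis
# elements involving only the t-variables generate the toric ideal.
G = sp.groebner(graph_ideal, x1, x2, x3, t1, t2, t3, t4, t5, order='lex')
toric_gens = [g for g in G.exprs if not g.has(x1, x2, x3)]
print(toric_gens)
```

Every element of \texttt{toric\_gens} vanishes when each $t$-variable is replaced by its image monomial, confirming membership in the kernel.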
In \cite{GrossObatakeYoungs}, the authors successfully found ways to algebraically detect when a code is $k$-inductively pierced for small $k$ and/or few neurons.
The main results of that article are summarized in the theorem below.
\begin{thm}[see \cite{GrossObatakeYoungs}]\label{thm: GOY}
Let $\cC$ be a well-formed code on $n$ neurons such that each neuron fires at least once, that is, there is no $i$ for which $w_i = 0$ for all codewords $w \in \cC$.
\begin{enumerate}
\item The toric ideal $I_{\cC} = (0)$ if and only if $\cC$ is $0$-inductively pierced.
\item A well-formed diagram $\mathcal{D}$ is $0$-inductively pierced if and only if no two curves in any well-formed realization of $\mathcal{D}$ intersect.
\item If $\cC$ is $1$-inductively pierced then the toric ideal $I_{\cC}$ is either generated by quadratics or $I_{\cC} = (0)$.
\item When $n=3$, the code is $1$-inductively pierced if and only if the reduced Gr\"obner basis of $I_{\cC}$ with respect to the weighted grevlex order with the weight vector $w = (0, 0, 0, 1, 1, 1, 0)$ contains only binomials of degree $2$ or less.
\end{enumerate}
\end{thm}
The authors further made the following conjecture.
\begin{conj}[see \cite{GrossObatakeYoungs}]
For each $n$, there exists a monomial order such that a code is $0$- or $1$-inductively pierced if and only if the reduced Gr\"obner basis contains binomials of degree $2$ or less.
\end{conj}
\subsubsection{Graphical construction}
Any Euler diagram can be associated to a graph. Recall a graph is a pair $(\mathcal{V},\mathcal{E}) $ where $\mathcal{V}$ is a set of vertices and $\mathcal{E}$ is a set of two-element subsets of $\mathcal{V}$.
\begin{defn}[Dual graphs]
Given a code $\cC$ with $m = |\cC|$ elements, we define a \emph{dual graph} $G_\cC$ whose vertices are labeled uniquely by elements of the code $\cC$. A pair $\{w_1,w_2\}$ is an edge if and only if the zones $Z_{w_1}$ and $Z_{w_2}$ have a nontrivial intersection of their boundaries, i.e.\ $\partial Z_{w_1}\cap \partial Z_{w_2}\neq \emptyset $. For our purposes, these dual graphs will always include a vertex labeled with the codeword $0\dots0$.
\end{defn}
Furthermore, we can define the obvious inclusion mapping $\iota:\cC \hookrightarrow \mathbb{Z}^n$. Then a weight function (a distinct notion from weighted monomial orders) can be introduced by setting
\begin{equation}\label{mudef}
\mu( w ) = \| \iota\circ w\|_1.
\end{equation}
\begin{defn}
[Weighted dual graphs]
A \emph{weighted dual graph} is a triple $(\mathcal{V},\mathcal{E}, \mu)$ such that $(\mathcal{V},\mathcal{E})$ is a dual graph and $\mu$ is a mapping as in \eqref{mudef}.
\end{defn}
Notice an edge between two nodes $w_1$ and $w_2$ exists only if
\[ \| \iota\circ w_1 - \iota\circ w_2 \|_1 = 1 . \]
With some abuse of notation, we can extend the definition of $\mu$ to monomials:
\begin{equation}\label{mudef ext}
\mu\left(\prod_{w \in \cC}t_{w^{-1}(1)}^{a_w} \right)
= \left\| \sum_{w \in \cC} a_w \iota \circ w \right\|_1.
\end{equation}
We say a binomial is of \emph{weight} $k$ if all the terms are of weight $k$.
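The weight of a monomial in \eqref{mudef ext} is straightforward to compute. A minimal sketch follows; codeword labels abbreviate supports, and all names are ours.

```python
def mu(monomial, code):
    """Weight of a monomial: the 1-norm of the summed codeword vectors.

    `monomial` maps codeword labels to exponents; `code` maps each label
    to its 0/1 vector, i.e. its image under the inclusion into Z^n.
    """
    n = len(next(iter(code.values())))
    total = [0] * n
    for label, power in monomial.items():
        for i, bit in enumerate(code[label]):
            total[i] += power * bit
    return sum(abs(v) for v in total)

CODE = {'1': (1, 0, 0), '3': (0, 0, 1), '12': (1, 1, 0),
        '13': (1, 0, 1), '123': (1, 1, 1)}
# Both terms of t_{12} t_{13} - t_{1} t_{123} have weight 4:
print(mu({'12': 1, '13': 1}, CODE), mu({'1': 1, '123': 1}, CODE))  # 4 4
```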
Determining the binomials in the toric ideal of a code from its Euler diagram can be reduced to finding specific subgraphs of the dual graph. We will now define subgraph embeddings.
\begin{defn}
[Weighted graph embeddings] We say the weighted dual graph $H = (\mathcal{V},\mathcal{E},\mu)$ is \emph{embedded} in the weighted dual graph $G = (\mathcal{W}, \mathcal{F}, \nu)$ if there is a one-to-one mapping $\phi: \mathcal{V} \to \mathcal{W}$ such that $\mu(x)= \nu(\phi(x))$ for all $x \in \mathcal{V}$ and the pair $\{\phi(x),\phi(y)\}$ belongs to $\mathcal{F}$ for all $\{x,y\}\in \mathcal{E}$.
\end{defn}
Notice we can extend any graph embedding $\phi$ to a map on the associated polynomial rings. Explicitly, let $H$ be a graph constructed from the neural code $\widetilde \cC$ which embeds via $\phi$ into a graph $G$ constructed from the neural code $\cC$. Let $\jmath$ (respectively $\widetilde \jmath$) map codewords in $\cC$ (respectively $\widetilde \cC$) to nodes of the graph $G$ (respectively $H$). Define the mapping
\[ \phi: \mathbb{C}[ t_{ v^{-1}(1) } : v \in \widetilde \cC ] \to \mathbb{C}[ t_{ w^{-1}(1) } : w \in \cC ] \]
such that $ \phi(t_{v^{-1}(1)}) = t_{w^{-1}(1)} $ if and only if $ \phi (\widetilde \jmath (v)) = \jmath ( w ) $.
\subsection{Results}
Here we will summarize the main results of this article.
First, we give two more definitions.
\begin{defn}
Given a code $\cC$, let
\[ \calA_\cC
= \{ t_{c_1^{-1}(1)} t_{c_2^{-1}(1)} - t_{c_3^{-1}(1)} \mid c_1,c_2,c_3\in \cC : c_1 + c_2 = c_3;\ \mu(c_1) = \mu(c_2) = 1 \}. \]
\end{defn}
\begin{defn}
A code $\cC \subset \{0,1\}^{[n]}$ is \emph{external} if there are $n$ distinct codewords $w_i$ such that $\mu(w_i) = 1$.
\end{defn}
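Both definitions are easy to test computationally. The sketch below (all names are ours) checks externality and assembles the data of $\calA_\cC$ for the full code on three neurons, reporting each binomial as a triple of codewords.

```python
from itertools import combinations

def weight(c):
    """Number of active neurons in a codeword."""
    return sum(c)

def is_external(code, n):
    """A code on n neurons is external when all n weight-one codewords appear."""
    return sum(1 for c in code if weight(c) == 1) == n

def quadratic_relations(code):
    """Triples (c1, c2, c3) with c1 + c2 = c3 and both summands of weight
    one; each encodes a binomial t_{c1} t_{c2} - t_{c3} of A_C."""
    singles = sorted(c for c in code if weight(c) == 1)
    found = []
    for c1, c2 in combinations(singles, 2):
        c3 = tuple(a + b for a, b in zip(c1, c2))
        if c3 in code:
            found.append((c1, c2, c3))
    return found

# The full code on three neurons: every codeword occurs.
FULL = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
        (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)}
print(is_external(FULL, 3), len(quadratic_relations(FULL)))  # True 3
```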
\begin{thm}\label{Thm Aext}
For an external code $\cC$, the indispensable binomials (defined in the next section) of $I_\cC$ are exactly those in $\calA_\cC$.
\end{thm}
In the last section we define a class of codes called \emph{internal codes}.
For these, we are able to find generating sets that are Gr\"obner bases for all term orders.
In fact, these \emph{universal Gr\"obner bases} consist entirely of quadratic binomials.
Before we proceed to the proof of Theorem \ref{Thm Aext}, we will examine 1-inductively pierced diagrams of maximum depth 1 in Section \ref{depth1}. For such diagrams we can identify exactly the set of indispensable binomials. These binomials are specified by their weighted graph embeddings which are depicted in Table \ref{table dep1}.
\commentout{
\subsection{Toric Ideals}
A \emph{Laurent polynomial over $\mathbb{C}$ in the variables $x_1,\dots,x_n$} is an expression of the form
\[
\sum_{a = (a_1,\dots,a_n) \in \mathbb{Z}^n} c_ax_1^{a_1}\cdots x_n^{a_n}
\]
where $c_a \in \mathbb{C}$ for each $a \in \mathbb{Z}^n$ and $c_a = 0$ for all but finitely many $a$.
The set of Laurent polynomials forms an algebra and is denoted $\mathbb{C}[x_1^{\pm 1},\dots,x_n^{\pm 1}]$.
Essentially, a Laurent polynomial is a polynomial which is allowed to have negative integer exponents.
Now, given a semigroup $S \subseteq \mathbb{Z}^n$, the \emph{semigroup algebra generated by $S$} is
\[
\mathbb{C}[S] := \mathbb{C}[x^a \mid a \in S] \subseteq \mathbb{C}[x_1^{\pm 1},\dots,x_n^{\pm 1}],
\]
where, if $a = (a_1,\dots,a_n)$, then $x^a := x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}$.
So, we have translated $S$, which only has one available operation, into an algebra, which has two available operations.
The addition in $S$ corresponds to multiplication of monomials in $\mathbb{C}[S]$, but it is less clear what addition of monomials in $\mathbb{C}[S]$ translates to in $S$ -- this will be exploited later.
Although polynomial rings are very familiar objects, it is hard to deal with a semigroup algebra directly.
Part of this is because there is no longer necessarily a \emph{unique} way to add and multiply generators together to obtain polynomials.
To demonstrate, let $A = \{(1,3),(2,2),(3,1)\}$.
The corresponding semigroup algebra over $\mathbb{C}$ is
\[
\mathbb{C}[\mathbb{N} A] = \mathbb{C}[x_1^ax_2^b \mid (a,b) \in \mathbb{N} A] = \mathbb{C}[x_1x_2^3, \, x_1^2x_2^2, \, x_1^3x_2].
\]
Notice that the monomial $x_1^4x_2^4$ can be obtained as product of the monomial generators in two ways: through the product $(x_1x_2^3)(x_1^3x_2)$ or through $(x_1^2x_2^2)^2$.
This is very very different from just $\mathbb{C}[x_1,\dots,x_n]$, where there is exactly one way to take sums and products of generators to obtain elements of the algebra.
This turns out to be a huge complication!
In order to get more control over semigroup algebras, we want to keep track of how their generators produce elements.
We do so in the following way: again suppose $A = \{a_1,\dots,a_m\} \subseteq \mathbb{Z}^n$, and $A$ is a set of minimal generators of $\mathbb{N} A$.
For each $a_i \in A$ let us associate a term $ x^{a_i} \equiv x_1^{a_{i,1}}\dots x_n^{a_{i,n}}$ and represent it by a variable $t_i$.
We thus have the homomorphism
\[
\pi_A: \mathbb{C}[t_1,\dots,t_m] \to \mathbb{C}[\mathbb{N} A]
\]
defined by $\pi_A(t_i) = x^{a_i}$
By construction $\pi_A$ is surjective,
hence, by the First Isomorphism Theorem,
\[
\mathbb{C}[t_1,\dots,t_m]/I_A \cong \mathbb{C}[\mathbb{N} A].
\]
where $I_A = \ker \pi_A$.
The ideal $I_A$ is called the \emph{toric ideal} of $A$, and contains an astounding amount of information about our original lattice points $A$.
An ideal $I$ of a ring $R$ is \emph{finitely generated} if there exist a finite number of elements $g_1,\dots,g_k \in I$ such that
\[
I = \{ \sum_{i = 1}^k r_ig_i \mid r_i \in R\}.
\]
When this happens, we write $I = (g_1,\dots,g_k)$ and say that $g_1,\dots,g_k$ are \emph{generators} of $I$.
There are typically many generating sets of an ideal, and rarely is there a ``natural'' choice of generators.
Since $\mathbb{C}$ is a field, Hilbert's Basis Theorem tells us that every ideal of $\mathbb{C}[t_1,\dots,t_m]$ is finitely generated.
In particular, toric ideals are finitely generated.
Here are some additional facts about toric ideals:
\begin{enumerate}
\item Toric ideals are generated by binomials of the form $t^a - t^b$ where $\pi(t^a) = \pi(t^b)$.
\item If $A \subseteq \mathbb{Z}^n$ is finite and generates $\mathbb{N} A$, then $I_A$ is generated by homogeneous polynomials (every monomial has the same total degree) if and only if every element of $A$ lies on a hyperplane that does \emph{not} pass through the origin.
\end{enumerate}
In order to extract more useful information from a toric ideal, we have to introduce the notion of a Gr\"obner basis.
}
\commentout{
\section{Toric Algebra and Weight Vectors}
\subsection{Toric Ideals}
Given a code $\cC$ on $n$ neurons, set
\[
\mathbb{C}[\cC] := \mathbb{C}[x^c \mid c \in \cC \setminus \{0\dots0\}] \subseteq \mathbb{C}[x_1,\dots,x_n]
\]
where, if $c = c_1\dotsc_n$, then $x^c := x_1^{c_1}x_2^{c_2}\cdots x_n^{c_n}$.
While $\mathbb{C}[\cC]$ is not a polynomial ring, it is a quotient of a polynomial ring.
Indeed, let $T_\cC = \mathbb{C}[t_{w^{-1}(c)} \mid c \in \cC \setminus \{0\dots0\}]$ and consider the map
\[
\pi_\cC: T_\cC \to \mathbb{C}[\cC]
\]
defined by setting $\pi_\cC(t_{w^{-1}(c)}) = x^c$ and then extending by homomorphism.
By construction, $\pi_\cC$ is surjective, so
\[
\mathbb{C}[\cC] \cong T_\cC/\ker \pi_\cC.
\]
The ideal $\ker \pi_C$, which we will denote by $I_\cC$, is the \emph{toric ideal} of $\cC$.
Toric ideals can be constructed from a wide variety of combinatorial objects, such as graphs, matroids, and polytopes.
In all cases, toric ideals contain an astounding amount of information about the original object that may have been otherwise completely opaque.
Now, we present some additional facts about toric ideals; see \cite{sturmfels} for more details about toric ideals.
First, toric ideals are generated by binomials of the form $t^a - t^b$ where $\pi(t^a) = \pi(t^b)$.
Moreover, $I_\cC$ is generated by homogeneous binomials if and only if there is some $\omega \in \mathbb{R}^n$ such that the dot product $c \cdot \omega = 1$ for every nonzero codeword $c$ in $\cC$.
There are many more useful facts surrounding toric ideals, but in order to make use of them, we must introduce the notion of a Gr\"obner basis.
\subsubsection{Gr\"obner bases}
A \emph{monomial order}, or \emph{term order} on $\mathbb{C}[t_1,\dots,t_m]$ is a total ordering $\prec$ of its monomials such that
\begin{enumerate}
\item $\prec$ respects multiplication: if $u,v,w$ are monomials and $u \prec v$, then $uw \prec vw$, and
\item $\prec$ is a well-ordering: $1 \prec u$ for all monomials $u$.
\end{enumerate}
As a first example, we describe a well-known and computationally-efficient order.
The \emph{graded reverse lexicographic order}, or simply \emph{grevlex} order, on $\mathbb{C}[t_1,\dots,t_n]$ is denoted by $\prec_{\grevlex}$ and sets
For a second example, let $w \in \mathbb{R}^m$ and let $\sigma$ denote some monomial order.
The \emph{weight order} $\prec_{w,\sigma}$ determined by $w$ and $\sigma$ sets $t^a \prec_{w,\sigma} t^b$ if and only if $w\cdot a < w \cdot b$, where $\cdot$ is the dot product.
Notice that $\prec_{\glex} = \prec_{(1,\dots,1),\lex}$.
We cannot guarantee that that either $w\cdot a < w \cdot b$ or $w\cdot b < w \cdot a$, so $\sigma$ is used as a ``tie-breaker'' in order to get a single monomial out.
If $\prec$ is a monomial ordering of $\mathbb{C}[t_1,\dots,t_m]$, then each polynomial $f \in \mathbb{C}[t_1,\dots,t_m]$ has a unique \emph{initial term}, $\init_{\prec}(f)$, which is $\prec$-first among the monomials of $f$.
This further leads to the \emph{initial ideal} of an ideal $I$, defined as
\[
\init_{\prec}(I) = (\init_{\prec}(f) \mid f \in I)
\]
Although $\init_{\prec}(I)$ typically contains an infinite number of polynomials, it is still an ideal of $\mathbb{C}[t_1,\dots,t_m]$, so by Hilbert's Basis Theorem, it is finitely generated.
If $I = (g_1,\dots,g_k)$, it is not necessarily true that $(\init_{\prec}(g_1),\dots,\init_{\prec}(g_k))$ equals $\init_{\prec}(I)$.
However, \emph{if} this does happen, then we call $\{g_1,\dots,g_k\}$ a \emph{Gr\"obner basis of $I$ with respect to $\prec$}.
Given a finite set of generators for $I$, Buchberger's Algorithm provides a way to produce a Gr\"obner basis from it. In general, this algorithm will produce a Gr\"obner basis.
Because this algorithm allows for choices to be made, many resulting Gr\"obner bases are possible.
However, by imposing additional conditions, we can obtain a canonical choice of Gr\"obner basis.
Namely, we say a Gr\"obner basis $\mathcal{G}$ is \emph{reduced} if the leading coefficient (with respect to $\prec$) of every element is $1$ and if, for every $g,g' \in \mathcal{G}$, $\init_{\prec}(g)$ does not divide any term of $g'$.
Although there are many Gr\"obner bases for a given ideal and monomial order, there is a unique reduced Gr\"obner basis.
Another special kind of Gr\"obner basis will be of use to us.
We say that a Gr\"obner basis $\mathcal{G}$ of an ideal $I$ is a \emph{universal Gr\"obner basis} if it is a Gr\"obner basis with respect to any monomial order.
Because $I$ has finitely many initial ideals, we can always construct a finite universal Gr\"obner basis by taking the union of all reduced Gr\"obner bases of $I$.
This set is what we will canonically call \emph{the universal Gr\"obner basis} of $I$.
For each $n$, there exists a monomial order such that a code is $0$- or $1$-inductively pierced if and only if the reduced Gr\"obner basis contains binomials of degree $2$ or less.
}
\section{Depth 1 diagrams}\label{depth1}
\subsection{Special Binomials}
While computing the reduced Gr\"obner basis of a code is generally very difficult, codes arising from Euler diagrams that fall in certain classes have toric ideals that are easy to describe. In this case, there are certain binomials which are required to be present in a generating set of the ideal.
More precisely, a binomial $f$ is called \emph{indispensable} if, for any set $\mathcal{B}$ of binomial generators of the ideal, $f \in \mathcal{B}$ or $-f \in \mathcal{B}$.
We will introduce several special subgraphs that naturally give rise to the indispensable binomials.
\begin{table}[h]
\centering
\begin{tabular}{| >{\centering\arraybackslash}m{.5in}| *2{>{\centering\arraybackslash}m{1.75in}|} >{\centering\arraybackslash}m{1.25in}| @{}m{0pt}@{}}
\hline
Type & Binomial & Euler diagram & Dual graph
\\ \hline
1 &
$t_{\{1\}}t_{\{2\}} - t_{\{1,2\}}$ & \includegraphics[page=2,scale=.6]{lozenge.pdf} &
\includegraphics[page=1,scale = .5]{lozenge.pdf}
\\ \hline
2 & $t_{\{2\}}t_{\{1,3 \}} - t_{\{1,2,3\}}$ &
\includegraphics[page=2,scale=.6]{domino.pdf}
&
\includegraphics[page=1,scale = .5]{domino.pdf}
\\ \hline
3 & $t_{\{1,2\}}t_{\{1,3 \}} - t_{\{1\}}t_{\{1,2,3\}}$ &
\includegraphics[page=2,scale = .6]{lollipop.pdf}
&
\includegraphics[page=1,scale = .5]{lollipop.pdf}
\\ \hline
4
& $t_{\{1,2,3\}}t_{\{1,4 \}} - t_{\{1,2\}}t_{\{1,3,4\}}$
&
\includegraphics[page=2,scale = .6]{flower.pdf}
&
\includegraphics[page=1,scale = .5]{flower.pdf}
\\ \hline
5
& $t_{\{1,2,3\}}t_{\{1,4 \}} - t_{\{1,2\}}t_{\{1,3,4\}}$
&
\includegraphics[page=4,scale = .6]{flower.pdf}
&
\includegraphics[page=3,scale = .5]{flower.pdf}
\\ \hline
6
& $t_{\{1,2,3\}}t_{\{1,4 \}} - t_{\{1,2\}}t_{\{1,3,4\}}$
&
\includegraphics[page=6,scale = .6]{flower.pdf}
&
\includegraphics[page=5,scale = .5]{flower.pdf}
\\ \hline
\end{tabular}
\caption{Indispensable binomials for $1$-inductively pierced diagrams of depth $1$}
\label{table dep1}
\end{table}
In Table \ref{table dep1} we list several diagrams and their associated dual graphs. Any $1$-inductively pierced diagram with depth less than or equal to $1$ has indispensable binomials that are determined by Table \ref{table dep1}.
\begin{thm}\label{thm dep1 1p}
Let $\cC$ be a neural code such that its associated diagram $\mathcal{D}$ is a $1$-inductively pierced diagram of depth $\leq 1$. Let $ G = (\mathcal{V},\mathcal{E},\mu)$ be the dual graph associated to $\mathcal{D}$. If $\widetilde \mathcal{D}$ is a diagram such that its dual graph $ H = (\mathcal{W},\mathcal{F},\nu) $ belongs to Table \ref{table dep1} and has an embedding $\phi$ into $G$, then the image of the associated binomial under $\phi$ is an indispensable binomial for the kernel of $\pi_{\cC}$. Moreover, the kernel of $\pi_{\cC}$ has no other indispensable binomials.
\end{thm}
The theorem is an easy consequence of Lemmas \ref{Lemma wg 23} -- \ref{Lemma wg 6}.
\subsection{Binomials}
\begin{lemma}\label{Lemma wg 23}
The toric ideal of a diagram $\mathcal{D}$ has an indispensable binomial of weight 2 (respectively weight 3) if and only if there is a weighted graph embedding from a Type 1 graph (respectively Type 2 graph) to the weighted dual graph of $\mathcal{D}$.
\end{lemma}
\begin{proof}
Monomials of weight 2 arise only in the forms $t_{\{1\}}t_{\{2\}}$ and $ t_{\{1,2\}}$. Thus the only binomials of weight 2 are of the form $t_{\{1,2\}} - t_{\{1\}}t_{\{2\}}$.
Monomials of weight 3 arise only as $t_{\{1,2,3\}}$, as $t_{\{1,2\}}t_{\{3\}}$, or as products of more variables. Such binomials arise only from a `stacked lozenge' diagram.
Notice the alternative pair of monomials $t_{\{1,2\}}t_{\{3\}}$ and $t_{\{1\}} t_{\{2,3\}}$. The existence of both of these monomials implies that the zone $\{2\}$ exists. Then the binomial $t_{\{1,2\}}t_{\{3\}}-t_{\{1\}} t_{\{2,3\}}$ is generated by the binomials $t_{\{1,2\}} - t_{\{1\}} t_{\{2\} }$ and $t_{\{2,3\}} - t_{\{2\}} t_{\{3\} }$.
\end{proof}
\begin{lemma}\label{Lemma wg 4}
The toric ideal of a diagram $\mathcal{D}$ has an indispensable binomial of weight 4 if and only if there is a weighted graph embedding from a Type 3 graph to the weighted dual graph of $\mathcal{D}$.
\end{lemma}
\begin{proof}
Monomials of weight 4 arise from terms $t_{\{i,j\}}t_{\{k,l\}}$ and $t_{\{i,j,k\}} t_{\{l\}}$.
Let us begin with monomials of the form $t_{\{i,j\}}t_{\{k,l\}}$.
We rule out the weight $2 + 2$ term with $i = k$, $j = l$, i.e.\ $t_{\{1,2\}}t_{\{1,2\}}$, as it has no linear or quadratic balancing monomials. Now consider terms of the form $t_{\{1,2\}}t_{\{1,3\}}$. Since $\{1,2\},\{1,3\} \in \cC $, we have $\{1\} \in \cC$. The only possible nontrivial balancing monomial is $t_{\{1,2,3\}} t_{\{1\}} $, which requires that the zone $\{1,2,3\} $ exists. In this case we have $p=t_{\{1,2\}}t_{\{1,3\}} -t_{\{1,2,3\}} t_{\{1\}} $ in the kernel. If in addition $\{2\} \in \cC$, then $p$ is generated by the binomials $t_{\{1,2\}} - t_{\{1\}} t_{\{2\}} $ and $t_{\{1,2,3\}} - t_{\{1,3\}}t_{\{2\}}$. If neither $\{2\}$ nor $\{3\}$ exists then $U_2\cup U_3 \subset U_1$, so the binomial arises from the lollipop diagram.
Finally, consider the monomial $t_{\{1,2\}} t_{\{3,4\}} $ and note that the binomial $t_{\{1,2\}} t_{\{3,4\}} - t_{\{2,3\}} t_{\{1,4\}} $ is not permitted in a $1$-inductively pierced diagram.
If $\{1,2,3\},\{4\} \in \cC$ then we have the binomial $ p = t_{\{1,2\}} t_{\{3,4\}} - t_{\{1,2,3\}} t_{\{4\}} $, but then $\{3\}\in \cC$, so $ p $ is generated by $t_{\{3,4\}} - t_{\{3\}}t_{\{4\}}$ and $t_{\{1,2,3\}} - t_{\{1,2\}} t_{\{3\}}$.
This concludes all possible binomials containing a weight 2 + 2 term.
Now we consider terms of the form $t_{\{i,j,k\}} t_{\{l\}}$. We only need to consider binomials with balancing terms which are weight $3 + 1$. The possibilities are $t_{\{1,2,3\}} t_{\{4\}}$ or $t_{\{1,2,3\}} t_{\{1\}}$. There are no possible balancing weight $4 = 3+1$ monomials for the second type. For the first type the only balancing term is $t_{\{2,3,4\}} t_{\{1\}}$; clearly this does not exist if $U_1 \subset U_2 \cup U_3 $. We may assume that $U_3 \subset U_1$ and $U_3 \cap U_2 \neq \emptyset$. But if the zones $\{1\}$, $\{1,2,3\}$, $\{2,3,4\}$, and $\{4 \} $ exist, this requires adding a lozenge to the stick-and-lozenge diagram with boundary incident on two sides.
\end{proof}
\begin{lemma}\label{Lemma wg 5}
The toric ideal of a diagram $\mathcal{D}$ has an indispensable binomial of weight 5 if and only if there is a weighted graph embedding from a Type 4, 5, or 6 graph to the weighted dual graph of $\mathcal{D}$.
\end{lemma}
\begin{proof}
Monomials of weight 5 arise only as weight $3+2$ terms: $t_{\{i,j,k\}}t_{\{l,m\}}$. We will write these as $ t_{\{1,2,3\}}t_{\{l,m\}}$ with $ U_3 \subset U_1$.
Note that, if $U_3 \subset U_1$, $ t_{\{1,2,3\}}t_{\{2,3\}}$ does not exist, and $t_{\{1,2,3\}}t_{\{1,m\}}$ for $m = 2,3$ has no balancing term.
Let us consider $ t_{\{1,2,3\}}t_{\{1,4\}}$. If $U_4 \cap U_2 \neq \emptyset $ then $U_4 \subset U_1 $; the only permitted balancing term is $ t_{\{1,2,4\}}t_{\{1,3\}}$, and this arises only as two $1$-piercings within $U_1$. Now let us consider $ t_{\{1,2,3\}}t_{\{2,4\}}$; the only possible balancing monomials are $t_{\{1,2\}}t_{\{2,3,4\}}$ or $t_{\{2,3\}}t_{\{1,2,4\}}$, but neither the zone $\{2,3\}$ nor the zone $\{2,3,4\}$ is permitted.
Finally, let us consider $a = t_{\{1,2,3\}}t_{\{4,5\}}$; note that $(U_4\cup U_5 )\cap U_i$ can only be nonempty for one of $i = 1,2$. If $(U_4\cup U_5 )\cap U_1$ is nonempty, we have the zones $\{1,4,5\}$ or $\{1,4\}$; however, the zones $\{2,3\}$ and $\{2,3,5\}$ do not exist, so these do not correspond to balancing monomials for $a$. On the other hand, if $(U_4\cup U_5 )\cap U_2$ is nonempty and $U_5 $ is contained in $U_4$ and is a piercing of $U_2$, then we have the binomial $ t_{\{1,2,3\}}t_{\{4,5\}} - t_{\{2,4,5\}}t_{\{1,3\}} $, but this binomial is generated by the pair $t_{\{1,2,3\}} -t_{\{1,3\}}t_{\{2 \}}$ and $t_{\{2,4,5\}} -t_{\{4,5\}}t_{\{2 \}}.$
\end{proof}
\begin{lemma}\label{Lemma wg 6}
The toric ideal of any Euler diagram $\mathcal{D}$ has no indispensable binomials of weight 6 or higher.
\end{lemma}
\begin{proof}
Weight 6 monomials arise only as weight 3+3 terms.
If the zones have two curves in common, say $\{1,2,3\}$ and $\{1,2,4\}$, then a balancing monomial would contain the zone variable corresponding to the zone $\{1,2,3,4\}$, which does not exist.
Suppose the zones have one curve in common, say $\{1,2,3\}$ and $\{1,4,5\}$; then $U_1 $ cannot be contained in $U_2\cup U_3$. If the zone $\{1,3,5\}$ exists then $U_3\cup U_5 \subset U_1$, but then the zone $\{1,4,5\}$ only exists if $U_4\subset U_1$, so the balancing term is $t_{\{1,3,5\}}t_{\{1,2,4\}}$. Again, if the zone $\{1,2,4\}$ exists then the zone $\{1,3,5\}$ does not exist, as loops are prohibited in the piercing graph.
Finally, suppose the two zones have no curves in common $\{1,2,3\}$ and $\{4,5,6\}$. We must have $U_3 \subset U_1$ and $U_6 \subset U_4$. But if the zone $\{2,4,6 \}$ exists, the zone $\{1,3,5\} $ is prohibited as this would violate the piercing construction.
\end{proof}
\section{External Diagrams}
\begin{defn} Let $\mathcal{D}$ be a well-formed Euler diagram on curves $ \{ \lambda_1, \ldots, \lambda_n \} $ with corresponding interiors $ \{ U_1, \ldots, U_n\} $.
If
\[
U_i \setminus \bigcup_{j \neq i} U_j \neq \emptyset
\]
for each $i \in [n]$, then $\mathcal{D}$ is called an \emph{external} Euler diagram.
If a code has an external Euler diagram as a realization, then we call the code \emph{external} as well.
\end{defn}
\begin{figure}[h]
\center
\includegraphics[width=4in]{external_3.jpg}
\caption{An external Euler diagram (left) and an Euler diagram that is not external (right).}\label{fig:externals}
\end{figure}
In Figure~\ref{fig:externals}, the diagram on the left is external, as none of the $\lambda_i$ are completely contained in the interior of the others. However, the diagram on the right is not external, as $\mu_2$ is contained within the interior of $\mu_1$.
We point out here that external diagrams on $n$ neurons will always induce a code containing the $n$ codewords where all but one entry is zero.
\begin{defn}
Let $\cC$ be an external code and $c \in \cC$.
The \emph{support} of $c = c_1\dots c_n$, which we denote $\supp(c)$, is the set
\[
\supp(c) = \{ i \mid c_i \neq 0\}.
\]
The \emph{weight} of $c$, which we denote $\wt(c)$, is $\wt(c) = |\supp(c)|$.
\end{defn}
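As a small computational illustration (ours, not part of the paper; the helper names are our own), the support and weight of a codeword can be computed directly:

```python
def supp(c):
    """Support of a codeword given as a 0/1 tuple, using 1-based indices."""
    return {i + 1 for i, ci in enumerate(c) if ci != 0}

def wt(c):
    """Weight of a codeword: the size of its support."""
    return len(supp(c))

print(supp((1, 0, 1)), wt((1, 0, 1)))  # {1, 3} 2
```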
It is easy to see then that, for an external Euler diagram, any codeword can be written as the sum of codewords with weight one. This gives us a set of nice binomials that we know must be in the toric ideal of any external diagram.
\begin{example}\label{ex: example GB}
Consider Figure~\ref{fig:externals}. The corresponding code of this diagram is
\[
\cC_1 = \{000, 100 , 010, 001, 110, 101, 011, 111 \}.
\]
Two reduced Gr\"obner bases of the toric ideal $I_{\cC_1}$ are
\[
\begin{aligned}
G_1 = & \{ t_{\{1,3\}} t_{\{2,3\}} - t_{\{3\}}t_{\{1,2,3\}},
t_{\{1,2\}} t_{\{2,3\}} - t_{\{2\}} t_{\{1,2,3\}},
t_{\{1\}} t_{\{2,3\}} - t_{\{1,2,3\}},
t_{\{1,2\}} t_{\{1,3\}} - t_{\{1\}} t_{\{1,2,3\}}, \\
& t_{\{2\}} t_{\{1,3\}} - t_{\{1,2,3\}},
t_{\{3\}} t_{\{1,2\}} - t_{\{1,2,3\}},
t_{\{2\}} t_{\{3\}} - t_{\{2,3\}},
t_{\{1\}} t_{\{3\}} - t_{\{1,3\}},
t_{\{1\}} t_{\{2\}} - t_{\{1,2\}}\},
\end{aligned}
\]
for which the grevlex ordering is used, and
\[
G_2 = \{t_{\{2,3\}} - t_{\{2\}} t_{\{3\}} , t_{\{1,3\}} - t_{\{1\}} t_{\{3\}} , t_{\{1,2\}} - t_{\{1\}} t_{\{2\}} , t_{\{1,2,3\}} - t_{\{1\}} t_{\{2\}} t_{\{3\}}\},
\]
for which the weight vector $(0,0,0,1,1,1,2)$, and ties decided by the grevlex ordering, is used.
The only binomials that are in both $G_1$ and $G_2$ are $t_{\{1,2\}} - t_{\{1\}} t_{\{2\}}$, $t_{\{1,3\}} - t_{\{1\}} t_{\{3\}}$, and $t_{\{2,3\}} - t_{\{2\}} t_{\{3\}}$.
It turns out that this is not a coincidence, as they are indispensable binomials.
\end{example}
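As a hedged sanity check (our own, assuming the usual monomial map that sends each zone variable $t_S$ to $\prod_{i \in S} x_i$), one can verify symbolically that every binomial of $G_2$ lies in the toric ideal $I_{\cC_1}$:

```python
from sympy import symbols

x = symbols('x1:4')  # x1, x2, x3

def pi(S):
    """Image of the zone variable t_S: the product of x_i for i in S."""
    m = 1
    for i in S:
        m *= x[i - 1]
    return m

# The four binomials of G_2, each as (first monomial, second monomial).
G2 = [
    (pi((2, 3)), pi((2,)) * pi((3,))),
    (pi((1, 3)), pi((1,)) * pi((3,))),
    (pi((1, 2)), pi((1,)) * pi((2,))),
    (pi((1, 2, 3)), pi((1,)) * pi((2,)) * pi((3,))),
]

vanishes = all(a - b == 0 for a, b in G2)
print(vanishes)  # True: each binomial maps to 0, so it lies in the toric ideal
```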
Given a code $\cC$, let
\[
\calA_\cC = \{t_{w^{-1}(c_i)} t_{w^{-1}(c_j)} - t_{w^{-1}(c_k)} \mid c_i,c_j,c_k \in \cC, \, c_i + c_j = c_k, \, \wt(c_i)=\wt(c_j)=1\}.
\]
\begin{prop}\label{prop: A indispensable}
If $\cC$ is an external code, then the binomials in $\calA_\cC$ are indispensable.
\end{prop}
\begin{proof}
Let $b \in \calA_\cC$.
If $b$ did not appear in some binomial generating set $G$ of $I_\cC$, then $b$ would be a polynomial combination of the other binomials in $G$.
However, one of the monomials of $b$ has degree $1$, and the toric ideal has no elements of degree $0$.
So, any product of binomials in $G$ must result in a polynomial whose terms all are of degree $2$ or higher.
Therefore, $b$ must be indispensable.
\end{proof}
As a result, we know that for any external code, the binomials in $\calA_\cC$ will, up to sign, appear in every reduced Gr\"obner basis of $I_\cC$. For the toric ideal of the code $\cC_1$ from Example~\ref{ex: example GB}, it is unsurprising that $\calA_{\cC_1}$ is exactly the set of indispensable binomials. However, it is not obvious that these two sets of binomials coincide for all external codes.
Let $\omega$ be the weight vector on $\mathbb{C}[t_{w^{-1}(c_1)} , \ldots , t_{w^{-1}(c_n)}]$ satisfying $\omega_i = \wt(c_i) - 1$. Define $\prec_\omega$ to be the monomial ordering on $\mathbb{C} [ t_{w^{-1}(c_1)} , \ldots , t_{w^{-1}(c_n)} ]$ where $t^{\alpha} \prec_\omega t^{\beta}$ if $\omega \cdot \alpha < \omega \cdot \beta$, with ties being determined by the grevlex order.
Recall as well that $c_0 = 0\dots0$.
\begin{lem}\label{lem: lead terms}
If $c_i \in \cC$ and $c_i = \sum_{c \in \cC'} c$ for some subset $\cC'$ of $\cC \setminus \{c_0,c_i\}$, then $\prod_{c \in \cC'} t_{w^{-1}(c)} \prec_\omega t_{w^{-1}(c_i)}$.
\end{lem}
\begin{proof}
Let $c_i \in \cC$ with $\wt(c_i) = k$. Assume that $c_i = \sum_{c \in \cC'} c$ for some subset $\cC'$ of $\cC \setminus \{c_0,c_i\}$ with $|\cC'| = m \geq 2$. Then $\sum_{c \in \cC'} \wt(c) = k$, and we see that $\omega \cdot t_{w^{-1}(c_i)} = k-1$. Moreover,
\[
\omega \cdot \prod_{c \in \cC'} t_{w^{-1}(c)} = \sum_{c \in \cC'} \omega(c) = k - |\cC'| = k-m < k-1 = \omega \cdot t_{w^{-1}(c_i)},
\]
as desired.
\end{proof}
Since we are concerned with only external diagrams, we know that any $c_i \in \cC$ with $\wt(c_i) \geq 2$ is the sum of some set of $c_j \in \cC$ with $\wt(c_j) = 1$. This implies that we have polynomials in our toric ideal with one linear term $t_{w^{-1}(c_i)}$ and one term with degree $\wt(c_i)$.
\begin{lem}\label{lem: create B}
Let $\cC$ be an external code.
The reduced Gr\"obner basis of $I_{\cC}$ with respect to $\prec_\omega$ is exactly the set
\[
\mathcal{B} = \left\{ t_{w^{-1}(c)} - \prod_{j \in \supp(c)} t_{w^{-1}(e_j)} \mid \wt(c) \geq 2\right\},
\]
where $e_j$ is the $j^{th}$ standard basis vector.
\end{lem}
\begin{proof}
Let $c \in \cC$ with $\wt(c) = k \geq 2$, and set
\[
g_c = t_{w^{-1}(c)} - \prod_{j \in \supp(c)} t_{w^{-1}(e_j)}.
\]
By construction, $g_c \in I_{\cC}$. By Lemma~\ref{lem: lead terms}, we know that $t_{w^{-1}(c)}$ will be the initial term of this binomial with respect to $\prec_\omega$. So, for all $c \in \cC$ with $\wt(c) \geq 2$, we have $t_{w^{-1}(c)} \in \init_{\prec}(I_{\cC})$.
Let $\mathcal{G}$ be the reduced Gr\"obner basis of $I_{\cC}$ with respect to $\prec_\omega$. So, for all $c \in \cC$ with $\wt(c) \geq 2$, there exists $g \in \mathcal{G}$ such that $\init_{\prec}(g) = t_{w^{-1}(c)}$. Also, for all $c' \in \cC$ with $\wt(c') = 1$, there does not exist $p \in I_{\cC}$ with $\init_{\prec}(p) = t_{w^{-1}(c')}$, since, also by construction, there can be no binomial $t_{w^{-1}(c')} - t_{w^{-1}(c'')}$ in $I_{\cC}$ where $\wt(c') = \wt(c'') = 1$.
Let $g \in \mathcal{G}$ with $\init_{\prec}(g) = t_{w^{-1}(c)}$ for some $c \in \cC$ with $\wt(c) \geq 2$. Since $\mathcal{G}$ is a reduced Gr\"obner basis, we know that no initial term of any $g' \in \mathcal{G}$, $g' \neq g$, can divide either term of $g$. But for all $c' \in \cC$ with $\wt(c') \geq 2$, $t_{w^{-1}(c')} \in \init_{\prec}(I_\cC)$. So, the nonlinear term of $g$ must be the product of some $t_{w^{-1}(c_l)}$ with $\wt(c_l) = 1$.
This forces $g = g_c$.
Moreover, there can be no binomials $g'$ for which $\init_{\prec}(g')$ has degree at least $2$, since $\init_{\prec}(g')$ would then be divisible by $\init_{\prec}(g_c)$ for some $c$.
Thus, $\mathcal{G} = \mathcal{B}$.
\end{proof}
While this lemma gives us a reduced Gr\"obner basis of $I_\cC$, not all of the binomials are indispensable, as evidenced by Example~\ref{ex: example GB}.
\begin{lem}\label{lem: elements of B in A}
The only binomials from $\mathcal{B}$ in the reduced Gr\"obner basis of $I_{\cC}$ with respect to grevlex are the binomials in $\calA_\cC$.
\end{lem}
\begin{proof}
Choose an order $c_1, \ldots , c_n$ of the codewords in $\cC$ such that $\wt(c_i) < \wt(c_j)$ implies $i < j$. Let $\mathcal{G}$ be the reduced Gr\"obner basis with respect to grevlex. We have shown in Proposition~\ref{prop: A indispensable} that the binomials in $\calA_\cC$ are indispensable, so $\calA_\cC \subseteq \mathcal{G}$.
Consider a binomial $g \in \mathcal{B} \setminus \calA_\cC$. Then
\[
g = t_{w^{-1}(c)} - \prod_{j \in \supp(c)} t_{w^{-1}(e_j)}
\]
for some $c \in \cC$ with $\wt(c) = k \geq 3 $. Since $\cC$ is external, there are variables $t_{w^{-1}(e_i)},t_{w^{-1}(e_j)}$ dividing the nonlinear term of $g$ such that $g' = t_{w^{-1}(c')}-t_{w^{-1}(e_i)} t_{w^{-1}(e_j)}$ lies in $\calA_\cC$ for some $c' \in \cC$. So, $g' \in \mathcal{G}$. But $\init_{\prec}(g')$ divides $\init_{\prec}(g)$, so $g \notin \mathcal{G}$. Hence no element of $\mathcal{B}$ outside of $\calA_\cC$ appears in $\mathcal{G}$.
\end{proof}
\begin{thm}
For an external code $\cC$, the indispensable binomials of $I_{\cC}$ are exactly those in $\calA_\cC$.
\end{thm}
\begin{proof}
This follows directly from Proposition~\ref{prop: A indispensable} and Lemmas~\ref{lem: create B} and \ref{lem: elements of B in A}.
\end{proof}
So for any given external diagram, we know exactly which binomials must be in a reduced Gr\"obner basis of its toric ideal. However, other binomials clearly appear in some reduced Gr\"obner bases of the toric ideals of external diagrams. For certain classes of external diagrams, there is even more to say about Gr\"obner bases.
To close this section, we will use graphs to help us describe properties of toric ideals of external codes.
Given an external code $\cC$ on neurons $\lambda_1,\dots,\lambda_k$, let $\Delta_\cC$ denote the graph with vertices $1,\dots, k$ and edges $\{i, j\}$ if $e_i+e_j$ is a codeword in $\cC$.
\begin{lem}\label{lem: distance two}
Let $\cC$ be a code such that $\Delta_\cC$ is a tree.
If $i,j$ are vertices of $\Delta_\cC$ that are distance two apart, then there exists a unique vertex $k$ for which $t_{w^{-1}(c_i)}t_{w^{-1}(c_j)+w^{-1}(c_k)} - t_{w^{-1}(c_i)+w^{-1}(c_k)}t_{w^{-1}(c_j)} \in I_\cC$.
\end{lem}
The proof of this lemma is short and straightforward, so it is omitted.
For the next result, if $v$ is a vertex of a graph, let $d(v)$ denote the degree of $v$.
\begin{thm}
Let $\cC$ be an external code such that $\Delta_\cC = (V,E)$ is a tree.
In the universal Gr\"obner basis of $I_\cC$, there are $\sum_{v \in V} \binom{d(v)}{2}$ polynomials of the form $t_{w^{-1}(c_1)}t_{w^{-1}(c_2)} - t_{w^{-1}(c_3)}t_{w^{-1}(c_4)}$ for $c_1,\dots,c_4 \in \cC$ and $c_1+c_2=c_3+c_4$.
\end{thm}
\begin{proof}
Suppose $\Delta_\cC$ is a tree, and let $i, j,k$ be vertices of $\Delta_\cC$ such that $k$ is adjacent to both $i$ and $j$.
By Lemma~\ref{lem: distance two}, we know that $p(t) = t_{w^{-1}(c_i)}t_{w^{-1}(c_j)+w^{-1}(c_k)} - t_{w^{-1}(c_i)+w^{-1}(c_k)}t_{w^{-1}(c_j)} \in I_\cC$.
Without loss of generality let $\init_{\prec}(p(t)) = t_{w^{-1}(c_i)}t_{w^{-1}(c_j)+w^{-1}(c_k)}$.
Now, we will show that $p(t)$ is in some reduced Gr\"obner basis $\mathcal{G}$ of $I_\cC$.
Consider the grevlex order.
In this case, no binomial in $\mathcal{G}$ has a linear initial term.
Thus, the only way for $\init_{\prec}(p(t))$ to be divisible by an initial term of a binomial in $\mathcal{G}$ is if that term is $\init_{\prec}(p(t))$ itself.
So, $\init_{\prec}(p(t))$ is the initial term of a binomial $b$ in $\mathcal{G}$.
Since $\pi_\cC(\init_{\prec}(p(t))) = x_ix_jx_k$,
there are three possibilities for the non-initial term of $b$: $t_{w^{-1}(c_i)+w^{-1}(c_j)+w^{-1}(c_k)}$, $t_{w^{-1}(c_i)}t_{w^{-1}(c_j)}t_{w^{-1}(c_k)}$, and $t_{w^{-1}(c_i)+w^{-1}(c_k)}t_{w^{-1}(c_j)}$.
The first possibility cannot happen since $\Delta_\cC$ is a tree, meaning $|\supp(t_{w^{-1}(c)})| \leq 2$ for all $c \in \cC$. The second possibility also cannot occur since, otherwise, it would be the initial term of $b$ under grevlex.
This leaves one possibility, hence $p(t) \in \mathcal{G}$.
Since we obtain a polynomial $p(t)$ for each vertex in $\Delta_\cC$ that is the midpoint of a length-two path, the number of homogeneous quadratic binomials in the universal Gr\"obner basis is the same as the number of paths in the tree of length two, which are enumerated by
\[
\sum_{v\in V}\binom{d(v)}{2}. \qedhere
\]
\end{proof}
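The final count can be sanity-checked on a small example. The following sketch (ours, standard library only; the tree on five vertices is chosen arbitrarily) compares the degree formula $\sum_{v}\binom{d(v)}{2}$ with a direct enumeration of length-two paths:

```python
from itertools import combinations
from math import comb

edges = [(1, 2), (1, 3), (1, 4), (4, 5)]  # a tree on five vertices

deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

# Degree formula from the theorem.
by_degrees = sum(comb(d, 2) for d in deg.values())

# Direct count: a length-two path is an unordered pair of edges sharing a vertex.
direct = sum(1 for e, f in combinations(edges, 2) if set(e) & set(f))

print(by_degrees, direct)  # 4 4
```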
\section{Internal Codes}
In this section, we focus on a particular class of codes.
Let the $n^{th}$ \emph{internal code} be the code $\mathcal{L}_n = \{0\dots0,c_1,\dots,c_{2n}\}$ where the nonzero codewords are
\[
c_j = \begin{cases}
e_1 + \sum_{i=2}^j e_i & \text{ if } j \leq n, \\
c_{j-n}-e_1 & \text{ if } j > n.
\end{cases}
\]
It is easy to see that this code has a corresponding Euler diagram:
\begin{center}
\begin{tikzpicture}
\draw (-3.5,-3.5) -- (3.5,-3.5) -- (3.5,3.5) -- (-3.5,3.5) -- cycle;
\draw (0,0) circle [radius=3];
\draw (-1,0) circle [radius=1.75];
\draw (1,0) circle [radius=1.75];
\draw (0.75,0) circle [radius=1.25];
\draw [dashed] (0.7,0) circle [radius=1.12];
\draw [dashed] (0.575,0) circle [radius=0.875];
\draw (0.5,0) circle [radius=0.75];
\node at (-2.25,0) {\small{$\lambda_1$}};
\node at (-2.5,2.5) {\small{$\lambda_2$}};
\node at (2.45,0) {\small{$\lambda_3$}};
\node at (1.66,0) {\tiny{$\cdots$}};
\node at (1.05,0) {\small{$\lambda_n$}};
\end{tikzpicture}
\end{center}
Call a binomial $t^a - t^b \in I_\cC$ \emph{primitive} if there is no binomial $t^u - t^v \in I_\cC$ such that both $t^u$ divides $t^a$ and $t^v$ divides $t^b$.
The \emph{Graver basis} of $I_\cC$ is the set of all primitive binomials in $I_\cC$.
Since every binomial in a reduced Gr\"obner basis of a toric ideal is primitive, the Graver basis will contain the universal Gr\"obner basis of $I_\cC$.
In fact, in certain cases, the Graver basis is identical to the universal Gr\"obner basis.
\begin{defn}
Let $A$ be a $k\times n$ matrix.
Its \emph{Lawrence lifting} is
\[
\Lambda(A) = \begin{bmatrix}
A & 0_{k \times n} \\
I_n & I_n
\end{bmatrix}
\]
where $0_{k \times n}$ is the $k\times n$ zero matrix and $I_n$ is the $n\times n$ identity matrix.
\end{defn}
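A direct encoding of this definition (a sketch of ours, using numpy; the choice $n=4$ is for illustration only) applied to the $1\times n$ matrix of all ones:

```python
import numpy as np

def lawrence_lifting(A):
    """Return Lambda(A) = [[A, 0], [I_n, I_n]] for a k x n matrix A."""
    k, n = A.shape
    top = np.hstack([A, np.zeros((k, n), dtype=A.dtype)])
    bottom = np.hstack([np.eye(n, dtype=A.dtype), np.eye(n, dtype=A.dtype)])
    return np.vstack([top, bottom])

A = np.ones((1, 4), dtype=int)  # the 1 x 4 matrix [1 1 1 1]
L = lawrence_lifting(A)
print(L.shape)  # (5, 8): a (k + n) x 2n matrix
```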
Let $\cC = \{0\dots0,c_1,\dots,c_k\}$ be a code.
For notational convenience, let $M_\cC$ denote the matrix with columns $c_1,\dots,c_k$.
Thus, we can think of the toric ideal $I_\cC$ as the toric ideal of either the code $\cC$ or of the matrix $M_\cC$.
Because row operations on matrices preserve linear dependencies, we can apply them to $M_{\mathcal{L}_n}$ and compute the toric ideal of the simpler matrix.
In our case, it is straightforward to verify that $M_{\mathcal{L}_n}$ is row-equivalent to the Lawrence lifting of the $1 \times n$ matrix $\begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}$, say by multiplying $M_{\mathcal{L}_n}$ on the left by the matrix with rows $r_1,\dots,r_n$, where
\[
r_i =
\begin{cases}
e_1 & \text{ if } i = 1, \\
e_{i-1}-e_i & \text{ if } 2 \leq i \leq n-1, \\
e_n & \text{ if } i = n.
\end{cases}
\]
\begin{thm}[{\cite[Theorem~7.1]{sturmfels}}]\label{thm: graver equal universal}
Let $\cC$ be any combinatorial neural code.
The following sets are identical:
\begin{enumerate}
\item the universal Gr\"obner basis of $I_{\Lambda(M_\cC)}$,
\item the Graver basis of $I_{\Lambda(M_\cC)}$,
\item the minimal binomial generating set of $I_{\Lambda(M_\cC)}$.
\end{enumerate}
\end{thm}
Since $M_{\mathcal{L}_n}$ is row-equivalent to $A_n = \Lambda(\begin{bmatrix} 1 & \cdots & 1 \end{bmatrix})$, all of the results of the preceding theorem hold for $I_{\mathcal{L}_n}$ as well.
For convenience, we will continue by considering $I_{A_n}$.
To prove the main theorem of the section, let
\[
\mathcal{U}_n = \{t_{\{1,j\}}t_{\{k\}}-t_{\{1,k\}}t_{\{j\}} \mid 2 \leq j < k \leq n \}.
\]
It is clear that $\mathcal{U}_n \subseteq I_{\mathcal{L}_n}$, by verifying that
\[
\pi_{\mathcal{L}_n}(t_{\{1,j\}}t_{\{k\}}-t_{\{1,k\}}t_{\{j\}}) = 0.
\]
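This membership can also be checked computationally. In the sketch below (our own encoding, with $n=5$ chosen arbitrarily), the column of $A_n$ attached to a zone $S$ has a leading $1$ exactly when $1 \in S$, followed by the indicator of the remaining indices; each binomial of $\mathcal{U}_n$ then corresponds to an equality of column sums:

```python
import numpy as np

n = 5  # illustrative size (an assumption, not from the paper)

def col(S):
    """Column of A_n attached to the zone S: leading 1 iff 1 in S, then unit entries."""
    v = np.zeros(n + 1, dtype=int)
    for i in S:
        if i == 1:
            v[0] = 1
        else:
            v[i] = 1
    return v

# t_{1,j} t_k and t_{1,k} t_j have the same image iff the column sums agree.
ok = all(
    np.array_equal(col({1, j}) + col({k}), col({1, k}) + col({j}))
    for j in range(2, n + 1)
    for k in range(j + 1, n + 1)
)
print(ok)  # True
```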
\begin{lemma}\label{lem: all quads}
For all $n$, $\mathcal{U}_n$ contains all monic homogeneous quadratic binomials in $I_{A_n}$, up to sign.
\end{lemma}
\begin{proof}
First, note that the first $n$ columns and the last $n$ columns of $A_n$ are linearly independent sets.
So, if $t^a-t^b \in I_{A_n}$ is monic, homogeneous, and quadratic, then each monomial must be a product of one variable $t_{\{1,j\}}$ and one variable $t_{\{k\}}$.
If $j<k$ in the leading term, then the only possible binomial is $t_{\{1,j\}}t_{\{k\}}-t_{\{1,k\}}t_{\{j\}}$.
Similarly, if $j>k$, then the only possible binomial is $-(t_{\{1,j\}}t_{\{k\}}-t_{\{1,k\}}t_{\{j\}})$.
The case $j=k$ cannot happen, since the vector $e_1+2e_j$ can only be obtained as the sum of columns of $A_n$ in one way.
\end{proof}
A result due to Gross, Obatake, and Youngs says that if a diagram is 1-inductively pierced, then its toric ideal is generated by quadratics \cite{GrossObatakeYoungs}. These quadratic binomials are homogeneous by a lemma of Sturmfels \cite{sturmfels}. Since we have found all possible forms of quadratic binomials, we can now say that the set $\mathcal{U}_n$ generates the toric ideal.
\begin{lemma}\label{lem: Un primitive}
For all $n$, the binomials of $\mathcal{U}_n$ are primitive.
\end{lemma}
\begin{proof}
Let $t^a - t^b = t_{\{1,j\}}t_{\{k\}}-t_{\{1,k\}}t_{\{j\}} \in \mathcal{U}_n$, and suppose there is some binomial $t^v-t^w \in I_{A_n}$ such that $t^v$ divides $t^a$ and $t^w$ divides $t^b$.
If $\deg(t^v) = 1$, then $\deg(t^w) = 1$, but this cannot happen since no column of $A_n$ is a scaling of another column.
So, $t^v$ has degree $2$ and divides the degree-$2$ monomial $t^a$, which forces $t^v = t_{\{1,j\}}t_{\{k\}} = t^a$.
Again by the structure of $A_n$, this forces $t^w = t_{\{1,k\}}t_{\{j\}} = t^b$.
Hence the only binomial $t^v-t^w \in I_{A_n}$ with $t^v$ dividing $t^a$ and $t^w$ dividing $t^b$ is $t^a-t^b$ itself, so every binomial in $\mathcal{U}_n$ is primitive.
\end{proof}
\begin{thm}
The universal Gr\"obner basis of $I_{\mathcal{L}_n}$ is $\mathcal{U}_n$.
\end{thm}
\begin{proof}
By Theorem~\ref{thm: GOY}, part $3$, we know that $I_{\mathcal{L}_n}$ is generated by quadratics.
Since $I_{\mathcal{L}_n}$ is homogeneous, these quadratics must be homogeneous.
By Lemma~\ref{lem: all quads} and Lemma~\ref{lem: Un primitive}, $\mathcal{U}_n$ is a minimal binomial generating set of $I_{\mathcal{L}_n}$.
Therefore, by Theorem~\ref{thm: graver equal universal}, $\mathcal{U}_n$ is the universal Gr\"obner basis of $I_{\mathcal{L}_n}$.
\end{proof}
\section{Basic Properties of Elliptic Curves}
In this part, we will introduce some basic notation and properties of elliptic curves. Some classic results won't be proved, but I will list some books where you can look them up.
\newtheorem{defn}{Definition}[section]
\begin{defn}
An elliptic curve is a pair (E,O), where E is a nonsingular curve of genus 1 and $O\in E$. The elliptic curve E is defined over K, written E/K, if E is defined over K (a field) as a curve and $O\in E(K)$.
\end{defn}
The definition above is not very concrete, and it is hard to work with directly. So, by the Riemann-Roch theorem, we have the following equivalent definition:
\begin{defn}
An elliptic curve over K can be defined as a nonsingular projective plane curve over K of the form
$$Y^2Z+a_1XYZ+a_3YZ^2=X^3+a_2X^2Z+a_4XZ^2+a_6Z^3$$
Here O=[0,1,0] is the basepoint.\\
This is called the Weierstrass equation of an elliptic curve.
\end{defn}
If we let $Z=0$, then we find that the point must be O. So we can assume that $Z\neq 0$, and we can get the following equation
$$E:y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$
If $char(\bar{K})\neq 2$, then we can simplify the equation by completing the square. Thus replacing y by $\frac{y-a_1x-a_3}{2}$ gives an equation of the form
$$E:y^2=4x^3+b_2x^2+2b_4x+b_6$$
where
\begin{align*}
b_2&=a_1^2+4a_2\\
b_4&=2a_4+a_1a_3\\
b_6&=a_3^2+4a_6
\end{align*}
We also define quantities
\begin{align*}
b_8&=a_1^2a_6+4a_2a_6-a_1a_3a_4+a_2a_3^2-a_4^2\\
c_4&=b_2^2-24b_4\\
c_6&=-b_2^3+36b_2b_4-216b_6\\
\Delta&=-b_2^2b_8-8b_4^3-27b_6^2+9b_2b_4b_6
\end{align*}
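These quantities satisfy the standard identities $4b_8=b_2b_6-b_4^2$ and $c_4^3-c_6^2=1728\Delta$, which can be checked symbolically. The sketch below is ours, not from the text:

```python
from sympy import symbols, expand

a1, a2, a3, a4, a6 = symbols('a1 a2 a3 a4 a6')

# The b-, c-, and Delta-quantities attached to a Weierstrass equation.
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
c4 = b2**2 - 24*b4
c6 = -b2**3 + 36*b2*b4 - 216*b6
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

print(expand(4*b8 - (b2*b6 - b4**2)))      # 0
print(expand(c4**3 - c6**2 - 1728*Delta))  # 0
```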
If further char($\bar{K}$)$\neq$ 2,3, then replacing (x,y) by $((x-3b_2)/36, y/216)$ we can get a simpler equation
$$E:y^2=x^3-27c_4x-54c_6$$
Thus we often use the equation $E:y^2=x^3+Ax+B$ to denote an elliptic curve, and such curves are nonsingular if and only if $\Delta\neq 0$ (i.e. $-16(4A^3+27B^2)\neq 0$).
Next we will introduce one of the most important structures on elliptic curves, the group law.
\begin{defn}
Let $P,Q\in E$, and let L be the line connecting P and Q (the tangent line if $P=Q$). According to Bezout's theorem, the line L intersects E in a third point R (which may coincide with P or Q). Let L' be the line connecting R and O. Then $P\oplus Q$ is the third point at which L' intersects E, besides R and O.
\end{defn}
Now we justify the use of the symbol $\oplus$.
\newtheorem{pro}{Proposition}[section]
\begin{pro}
The composition law ($\oplus$) satisfies the following properties.\\
(a) If a line L intersects E at the points P,Q and R, then
$$(P\oplus Q)\oplus R=O$$
(b) $P\oplus O=P$ for all $P\in E$\\
(c) $P\oplus Q=Q\oplus P$ for all $P,Q\in E$\\
(d) Let $P\in E$. There is a point of E, denoted -P, so that
$$P\oplus (-P)=O$$
(e) Let $P,Q,R\in E$. Then
$$(P\oplus Q)\oplus R=P\oplus(Q\oplus R)$$
In other words, the composition law makes E into an abelian group with identity element O. We further have:\\
(f) Suppose E is defined over K. Then
$$E(K)=\left\{(x,y)\in K^2:y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6\right\}\cup \left\{O\right\}$$
is a subgroup of E.
\end{pro}
\begin{proof}
Only (e) is not trivial. One can laboriously verify the associative law case by case by checking the equations. However, we will use the Riemann-Roch theorem to prove it, together with a bit of the theory of divisors. (For the definition of divisors, one can read [Sil], p.31.)
\end{proof}
\begin{pro}
Let (E,O) be an elliptic curve.\\
(a) For every divisor $D\in Div^0(E)$ there exists a unique point $P\in E$ so that
$$D\sim (P)-(O)$$
Let
$$\sigma:Div^0(E)\rightarrow E$$
be the map given by this association.
(b) The map $\sigma$ is surjective.
(c) Let $D_1,D_2\in Div^0(E)$. Then
$$\sigma(D_1)=\sigma(D_2)\qquad iff\quad D_1\sim D_2$$
Thus $\sigma$ induces a bijection of sets(which we also denote by $\sigma$)
$$\sigma:Pic^0(E)\rightarrow E$$
(d) The inverse to $\sigma$ is the map
$$\kappa:E\rightarrow Pic^0(E)$$
$$P\rightarrow class\ of\ (P)-(O)$$
(e) If E is given by a Weierstrass equation, then the composition law we mentioned above and the group law from $Pic^0(E)$ by using $\sigma$ are the same. Thus, the composition law satisfied the associative law.
\end{pro}
\begin{proof}
(a) Since E has genus 1, the Riemann-Roch theorem says that
$$dim\mathcal{L}(D+(O))=1$$
Let $f\in \bar{K}(E)$ be a generator for $\mathcal{L}(D+(O))$. Since
$$div(f)\geq -D-(O)\quad and\quad deg(div(f))=0$$
it follows that
$$div(f)=-D-(O)+(P)$$
for some $P\in E$. Hence
$$D\sim (P)-(O)$$
To prove that P is unique, we assume that there are two points P and P' both satisfy the condition. Then we get that $P\sim P'$. So there exists $f\in \bar{K}(E)$ so that
$$div(f)=(P)-(P')$$
Then $f\in \mathcal{L}((P'))$, and by the Riemann-Roch theorem we have $dim\mathcal{L}((P'))=1$. However we already know that the constant function is in $\mathcal{L}((P'))$, so we can get f is a constant function. Thus $P=P'$. Hence P is unique.\\
(b) For any $P\in E$
$$\sigma((P)-(O))=P$$
(c) Suppose $\sigma(D_1)=P_1$ and $\sigma(D_2)=P_2$. Then $(P_1)-(P_2)\sim D_1-D_2$. Thus if $\sigma(D_1)=\sigma(D_2)$, i.e. $P_1=P_2$, then $D_1\sim D_2$. Conversely, if $D_1\sim D_2$, then $(P_1)\sim (P_2)$, so by the uniqueness in (a) we get $P_1=P_2$.\\
(d) Directly from (b) and (c).\\
(e) Let E be given by a Weierstrass equation, and let $P,Q\in E$. It clearly suffices to show that
$$\kappa(P+Q)=\kappa(P)+\kappa(Q)$$
Let
$$f(X,Y,Z)=aX+bY+cZ=0$$
give the line L in $\mathbf{P}^2$ going through P and Q, let R be the third point of intersection of L with E, and let
$$f'(X,Y,Z)=a'X+b'Y+c'Z=0$$
be the line L' in $\mathbf{P}^2$ through R and O. Then from the definition of addition on E and the fact that $Z=0$ intersects E at O with multiplicity 3, we have
$$div(f/Z)=(P)+(Q)+(R)-3(O)$$
and
$$div(f'/Z)=(P+Q)+(R)-2(O)$$
Thus
$$(P+Q)-(P)-(Q)+(O)=div(f'/f)$$
Hence
$$\kappa(P+Q)=\kappa(P)+\kappa(Q)$$
\end{proof}
\newtheorem{200}{Remark}
\begin{200}
Here we will write out the equations of the composition law explicitly.
Let E be an elliptic curve given by a Weierstrass equation
$$E:y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$
(a) Let $P_0=(x_0,y_0)\in E$. Then
$$-P_0=(x_0,-y_0-a_1x_0-a_3)$$
Now let $P_1+P_2=P_3\quad with\quad P_i=(x_i,y_i)\in E$
(b) If $x_1=x_2$ and $y_1+y_2+a_1x_2+a_3=0$, then
$$P_1+P_2=O$$
Otherwise, let
$$\lambda=\frac{y_2-y_1}{x_2-x_1},\qquad \nu=\frac{y_1x_2-y_2x_1}{x_2-x_1}\qquad if\ x_1\neq x_2$$
$$\lambda=\frac{3x_1^2+2a_2x_1+a_4-a_1y_1}{2y_1+a_1x_1+a_3},\qquad \nu=\frac{-x_1^3+a_4x_1+2a_6-a_3y_1}{2y_1+a_1x_1+a_3}\qquad if\ x_1= x_2$$
(c) $P_3=P_2+P_1$ is given by
$$x_3=\lambda^2+a_1\lambda-a_2-x_1-x_2$$
$$y_3=-(\lambda+a_1)x_3-\nu-a_3$$
(d) As special cases of (c), we have for $P_1\neq \pm P_2$
$$x(P_1+P_2)=\left(\frac{y_2-y_1}{x_2-x_1}\right)^2+a_1\left(\frac{y_2-y_1}{x_2-x_1}\right)-a_2-x_1-x_2$$
and the duplication formula for (x,y)$\in$ E
$$x([2]P)=\frac{x^4-b_4x^2-2b_6x-b_8}{4x^3+b_2x^2+2b_4x+b_6}$$
\end{200}
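The formulas of this remark can be turned into a short exact-arithmetic sketch (ours, for illustration only; the curve $y^2=x^3+1$ and the point $(2,3)$ are arbitrary choices, and O is represented by None):

```python
from fractions import Fraction as F

# Coefficients of E: y^2 = x^3 + 1 (so a1 = a2 = a3 = a4 = 0, a6 = 1).
a1, a2, a3, a4, a6 = 0, 0, 0, 0, 1

def add(P, Q):
    """Add two points using the formulas of the remark; None plays the role of O."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 + y2 + a1*x2 + a3 == 0:
        return None  # the points are inverse to each other, so P + Q = O
    if x1 != x2:
        lam = F(y2 - y1, x2 - x1)
        nu = F(y1*x2 - y2*x1, x2 - x1)
    else:
        lam = F(3*x1**2 + 2*a2*x1 + a4 - a1*y1, 2*y1 + a1*x1 + a3)
        nu = F(-x1**3 + a4*x1 + 2*a6 - a3*y1, 2*y1 + a1*x1 + a3)
    x3 = lam**2 + a1*lam - a2 - x1 - x2
    y3 = -(lam + a1)*x3 - nu - a3
    return (x3, y3)

P = (F(2), F(3))
print(add(P, P) == (0, 1))           # True: [2]P = (0, 1)
print(add(add(P, P), P) == (-1, 0))  # True: [3]P = (-1, 0), a point of order 2
```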
After establishing the group structure on an elliptic curve, we will now discuss a special kind of morphism between elliptic curves.
\begin{defn}
Let $V_1$ and $V_2\subset \mathbf{P}^n$ be two projective varieties. A rational map from $V_1$ to $V_2$ is a map of the form
$$\phi:V_1\rightarrow V_2$$
$$\phi=[f_0,\dots,f_n]$$
where $f_0,\dots,f_n\in \bar{K}(V_1)$ have the property that for every point $P\in V_1$ at which $f_0,\dots,f_n$ are all defined, $\phi(P)\in V_2$.
\end{defn}
\begin{defn}
A rational map $\phi$ is regular(or defined) at P if there is a function $g\in \bar{K}(V_1)$ such that
(a) each $gf_i$ is regular at P;\\
(b) for some i, $(gf_i)(P)\neq 0$.\\
A rational map which is regular at every point in $V_1$ is called a morphism.
\end{defn}
Next we will state two very important results for morphisms on curves. We won't prove them here; those who want to see the proofs can check [Har, Chapter 2, Thm 6.8].
\newtheorem{thm}{Theorem}[section]
\begin{thm}
Let $\phi:C_1\rightarrow C_2$ be a morphism of curves. Then $\phi$ is either constant or surjective.
\end{thm}
\begin{thm}
Let $\phi:C_1\rightarrow C_2$ be a non-constant map of smooth curves. For all but finitely many $Q\in C_2$
$$\#\phi^{-1}(Q)=deg_s(\phi)$$
\end{thm}
Now we go back to elliptic curves. Because an elliptic curve comes with a distinguished point O, a map between elliptic curves should carry more information. Therefore we have the following definition.
\begin{defn}
Let $E_1$ and $E_2$ be elliptic curves. An isogeny between $E_1$ and $E_2$ is a morphism
$$\phi:E_1\rightarrow E_2$$
satisfying $\phi(O)=O$. $E_1$ and $E_2$ are isogenous if there is an isogeny $\phi$ between them with $\phi(E_1)\neq\left\{O\right\}$
\end{defn}
From the definition, we can clearly see that the multiplication-by-$m$ map is an isogeny, and we use $[m]$ to denote it.
\begin{pro}
Let E/K be an elliptic curve and let $m\in \mathbf{Z}$, $m\neq 0$. Then the multiplication by m map
$$[m]:E\rightarrow E$$
is non-constant.
\end{pro}
\begin{proof}
We start by showing that $[2]\neq [0]$. From the duplication formula, if a point P=(x,y)$\in$ E has order 2, then it must satisfy
$$4x^3+b_2x^2+2b_4x+b_6=0$$
which has only finitely many solutions, while $E(\bar{K})$ is infinite. Therefore $[2]\neq [0]$. Now, using the fact that $[mn]=[m][n]$, we are reduced to considering the case of odd m.
Using long division, one can check that the polynomial
$$4x^3+b_2x^2+2b_4x+b_6$$
does not divide
$$x^4-b_4x^2-2b_6x-b_8$$
(if it did, then $\Delta=0$, a contradiction). Hence we can find an $x_0\in \bar{K}$ at which the former vanishes to a higher order than the latter. Choosing $y_0\in \bar{K}$ so that $P_0=(x_0,y_0)\in E$, the duplication formula implies that $[2]P_0=O$. In other words, we have shown that E has a non-trivial point of order 2. But then for m odd
$$[m]P_0=P_0\neq O$$
so clearly $[m]\neq [0]$.
\end{proof}
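For a concrete curve one can check the coprimality used in this proof. In the sketch below (ours; the curve $E:y^2=x^3+1$ is chosen for illustration, so $b_2=b_4=b_8=0$ and $b_6=4$), the two polynomials share no common factor:

```python
from sympy import symbols, gcd

x = symbols('x')

# For E: y^2 = x^3 + 1 we have b2 = b4 = b8 = 0 and b6 = 4.
num = x**4 - 0*x**2 - 2*4*x - 0    # x^4 - b4 x^2 - 2 b6 x - b8
den = 4*x**3 + 0*x**2 + 2*0*x + 4  # 4 x^3 + b2 x^2 + 2 b4 x + b6
print(gcd(num, den))  # 1
```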
\begin{thm}
Consider the isogeny $[m]:E\rightarrow E$. For every $Q\in E$
$$\#[m]^{-1}(Q)=deg_s[m]$$
\end{thm}
\begin{proof}
From Theorem 1.2 we know that
$$\#[m]^{-1}(Q)=deg_s[m]$$
for all but finitely many $Q\in E$. But for any $P,P'\in E$, if $[m]P=[m]P'$, then $P-P'\in [m]^{-1}(O)$. Thus, since $[m]$ is surjective, for every $Q\in E$ the fibre $[m]^{-1}(Q)$ is a coset of $[m]^{-1}(O)$, and all such cosets have the same size. So for all Q, we have
$$\#[m]^{-1}(Q)=deg_s[m]$$
\end{proof}
By now, we have introduced some important properties of elliptic curves, and next we will introduce the Mordell-Weil theorem.
\section{Mordell Weil Theorem}
\newtheorem{theorem}{Theorem}[section]
\begin{theorem}
\textup{(Mordell-Weil)} Let E be an elliptic curve defined over a number field K. The group E(K) is a finitely generated Abelian group.
\end{theorem}
The proof is given in two parts: the first part is called the Weak Mordell-Weil Theorem, which proves that $E(K)/mE(K)$ is finite, and the second part uses height functions to prove that $E(K)$ is finitely generated.
\subsection{Weak Mordell-Weil Theorem}
In this section, we will give two proofs of the Weak Mordell-Weil Theorem.
\newtheorem{1}{Theorem}[subsection]
\begin{1}
\textup{(Weak Mordell-Weil)} Let E be an elliptic curve defined over a number field K. Then E(K)/mE(K) is finite for any $m \geq 2$.
\end{1}
The first proof, given by Silverman, is based on theories about field extension.
\newtheorem{2}{Lemma}[subsection]
\begin{2}
Let L/K be a finite Galois extension. If E(L)/mE(L) is finite, then E(K)/mE(K) is finite.
\end{2}
\begin{proof}
Let $\Phi$ be the kernel of the natural map $E(K)/mE(K) \rightarrow E(L)/mE(L)$. Therefore,$$\Phi=(E(K)\cap mE(L))/mE(K)$$
and for each $P$ (mod $mE(K)$) in $\Phi$, we can choose a point $Q_p \in E(L)$ with $[m]Q_p=P$. Having done this, we define a map of sets
$$\lambda_p:G_{L/K} \rightarrow E[m], \qquad \lambda_p(\sigma)=Q_p^\sigma-Q_p$$Here $Q_p$ is fixed for each $P$.\\
We notice that
$$[m]\lambda_p(\sigma)=[m]Q_p^\sigma-[m]Q_p=P^\sigma-P=O$$
since $P\in E(K)$ is fixed by $\sigma$. So $\lambda_p(\sigma)$ is in $E[m]$.\\
Suppose that $\lambda_p=\lambda_{p'}$ for two points $P,P' \in E(K)\cap mE(L)$. Then we have
$$(Q_p-Q_{p'})^\sigma=Q_p-Q_{p'} \qquad for \ all \ \sigma \in G_{L/K}$$
so $Q_p-Q_{p'} \in E(K)$. Therefore
$$P-P'=[m](Q_p-Q_{p'})\in mE(K)\Leftrightarrow P\equiv P'\pmod {mE(K)}$$
So the map
$$\Phi\rightarrow Map(G_{L/K},\ E[m]),\qquad P\rightarrow\lambda_p$$
is an injection. But $G_{L/K}$ and $E[m]$ are finite sets, so $\Phi$ is a finite set.\\
Finally, the exact sequence
$$0\rightarrow\Phi\rightarrow E(K)/mE(K)\rightarrow E(L)/mE(L)$$
implies that $E(K)/mE(K)$ is finite (its kernel $\Phi$ and its image in the finite group $E(L)/mE(L)$ are both finite).
\end{proof}
In view of the lemma above, we can enlarge the number field K and suppose that $E[m]\subset E(K)$ (because $E[m]$ is finite). We will assume this is true for the remainder of this section.\\
The next step we will do is to translate the question into a question about a certain field extension of K.
\newtheorem{3}{Definition}[subsection]
\begin{3}
The Kummer pairing
$$\kappa:E(K)\times G_{\overline{K}/K}\rightarrow E[m]$$
is defined as follows. Let $P\in E(K)$, and choose any $Q\in E(\overline{K})$ satisfying $[m]Q=P$. Then
$$\kappa(P,\ \sigma)=Q^\sigma-Q$$
\end{3}
Actually, from the definition we can see that it is similar to the definition of $\lambda_p$. It is well-defined, i.e. independent of the choice of $Q$, because any two choices differ by an element of $E[m]\subset E(K)$, which is fixed by every $\sigma$.
\begin{1}
(a) The Kummer pairing is bilinear.\\
(b) The kernel of the Kummer pairing on the left is $mE(K)$.\\
(c) The kernel of the Kummer pairing on the right is $G_{\overline{K}/L}$, where
$$L=K([m]^{-1}E(K))$$
is the compositum of all fields $K(Q)$ as $Q$ ranges over the points of $E(\overline{K})$ satisfying $[m]Q\in E(K)$.\\
Hence the Kummer pairing induces a perfect bilinear pairing
$$E(K)/mE(K)\times G_{L/K}\rightarrow E[m]$$
\end{1}
\begin{proof}
(a) Linearity in P is trivial. For linearity in $\sigma$, let $\sigma,\tau\in G_{\overline{K}/K}$. Then
$$\kappa(P, \ \sigma\tau)=Q^{\sigma\tau}-Q=(Q^\sigma-Q)^\tau+Q^\tau-Q=\kappa(P,\ \sigma)^\tau+\kappa(P, \ \tau)$$
However, $\kappa(P,\ \sigma)\in E[m]\subset E(K)$, so it is fixed by $\tau$. Therefore,
$$\kappa(P, \ \sigma\tau)=\kappa(P,\ \sigma)+\kappa(P, \ \tau)$$
(b) Suppose $\kappa(P, \ \sigma)=0$ for all $\sigma\in G_{\overline{K}/K}$. Then we have $Q^\sigma=Q$ for all $\sigma\in G_{\overline{K}/K}$. Therefore, $Q\in E(K)$ and $P=[m]Q\in mE(K)$.
Conversely, if $P\in mE(K)$, it is obvious that $\kappa(P, \ \sigma)=0$ for all $\sigma\in G_{\overline{K}/K}$. Therefore, the kernel on the left is $mE(K)$.\\
(c) Suppose $\kappa(P, \ \sigma)=0$ for all $P\in E(K)$, then $Q^\sigma-Q=0$ for all $Q$ satisfying $[m]Q\in E(K)$. But $L$ is the compositum of $K(Q)$ over all such $Q$, so $\sigma$ fixes $L$. Hence $\sigma\in G_{\overline{K}/L}$. Conversely, if $\sigma\in G_{\overline{K}/L}$, then we have
$$\kappa(P, \ \sigma)=Q^\sigma-Q=0$$
since $Q\in E(L)$ from the definition. Thus the kernel on the right is $ G_{\overline{K}/L}$.\\
Finally, for the last statement of the theorem, we first claim that $L/K$ is Galois: it is normal by the definition of $L$ (if $Q'$ is a conjugate of $Q$, then $[m]Q'=[m]Q\in E(K)$), and separable since we are in characteristic 0. Since $L/K$ is Galois, we have
$$G_{\overline{K}/K}/G_{\overline{K}/L}=G_{L/K}$$
Thus it is a perfect bilinear pairing.
\end{proof}
From Theorem 2.1.2 we can see that if we can prove $L$ is a finite extension, or in other words, $G_{L/K}$ is finite, then the group $E(K)/mE(K)$ is finite. So the next step is to analyze this extension.
\begin{1}
Let $L$ be the field defined in Theorem 2.1.2.\\
(a) $L/K$ is an abelian extension of exponent m (i.e., $G_{L/K}$ is abelian and every element has order dividing m).\\
(b) Let
$$S=\left\{v\in M_K^0:\ E\ has\ bad\ reduction\ at\ v\right\}\cup \left\{v\in M_K^0:v(m)\neq0\right\}\cup M_K^\infty$$
Then $L/K$ is unramified outside S.
\end{1}
\begin{proof}
(a) This follows immediately from the last statement of Theorem 2.1.2, which gives an injection
$$G_{L/K}\rightarrow Hom(E(K),\ E[m]),\qquad \sigma\rightarrow\kappa(\cdot,\ \sigma)$$\\
(b) Let $v\in M_K$ with $v\notin S$. Choose an arbitrary element $Q$ in $m^{-1}E(K)$; it suffices to show that $K'=K(Q)$ is unramified at $v$, because $L$ is the compositum of all such $K'$. Let $v'\in M_{K'}$ be a place of $K'$ with $v'\mid v$, and let $k_{v'}/k_v$ be the corresponding extension of residue fields. Since $E$ has good reduction at $v$, it also has good reduction at $v'$ (the discriminant is unchanged). Thus we have the usual reduction map
$$E(K')\rightarrow \tilde{E}_{v'}(k_{v'})$$
Now let $I_{v'/v}\subset G_{K'/K}$ be the inertia group for $v'/v$, and let $\sigma\in I_{v'/v}$. By definition of inertia, $\sigma$ acts trivially on $k_{v'}$, hence on $\tilde{E}_{v'}(k_{v'})$, so
$$\widetilde{Q^\sigma-Q}=\tilde{Q}^\sigma-\tilde{Q}=\tilde{0}$$
On the other hand, $Q^\sigma-Q\in E[m]$, and since $v\notin S$ we have $v(m)=0$, so the reduction map is injective on $m$-torsion. Hence $Q^\sigma=Q$. Thus $Q$ is fixed by every element of $I_{v'/v}$, so the inertia group acts trivially on $K'=K(Q)$. Hence $K'$ is unramified over $K$ at $v'$.
\end{proof}
Next we will prove that every field extension $L/K$ satisfying the conditions of Theorem 2.1.3 is a finite extension.
\begin{1}
Let K be a number field, $S\subset M_K$ a finite set of places containing $M_K^\infty$, and $m\geq2$ an integer. Let $L/K$ be the maximal abelian extension of $K$ having exponent m which is unramified outside of S. Then $L/K$ is a finite extension.
\end{1}
\begin{proof}
First, we may assume that K contains the $m^{th}$ roots of unity $\mu_m$. Indeed, if it does not, we replace K by $K'=K(\mu_m)$; then $LK'/K'$ is again an abelian extension of exponent m, unramified outside the set $S'$ of places of $K'$ lying over S, and if $LK'/K'$ is finite, then $L/K$ is also finite.
Furthermore, we may increase the set S, because this can only make the field extension larger. Using the fact that the class number of K is finite, we can thus add a finite number of elements to S so that the ring of S-integers
$$R_S=\left\{a\in K:v(a)\geq 0\ for\ all\ v\in M_K,\ v\notin S\right\}$$
is a principal ideal domain. We may also enlarge S so that $v(m)=0$ for all $v\notin S$.
Next, by Kummer theory, L is the largest subfield of $K(\sqrt[m]{a}:a\in K^*)$ which is unramified outside S.
Let $v\in M_K$, $v\notin S$. Looking at the equation
$$X^m-a=0$$
over the local field $K_v$, and remembering that $v(m)=0$, it is clear that $K_v(\sqrt[m]{a})/K_v$ is unramified if and only if
$$ord_v(a) \equiv 0\pmod m$$
Therefore, $L=K(\sqrt[m]{a}:a\in T_S)$, where
$$T_S=\left\{a\in K^*/(K^*)^m:ord_v(a)\equiv 0\pmod m\ for\ all\ v\notin S\right\}$$
Hence if we can prove that $T_S$ is a finite group, it follows that L is a finite extension of K.
To prove $T_S$ is finite, we first consider the natural map
$$R_S^*\rightarrow T_S$$
We claim that the map is surjective. To see this, suppose $a\in K^*$ represents an element of $T_S$. Then the ideal $aR_S$ is the $m^{th}$ power of an ideal in $R_S$, since the prime ideals of $R_S$ correspond to the valuations $v\notin S$. Since $R_S$ is a principal ideal domain, we can find $b\in K^*$ such that $aR_S=b^mR_S$, which means that
$$a=ub^m$$
for some $u\in R_S^*$. Then $u$ and $a$ give the same element of $T_S$, showing that the map is surjective. The kernel of the map certainly contains $(R_S^*)^m$, so we have a surjection
$$R_S^*/(R_S^*)^m\rightarrow T_S$$
By Dirichlet's unit theorem (applied to the $S$-units), the group $R_S^*$ is finitely generated, so $R_S^*/(R_S^*)^m$ is a finite group. Thus $T_S$ is a finite group, and the proof is complete.
\end{proof}
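To make the finiteness of $T_S$ concrete, consider the case $K=\mathbf{Q}$: every class in $T_S$ has a representative of the form $\pm\prod_{p\in S}p^{e_p}$ with $0\leq e_p<m$. The short sketch below (pure Python; the helper name is ours, and we restrict to a set $S$ of rational primes) enumerates these representatives for $m=2$.

```python
from itertools import product

def t_s_representatives(S, m):
    """Representatives of T_S for K = Q: classes sign * prod(p**e_p)
    with 0 <= e_p < m for p in S; the sign matters only for even m,
    since -1 is an m-th power in Q exactly when m is odd."""
    signs = (1, -1) if m % 2 == 0 else (1,)
    reps = set()
    for sign in signs:
        for exps in product(range(m), repeat=len(S)):
            val = sign
            for p, e in zip(S, exps):
                val *= p ** e
            reps.add(val)
    return reps

reps = t_s_representatives([2, 3, 5], m=2)
print(len(reps))  # 2 * 2**3 = 16
```

This gives $2\cdot m^{|S|}$ classes for even $m$, so $L=\mathbf{Q}(\sqrt{a}:a\in T_S)$ is generated by finitely many radicals, as the theorem asserts.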
From Theorem 2.1.4 we see that $L/K$ is a finite Galois extension, so $G_{L/K}$ is finite. Therefore the group $E(K)/mE(K)$ is finite, and the Weak Mordell-Weil theorem is proved.
Next we will use cohomology to prove the Weak Mordell Weil theorem. First we will introduce group cohomology.
\begin{3}
Let G be a finite group acting on an abelian group M. We define
$$H^0(G, M)=M^G=\left\{m\in M\mid \sigma m=m,\ all\ \sigma\in G\right\}$$
A crossed homomorphism is a map $f:G\rightarrow M$ such that
$$f(\sigma\tau)=f(\sigma)+\sigma f(\tau)\qquad all\ \sigma,\tau\in G$$
and a crossed homomorphism is said to be principal if there is an $m\in M$ such that
$$f(\sigma)=\sigma m-m,\qquad all\ \sigma\in G$$
Next we define
$$H^1(G, M)=\frac{\left\{crossed\ homomorphisms\right\}}{\left\{principal\ crossed\ homomorphisms\right\}}$$
\end{3}
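For a small group with trivial action, these definitions can be checked by brute force. The sketch below (our own illustration, not from any library) enumerates all maps $f:G\rightarrow M$ for $G=M=\mathbf{Z}/2\mathbf{Z}$ with trivial action: the crossed homomorphisms are then just group homomorphisms, the only principal one is zero, and $H^1(G,M)\cong\mathbf{Z}/2\mathbf{Z}$.

```python
from itertools import product

G = [0, 1]   # Z/2Z, written additively
M = [0, 1]   # Z/2Z with the trivial G-action

def act(sigma, m):
    return m  # trivial action: sigma * m = m

def is_crossed(f):
    # crossed homomorphism: f(s + t) = f(s) + s*f(t) in M = Z/2Z
    return all((f[(s + t) % 2] - f[s] - act(s, f[t])) % 2 == 0
               for s in G for t in G)

candidates = [dict(zip(G, vals)) for vals in product(M, repeat=len(G))]
crossed = [f for f in candidates if is_crossed(f)]

# principal: f(s) = s*m - m, which is identically 0 for a trivial action
principal = {tuple((act(s, m) - m) % 2 for s in G) for m in M}

# the principal maps form a subgroup and all of its cosets have equal
# size, so the order of H^1 is the ratio of the two counts
h1_order = len(crossed) // len(principal)
print(len(crossed), len(principal), h1_order)  # 2 1 2
```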
We then state the most important and basic properties of cohomology.
\newtheorem{4}{Proposition}[subsection]
\begin{4}
For any exact sequence of G-modules
$$0\rightarrow M\rightarrow N\rightarrow P\rightarrow 0$$
there is a canonical exact sequence
$$0\rightarrow H^0(G, M)\rightarrow H^0(G, N)\rightarrow H^0(G, P)\xrightarrow{\delta} H^1(G, M)\rightarrow H^1(G, N)\rightarrow H^1(G, P)$$
\end{4}
However, we want to handle field extensions which may be infinite, so we have to develop the theory of cohomology for infinite Galois groups.
\begin{3}
Let K be a perfect field, $\bar{K}$ its algebraic closure, and let
$$G:=Gal(\bar{K}/K)=G_K$$
be its Galois group. We endow G with the Krull topology: a subgroup is open if it fixes some finite extension of K. These open subgroups form a base of neighborhoods of $1_G$, and their cosets form a base at every point $g\in G$, so they define a topology on G.\\
Next a G-module M is said to be discrete if the map $G\times M\rightarrow M$ is continuous relative to the discrete topology on M and the Krull topology on G. This is equivalent to requiring that every element of M is fixed by the subgroup of G fixing some finite extension of K.
\end{3}
For a discrete $G$-module $M$, every principal crossed homomorphism $f:G\rightarrow M$ is continuous. That is because every element of $M$ is fixed by an open normal subgroup of $G$.
\begin{3}
$$H^1(G, M)=\frac{\left\{continuous\ crossed\ homomorphisms\right\}}{\left\{principal\ crossed\ homomorphisms\right\}}$$
\end{3}
With these definitions, the long exact sequence of the proposition above remains valid.\\
\\
Also, we have the short exact sequence
$$0\rightarrow E(\bar{\mathbf{Q}})[m]\rightarrow E(\bar{\mathbf{Q}})\xrightarrow{m} E(\bar{\mathbf{Q}})\rightarrow 0$$
Therefore, we can get the long exact sequence
$$0\rightarrow E(\mathbf{Q})[m]\rightarrow E(\mathbf{Q})\xrightarrow{m} E(\mathbf{Q})\xrightarrow{\delta} H^1(\mathbf{Q}, E[m])\rightarrow H^1(\mathbf{Q}, E)\xrightarrow{m} H^1(\mathbf{Q}, E)$$
From this, we can get another short exact sequence
$$0\rightarrow E(\mathbf{Q})/mE(\mathbf{Q})\xrightarrow{\delta} H^1(\mathbf{Q}, E[m])\rightarrow H^1(\mathbf{Q}, E)[m]\rightarrow 0$$
Since $\delta$ is an injection here, if we could prove that the group $H^1(\mathbf{Q}, E[m])$ is finite, the Weak Mordell-Weil theorem would follow. However, this group need not be finite, so we will use the local fields $\mathbf{Q}_p$ to get around the problem.
First, we choose the algebraic closure $\bar{\mathbf{Q}}$ for $\mathbf{Q}$, and $\bar{\mathbf{Q}_p}$ for $\mathbf{Q}_p$. The embedding $\mathbf{Q}\hookrightarrow \mathbf{Q}_p$ extends to an embedding $\bar{\mathbf{Q}}\hookrightarrow \bar{\mathbf{Q}_p}$. Moreover, the action of $Gal(\bar{\mathbf{Q}_p}/\mathbf{Q}_p)$ on $\bar{\mathbf{Q}}\subset \bar{\mathbf{Q}_p}$ defines a homomorphism $\psi:G_{\mathbf{Q}_p}\rightarrow G_Q$ by restriction of the Galois action.
Therefore, a crossed homomorphism $f:G_{\mathbf{Q}}\rightarrow E(\bar{\mathbf{Q}})$ defines a crossed homomorphism $\tilde{f}:G_{\mathbf{Q}_p}\rightarrow E(\bar{\mathbf{Q}}_p)$ by the composition $\tilde{f}=f\circ \psi$. To check this is well defined, for any $\sigma,\tau\in G_{\mathbf{Q}_p}$,
$$\tilde{f}(\sigma\tau)=f(\psi(\sigma\tau))=f(\psi(\sigma))+\psi(\sigma)f(\psi(\tau))=f(\psi(\sigma))+\sigma f(\psi(\tau))=\tilde{f}(\sigma)+\sigma \tilde{f}(\tau)$$
Also, if $f$ is a principal crossed homomorphism, then $\tilde{f}$ is also principal. Thus we get a map $\phi: H^1(\mathbf{Q}, E)\rightarrow H^1(\mathbf{Q}_p, E)$ by taking $f$ to $\tilde{f}$.
We can get the following commutative diagram
\begin{align*}
0\rightarrow E(\mathbf{Q})&/mE(\mathbf{Q})\xrightarrow{\delta} H^1(\mathbf{Q}, E[m])\rightarrow H^1(\mathbf{Q}, E)[m]\rightarrow 0\\
&\downarrow\qquad\qquad\qquad\qquad\downarrow\qquad\qquad\qquad\qquad\downarrow\\
0\rightarrow E(\mathbf{Q}_p)&/mE(\mathbf{Q}_p)\xrightarrow{\delta} H^1(\mathbf{Q}_p, E[m])\rightarrow H^1(\mathbf{Q}_p, E)[m]\rightarrow 0
\end{align*}
where the top and bottom rows are exact and the vertical maps are the natural maps induced by $\mathbf{Q}\hookrightarrow\mathbf{Q}_p$.
Next we reach a crucial observation. If some $\gamma\in H^1(\mathbf{Q}, E[m])$ comes from the class of an element of $E(\mathbf{Q})$, then its image $\gamma_p\in H^1(\mathbf{Q}_p, E[m])$ arises from an element of $E(\mathbf{Q}_p)$. We therefore want to single out those $\gamma$ whose local images $\gamma_p$ all come from $E(\mathbf{Q}_p)$, and those $\gamma$ which vanish locally everywhere.
Here are the two definitions that we will mainly work with.
\begin{3}
The $n$-Selmer group is defined by
\begin{align*}
S^{(n)}(E/\mathbf{Q}):&=\left\{\gamma\in H^1(\mathbf{Q}, E[n])\mid \forall p,\ \gamma_p \ comes\ from\ E(\mathbf{Q}_p)\right\}\\
&=ker(H^1(\mathbf{Q}, E[n])\rightarrow \prod_{p\ prime} H^1(\mathbf{Q}_p, E))
\end{align*}
\end{3}
\begin{3}
The Tate-Shafarevich group is defined by
$$\Sha(E/\mathbf{Q})=ker(H^1(\mathbf{Q}, E)\rightarrow \prod_{p\ prime} H^1(\mathbf{Q}_p, E) )$$
\end{3}
And we need the following lemma, which is easy to prove.
\begin{2}
For any composable maps of modules $A\xrightarrow{\alpha}B\xrightarrow{\beta}C$, there is an exact sequence
$$0\rightarrow ker(\alpha)\rightarrow ker(\beta\alpha)\rightarrow ker(\beta)\rightarrow coker(\alpha)\rightarrow coker(\beta\alpha)\rightarrow coker(\beta)\rightarrow 0$$
\end{2}
We omit the proof, since all the maps involved are the natural ones.
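As a sanity check of the lemma (our own toy example, not part of the text's argument), take $A=B=C=\mathbf{Z}/4\mathbf{Z}$ with $\alpha=\beta=$ multiplication by 2, so $\beta\alpha=0$. In any exact sequence $0\rightarrow G_1\rightarrow\dots\rightarrow G_6\rightarrow 0$ of finite groups the alternating product of the orders equals 1, and the script verifies this necessary condition for the six groups of the lemma.

```python
from fractions import Fraction

# Toy check: A = B = C = Z/4Z, alpha = beta = multiplication by 2.
n = 4
alpha = lambda x: (2 * x) % n
beta = lambda x: (2 * x) % n
beta_alpha = lambda x: beta(alpha(x))

def kernel(f):
    return [x for x in range(n) if f(x) == 0]

def cokernel(f):
    image = {f(x) for x in range(n)}
    # cosets x + im(f) inside Z/4Z
    return {frozenset((x + y) % n for y in image) for x in range(n)}

orders = [len(kernel(alpha)), len(kernel(beta_alpha)), len(kernel(beta)),
          len(cokernel(alpha)), len(cokernel(beta_alpha)), len(cokernel(beta))]

# Exactness of 0 -> G1 -> ... -> G6 -> 0 forces the alternating
# product of the orders to be 1.
alt = Fraction(1)
for i, o in enumerate(orders):
    alt = alt * o if i % 2 == 0 else alt / o
print(orders, alt)  # [2, 4, 2, 2, 4, 2] 1
```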
If we apply the lemma to the maps
$$H^1(\mathbf{Q}, E[n])\rightarrow H^1(\mathbf{Q}, E)[n]\rightarrow \prod_{p\ prime} H^1(\mathbf{Q}_p, E)[n]$$
we obtain the fundamental exact sequence
$$0\rightarrow E(\mathbf{Q})/nE(\mathbf{Q})\rightarrow S^{(n)}(E/\mathbf{Q})\rightarrow \Sha(E/\mathbf{Q})[n]\rightarrow 0$$
We shall prove that $E(\mathbf{Q})/nE(\mathbf{Q})$ is finite by showing that $S^{(n)}(E/\mathbf{Q})$ is finite.
First we will prove the Selmer group is finite in a special case.
\begin{2}
If all the points of order 2 on an elliptic curve given by the Weierstrass equation
$$Y^2Z+a_1XYZ+a_3YZ^2=X^3+a_2X^2Z+a_4XZ^2+a_6Z^3$$
have coordinates in $\mathbf{Q}$, then the Selmer group $S^{(2)}(E/\mathbf{Q})$ is finite.
\end{2}
\begin{proof}
Since they all have coordinates in $\mathbf{Q}$, it follows that
$$E(\bar{\mathbf{Q}})[2]=E(\mathbf{Q})[2]\cong (\mathbf{Z}/2\mathbf{Z})\times (\mathbf{Z}/2\mathbf{Z})$$
(One can check [Sil, Cor 6.4(b)] for the proof)
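As a concrete instance of the hypothesis (an illustrative check, not needed for the proof): for the curve $y^2=x^3-x$ the 2-torsion points are $O$ together with $(x_0,0)$ for each root $x_0$ of $x^3-x$, and all three roots are rational. A quick search (helper name ours):

```python
def two_torsion_x(A, B, bound=100):
    """Integer roots of x^3 + A*x + B in [-bound, bound]; each root x0
    gives the 2-torsion point (x0, 0) on y^2 = x^3 + A*x + B."""
    return [x for x in range(-bound, bound + 1) if x**3 + A*x + B == 0]

print(two_torsion_x(-1, 0))  # [-1, 0, 1]
```

Together with $O$, the three points $(-1,0),(0,0),(1,0)$ form a group of order 4 in which every element has order dividing 2, i.e., $(\mathbf{Z}/2\mathbf{Z})\times(\mathbf{Z}/2\mathbf{Z})$.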
And the group $Gal(\bar{\mathbf{Q}}/\mathbf{Q})$ acts trivially on E[2]. Thus we have
$$H^1(\mathbf{Q}, E[2])\cong (\mathbf{Q}^\times/\mathbf{Q}^{\times2})^2$$
(One can get this by using the long exact sequence of cohomology on the short exact sequence
$$1\rightarrow \mathbf{Z}/2\mathbf{Z}\rightarrow \mathbf{Q}^\times \xrightarrow{2} \mathbf{Q}^\times\rightarrow 1$$)
Let $\gamma\in S^{(2)}(E/\mathbf{Q})\subset H^1(\mathbf{Q}, E[2])$. For each prime $p_0$ not dividing $2\Delta$, there exists a finite unramified extension K of $\mathbf{Q}_{p_0}$ such that $\gamma$ maps to zero under the vertical arrows:
\begin{align*}
H^1(&\mathbf{Q}, E[2])\xrightarrow{\cong} (\mathbf{Q}^\times/\mathbf{Q}^{\times2})^2\\
&\downarrow \qquad \qquad \qquad \downarrow\\
H^1(&K, E[2])\xrightarrow{\cong} (K^\times/K^{\times2})^2
\end{align*}
We choose a representative element $((-1)^{\varepsilon(\infty)}\prod_{p}p^{\varepsilon(p)},(-1)^{\varepsilon'(\infty)}\prod_{p}p^{\varepsilon'(p)})\in (\mathbf{Q}^\times/\mathbf{Q}^{\times2})^2$ for $\gamma$, where each exponent $\varepsilon(\cdot)$ or $\varepsilon'(\cdot)$ is either 0 or 1. Therefore we can see that
$$ord_{p_0}((-1)^{\varepsilon(\infty)}\prod_{p}p^{\varepsilon(p)})=\varepsilon(p_0)$$
and so if $(-1)^{\varepsilon(\infty)}\prod_{p}p^{\varepsilon(p)}$ is a square in K, then $\varepsilon(p_0)=0$. Therefore the only primes p that can occur in the factorizations are those dividing $2\Delta$, which allows only finitely many possibilities for $\gamma$.
\end{proof}
After proving the special case, we will now turn to prove the general case.
\begin{1}
The Selmer group $S^{(n)}(E/\mathbf{Q})$ is finite.
\end{1}
\begin{proof}
Actually, instead of proving directly that $S^{(n)}(E/\mathbf{Q})$ is finite, we will prove that
$$S^{(n)}(E/L):=ker(H^1(L, E[n])\rightarrow \prod_{w\in M_L} H^1(L_w, E))$$
is finite for a suitably large number field L. According to the next lemma, finiteness for L implies finiteness for $\mathbf{Q}$.
\begin{2}
For any finite Galois extension L of $\mathbf{Q}$ and integer $n\geq 1$, the kernel of
$$S^{(n)}(E/\mathbf{Q})\rightarrow S^{(n)}(E/L)$$
is finite
\end{2}
\begin{proof}
Since $S^{(n)}(E/\mathbf{Q})$ and $S^{(n)}(E/L)$ are subgroups of $H^1(\mathbf{Q},E[n])$ and $H^1(L,E[n])$ respectively, it suffices to prove that the kernel of
$$H^1(\mathbf{Q},E[n])\rightarrow H^1(L,E[n])$$
is finite. However, one can verify that the kernel of the map is $H^1(Gal(L/\mathbf{Q}), E(L)[n])$, which is finite because both $Gal(L/\mathbf{Q})$ and $E(L)[n]$ are finite.
\end{proof}
Here we still need some preparations from algebraic number theory.
\begin{2}
When T is a finite set of prime ideals in L, the groups $U_T$ and $C_T$ defined by the exactness of the sequence
$$0\rightarrow U_T\rightarrow L^\times \xrightarrow{a\rightarrow(ord_\mathfrak{p}(a))} \bigoplus_{\mathfrak{p}\notin T}\mathbf{Z}\rightarrow C_T\rightarrow0$$
are, respectively, finitely generated and finite.
\end{2}
\begin{proof}
First, let's consider the kernel of the map
$$f:L^\times\rightarrow \bigoplus_{\mathfrak{p}}\mathbf{Z}$$
An element $a$ is in $\ker f$ iff $ord_\mathfrak{p}(a)=0$ for all $\mathfrak{p}$, that is, iff it is a unit of $O_L$. And the cokernel of f is finite due to the finiteness of the class number. Hence we get an exact sequence
$$0\rightarrow U\rightarrow L^\times \xrightarrow{a\rightarrow(ord_\mathfrak{p}(a))} \bigoplus_{\mathfrak{p}}\mathbf{Z}\rightarrow C\rightarrow0$$
where U is the unit group of $O_L$, and C is the ideal class group. So U is finitely generated by Dirichlet's unit theorem, and C is a finite group.
Next, applying the kernel-cokernel exact sequence to
$$L^\times \rightarrow \bigoplus_{\mathfrak{p}}\mathbf{Z}\rightarrow \bigoplus_{\mathfrak{p}\notin T}\mathbf{Z}$$
gives an exact sequence
$$0\rightarrow U\rightarrow U_T\rightarrow \bigoplus_{\mathfrak{p}\in T}\mathbf{Z}\rightarrow C\rightarrow C_T\rightarrow 0$$
Thus $U_T$ is finitely generated and $C_T$ is finite.
\end{proof}
Now we come back to the proof of the theorem.
Let us review the proof of the special case. The proof used the following facts:\\
(a) $\mathbf{Q}$ contains a primitive square root of 1\\
(b) The points of order 2 all have coordinates in $\mathbf{Q}$\\
(c) For any finite set T of prime numbers, the kernel of
$$r\rightarrow(ord_p(r)\pmod 2):\mathbf{Q}^\times/\mathbf{Q}^{\times2}\rightarrow \bigoplus_{p\in T}\mathbf{Z}/2\mathbf{Z}$$
is finite.
Therefore, in view of the above discussion, it remains only to prove the following lemma, and the proof of the theorem will be complete.
\begin{2}
Assume that L contains the $n^{th}$ roots of unity. For any finite subset T of $M_L$ containing $M_L^\infty$, let N be the kernel of
$$a\rightarrow(ord_\mathfrak{p}(a)\pmod n):L^\times/L^{\times n}\rightarrow \bigoplus_{\mathfrak{p}\in T}\mathbf{Z}/n\mathbf{Z}$$
Then there is an exact sequence
$$0\rightarrow U_T/U_T^n\rightarrow N\rightarrow C_T[n]$$
Therefore N is a finite group.
\end{2}
\begin{proof}
This can be proved by a diagram chase in
\begin{align*}
0\rightarrow U_T\rightarrow &L^\times \rightarrow \bigoplus_{\mathfrak{p}\notin T}\mathbf{Z}\rightarrow C_T\rightarrow 0\\
\downarrow n\quad &\downarrow n\qquad\downarrow n\quad\ \downarrow n\quad\\
0\rightarrow U_T\rightarrow &L^\times \rightarrow \bigoplus_{\mathfrak{p}\notin T}\mathbf{Z}\rightarrow C_T\rightarrow 0\\
&\downarrow \qquad\quad \downarrow\\
L^\times&/L^{\times n}\rightarrow \bigoplus_{\mathfrak{p}\in T}\mathbf{Z}/n\mathbf{Z}
\end{align*}
\end{proof}
Since the lemma holds, the theorem follows, and the proof is completed.
\end{proof}
In fact, the argument above shows that the Selmer group $S^{(n)}(E/K)$ is finite for any number field K. This completes the cohomological proof of the Weak Mordell-Weil theorem.
\subsection{The Descent Procedure and Height Function on $\mathbf{Q}$}
In this section, we will prove the Mordell-Weil theorem over $\mathbf{Q}$.
\begin{4}
(Descent theorem) Let A be an abelian group. Suppose there is a 'height' function
$$h:A\rightarrow\mathbf{R}$$
with the following three properties:\\
($\romannumeral1$) Let $Q\in A$. There is a constant $C_1$ depending on A and Q, so that for all $P\in A$,
$$h(P+Q)\leq 2h(P)+C_1$$
($\romannumeral2$) There is an integer $m\geq2$ and a constant $C_2$, depending on A, so that for all $P\in A$,
$$h(mP)\geq m^2h(P)-C_2$$
($\romannumeral3$) For every constant $C_3$,
$$\left\{P\in A:h(P)\leq C_3\right\}$$
is a finite set.\\
Suppose further that for the integer m in ($\romannumeral2$), the quotient group $A/mA$ is finite. Then A is finitely generated.
\end{4}
\begin{proof}
Choose elements $Q_1,\dots, Q_r\in A$ to represent the finitely many cosets in A/mA. The idea is to show that by subtracting an appropriate linear combination of $Q_1,\dots, Q_r$ from P, we will be able to make the height of the resulting point less than a constant which is independent of P. Then the $Q_1,\dots, Q_r$ and the finitely many points with height less than this constant will generate A.
Write
$$P=mP_1+Q_{i_1}\qquad for\ some\ 1\leq i_1\leq r$$
Continuing in this fashion,
$$P_1=mP_2+Q_{i_2}$$
$$\vdots$$
$$P_{n-1}=mP_n+Q_{i_n}$$
Now for any j, we have
\begin{align*}
h(P_j)&\leq \frac{1}{m^2}[h(mP_j)+C_2]\qquad from\ (\romannumeral2)\\
&=\frac{1}{m^2}[h(P_{j-1}-Q_{i_j})+C_2]\\
&\leq\frac{1}{m^2}[2h(P_{j-1})+C_1'+C_2]\qquad from\ (\romannumeral1)
\end{align*}
where we take $C_1'$ to be the maximum of the constants from ($\romannumeral1$) for $Q=-Q_i,\ 1\leq i\leq r$. Note that $C_1'$ and $C_2$ do not depend on P.
Now use the above inequality repeatedly, starting from $P_n$ and working back to P. This yields
\begin{align}
h(P_n)&\leq(\frac{2}{m^2})^nh(P)+[\frac{1}{m^2}+\frac{2}{m^4}+\frac{4}{m^6}+\dots+\frac{2^{n-1}}{m^{2n}}](C_1'+C_2)\\
&<(\frac{2}{m^2})^nh(P)+\frac{C_1'+C_2}{m^2-2}\\
&\leq 2^{-n}h(P)+(C_1'+C_2)/2
\end{align}
It follows that by taking n sufficiently large, we will have
$$h(P_n)\leq 1+(C_1'+C_2)/2$$
Since
$$P=m^nP_n+\sum_{j=1}^n m^{j-1}Q_{i_j}$$
it follows that every $P\in A$ is a linear combination of the points in the set
$$\left\{Q_1,\dots, Q_r\right\}\cup\left\{Q\in A:h(Q)\leq 1+(C_1'+C_2)/2\right\}$$
And from the third property, this is a finite set, which proves that A is finitely generated.
\end{proof}
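The geometric-series estimate used in the proof, $\frac{1}{m^2}+\frac{2}{m^4}+\dots+\frac{2^{n-1}}{m^{2n}}<\frac{1}{m^2-2}$ for $m\geq2$, is easy to confirm with exact rational arithmetic; the check below is our own.

```python
from fractions import Fraction

def partial_sum(m, n):
    """1/m^2 + 2/m^4 + ... + 2^(n-1)/m^(2n), computed exactly."""
    return sum(Fraction(2 ** (j - 1), m ** (2 * j)) for j in range(1, n + 1))

# The partial sums of the geometric series with ratio 2/m^2 stay
# strictly below the closed-form bound 1/(m^2 - 2).
for m in (2, 3, 5):
    bound = Fraction(1, m * m - 2)
    assert all(partial_sum(m, n) < bound for n in range(1, 40))
print("geometric bound verified")
```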
Therefore, to solve the problem, all we have to do is to find a height function on $E(K)$ satisfying the three properties. First let's talk about how to define a height function on $E(\mathbf{Q})$.
Fix a Weierstrass equation for $E/\mathbf{Q}$ of the form
$$E:y^2=x^3+Ax+B$$
with $A,B\in \mathbf{Z}$.
\begin{3}
Let $t\in \mathbf{Q}$ and write $t=p/q$ as a fraction in lowest terms. The height of t, denoted H(t), is defined by
$$H(t)=max\left\{|p|,|q|\right\}$$
\end{3}
\begin{3}
The height on $E(\mathbf{Q})$(relative to the given Weierstrass equation) is the function
$$h_x:E(\mathbf{Q})\rightarrow \mathbf{R}$$
$$h_x(P)=\left\{ \begin{array}{rcl}
logH(x(P)) & \mbox{if}
& P\neq O \\ 0 & \mbox{if} & P=O \end{array}\right.$$
\end{3}
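Both definitions are immediate to compute. A small sketch (function names ours) using exact rational arithmetic:

```python
from fractions import Fraction
from math import log

def H(t):
    """H(t) = max(|p|, |q|) for t = p/q in lowest terms."""
    t = Fraction(t)  # Fraction reduces to lowest terms automatically
    return max(abs(t.numerator), t.denominator)

def h_x(x_of_P):
    """h_x(P) = log H(x(P)); by convention h_x(O) = 0."""
    return log(H(x_of_P))

print(H(Fraction(10, 4)))  # 10/4 = 5/2 in lowest terms, so 5
print(H(Fraction(-3, 7)))  # 7
```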
We want to show that the height function defined above has the three properties; to this end we prove the following lemma.
\begin{2}
(a) Let $P_0\in E(\mathbf{Q})$. There is a constant $C_1$, depending on $P_0$, A, B, so that for all $P\in E(\mathbf{Q})$,
$$h_x(P+P_0)\leq 2h_x(P)+C_1$$
(b) There is a constant $C_2$, depending on A, B, so that for all $P\in E(\mathbf{Q})$,
$$h_x([2]P)\geq 4h_x(P)-C_2$$
(c) For every constant $C_3$, the set
$$\left\{P\in E(\mathbf{Q}):h_x(P)\leq C_3\right\}$$
is finite.
\end{2}
\begin{proof}
Taking $C_1>max\left\{h_x(P_0),h_x([2]P_0)\right\}$, we may assume $P_0\neq O$ and $P\neq O,\pm P_0$. Then writing
$$P=(x,y)=(\frac{a}{d^2},\frac{b}{d^3})\qquad P_0=(x_0,y_0)=(\frac{a_0}{d_0^2},\frac{b_0}{d_0^3})$$
(we can write the coordinates in this form because of the form of the Weierstrass Equation) where the indicated fractions are in lowest terms. Thus we have
$$x(P+P_0)=(\frac{y-y_0}{x-x_0})^2-x-x_0$$
Now multiplying this out and using that P and $P_0$ satisfy the Weierstrass equation yields
\begin{align*}
x(P+P_0)&=\frac{(xx_0+A)(x+x_0)+2B-2yy_0}{(x-x_0)^2}\\
&=\frac{(aa_0+Ad^2d_0^2)(ad_0^2+a_0d^2)+2Bd^4d_0^4-2bdb_0d_0}{(ad_0^2-a_0d^2)^2}
\end{align*}
In computing the height of a rational number, cancellation between numerator and denominator can only decrease the height, so we find by an easy estimation that
$$H(x(P+P_0))\leq C_1'max\left\{|a|^2,|d|^4,|bd|\right\}$$
Since $H(x(P))=max\left\{|a|,|d|^2\right\}$, and from the equation below
$$b^2=a^3+Aad^4+Bd^6$$
we can get that
$$|b|\leq C_1''max\left\{|a|^{3/2}, |d|^3\right\}$$
which implies that
$$H(x(P+P_0))\leq C_1max\left\{|a|^2,|d|^4\right\}=C_1H(x(P))^2$$
Now taking logarithms gives the desired result.
(b) By choosing $C_2\geq 4h_x(T)$ for each of the points $T\in E(\mathbf{Q})[2]$, we may assume that $[2]P\neq O$. Then writing $P=(x,y)$, the duplication formula reads
$$x([2]P)=\frac{x^4-2Ax^2-8Bx+A^2}{4x^3+4Ax+4B}$$
It is convenient to define homogeneous polynomials
$$F(X,Z)=X^4-2AX^2Z^2-8BXZ^3+A^2Z^4$$
$$G(X,Z)=4X^3Z+4AXZ^3+4BZ^4$$
Then if we write $x=x(P)=a/b$ as a fraction in lowest terms, x([2]P) can be written as a quotient of integers
$$x([2]P)=F(a,b)/G(a,b)$$
Unlike what we did in (a), we have to find a lower bound for H(x([2]P)), so it will be important to bound how much cancellation can occur between numerator and denominator.
The idea is to use the fact that F(X,1) and G(X,1) are relatively prime polynomials, so they generate the unit ideal in $\mathbf{Q}[X]$.
\newtheorem{5}{Sublemma}[subsection]
\begin{5}
Let $\Delta=4A^3+27B^2$
\begin{align*}
&f_1(X,Z)=12X^2Z+16AZ^3\\
&g_1(X,Z)=3X^3-5AXZ^2-27BZ^3\\
&f_2(X,Z)=4(4A^3+27B^2)X^3-4A^2BX^2Z+4A(3A^3+22B^2)XZ^2+12B(A^3+8B^2)Z^3\\
&g_2(X,Z)=A^2BX^3+A(5A^3+32B^2)X^2Z+2B(13A^3+96B^2)XZ^2-3A^2(A^3+8B^2)Z^3
\end{align*}
Then the following identities hold in $\mathbf{Q}[X,Z]$:
$$f_1(X,Z)F(X,Z)-g_1(X,Z)G(X,Z)=4\Delta Z^7$$
$$f_2(X,Z)F(X,Z)+g_2(X,Z)G(X,Z)=4\Delta X^7$$
\end{5}
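Since $F$, $G$, and the $f_i$, $g_i$ are explicit polynomials, the sublemma can be confirmed by evaluating both sides on a grid of integer points; the check below is our own. Note that with the coefficients displayed above, the second identity holds in the form $f_2F+g_2G=4\Delta X^7$, with a plus sign. At the end we also evaluate $F$ and $G$ on the curve $y^2=x^3+1$ at $x(P)=2$, where $P=(2,3)$.

```python
from itertools import product

def F(A, B, X, Z):  return X**4 - 2*A*X**2*Z**2 - 8*B*X*Z**3 + A**2*Z**4
def G(A, B, X, Z):  return 4*X**3*Z + 4*A*X*Z**3 + 4*B*Z**4
def f1(A, B, X, Z): return 12*X**2*Z + 16*A*Z**3
def g1(A, B, X, Z): return 3*X**3 - 5*A*X*Z**2 - 27*B*Z**3

def f2(A, B, X, Z):
    return (4*(4*A**3 + 27*B**2)*X**3 - 4*A**2*B*X**2*Z
            + 4*A*(3*A**3 + 22*B**2)*X*Z**2 + 12*B*(A**3 + 8*B**2)*Z**3)

def g2(A, B, X, Z):
    return (A**2*B*X**3 + A*(5*A**3 + 32*B**2)*X**2*Z
            + 2*B*(13*A**3 + 96*B**2)*X*Z**2 - 3*A**2*(A**3 + 8*B**2)*Z**3)

# Check the two identities on a grid of integer points.
for A, B, X, Z in product(range(-2, 3), range(-2, 3), range(-4, 5), range(-4, 5)):
    D = 4*A**3 + 27*B**2
    v = (A, B, X, Z)
    assert f1(*v)*F(*v) - g1(*v)*G(*v) == 4*D*Z**7
    assert f2(*v)*F(*v) + g2(*v)*G(*v) == 4*D*X**7  # note the plus sign

# Duplication on y^2 = x^3 + 1 (A = 0, B = 1) at x(P) = 2:
# F = 0 and G = 36, so x([2]P) = F/G = 0.
print(F(0, 1, 2, 1), G(0, 1, 2, 1))
```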
Let
$$\delta=gcd(F(a,b),G(a,b))$$
be the cancellation in our fraction for x([2]P). From equations
$$f_1(a,b)F(a,b)-g_1(a,b)G(a,b)=4\Delta b^7$$
$$f_2(a,b)F(a,b)+g_2(a,b)G(a,b)=4\Delta a^7$$
we see that $\delta$ divides $4\Delta$, since $gcd(a,b)=1$. Hence we obtain the bound
$$\delta\leq |4\Delta|$$
and so
$$H(x([2]P))\geq max\left\{|F(a,b)|,|G(a,b)|\right\}/|4\Delta|$$
On the other hand, the same identities give the estimates
$$|4\Delta b^7|\leq 2max\left\{|f_1(a,b)|, |g_1(a,b)|\right\}max\left\{|F(a,b)|, |G(a,b)|\right\}$$
$$|4\Delta a^7|\leq 2max\left\{|f_2(a,b)|, |g_2(a,b)|\right\}max\left\{|F(a,b)|, |G(a,b)|\right\}$$
Now looking at the expressions for $f_1,f_2,g_1,g_2$, which are homogeneous of degree 3, the triangle inequality gives
$$max\left\{|f_1(a,b)|,|g_1(a,b)|,|f_2(a,b)|,|g_2(a,b)|\right\}\leq Cmax\left\{|a|^3,|b|^3\right\}$$
where C is a constant depending on A and B. Combining the last three inequalities yields
$$max\left\{|4\Delta a^7|,|4\Delta b^7|\right\}\leq 2Cmax\left\{|a|^3, |b|^3\right\}max\left\{|F(a,b)|, |G(a,b)|\right\}$$
And so cancelling $max\left\{|a|^3, |b|^3\right\}$ gives
$$max\left\{|F(a,b)|, |G(a,b)|\right\}/|4\Delta|\geq (2C)^{-1}max\left\{|a|^4,|b|^4\right\}$$
Since $max\left\{|a|^4,|b|^4\right\}=H(x(P))^4$, this gives the desired estimate
$$H(x([2]P))\geq (2C)^{-1}H(x(P))^4$$
and now taking logarithms gives the desired result.
(c) For any constant C, the set
$$\left\{t\in \mathbf{Q}:H(t)\leq C\right\}$$
is obviously finite. And given any x, there will be at most two values of y satisfying the Weierstrass equation. Thus we have
$$\left\{P\in E(\mathbf{Q}):h_x(P)\leq C_3\right\}$$
is finite.
\end{proof}
Using the Descent theorem, the Weak Mordell-Weil theorem for $m=2$, and the lemma above, we can see that $E(\mathbf{Q})$ is finitely generated.
\subsection{Heights on Projective Space}
We want to prove the Mordell-Weil theorem for an arbitrary number field K, so we have to find a height function satisfying the three properties; applying the Descent theorem then finishes the proof. However, unlike for $\mathbf{Q}$, it is not easy to define a height function on an arbitrary number field directly, so we need to develop a fair amount of machinery to construct height functions in general.
\begin{3}
The set of standard absolute values on $\mathbf{Q}$, which we again denote by $M_\mathbf{Q}$, consists of the following:\\
($\romannumeral1$) $M_\mathbf{Q}$ contains one archimedean absolute value, given by
$$|x|_\infty =usual\ absolute\ value$$
($\romannumeral2$) For each prime $p\in \mathbf{Z}$, $M_\mathbf{Q}$ contains one non-archimedean (p-adic) absolute value, given by
$$|p^n\frac{a}{b}|_p=p^{-n}\qquad for\ a,b\in \mathbf{Z},\ gcd(p,ab)=1$$
The set of standard absolute values on K, denoted $M_K$, consists of all absolute values on K whose restriction to $\mathbf{Q}$ is one of the absolute values in $M_\mathbf{Q}$.
\end{3}
\begin{3}
For $v\in M_K$, the local degree at v, denoted $n_v$, is given by
$$n_v=[K_v:\mathbf{Q}_v]$$
Here $K_v$ and $\mathbf{Q}_v$ denote the completion of the field with respect to the absolute value v.
\end{3}
With these definitions, we can state two basic facts from algebraic number theory which will be needed.
\begin{4}
Let $L/K/\mathbf{Q}$ be a tower of number fields, and $v\in M_K$. Then
\begin{equation}
\sum_{w\in M_L\atop w|v}n_w=[L:K]n_v
\end{equation}
\end{4}
\begin{4}
Let $x\in K^*$. Then
$$\prod_{v\in M_K}|x|_v^{n_v}=1$$
\end{4}
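For $K=\mathbf{Q}$ every local degree $n_v$ equals 1, and the proposition can be checked directly: a minimal sketch (function names ours, trial-division factorization) that multiplies $|x|_\infty$ by all of the $p$-adic absolute values of $x$.

```python
from fractions import Fraction

def ord_p(x, p):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    n, a, b = 0, abs(x.numerator), x.denominator
    while a % p == 0:
        a //= p
        n += 1
    while b % p == 0:
        b //= p
        n -= 1
    return n

def prime_support(x):
    """Primes dividing the numerator or denominator, by trial division."""
    m = abs(Fraction(x).numerator) * Fraction(x).denominator
    primes, d = set(), 2
    while d * d <= m:
        if m % d == 0:
            primes.add(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        primes.add(m)
    return primes

def product_formula(x):
    """Product over v of |x|_v for x in Q* (all local degrees are 1)."""
    x = Fraction(x)
    total = abs(x)  # the archimedean absolute value
    for p in prime_support(x):
        total *= Fraction(p) ** (-ord_p(x, p))  # |x|_p = p^(-ord_p(x))
    return total

print(product_formula(Fraction(-360, 7)))  # 1
```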
Next we will define the height of a point in projective space.
\begin{3}
Let $P\in \mathbf{P}^N(K)$ be a point with homogeneous coordinates
$$P=[x_0,\dots,x_N],\qquad x_i\in K$$
The height of P (relative to K) is defined by
$$H_K(P)=\prod_{v\in M_K}max\left\{|x_0|_v,\dots |x_N|_v\right\}^{n_v}$$
\end{3}
As we can see, when $K=\mathbf{Q}$, this definition is the same as
$$H(P)=max \left\{|x_0|,\dots, |x_N|\right\}$$
where
$$x_0,\dots ,x_N\in \mathbf{Z}\quad and\quad gcd(x_0,\dots, x_N)=1.$$
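Concretely, over $\mathbf{Q}$ one clears denominators and divides out the gcd before taking the maximum. A small sketch (helper name ours; it assumes at least one coordinate is nonzero):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def height_Q(coords):
    """H(P) for P = [x_0 : ... : x_N] in P^N(Q).

    Clears denominators, divides by the gcd, then takes the maximum
    absolute value of the resulting coprime integer coordinates.
    """
    coords = [Fraction(c) for c in coords]
    common = reduce(lambda a, b: a * b // gcd(a, b),
                    (c.denominator for c in coords), 1)  # lcm of denominators
    ints = [int(c * common) for c in coords]
    g = reduce(gcd, (abs(n) for n in ints))
    return max(abs(n // g) for n in ints)

print(height_Q([Fraction(1, 2), 3, 1]))  # normalized to [1, 6, 2], so 6
```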
We will state some important properties of the given height function.
\begin{4}
Let $P\in \mathbf{P}^N(K)$\\
(a) The height $H_K(P)$ does not depend on the choice of homogeneous coordinates for P.\\
(b) $H_K(P)\geq 1$\\
(c) Let L/K be a finite extension. Then
$$H_L(P)=H_K(P)^{[L:K]}$$
\end{4}
\begin{proof}
(a) It is directly from Proposition 2.3.2.\\
(b) For any point in projective space, one can choose homogeneous coordinates, after multiplying by a suitable scalar, so that one of the coordinates is 1. Then every factor in the product defining $H_K(P)$ is at least 1.\\
(c) We compute
\begin{align*}
H_L(P)&=\prod_{w\in M_L}max\left\{|x_0|_w,\dots |x_N|_w\right\}^{n_w}\\
&=\prod_{v\in M_K}\prod_{w\in M_L\atop w|v}max\left\{|x_0|_v,\dots |x_N|_v\right\}^{n_w}\qquad since\ x_i\in K\\
&=\prod_{v\in M_K}max\left\{|x_0|_v,\dots |x_N|_v\right\}^{[L:K]n_v}\\
&=H_K(P)^{[L:K]}
\end{align*}
\end{proof}
Sometimes, when a field is not given, it's easier to use a height function not relative to a field.
\begin{3}
Let $P\in \mathbf{P}^N(\bar{\mathbf{Q}})$. The absolute height of P, denoted H(P), is defined as follows. Choose any field K such that $P\in \mathbf{P}^N(K)$. Then
$$H(P)=H_K(P)^{1/[K:\mathbf{Q}]}$$
In view of Proposition 2.3.3, it's easy to see that this is well defined.
\end{3}
We now investigate how the height changes under mappings between projective spaces.
\begin{3}
A morphism of degree d between projective spaces is a map
$$F:\mathbf{P}^N\rightarrow \mathbf{P}^M$$
$$F(P)=[f_0(P),\dots, f_M(P)]$$
where $f_0,\dots , f_M\in \bar{\mathbf{Q}}[X_0,\dots, X_N]$ are homogeneous polynomials of degree d with no common zero in $\bar{\mathbf{Q}}$ other than $X_0=\dots =X_N=0$.
\end{3}
To prove the height function has the three properties, we have to find the lower bound and upper bound of the height function. Therefore we have the following theorem:
\begin{1}
Let
$$F:\mathbf{P}^N\rightarrow \mathbf{P}^M$$
be a morphism of degree d. Then there are constants $C_1$ and $C_2$, depending on F, so that for all points $P\in \mathbf{P}^N(\bar{\mathbf{Q}})$,
$$C_1H(P)^d\leq H(F(P))\leq C_2H(P)^d$$
\end{1}
\begin{proof}
Write $F=[f_0,\dots,f_M]$ with homogeneous polynomials $f_i$, and let $P=[x_0,\dots, x_N]\in \mathbf{P}^N(\bar{\mathbf{Q}})$. Choose some number field K containing $x_0,\dots,x_N$ and all of the coefficients of the $f_i$. Then for each $v\in M_K$, let
$$|P|_v=max_{0\leq i\leq N}\left\{|x_i|_v\right\},\qquad |F(P)|_v=max_{0\leq j\leq M}\left\{|f_j(P)|_v\right\}$$
and
$$|F|_v=max\left\{|a|_v:a\ is\ a\ coefficient\ of\ some\ f_i\right\}$$
Then from the definition of height,
$$H_K(P)=\prod_{v\in M_K}|P|_v^{n_v}\qquad and\qquad H_K(F(P))=\prod_{v\in M_K}|F(P)|_v^{n_v}$$
so it makes sense to define
$$H_K(F)=\prod_{v\in M_K}|F|_v^{n_v}$$
Finally, we let $C_1, C_2,\dots$ denote constants which depend only on M, N and d, and set
$$\varepsilon(v) = \left\{ \begin{array}{rcl}
1 & \mbox{if}
& v\in M_K^\infty \\ 0 & \mbox{if} & v\in M_K^0
\end{array}\right.$$
Having set notation, we turn to the proof of the theorem. The upper bound is relatively easy. Let $v\in M_K$. The triangle inequality yields
$$|f_i(P)|_v\leq C_1^{\varepsilon(v)}|F|_v|P|_v^d$$
Now raise to the $n_v^{th}$ power, multiply over all $v\in M_K$, and take the $[K:\mathbf{Q}]^{th}$ root. This yields the desired upper bound
\begin{align*}
H(F(P))&\leq C_1^{
\Sigma_{v\in M_K}\varepsilon(v)n_v
/[K:\mathbf{Q}]}H(F)H(P)^d\\
&= C_1^{
\Sigma_{v\in M_K^\infty}n_v
/[K:\mathbf{Q}]}H(F)H(P)^d\\
&=C_1H(F)H(P)^d
\end{align*}
Notice that we do not use the fact that the $f_i$'s have no common non-trivial zero. But for the lower bound, we do have to use this condition.
It follows from the Nullstellensatz that the ideal generated by $f_0,\dots, f_M$ in $\bar{\mathbf{Q}}[X_0,\dots, X_N]$ contains some power of each of $X_0,\dots, X_N$, since the $f_j$ have no common zero other than $(0,\dots, 0)$. Thus for an appropriate integer $e\geq 1$, there are polynomials $g_{ij}\in \bar{\mathbf{Q}}[X_0,\dots, X_N]$ such that
\begin{equation}
X_i^e=\sum_{j=0}^{M} g_{ij}f_j \qquad\mbox{for each} \ \ 0\leq i\leq N
\end{equation}
Replacing K by a finite extension, we may assume that each $g_{ij}\in K[X_0,\dots, X_N]$. Further, by comparing the homogeneous components of degree e on both sides, we may assume that each $g_{ij}$ is homogeneous of degree e-d. Let us set the further reasonable notation
$$|G|_v=max\left\{|b|_v:b\ is\ a\ coefficient\ of\ some\ g_{ij}\right\}$$
$$H_K(G)=\prod_{v\in M_K}|G|_v^{n_v}$$
Recalling that $P=[x_0,\dots, x_N]$, the equation above implies that for each i,
\begin{align*}
|x_i|_v^e&=|\sum_{j=0}^{M}g_{ij}(P)f_j(P)|_v\\
&\leq C_2^{\varepsilon(v)}max_{0\leq j\leq M}\left\{|g_{ij}(P)f_j(P)|_v\right\}
\end{align*}
Now taking the maximum over i gives
$$|P|_v^e\leq C_2^{\varepsilon(v)}max_{0\leq j\leq M\atop 0\leq i\leq N}\left\{|g_{ij}(P)|_v\right\}|F(P)|_v$$
But since each $g_{ij}$ has degree e-d, the usual application of the triangle inequality yields
$$|g_{ij}(P)|_v\leq C_3^{\varepsilon(v)} |G|_v|P|_v^{e-d}$$
Substituting this in above and multiplying through by $|P|_v^{d-e}$ gives
$$|P|_v^d\leq C_4^{\varepsilon(v)}|G|_v|F(P)|_v$$
and now the usual raising to the $n_v$-power, multiplying over $v\in M_K$ and taking the $[K:\mathbf{Q}]^{th}$-root yields the desired lower bound.
\end{proof}
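As an illustrative aside, not part of the original text, the bounds of the theorem can be observed concretely for the degree-2 morphism $F([x_0,x_1])=[x_0^2,x_1^2]$ of $\mathbf{P}^1$, where one may take $C_1=C_2=1$: squaring coprime coordinates keeps them coprime, so $H(F(P))=H(P)^2$ exactly. A short Python check (standard library only; all names are ours):

```python
from math import gcd

def height_P1(a, b):
    # H([a, b]) for integers a, b: normalize to coprime representatives
    g = gcd(a, b)
    return max(abs(a // g), abs(b // g))

def F(a, b):
    # the degree-2 morphism [x0, x1] -> [x0^2, x1^2]
    return a * a, b * b

for (a, b) in [(3, 5), (4, 7), (-2, 9), (10, 21)]:
    HP = height_P1(a, b)
    HFP = height_P1(*F(a, b))
    # here C1 = C2 = 1: H(F(P)) = H(P)^2 exactly
    assert HFP == HP ** 2
```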
\begin{3}
For $x\in \bar{\mathbf{Q}}$, let
$$H(x)=H([x,1])$$
Similarly, if $x\in K$, then
$$H_K(x)=H_K([x,1])$$
\end{3}
\begin{1}
Let
$$f(T)=a_0T^d+a_1T^{d-1}+\dots+a_d=a_0(T-\alpha_1)\dots (T-\alpha_d)\in \bar{\mathbf{Q}}[T]$$
be a polynomial of degree d. Then
$$2^{-d}\prod_{j=1}^{d}H(\alpha_j)\leq H([a_0,\dots,a_d])\leq 2^{d-1}\prod_{j=1}^{d}H(\alpha_j)$$
\end{1}
\begin{proof}
First note that the inequality remains unchanged if $f(T)$ is replaced by $(1/a_0)f(T)$. Thus we can prove the inequality under the condition of $a_0=1$.
Let $K=\mathbf{Q}(\alpha_1,\dots,\alpha_d)$, and for $v\in M_K$, set
$$\varepsilon(v) = \left\{ \begin{array}{rcl}
2 & \mbox{if}
& v\in M_K^\infty \\ 1 & \mbox{if} & v\in M_K^0
\end{array}\right.$$
(Note that this function is different from the $\varepsilon(v)$ we defined before; we use it here because
$$|x+y|_v\leq \varepsilon(v)max\left\{|x|_v,|y|_v\right\}$$
which is exactly the form of estimate needed below.)
We will now prove that
$$\varepsilon(v)^{-d}\prod_{j=1}^{d}max\left\{|\alpha_j|_v,1\right\}\leq max_{0\leq i\leq d}\left\{|a_i|_v\right\}\leq \varepsilon(v)^{d-1}\prod_{j=1}^{d}max\left\{|\alpha_j|_v,1\right\}$$
Once this is done, raising to the $n_v$-power, multiplying over $v\in M_K$, and taking $[K:\mathbf{Q}]^{th}$-roots gives the desired result.
The proof is by induction on $d=deg(f)$. For $d=1$, the inequality is clear. Assume the result is true for polynomials of degree d-1. Choose an index k so that
$$|\alpha_k|_v\geq |\alpha_j|_v \qquad \mbox{for all} \ 1\leq j\leq d$$
And we define a polynomial
\begin{align*}
g(T)&=(T-\alpha_1)\dots(T-\alpha_{k-1})(T-\alpha_{k+1})\dots(T-\alpha_d)\\
&=b_0T^{d-1}+\dots+b_{d-1}
\end{align*}
Therefore we can see that $f(T)=(T-\alpha_k)g(T)$. By comparing coefficients we can get
$$a_i=b_i-\alpha_kb_{i-1}\qquad \mbox{for}\ 0\leq i\leq d$$
(with the convention $b_{-1}=b_d=0$). We now prove the upper bound.
\begin{align*}
max_{0\leq i\leq d}\left\{|a_i|_v\right\}&=max_{0\leq i\leq d}\left\{|b_i-\alpha_kb_{i-1}|_v\right\}\\
&\leq \varepsilon(v)max_{0\leq i\leq d}\left\{|b_i|_v,|\alpha_kb_{i-1}|_v\right\}\\
&\leq \varepsilon(v)max_{0\leq i\leq d-1}\left\{|b_i|_v\right\}max\left\{|\alpha_k|_v,1\right\}\\
&\leq\varepsilon(v)^{d-1}\prod_{j=1}^{d}max\left\{|\alpha_j|_v,1\right\}
\end{align*}
(The last step is by the induction hypothesis applied to g).
Next, to prove the lower bound, we consider two cases. First, if $|\alpha_k|_v\leq \varepsilon(v)$, then by the choice of the index k,
$$\prod_{j=1}^{d}max\left\{|\alpha_j|_v,1\right\}\leq max\left\{|\alpha_k|_v,1\right\}^d\leq \varepsilon(v)^d$$
And remember that $a_0=1$, so we have
$$max_{0\leq i\leq d}\left\{|a_i|_v\right\}\geq 1$$
Therefore
$$\varepsilon(v)^{-d}\prod_{j=1}^{d}max\left\{|\alpha_j|_v,1\right\}\leq max_{0\leq i\leq d}\left\{|a_i|_v\right\}$$
Second, if $|\alpha_k|_v\geq \varepsilon(v)$, then
\begin{align*}
max_{0\leq i\leq d}\left\{|a_i|_v\right\}&=max_{0\leq i\leq d}\left\{|b_i-\alpha_kb_{i-1}|_v\right\}\\
&=max_{0\leq i\leq d-1}\left\{|b_i|_v\right\}|\alpha_k|_v
\end{align*}
for $v\in M_K^0$. And for $v\in M_K^\infty$,
\begin{align*}
max_{0\leq i\leq d}\left\{|b_i-\alpha_kb_{i-1}|_v\right\}&\geq (|\alpha_k|_v-1)max_{0\leq i\leq d-1}\left\{|b_i|_v\right\}\\
&\geq\varepsilon(v)^{-1}|\alpha_k|_v\,max_{0\leq i\leq d-1}\left\{|b_i|_v\right\}
\end{align*}
Combining the two situations we can get that
$$max_{0\leq i\leq d}\left\{|a_i|_v\right\}\geq \varepsilon(v)^{-1}|\alpha_k|_vmax_{0\leq i\leq d-1}\left\{|b_i|_v\right\}$$
And now applying the induction hypothesis to g gives the desired lower bound, which completes the proof.
\end{proof}
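As a quick numerical illustration, not part of the original argument, the following Python sketch checks the theorem for the sample polynomial $f(T)=(T-1/2)(T-3)$, computing the height of a rational point by clearing denominators to coprime integers. All function names are ours:

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def height_rational(x):
    # H(p/q) = max(|p|, |q|) in lowest terms
    x = Fraction(x)
    return max(abs(x.numerator), abs(x.denominator))

def height_point(coords):
    # H([c_0, ..., c_n]) for rational coords: clear denominators to coprime integers
    coords = [Fraction(c) for c in coords]
    lcm = reduce(lambda a, b: a * b // gcd(a, b), [c.denominator for c in coords], 1)
    ints = [int(c * lcm) for c in coords]
    g = reduce(gcd, [abs(v) for v in ints])
    return max(abs(v) // g for v in ints)

roots = [Fraction(1, 2), Fraction(3)]
d = len(roots)
# coefficients of the monic f(T) = (T - 1/2)(T - 3) = T^2 - (7/2)T + 3/2
coeffs = [Fraction(1), -(roots[0] + roots[1]), roots[0] * roots[1]]
prod_H = 1
for r in roots:
    prod_H *= height_rational(r)
H_f = height_point(coeffs)
# 2^{-d} prod H(alpha_j) <= H([a_0,...,a_d]) <= 2^{d-1} prod H(alpha_j)
assert 2 ** (-d) * prod_H <= H_f <= 2 ** (d - 1) * prod_H
```

Here $\prod_j H(\alpha_j)=2\cdot 3=6$ and $H([1,-7/2,3/2])=H([2,-7,3])=7$, which indeed lies between $6/4$ and $12$.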
The reason we prove this theorem is that we want to prove that this height function 'satisfies' the third property. But before we do that, we have to prove another lemma.
\begin{2}
Let $P\in \mathbf{P}^N(\bar{\mathbf{Q}})$ and $\sigma\in G_{\bar{\mathbf{Q}}/\mathbf{Q}}$. Then
$$H(P^\sigma)=H(P)$$
\end{2}
\begin{proof}
Let $K/\mathbf{Q}$ be a number field with $P\in \mathbf{P}^N(K)$. Then $\sigma$ gives an isomorphism $\sigma:K\rightarrow K^\sigma$. It likewise identifies the sets of absolute values,
$$\sigma: M_K\rightarrow M_{K^\sigma}$$
$$v\rightarrow v^\sigma$$
Clearly $\sigma$ also gives an isomorphism $K_v\rightarrow K_{v^\sigma}^\sigma$, so $n_v=n_{v^\sigma}$. We now compute
\begin{align*}
H_{K^\sigma}(P^\sigma)&=\prod_{w\in M_{K^\sigma}}max\left\{|x_i^\sigma|_w\right\}^{n_w}\\
&=\prod_{v\in M_K}max\left\{|x_i^\sigma|_{v^\sigma}\right\}^{n_{v^\sigma}}\\
&=\prod_{v\in M_K}max\left\{|x_i|_v\right\}^{n_v}\\
&=H_K(P).
\end{align*}
Since $[K:\mathbf{Q}]=[K^\sigma:\mathbf{Q}]$, this is the desired result.
\end{proof}
\begin{1}
Let C and d be constants. Then the set
$$\left\{P\in\mathbf{P}^N(\bar{\mathbf{Q}}):H(P)\leq C\ and\ [\mathbf{Q}(P):\mathbf{Q}]\leq d\right\}$$
contains only finitely many points. In particular, for any number field K,
$$\left\{P\in \mathbf{P}^N(K):H_K(P)\leq C\right\}$$
is a finite set.
\end{1}
\begin{proof}
Let $P\in \mathbf{P}^N(\bar{\mathbf{Q}})$. Take homogeneous coordinates for P, say
$$P=[x_0,\dots,x_N]$$
with some $x_j=1$. Then $\mathbf{Q}(P)=\mathbf{Q}(x_0,\dots,x_N)$, and we have the easy estimate
\begin{align*}
H_{\mathbf{Q}(P)}(P)&=\prod_{v\in M_{\mathbf{Q}(P)}}max_{0\leq i\leq N}\left\{|x_i|_v\right\}^{n_v}\\
&\geq max_{0\leq i\leq N}(\prod_{v\in M_{\mathbf{Q}(P)}}max\left\{|x_i|_v,1\right\}^{n_v})\\
&=max_{0\leq i\leq N}H_{\mathbf{Q}(P)}(x_i)
\end{align*}
Thus if $H(P)\leq C$ and $[\mathbf{Q}(P):\mathbf{Q}]\leq d$, then
$$max_{0\leq i\leq N}\ H(x_i)\leq C\qquad max_{0\leq i\leq N}\ [\mathbf{Q}(x_i):\mathbf{Q}]\leq d$$
It thus suffices to prove that the set
$$\left\{x\in \bar{\mathbf{Q}}:H(x)\leq C\ and\ [\mathbf{Q}(x):\mathbf{Q}]\leq d\right\}$$
is finite, which means that we only have to prove the case $N=1$.
Suppose $x\in\bar{\mathbf{Q}}$ is in this set, and let $e=[\mathbf{Q}(x):\mathbf{Q}]\leq d$. Further let $x=x_1,\dots, x_e$ be the conjugates of x, so the minimal polynomial of x over $\mathbf{Q}$ is
$$f_x(T)=(T-x_1)\dots(T-x_e)=T^e+a_1T^{e-1}+\dots+a_e\in \mathbf{Q}[T]$$
Now
\begin{align*}
H([1,a_1,\dots,a_e])&\leq 2^{e-1}\prod_{j=1}^{e}H(x_j)\\
&=2^{e-1}H(x)^e\\
&\leq (2C)^d
\end{align*}
Thus we can see that there are only finitely many choices for $a_i$, so the set is finite, which completes the proof.
\end{proof}
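The $N=1$ case of the theorem can be made concrete: for rational $x=p/q$ in lowest terms one has $H(x)=max\left\{|p|,|q|\right\}$, so the set of rationals of height at most $C$ can simply be enumerated. A small Python illustration (ours, not part of the original text):

```python
from math import gcd

def rationals_of_height_at_most(C):
    # {x in Q : H(x) <= C}, with H(p/q) = max(|p|, |q|) in lowest terms;
    # pairs (p, q) with q >= 1 and gcd(|p|, q) = 1 represent each rational once
    pts = set()
    for q in range(1, C + 1):
        for p in range(-C, C + 1):
            if gcd(abs(p), q) == 1 and max(abs(p), q) <= C:
                pts.add((p, q))
    return pts

# the set is finite, as the theorem asserts; e.g. H(x) <= 2 allows
# 0, +-1, +-2, +-1/2 -- seven rationals in total
S = rationals_of_height_at_most(2)
assert len(S) == 7
```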
\subsection{Heights on Elliptic Curves}
Now we have already developed enough theorems about height functions on projective space, so we will focus on elliptic curves, and finish the proof of the Weak-Mordell theorem.
\begin{3}
Let f,g be two real-valued functions on a set $\Phi$. Then we write
$$f=g+O(1)$$
if there are constants $C_1$ and $C_2$ so that
$$C_1\leq f(P)-g(P)\leq C_2\ \mbox{for all}\ P\in \Phi$$
If only the lower (respectively upper) inequality is satisfied, then we naturally write $f\geq g+O(1)$ (respectively $f\leq g+O(1)$).
\end{3}
\begin{3}
The height on projective space is the function
$$h:\mathbf{P}^N(\bar{\mathbf{Q}})\rightarrow \mathbf{R}$$
$$h(P)=\log H(P)$$
Note that $h(P)\geq 0$ for all P since $H(P)\geq1$.
And let E/K be an elliptic curve and $f\in \bar{K}(E)$ a function. The height on E (relative to f) is the function
$$h_f:E(\bar{K})\rightarrow \mathbf{R}$$
$$h_f(P)=h(f(P))$$
\end{3}
\begin{4}
Let E/K be an elliptic curve and $f\in K(E)$ a non-constant function. Then for any constant C,
$$\left\{P\in E(K):h_f(P)\leq C\right\}$$
is a finite set.
\end{4}
\begin{proof}
The function f gives a finite-to-one map of the set in question to the set
$$\left\{Q\in \mathbf{P}^1(K):H(Q)\leq e^C\right\}$$
Now apply the previous theorem to this last set and we can get the desired result.
\end{proof}
The next step helps us find the relation between the additive law on an elliptic curve and the height function.
\begin{1}
Let E/K be an elliptic curve and let $f\in K(E)$ be an even function (i.e., $f\circ[-1]=f$). Then for all $P,Q\in E(\bar{K})$,
$$h_f(P+Q)+h_f(P-Q)=2h_f(P)+2h_f(Q)+O(1)$$
\end{1}
\begin{proof}
Choose a Weierstrass equation for E/K of the form
$$E:y^2=x^3+Ax+B$$
We start by proving the theorem for the particular function $f=x$ (note that it is an even function). The general case will be an easy corollary.
Since $h_x(O)=0$ and $h_x(-P)=h_x(P)$, the result clearly holds if $P=O$ or $Q=O$. We now assume that $P,Q\neq O$, and write
$$x(P)=[x_1,1]\qquad x(Q)=[x_2,1]$$
$$x(P+Q)=[x_3,1]\qquad x(P-Q)=[x_4,1]$$
Thus we have
$$x_3+x_4=\frac{2(x_1+x_2)(A+x_1x_2)+4B}{(x_1+x_2)^2-4x_1x_2}$$
$$x_3x_4=\frac{(x_1x_2-A)^2-4B(x_1+x_2)}{(x_1+x_2)^2-4x_1x_2}$$
Define a map $g:\mathbf{P}^2\rightarrow\mathbf{P}^2$ by
$$g([t,u,v])=[u^2-4tv,\ 2u(At+v)+4Bt^2,\ (v-At)^2-4Btu]$$
Then the formula for $x_3$ and $x_4$ shows that there is a commutative diagram
$$\begin{array}{ccc}
E\times E & \xrightarrow{\ G\ } & E\times E\\
\downarrow & & \downarrow\\
\mathbf{P}^1\times\mathbf{P}^1 & & \mathbf{P}^1\times\mathbf{P}^1\\
\downarrow & & \downarrow\\
\mathbf{P}^2 & \xrightarrow{\ g\ } & \mathbf{P}^2
\end{array}$$
where
$$G(P,Q)=(P+Q, P-Q)$$
and the vertical map $\sigma$ is the composition of the two maps
$$E\times E\rightarrow \mathbf{P}^1\times\mathbf{P}^1\qquad and\qquad \mathbf{P}^1\times\mathbf{P}^1\rightarrow \mathbf{P}^2$$
$$(P,Q)\rightarrow(x(P),x(Q))\qquad ([\alpha_1,\beta_1],[\alpha_2,\beta_2])\rightarrow [\beta_1\beta_2, \alpha_1\beta_2+\alpha_2\beta_1,\alpha_1\alpha_2]$$
The next step is to show that g is a morphism, so as to be able to apply the previous theorem. By definition, this is equivalent to proving that the three polynomials have no common non-trivial zero. Suppose that $g([t,u,v])=[0,0,0]$. If $t=0$, then from
$$u^2-4tv=0\qquad and\qquad (v-At)^2-4Btu=0$$
we see that $u=v=0$. Thus we may assume that $t\neq 0$, and so it makes sense to define a new quantity $x=u/2t$. Notice that the equation $u^2-4tv=0$ can be written as $x^2=v/t$. Now dividing the equalities
$$2u(At+v)+4Bt^2=0\qquad and\qquad (v-At)^2-4Btu=0$$
by $t^2$ and rewriting them in terms of x yields the two equations
$$\psi(x)=4x^3+4Ax+4B=0$$
$$\phi(x)=x^4-2Ax^2-8Bx+A^2=0$$
And we can see that
$$(12X^2+16A)\phi(X)-(3X^3-5AX-27B)\psi(X)=4(4A^3+27B^2)\neq 0$$
This completes the proof that g is a morphism.
We return to our commutative diagram, and compute
\begin{align*}
h(\sigma(P+Q,P-Q))&=h(\sigma\circ G(P,Q))\\
&=h(g\circ \sigma(P,Q))\\
&=2h(\sigma(P,Q))+O(1)\qquad \mbox{from the previous theorem,}
\end{align*}
since g is a morphism of degree 2. Now to complete the proof for $f=x$, we will show that for all $R_1,R_2\in E(\bar{K})$, there is a relation
$$h(\sigma(R_1,R_2))=h_x(R_1)+h_x(R_2)+O(1)$$
Then using this twice on both sides of the equation
$$h(\sigma(P+Q,P-Q))=2h(\sigma(P,Q))+O(1)$$
will give the desired result.
One verifies that if either $R_1=O$ or $R_2=O$, then $h(\sigma(R_1,R_2))=h_x(R_1)+h_x(R_2)$. Otherwise, we may write
$$x(R_1)=[\alpha_1,1]\qquad and\qquad x(R_2)=[\alpha_2,1]$$
and so
$$h(\sigma(R_1,R_2))=h([1,\alpha_1+\alpha_2,\alpha_1\alpha_2])\quad and\quad h_x(R_1)+h_x(R_2)=h(\alpha_1)+h(\alpha_2)$$
Then from the theorem on the heights of the roots of a polynomial applied to $(T+\alpha_1)(T+\alpha_2)$, we obtain the desired estimate
$$h(\alpha_1)+h(\alpha_2)-\log 4\leq h([1,\alpha_1+\alpha_2,\alpha_1\alpha_2])\leq h(\alpha_1)+h(\alpha_2)+\log 2$$
Finally, to deal with the general case, we prove in the lemma below that
$$h_f=\frac{deg(f)}{2}h_x+O(1)$$
Once this is proved, the theorem follows immediately from multiplying the known relation for $h_x$ by $\frac{deg(f)}{2}$.
\end{proof}
\begin{2}
Let $f,g\in K(E)$ be even functions. Then
$$(deg\ g)h_f=(deg\ f)h_g+O(1)$$
\end{2}
\begin{proof}
Let $x,y\in K(E)$ be Weierstrass coordinates for E/K. The subfield consisting of all even functions is exactly K(x), so we can find a rational function $\rho(X)\in K(X)$ so that
$$f=\rho\circ x$$
Hence, using the previous theorem and the fact that $\rho$ defines a morphism $\mathbf{P}^1\rightarrow\mathbf{P}^1$ of degree $deg\ \rho$, we can get that
$$h_f=(deg\ \rho)h_x+O(1)$$
But from the equation above, we have
$$deg\ f=(deg\ x)(deg\ \rho)=2\,deg\ \rho$$
So we find
$$2h_f=(deg\ f)h_x+O(1)$$
The same reasoning for g yields
$$2h_g=(deg\ g)h_x+O(1)$$
and combining the last two equations gives the desired result.
\end{proof}
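As an aside (ours, not in the original text), the resultant identity $(12X^2+16A)\phi(X)-(3X^3-5AX-27B)\psi(X)=4(4A^3+27B^2)$ used above to show that $g$ is a morphism can be verified mechanically. The following Python sketch checks it on an integer grid; since for each pair $(A,B)$ the left-hand side is a polynomial of degree at most 6 in $X$, fifteen sample values per pair are more than enough to pin down the identity:

```python
def phi(X, A, B):
    return X ** 4 - 2 * A * X ** 2 - 8 * B * X + A ** 2

def psi(X, A, B):
    return 4 * X ** 3 + 4 * A * X + 4 * B

# verify (12X^2 + 16A)*phi - (3X^3 - 5AX - 27B)*psi == 4(4A^3 + 27B^2)
for A in range(-3, 4):
    for B in range(-3, 4):
        for X in range(-7, 8):
            lhs = (12 * X ** 2 + 16 * A) * phi(X, A, B) \
                - (3 * X ** 3 - 5 * A * X - 27 * B) * psi(X, A, B)
            assert lhs == 4 * (4 * A ** 3 + 27 * B ** 2)
```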
\newtheorem{6}{Corollary}
\begin{6}
Let E/K be an elliptic curve and $f\in K(E)$ an even function. \\
(a) Let $Q\in E(\bar{K})$. Then for all $P\in E(\bar{K})$,
$$h_f(P+Q)\leq 2h_f(P)+O(1)$$
(b) Let $m\in \mathbf{Z}$. Then for all $P\in E(\bar{K})$,
$$h_f([m]P)=m^2h_f(P)+O(1)$$
\end{6}
\begin{proof}
(a) This follows directly from Theorem 2.4.3, because $h_f(P-Q)\geq 0$.\\
(b) Since f is even, it suffices to consider $m\geq 0$. Further, this result is trivial for $m=0,1$. We will finish the proof by induction. Assume it is known for m-1 and m. Replacing P,Q in Theorem 2.4.3 by [m]P and P, we find
\begin{align*}
h_f([m+1]P)&=-h_f([m-1]P)+2h_f([m]P)+2h_f(P)+O(1)\\
&=(-(m-1)^2+2m^2+2)h_f(P)+O(1)\\
&=(m+1)^2h_f(P)+O(1)
\end{align*}
\end{proof}
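Part (b) of the corollary can be watched numerically. The Python sketch below (ours, not part of the original text) uses the Mordell curve $y^2=x^3-2$ and the classical point $(3,5)$, which has infinite order; it implements naive affine addition with exact rational arithmetic and checks that $h_x([m]P)$ grows with $m$, as the quadratic behaviour predicts. We only assert monotonicity, since the $O(1)$ error terms are large for small $m$:

```python
from fractions import Fraction
from math import log

A, B = Fraction(0), Fraction(-2)   # E : y^2 = x^3 - 2

def on_curve(P):
    x, y = P
    return y * y == x ** 3 + A * x + B

def add(P, Q):
    # affine addition on E; None encodes the point at infinity O
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) / (2 * y1)     # tangent slope
    else:
        lam = (y2 - y1) / (x2 - x1)            # chord slope
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

def mult(m, P):
    R = None
    for _ in range(m):
        R = add(R, P)
    return R

def h_x(P):
    # h_x(P) = log H(x(P)) for x(P) = [p/q, 1] in lowest terms
    x = P[0]
    return log(max(abs(x.numerator), abs(x.denominator)))

P = (Fraction(3), Fraction(5))
assert all(on_curve(mult(m, P)) for m in (1, 2, 3, 4))
heights = [h_x(mult(m, P)) for m in (1, 2, 4)]
assert heights[0] < heights[1] < heights[2]
```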
\begin{1}
\textup{(Mordell-Weil)} Let E be an elliptic curve defined over a number field K. Then the group E(K) is a finitely generated Abelian group.
\end{1}
\begin{proof}
Choose any even, non-constant function $f\in K(E)$, for example the x-coordinate function on a Weierstrass equation. Then from Corollary 2.4.4(a), 2.4.4(b) and Proposition 2.4.1 we know that the height function $h_f$ satisfies the three required properties, and the Mordell-Weil theorem then follows from the Descent theorem together with the Weak Mordell-Weil theorem.
\end{proof}
\section{Introduction}
The conjugacy decision problem is, along with the word and isomorphism
problems, one of the original group-theoretic decision problems introduced by Max Dehn
in 1911. In this paper we will consider a variant of the first called the conjugacy search problem, in which we are given two conjugate elements $g$ and $g_1$ of a group $G$ and are asked to find an element $h$ in $G$ such that $g_1=g^h=hgh^{-1}$. There are groups for which the conjugacy decision problem is unsolvable, whereas the search variant is always solvable. In the case of finitely generated metabelian groups, the conjugacy decision problem is solvable \cite{noskov1982}.
\
The original motivation for the conjugacy search problem comes from group-based cryptography. In this paper our focus is on the algorithmic aspects of the conjugacy search problem, rather than its potential use in cryptographic applications. In particular, we develop and estimate the computational complexity of an algorithm that solves the conjugacy search problem in a certain family $\mathcal{F}$ of finitely presented metabelian groups (see Section \ref{groups}).
Note that in the literature there are many algorithmic results available for the conjugacy decision, conjugacy search, and other group-theoretic problems, but primarily for polycyclic groups (\cite{eickostheimer2003}, \cite{handbook}). However, groups in our family need not be polycyclic. To rectify this, we essentially translate the existing algorithms for metabelian polycyclic groups to a non-polycyclic setting. There are a number of technical difficulties in achieving this, and a large part of the paper is devoted to resolving them.
\
A particular subfamily of $\mathcal{F}$ consists of the following generalization of Baumslag-Solitar groups:
$$G=\langle q_1,\ldots,q_n,b\mid [q_l,q_t]=1,q_lbq_l^{-1}=b^{m_l},1\leq l,t\leq n\rangle,$$
where $m_1,\ldots,m_n$ are integers.
Observe that such generalized Baumslag-Solitar groups split as semidirect products $B\rtimes Q$ with $Q$ free abelian of finite rank $n$ and $B=\mathbb{Z}[{1\over m_1},\ldots,{1\over m_n}]$.
In general, groups in $\mathcal{F}$ split as a semidirect product $G=B\rtimes Q$ with $Q$ free abelian of finite rank and where $B$ can be seen as an additive subgroup of $\mathbb{Q}^s$ for some $s$. Throughout the paper, we will consider $B$ as a $Q$-module with left action. Upon fixing a basis, the action of $Q$ on $B$ can be described using integral matrices
that commute pairwise.
Such a group $G$ is polycyclic if and only if the matrices encoding the action of $Q$ have integral inverses \cite{AuslanderHall}.
\
Groups in $\mathcal{F}$ enjoy strong finiteness properties, for example they have finite Pr\"ufer rank (meaning that the number of generators needed to generate any finitely generated subgroup is bounded),
have cohomological type $\text{FP}_\infty$ \cite[Proposition 1]{baumslag1976constructable} (see also the proof of Theorem 8 in the same paper) and are constructible (i.e., can be constructed in finitely many steps from the trivial group using finite index extensions and ascending HNN-extensions). In fact, our groups are iterated, strictly ascending HNN-extensions of a free abelian group of finite rank. Moreover, any constructible torsion-free split metabelian group of finite Pr\"ufer rank lies in $\mathcal{F}$ and any metabelian group of finite Pr\"ufer rank can be embedded in a group in $\mathcal{F}$ \cite{baumslag1976constructable}.
\
The general strategy of our algorithm consists of using linear representations of our groups so we can utilize known methods from linear algebra, such as the polynomial time solution of the multiple orbit problem given in \cite{aszl1996multiplicative}. In order to use these methods we need an efficient means of swapping between the representation of an element of $B$ inside $\mathbb{Q}^s$ and its word representation. After giving the precise definition of the family $\mathcal{F}$ and fixing some notation, we develop the necessary techniques to swap between these representations in Section \ref{groups}. These include a method to decide whether an arbitrary vector in $\mathbb{Q}^s$ belongs to $B$, as well as the ability to determine when a system of linear equations has a solution in $B$. Along the way we show that the computational complexity of these procedures is sufficiently low to switch between word and linear representations as needed.
\
In Section \ref{complexity}, we describe and analyze the computational complexity of our algorithm for the conjugacy search problem in $\mathcal{F}$. In particular, we prove the following theorem:
\begin{thm}\label{exponential} For any $G\in\mathcal{F}$, the time complexity of the conjugacy search problem for conjugate elements $g,g_1\in G$ is at most exponential in the length of $g$ and $g_1$.
\end{thm}
As a corollary, we also deduce some consequences about conjugator lengths (Corollary \ref{length}).
\
Of course, the complexity bound of Theorem \ref{exponential} is too large for some particular choices of the group $G$. This is the case if $G$ happens to be nilpotent, as the complexity is known to be polynomial (see \cite{sims1994}, we thank the anonymous referee for pointing out this fact). We make this explicit by giving an example of a subfamily of $\mathcal{F}$ consisting of nilpotent groups where our algorithm is also polynomial (see Theorem \ref{polynomial}).
\
However, there are some particular cases of groups in $\mathcal{F}$ for which the conjugacy search problem reduces to a type of discrete logarithm problem. This is discussed in Subsection \ref{discretelog}. In particular, this is the case for generalized metabelian Baumslag-Solitar groups given by a relatively simple presentation of the form
$$
G=\left\langle q_1,q_2,b|b^{q_1}=b^{m_1},b^{q_2}=b^{m_2},[q_1,q_2]=1\right\rangle.
$$
\iffalse{
\
Length based conjugacy search is a heuristic method that attempts to solve the conjugacy search problem or the generalized conjugacy search problem (multiple instances of the conjugacy search problem where there is a common conjugating element in a specified subgroup). The latter problem is well known since it is related to the security of the Arithmetica protocol. To perform the LBCS, we associate to our group an effectively computable length function that has the property that conjugation generically increases the lengths of elements. Following that, we iteratively build a conjugating element by successively conjugating by generators of our group and then assuming that we are building a successful conjugator when there is a decrease in length.
Finally in the last section we perform experiments on the generalized metabelian Baumslag-Solitar groups as above. Such experiments utilize a heuristic algorithm called length-based conjugacy search, which is adapted from an attack of the same name originating in group-based cryptography. Our experiments indicate that these generalized metabelian Baumslag-Solitar groups are resistant to such search algorithms, i.e., probabilistically the conjugator cannot be found given sufficient time.
}\fi
\section{Split Metabelian Groups of Finite Pr\"ufer Rank}\label{groups}
\subsection{The Family $\mathcal{F}$ and Notation} Consider the following group presentation
\begin{equation}\label{F}G=\langle q_1,\ldots,q_n,b_1,\ldots,b_s\mid [q_l,q_t]=1,[b_i,b_j]=1,\mathcal{R}\rangle, \hbox{ with}\end{equation}
$$\mathcal{R}=\{q_lb_jq_l^{-1}=b_1^{m_{l(1,j)}}b_2^{m_{l(2,j)}}\ldots b_s^{m_{l(s,j)}}:m_{l(i,j)}\in\mathbb{Z},\ 1\leq j\leq s,\ 1\leq l\leq n\},$$
where we require the following extra condition: for $l=1,\ldots,n$ let $M_l$ be the integer matrix encoding the action of $q_l$, that is, the $s\times s$ matrix with $j$-th column $m_{l(1,j)},\ldots,m_{l(s,j)}$. Then the matrices $M_l$ have to commute pairwise.
\
We define the class $\mathcal{F}$ as the class of groups admitting a presentation as (\ref{F}). For a fixed group $G$ with presentation (\ref{F}) we denote by $Q$ the subgroup of $G$ generated by $q_1,\ldots,q_n$ and by $B$ the normal subgroup of $G$ generated by $b_1,\ldots,b_s$. Note that elements in $B$ are precisely those elements expressed by words where the exponent sum of instances of every $q_l$ is zero. From this fact one easily sees that $G=B\rtimes Q$.
We use multiplicative notation for the whole group $G$ but often we use additive notation for $B$. If $c\in B$, $x\in Q$, the conjugation action of the element $x$ on $c$ is denoted
$x\cdot c$ additively or $c^x=xcx^{-1}$ multiplicatively.
Any element in $G$
can be represented by a word of the following type
\begin{equation}\label{form}q_1^{-\alpha_1}\ldots q_n^{-\alpha_n}b_1^{\beta_1}\ldots b_s^{\beta_s}q_1^{\gamma_1}\ldots q_n^{\gamma_n},\end{equation}
with $\alpha_1,\ldots,\alpha_n\geq 0$ and such that whenever $\alpha_i\neq 0$, the element $q_i^{-1}b_1^{\beta_1}\ldots b_s^{\beta_s}q_i$ does not belong to the subgroup generated by $b_1,\ldots,b_s$.
There is an efficient algorithm (collection) to transform any word in the generators into a word of the previous form representing the same group element: use the relators to move all of the instances of $q_i$ with negative exponents to the left and all the instances of $q_i$ with positive exponents to the right
(see example \ref{genBS}).
\begin{ex}\label{genBS} Generalized Metabelian Baumslag-Solitar Groups. {\rm Let $m_1,\ldots,m_n$ be positive integers. We call the group given by the following presentation a {\sl generalized metabelian Baumslag-Solitar group}
$$G=\langle q_1,\ldots,q_n,b\mid b^{q_l}=b^{m_l},\, [q_l, q_t]=1,\, 1\leq l,t\leq n\rangle.$$
It is a constructible metabelian group of finite Pr\"ufer rank and $G\cong B\rtimes Q$ with $Q=\langle q_1,\ldots,q_n\rangle\cong\mathbb{Z}^n$ and $B=\mathbb{Z}[m_1^{\pm1},\ldots,m_n^{\pm1}]$ (as additive groups). Let us examine how collection works for these groups. Consider for example the group
$$
G=\langle q_1,q_2,b\mid b^{q_1}=b^{2},b^{q_2}=b^{3},[q_1,q_2]=1\rangle,
$$
with $G\cong\mathbb{Z}\left[\frac{1}{2},\frac{1}{3}\right]\rtimes\mathbb{Z}^2$, and an uncollected word
$q_2bq_1^{-1}q_2b^{-1}q_1$ in $G$
representing the group element $w$. An iterated use of the relations $q_2b^{\pm 1}=b^{\pm 3}q_2$ and $b^{\pm 1}q_1^{-1}=q_1^{-1}b^{\pm 2}$ together with the commutation relations between $q_1$ and $q_2$ yields
$$
w=q_2bq_1^{-1}q_2b^{-1}q_1=b^3q_2q_1^{-1}b^{-3}q_2q_1=q_1^{-1}b^6b^{-9}q^2_2q_1=q_1^{-1}b^{-3}q_1q^2_2.
$$
}
\end{ex}
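The collection computation above can be double-checked in the semidirect representation $G\cong\mathbb{Z}[\frac{1}{2},\frac{1}{3}]\rtimes\mathbb{Z}^2$, where an element is a pair $(v,(\gamma_1,\gamma_2))$, multiplication is $(v,x)(w,y)=(v+x\cdot w,\,x+y)$, and $(\gamma_1,\gamma_2)$ acts on $v$ as multiplication by $2^{\gamma_1}3^{\gamma_2}$. A Python sketch (ours; the letter encoding of the generators is an arbitrary choice, with capitals for inverses):

```python
from fractions import Fraction

def act(g, v):
    # action of (g1, g2) in Z^2 on v in Z[1/2, 1/3]
    g1, g2 = g
    return v * Fraction(2) ** g1 * Fraction(3) ** g2

def mul(p, q):
    # (v, g)(w, h) = (v + g.w, g + h)
    (v, g), (w, h) = p, q
    return (v + act(g, w), (g[0] + h[0], g[1] + h[1]))

def word(letters):
    e = (Fraction(0), (0, 0))
    gens = {
        "b":  (Fraction(1),  (0, 0)), "B":  (Fraction(-1), (0, 0)),
        "q1": (Fraction(0),  (1, 0)), "Q1": (Fraction(0), (-1, 0)),
        "q2": (Fraction(0),  (0, 1)), "Q2": (Fraction(0), (0, -1)),
    }
    for ltr in letters:
        e = mul(e, gens[ltr])
    return e

# the word q2 b q1^{-1} q2 b^{-1} q1 from the example ...
w = word(["q2", "b", "Q1", "q2", "B", "q1"])
# ... and its collected form q1^{-1} b^{-3} q1 q2^2
w_collected = word(["Q1", "B", "B", "B", "q1", "q2", "q2"])
assert w == w_collected == (Fraction(-3, 2), (0, 2))
```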
\
\begin{ex}\label{galois}{\rm Let $L:\mathbb{Q}$ be a Galois extension of degree $s$ and fix an integral basis $\{u_1,\ldots,u_s\}$ of $L$ over $\mathbb{Q}$.
Then $\{u_1,\ldots,u_s\}$ freely generates the maximal order $\mathcal{O}_L$ as a $\mathbb{Z}$-module.
Now, we choose integral elements, $q_1,\ldots,q_n$, generating a free abelian multiplicative subgroup of $L-\{0\}$. Each $q_l$ acts on $L$ by left multiplication and using the basis $\{u_1,\ldots,u_s\}$, we may represent this action by means of an integral matrix $M_l$. Let $B$ be the smallest sub $\mathbb{Z}$-module of $L$ closed under multiplication with the elements $q_l$ and $q_l^{-1}$ and such that $\mathcal{O}_L\subseteq B$, i.e.,
$$B=\mathcal{O}_L[q_1^{\pm 1},\ldots,q_n^{\pm1}].$$
We may then define $G=B\rtimes Q$, where the action of $Q$ on $B$ is given by multiplication by the $q_l$'s. The generalized Baumslag-Solitar groups of the previous example are a particular case of this situation when $L=\mathbb{Q}$. If the elements $q_l$ lie in $\mathcal{O}_L^\times$, which is the group of units of $\mathcal{O}_L$, then the group $G$ is polycyclic.}
\end{ex}
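For concreteness (our illustration, not part of the original text), take $L=\mathbb{Q}(\sqrt{2})$ with integral basis $\{1,\sqrt{2}\}$: multiplication by $a+b\sqrt{2}$ has matrix $\begin{pmatrix}a&2b\\ b&a\end{pmatrix}$, any two such matrices commute, and the determinant (the field norm) is $\pm1$ exactly when $a+b\sqrt{2}$ is a unit of $\mathcal{O}_L$:

```python
def mult_matrix(a, b):
    # matrix of multiplication by a + b*sqrt(2) on Q(sqrt(2))
    # with respect to the integral basis {1, sqrt(2)}
    return [[a, 2 * b], [b, a]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M1 = mult_matrix(0, 1)   # q1 = sqrt(2); det = -2, so G is not polycyclic
M2 = mult_matrix(1, 1)   # q2 = 1 + sqrt(2), a unit of O_L (norm -1)
# the action matrices commute, as required for membership in F
assert matmul(M1, M2) == matmul(M2, M1)
# determinants: +-1 exactly for units of O_L
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det(M1) == -2 and det(M2) == -1
```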
\subsection{Linear Representations}\label{semidirectB} In this subsection, we consider again a group with a presentation as in (\ref{F}). We will show that the subgroup $B$ can be seen as an additive subgroup of $\mathbb{Q}^s$ and see how to swap between representation of its elements as words in the generators and as vectors in $\mathbb{Q}^s$. Recall that $B$ consists precisely of those elements of $G$ that are given by words $w$ so that for each $i=1,\ldots,n$ the exponent sum of $q_i$'s is zero. Using the collection process above, such an element $b$ can be represented by a word
$$q_1^{-\alpha_1}\ldots q_n^{-\alpha_n}b_1^{\beta_1}\ldots b_s^{\beta_s}q_1^{\alpha_1}\ldots q_n^{\alpha_n}.$$
Additively,
$$b=(q_1^{-\alpha_1}\ldots q_n^{-\alpha_n})\cdot ({\beta_1}b_1+\ldots+ {\beta_s}b_s).$$
Then, define the map $B\hookrightarrow\mathbb{Q}^s$ by mapping $b$ to
\begin{equation}\label{matrixb} v=M_1^{-\alpha_1}\cdots M_n^{-\alpha_n}\begin{pmatrix} \beta_1\\ \vdots\\ \beta_s\\ \end{pmatrix}.\end{equation}
One easily checks that this is a well-defined injection
that can be seen as an explicit recipe to transform words representing elements in $B$ into the corresponding vector of $\mathbb{Q}^s$. We can do something similar for arbitrary elements $g\in G$:
assume that $g$ is given by a word as in (\ref{form})
$$q_1^{-\alpha_1}\ldots q_n^{-\alpha_n}b_1^{\beta_1}\ldots b_s^{\beta_s}q_1^{\gamma_1}\ldots q_n^{\gamma_n},$$
then the following word also yields $g$:
$$q_1^{-\alpha_1}\ldots q_n^{-\alpha_n}b_1^{\beta_1}\ldots b_s^{\beta_s}q_1^{\alpha_1}\ldots q_n^{\alpha_n}q_1^{\gamma_1-\alpha_1}\ldots q_n^{\gamma_n-\alpha_n}.$$
In the semidirect representation we have $g=vx$ with $x=q_1^{\gamma_1-\alpha_1}\ldots q_n^{\gamma_n-\alpha_n}$ and $v$ is as before.
The complexity of the computation in (\ref{matrixb}) using Gaussian elimination for inverses, standard matrix multiplication, and efficient exponentiation is:
$$
O((n-1)[s^3+s^3\log\max_l(\alpha_l)+s^3\log\max_l(\gamma_l-\alpha_l)]+s^2+s^3).
$$
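As a sanity check of the computation in (\ref{matrixb}), here is a Python sketch (ours) for a hypothetical rank-2 example with diagonal action matrices, say $b_1^{q_1}=b_1^2$ and $b_2^{q_2}=b_2^3$; for simplicity it uses repeated multiplication rather than the efficient exponentiation assumed in the complexity estimate:

```python
from fractions import Fraction

def mat_vec(M, v):
    n = len(v)
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def inv_diag(M):
    # inverse of a diagonal matrix, enough for this illustration
    n = len(M)
    return [[Fraction(1, M[i][i]) if i == j else Fraction(0)
             for j in range(n)] for i in range(n)]

# action matrices of q1, q2 in the toy group above
M1 = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(1)]]
M2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(3)]]

# image of q1^{-1} q2^{-2} b1^5 b2^7 q1 q2^2, i.e. M1^{-1} M2^{-2} (5,7)^T
v = [Fraction(5), Fraction(7)]
for M, alpha in ((M1, 1), (M2, 2)):
    Minv = inv_diag(M)
    for _ in range(alpha):
        v = mat_vec(Minv, v)
assert v == [Fraction(5, 2), Fraction(7, 9)]
```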
Now, consider the converse, in which we have $vx$ with $v$ a vector in $\mathbb{Q}^s$. In order to express $vx$ as a word as in (\ref{form})
we first
describe a particular subset of $\mathbb{Q}^s$ that contains $B$.
In the following discussion, we identify $B$ with its image in $\mathbb{Q}^s$ and the group generated by $b_1\ldots,b_s$ with $\mathbb{Z}^s$.
\
For $1\leq l \leq n$, let $d_l$ be the smallest positive integer such that
$d_lM_l^{-1}$ is an integral matrix, i.e., $d_l$ is the least common denominator of the entries of $M_l^{-1}$. Let $d=\prod_ld_l$ (if $G$ is polycyclic, $d=1$). For any $v\in B$,
$$
d^{\alpha_1+\ldots+\alpha_n}v\in\mathbb{Z}^s
$$
thus $v\in\mathbb{Z}[\frac{1}{d}]^s$, in other words, we have
$$B\subseteq\mathbb{Z}[\tfrac{1}{d}]^s\subset \mathbb{Q}^s.$$
\begin{rem}{\rm This implies that for any $v\in B$, if $t$ is the smallest positive integer such that $d^tv$ lies in $\mathbb{Z}^s$, then $t$ is bounded by twice the length of $v$ as a word as in (\ref{form}).}
\end{rem}
\begin{comment}
Conversely, assume that we are given a vector $v\in\mathbb{Z}[{1\over d}]^s$. We now describe a procedure to decide whether $v$ belongs to the image of $B$ under the embedding $B\hookrightarrow\mathbb{Q}^s$, and if so, to find a word that represents the associated group element $b$.
If $B$ happens to be equal to the whole $\mathbb{Z}[{1\over d}]^s$ then we do not need to do anything. In particular this happens if $d$ can be taken to be $1$, i.e., when $G$ is polycyclic.
\end{comment}
$B$ can also be constructed from $\mathbb{Z}^s$ and $M=\prod_lM_l$. Observe that
$$
\mathbb{Z}^s\subseteq M^{-1}\mathbb{Z}^s\subseteq\ldots\subseteq M^{-j}\mathbb{Z}^s\subseteq M^{-j-1}\mathbb{Z}^s\subseteq\ldots\subseteq B
$$
and in fact $B=\cup_{j=0}^\infty M^{-j}\mathbb{Z}^s.$ To check this, note that any vector in $B$ has the form $M_1^{-\beta_1}\ldots M_n^{-\beta_n}u$ for some $u\in\mathbb{Z}^s$ and certain $\beta_1,\ldots,\beta_n\geq 0$. Let $\beta=\max\{\beta_1,\ldots,\beta_n\}$, then
$$M_1^{-\beta_1}\ldots M_n^{-\beta_n}u=M^{-\beta}M_1^{\beta-\beta_1}\ldots M_n^{\beta-\beta_n}u=M^{-\beta}w$$
where $w=M_1^{\beta-\beta_1}\ldots M_n^{\beta-\beta_n}u$ lies in $\mathbb{Z}^s$. Consequently, if $q=q_1\ldots q_n$, then the group $B\rtimes \langle q\rangle$ is a strictly ascending HNN extension of $\mathbb{Z}^s$.
\begin{lem}\label{intersection} There is some $\alpha$ depending on $G$ only such that for any $i$,
$$B\cap {1\over d^i}\mathbb{Z}^s\subseteq M^{-i\alpha}\mathbb{Z}^s.$$
Moreover $\alpha\leq s\log d$.
\end{lem}
\begin{proof} Consider first the case when $i=1$. We have $\mathbb{Z}^s\subseteq{1\over d}\mathbb{Z}^s$ and
$$\mathbb{Z}^s\subseteq M^{-1}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s\subseteq\ldots\subseteq M^{-j}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s\subseteq M^{-j-1}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s\subseteq\ldots\subseteq{1\over d}\mathbb{Z}^s.$$
As the quotient of ${1\over d}\mathbb{Z}^s$ over $\mathbb{Z}^s$ is the finite group $\mathbb{Z}_d\times\ldots\times \mathbb{Z}_d$ of order $d^s$, this sequence stabilizes at some degree, say $\alpha$. Then
$$B\cap {1\over d}\mathbb{Z}^s=M^{-\alpha}\mathbb{Z}^s\cap {1\over d}\mathbb{Z}^s\subseteq M^{-\alpha}\mathbb{Z}^s$$
as desired. Moreover, we claim that the chain stabilizes precisely at the first $\alpha$ such that
$$M^{-\alpha}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s= M^{-\alpha-1}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s.$$
To demonstrate, let $b\in M^{-\alpha-2}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s$. Then
$$Mb\in M^{-\alpha-1}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s=M^{-\alpha}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s$$
thus $b\in M^{-\alpha-1}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s=M^{-\alpha}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s$. Repeating the argument implies that for all $\beta>\alpha$,
$$M^{-\alpha}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s=M^{-\beta}\mathbb{Z}^s\cap{1\over d}\mathbb{Z}^s.$$
As a consequence, $\alpha$ is bounded by the length of the longest chain of proper subgroups in $\mathbb{Z}_d\times\ldots\times \mathbb{Z}_d$, i.e., $\alpha\leq\log(d^s)=s\log d$.
\
For the case of an arbitrary $i$ we argue by induction. Let $b\in B\cap {1\over d^i}\mathbb{Z}^s$, then $db\in B\cap {1\over d^{i-1}}\mathbb{Z}^s$ and by induction we may assume that $db\in M^{-(i-1)\alpha}\mathbb{Z}^s$, thus $M^{(i-1)\alpha}db=v\in\mathbb{Z}^s$. Then
$${1\over d}v\in B\cap {1\over d}\mathbb{Z}^s\subseteq M^{-\alpha}\mathbb{Z}^s.$$
Therefore
$$M^{\alpha}M^{(i-1)\alpha} b={1\over d}M^{\alpha} v\in\mathbb{Z}^s$$
and $b\in M^{-i\alpha}\mathbb{Z}^s$.
\end{proof}
It is easy to construct examples with $\alpha\neq 1$ for $\alpha$ as in Lemma \ref{intersection}:
\begin{ex}\label{exalpha}{\rm
Consider the group $G\in\mathcal{F}$ given by the following presentation:
\begin{align*}
G&=\langle q_1,q_2,q_3,b_1,b_2,b_3\mid [q_l,q_t]=1,[b_i,b_j]=1,\mathcal{R}\rangle\, \hbox{with}\\
&\mathcal{R}=\{b_1^{q_1}=b_1^2,b_2^{q_2}=b_2^4,b_3^{q_3}=b_3^{16},b_j^{q_l}=b_j\text{ for }j\neq l\}\\
\end{align*}
From the presentation above $s=3$. The linear representations of the $q_l$'s (and their product $M$) are then:
$$
M_1=\left[\begin{array}{ccc}
2 & 0 & 0\\
0& 1 & 0\\
0& 0 &1
\end{array}\right]
M_2=\left[\begin{array}{ccc}
1 & 0 & 0\\
0& 4 & 0\\
0& 0 &1
\end{array}\right]
M_3=\left[\begin{array}{ccc}
1 & 0 & 0\\
0& 1 & 0\\
0& 0 &16
\end{array}\right];
M=\left[\begin{array}{ccc}
2 & 0 & 0\\
0& 4 & 0\\
0& 0 &16
\end{array}\right].
$$
From visual inspection of $M$ it is clear that $d=16$. Moreover, it is easy to check that ${1\over 16}\mathbb{Z}^s\subseteq B$ and that in fact
$${1\over 16}\mathbb{Z}^s={1\over 16}\mathbb{Z}^s\cap B\subseteq M^{-4}\mathbb{Z}^s.$$
Moreover, $4$ is the smallest exponent for which this holds, thus $\alpha=4$.
}\end{ex}
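For this example one can compute $\alpha$ directly: since $M$ is diagonal, each intersection $M^{-j}\mathbb{Z}^3\cap{1\over 16}\mathbb{Z}^3$ splits coordinatewise as ${1\over\gcd(m_i^j,d)}\mathbb{Z}$. A minimal sketch of this computation (the diagonal shortcut is specific to this example, not part of the general argument):

```python
from math import gcd

# Diagonal entries of M and the integer d from the example.
m_diag = [2, 4, 16]
d = 16

def chain_step(j):
    # Denominators of M^{-j}Z^3 ∩ (1/d)Z^3, coordinatewise:
    # (1/a)Z ∩ (1/b)Z = (1/gcd(a,b))Z.
    return tuple(gcd(mi ** j, d) for mi in m_diag)

# alpha is the first index at which the ascending chain stabilizes.
alpha = 0
while chain_step(alpha) != chain_step(alpha + 1):
    alpha += 1
print(alpha)  # -> 4, matching the example
```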
In determining whether a vector $v\in\mathbb{Q}^s$ lies in $B$, it is clear from the previous discussion that a necessary condition is that $v$ belongs to $\mathbb{Z}[{1\over d}]^s$, and therefore there exists a $t>0$ such that $v\in {1\over d^t}\mathbb{Z}^s$. In the particular case when $v$ is integral, then $v\in B$ and the coordinates of $v$ are the exponents of the $b_j$'s in an expression for $v$ as in (\ref{form}).
\
If $v$ is not integral, we can perform the following procedure to check whether $v\in{1\over d^t}\mathbb{Z}^s$ for some $t$ and to find the smallest such $t$. First, compute the least common multiple $m$ of the denominators of the entries of $v$. By reducing if necessary, we may assume that
$$v={1\over m}(v_1,\ldots,v_s)$$
with the $v_j$ integers so that no prime divides all of $m,v_1,\ldots, v_s$ simultaneously. Note that for an arbitrary $t$, $d^tv$ is integral if and only if $m$ divides $d^tv_j$ for each $j=1,\ldots,s$, and by the choice of $m$ and $v_1,\ldots,v_s$ this is equivalent to $m$ dividing $d^t$. Therefore to decide whether $v$ lies in ${1\over d^t}\mathbb{Z}^s$ for some $t$ we
only have to check whether $d^t\equiv 0$ modulo $m$ for some $t\geq 0$. Moreover, it is easy to check that if there is such a $t$ then there is one with $t\leq m$; in other words,
if explicit factorizations of $m$ and $d$ are not available, we need only compute $d^t$ for $1\leq t\leq m$, and in the case when none of these values is a multiple of $m$ we conclude that $v$ does not belong to $\mathbb{Z}[{1\over d}]^s$.
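The divisibility check just described is straightforward to implement with exact rational arithmetic; a minimal sketch (the function name is illustrative):

```python
from fractions import Fraction
from math import lcm

def smallest_t(v, d):
    """Smallest t >= 0 with d^t * v integral, or None if v is not in Z[1/d]^s."""
    # Fraction reduces automatically, so m = lcm of the denominators has the
    # property that no prime divides m and all numerators simultaneously.
    m = lcm(*(x.denominator for x in v))
    r = 1 % m                    # d^0 mod m
    for t in range(m + 1):       # t <= m suffices, as argued above
        if r == 0:
            return t
        r = (r * d) % m
    return None

v = [Fraction(1, 32), Fraction(3, 64), Fraction(5, 16)]
print(smallest_t(v, 16))                 # -> 2, since 16^2 = 256 is a multiple of 64
print(smallest_t([Fraction(1, 3)], 16))  # -> None: 1/3 is not in Z[1/16]
```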
\begin{lem}\label{vinB} Let $v\in\mathbb{Z}[{1\over d}]^s$ and let $t$ be the smallest non-negative integer such that $d^tv$ is integral. Then $v\in B$ if and only if $$M^{ts\lfloor\log d\rfloor}v\in\mathbb{Z}^s$$
where $M=M_1M_2\ldots M_n$.
The complexity of this computation is polynomial, specifically $O((n-1) s^3\log(ts\lfloor\log d\rfloor))$. (Alternatively, the same result holds true but with $\alpha$ instead of $s\lfloor\log d\rfloor$).\end{lem}
\begin{proof} Lemma \ref{intersection} implies that $v\in B$ if and only if $M^{t\alpha}v$ is integral. Thus if $v\in B$,
$$M^{ts\lfloor\log d\rfloor}v=M^{(ts\lfloor\log d\rfloor-t\alpha)} M^{t\alpha}v$$
is integral because $ts\lfloor\log d\rfloor-t\alpha\geq 0$. The converse is obvious.
\
Regarding the time complexity, we have to compute $M^{ts\lfloor\log d\rfloor}v$, i.e., raise the matrix $M$ to the $(ts\lfloor\log d\rfloor)$-th power and apply it to $v$. The complexity estimate is obtained using standard matrix multiplication and efficient (binary) exponentiation.
\end{proof}
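For the group of the running example the criterion of the lemma can be tested directly with exact arithmetic. The sketch below assumes the diagonal action of Example \ref{exalpha} and takes $\lfloor\log d\rfloor$ in base $2$, consistent with the subgroup-chain bound:

```python
from fractions import Fraction

# Diagonal M from the example; s = 3, d = 16 and floor(log2 d) = 4.
M_diag = [2, 4, 16]
s, d, log_d = 3, 16, 4

def in_B(v, t):
    """v in B iff M^(t*s*floor(log d)) v is integral (criterion of the lemma)."""
    e = t * s * log_d
    return all((Fraction(mi) ** e * vi).denominator == 1
               for mi, vi in zip(M_diag, v))

v = [Fraction(1, 32), Fraction(3, 64), Fraction(5, 16)]
print(in_B(v, 2))   # True: here t = 2 and the exponent is 24
# A much smaller exponent already works for this particular v:
print(all((Fraction(mi) ** 5 * vi).denominator == 1
          for mi, vi in zip(M_diag, v)))  # True
```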
\begin{rem}{\rm Note that the exponent $ts\lfloor\log d\rfloor$ is just an upper bound and often a much smaller value suffices to obtain an expression of a given $v\in B$ as a word on the generators of $G$. Consider for example the group of Example \ref{exalpha} and the vector $v\in\mathbb{Q}^3$:
$$
v=\left[ \frac{1}{32},\frac{3}{64},\frac{5}{16}\right].
$$
Here, $t=2$, $s=3$, $d=16$ and $\lfloor\log d\rfloor=4$, thus $ts\lfloor\log d\rfloor=24$, but note that already $M^5v$ is integral.
}\end{rem}
\begin{comment}
\begin{rem}
{\rm In Lemma \ref{vinB} if $v$ lies in $B$ we get, as a by-product, an expression for $v$ as in (\ref{form}). Moreover, we deduce that given an element by its semidirect product representation $vx$ with $x\in Q$ and $v\in\mathbb{Z}[{1\over d}]^s$ it is possible, in a time which is polynomial in $n$, $i$ and $s$ (recall that $\alpha\leq s\log d$), to produce a representation as a word in the generators. (Recall that $d$ can be computed in polynomial time in $s$). Conversely, the semidirect product representation of a group element which is given by a word as in (\ref{form}) can be computed in a time which is polynomial in $s$ and in the length of the word since we only have to perform at most as many matrix multiplications as that length.}
\end{rem}
\end{comment}
\begin{comment}
\begin{ex}{\rm
Consider the group $G\in\mathcal{F}$ given by the following presentation:
\begin{align*}
G&=\langle b_i,q_i\mid b_1^{q_1}=b_1^{2},b_2^{q_2}=b_2^{4},b_3^{q_3}=b_3^{16},[b_i,b_j]=1,[q_i,q_j]=1\rangle,\\
&\text{with }1\leq i,j\leq 3\text{ and }b_i^{q_j}=b_i\text{ for }i\neq j.
\end{align*}
From the presentation above $s=3$. The linear representations of the $q_l$'s (and their product $M$) are then:
$$
M_1=\left[\begin{array}{ccc}
2 & 0 & 0\\
0& 1 & 0\\
0& 0 &1
\end{array}\right]
M_2=\left[\begin{array}{ccc}
1 & 0 & 0\\
0& 4 & 0\\
0& 0 &1
\end{array}\right]
M_3=\left[\begin{array}{ccc}
1 & 0 & 0\\
0& 1 & 0\\
0& 0 &16
\end{array}\right];
M=\left[\begin{array}{ccc}
2 & 0 & 0\\
0& 4 & 0\\
0& 0 &16
\end{array}\right].
$$
From visual inspection of $M$ it is clear that $d=16$. Now consider the vector $v\in\mathbb{Q}^3$:
$$
v'=\left[ \frac{1}{32},\frac{3}{64},\frac{5}{16}\right],
$$
then $m=\text{lcm}(32,64,16)=64$. As $d<m$, our first value for $i$ is 2, which yields $d^2=256\equiv 0\text{ (mod } m)$, thus $v\in \mathbb{Z}\left[\frac{1}{d}\right]^3$. Then $v_1\in\mathbb{Z}^3$, as
\begin{align*}
v_1&=M^{is\lfloor\log d\rfloor}v=M^{2\cdot3\cdot4}v=M^{24}v\\
&=\left[ 524288,13194139533312,24758800785707605497982484480\right]'\\
&=\left[ \beta_1,\beta_2,\beta_3\right]^T;
\end{align*}
which is the linear representation of the word
$$
q_1^{-24}q_2^{-24}q_3^{-24}b_1^{\beta_1}b_2^{\beta_2}b_3^{\beta_3}\in G.
$$
}\end{ex}
\end{comment}
\subsection{Solving Linear Systems} To finish this section and for future reference, we are going to consider the following problem. Assume that we have a square $s\times s$ integral matrix $N$ that commutes with all the matrices $M_l$ and a rational column vector $u\in\mathbb{Q}^s$, and we want to determine if the linear system
\begin{equation}\label{originalsystem}NX=u\end{equation}
has some solution $v\in\mathbb{Q}^s$ that lies in $B$.
To solve this problem, we will use a standard technique to solve this kind of system in $\mathbb{Z}$. The Smith normal form for $N$ is a diagonal matrix $D$ with diagonal entries $k_1,\ldots,k_r,0,\ldots,0$, such that $0<k_j$ for $1\leq j\leq r$ and each $k_j$ divides the next $k_{j+1}$, with $r$ being the rank of $N$. Moreover, there are invertible matrices $P$ and $Q$ in $\text{SL}(s,\mathbb{Z})$ such that $D=QNP$.
We set $$a=\text{max}\{|a_{ij}|\mid a_{ij}\text{ entry of }N\}.$$
\begin{lem}\label{smith1} Let $N$ be any integral $s\times s$ matrix and let $D=\text{diag}(k_1,\ldots,k_r,0,\ldots,0)$ be its Smith normal form. Then
$$k_1\ldots k_r\leq \sqrt{s}a^s$$
\end{lem}
\begin{proof} It is well known that the product $k_1\ldots k_r$ is the greatest common divisor of the determinants of the nonsingular $r\times r$ minors of the matrix $N$. Let $N_1$ be one of those minors. Then
$$k_r\leq k_1\ldots k_r\leq|\text{det}N_1|.$$
Now, the determinant of the matrix $N_1$ is bounded by the product of the norms of the columns $c_1,\ldots,c_r$ of the matrix (this bound is due to Hadamard, see for example \cite{Horn1985}) so we have
$$|\text{det}N_1|\leq\prod_{j=1}^r\|c_j\|\leq\sqrt{r}^ra^r.$$
\end{proof}
Recall that we are assuming that $N$ commutes with all the matrices $M_l$. Under this assumption we claim that we can solve the problem above by using Lemma \ref{vinB}. To demonstrate, let $P$ and $Q$ be invertible matrices in $\text{SL}(s,\mathbb{Z})$ such that $D=QNP=\text{diag}(k_1,\ldots,k_r,0,\ldots,0)$
is the Smith normal form of $N$. Reordering the diagonal entries so that the zero block comes first and writing $D_2=\text{diag}(k_1,\ldots,k_r)$, our system can be transformed into
\begin{equation}\label{smithsystem} D\tilde X= \begin{scriptsize} \begin{pmatrix}
0&0 \\
0& D_2 \\
\end{pmatrix}
\end{scriptsize}\tilde X=Qu
\end{equation}
with $\tilde X=P^{-1}X$. At this point, we see that the system has some solution if and only if the first $s-r$ entries of $Qu$ vanish.
Assume that this is the case and let $v_2$ be the unique solution to the system
\begin{equation}\label{smallsystem}D_2\tilde X_2=(Qu)_2\end{equation}
where the subscript $2$ in $\tilde X$ and $Qu$ means that we take the last $r$ coordinates only. Then
$$v_2=D_2^{-1}(Qu)_2.$$
The set of all the rational solutions to (\ref{originalsystem}) is
$$\Big\{P
\begin{scriptsize}\begin{pmatrix}
v_1 \\
v_2 \\
\end{pmatrix}
\end{scriptsize}\mid v_1\in\mathbb{Q}^{s-r}
\Big\}.$$
Equivalently, this set can be written as $$v+\text{Ker}N\, \text{
where }\, v=P \begin{scriptsize}\begin{pmatrix}
0 \\
v_2 \\
\end{pmatrix}
\end{scriptsize}.$$
Observe that the columns of $P$ give a new basis of $\mathbb{Z}^s$ that can be used to define $B$ instead of $b_1,\ldots,b_s$. In this new basis the action of each $q_l$ is encoded by the matrix $P^{-1}M_lP$. The fact that $N$ commutes with each $M_l$ implies that $M_l$ leaves $\text{Ker}N$ (setwise) invariant. By construction, $\text{Ker}N$ is generated by the first $s-r$ columns of $P$ and therefore each $P^{-1}M_lP$ has the following block upper triangular form:
$$P^{-1}M_lP= \begin{scriptsize}\begin{pmatrix}
A_l&B_l \\
0& C_l \\
\end{pmatrix}
\end{scriptsize}.$$
Moreover, $C_l$ is just the $r\times r$ matrix associated with the action of $q_l$ in the quotient $\mathbb{Q}^s/\text{Ker}N$, written in the basis obtained from the last $r$ columns of $P$.
\begin{prop}\label{solutionsystem} A solution to the system (\ref{smithsystem}) exists in $B$ if and only if $v_2\in\mathbb{Z}[{1\over d}]^r$ and
$$C^{tr\lfloor\log d\rfloor}v_2\in\mathbb{Z}^r,$$
with $C=\prod_lC_l$ and $t$ the smallest possible integer such that $d^tv_2$ is integral. (We can use $s$ instead of $r$).
\end{prop}
\begin{proof} Assume first that $C^{tr\lfloor\log d\rfloor}v_2\in\mathbb{Z}^r$, with $t$ as above. We have
$$P^{-1}M^{tr\lfloor\log d\rfloor}P= \begin{scriptsize}\begin{pmatrix}
A&S \\
0& C^{tr\lfloor\log d\rfloor}\\
\end{pmatrix}
\end{scriptsize}$$
for certain $(s-r)\times r$ matrix $S$ and certain $(s-r)\times (s-r)$ invertible matrix $A$, with $M=\prod_lM_l$ as before. Therefore
$$P^{-1}M^{tr\lfloor\log d\rfloor}P\tilde X= \begin{scriptsize}\begin{pmatrix}
A&S \\
0& C^{tr\lfloor\log d\rfloor}\\
\end{pmatrix} \begin{pmatrix}v_1\\ v_2\end{pmatrix}=\begin{pmatrix}Av_1+Sv_2\\ C^{tr\lfloor\log d\rfloor}v_2\end{pmatrix}.
\end{scriptsize}$$
This means that now we only have to find a $v_1\in\mathbb{Q}^{s-r}$ such that $Av_1+Sv_2\in\mathbb{Z}^{s-r}$. To do so, observe that it suffices to take $v_1=-A^{-1}Sv_2$.
\
Conversely, assume that some $P\begin{scriptsize}\begin{pmatrix}v_1\\ v_2\end{pmatrix}
\end{scriptsize}$ lies in $B$. Then some product of positive powers of the $M_l$'s transforms $P\begin{scriptsize}\begin{pmatrix}v_1\\ v_2\end{pmatrix}
\end{scriptsize}$ into an integral vector, thus there is a product of the $C_l$'s that transforms $v_2$ into an integral vector.
We may use now Lemma \ref{vinB} applied to $\mathbb{Q}^r=\mathbb{Q}^s/\text{Ker}N$ with respect to the action of the matrices $C_l$ to conclude that
$v_2\in\mathbb{Z}[{1\over d}]^r$ and $$C^{tr\lfloor\log d\rfloor}v_2\in\mathbb{Z}^r,$$
with $t$ the smallest possible integer such that $d^tv_2$ is integral. (Note that $dC_l^{-1}$ is integral so we can use the same $d$ for this quotient as for the original group.)\\
\end{proof}
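The steps of the proof can be traced on a toy diagonal instance, where the Smith data is immediate (after reordering, $P$ and $Q$ can be taken to be the identity). The matrix $N$ below is an illustrative choice that commutes with the diagonal action of Example \ref{exalpha}; all names are assumptions of the sketch, not the paper's notation:

```python
from fractions import Fraction
from math import lcm

# N = diag(0, 3, 2) commutes with M = diag(2, 4, 16); d = 16, floor(log2 d) = 4.
M_diag = [2, 4, 16]
N_diag = [0, 3, 2]
d, log_d = 16, 4

def solvable_in_B(u):
    # The zero rows of N force the corresponding entries of u to vanish.
    if any(n == 0 and x != 0 for n, x in zip(N_diag, u)):
        return False
    v2 = [x / n for n, x in zip(N_diag, u) if n != 0]   # unique solution part
    C = [m for n, m in zip(N_diag, M_diag) if n != 0]   # action on Q^s / Ker N
    r = len(C)
    # Smallest t with d^t v2 integral (None if v2 is not in Z[1/d]^r).
    mm = lcm(*(x.denominator for x in v2))
    t, rem = None, 1 % mm
    for k in range(mm + 1):
        if rem == 0:
            t = k
            break
        rem = (rem * d) % mm
    if t is None:
        return False
    e = t * r * log_d
    return all((Fraction(c) ** e * x).denominator == 1 for c, x in zip(C, v2))

print(solvable_in_B([Fraction(0), Fraction(3, 4), Fraction(5)]))  # True
print(solvable_in_B([Fraction(1), Fraction(0), Fraction(0)]))     # False
```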
\begin{rem}\label{iforv2}{\rm Observe that, as $N$ is integral, a necessary condition for (\ref{originalsystem}) to have some solution in $B$ is that $u$ lies in $\mathbb{Z}[{1\over d}]^s$.
Let $t_0$ be such that $d^{t_0}u$ is integral. Then $d^{t_0}\text{det}(D_2)v_2$ is also integral. If $v_2$ lies in $\mathbb{Z}[{1\over d}]^r$, it means that for some $t_1$ such that $d^{t_1}\leq\text{det}(D_2)$, we have that $d^{t_0+t_1}v_2$ is integral. By Lemma \ref{smith1} $\text{det}(D_2)\leq \sqrt{s}a^s$, thus $t_1\leq\sqrt{s}a^s$. As a consequence, if $t$ is as in Proposition \ref{solutionsystem}, we have
$$t\leq t_0+\sqrt{s}a^s.$$
}\end{rem}
Now we are ready to show:
\begin{prop}\label{complexitysolutionsystem} There is an algorithm to decide whether the system (\ref{originalsystem}) has some solution in $B$ and to compute that solution. The complexity of this algorithm is polynomial, specifically
$$
O(s^6\log(sa)+ (s-r)^5 +(s-r)^3 + (n-1)[s^3\log(ts\log d)+1]+r^3).
$$
where $t\leq t_0+\sqrt{s}a^s$ and $t_0$ is such that $d^{t_0}u$ is integral. (If there is no such $t_0$ then the system has no solution in $B$).
\end{prop}
\begin{proof}
The algorithm has been described above. In summary, we have to transform the original system using the Smith normal form for $N$, compute $v_2$ and the matrices $C_l$ and $C=C_1\ldots C_n$, and then check whether $v_2$ lies in $\mathbb{Z}[{1\over d}]^r$. If it does, we may either compute $t$ such that $d^tv_2$ is integral or estimate $t$ as $t_0+t_1$ (see Remark \ref{iforv2}). Then we compute
$$C^{tr\lfloor\log d\rfloor}v_2$$
and check whether it is integral or not.
\
To estimate the complexity of this procedure observe that for an integral matrix $N$, the time complexity of computing the Smith normal form $D$ and invertible integral matrices $P$ and $Q$ such that $QNP=D$ is polynomial, specifically $O(s^6\log(sa))$, where $a$ is the maximum absolute value of the entries of $N$. For a proof of this fact see \cite{KB} in the non-singular case and \cite{CY} for the singular one.
Once we have the Smith normal form, to compute $v_2$ we only have to perform the product of $D_2^{-1}$ and $(Qu)_2$, which has complexity $O(r^3)$. Next, we have to compute the matrices $C_l$, which requires $n-1$ matrix multiplications, thus $O((n-1)s^3)$. We then check whether $C^{tr\lfloor\log d\rfloor}v_2$ is integral, which takes at most $O((n-1) s^3\log(ts\log d))$ time.
Solving for $v_2$ and $v_1$ via Gaussian elimination takes $O(r^3)$ and $O((s-r)^3)$, respectively, and calculating $v_1$ is $O((s-r)^5)$. The overall time complexity is then the sum of the above operations. Note that the lower order terms involving $s$ and $r$ are dominated by the complexity of calculating the Smith normal form.
\end{proof}
\section{On the Complexity of the Conjugacy Problem}\label{complexity}
\subsection{The Conjugacy Search Problem in Split Metabelian Groups}\label{general} We begin with a few considerations about the conjugacy search problem in split metabelian groups not necessarily in $\mathcal{F}$. So for the moment we consider
a group $G$ of the form $G=B\rtimes Q$ with both $B$ and $Q$ abelian groups. As before, we use multiplicative notation for the whole group $G$ but additive notation for $B$.
\
Assume that we have conjugate elements $g,g_1\in G$ and we want to solve the conjugacy search problem for $g$, $g_1$, i.e., we want to find an
$h\in G$ such that
$$g^h=g_1.$$
Let $g=bx$, $g_1=b_1x_1$ and $h=cy$ with $b,b_1,c\in B$, $x,x_1,y\in Q$, then
$$b_1x_1=g_1=g^h=hgh^{-1}=cybxy^{-1}c^{-1}=cb^y(c^{-1})^{x}x$$
thus $x=x_1$ and from now on we denote this element solely by $x$.
The element $cb^y(c^{-1})^{x}$ belongs to the abelian group $B$. We write it additively
$$c-x\cdot c+y\cdot b=y\cdot b+(1-x)\cdot c.$$
This means that the conjugacy search problem above
is equivalent to the problem of finding $c\in B$, $y\in Q$ such that
\begin{equation}\label{equation}b_1=y\cdot b+(1-x)\cdot c\end{equation}
when $b,b_1\in B$ and $x\in Q$ are given.
This problem can be split into two parts:
\begin{itemize}
\item[i)] find a $y\in Q$ such that $b_1\equiv y\cdot b$ modulo the subgroup $(1-x)B$
\item[ii)] find a $c\in B$ such that $(1-x)\cdot c=b_1-y\cdot b$ where $y$ is the element found in i).
\end{itemize}
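The identity behind this reduction, $hgh^{-1}=\big(y\cdot b+(1-x)\cdot c\big)\,x$ as in (\ref{equation}), can be verified mechanically on a concrete group. The sketch below represents elements as pairs (vector, exponent tuple) and uses the diagonal action of Example \ref{exalpha} as an illustrative instance:

```python
from fractions import Fraction

# Semidirect product G = B ⋊ Q with Q = <q1, q2, q3> acting by diag(2, 4, 16):
# an element (b, x) is b followed by q1^{x1} q2^{x2} q3^{x3}.
M_diag = [2, 4, 16]

def act(x, b):                      # x · b = M_1^{x1} M_2^{x2} M_3^{x3} b
    return [Fraction(m) ** e * v for m, e, v in zip(M_diag, x, b)]

def mul(g, h):                      # (b, x)(c, y) = (b + x·c, x + y)
    (b, x), (c, y) = g, h
    return ([u + v for u, v in zip(b, act(x, c))],
            tuple(i + j for i, j in zip(x, y)))

def inv(g):                         # (b, x)^{-1} = (-x^{-1}·b, -x)
    b, x = g
    xinv = tuple(-i for i in x)
    return ([-v for v in act(xinv, b)], xinv)

g = ([Fraction(1), Fraction(2), Fraction(0)], (1, 0, 2))     # g = b x
h = ([Fraction(0), Fraction(1, 4), Fraction(1)], (0, 1, 0))  # h = c y
b, x = g
c, y = h

conj = mul(mul(h, g), inv(h))       # h g h^{-1}
# Right-hand side of the displayed equation: (y·b + (1-x)·c, x).
rhs = ([u + v - w for u, v, w in zip(act(y, b), c, act(x, c))], x)
print(conj == rhs)                  # True
```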
The complexity of both problems depends on the groups $Q$ and $B$ and potentially on the choice of $x$, $b$ and $b_1$ as well.
\
We now return to the particular case when our group belongs to $\mathcal{F}$. In this case there is a decomposition $G=B\rtimes Q$ with $Q$ a free abelian group of finite rank and $B$ an additive subgroup of $\mathbb{Q}^s$. We fix the elements $x\in Q$, $b$ and $b_1\in B$. We also use the same notation for $G$ as in the previous section so we assume that $G$ has a presentation as (\ref{F}), that $M_1,\ldots,M_n$ are the matrices encoded in that presentation that yield the action of the $q_1,\ldots,q_n$ and that $d$ is an integer so that $dM_l^{-1}$ is an integer matrix for each $l=1,\ldots,n$.
\
We let $M_x$ be the rational matrix associated with the action of $x$ on $B$, that is, if $x=q_1^{\alpha_1}\ldots q_n^{\alpha_n}$, then
$$M_x=M_1^{\alpha_1}\ldots M_n^{\alpha_n}.$$
For technical reasons it will be useful to have a special notation for the matrix $I-M_x$. So we put $N_x=I-M_x$.
Consider the $Q$-invariant subgroup of $B$
$$N_xB=(1-x)\cdot B.$$
There is an induced action of $Q$ on the quotient group $\bar{B}=B/((1-x)\cdot B)=B/N_xB$. We use $\bar{\quad}$ to denote the coset in $\bar{B}$ associated with a given element. With this notation, the equation in i) above becomes
$$\bar b_1=y\cdot \bar b.$$
Let $T$ be the torsion subgroup of $\bar B$. This subgroup is also $Q$-invariant so we get an induced $Q$-action on $\bar B/T$.
The fact that $\bar B/T$ is torsion-free and of finite Pr\"ufer rank implies that it can be embedded in $\mathbb{Q}^{s_1}$ for some $s_1$. Explicitly, as $\mathbb{Q}$ is flat we get
$$\bar B/T\hookrightarrow \bar B/T\otimes\mathbb{Q}=(B/N_xB)\otimes\mathbb{Q}=(B\otimes\mathbb{Q})/(N_xB\otimes\mathbb{Q})=\mathbb{Q}^s/N_x\mathbb{Q}^s$$
and one can find the matrices encoding the action of each of the elements $q_l$ on $\bar{B}/T$.
\
As we will see below, problem i) can be reduced by quotienting out the subgroup $T$ to a multiple orbit problem in a vector space and problem ii) is a type of discrete log problem. For the first we take advantage of the polynomial time solution of the multiple orbit problem in a vector space given in \cite{aszl1996multiplicative}. The latter can be solved using the fact, proved in the next subsection, that $T$ is finite. Moreover, we will see that one can get an upper bound for the complexity of this algorithm that is essentially dependent upon the size of $T$.
\subsection{On the Subgroup $T$} We proceed to show that $T$ is indeed finite and to compute a bound for its size.
Recall that the exponent of a torsion abelian group $T$, denoted $\text{exp}(T)$, is the smallest positive integer $k$ such that $kv=0$ for any $v\in T$. (If there is no such integer, then the exponent is infinite.) The following lemma is well known, but we include it here for completeness:
\begin{lem}\label{abelianprufer} Let $T$ be a torsion abelian group of finite Pr\"ufer rank $s$. Assume that $k=\text{exp}(T)<\infty$. Then $T$ is finite and
$$|T|\leq k^s.$$
\end{lem}
\begin{proof} Observe that as $T$ has finite exponent, its $p$-primary component $T_p$ vanishes for all primes $p$ except possibly those dividing $k$. Moreover, $T$ cannot contain quasicyclic groups $C_{p^\infty}$. Then, using \cite[5.1.2]{robinson2004} (see also item 3 on page 85), we see that for any prime $p$ dividing $k$, $T_p$ is a sum of at most $s$ cyclic groups, each of order at most the $p$-part of $k$.
As $T=\oplus_{p\mid k}T_p$ we deduce the result.
\end{proof}
\begin{lem}\label{integer} Let $N$ be a square $s\times s$ integer matrix and $T$ the torsion subgroup of the group $\mathbb{Z}^s/N\mathbb{Z}^s$. Then
$$\text{exp}(T)\leq \sqrt{s}a^s$$
with $$a=\text{max}\{|a_{ij}|\mid a_{ij}\text{ entry of }N\}.$$
\end{lem}
\begin{proof} Let $D=\text{diag}(k_1,\ldots,k_r,0,\ldots,0)$ be the Smith normal form of $N$. Then
$$\text{exp}(T)=k_r\leq k_1\ldots k_r$$
so it suffices to apply Lemma \ref{smith1}.
\end{proof}
\begin{thm}\label{bound} Let $T$ be the torsion subgroup of the abelian group
$\bar{B}={B/(1-x)\cdot B}$.
Then $T$ is finite and
$$|T|\leq \sqrt{s}^sd^{\mathcal{L}s^2}(a+1)^{s^2}$$
where $\mathcal{L}$ is the length of the element $x$ as a word in the generators of $Q$ and $a$ is the maximum absolute value of an entry in $M_x$, the matrix associated with the action of $x$ on $B$.
\end{thm}
\begin{proof} Let $N_x=I-M_x$. Assume first that $M_x$ is an integral matrix, so the same happens with $N_x$. We want to relate the exponent of $T$ with the exponent of the torsion subgroup of $\mathbb{Z}^s/N_x\mathbb{Z}^s$. Let $k$ be this last exponent and choose $\beta\in B$ such that $0\neq\bar \beta$ lies in $T$. Denote by $m>0$ the order of $\bar \beta$. Observe that $m\beta=N_x\gamma$ for some $\gamma\in B$ and that $m$ is the smallest possible under these conditions.
\
Next, choose $q\in Q$ such that $q\cdot \beta$ and $q\cdot \gamma$ both lie in $\mathbb{Z}^s$. To find such a $q$ it suffices to
write $\beta$ and $\gamma$ multiplicatively as in (\ref{form}) and take as $q$ a product of the $q_l$'s with big enough exponents.
\
Then we have $m(q\cdot \beta)=q\cdot N_x\gamma=N_x(q\cdot \gamma)\in N_x\mathbb{Z}^s$ thus $q\cdot \beta+N_x\mathbb{Z}^s$ lies in the torsion subgroup of $\mathbb{Z}^s/N_x\mathbb{Z}^s$. Therefore, $k(q\cdot \beta)\in N_x\mathbb{Z}^s$. Now, let $m_1$ be the greatest common divisor of $m$ and $k$ and observe that the previous equations imply $m_1(q\cdot \beta)\in N_x\mathbb{Z}^s$. This means that for some $\gamma_1\in\mathbb{Z}^s$ we have
$m_1(q\cdot \beta)=N_x\gamma_1$, thus
$$m_1\beta=q^{-1}N_x\gamma_1=N_xq^{-1}\gamma_1=N_x\gamma_2$$
with $\gamma_2=q^{-1}\cdot \gamma_1\in B$. By the minimality of $m$ we must have $m\leq m_1$. As $m_1$ divides both $k$ and $m$ we conclude $m=m_1\mid k$. This implies that the exponent of $T$ divides $k$.
\
Next, we consider the general case when $N_x$ could be non-integral.
As $M_x$ is the product of $\mathcal{L}$ matrices in the set $\{M_1^{\pm1},\ldots,M_n^{\pm1}\}$ we see that the matrix $d^{\mathcal{L}}M_x$ is integral and therefore so is $d^{\mathcal{L}}N_x$.
Obviously, the group $N_xB/d^{\mathcal{L}}N_xB$ is torsion thus
$$\text{exp}(T)\leq\text{exp}(\text{torsion subgroup of }B/d^\mathcal{L}N_xB).$$
The matrix $d^{\mathcal{L}}N_x$ also commutes with the $Q$-action so what we did above implies that this last exponent divides the exponent of the torsion subgroup of $\mathbb{Z}^s/d^{\mathcal{L}}N_x\mathbb{Z}^s$. From all this together with Lemma \ref{integer}, and using that the biggest absolute value of an entry of $d^{\mathcal{L}}N_x$ is bounded by $d^{\mathcal{L}}(a+1)$, we get
$$\text{exp}(T)\leq \sqrt{s}d^{{\mathcal{L}}s}(a+1)^s.$$
Finally, as the group $\bar B$ has finite Pr\"ufer rank, so does $T$, therefore by Lemma \ref{abelianprufer} we get the result.
\end{proof}
\
\begin{rem}{\rm The maximum absolute value of an entry in the matrix $M_x$ is bounded exponentially in $\mathcal{L}$. Therefore, its logarithm is bounded linearly in $\mathcal{L}$. To see this, observe first that
if $M_1$ and $M_2$ are $s\times s$ matrices and $\mu$ is an upper bound for the absolute value of the entries of both $M_1$ and $M_2$, then the maximum absolute value of an entry in the product $M_1M_2$ is bounded by
$s\mu^2.$ Repeating this argument one sees that if $x$ has length $\mathcal{L}$ as a word in $q_1,\ldots,q_n$ and $\mu$ is an upper bound for the absolute value of the entries of each $M_l$, then the maximum absolute value $a$ of an entry of $M_x$ is bounded by
$$s^{\mathcal{L}-1}\mu^\mathcal{L}.$$
}
\end{rem}
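The bound $s\mu^2$ in the remark is in fact attained when every entry equals $\mu$, and iterating shows the bound $s^{\mathcal{L}-1}\mu^{\mathcal{L}}$ is sharp as well; a quick numeric check (the values of $s$, $\mu$ and $L$ are arbitrary illustrative choices):

```python
# Worst case for the entry bound: if every entry of two s x s matrices
# equals mu, each entry of the product is exactly s * mu^2.
s, mu = 4, 3
A = [[mu] * s for _ in range(s)]
prod_entry = sum(A[0][k] * A[k][0] for k in range(s))
print(prod_entry == s * mu * mu)  # True

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A product of L such matrices has entries exactly s^(L-1) * mu^L.
L = 5
P = A
for _ in range(L - 1):
    P = matmul(P, A)
print(P[0][0] == s ** (L - 1) * mu ** L)  # True
```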
\
The next result yields a bound on the order of $T$ which is exponential in the length $\mathcal{L}$ of $x$.
\begin{prop}\label{Texponential} With the previous notation, there is a constant $K$, depending on $G$ only such that for $T$ the torsion subgroup of $\bar{B}=B/N_xB$,
$$|T|\leq K^\mathcal{L}$$
where $\mathcal{L}$ is the length of $x$.
\end{prop}
\begin{proof} Let $\mu$ be an upper bound for the absolute value of the entries of each $M_l$. By Theorem \ref{bound} and the observation above
$$|T|\leq \sqrt{s}^sd^{\mathcal{L}s^2}(a+1)^{s^2}\leq \sqrt{s}^sd^{\mathcal{L}s^2}(s^{\mathcal{L}-1}\mu^\mathcal{L}+1)^{s^2}\leq(\sqrt{s}ds\mu+\sqrt{s}d)^{s^2\mathcal{L}}$$
so we only have to take $K=(\sqrt{s}ds\mu+\sqrt{s}d)^{s^2}$.
\end{proof}
\
\
\subsection{Description of the Algorithm}\label{algorithm}
Recall that to find an element that conjugates $bx$ into $b_1x$ (here, $x\in Q$, $b,b_1\in B$) we need to find:
\begin{itemize}
\item{i)} $y\in Q$ such that $\bar{b_1}=y\cdot\bar{b}$ where $\bar{\quad}$ means passing to $\bar{B}=B/N_xB$,
\item{ii)} $c\in B$ such that $N_xc=b_1-y\cdot b$.
\end{itemize}
\
\noindent{\bf Step 1 (Problem i):} With $M_x$ and $N_x$ as before, form the quotient $V=\mathbb{Q}^s/N_x\mathbb{Q}^s$ and find the matrices encoding the action of each $q_l$ on $V$.
Consider the projections $\bar b+T$ and $\bar b_1+T$ of $b$ and $b_1$ in $\bar B/T$ and see them as elements in $V$ (via the embedding $\bar B/T\hookrightarrow V$). Then use the algorithm in \cite{aszl1996multiplicative} to solve the multiple orbit problem
$$y\cdot (\bar b+T)=\bar b_1+T.$$
This algorithm determines not only a single $y$ but the full lattice of solutions
$$\Lambda=\{q\in Q\mid q\cdot\bar b-\bar b_1\in T\}.$$
Furthermore, it allows one to compute a basis $y_1,\ldots,y_m$ of the following subgroup of $Q$
$$Q_1=C_Q(\bar b+T)=\{q\in Q\mid q\cdot \bar b-\bar b\in T\}.$$
(Note that for any $z\in\Lambda$, $Q_1=z^{-1}\Lambda$).
\
\noindent{\bf Step 2 (Problem ii):} Order the elements of $Q_1$ according to word length. For each $q\in Q_1$ check whether $q\cdot b-b_1\in N_xB$. Each check consists of trying to solve a system of linear equations. More precisely, we have to check whether the system
$$u= N_xX$$
with $u=q\cdot b-b_1$ has some solution in $B$. This can be done using Proposition \ref{solutionsystem}. When found, the solution yields the element $c$ of $B$ that we needed.
\
Of course, a priori this procedure might never halt. But as we shall see next this is not the case, for the number of iterations in Step 2 is bounded by the size of the group $T$, which has been shown to be finite.
\
We can now be more explicit. Recall that the problem is to find a $y\in Q$ such that $y\cdot \bar b=\bar b_1$; as this is the search variant of the conjugacy problem, such a $y$ exists. We will use the notation above, so we know that all the solutions $y$ lie in the set
$$\Lambda=\{q\in Q\mid q\cdot\bar b-\bar b_1\in T\}.$$
Again let $Q_1=C_Q(\bar b+T)=\{q\in Q\mid q\cdot \bar b-\bar b\in T\}$. Let $z\in\Lambda$ be an arbitrary element that we will assume fixed from now on. Then we have $\Lambda=zQ_1$ thus
for any $q\in Q_1$, the element $zq\cdot\bar b-\bar b_1$ lies in $T$ and as $T$ is finite there are only finitely many possibilities for its value. Moreover, we know that eventually it takes the value 0.
Put
$$Q_2=C_Q(\bar b)=\{q\in Q\mid q\cdot \bar b=\bar b\}=\{q\in Q\mid q\cdot b- b\in N_xB\}.$$
We obviously have $Q_2\leq Q_1$ and for $q_1,q_2\in Q_1$,
$$zq_1\cdot\bar b-\bar b_1=zq_2\cdot\bar b-\bar b_1\text{ if and only if }q_1Q_2=q_2Q_2.$$ As $T$ is finite we conclude that
the quotient $Q_1/Q_2$ is of finite order bounded by $|T|$. If $\{y_1,\ldots,y_{|T|}\}$ is a set of representatives of the cosets of $Q_2$ in $Q_1$, then some element $y$ in the finite set
$$\{zy_1,\ldots,zy_{|T|}\}$$
is the $y\in Q$ that satisfies $y\cdot \bar b=\bar b_1$.
\
In the next lemma we prove that, since $Q_1$ is a lattice, we can produce a full set of representatives as before, including our $y$, by taking elements solely from $Q_1$. Moreover, the number of steps needed is bounded in terms of $|T|$.
\begin{lem} Let $Q_2\leq Q_1$ with $Q_1$ free abelian with generators $x_1,\ldots,x_m$, and assume that the group $Q_1/Q_2$ is finite. Then the set
$$\Omega=\{x_1^{\alpha_1}\ldots x_m^{\alpha_m}\mid \sum_{j=1}^m|\alpha_j|< |Q_1/Q_2|\}$$
has order bounded by $(2 |Q_1/Q_2|)^{m}$ and contains a full set of representatives of the cosets of $Q_2$ in $Q_1$. \end{lem}
\begin{proof} Let $v_1,\ldots,v_m$ be generators of the subgroup $Q_2$, which can be viewed as points in $\mathbb{Z}^m$. Consider the parallelepiped
$$P=\{t_1v_1+\ldots+t_mv_m\mid t_j\in\mathbb{R}, 0\leq t_j<1\}.$$
Then $\mathbb{Z}^m\cap P$ is a set of representatives of the cosets of $Q_2$ in $Q_1$ and we claim that $\mathbb{Z}^m\cap P\subseteq\Omega$. Observe that for any point $p=(\alpha_1,\ldots,\alpha_m)$ in $\mathbb{Z}^m\cap P$ there is a path in $\mathbb{Z}^m\cap P$ from $(0,\ldots,0)$ to $p$. We may assume that the path is simple, and as it visits at most $|Q_1/Q_2|$ points its length is at most $|Q_1/Q_2|-1$. On the other hand, the length of the path is greater than or equal to $\sum_{j=1}^m|\alpha_j|$ thus
$$\sum_{j=1}^m|\alpha_j|\leq |Q_1/Q_2|-1<|Q_1/Q_2|.$$
\end{proof}
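A toy instance illustrates the lemma; the specific choice of $Q_1$ and $Q_2$ below (index $4$ in rank $2$) is an assumption made only for the sketch:

```python
from itertools import product

# Q1 = Z^2 with Q2 = 2Z x 2Z, so m = 2 and |Q1/Q2| = 4.
m, index = 2, 4

# Omega = lattice points with |a1| + ... + |am| < |Q1/Q2|.
omega = [p for p in product(range(-index + 1, index), repeat=m)
         if sum(abs(a) for a in p) < index]

# The coset of Q2 containing a point is determined by its parities.
cosets = {tuple(a % 2 for a in p) for p in omega}
print(len(omega) <= (2 * index) ** m)   # order bound of the lemma: True
print(len(cosets) == index)             # every coset is represented: True
```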
The number of iterations of Step 2 is bounded by the value $|Q_1/Q_2|$.
At this point, it is clear that smaller groups $Q_1/Q_2$ will reduce the running time of the algorithm. Observe that by construction, the element $x$ belongs to the group $Q_2$. In the case when $Q$ is cyclic this yields a dramatic improvement of our bound for $|Q_1/Q_2|$: we only have one generator, say $q_1$ of $Q$, thus, if $x=q_1^\mathcal{L}$, $|Q_1/Q_2|\leq |Q/Q_2|=\mathcal{L}$. Moreover, in this case Step 1 in our algorithm is not needed, so we only have to perform $\mathcal{L}$ iterations of Step 2, and our algorithm coincides with the one in \cite{cavallo2014polynomial}.
\goodbreak
\subsection{Complexity Analysis and Consequences} We can now prove Theorem \ref{exponential}:
\noindent{\sl Proof of Theorem \ref{exponential}}.
We consider the complexity of the algorithm of Subsection \ref{algorithm}. We assume that $g$ and $g_1$ are given as words as in (\ref{form}).
Observe that Step 1 only requires polynomial time. As for Step 2, we have to consider an exponential (in $\mathcal{L}$) number of systems of linear equations of the form
$$u= N_xX$$
with $u=q\cdot b-b_1$. Moreover, we may find (by writing $u$ as in (\ref{form})) some $q\in Q$ such that $q\cdot u$ is in the group generated by $b_1,\ldots,b_s$. If $R$ is the matrix representing the action of $q$, this is equivalent to the vector $Ru$ being integral. As $R$ and $N_x$ commute, our system can be transformed into
$$N_xRX=Ru.$$
Obviously, $X$ lies in $B$ if and only if $RX$ does, thus the problem is equivalent to deciding whether
$$d^\mathcal{L}N_xX_1=d^\mathcal{L}Ru$$
has some solution $X_1$ in $B$.
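To make the integrality test concrete, the toy sketch below brute-forces whether a small integer linear system $MX=u$ admits an integral solution; the matrix and vectors are invented, and the paper of course decides solvability in polynomial time via Propositions \ref{solutionsystem} and \ref{complexitysolutionsystem} rather than by search.

```python
import itertools

import numpy as np

def has_integral_solution(M, u, box=5):
    """Brute-force search for an integer vector X with M @ X == u,
    scanning the cube [-box, box]^n.  Toy illustration only."""
    n = M.shape[1]
    for X in itertools.product(range(-box, box + 1), repeat=n):
        if np.array_equal(M @ np.array(X), u):
            return True
    return False

M = np.array([[2, 1], [0, 3]])
print(has_integral_solution(M, np.array([5, 3])))  # True: X = (2, 1)
print(has_integral_solution(M, np.array([5, 6])))  # False: needs x = 3/2
```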
\
Using Proposition \ref{solutionsystem} and the complexity computation of Proposition \ref{complexitysolutionsystem} we see that this can be done in time polynomial in the logarithm of the maximum absolute value of an entry of $d^\mathcal{L}N_x$. Observe that our integrality assumption on $Ru$ implies that the integer denoted $t_0$ in Proposition \ref{complexitysolutionsystem} can be taken to be 0. As the maximum absolute value of an entry of $d^\mathcal{L}N_x$ is exponential in $\mathcal{L}$, this time is polynomial in $\mathcal{L}$. The exponential bound in the result then follows because we do this a number of times that is exponential in $\mathcal{L}$.
\qed
\
Next, we consider a particular case in which the running time of the algorithm is reduced to polynomial with respect to the length $\mathcal{L}$ of $x$.
Let $s_1,s_2\geq 0$ be integers with $s=s_1+s_2$ and denote
$$\Gamma_{s_1,s_2}:= \left\{ \text{Matrices }
\begin{scriptsize}
\begin{pmatrix}
I_{s_1} & A \\
0 & I_{s_2} \\
\end{pmatrix}
\end{scriptsize}
\right\}\leq SL(s,\mathbb{Z}).$$
Observe that if we consider a group $G$ with a presentation as in (\ref{F}) such that the matrices $M_l$ lie in $\Gamma_{s_1,s_2}$,
then each $M_l^{-1}$ is also an integral matrix so we can choose $d=1$, where $d$ is the integer of Theorem \ref{bound}. This implies that this group is polycyclic. Moreover, it is easy to see that it is in fact nilpotent.
\begin{prop} With the previous notation, assume that for $l=1,\ldots,n$,
$$M_l\in\Gamma_{s_1,s_2}.$$
Then there is some constant $K$ depending on $G$ only such that for $T$, the torsion subgroup of $B/N_xB=(1-M_x)B$,
$$|T|\leq K\mathcal{L}^{s^2}$$
where $\mathcal{L}$ is the length of $x$.
\end{prop}
\begin{proof} We consider the bound of Theorem \ref{bound} for $d=1$ (see above)
$$|T|\leq \sqrt{s}(a+1)^{s^2},$$
where $a$ is the maximum absolute value of an entry in $M_x$. Observe that $M_x$ is a product of matrices in $\Gamma_{s_1,s_2}$ and that
$$ \begin{pmatrix}
I_{s_1} & A_1 \\
0 & I_{s_2} \\
\end{pmatrix}
\begin{pmatrix}
I_{s_1} & A_2 \\
0 & I_{s_2} \\
\end{pmatrix} =
\begin{pmatrix}
I_{s_1} & A_1+A_2 \\
0 & I_{s_2} \\
\end{pmatrix} .
$$
Therefore, if we let $\mu$ be the maximum absolute value of an entry in each of the matrices $A_1,\ldots,A_n$, then
$a\leq \mathcal{L}\mu$ and therefore
$$|T|\leq \sqrt{s}(a+1)^{s^2}\leq \sqrt{s}(\mathcal{L}\mu+1)^{s^2}\leq \sqrt{s}(2\mathcal{L}\mu)^{s^2}$$
so it suffices to take $K=\sqrt{s}(2\mu)^{s^2}$.
\end{proof}
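The additivity of the off-diagonal blocks used in this proof is easy to check numerically. The sketch below (block sizes and entries chosen arbitrarily, here $s_1=1$, $s_2=2$) verifies that multiplying two matrices of $\Gamma_{s_1,s_2}$ simply adds their $A$-blocks, which is what drives the linear bound $a\leq\mathcal{L}\mu$.

```python
import numpy as np

def unitriangular(A):
    """Build the block matrix [[I, A], [0, I]] of Gamma_{s1,s2}
    from an s1 x s2 integer block A."""
    s1, s2 = A.shape
    top = np.hstack([np.eye(s1, dtype=int), A])
    bot = np.hstack([np.zeros((s2, s1), dtype=int), np.eye(s2, dtype=int)])
    return np.vstack([top, bot])

A1 = np.array([[1, -2]])   # s1 = 1, s2 = 2
A2 = np.array([[3, 0]])
P = unitriangular(A1) @ unitriangular(A2)
# The product is again unitriangular and its A-block is A1 + A2.
assert np.array_equal(P, unitriangular(A1 + A2))
```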
This result together with the algorithm above (recall that $d=1$ in this case) imply the following:
\begin{thm}\label{polynomial} With the previous notation, assume that for $l=1,\ldots,n$,
$$M_l\in\Gamma_{s_1,s_2}.$$
Then the complexity of the conjugacy problem in $G$ is at most polynomial.
\end{thm}
We finish this section with a remark on conjugator lengths. Let $g$ and $g_1$ be conjugate elements in $G$. Our algorithm primarily consists of identifying a suitable subgroup $Q_1$ of $Q$ and showing that, for a function dependent upon the length $\mathcal{L}$ of $x$, there exists some $y\in Q_1$ whose length is bounded by that function and which is the $Q$-component of an element $h$ such that $g^h=g_1$. Essentially, we are providing an estimation for the $Q$-conjugator length function.
We make this more precise in the next result.
\begin{cor}\label{length} There exists an integer $K>0$ dependent upon $G$ only such that for any conjugate elements $g,g_1\in G$, with $g=bx$, $g_1=b_1x$ for $x\in Q$ and $b,b_1\in B$, there is some $h=cy$ for $c\in B$, $y\in Q$ and $g^h=g_1$ such that
the length of $y$ is bounded by $K^\mathcal{L}$, where $\mathcal{L}$ is the length of $x$. In the particular case when $Q\leq\Gamma_{s_1,s_2}$,
the length of $y$ is bounded by $K\mathcal{L}^{s^2}$.
\end{cor}
\subsection{Reduction to the Discrete Logarithm Problem}\label{discretelog} For this subsection, we restrict ourselves to the situation of Example \ref{galois} where $Q$ is a multiplicative subgroup of a field $L$ such that $L:\mathbb{Q}$ is a Galois extension and $B$ is the additive group $\mathcal{O}_L[q_1^{\pm},\ldots,q_n^{\pm}]$ which is sandwiched between $\mathbb{Q}$ and $L$. In particular, this means that the only element in $Q$ with an associated matrix having an eigenvalue of 1 is the identity matrix: the eigenvalues of the matrix representing an element $q\in L$ are precisely $q$ itself and its Galois conjugates and thus cannot be 1 if $q\neq 1$. Recall also that Example \ref{galois} includes Example \ref{genBS}.
\
We will keep the notation of the previous sections, with elements $bx$, $b_1x\in G$ such that there is some $cy\in G$ with (additively)
$$b_1=y\cdot b+(1-x)\cdot c.$$
We may consider $y$ and $1-x$ as elements in the field $L$. From now on we omit the $\cdot$ from our notation and use juxtaposition to denote the action. Now, $B$ also has a ring structure and $(1-x)B$ is an ideal in $B$. Moreover, in this case the quotient ring $\bar{B}=B/(1-x)B$ is finite (because the matrix associated with $1-x$ is regular.) In this finite quotient ring we wish to solve the equation
$$y\bar{b}=\bar{b_1}.$$
Let $y=q_1^{t_1}\ldots q_k^{t_k}$, then solving the discrete log problem in $\bar{B}=B/(1-x)B$ consists of finding $t_1,\ldots,t_k$ so that
$$q_1^{t_1}\ldots q_k^{t_k}\bar{b}=\bar{b_1}$$
in the finite ring $\bar{B}$.
\
This is a special type of discrete log problem as one can observe by recalling what happens when $Q$ is cyclic: then $x=q_1^s$ for some $s$ thus we have to solve
$$q_1^{t_1}\bar{v}=\bar{w}$$
in $\bar{B}=B/(1-q_1^s)B$. To solve it $s$ trials are sufficient (see \cite{cavallo2014polynomial}).
Let us restrict ourselves further to the case of generalized Baumslag-Solitar groups (i.e., the groups of Example \ref{genBS}.)
We identify the elements $q_l$ with the integers $m_l$ encoding their action. Assume that each $m_l$ is coprime with $1-m_1$. As before let $y=m_1^{t_1}\ldots m_k^{t_k}$ and choose $x=m_1$.
Then as each $m_l$ is coprime with $1-m_1$,
$$\bar{B}=\mathbb{Z}[m_1^{\pm},\ldots,m_k^{\pm}]/(1-x)\mathbb{Z}[m_1^{\pm},\ldots,m_k^{\pm}]=\mathbb{Z}/(1-x)\mathbb{Z}=\mathbb{Z}_{1-m_1}.$$
We then have to find $t_2,\ldots,t_k$ such that
$$m_2^{t_2}\ldots m_k^{t_k}\bar{b}=\bar{b_1}$$
in the ring of integers modulo $1-m_1$. If $k=2$ this is an instance of the ordinary discrete logarithm problem.
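When $|1-m_1|$ is small this instance can simply be brute-forced; the sketch below uses invented values $m_1=10$ (so the quotient ring is $\mathbb{Z}_9$) and $m_2=2$, which is coprime to $9$ as required above.

```python
def discrete_log_mod(base, b, b1, modulus):
    """Brute-force the smallest t with base**t * b == b1 (mod modulus),
    or None if no such t exists within one full period of base."""
    x = b % modulus
    for t in range(modulus):
        if x == b1 % modulus:
            return t
        x = (x * base) % modulus
    return None

# Invented instance: m1 = 10 gives Z/(1 - m1)Z = Z_9; take m2 = 2.
t2 = discrete_log_mod(2, 1, 7, 9)
print(t2)  # 4, since 2**4 = 16 is congruent to 7 (mod 9)
```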
\iffalse{
\section{Length Based Conjugacy Search}
\label{subsec:explbcs}
\
Length based conjugacy search is a heuristic method that attempts to solve the conjugacy search problem or the generalized conjugacy search problem (multiple instances of the conjugacy search problem where there is a common conjugating element in a specified subgroup). The latter problem is well known since it is related to the security of the Arithmetica protocol. To perform the LBCS, we associate to our group an effectively computable length function that has the property that conjugation generically increases the lengths of elements. Following that, we iteratively build a conjugating element by successively conjugating by generators of our group and then assuming that we are building a successful conjugator when there is a decrease in length.
\
Most previous work such as \cite{myasnikov2007length} and \cite{garber2006length} study the LBCS in the context of braid groups while the authors of \cite{garber2013analyzing} perform the LBCS on polycyclic groups. Both groups have the advantage of having certain length functions that satisfy the properties of the previous paragraph. It is worth noting that the LBCS can be performed on an arbitrary finitely presented group as long as it admits a length function that is generically monotone increasing under conjugacy. The algorithm will work in the same way: starting with an arbitrary presentation, assign the group a length function,
conjugate by successive elements in the group, and attempt to build a conjugator by investigating which elements shorten your word.
\
It is important to note that for length based conjugacy search to work, there needs to be an effective way to apply the relations of the group.
As such, it is best tailored towards groups that have a normal form that is easily computable. Another difference with using LBCS to solve the general
conjugacy problem versus using it to break Arithmetica, is that the elements we conjugate by would need to generate the group as we are not searching within
a specific subgroup. As such, we can assume that our set contains the standard generators as are given by the presentation.
For a given instance of the conjugacy problem, another set of generators may be more effective,
but such knowledge of effective generators is something we cannot assume in general.
\
In what follows we provide the pseudocode for the LBCS with memory 2 from \cite{garber2013analyzing}, the most effective algorithm from their paper, applied to a single instance of the conjugacy problem. In this variation, one maintains a set $S$ full of conjugates of our initial element, $y$. Each element of $S$ is conjugated by each generator and the results are stored in a set $S'$. After every element of $S$ has been conjugated by every generator, the user saves the $M$ elements with minimal length and sets that equal to $S$. The algorithm is terminated when the problem has been solved or after a user specified time-out. It is also worth noting that any other variation of the LBCS seen in this paper (or elsewhere) can be adapted to a single conjugacy search problem in much the same way. We assume that our group $G$
has a length function, $|\cdot |$ such that $|g| < |xgx^{-1}|$ and also that our set $S$ generates $G$. Note that $S$ does not need to be a minimal generating
set, namely it may have a strict subset that also generates $G$. As input we take $x, y \in G$ such that $|y| > |x|$ and $B$ such that $\langle B \rangle = G$.
For convenience, we assume that $B$ is closed under inversion of elements. We also impose a user specified time-out and a natural number $M$ specifying the number of elements we keep track of.
\
\begin{algorithm}[H]
\caption{LBCS with Memory 2 (Single Conjugacy Problem)}
\begin{algorithmic}
\State {Initialize $S = \{ (|y|, y, \mbox{id}_G) \}$}
\While {not time-out}
\For {$(|z|, z, a) \in S$}
\State {Remove $(|z|, z, a)$}
\For {$g \in G$}
\If {$gzg^{-1} = x$}
\State {Return $ga$ as an element that conjugates $x$ to $y$}
\Else
\State { Save $(|gzg^{-1}|, gzg^{-1}, ga)$ in a set $S'$}
\EndIf
\EndFor
\EndFor
\State {Copy the $M$ elements with minimal first coordinate into $S$ and delete $S'$}
\EndWhile
\State {return FAIL}
\end{algorithmic}
\end{algorithm}
\section{Experimental Results}
Tests were run on an Intel Core i7-4770K computer, running Ubuntu 14.04 LTS and using GAP version 4.7.5 \cite{gap} with 6 GB of memory allowance.
\subsection{LBCS in Generalized Metabelian BS Groups}
\
Using the notation of Example \ref{genBS}, the groups tested were of the form:
$$
G=\left\langle q_1,q_2,b|b^{q_1}=b^{m_1},b^{q_2}=b^{m_2},[q_1,q_2]=1\right\rangle,
$$
where $m_1$ and $m_2$ are primes. Larger primes were chosen from the list of primes \texttt{Primes2} in GAP. The table below indicates the primes chosen for each group, together with their respective bit lengths:\\
\begin{results}
\begin{tabular}{|c|c|c|c|}
\hline
Group & $m_1$ & $m_2$&Bit Lengths $(m_1,m_2)$\\
\hline
1&2&3&$(2,2)$\\
2&2&4&$(2,3)$\\
3&\texttt{Primes2[20]}&\texttt{Primes2[25]}&$(24,25)$\\
4&\texttt{Primes2[362]}&\texttt{Primes2[363]}&$(48,48)$\\
5&\texttt{Primes2[559]}&\texttt{Primes2[560]}&$(96,96)$\\
6&\texttt{Primes2[590]}&\texttt{Primes2[591]}&$(128,130)$\\
\hline
\end{tabular}
\caption{Primes Used for Group Construction}
\end{results}
Two different length functions were used as heuristics for LBCS. In the first three groups, a word's length was calculated as
$$
\sum_{i}|e_i|,
$$
whereas in the latter three groups the length was
$$
\sum_{i}|\log_{10}(e_i)|.
$$
As the primes become larger it becomes difficult or sometimes impossible to create elements in a range which will work for all groups. Instead, a number $l=\log_{10}p$ was used as an approximate unit size for each of the larger groups. Random elements were then selected from ranges in multiples of $l$.\\
\begin{results}
\begin{tabular}{|c|c|c|c|r|r|r|r|r|r|}
\hline
Group & $l$ & $[10,15]$ & $[20,23]$ & $[40,43]$ & $[l,2l]$ & $[2l,3l ]$& $[3l,4l]$\\
\hline
1&N/A&20\%&0\%&0\%&N/A&N/A&N/A\\
2&N/A&0\%&0\%&0\%&N/A&N/A&N/A\\
3&N/A&0\%&0\%&0\%&N/A&N/A&N/A\\
4&14&N/A&N/A&N/A&0\%&0\%&0\%\\
5&29&N/A&N/A&N/A&0\%&0\%&0\%\\
6&38&N/A&N/A&N/A&0\%&0\%&0\%\\
\hline
\end{tabular}
\caption{LBCS Results for GMBS Groups}
\end{results}
}\fi
\section*{Acknowledgements}
We thank Bren Cavallo who helped us in the beginning stage of this paper.\\
Delaram Kahrobaei is partially supported by a PSC-CUNY grant from the CUNY Research Foundation, the City Tech Foundation, and ONR (Office of Naval Research) grants N000141210758 and N00014-15-1-2164. Conchita Mart\'inez-P\'erez was supported by Gobierno de Arag\'on, European Regional
Development Funds and partially supported by
MTM2015-67781-P (MINECO/FEDER).
\bibliographystyle{plain}
\section{Introduction}
Charmless hadronic $B$ decays are suppressed compared to other hadronic $B$ decays and hence can be excellent probes for new physics beyond the Standard Model (SM). In this paper, we present recent results from the Belle experiment on the charmless hadronic and radiative $B_s^0$ decays $B_s^0\rightarrow K^0\bar{K^0}$, $B_s^0\rightarrow\phi\gamma$ and $B_s^0\rightarrow\gamma\gamma$.
The main challenge of studying the charmless $B_s^0$ decays is the suppression of overwhelmingly large background arising from continuum $e^+e^-\to q\bar{q}~(q=u,~d,~c,~s$) production. To suppress this background, we use a multivariate analyzer
based on a neural network. The neural network uses the so-called event shape variables to discriminate continuum events, which tend to be jetlike, from spherical $B\bar{B}$ events. Signal decays are identified by two kinematical variables: the beam-energy-constrained mass $M_{\rm bc}= \sqrt{E^2_{\rm beam}-|\vec{p}^{}_{B}|^2c^2}/c^2$ and the energy difference
$\Delta E=E_{B}-E_{\rm beam}$. To determine the signal yield, normally an unbinned extended maximum likelihood fit is applied to all candidate events using the above two kinematical variables and other useful information. The signal probability density functions (PDFs) of these two variables are typically studied from Monte Carlo (MC) simulation, and the background PDFs can be obtained either from MC simulation or from sideband data. A high-statistics control sample of similar topology is used to understand potential data/MC differences.
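For illustration, the two kinematical variables can be evaluated as follows (natural units with $c=1$; the numbers are invented, chosen so the candidate looks signal-like, i.e. $M_{\rm bc}$ near the $B_s^0$ mass and $\Delta E$ near zero).

```python
import math

def mbc_delta_e(e_beam, p_b, e_b):
    """Beam-energy-constrained mass and energy difference, in natural
    units with c = 1, following the definitions in the text."""
    m_bc = math.sqrt(e_beam**2 - p_b**2)
    delta_e = e_b - e_beam
    return m_bc, delta_e

# Invented numbers in GeV: a signal-like Bs candidate should peak at
# M_bc near m(Bs) ~ 5.37 GeV with Delta E near zero.
m_bc, delta_e = mbc_delta_e(e_beam=5.434, p_b=0.83, e_b=5.43)
```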
\section{Observation of the decay $B_s^0\rightarrow K^0\bar{K^0}$}
The two-body decays $B_s^0\rightarrow h^+h'^-$, where $h^{\scriptscriptstyle(}\kern-1pt{}'\kern-1pt{}^{\scriptscriptstyle)}$ is
either a pion or kaon, have now all been observed~\cite{PDG}.
In contrast, the neutral-daughter decays $B_s^0\rightarrow h^0h'^0$ have
yet to be observed. The decay $B_s^0\rightarrow K^0\bar{K^0}$~\cite{charge-conjugate}
is of particular interest because the branching fraction is predicted
to be relatively large. In the SM, the decay
proceeds mainly via a $b\rightarrow s$ loop (or ``penguin") transition as shown
in Fig.~\ref{fig:feynman}, and the branching fraction is predicted
to be in the range $(16-27)\times10^{-6}$~\cite{SM-branching}.
The presence of non-SM particles or couplings could enhance
this value~\cite{Chang:2013hba}. It has been pointed out
that $CP$ asymmetries in $B_s^0\rightarrow K^0\bar{K^0}$ decays are
promising observables in which to search for new
physics~\cite{susy}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Fig1.png}
\caption{\small Loop diagram for $B_s^0\rightarrow K^0\bar{K^0}$ decays. }
\label{fig:feynman}
\end{figure}
The current upper limit on the branching fraction,
$\mathcal{B}(B_s^0\rightarrow K^0\bar{K^0})<6.6\times 10^{-5}$ at 90\%
confidence level (C.L.), was set by the Belle Collaboration using
$23.6~{\rm fb^{-1}}$ of data recorded at the
$\Upsilon(5S)$ resonance~\cite{Peng:2010ze}.
The analysis presented here uses the full data set of
$121.4~{\rm fb^{-1}}$ recorded at the~$\Upsilon(5S)$.
Improved tracking, $K^0$ reconstruction, and continuum suppression algorithms are also used in this analysis.
The data set corresponds to $(6.53\pm 0.66)\times10^6$ $B_s^0\bar{B_s^0}$
pairs~\cite{Oswald:2015dma} produced in three $\Upsilon(5S)$ decay
channels: $B_s^0\bar{B_s^0}$, $B_s^{*0}\bar{B_s^0}$ or $B_s^0\bar{B}_s^{*0}$, and $B_s^{*0}\bar{B}_s^{*0}$.
The latter two channels dominate, with production fractions
of $f_{B_s^{*0}\bar{B_s^0}}=(7.3\pm1.4)\%$ and $f_{B_s^{*0}\bar{B}_s^{*0}}=(87.0\pm1.7)\%$~\cite{Esen:2012yz}.
The $B_s^{*0}$ decays via $B_s^{*0}\rightarrow B_s^0\gamma$, and the $\gamma$ is not reconstructed.
Candidate $K^0$ mesons are reconstructed via the decay $K_S^0\to\pi^+\pi^-$ and require that the $\pi^+\pi^-$ invariant mass be within 12 MeV/$c^2$ of the nominal $K_S^0$ mass~\cite{PDG}. In order to extract the signal yield, we perform a three-dimensional (3D) unbinned maximum likelihood fit to the variables, $M_{\rm bc}$,
$\Delta E$, and continuum suppression variable $C'_{\rm NN} = \ln\left(\frac{C_{\rm NN}-C^{\rm min}_{\rm NN}}
{C^{\rm max}_{\rm NN}-C_{\rm NN}}\right)$. We extract $29.0\,^{+8.5}_{-7.6}$ signal events
and $1095.0\,^{+33.9}_{-33.4}$ continuum background events.
Projections of the fit are shown in Fig.~\ref{fig:fig2}.
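The transformed variable $C'_{\rm NN}$ is a logit-like map of the bounded network output onto the whole real line, which makes its distribution easier to model with smooth PDFs; a minimal sketch (the bounds below are placeholders, not the trained network's actual range):

```python
import math

def c_nn_prime(c_nn, c_min, c_max):
    """Logit-like transform: maps the bounded neural-network output
    in (c_min, c_max) onto the whole real line."""
    return math.log((c_nn - c_min) / (c_max - c_nn))

# Placeholder bounds: a network output in (0, 1).
print(c_nn_prime(0.5, 0.0, 1.0))  # midpoint maps to 0.0
```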
\begin{figure*}[t]
\includegraphics[width=0.32\textwidth]{Fig2a.pdf}
\includegraphics[width=0.32\textwidth]{Fig2b.pdf}
\includegraphics[width=0.32\textwidth]{Fig2c.pdf}
\caption{\small Projections of the 3D fit to the real data:
(a) $M_{\rm bc}$ in $-0.11~{\rm GeV} <\Delta E < 0.02~{\rm GeV}$
and $C^{\prime}_{\rm NN}>0.5$;
(b) $\Delta E$ in $5.405~{\rm GeV}/c^{2} <M_{\rm bc}< 5.427~{\rm GeV}/c^{2}$
and $C^{\prime}_{\rm NN}>0.5$; and
(c) $C^{\prime}_{\rm NN}$ in $5.405~{\rm GeV}/c^{2} <M_{\rm bc}< 5.427~{\rm GeV}/c^{2}$
and $-0.11~{\rm GeV} <\Delta E < 0.02~{\rm GeV}$.
The points with error bars are data, the (green) dashed curves
show the signal, (magenta) dotted curves show the continuum
background, and (blue) solid curves show the total. The three peaks in $M_{\rm bc}$ arise from
$\Upsilon(5S)\to B_s^0\bar{B_s^0}, B_s^{*0}\bar{B_s^0}+B_s^0\bar{B}_s^{*0}$, and $B_s^{*0}\bar{B}_s^{*0}$ decays.
}
\label{fig:fig2}
\end{figure*}
The branching fraction of the decay $B_s^0\rightarrow K^0\bar{K^0}$ is measured to be~\cite{Pal:2015ghq}
\begin{equation}
\mathcal{B}(B_s^0\rightarrow K^0\bar{K^0})=(19.6\,^{+5.8}_{-5.1}\,\pm1.0\,\pm2.0)\times10^{-6},
\end{equation}
where the first uncertainty is statistical, the second
is systematic, and the third reflects the uncertainty due to the
total number of $B_s^0\bar{B_s^0}$ pairs. The significance of this result is 5.1 standard deviations; thus, our measurement constitutes the first observation of this decay.
This measured branching fraction is in good agreement with the
SM predictions~\cite{SM-branching}, and it implies that the Belle II experiment~\cite{Abe:2010gxa} will
reconstruct over 1000 of these decays. Such a sample would allow for a much higher sensitivity search for new physics in this $b\to s$ penguin-dominated decay.
\section{Radiative $B_s^0$ decays}
In the SM, the decays $B_s^0\to\gamma\gamma$ and $B_s^0\to\phi\gamma$ are explained by the radiative transitions $b\to s\gamma\gamma$ and $b\to s\gamma$, respectively.
The leading-order Feynman diagrams for these processes are shown in Fig.~\ref{fig:feynman1}. First observation of the decay $B_s^0\to\phi\gamma$ was made by the Belle Collaboration using $23.6~{\rm fb^{-1}}$
of data collected at the $\Upsilon(5S)$ resonance and its branching fraction was measured to be $(5.7\,^{+2.2}_{-1.9})\times10^{-5}$~\cite{Wicht:2007ni}. The decay $B_s^0\to\gamma\gamma$, on the other hand, has not been
observed yet and the current upper limit on the branching fraction is $8.7\times10^{-6}$ at 90\% C.L.~\cite{Wicht:2007ni}. This is almost an order of magnitude larger than the range covered by the published theoretical calculations~\cite{ggth}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Fig3a.png}%
\includegraphics[width=0.4\textwidth]{Fig3b.png}
\caption{\small Leading-order Feynman diagrams for decays (left) $B_s^0\to\phi\gamma$ and (right) $B_s^0\to\gamma\gamma$. }
\label{fig:feynman1}
\end{figure}
New physics could enhance its branching fraction by more than an order of magnitude~\cite{Gemintern:2004bw}.
The results presented here are based on $121.4~{\rm fb^{-1}}$ recorded at the~$\Upsilon(5S)$.
Candidate $\phi$ mesons are reconstructed via the decay $\phi\to K^+K^-$ and require that the $K^+K^-$ invariant mass be within 12 MeV/$c^2$ of the nominal $\phi$ mass~\cite{PDG}.
For $B_s^0\to\phi\gamma$ ($B_s^0\to\gamma\gamma$) decay, we perform a four-dimensional (two-dimensional) unbinned maximum likelihood fit to the variables $M_{\rm bc}$, $\Delta E$, $C'_{\rm NN}$ and
$\cos\theta_{\rm hel}$ ($M_{\rm bc}$ and $\Delta E$). The helicity angle $\theta_{\rm hel}$ is the angle between the $B_s^0$ and the $K^+$ evaluated in the $\phi$ rest frame.
We observe $91\,^{+14}_{-13}$ signal events in the $B_s^0\to\phi\gamma$ mode and the corresponding branching fraction is measured to be~\cite{Dutta:2014sxo}
\begin{equation}
\mathcal{B}(B_s^0\to\phi\gamma)=(36\,\pm5\,\pm3\,\pm6)\times10^{-6},
\end{equation}
where the first uncertainty is statistical, the second
is systematic, and the third reflects the uncertainty due to the fraction of $B_s^{(*)}\bar{B}_s^{(*)}$ in $b\bar{b}$ events. Fit projections are shown in Fig.~\ref{fig:phigamma}. This improved result
supersedes our earlier measurement~\cite{Wicht:2007ni} and is consistent with the recent LHCb measurement~\cite{Aaij:2012ita}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Fig4a.png}%
\includegraphics[width=0.4\textwidth]{Fig4b.png}
\includegraphics[width=0.4\textwidth]{Fig4c.png}%
\includegraphics[width=0.4\textwidth]{Fig4d.png}
\caption{\small Data fits for the $B_s^0\to\phi\gamma$ analysis. The projections are shown only for events inside the $B_s^*\bar{B_s^*}$ signal region except for the plotted variable. The $B_s^*\bar{B_s^*}$ signal region is defined as $M_{\rm bc}>5.4 ~{\rm GeV}/c^2$, $-0.2~{\rm GeV} <\Delta E< 0.02~{\rm GeV}$, $|\cos\theta_{\rm hel}|< 0.8$ and $0.0 <C'_{\rm NN}< 10.0$. The points with error bars represent the data, the solid black curve represents the total fit function, and the red dashed (blue dotted) curve represents the signal (continuum background) contribution. }
\label{fig:phigamma}
\end{figure}
We see no significant signal in the $B_s^0\to\gamma\gamma$ mode and we extract an upper limit at 90\% C.L. of ~\cite{Dutta:2014sxo}
\begin{equation}
\mathcal{B}(B_s^0\to\gamma\gamma)<3.1\times10^{-6}.
\end{equation}
This result represents an improvement by a factor of about 3 over the previous best measurement~\cite{Wicht:2007ni}. Fit projections are shown in Fig.~\ref{fig:gammagamma}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Fig5a.png}%
\includegraphics[width=0.4\textwidth]{Fig5b.png}
\caption{\small Data fits for the $B_s^0\to\gamma\gamma$ analysis. The projections are shown only for events inside the $B_s^*\bar{B_s^*}$ signal region except for the plotted variable. The $B_s^*\bar{B_s^*}$ signal region is defined as $M_{\rm bc}>5.4 ~{\rm GeV}/c^2$ and $-0.3~{\rm GeV} <\Delta E< 0.05~{\rm GeV}$. The points with error bars represent the data, the solid black curve represents the total fit function, and the red dashed (blue dotted) curve represents the signal (continuum background) contribution. }
\label{fig:gammagamma}
\end{figure}
\section{Conclusions}
Using the full set of Belle data collected at $\Upsilon(5S)$ resonance, recent measurements of charmless
hadronic and radiative $B_s^0$ decays are presented. Our measurement of
$B_s^0\to K^0\bar{K^0}$
branching fraction
constitutes the first observation of the decay. This is the first observation of a charmless $B_s^0$ decay involving only neutral hadrons.
\section*{Acknowledgements}
The author thanks the organizers of XIII International Conference on Heavy Quarks and Leptons for excellent hospitality and for assembling a nice scientific program.
This work is
supported by the U.S. Department of Energy.
\section{Introduction}
\label{se:intro}
Recent breakthroughs in two-dimensional (2D) van der Waals materials led to the experimental realization of a new form of ferroelectricity (FE)~\cite{li2017binary,yasuda2021stacking,ViznerStern2021,weston2022interfacial,de2021direct}.
This newly discovered interfacial ferroelectricity results from stacking configurations that break inversion symmetry, such as in AB or BA stacked hexagonal boron nitride (h-BN) bilayer~\cite{yasuda2021stacking,ViznerStern2021} or other 2D materials~\cite{weston2022interfacial,de2021direct}. Remarkably, the resulting polarization can be flipped by a relative sliding of the layers by a single atomic distance.
Ferroelectric tunnel junctions, consisting of a thin FE material sandwiched between two electrodes, permit reading the FE polarization via
tunnelling electroresistance (TER)~\cite{pantel2010electroresistance,garcia2014ferroelectric}. What is the fate of interfacial ferroelectricity within a tunnel junction? The chemical contact with the electrodes may significantly modify the polarization. This was recently studied within density functional theory (DFT)~\cite{yang2021sliding}. It was found that whereas metallic electrodes significantly affect the polarization of a bilayer h-BN, adding graphene spacers between the FE and metals restores the polarization and results in a significant TER. Moreover, a DFT-based approach has also been utilized in describing the domain wall shift in the presence of an electric field. \cite{Bennett2022}
In this work we study theoretically
2D FE tunnel junctions with graphene electrodes, as shown in Fig.~1(a). Assuming a given value for the bare FE polarization $V_{KP}^{(0)}$, which, in principle, can be determined by \textit{ab initio} methods, we focus on the interplay of the polarization and screening charges forming on the graphene electrodes.
We find that when one of the electrode's Fermi level is tuned to the Dirac point, the electronic equilibration is dominated by quantum capacitance, and then a FE voltage can be measured across the device.
We focus on basic mechanisms allowing to read out the FE polarization direction and magnitude from the current-voltage characteristics. The standard tunneling electroresistance (TER) mechanism results from the dependence of the electrostatic tunneling barrier on the FE polarization orientation. We show that, even in the 2D limit, this indeed leads to a detectable TER in our system.
However, a graphene-FE-graphene junction allows for a much more sensitive TER mechanism due to 2D momentum conservation. Resonance tunneling peaks in the $I(V)$ characteristics were identified both in planar 2D junctions of semiconductor heterostructures~\cite{PhysRevB.44.6511,eisenstein1992coulomb}, and more recently in 2D materials~\cite{Britnell2013,mishchenko2014twist,ChenJulian2021}, specifically for graphene-h-BN-graphene~\cite{Britnell2013}. Thus, such a junction with an AB or BA stacked bilayer h-BN is expected to be a perfect candidate to employ resonance tunneling peaks as a sensitive probe of ferroelectricity which essentially acts as an internal voltage that shifts the resonance.
We indeed find using our self-consistent model a sizable shift of the resulting resonance peak~\cite{Britnell2013,mishchenko2014twist,ChenJulian2021} for the two FE polarization directions.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{combined_figure_1.png}
\caption{(a) FE tunnel junction: a pair of tunnel coupled graphene sheets and a bottom gate are modeled by a pair of plate capacitors $C_0,C_g$. The FE material is stacked inside the graphene-graphene capacitor and leads to an internal voltage $V_{FE}$. (b) Schematic Ginzburg-Landau free energy of the FE order parameter $V_{FE}$. Deep below the critical temperature the system consists of two degenerate minima corresponding to stacking configurations AB and BA. In our model Eq.~(\ref{eq:model}) the FE polarization, $V_{FE}$, interacts with the electric field due to the electrons in the graphene sheets and gate. \label{fig:device}
}
\end{figure}
The paper is organized as follows. After introducing the model in Sec.~\ref{se:model}, we study the Kelvin probe voltage in Sec.~\ref{se:kpvoltage}. In Sec.~\ref{se:TER} we estimate the TER in our device. In Sec.~\ref{se:tunneling} we consider momentum conserving tunneling and focus on resonance peaks as a means to detect the FE polarization. We summarize in Sec.~\ref{se:summary}.
\section{Model}
\label{se:model}
As shown in Fig.~1(a), we consider a tunneling junction consisting of a 2D FE encapsulated between two graphene sheets. Our theory is not restricted to a specific FE material, but for definiteness we consider parallel stacked bi-layer h-BN, having two stacking configurations denoted AB and BA exhibiting a finite FE voltage, see Fig.~1(b). A tunneling current is enabled by a bias voltage $V_b$, as will be considered below. The bottom graphene sheet is gated by a voltage $V_g$. The energy describing the system is given by
\begin{eqnarray}
\label{eq:model}
E(Q, Q_g,V_{FE}) &=& \frac{Q^2}{2 C_0 } - Q (V_b + V_{FE}) \nonumber \\
&+& \frac{ Q_g^2}{2 C_g } - Q_g V_g \nonumber \\
&+& \frac{C_0}{2} \frac{\epsilon_r}{\epsilon_r-1} (V_{FE} -p V_{KP}^{(0)} )^2
\nonumber \\
&+&E_G (Q) +E_G (-Q+ Q_g).
\end{eqnarray}
Here $C_0$ is the capacitance between the two graphene sheets, and $C_g$ is the capacitance between the bottom graphene sheet and a gate. We measure energy, charge and capacitance per unit area.
The third line encapsulates a classical model~\cite{ViznerStern2021} that determines the FE polarization for either one of the two h-BN stacking configuration that we denote by $p=\pm 1$. The out of plane FE polarization $P$ is accounted for by a voltage $V_{FE}$. By construction, the bare FE voltage (i.e. without the electrodes) is given by $\pm V_{KP}^{(0)}$ for AB or BA stacking of the bi-layer h-BN, see Fig.~1(b). Its value $V_{KP}^{(0)} \approx 120 {\rm{mV}}$ was measured directly using Kelvin probe force microscopy~\cite{ViznerStern2021}. In addition, this term renormalizes the capacitance $C_0 \to C=C_0 \varepsilon_r$ by the dielectric constant of the FE material.
The last line represents the quantum capacitance of the graphene sheets, with
\begin{equation}
E_G (Q) = \frac{2}{3} \beta |Q|^{\frac{3}{2}},
\end{equation}
and $\beta = \sqrt{\frac{\pi}{e^3}} \hbar v_F$. $E_G (Q)$ is the total kinetic energy density measured from the Dirac point, valid in the vicinity of the Dirac point, and $Q>0(Q<0)$ refers to holes (electrons).
The equilibrium configuration
is obtained by minimizing the energy with respect to the charge on the top sheet $Q$, induced charge due to gating $Q_g$, and the polarization of the FE layer $V_{FE}$. The equation
$\frac{\partial E}{\partial V_{FE}} = 0 $ gives
\begin{equation}
\label{eq:VFE}
V_{FE} = p V_{KP}^{(0)} + \frac{(\epsilon_r - 1)Q}{\epsilon_r C_0},
\end{equation}
showing that the internal polarization is affected by the electrodes.
Defining the Fermi energy
\begin{equation}
E_F(Q)=-e \frac{\partial E_G(Q)}{\partial Q}=-e~{\rm{sign}}(Q)\beta |Q|^{1/2},~~~ (e>0),
\end{equation}
the equations $\frac{\partial E}{\partial Q} = \frac{\partial E}{\partial Q_g} =0$ yield
\begin{eqnarray}
\label{eq4}
0 &=& \frac{Q}{C} - V_b -p V_{KP}^{(0)} -E_F(Q)/e+E_F( Q_g-Q)/e, \\
\label{eq4a}
0 &=& \frac{ Q_g}{C_g} -V_g - E_F( Q_g-Q)/e.
\end{eqnarray}
Eqs.~(\ref{eq4}) and (\ref{eq4a}) are subsequently
solved numerically for $Q$ and $Q_g$ as function of $V_g$ and $V_b$ for either sign of the polarization $p=\pm 1$.
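The self-consistent solution of these two coupled equations can be sketched in a few lines. The following Python snippet is an illustration, not the authors' code: it assumes the parameter values quoted in the text ($d=5$ nm, $d_g=90$ nm, $\epsilon_r=3.8$, $V_{KP}^{(0)}=120$ mV) together with standard graphene constants ($v_F=10^6$ m/s); taking the gate dielectric equal to $\epsilon_r$ and the particular bias point are additional assumptions of this sketch.

```python
import numpy as np

# Physical constants (SI) and device parameters from the text; the gate
# dielectric constant and the chosen bias point are assumptions here.
e, hbar, v_F, eps0 = 1.602e-19, 1.055e-34, 1.0e6, 8.854e-12
eps_r, d, d_g, V_KP0 = 3.8, 5e-9, 90e-9, 0.120
C = eps0 * eps_r / d            # renormalized capacitance per unit area
C_g = eps0 * eps_r / d_g        # gate capacitance per unit area (assumed)
beta = np.sqrt(np.pi / e**3) * hbar * v_F

def E_F_over_e(Q):
    # Fermi level in volts for sheet charge density Q (C/m^2).
    return -np.sign(Q) * beta * np.sqrt(np.abs(Q))

def residuals(x, V_b, V_g, p):
    Q, Q_g = x
    r1 = Q / C - V_b - p * V_KP0 - E_F_over_e(Q) + E_F_over_e(Q_g - Q)
    r2 = Q_g / C_g - V_g - E_F_over_e(Q_g - Q)
    return np.array([r1, r2])

def solve(V_b, V_g, p, steps=200):
    # Damped Newton iteration with a finite-difference Jacobian,
    # started from the beta -> 0 (no quantum capacitance) solution.
    x = np.array([C * (V_b + p * V_KP0), C_g * V_g])
    for _ in range(steps):
        r = residuals(x, V_b, V_g, p)
        J = np.zeros((2, 2))
        for j in range(2):
            h = 1e-8 * max(abs(x[j]), 1e-6)
            dx = np.zeros(2)
            dx[j] = h
            J[:, j] = (residuals(x + dx, V_b, V_g, p) - r) / h
        x = x - 0.5 * np.linalg.solve(J, r)
    return x

Q, Q_g = solve(V_b=0.5, V_g=30.0, p=+1)
res = residuals(np.array([Q, Q_g]), 0.5, 30.0, +1)
```

The damped iteration and finite-difference Jacobian are implementation choices; any standard two-variable root finder would serve equally well.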
\section{Kelvin probe voltage
}
\label{se:kpvoltage}
\begin{figure*}
\includegraphics[width=\textwidth]{combined_figure_2_newer.png}
\caption{(a) Schematic view of the Kelvin probe measurement device. We define the Kelvin probe sensitivity (KPS) $f=\delta V_{KP}/(2 V_{KP}^{(0)})$ [see Eq.~(\ref{eq:f})] as the relative difference in the voltage measured
for the two FE orientations. (b) KPS versus gate and bias voltage. Maximal sensitivity occurs when one of the graphene sheets is at charge neutrality.
(c) Cut along a fixed gate voltage showing a double peak. A fit (dashed line) to Eq.~(\ref{eq:sqrt}) shows that the KPS decays at large bias as $V_b^{-\frac{1}{2}}$. In this and all subsequent calculations we use $d = 5 {\rm{nm}}$, $d_g = 90 {\rm{nm}}$ (width of gate capacitor) and $\epsilon_r = 3.8$.
}
\end{figure*}
In this section we calculate the KP voltage, which can be measured via scanning atomic force microscopy (AFM) as function of gate and bias voltages, in a setup as in Fig.~2(a). Such measurements were performed for the bare ferroelectric material~\cite{yasuda2021stacking,ViznerStern2021,weston2022interfacial}. Our theory addresses the renormalization of the internal polarization $V_{FE}$ and the resulting total KP voltage $V_{KP}$ across the electrodes.
The total voltage drop measured in a Kelvin probe setup as in Fig.~2(a) is given by
\begin{equation}\label{eq:kpvoltage}
V_{KP} \equiv V_{FE} - \frac{Q}{C_0} = p V_{KP}^{(0)}- \frac{Q}{ C},
\end{equation}
where we used Eq.~(\ref{eq:VFE}). We subsequently focus on the difference of $V_{KP}$ for the two polarization orientations, $\Delta V_{KP} =V_{KP}^{p=+}-V_{KP}^{p=-} $ which can be measured by scanning the AFM tip across FE domain walls, see Fig.~2(a).
It is convenient to measure $\Delta V_{KP}$ in units of the bare FE voltage by defining the
Kelvin probe sensitivity (KPS)
\begin{equation}
\label{eq:f}
f \equiv \frac{\Delta V_{KP} }{2V_{KP}^{(0)}}=1-\frac{Q^+-Q^-}{2C V_{KP}^{(0)}}.
\end{equation}
The second equality, obtained from Eq.~(\ref{eq:kpvoltage}), shows that $f \to 1$ in the absence of charges on the electrodes.
The numerical solution for $\Delta V_{KP}$ and hence the KPS is shown in Fig.~2(b) as function of bias and gate voltages. A cut at fixed gate voltage as a function of $V_b$ is shown in Fig.~2(c). We can see that $\Delta V_{KP}$ peaks when either one of the two graphene sheets is at charge neutrality and takes a value which is of the order of $V_{KP}^{(0)}$ (and hence $f \lesssim 1$). Along these peaks the charge transfer between the electrodes is small and primarily determined by quantum capacitance, and hence it screens the internal polarization very inefficiently.
We now discuss the physics describing the tail and peaks of $\Delta V_{KP}$ (or equivalently $f$ in Fig.~2(c)). The tails can be analysed by considering the quantum capacitance as a small perturbation. One can solve Eqs.~(\ref{eq4}) and (\ref{eq4a}) up to a given order in $\beta$. In the absence of quantum capacitance ($\beta \to 0$) we have
\begin{equation}
\label{eq:Q0}
Q \to Q^{(0)} \equiv C (V_b \pm V_{KP}^{(0)}),~~~Q_g \to Q_g^{(0)} \equiv C_g V_g.
\end{equation}
Then $V_{KP}^\pm = -V_b$, $\Delta V_{KP}=0$, and $f \to 0$. The vanishing KPS is expected since the potential on the electrodes adjusts such as to completely screen the internal polarization.
Considering a small quantum capacitance correction, $Q=Q^{(0)}+ \delta Q$, $Q_g=Q_g^{(0)}+ \delta Q_g$, and expanding Eq.~(\ref{eq4}) to linear order in these deviations, we obtain for $V_b \gg V_g \gg V_{KP}^{(0)}$
\begin{equation}
\label{eq:sqrt}
f \to \beta \sqrt{\frac{C}{V_b}}.
\end{equation}
This is confirmed as a dashed line in Fig.~2(c).
The height of the two peaks in Fig.~2(c) can be obtained by considering the quantum capacitance of the corresponding neutralized graphene sheet as the dominant term. Consider for example the $Q=0$ peak [nearly vertical peak ridge in Fig.~2(b)]. We can then decouple Eqs.~(\ref{eq4}) and (\ref{eq4a}) by replacing the Fermi energy of the bottom layer $E_F(Q_g-Q)$ by $E_F(Q_g)$. We thus obtain
$0=\frac{Q}{C}-V_b-p V_{KP}^{(0)} - E_F(Q)/e+E_F(Q_g)/e$, yielding a peak at $V_b=E_F(Q_g)/e$ where for large $V_g$ we have $Q_g=C_g V_g$. Solving the quadratic equation yields $Q=p \left( -C \beta+\sqrt{(C \beta)^2 + 4 C V_{KP}^{(0)}} \right)^2/4$. Substituting in Eq.~(\ref{eq:f}), we have $f=1-\frac{Q}{C V_{KP}^{(0)}}$. Expanding in small $V_{KP}^{(0)}$ yields
\begin{eqnarray}
f=1-\frac{V_{KP}^{(0)}}{C \beta^2} +\mathcal{O}((V_{KP}^{(0)})^2).
\end{eqnarray}
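This expansion can be checked numerically: with the closed-form $Q$ at the peak, the error of the first-order formula for $f$ shrinks quadratically in $V_{KP}^{(0)}$. The unit values of $C$ and $\beta$ below are arbitrary illustrative choices, since the identity does not depend on them.

```python
import numpy as np

# Check that f = 1 - V0/(C beta^2) + O(V0^2) at the Q = 0 peak,
# with C and beta set to illustrative unit values.
C, beta = 1.0, 1.0
errs = []
for V0 in (1e-3, 1e-4):
    # Closed-form solution of the quadratic equation at the peak.
    Q = (-C * beta + np.sqrt((C * beta) ** 2 + 4 * C * V0)) ** 2 / 4
    f_exact = 1 - Q / (C * V0)
    f_approx = 1 - V0 / (C * beta ** 2)
    errs.append(abs(f_exact - f_approx))
```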
We can see that $C \beta^2 $ sets a voltage scale below which quantum capacitance effects set in. In our system
\begin{equation}
\mathcal{V} \equiv C \beta^2= \frac{e}{16 \pi d \varepsilon_0} \left( \frac{4 \pi \varepsilon_0 \hbar v_F}{e^2} \right)^2\sim 0.3\,{\rm V}.
\end{equation}
Thus, the reason that the KPS peaks come close to unity
is that the FE material property $V_{KP}^{(0)}$ is smaller than, though of the order of, $\mathcal{V}$.
\section{TER}
\label{se:TER}
In this section we discuss the standard tunneling electroresistance (TER), namely the influence of the polarization on the tunneling coefficient $|\mathcal{T}|^2$. TER results from a polarization-dependent distortion of the electrostatic tunnel barrier~\cite{zhuravlev2005giant,gerra2007ferroelectricity,Lichtensteiger2012,wu2020high}. FE-mediated variations of the barrier are expected to modify the tunneling coefficient due to its exponential sensitivity $|\mathcal{T}|^2 \propto e^{-2 \kappa d}$.
Denoting the tunneling amplitudes for the two polarization orientations by $\mathcal{T}^\pm$, we define
\begin{equation}
{\rm{TER}}=\frac{|\mathcal{T}^+|^2-|\mathcal{T}^-|^2}{|\mathcal{T}^+ |^2 +| \mathcal{T}^-|^2}.
\end{equation}
As a simple model providing an order of magnitude estimate for TER in a few layer FE,
consider the bottom and top graphene sheets to be located at the $z=\pm d/2$ planes, and the two h-BN layers at the $z=\pm d/6$ planes. The tunneling amplitude from bottom to top is given via third-order perturbation theory by~\cite{PhysRevApplied.2.014003} $\mathcal{T} \propto \frac{t^3}{E(z=-d/6)E(z=d/6)}$, where $t$ are nearest-layer hopping amplitudes, and $E(z=\pm d/6)$ is the energy in the h-BN layers. We denote by $E_g$ the energy gap of the bare h-BN layer at $Q=0$. Adding the linear potential due to the charged electrodes, $E(z)=E_{g}+\frac{e Q z}{dC}$.
The tunneling amplitude becomes $\mathcal{T}^\pm \propto \left( E_g^2-\left( \frac{e Q^\pm }{6C} \right)^2 \right)^{-1}$. If the two directions of polarization lead to exactly opposite charge transfer, $Q^+ = - Q^-$, then $\mathcal{T}^+=\mathcal{T}^-$. In general TER$=0$ when inversion symmetry holds, namely when $V_b=0$ and $V_g=0$. Finite
TER results from $Q^+ \ne - Q^-$,
\begin{equation}
{\rm{TER}} \approx \frac{\left( \frac{eQ^- }{6C} \right)^2-\left( \frac{eQ^+ }{6C} \right)^2}{2E^2_g}.
\end{equation}
For sufficiently large $V_b$, using Eq.~(\ref{eq:Q0}) which ignores quantum capacitance effects, we have ${\rm TER} = - e^2 V_b V_{KP}^{(0)}/(18 E_g^2)$ independently of $V_g$. Assuming $E_g \sim 1\,{\rm eV}$, this result leads to a
few percent TER for a bias $V_b$ of a few volts. The smallness of the effect derives from the small ratio between the FE voltage $V_{KP}^{(0)}$ and the gap of h-BN; this ratio can, however, be larger in other materials. Higher TER may also be obtained in an asymmetric device when one electrode is weakly coupled while the other one is strongly coupled~\cite{rogee2022ferroelectricity}.
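The large-bias estimate follows from a short exact identity, which the following sketch reproduces numerically. The bias value $V_b = 2$ V is an illustrative choice, the charges are the $\beta \to 0$ values of Eq.~(\ref{eq:Q0}), and we work in units $e=1$ with energies in eV.

```python
# Large-bias TER estimate with the beta -> 0 charges Q^+/- = C(V_b +/- V0).
# Units: e = 1, energies in eV; V_b = 2 V is an illustrative choice.
C = 1.0                       # drops out of the final expression
V_b, V0, E_g = 2.0, 0.12, 1.0
Qp = C * (V_b + V0)
Qm = C * (V_b - V0)
ter_exact = ((Qm / (6 * C)) ** 2 - (Qp / (6 * C)) ** 2) / (2 * E_g ** 2)
ter_formula = -V_b * V0 / (18 * E_g ** 2)
```

With these numbers the result is at the few-percent level, in line with the estimate in the text.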
\section{Momentum conserving Tunneling}
\label{se:tunneling}
In this section we consider 2D momentum conserving tunneling through the FE material. In conventional tunnel junctions 2D momentum conservation leads to a resonance peak in the $I(V)$ characteristics corresponding to two aligned Dirac cones~\cite{Britnell2013,mishchenko2014twist,ChenJulian2021}. Our goal is to incorporate the FE polarization into the resonant condition, yielding a sensitive probe of the FE orientation and magnitude.
Following the model outlined in Ref.~\onlinecite{Britnell2013}, we start with a Fermi golden rule expression for the tunneling current
\begin{equation}
\begin{split}
I^{(\pm)} = |\mathcal{T}^\pm|^2 \sum_{\nu_B,\nu_T } & \int d^2 k_T \int d^2 k_B \left( f_B(\varepsilon_{\vec{k}_B,\nu_B}) - f_T(\varepsilon_{\vec{k}_T,\nu_T}) \right) \\ & (V_q(k_B,k_T))^2 \delta\left( \varepsilon_{\vec{k}_B,\nu_B} -\varepsilon_{\vec{k}_T,\nu_T} -e V_{KP} \right).
\end{split}
\end{equation}
In this section, in order to disentangle the TER effects of the previous section from the effects of momentum conservation, we assume for simplicity $\mathcal{T}^+ = \mathcal{T}^-$. Here, $f_{i}(\varepsilon)=(\exp[(\varepsilon-E_{F,i})/T]+1)^{-1}$ are Fermi functions and $\varepsilon_{\vec{k}_i,\nu_i} = \hbar v_F \nu_i |\vec{k}_i|$ are the electron ($\nu=1$) and hole ($\nu=- 1$) bands in the bottom or top sheet $(i=B,T)$, respectively (see Fig.~3). This process describes tunneling of an electron from an occupied momentum state $\vec{k}_B$ in the bottom layer to an unoccupied state with momentum $\vec{k}_T$ at the top layer, measured from the respective Dirac point in a given valley. The scattering potential
contains a phenomenological momentum dependence $V_q (k_B,k_T)= \frac{1}{q_c^2 + (\vec{k}_B - \vec{k}_T - \vec{Q} )^2} $. Here $|\vec{Q}|=K \theta $ is a momentum shift between the top and bottom Dirac points (at momentum $K$ in the Brillouin zone) due to a relative twist of the layers by angle $\theta$. The limit $q_c \to 0$ corresponds to momentum conserving tunneling. A finite $q_c$ phenomenologically describes non-momentum-conserving tunneling processes e.g. due to short-range disorder or the Moir\'e pattern of either the h-BN or its interface with graphene. We take $q_c^{-1}=12\,{\rm nm}$~\cite{Britnell2013}. The typical energy band diagram in the presence of a bias voltage is shown in Fig.~3(a). The Fermi levels $E_{F,T}=E_F(Q)$ and $E_{F,B}=E_F(Q_g-Q)$ mark the distance of these Fermi levels from the corresponding Dirac points; from Eq.~(\ref{eq4}) it follows that the energy difference of the two Dirac points is given precisely by $eV_{KP}$, as marked in Fig.~3(a).
\begin{figure}
\includegraphics[width=1.0\columnwidth]{scheme3_tunneling_new.png}
\caption{(a) Typical energy band diagram (momentum shift for clarity). The Fermi levels of the bottom and top layers $E_{F,i}$ ($i=B,T$) are determined from the charge densities $Q,Q_g$. The $KP$ voltage $V_{KP}$ sets the energy misalignment of the Dirac points. (b) At zero relative twist angle equi-energy contours are concentric circles. At $V_{KP}=0$ these rings exactly overlap for all energies (see marked energy diagram in Fig.~4), leading to a resonant momentum conserving tunneling in the entire voltage window between the two Fermi energies. (c) For finite twist the equi-energy circles are non-concentric, and resonance peaks occur when these circles are tangential for all energies (see marked energy diagrams in Fig.~5), when $eV_{KP} = \pm \Delta$ is satisfied (see Eq.~(\ref{eq:VKPDELTA})).\label{fig:3}
}
\end{figure}
\subsection{Zero twist angle}
When the two graphene sheets are perfectly aligned we have $\vec{Q}=0$. For any energy within the voltage window, the two momenta $\vec{k}_B$ and $\vec{k}_T$ belong to two concentric circles in momentum space, as denoted in Fig.~3(b). The energy displacement of the Dirac cones, $e V_{KP}$, is controlled by the bias voltage. When $V_{KP}=0$ these two circles in momentum space overlap, for all energies, leading to a resonant current peak.
The resonance peak can be obtained by performing the angular integration, leading to
\begin{equation}
\label{eq:32}
\begin{split}
I^{(\pm)} \propto \int_{-\infty}^\infty d k_T d k_B & \frac{(f_B - f_T) |k_B| |k_T| (q_c^2 + k_B^2 + k_T^2) }{\left(q_c^2 + (k_B - k_T)^2 \right)^\frac{3}{2} \left(q_c^2 + (k_B + k_T)^2 \right)^\frac{3}{2} } \\
& \delta\left( \hbar v_F(k_B - k_T) - e V_{KP} \right).
\end{split}
\end{equation}
This yields a single integral that we evaluate numerically.
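The structure of this single integral can be explored with a minimal numerical sketch. Everything below is dimensionless ($\hbar v_F = 1$, momenta in arbitrary units), and the occupation difference $f_B - f_T$ is replaced by a smooth placeholder Gaussian window, which is an assumption of this illustration rather than the Fermi functions of the text. The delta function fixes $k_B = k_T + \delta$ with $\delta = eV_{KP}$, so the factor $(q_c^2+\delta^2)^{-3/2}$ comes out in front as the line shape of the peak.

```python
import numpy as np

q_c = 0.1  # illustrative dimensionless momentum cutoff

def current(delta, n_k=2001):
    # One-dimensional quadrature left after the angular integration and
    # the delta-function constraint k_B = k_T + delta. The occupation
    # factor is a placeholder Gaussian window (an assumption).
    k = np.linspace(-5.0, 5.0, n_k)
    dk = k[1] - k[0]
    kB = k + delta
    window = np.exp(-k ** 2)
    num = np.abs(k) * np.abs(kB) * (q_c ** 2 + k ** 2 + kB ** 2)
    den = (q_c ** 2 + (2 * k + delta) ** 2) ** 1.5
    line_shape = (q_c ** 2 + delta ** 2) ** -1.5
    return line_shape * np.sum(window * num / den) * dk

I0, I1, I2 = current(0.0), current(0.2), current(0.5)
```

The computed current peaks at $\delta = 0$ and decays with the misalignment, as expected from the resonance condition.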
\begin{figure*}
\includegraphics[width=\textwidth]{phase_diag_full_newestr.png}
\caption{Color plot showcasing the calculated current $I^{(+)}$ for one orientation ($p = +1$) of the polarization, as a function of the gate and bias voltage, for the untwisted case $\theta=0$. On top of it are overlayed the curves for which the top and/or the bottom graphene electrodes are at the Dirac point (green $Q_{t}^{(p)}=0$ and yellow $Q_{b}^{(p)}=0$ curves, respectively), as well as the curves where the graphene sheets are electrostatically aligned (red curve, $V_{KP}^{(p)}=0$).
The right inset shows a cut of the $I-V$ curve at $V_g=20\,$V for both $p=\pm 1$.
The two resonance peaks match with the band alignment condition $V^{(\pm)}_{KP}=0$ and their line shape is given by Eq.~(\ref{eq:32}). The top inset displays the Dirac bands (horizontally shifted for clarity) for $p=1$, for points 1-6 corresponding to $V_g=20\,$V and $V_b=-1,-0.35,-0.17,0,0.34,1.0\,$V. The resonance case corresponds to point 3 (blue outline). The green dashed lines represent the energy midway between the two Dirac points, at which the tunneling process conserves momentum. Tunneling through this energy state begins at point 5, at which the plateau in the right inset ends.
}
\end{figure*}
Fig.~4 shows a color map of the current $I$ versus $V_b$ and $V_g$ for a specific polarization direction. As a guide to the eye, this figure displays the charge neutrality curves of either the bottom or the top graphene sheet.
We also plot the curve where $V_{KP}=0$. A cut of the $I(V_b)$ curve for either direction of the polarization at fixed $V_g$ is shown in the right inset of Fig.~4. We can see a pronounced resonance peak positioned precisely at the $V_b$ at which $V_{KP}=0$, where the Dirac cones overlap. This occurs at different resonant voltages for the two polarization orientations.
The line shape of the peak versus $V_b$ stems from the implicit dependence of $V_{KP}$ on the latter, and takes the form
\begin{equation}
\label{eq:32b}
I(V_b)|_{V_{KP} \approx 0} \propto ((\hbar v_F q_c)^2+(e V_{KP})^2)^{-3/2}.
\end{equation}
We can also observe a plateau in the $I(V_b)$ curve in Fig.~4 which terminates at $V_b \approx 0.4V$ (at point 5). As shown in the energy diagrams in Fig.~4, this corresponds to a threshold for transport through the midway energy between the two Dirac points, see green dashed lines. At the threshold $V_b \approx 0.4V$ this specific energy enters into the voltage window~\cite{feenstra2012single}.
\subsection{Finite twist angle}
Tunneling between twisted graphene sheets has been discussed in numerous theoretical~\cite{feenstra2012single,de2014theory} and experimental~\cite{mishchenko2014twist} works. Whereas at zero twist, there is a single resonance condition $V_{KP}=0$ at which the two Dirac cones completely overlap, at a finite twist angle there is a momentum shift between the Dirac points, given by $|\vec{Q}|= K \theta$ for $\theta \ll 2 \pi$. Thus equi-energy contours of the bottom and top layers are non-concentric circles, see Fig.~3(c). Upon tuning their relative energy $e V_{KP}$ there are two situations where the Dirac cones are tangential,
\begin{equation}
\label{eq:VKPDELTA}
e V_{KP} = \pm \hbar v_F \theta K \equiv \pm \Delta.
\end{equation}
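For orientation, the magnitude of $\Delta$ at the twist angle used below can be estimated with a short calculation, assuming standard graphene parameters that are not quoted in the text: lattice constant $a = 2.46$ \AA\ (so $K = 4\pi/3a$) and $v_F = 10^6$ m/s.

```python
import numpy as np

# Magnitude of Delta = hbar v_F theta K for theta = 2 degrees, assuming
# standard graphene parameters (a = 2.46 angstrom, v_F = 1e6 m/s).
hbar, v_F, e = 1.055e-34, 1.0e6, 1.602e-19
a = 2.46e-10
K = 4 * np.pi / (3 * a)        # Dirac-point momentum
theta = np.deg2rad(2.0)
Delta_eV = hbar * v_F * theta * K / e
```

With these numbers $\Delta$ comes out at roughly $0.4$ eV, consistent with the sub-volt peak positions in the figure below.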
The calculated current at a finite twist angle of $\theta = 2^{\circ}$ is shown in the color plot of Fig.~5. We can clearly see that the pair of resonance peaks, also shown along a cut at fixed gate voltage, overlap with the condition Eq.~(\ref{eq:VKPDELTA}). The corresponding dispersion relations are shown in the top insets of Fig.~5. The sensitivity of the resonant peaks to different orientations of the FE polarization can be enhanced by tuning the gate such that the resonance peaks occur near the charge neutrality condition of one graphene sheet. This is indeed seen for the cut of the $I(V_b)$ curve at $V_g=9.5\,$V.
\begin{figure*}
\includegraphics[width=\textwidth]{twisted_phase_diagram_new.png}
\caption{Color plot showcasing the calculated current as a function of gate and bias voltage for a specific orientation of the polarization $(p=+1)$, with twist angle $\theta = 2^{\circ}$ between the graphene sheets. The resonance peak splitting to two peaks at $eV_{KP}= \pm \Delta$ is observed on the color plot. The right insets show two cuts at fixed gate voltages, plotting both $I^{(+)}$ and $I^{(-)}$. We can see that each peak of either $eV_{KP}= \Delta$ or $eV_{KP}= - \Delta$ further splits according to the direction of the FE polarization $(p=\pm 1)$. The top inset shows the calculated band structure alignment (for points 1-4 corresponding to $V_g=20\,$V and $V_b=-1.14,-0.70,-0.35,0.70\,$V).
}
\end{figure*}
\section{Summary}
To summarize, in this work we studied graphene-FE-graphene junctions. We focused on the self-consistent interplay of the FE polarization with the electrostatics and quantum capacitance of the graphene electrodes. We first asked whether experiments would detect a FE voltage across the conducting graphene electrodes, and discovered that a finite signal appears only upon gate tuning of one of the Dirac electrodes to charge neutrality.
Moving to the tunneling current, we studied two mechanisms for its dependence on the polarization direction. First, we discussed a dependence of the tunneling coefficient on the polarization. This tunneling electroresistance (TER) becomes sizable upon increasing the bias voltage which leads to a device asymmetry. TER is relatively small at zero bias for available values of gate voltages. We then considered 2D momentum conserving tunneling, and showed that one can detect the actual value of the polarization via a shift in the voltage axis of a resonant peak occurring in the $I(V)$ characteristics due to the alignment of the Dirac cones.
Finite TER allows one to identify the two polarization states by measuring different tunneling conductance for the same values of $V_g,V_b$. In order to switch from one state to the other, one would need, in practice, to slide a domain wall separating AB and BA stacking through the tunneling region. This could be achieved by tuning the bias $V_b$ above the switching point. The detection scheme based on the presence of resonant peaks in the $I(V_b)$ characteristics, however, can operate even in a system where multiple domains are situated in the tunneling region, leading to a visible FE splitting of each peak.
Our results apply to general FE materials such as transition metal dichalcogenides. We hope that our theory will serve as a basis for analysis of near future FE tunneling experiments.
\label{se:summary}
\section{Acknowledgments}
We acknowledge helpful discussions with Moshe Ben Shalom, Igor Rozhansky, Simon Sallah Atri and Hadar Steinberg. This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program, grant agreement No. 951541.
\section{Introduction}
\label{sec:intro}
In 1959 Richard Kadison and Isadore Singer \cite{kad1} formulated a problem in quantum mechanics that later became one of the iconic mathematical questions of the twentieth century. The problem is now known as the Kadison\textendash Singer Problem (KSP).
\begin{problem}[KSP]
\label{p1}
Does every pure state on the algebra of bounded diagonal operators acting on the Hilbert space of square summable complex-valued sequences have a unique extension to a regular state on the algebra of all bounded operators? $\hfill \Box$
\end{problem}
Following a finite-dimensional reformulation \cite{and1} by Joel Anderson in 1979 and further reduction to an equivalent problem in discrepancy theory \cite{wea1} by Nik Weaver in 2004, a positive answer to KSP was eventually found \cite{mar1,mar2} by Adam Marcus, Daniel Spielman and Nikhil Srivastava in 2013. We will not attempt a detailed explanation of KSP but instead refer readers to the excellent review article \cite{har1} by Nick Harvey. The Marcus\textendash Spielman\textendash Srivastava Discrepancy Theorem (MSSDT) was a basic platform for the eventual solution of KSP and is a central theme in our paper.
\begin{theorem}[MSSDT]
\label{ksdiscrepancy}
If $\bfv_1,\ldots,\bfv_n \in {\mathbb C}^m$ are such that $\|\bfv_j\|^2 \leq \alpha$ for all $j=1,\ldots,n$ and
\begin{equation}
\label{ntfmatrixsum}
\sum_{j=1}^n \bfv_j \bfv_j^* = I
\end{equation}
where $I \in {\mathbb C}^{m \times m}$ is the unit matrix then there is a partition of the index set ${\mathcal J} = \{1,\ldots,n\} \subset {\mathbb N}$ into two disjoint subsets ${\mathcal J}_1$ and ${\mathcal J}_2$ such that
\begin{equation}
\label{ksdcon}
\left \| \sum_{j\, \in\, {\mathcal J}_k} \bfv_j {\bfv_j}^* \right \|_2 \leq \left( \frac{1}{\sqrt{2}} + \sqrt{\alpha} \right)^2
\end{equation}
for each $k=1,2$. The norm used here is the $2$-norm. Note that $(\ref{ntfmatrixsum})$ implies $m \leq n$. $\hfill \Box$
\end{theorem}
MSSDT was a fundamental stepping stone in the ultimately successful quest \cite{mar1, mar2} for a positive answer to KSP \cite{kad1}. In fact MSSDT also implies the truth of the Weaver Discrepancy Statement (WDS) proposed earlier in 2004 by Nik Weaver \cite{wea1} as a mechanism for solving KSP.
\begin{statement}[WDS]
\label{wds}
Let $\bfv_1,\ldots,\bfv_n \in {\mathbb C}^m$ satisfy $\|\bfv_j\|^2 \leq \alpha$ for each $j=1,\ldots,n$ and suppose that
\begin{equation}
\label{ntfquadform}
\sum_{j=1}^n |{\bfv_j}^*\bfx|^2 = 1
\end{equation}
for all $\bfx \in {\mathbb C}^m$ with $\|\bfx\|=1$. Then we can partition ${\mathcal J} = \{1,\ldots,n\}$ into two disjoint sets ${\mathcal J}_1, {\mathcal J}_2$ such that
\begin{equation}
\label{wdscon}
\left | \sum_{j \in {\mathcal J}_k} |{\bfv_j}^* \bfx|^2 - \frac{1}{2} \right | \leq 5 \sqrt{\alpha}
\end{equation}
for each $k=1,2$ and all $\bfx \in {\mathbb C}^m$ with $\|\bfx\|=1$. If we write ${\mathcal J}_1 = \{h(1),\dots,h(p)\}$ where $1 \leq h(1) < \cdots < h(p) \leq n$ and ${\mathcal J}_2 = \{k(1),\ldots,k(q)\}$ where $1 \leq k(1) < \cdots < k(q) \leq n$ with $p+q=n$ and define $V_1 = [\bfv_{h(1)},\ldots,\bfv_{h(p)}]$ and $V_2 = [\bfv_{k(1)},\ldots,\bfv_{k(q)}]$ then $(\ref{wdscon})$ can be rewritten as
\begin{equation}
\label{wdsmatrix}
\left \| \rule{0cm}{0.4cm} \hspace{1mm} V_kV_k^* - \frac{I}{2} \hspace{1mm} \right \|_2 \leq 5 \sqrt{\alpha},
\end{equation}
where $I \in {\mathbb C}^{m \times m}$ is the unit matrix, for each $k=1,2$. $\hfill \Box$
\end{statement}
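As a small computational illustration of WDS, one can build a random Parseval frame of $n = 8$ vectors in ${\mathbb C}^2$ from the orthonormal columns of a QR factorization and search all $2^n$ partitions; the theorem guarantees that the best partition satisfies the bound. The seed and sizes below are arbitrary illustrative choices.

```python
import numpy as np

# Brute-force check of the WDS bound on a small random Parseval frame.
rng = np.random.default_rng(0)
m, n = 2, 8
A = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
W, _ = np.linalg.qr(A)          # W has orthonormal columns: W* W = I_m
V = W.conj().T                  # columns v_1, ..., v_n form a Parseval frame
alpha = max(np.linalg.norm(V[:, j]) ** 2 for j in range(n))

# Exhaustive search over partitions; note that
# ||V_2 V_2^* - I/2|| = ||V_1 V_1^* - I/2|| automatically.
best = np.inf
half = np.eye(m) / 2
for mask in range(2 ** n):
    idx = [j for j in range(n) if (mask >> j) & 1]
    V1 = V[:, idx]
    best = min(best, np.linalg.norm(V1 @ V1.conj().T - half, 2))
```

At these small sizes the search is trivial; for large $n$ the exhaustive enumeration is of course infeasible, which is part of what makes the existence statement nontrivial.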
An advantage of WDS is that it allows us to interpret the discrepancy in terms of quadratic forms. WDS says that any quadratic form expressed as a sum of small rank one quadratic forms can be split into two almost equal parts. The {\em discrepancy} is represented as the difference in values on the surface of the unit sphere of the two constituent quadratic forms with each one expressed as a sum of small rank one quadratic forms. A key motivation for our paper is the close connection between MSSDT and the theory of frames \cite{cas2,chr1} in finite dimensional Euclidean space.
\subsection{Motivation}
\label{m}
Conditions (\ref{ntfmatrixsum}) and (\ref{ntfquadform}) are equivalent. A set of vectors $\{\bfv_1,\ldots,\bfv_n\} \in {\mathbb C}^m$ that satisfies these conditions is said to form a {\em Parseval frame} or {\em normalized tight frame} in ${\mathbb C}^m$. Based on MSSDT one of our key motivations was to explore the connection between discrepancy theory and Parseval frames in finite-dimensional Euclidean space. Our second motivation is less obvious. In a recent note \cite{sri1}, Nikhil Srivastava wrote that in general $\ldots$ the presence of large vectors (in a frame) {\em is an obstruction to the existence of a low discrepancy partition}. Thus we decided to investigate Parseval frames in which all vectors are the same size. It is known that Parseval frames in finite-dimensional Euclidean spaces are closely related to orthogonal matrices. The Walsh matrices are a well-known collection of real symmetric matrices where all elements have magnitude $1$ and the sets of row and column vectors are each sets of mutually orthogonal vectors. Thus we were led to a discussion of discrepancy theory for Parseval frames defined by Walsh matrices.
\subsection{Tight frames in finite dimensional Euclidean space}
\label{ntffdes}
If the set of vectors $\{\bfv_1,\ldots,\bfv_n\} \in {\mathbb C}^m$ satisfies the condition
\begin{equation}
\label{tf}
\sum_{j=1}^n |\bfv_j^*\bfx|^2 = c
\end{equation}
for some $c > 0$ and all $\bfx \in {\mathbb C}^m$ with $\|\bfx\| = 1$ then $\{\bfv_1,\ldots,\bfv_n\}$ is said to form a {\em tight frame} in ${\mathbb C}^m$ with frame constant $c$. In such cases we must have $m \leq n$. If we define the {\em pre-frame operator} $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ then
$$
\sum_{j=1}^n |{\bfv_j}^* \bfx|^2 = \bfx^* \left[ \sum_{j=1}^n \bfv_j {\bfv_j}^* \right] \bfx = \bfx^* VV^* \bfx = \bfx^* S \bfx
$$
where $S = VV^* \in {\mathbb C}^{m \times m}$ is called the {\em frame operator}. The frame operator is self-adjoint and positive. Hence it is invertible. For all $\bfx \in {\mathbb C}^m$ we can write
$$
\bfx = \sum_{j=1}^n \left( {\bfv_j}^* S^{-1} \bfx \right) \bfv_j
$$
where the coefficients ${\bfv_j}^* S^{-1} \bfx$ for each $j=1,\ldots,n$ are called the frame coefficients for $\bfx$. In the case where $c=1$ the condition (\ref{tf}) reduces to (\ref{ntfquadform}) and the frame becomes a Parseval frame. Condition (\ref{ntfquadform}) can now be rewritten as
\begin{equation}
\label{ntfframeop}
\bfx^*(S - I_m)\bfx = 0
\end{equation}
for all $\bfx \in {\mathbb C}^m$ with $\bfx \neq \mbox{\boldmath $0$}$. Thus for a Parseval frame we must have $S = I_m$ and the frame representation reduces to
\begin{equation}
\label{ntfrep}
\bfx = \sum_{j=1}^n \left( {\bfv_j}^* \bfx \right) \bfv_j
\end{equation}
for all $\bfx \in {\mathbb C}^m$. This formula suggests another more familiar formula. If we define $W = V^*$ and write $W = [\bfw_1,\ldots,\bfw_m] \in {\mathbb C}^{n \times m}$ then the condition $S = VV^* = I_m$ can be rewritten as $W^*W = I_m$ and this means that the set $\{\bfw_1,\ldots,\bfw_m\}$ forms an orthonormal set in ${\mathbb C}^n$. If we extend this set to an orthonormal basis $\{\bfw_1,\ldots,\bfw_n\}$ and write $H = [H_1 \mid H_2] = [\bfw_1, \ldots, \bfw_m \mid \bfw_{m+1},\ldots,\bfw_n] \in {\mathbb C}^{n \times n}$ then $H_1 = W$. Let $G = H^*$ and write
$$
G = \left[ \begin{array}{cc}
G_1 \\ \hline
G_2 \end{array} \right]
$$
where $G_1 = {H_1}^* = V$. Thus we may write
$$
G = \left[ \begin{array}{cccc}
\bfv_1 & \bfv_2 & \cdots & \bfv_n \\ \hline
\bfr_1 & \bfr_2 & \cdots & \bfr_n \end{array} \right]
$$
where $\bfr_j \in {\mathbb C}^{n-m}$ for each $j=1,\ldots,n$. The matrix $G = [\bfg_1,\ldots,\bfg_n] \in {\mathbb C}^{n \times n}$ is orthogonal and the set of vectors $\{\bfg_1,\ldots,\bfg_n\} \in {\mathbb C}^n$ forms an orthonormal basis for ${\mathbb C}^n$. Since
$$
\bfg_j = \left[ \begin{array}{cc}
\bfv_j \\
\bfr_j \end{array} \right]
$$
for each $j=1,\ldots,n$ the standard representation for a vector
$$
\bfz = \left[ \begin{array}{cc}
\bfx \\
\bfr \end{array} \right] \in {\mathbb C}^n
$$
in terms of the orthonormal basis $\{\bfg_1,\ldots \bfg_n\}$ is given by
\begin{equation}
\label{obrep}
\bfz = \sum_{j=1}^n \left( {\bfg_j}^* \bfz \right) \bfg_j \iff \left[ \begin{array}{cc}
\bfx \\
\bfr \end{array} \right] = \sum_{j=1}^n \left( {\bfv_j}^* \bfx + {\bfr_j}^*\bfr \right) \left[ \begin{array}{cc}
\bfv_j \\
\bfr_j \end{array} \right].
\end{equation}
For vectors in the subspace $S_m \subseteq {\mathbb C}^n$ defined by $\bfr = \mbox{\boldmath $0$}$ the representation in (\ref{obrep}) reduces to
\begin{equation}
\label{efrep}
\left[ \begin{array}{cc}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] = \sum_{j=1}^n \left( {\bfv_j}^* \bfx \right) \left[ \begin{array}{cc}
\bfv_j \\
\bfr_j \end{array} \right]
\end{equation}
which is essentially the same representation as (\ref{ntfrep}). It follows that $\sum_{j=1}^n ({\bfv_j}^* \bfx) \bfr_j = \mbox{\boldmath $0$}$. In fact we can see that if $\bfz \in S_m$ then
$$
\sum_{j=1}^n | {\bfg_j}^* \bfz |^2 = \sum_{j=1}^n \left( | {\bfv_j}^* \bfx |^2 + | {\bfr_j}^* \mbox{\boldmath $0$}|^2 \right) = \sum_{j=1}^n | {\bfv_j}^* \bfx |^2 = 1
$$
for all $\bfz \in S_m$ with $\|\bfz\|^2 = 1$. Hence the set $\{ \bfg_1,\ldots,\bfg_n\} \in {\mathbb C}^n$ defines a Parseval frame for the $m$-dimensional subspace $S_m \subseteq {\mathbb C}^n$. The frame for $S_m$ defined by $G$ is simply the original frame defined by $V$ embedded into ${\mathbb C}^n$. We could write $G \cong V$.
For a Parseval frame defined by a pre-frame operator $V$ the frame operator $S = VV^* = W^*W = I_m \in {\mathcal B}({\mathbb C}^m)$ is the identity mapping. The related operator $R = V^*V = WW^* = J_n \in {\mathcal B}({\mathbb C}^n)$ satisfies the equations $J_n^2 = J_n$ and $J_n W = W$. Thus $J_n$ is a projection onto the column space of $W$.
\subsection{The Walsh functions and Walsh matrices}
\label{wm}
The {\em Walsh functions} $W_k:[0,1] \rightarrow \{-1,1\}$ for $k \in {\mathbb N} - 1$ can be defined as follows. Choose $m \in {\mathbb N}$ and let each $k < 2^m$ be represented in binary form as
$$
k = k_m \cdots k_1 \iff k = \sum_{s=1}^m k_s\, 2^{s-1} \iff \bfk = (k_1,\ldots,k_m,0,0,\ldots) \in \{0,1\}^{\infty}
$$
and let each $x \in [0,1]$ be represented in binary form as
$$
x = 0.x_1x_2 \cdots \iff x = \sum_{s=1}^{\infty} x_s 2^{-s} \iff \bfx = (x_1,x_2,\ldots) \in \{0,1\}^{\infty}
$$
where no expansion is permitted with $x_s = 1$ for all $s \geq n$ for some $n = n(x) \in {\mathbb N}$. Then we have
$$
W_k(x) = (-1)^{p(\sbfk,\sbfx)}
$$
for each $k < 2^m$ and each $x \in [0,1]$ where $p(\bfk,\bfx) = \sum_{s=1}^m k_s x_s$. The Walsh functions form a complete orthonormal set in the Hilbert space $L^2[0,1]$. They were introduced in a 1923 paper \cite{wal1} by Joseph Walsh and have since found wide application in digital signal processing. In this regard a fundamental requirement was the development of efficient computation routines for Walsh matrices and the associated function representations using Walsh series and Walsh transforms. For a detailed account see \cite{sch1}. See also \cite{fin1} and references therein and some recent work on the construction of wavelet frames in Walsh analysis \cite{far1, far2, far3}. Importantly we note that Per Enflo used Walsh series to prove a celebrated result \cite{enf1} that there exist separable Banach spaces with no Schauder basis.
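The definition above can be implemented directly from the binary digits. Sampled at the dyadic midpoints $x = (j+\tfrac{1}{2})/2^m$, the first $2^m$ Walsh functions give mutually orthogonal rows with entries $\pm 1$; the row ordering produced this way is a permutation of the orderings discussed below.

```python
import numpy as np

# W_k(x) = (-1)^{p(k,x)} from the binary digits of k and x; sampling at
# dyadic midpoints yields mutually orthogonal +/-1 rows.
def walsh(k, x, m):
    p = 0
    for s in range(1, m + 1):
        k_s = (k >> (s - 1)) & 1       # bit of k with weight 2^{s-1}
        x_s = int(x * 2 ** s) % 2      # binary digit of x with weight 2^{-s}
        p += k_s * x_s
    return (-1) ** p

m = 3
N = 2 ** m
xs = [(j + 0.5) / N for j in range(N)]             # midpoint samples
M = np.array([[walsh(k, x, m) for x in xs] for k in range(N)])
G = M @ M.T                                         # Gram matrix of the rows
```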
The Walsh functions are closely related to the Walsh matrices which are our particular interest in this paper. Let $n = 2^r$ for some $r \in {\mathbb N}$. The {\em Walsh matrix} $Y = Y_r \in {\mathbb C}^{n \times n}$ can be efficiently computed using the recursive Sylvester construction defined by the M{\sc atlab} algorithm \medskip
\begin{algorithmic}
\STATE $Y = [1]$;
\STATE {\bf for} {$i = 1: r$}
\STATE $Y = [Y,\hspace{0.2cm} Y; Y,\hspace{0.2cm} -Y]$;
\STATE {\bf end}
\end{algorithmic}
\noindent The matrix $Y$ is real symmetric with $y_{ij} = \pm 1$ for all $i, j \in \{1,\ldots,n\}$ and $Y^*Y = nI$. The columns $\{\bfy_1,\ldots,\bfy_n\}$ form an orthogonal basis for ${\mathbb C}^n$ with $\|\bfy_j\| = \sqrt{n}$ for all $j=1,\ldots,n$. If we choose $m \leq n$ and define $W = [\bfy_1,\ldots,\bfy_m]/\sqrt{n} \in {\mathbb C}^{n \times m}$ and $V = W^*$ and if we write $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ then the columns of $V$ define a Parseval frame in ${\mathbb C}^m$ which consists of $n = 2^r$ vectors $\bfv_1,\ldots,\bfv_n$ with $\| \bfv_1 \|^2 = \cdots = \| \bfv_n \|^2 = m/n$.
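The recursion and the frame properties just stated are easy to verify directly. The following Python transcription of the Sylvester construction, with illustrative sizes $r = 3$ and $m = 3$, checks that $Y$ is symmetric with $Y^*Y = nI$ and that the resulting frame operator is the identity with equal-norm frame vectors.

```python
import numpy as np

# Sylvester recursion for the Walsh matrix Y_r (natural ordering).
def walsh_matrix(r):
    Y = np.array([[1]])
    for _ in range(r):
        Y = np.block([[Y, Y], [Y, -Y]])
    return Y

r, m = 3, 3                      # illustrative sizes with m <= n = 2^r
n = 2 ** r
Y = walsh_matrix(r)
V = Y[:m, :] / np.sqrt(n)        # pre-frame operator V = W^*, since Y = Y^T
S = V @ V.T                      # frame operator S = V V^*
norms = np.sum(V ** 2, axis=0)   # squared norms of the frame vectors
```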
We have
$$
Y_1 = \left[ \begin{array}{r|r}
1 & 1 \\ \hline
1 & -1\end{array} \right],
$$
$$
Y_2 = \left[ \begin{array}{rr|rr}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\ \hline
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 \end{array} \right]
$$
and
$$
Y_3 = \left[ \begin{array}{rrrr|rrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ \hline
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{array} \right].
$$
Note that these matrices are known as the {\em Walsh} matrices \cite{fin1} using the {\em natural ordering} and they are a special case of the {\em Hadamard} matrices \cite{hed1}. The construction described in the M{\sc atlab} algorithm is due to Sylvester \cite[Section 3.1]{hed1}. The Walsh matrices can be presented with various different orderings of the rows and columns. The {\em sequency ordering} \cite{fin1} is defined by ordering the rows according to the number of sign changes in each row. Thus, with this ordering, we have
$$
Z_1 = \left[ \begin{array}{rr}
1 & 1 \\
1 & -1\end{array} \right],
$$
$$
Z_2 = \left[ \begin{array}{rrrr}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 \\
1 & -1 & 1 & -1\end{array} \right]
$$
and
$$
Z_3 = \left[ \begin{array}{rrrrrrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \end{array} \right].
$$
The advantage of the sequency ordering is that row $k+1$ of $Z_r$ defines the value of the Walsh function $W_k(x) $ on each interval $x \in ((\ell-1)/n, \ell/n)$ for each $\ell=1,\ldots,n$ where $n = 2^r$. The disadvantage is that there is no efficient direct numerical construction. Thus the Walsh matrices with the sequency ordering are normally constructed by permutation of the natural ordering. We will always use the natural order in this paper.
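Since the sequency ordering is normally obtained by permuting the natural ordering, a short numerical sketch (again an aside, with our own helper names) is to sort the rows of $Y_r$ by their number of sign changes:

```python
import numpy as np

def walsh_matrix(r):
    Y = np.array([[1]])
    for _ in range(r):
        Y = np.block([[Y, Y], [Y, -Y]])
    return Y

def sequency_order(Y):
    """Permute the rows of a naturally ordered Walsh matrix so that
    row k+1 has exactly k sign changes (the sequency ordering)."""
    sign_changes = (np.diff(Y, axis=1) != 0).sum(axis=1)
    return Y[np.argsort(sign_changes)]

Z2 = sequency_order(walsh_matrix(2))
```

For $r = 2$ this recovers the matrix $Z_2$ displayed above.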
\subsection{Contribution}
\label{con}
In this paper we discuss discrepancy results for a special class of Parseval frames defined by Walsh matrices. In particular we show that if $m,n \in {\mathbb N}$ with $m \leq n = 2^r$ for some $r \in {\mathbb N}$ then there is a Parseval frame defined by a pre-frame matrix operator $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ where $\bfv_j = [v_{ij}] \in {\mathbb C}^m$ with $v_{ij} = \pm 1/\sqrt{n}$ for each $i=1,\ldots,m$ and $j=1,\ldots,n$ and $\| \bfv_j \| = \sqrt{m/n}$ for each $j=1,\ldots,n$. We show that for $m \leq n/2$ these frames can be split into two identical tight frames with frame constant $c =1/2$. For $n/2 < m \leq n$ we show that the frames can no longer be evenly split but we find an explicit expression for the discrepancy in a best possible split. Because the vectors in our frames are all the same length we have not imposed any direct condition that forces them to be small. Hence our results are not directly comparable to those in MSSDT. Of course the frame vectors are small if $\sqrt{m/n}$ is small. We also show that all Parseval frames in ${\mathbb C}^m$ constructed from $n = 2^r$ vectors of equal length can be transformed to a corresponding Walsh frame and we ponder the implications of this correspondence in regard to splitting of the associated quadratic forms.
\section{The main results}
\label{mr}
Let $m,n \in {\mathbb N}$ with $m \leq n$. We would like to construct a Parseval frame defined by a pre-frame matrix operator $V = [ v_{ij}] = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ where the frame vectors $\bfv_j \in {\mathbb C}^m$ all have the same length. In order to construct the simplest possible frame we will insist that $v_{ij} = \pm\, 1 /\sqrt{n}$ for all $i=1,\ldots,m$ and all $j = 1,\ldots,n$. Thus $\| \bfv_j\| = \sqrt{m/n}$ for all $j=1,\ldots,n$. To facilitate splitting the frame into two potentially equal parts we will restrict our attention to frames with $n = 2^r$ vectors for some $r \in {\mathbb N}$. We will show that the normalized Walsh matrices provide the ideal building blocks for our proposed frames. We discuss splitting of Parseval frames defined by Walsh matrices and find explicit expressions for the minimal discrepancy.
\subsection{Parseval frames defined by Walsh matrices}
\label{ntfwm}
Define $n = 2^r$ for some $r \in {\mathbb N} + 1$ and suppose $ m \in {\mathbb N}$ with $m < n$. Thus we exclude the case $m=n$. Let $Y = [\bfy_1,\ldots,\bfy_n] \in {\mathbb C}^{n \times n}$ be the corresponding Walsh matrix. We have
$$
Y^*Y = YY^* = nI_n
$$
where $I_n \in {\mathbb C}^{n \times n}$ is the unit matrix and so $G = Y/\sqrt{n}$ is a unitary matrix. Write $G = [\bfg_1,\ldots,\bfg_m \mid \bfg_{m+1},\ldots,\bfg_n]$ and define $W \in {\mathbb C}^{n \times m}$ by setting $W = [\bfg_1,\ldots,\bfg_m]$. Let $V = W^* \in {\mathbb C}^{m \times n}$. Since the columns of $W$ are a subset of the columns of $G$ they form an orthonormal set in ${\mathbb C}^n$. Therefore $W^*W = I_m$ and hence $VV^* = I_m$. As usual we write $V = [\bfv_1,\ldots,\bfv_n]$ where $\bfv_j \in {\mathbb C}^m$ for all $j=1,\ldots,n$. The column vectors $\{\bfv_1,\ldots,\bfv_n\}$ form a Parseval frame in ${\mathbb C}^m$ and since $v_{ij} = \pm1/\sqrt{n}$ for each $i=1,\ldots,m$ and $j=1,\ldots,n$ it follows that $\|\bfv_j\| = \sqrt{m/n}$ for all $j=1,\ldots,n$. Thus the frame vectors are all the same size. The Parseval frame defined by the columns $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ will be called a {\em Walsh frame}.
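As a numerical aside, the Walsh frame construction can be checked directly: take the first $m$ columns of $Y_r/\sqrt{n}$, transpose, and verify the Parseval identity and the common vector length (the helper name \verb+walsh_matrix+ is ours).

```python
import numpy as np

def walsh_matrix(r):
    Y = np.array([[1]])
    for _ in range(r):
        Y = np.block([[Y, Y], [Y, -Y]])
    return Y

r, m = 3, 5
n = 2 ** r
W = walsh_matrix(r)[:, :m] / np.sqrt(n)  # first m normalized columns of Y_r
V = W.T                                  # pre-frame operator, m x n
assert np.allclose(V @ V.T, np.eye(m))               # Parseval: V V* = I_m
assert np.allclose(np.sum(V * V, axis=0), m / n)     # ||v_j||^2 = m/n for all j
```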
We wish to consider what happens when we try to split a Walsh frame into two equal parts. We begin with a simple example.
\begin{example}
\label{ex1}
{\rm Let $m=3$ and $n = 8$. Use rows $1$, $4$ and $3$ of the Walsh matrix $Y_3$ to define
$$
V = \frac{1}{2\sqrt{2}} \left[ \begin{array}{rrrr|rrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \end{array} \right] = \left[ \begin{array}{c|c}
V_a & V_b \end{array} \right]
$$
so that $VV^* = I_3$. The Parseval frame can be split into two identical frames as shown above with $V_a = V_b$ and $V_a{V_a}^* = V_b{V_b}^*= I_3/2$. Now renormalize and define
$$
V = \frac{1}{2} \left[ \begin{array}{rr|rr}
1 & 1 &1 & 1 \\
1 & -1 & -1 & 1 \\
1 & 1 & -1 & -1 \end{array} \right] = \left[ \begin{array}{c|c}
V_a & V_b \end{array} \right]
$$
so that $VV^* = I_3$. If we split the new Parseval frame into two parts $V_a$ and $V_b$ as shown above then the two parts are no longer identical. In fact a little thought will show that no even split is possible. For the proposed split we have
$$
\|V_a{V_a}^* - I_3/2 \|_2 = \|V_b{V_b}^* - I_3/2 \|_2 = 1/2
$$
which is the best possible. For the corresponding quadratic forms we have
\begin{eqnarray*}
s(\bfx) & = & \bfx^*VV^*\bfx \\
& = & x_1^2 + x_2^2 + x_3^2 \\
& = & \left( x_1^2/2 + x_2^2/2 + x_3^2/2 + x_1x_3 \right) + \left( x_1^2/2 + x_2^2/2 + x_3^2/2 - x_1x_3 \right) \\
& = & \bfx^*V_a{V_a}^*\bfx + \bfx^*V_b{V_b}^*\bfx \\
& = & s_a(\bfx) + s_b(\bfx).
\end{eqnarray*}
Considered separately the sets of vectors defined by the columns of $V_a$ and $V_b$ no longer span ${\mathbb C}^3$. Thus neither $V_a$ nor $V_b$ defines a frame for ${\mathbb C}^3$.} $\hfill \Box$
\end{example}
Let $m \in {\mathbb N}$ with $2^{s-1} < m \leq 2^s$ for some $s \in {\mathbb N}$ and let $n = 2^r$ for some $r \in {\mathbb N}+s$. Thus $m < 2^{s+1} \leq n$. Consider a Walsh frame defined by the first $m$ rows of the normalized Walsh matrix $F = Y/\sqrt{n}$ where $Y = Y_r$ is the corresponding Walsh matrix. The above example suggests that we can split this Parseval frame into two identical tight sub-frames each having $2^{r-1}$ elements. Indeed the example suggests that we can split the tight frame into identical tight sub-frames $k$ times where $k = r-s$. We have the following elementary result.
\begin{theorem}[WF1]
\label{wft1}
Let $n=2^r$ for some $r \in {\mathbb N}$ and suppose $m \in {\mathbb N}$ with $m \leq n/2 = 2^{r-1}$. Let $W = [ \bfy_1,\ldots,\bfy_m ]/\sqrt{n} \in {\mathbb C}^{n \times m}$ be defined by the first $m$ columns of the Walsh matrix $Y_r \in {\mathbb C}^{n \times n}$ and let $V = [\bfv_1,\ldots,\bfv_n] = W^* \in {\mathbb C}^{m \times n}$. Then $VV^* = I_m$ and the Parseval frame for ${\mathbb C}^m$ defined by the columns of the matrix $V$ can be split into two identical tight frames for ${\mathbb C}^m$ defined by the columns of the matrices $V_1 = [\bfv_1,\ldots,\bfv_{n/2}]$ and $V_2 = [\bfv_{n/2+1},\ldots,\bfv_n]$ with $V_1{V_1}^* = V_2{V_2}^* = I_m/2$. $\hfill \Box$
\end{theorem}
\noindent{\bf Proof} \quad It follows from the recursive definition of the Walsh matrices
$$
Y_r = \left[ \begin{array}{rr}
Y_{r-1} & Y_{r-1} \\
Y_{r-1} & -Y_{r-1} \end{array} \right]
$$
that $V_1, V_2 \in {\mathbb C}^{m \times n/2}$ are sub-matrices of ${Y_{r-1}}^*/\sqrt{n} = Y_{r-1}/\sqrt{n}$ consisting in each case of the first $m$ rows. Hence they are identical. Each of the matrices $V_1, V_2$ has $m \leq n/2$ mutually orthogonal rows and each row has length $1/\sqrt{2}$. $\hfill \Box$ \bigskip
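The even split of Theorem WF1 can be observed numerically; the sketch below (an aside, with our helper name \verb+walsh_matrix+) takes $r = 4$, $m = 6 \leq n/2 = 8$ and checks that the two halves coincide and are tight with frame constant $1/2$.

```python
import numpy as np

def walsh_matrix(r):
    Y = np.array([[1]])
    for _ in range(r):
        Y = np.block([[Y, Y], [Y, -Y]])
    return Y

r, m = 4, 6                 # n = 16 and m <= n/2 = 8
n = 2 ** r
V = walsh_matrix(r)[:, :m].T / np.sqrt(n)   # m x n pre-frame operator
V1, V2 = V[:, : n // 2], V[:, n // 2 :]
assert np.array_equal(V1, V2)                   # the two halves are identical
assert np.allclose(V1 @ V1.T, np.eye(m) / 2)    # each half is tight, constant 1/2
```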
Although {\em redundancy} and the additional associated {\em flexibility} are useful ingredients in the use of frames \cite{cas2,chr1} the redundancy in the Parseval frames defined by Walsh matrices is simply repetition of individual vectors. If $m = 2^k < 2^r = n$ then each vector is repeated $2^{r - k}$ times. Thus an equal split is obvious. We will now restrict our attention to reduced Walsh frames with $n = 2^r$ and $n/2 < m \leq n$. If $m = n/2 + s$ then $s$ vectors from the $n/2$ individual vectors in the frame are repeated. A reduced Walsh frame cannot be evenly split. Define $Y \in {\mathbb C}^{n \times m}$ by deleting $n/2 - s$ arbitrarily-selected columns from the right-hand half of the Walsh matrix $Y_r$. Thus we have
$$
Y = \left[ \begin{array}{rrr|rrr}
\bfy_1 & \cdots & \bfy_{n/2} & \bfy_{k(1)} & \cdots & \bfy_{k(s)} \\ \hline
\bfy_1 & \cdots & \bfy_{n/2} & - \bfy_{k(1)} & \cdots & -\bfy_{k(s)} \end{array} \right]
$$
where $Y_{r-1} = [\bfy_1,\ldots,\bfy_{n/2}] \in {\mathbb C}^{(n/2) \times (n/2)}$ is the Walsh matrix of order $r-1$ and $1 \leq k(1) < \cdots < k(s) \leq n/2$ is an arbitrarily-selected subset of size $s$ from $\{1,\ldots,n/2\}$. If we define $W = Y/ \sqrt{n}$ and $V = W^*$ then we have
$$
VV^* = W^*W = I.
$$
Hence the columns of $V$ form a Parseval frame for ${\mathbb C}^m$ with $\|\bfv_j\| = \sqrt{m/n}$ for all $j=1,\ldots,n$. We wish to split the frame as evenly as possible. Let
\begin{eqnarray*}
Y_a & = & \left[ \begin{array}{rrr|rrr}
\bfy_1 & \cdots & \bfy_{n/2} & \bfy_{k(1)} & \cdots & \bfy_{k(s)} \end{array} \right] \in {\mathbb C}^{(n/2) \times m} \\
Y_b & = & \left[ \begin{array}{rrr|rrr}
\bfy_1 & \cdots & \bfy_{n/2} & - \bfy_{k(1)} & \cdots & -\bfy_{k(s)} \end{array} \right] \in {\mathbb C}^{(n/2) \times m}
\end{eqnarray*}
and define $W_a = Y_a / \sqrt{n}$, $W_b = Y_b / \sqrt{n}$, $V_a = {W_a}^*$ and $V_b = {W_b}^*$. We know that
$$
\bfy_j^* \bfy_k = \frac{n}{2}\, \delta_{jk} = \left\{ \begin{array}{ll}
\displaystyle \frac{n}{2} &\mbox{if}\ k = j \\
& \\
0 &\mbox{otherwise}. \end{array} \right.
$$
Therefore
\begin{eqnarray*}
{Y_a}^*Y_a & = & \left[ \begin{array}{c}
\left[ \begin{array}{c} \bfy_j^* \end{array} \right] \\
\vspace{-0.375cm} \\ \hline
\vspace{-0.375cm} \\
\left[ \begin{array}{c} \bfy_{k(p)}^*\end{array} \right] \end{array} \right] \left[ \rule{0cm}{0.4cm} \begin{array}{c|c}
\left[ \begin{array}{c} \bfy_k \end{array} \right] & \left[ \begin{array}{c} \bfy_{k(q)} \end{array} \right] \end{array} \right] \\
& = & \left[ \begin{array}{c|c}
\left[ \begin{array}{c} \bfy_j^* \bfy_k \end{array} \right] & \left[ \begin{array}{c} \bfy_j^* \bfy_{k(q)} \end{array} \right] \\
\vspace{-0.375cm} \\ \hline
\vspace{-0.375cm} & \\
\left[ \begin{array}{c} \bfy_{k(p)}^* \bfy_k \end{array} \right] & \left[ \begin{array}{c} \bfy_{k(p)}^* \bfy_{k(q)} \end{array} \right] \end{array} \right] \\
& = & \frac{n}{2} \left[ \begin{array}{c|c}
\left[ \begin{array}{c} \delta_{jk} \end{array} \right] & \left[ \begin{array}{c} \delta_{j k(q)} \end{array} \right] \\
\vspace{-0.375cm} \\ \hline
\vspace{-0.375cm} & \\
\left[ \begin{array}{c} \delta_{k(p)k} \end{array} \right] & \left[ \begin{array}{c} \delta_{k(p)k(q)} \end{array} \right] \end{array} \right] \\
& = & \frac{n}{2} \left[ \begin{array}{c|c}
I_{n/2} & \Delta \\
\vspace{-0.375cm} \\ \hline
\Delta^* & I_s \end{array} \right]
\end{eqnarray*}
where $\Delta = [ \bfe_{k(1)}, \ldots, \bfe_{k(s)} ] \in {\mathbb C}^{(n/2) \times s}$. A similar argument shows that
$$
{Y_b}^*Y_b = \frac{n}{2} \left[ \begin{array}{c|c}
I_{n/2} & - \Delta \\
\vspace{-0.35cm} \\ \hline
- \Delta^* & I_s \end{array} \right].
$$
Thus
$$
V_a{V_a}^* = \frac{1}{2} \left[ \begin{array}{c|c}
I_{n/2} & \Delta \\ \hline
\Delta^* & I_s \end{array} \right] \quad \mbox{and} \quad V_b{V_b}^* = \frac{1}{2} \left[ \begin{array}{c|c}
I_{n/2} & - \Delta \\ \hline
- \Delta^* & I_s \end{array} \right].
$$
Our chosen split is not even but is nevertheless the best possible. What is the discrepancy in this case? Let
$$
E_a = V_a{V_a}^* - I_m/2 = \frac{1}{2} \left[ \begin{array}{cc}
0 & \Delta \\
\Delta^* & 0 \end{array} \right]
$$
and
$$
E_b = V_b{V_b}^* - I_m/2 = \frac{1}{2} \left[ \begin{array}{cc}
0 & - \Delta \\
- \Delta^* & 0 \end{array} \right].
$$
We have $\Delta^* \Delta = I_s$ and $\Delta \Delta^* = U = [u_{pq}] \in {\mathbb C}^{(n/2) \times (n/2)}$ where $u_{pq} = 1$ if $p = q = k(t)$ for some $t \in \{1,\ldots,s\}$ and $u_{pq} = 0$ otherwise. Therefore
$$
E_a^2 = E_b^2 = \frac{1}{4} \left[ \begin{array}{cc}
U & 0 \\
0 & I_s \end{array} \right]
$$
and hence the eigenvalues of $E_a^2$ and $E_b^2$ are $\lambda_1 = 0$ with multiplicity $n/2 - s$ corresponding to the zero rows and columns and $\lambda_2 = 1/4$ with multiplicity $2s$ corresponding to the unit diagonal entries. Therefore $\|E_a^2\|_2 = \|E_b^2\|_2 = 1/4$ and hence $\|E_a\|_2 = \|E_b\|_2 = 1/2$.
Alternatively we can show that the eigenvalues of the real symmetric matrix $V_a{V_a}^*$ are $\lambda = 0, 1/2, 1$. We have
$$
\lambda I_m - V_a{V_a}^* = \left[ \begin{array}{cc}
(\lambda - 1/2)I_{n/2} & - \Delta/2 \\
- \Delta^*/2 & (\lambda - 1/2) I_s \end{array} \right].
$$
We can use left multiplication by elementary matrices to perform elementary row operations on the matrix $\lambda I_m - V_a{V_a}^*$ and thereby show that $\lambda = 0$ and $\lambda = 1$ are each eigenvalues of multiplicity $s$. For $\lambda = 0$ we can see that
$$
\left[ \begin{array}{cc}
I_{n/2} & 0 \\
- \Delta^* & I_s \end{array} \right] \left[ \begin{array}{cc}
- I_{n/2} & - \Delta \\
- \Delta^* & - I_s \end{array} \right] = \left[ \begin{array}{cc}
- I_{n/2} & - \Delta \\
0 & \Delta^* \Delta - I_s \end{array} \right] = \left[ \begin{array}{cc}
- I_{n/2} & - \Delta \\
0 & 0 \end{array} \right]
$$
and for $\lambda = 1$ we have
$$
\left[ \begin{array}{cc}
I_{n/2} & 0 \\
\Delta^* & I_s \end{array} \right] \left[ \begin{array}{cc}
I_{n/2} & - \Delta \\
- \Delta^* & I_s \end{array} \right] = \left[ \begin{array}{cc}
I_{n/2} & - \Delta \\
0 & - \Delta^* \Delta + I_s \end{array} \right] = \left[ \begin{array}{cc}
I_{n/2} & - \Delta \\
0 & 0 \end{array} \right].
$$
In each case the reduced matrix has rank $n/2$ and hence the eigenvalue has multiplicity $m - n/2 = s$. For $\lambda = 1/2$ the matrix
$$
I_m/2 - V_a{V_a}^* = \left[ \begin{array}{cc}
0 & - \Delta/2 \\
- \Delta^*/2 & 0 \end{array} \right]
$$
has rank $2s$ and so $\lambda = 1/2$ is an eigenvalue of multiplicity $m - 2s$. It follows that $\| V_a{V_a}^*\|_2 = \| {V_a}^*V_a \|_2 = 1$. A similar argument shows us that $\| V_b{V_b}^*\|_2 = \| {V_b}^*V_b \|_2 = 1$. If we use the Frobenius norm then
$$
\| V_a{V_a}^* - I_m/2 \|_F = \| V_b{V_b}^* - I_m/2 \|_F = \sqrt{ 2 \| \Delta/2\|_F^2} = \sqrt{s/2} = \sqrt{(m - n/2)/2}.
$$
This leads us to the following result.
\begin{theorem}[WF2]
\label{wft2}
Let $n=2^r$ for some $r \in {\mathbb N}$ and suppose $m \in {\mathbb N}$ with $2^{r-1} = n/2 \leq m < n = 2^r$. Let $Y_{r-1} = [\bfy_1,\ldots,\bfy_{n/2}] \in {\mathbb C}^{(n/2) \times (n/2)}$ be the Walsh matrix of order $r-1$ and define
$$
Y = \left[ \begin{array}{rrr|rrr}
\bfy_1 & \cdots & \bfy_{n/2} & \bfy_{k(1)} & \cdots & \bfy_{k(s)} \\ \hline
\bfy_1 & \cdots & \bfy_{n/2} & - \bfy_{k(1)} & \cdots & -\bfy_{k(s)} \end{array} \right] = \left[ \begin{array}{c}
Y_a \\ \hline
Y_b \end{array} \right] \in {\mathbb C}^{n \times m}
$$
where $s = m - n/2$ and $1 \leq k(1) < \cdots < k(s) \leq n/2$ is an arbitrarily selected subset of $\{1,\ldots,n/2\}$. Let $V = Y^*/\sqrt{n}$, $V_a = {Y_a}^*/\sqrt{n}$ and $V_b = {Y_b}^*/\sqrt{n}$. Then $VV^* = I$ and the split defined by $V = [V_a \mid V_b]$ gives
$$
V_a{V_a}^* - I/2 = \left[ \begin{array}{c|c}
0 & \Delta/2 \\ \hline
\Delta^*/2 & 0 \end{array} \right] \quad \mbox{and} \quad V_b{V_b}^* - I/2 = \left[ \begin{array}{c|c}
0 & - \Delta/2 \\ \hline
- \Delta^*/2 & 0 \end{array} \right]
$$
where $\Delta = [\bfe_{k(1)} \cdots \bfe_{k(s)}] \in {\mathbb C}^{(n/2) \times s}$. The error
$$
\|V_a{V_a}^* - I/2\|_2 = \|V_b{V_b}^* - I/2\|_2 = 1/2
$$
is the best possible. We also have $\| V_a{V_a}^*\|_2 = \|V_b{V_b}^*\|_2 = 1$. If we use the Frobenius norm then
$$
\|V_a{V_a}^* - I/2\|_F = \|V_b{V_b}^* - I/2\|_F = \sqrt{s/2} = \sqrt{(m - n/2)/2}
$$
and $\| V_a{V_a}^*\|_F = \|V_b{V_b}^*\|_F = \sqrt{(m + 2s)/4} = \sqrt{(3m-n)/4}$. $\hfill \Box$
\end{theorem}
We illustrate our results with an example.
\begin{example}
\label{ex2}
{\rm In the case where $m=6$ and $r = 3$ we have $n = 2^3 = 8$ with
$$
Y_2 = \left[ \begin{array}{cccc}
\bfy_1 & \bfy_2 & \bfy_3 & \bfy_4 \end{array} \right] = \left[ \begin{array}{rrrr}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 \end{array} \right].
$$
If we choose $\bfk = [ 1,\ 3 ]$ then we define
\begin{eqnarray*}
Y & = & \left[ \begin{array}{rrrr|rr}
\bfy_1 & \bfy_2 & \bfy_3 & \bfy_4 & \bfy_1 & \bfy_3 \\ \hline
\bfy_1 & \bfy_2 & \bfy_3 & \bfy_4 & -\bfy_1 & -\bfy_3 \end{array} \right] \\
& = & \left[ \begin{array}{rrrr|rr}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 \\ \hline
1 & 1 & 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 \end{array} \right]
\end{eqnarray*}
and $V = Y^*/(2 \sqrt{2})$. Now define
\begin{eqnarray*}
Y_a & = & \left[ \begin{array}{rrrr|rr}
\bfy_1 & \bfy_2 & \bfy_3 & \bfy_4 & \bfy_1 & \bfy_3 \end{array} \right] \\
& & \\
& = & \left[ \begin{array}{rrrr|rr}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & 1 \\
1 & 1 & -1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 \end{array} \right]
\end{eqnarray*}
and
\begin{eqnarray*}
Y_b & = & \left[ \begin{array}{rrrr|rr}
\bfy_1 & \bfy_2 & \bfy_3 & \bfy_4 & - \bfy_1 & - \bfy_3 \end{array} \right]
\\
& & \\
& = & \left[ \begin{array}{rrrr|rr}
1 & 1 & 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & -1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 \end{array} \right]
\end{eqnarray*}
and let $V_a = {Y_a}^*/(2\sqrt{2})$ and $V_b = {Y_b}^*/(2\sqrt{2})$. We have
$$
V_a{V_a}^* = \left[ \begin{array}{rrrr|rr}
\rule{0cm}{0.4cm} \frac{1}{2} & 0 & 0 & 0 & \frac{1}{2} & 0 \\
\rule{0cm}{0.4cm} 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
\rule{0cm}{0.4cm} 0 & 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} \\
\rule{0cm}{0.4cm} 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\
\vspace{-0.375cm} &&&& \\ \hline
\rule{0cm}{0.4cm} \frac{1}{2} & 0 & 0 & 0 & \frac{1}{2} & 0 \\
\rule{0cm}{0.4cm} 0 & 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array} \right] = \left[ \begin{array}{c|c}
I_4/2 & \Delta/2 \\ \hline
\Delta^*/2 & I_2/2 \end{array} \right],
$$
and
$$
V_b{V_b}^* = \left[ \begin{array}{rrrr|rr}
\rule{0cm}{0.4cm} \frac{1}{2} & 0 & 0 & 0 & -\frac{1}{2} & 0 \\
\rule{0cm}{0.4cm} 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
\rule{0cm}{0.4cm} 0 & 0 & \frac{1}{2} & 0 & 0 & -\frac{1}{2} \\
\rule{0cm}{0.4cm} 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\
\vspace{-0.375cm} &&&& \\ \hline
\rule{0cm}{0.4cm} -\frac{1}{2} & 0 & 0 & 0 & \frac{1}{2} & 0 \\
\rule{0cm}{0.4cm} 0 & 0 & -\frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array} \right] = \left[ \begin{array}{c|c}
I_4/2 & - \Delta/2 \\ \hline
- \Delta^*/2 & I_2/2 \end{array} \right],
$$
where
$$
\Delta = \left[ \begin{array}{rr}
1 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0 \end{array} \right].
$$
Note that $\Delta^*\Delta = I_2$ and
$$
U = \Delta \Delta^* = \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \end{array}\right].
$$
We have
$$
\| V_a{V_a}^* - I_6/2\|_2 = \| V_b{V_b}^* - I_6/2\|_2 = 1/2
$$
and $\| V_a{V_a}^* \|_2 = \| V_b{V_b}^*\|_2 = 1$. If we return to the idea that each matrix $S = VV^*$ is a sum of elementary rank $1$ matrices then we have $S = \sum_{j=1}^8 P_j$ where $P_j = \bfv_j {\bfv_j}^*$ for each $j=1,\ldots,8$. If we use the Frobenius norm then $\|P_j\|_F^2 = \|\bfv_j\|^4 = 9/16$ and
$$
\| V_a{V_a}^* - I_6/2\|_F = \| V_b{V_b}^* - I_6/2\|_F = 1.
$$
These calculations agree with the general results stated in WF2.} $\hfill \Box$
\end{example}
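The numbers in Example \ref{ex2} are easy to reproduce numerically; the following aside builds the reduced frame for $m = 6$, $n = 8$, $\bfk = [1,\ 3]$ and checks the spectral and Frobenius norms of the discrepancy.

```python
import numpy as np

# Reduced Walsh frame from Example 2: m = 6, n = 8, repeated columns k = (1, 3).
Y2 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
k = [0, 2]                        # zero-based indices of y_1 and y_3
Ya = np.hstack([Y2, Y2[:, k]])    # 4 x 6
Yb = np.hstack([Y2, -Y2[:, k]])
Y = np.vstack([Ya, Yb])           # 8 x 6
V = Y.T / np.sqrt(8)
Va = Ya.T / np.sqrt(8)
Vb = Yb.T / np.sqrt(8)
I6 = np.eye(6)
assert np.allclose(V @ V.T, I6)                      # Parseval frame
Ea = Va @ Va.T - I6 / 2
assert np.isclose(np.linalg.norm(Ea, 2), 0.5)        # spectral discrepancy 1/2
assert np.isclose(np.linalg.norm(Ea, 'fro'), 1.0)    # Frobenius sqrt(s/2) = 1
```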
\section{Parseval frames defined by vectors of equal length}
\label{ntfelv}
The quadratic forms defined by Walsh frames can be split exactly if $m \leq n/2 = 2^{r-1}$. If $n/2 < m < n = 2^r$ the quadratic forms can no longer be evenly split but there is an explicit description for the minimal discrepancy. An interesting question is whether these results are completely specific to Walsh frames or whether similar results apply to quadratic forms defined by other Parseval frames constructed from vectors of equal length. We begin by stating a well-known lemma.
\begin{lemma}
\label{frameconstant}
Let $m \in {\mathbb N}$ and $n = 2^r$ for some $r \in {\mathbb N}$. If $m \leq n$ and $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ defines a Parseval frame for ${\mathbb C}^m$ with $S = VV^* = I_m$ and $\|\bfv_j\|^2 = \alpha$ for each $j=1,\ldots,n$ then $\alpha = m/n$. $\hfill \Box$
\end{lemma}
Let $m \in {\mathbb N}$ with $m \leq n = 2^r$ for some $r \in {\mathbb N}$. Suppose that $V = [\bfv_1,\ldots,\bfv_n] \in {\mathbb C}^{m \times n}$ defines a Parseval frame with $\|\bfv_j\| = \sqrt{m/n}$ for each $j=1,\ldots,n$. Define $W = [\bfw_1,\ldots,\bfw_m] \in {\mathbb C}^{n \times m}$ by setting $W = V^*$. We have $W^*W = VV^* = I_m$ and so $\{\bfw_1,\ldots,\bfw_m\} \in {\mathbb C}^n$ is an orthonormal set. Let us extend this set to an orthonormal basis $\{\bfw_1,\ldots,\bfw_n\} \in {\mathbb C}^n$. Define the orthogonal matrix
$$
H = [H_1 \mid H_2 ] = [ \bfw_1,\ldots,\bfw_m \mid \bfw_{m+1},\ldots, \bfw_n] \in {\mathbb C}^{n \times n}
$$
where $H_1 = W \in {\mathbb C}^{n \times m}$ and $H_2 \in {\mathbb C}^{n \times (n-m)}$. Define $G = H^* \in {\mathbb C}^{n \times n}$. We can write
$$
G = \left[ \begin{array}{c}
G_1 \\ \hline
G_2 \end{array} \right] = \left[ \begin{array}{cccc}
\bfv_1 & \bfv_2 & \cdots & \bfv_n \\ \hline
\bfr_1 & \bfr_2 & \cdots & \bfr_n \end{array} \right]
$$
where $G_1 = V \in {\mathbb C}^{m \times n}$ and $G_2 = R = [\bfr_1,\ldots,\bfr_n] \in {\mathbb C}^{(n-m) \times n}$. If we define
$$
\bfg_j = \left[ \begin{array}{c}
\bfv_j \\
\bfr_j \end{array} \right]
$$
for each $j=1,\ldots,n$ then we can write $G = [\bfg_1,\ldots,\bfg_n]$. The matrix $G$ defines an orthonormal basis $\{\bfg_1,\ldots,\bfg_n\}$ for ${\mathbb C}^n$. The set $\{\bfg_1,\ldots,\bfg_n\}$ also defines an embedded Parseval frame for the $m$-dimensional subspace $S_m$ of ${\mathbb C}^n$ spanned by all vectors of the form
$$
\bfz = \left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right]
$$
where $\bfx \in {\mathbb C}^m$. Let $Y = [\bfy_1,\ldots,\bfy_n] \in {\mathbb C}^{n \times n}$ be the Walsh matrix of order $r$ and define $F = [\bff_1,\ldots,\bff_n] \in {\mathbb C}^{n \times n}$ by setting $F = Y/\sqrt{n} \in {\mathbb C}^{n \times n}$. We have $\bff_j = \bfy_j/\sqrt{n}$ for all $j=1,\ldots,n$. Note that the normalized Walsh matrix $F$ is real symmetric and orthogonal. Define an orthogonal matrix $P \in {\mathbb C}^{n \times n}$ by setting $P = F H$. Therefore $PG = F$ and hence $P \bfg_j = \bff_j$ for all $j=1,\ldots,n$. We will use the orthogonal matrix $P$ to change the coordinate representation for the embedded Parseval frame defined by $G$ into a representation defined by $F$. Thus the embedded frame now looks like a Walsh frame.
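The change of coordinates is easy to demonstrate numerically. In the aside below (our own names throughout) a random orthogonal matrix $H$ plays the role of the orthonormal extension of $W$; the identity $PG = FHH^* = F$ then holds automatically.

```python
import numpy as np

n = 4
# Normalized Walsh matrix F = Y_2 / 2 (real symmetric and orthogonal).
F = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0
rng = np.random.default_rng(0)
H, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal extension H
G = H.T                                            # G = H*
P = F @ H                                          # the rotation P = F H
assert np.allclose(P @ P.T, np.eye(n))             # P is orthogonal
assert np.allclose(P @ G, F)                       # P g_j = f_j for all j
```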
We saw earlier that we can represent vectors in $S_m$ using the embedded frame with
\begin{equation}
\label{genrep2}
\left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] = \sum_{j=1}^n ( {\bfg_j}^* \bfz) \bfg_j = \sum_{j=1}^n ( {\bfv_j}^* \bfx) \left[ \begin{array}{c}
\bfv_j \\
\bfr_j \end{array} \right]
\end{equation}
for all $\bfx \in {\mathbb C}^m$. To see this representation in the new coordinates we simply multiply both sides of (\ref{genrep2}) by $P$. Thus we have
\begin{equation}
\label{genrep3}
\bfy = P \left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] = \sum_{j=1}^n ( {\bfg_j}^* \bfz) P \bfg_j = \sum_{j=1}^n ( {\bfv_j}^* \bfx) \bff_j
\end{equation}
where we have the same coefficients yet again. Since $VV^* = I_m$ the quadratic form for the original frame is simply $$
s(\bfx) = \bfx^* \bfx = \sum_{j=1}^n |{\bfv_j}^* \bfx |^2 = \sum_{j=1}^n s_j(\bfx)
$$
where the $s_j(\bfx)$ are rank $1$ quadratic forms for all $j=1,\ldots,n$. Since $GG^* = I_n$ the corresponding embedded quadratic form is given by
$$
t(\bfz) = \bfz^* \bfz = \sum_{j=1}^n | {\bfg_j}^* \bfz |^2 = \sum_{j=1}^n t_j(\bfz)
$$
where
$$
\bfz = \left[ \begin{array}{c}
\bfx \\
\bfr \end{array} \right] \in {\mathbb C}^n
$$
and where the $t_j(\bfz)$ are rank $1$ quadratic forms. For quadratic forms on the subspace $S_m$ we have
$$
t \left( \left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] \right) = \bfx^* \bfx + \mbox{\boldmath $0$}^* \mbox{\boldmath $0$} = \sum_{j=1}^n | {\bfv_j}^* \bfx + {\bfr_j }^* \mbox{\boldmath $0$} |^2 = \sum_{j=1}^m | {\bfv_j}^* \bfx|^2 = \sum_{j=1}^m s_j(\bfx) = s(\bfx).
$$
In the new coordinates $\bfy = P^* \bfz \iff P \bfy = \bfz$ we can use (\ref{genrep3}) when $\bfr = \mbox{\boldmath $0$}$ to see that
$$
q(\bfy) = \bfy^* \bfy = \sum_{j=1}^n \sum_{k=1}^n \bff_j^*(\bfx^* \bfv_j)( {\bfv_k}^*\bfx) \bff_k = \sum_{j=1}^n |{\bfv_j}^* \bfx |^2 = s(\bfx).
$$
If we write
$$
P = \left[ \begin{array}{c}
P_1 \\
P_2 \end{array} \right]
$$
where $P_1 \in {\mathbb C}^{m \times n}$ and $P_2 \in {\mathbb C}^{(n-m) \times n}$ then the subspace $S_m$ defined by $\bfr = \mbox{\boldmath $0$}$ is defined in the new coordinates by $P_2 \bfy = \mbox{\boldmath $0$}$. Since $P$ is invertible we must have $\mbox{rank}(P_2) = n-m$. Thus we can use elementary row operations to eliminate $(n-m)$ variables and hence express $q(\bfy)$ as a sum of rank $1$ quadratic forms in $m$ variables $y_{k(1)},y_{k(2)},\ldots,y_{k(m)}$ where $1 \leq k(1) < k(2) < \cdots < k(m) \leq n$.
Although we have assumed $n=2^r$ throughout we shall see in the following example that this assumption is basically just a matter of convenience. If the columns of $V = [\bfv_1,\ldots,\bfv_k] \in {\mathbb C}^{m \times k}$ form a Parseval frame for ${\mathbb C}^m$ where $2^{r-1} = n/2 < k < n = 2^r$ and if we define $W = [\bfw_1,\ldots,\bfw_m] \in {\mathbb C}^{k \times m}$ by setting $W = V^*$ then the columns $\{\bfw_1,\ldots,\bfw_m\}$ form an orthonormal set in ${\mathbb C}^k$. We can embed these vectors in ${\mathbb C}^n$ by defining
$$
\bfh_j = \left[ \begin{array}{c}
\bfw_j \\
\mbox{\boldmath $0$} \end{array} \right] \in {\mathbb C}^n
$$
for each $j=1,\ldots,m$. Now $\{\bfh_1,\ldots,\bfh_m\}$ forms an orthonormal set in ${\mathbb C}^n$ which we can easily extend to an orthonormal basis $\{\bfh_1,\ldots,\bfh_n\} \in {\mathbb C}^n$. Define orthogonal matrices $H = [\bfh_1,\ldots,\bfh_n] \in {\mathbb C}^{n \times n}$ and $G = [\bfg_1,\ldots,\bfg_n] \in {\mathbb C}^{n \times n}$ by setting $G = H^*$. The vectors $\{\bfg_1,\ldots,\bfg_n\} \in {\mathbb C}^n$ now form a Parseval frame for the $m$-dimensional subspace $S_m$ defined by vectors in the form
$$
\bfz = \left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] \in {\mathbb C}^n
$$
for all $\bfx \in {\mathbb C}^m$. The frame defined by $G$ for the $m$-dimensional subspace is simply the frame defined by $V$ embedded into ${\mathbb C}^n$. We can write $G \cong V$.
We have argued that from within ${\mathbb C}^n$ all orthonormal bases look the same and that coordinate representation is essentially a matter of choice. Thus it is always possible to use the columns of a normalized Walsh matrix $F$ to represent the vectors of a Parseval frame defined by vectors of equal length. We illustrate our remarks by considering a particular example.
\begin{example}
\label{ex3}
{\rm Suppose we wish to find three vectors of equal length that form a tight frame in ${\mathbb C}^2$. If we define
$$
\bfv_1 = \left[ \begin{array}{cc}
a \\
\sqrt{t^2 - a^2} \end{array} \right], \quad \bfv_2 = \left[ \begin{array}{cc}
b \\
\sqrt{t^2 - b^2} \end{array} \right], \quad \bfv_3 = \left[ \begin{array}{cc}
c \\
\sqrt{t^2 - c^2} \end{array} \right]
$$
then $\| \bfv_j\| = |t|$ for each $j=1,2,3$. The condition for a Parseval frame is that
$$
\bfw_1 = \left[ \begin{array}{c}
a \\
b \\
c \end{array} \right] \quad \mbox{and} \quad \bfw_2 = \left[ \begin{array}{c}
\sqrt{t^2 - a^2} \\
\sqrt{t^2 - b^2} \\
\sqrt{t^2 - c^2} \end{array} \right]
$$
form an orthonormal set. Thus we require $a^2 + b^2 + c^2 = 1$, $(t^2 - a^2) + (t^2 - b^2) + (t^2 - c^2) = 1$ and $a \sqrt{t^2 - a^2} + b\sqrt{t^2-b^2} + c \sqrt{t^2 - c^2} = 0$. The first two equations yield $t = \sqrt{2/3}$ and the final equation then gives
$$
b^2 = \frac{1 - a^2 + \sqrt{ (1 - a^2)^2 - (1 - 2a^2)^2}}{2}.
$$
We can now find a solution by setting $a^2 = 3/5$. Thus we have
\begin{eqnarray*}
V & = & \left[ \begin{array}{ccc}
- \sqrt{15}/5 & (\sqrt{15} + \sqrt{5})/10 & - (\sqrt{15} - \sqrt{5})/10\\
\sqrt{15}/15 & (9\sqrt{5} - \sqrt{15})/30 & (9\sqrt{5} + \sqrt{15})/30 \end{array} \right] \\
& \approx & \left[ \begin{array}{rrr}
- 0.7746 & 0.6109 & - 0.1637\\
0.2582 & 0.5417 & 0.7999 \end{array} \right] \in {\mathbb C}^{2 \times 3}.
\end{eqnarray*}
If we define
$$
W = V^* \approx \left[ \begin{array}{cc}
- 0.7746 & 0.2582 \\
0.6109 & 0.5417 \\
- 0.1637 & 0.7999 \end{array} \right] \in {\mathbb C}^{3 \times 2}
$$
then we have $W^*W = I_2$. We can embed $\{\bfw_1,\bfw_2\}$ in ${\mathbb C}^4$ by writing
$$
\bfh_j = \left[ \begin{array}{c}
\bfw_j \\
0 \end{array} \right]
$$
for each $j=1,2$ and extend the set to an orthonormal basis $\{ \bfh_1, \bfh_2, \bfh_3, \bfh_4\} \in {\mathbb C}^4$ by adding two normalized orthogonal columns to give
$$
W \in {\mathbb C}^{3 \times 2} \rightarrow H = [ H_1 \mid H_2] \approx \left[ \begin{array}{rr|rr}
-0.7746 & 0.2582 & 0.5774 & 0 \\
0.6109 & 0.5417 & 0.5774 & 0 \\
-0.1637 & 0.7999 & -0.5774 & 0 \\
0 & 0 & 0 & 1 \end{array} \right] \in {\mathbb C}^{4 \times 4}
$$
where $H_1 \cong W$. Note that the subspace spanned by the additional columns is uniquely defined. Let $G = H^*$ so that
$$
V \in {\mathbb C}^{2 \times 3} \rightarrow G = \left[ \begin{array}{c}
G_1 \\ \hline
G_2 \end{array} \right] \approx \left[ \begin{array}{rrrr}
-0.7746 & 0.6109 & -0.1637 & 0 \\
0.2582 & 0.5417 & 0.7999 & 0 \\ \hline
0.5774 & 0.5774 & -0.5774 & 0 \\
0 & 0 & 0 &1 \end{array} \right] \in {\mathbb C}^{4 \times 4}
$$
where $G_1 = [ \bfv_1, \bfv_2, \bfv_3, \mbox{\boldmath $0$}] \in {\mathbb C}^{2 \times 4}$ and $G_2 = R = [\bfr_1,\bfr_2, \bfr_3, \bfr_4] \in {\mathbb C}^{2 \times 4}$. Clearly $G_1 \cong V$. The matrix $G$ represents the embedded frame as an orthonormal basis in ${\mathbb C}^4$. Let $Y = Y_2 \in {\mathbb C}^{4 \times 4}$ be the Walsh matrix of order $2$ and let $F = Y/2$ be the normalized Walsh matrix of order $2$. Since $H^*H = I_4$ there is an orthogonal matrix $P = F H$ such that $PG = F$. Thus, in appropriately chosen orthogonal coordinates, we have
$$
W \rightarrow H \cong \frac{1}{2} \left[ \begin{array}{rrrr}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 \end{array} \right] = F.
$$
In terms of the original frame this means that
$$
V \rightarrow G \cong \frac{1}{2} \left[ \begin{array}{rrr|r}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 \end{array} \right] = F^* = F
$$
since $F$ is real symmetric. The column vectors $\bfv_1, \bfv_2, \bfv_3$ of the original matrix $V$ form a Parseval frame for ${\mathbb C}^2$. The fundamental representation theorem tells us that an arbitrary vector $\bfx = \xi_1\bfe_1 + \xi_2 \bfe_2 \in {\mathbb C}^2$ written in the original coordinates as $\bxi \in {\mathbb C}^2$ can be represented relative to the Parseval frame in the standard form
$$
\bfx = \sum_{j=1}^3 \langle \bfv_j, \bfx \rangle \bfv_j = \sum_{j=1}^3 \eta_j \bfv_j
$$
with coordinates given by $\bfeta = V^* \bxi$. Thus, for instance, we have
\begin{equation}
\label{rep1}
\left[ \begin{array}{r}
1 \\
2 \end{array} \right] \approx -0.2582 \left[\! \begin{array}{r}
-0.7746 \\
0.2582 \end{array}\! \right] + 1.6943 \left[\! \begin{array}{r}
0.6109 \\
0.5417 \end{array}\! \right] + 1.4361 \left[\! \begin{array}{r}
-0.1637 \\
0.7999 \end{array}\! \right].
\end{equation}
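The numbers in (\ref{rep1}) can be checked directly. The following sketch (an editorial verification, not part of the original argument) uses the rounded matrix entries quoted above:

```python
import numpy as np

# Pre-frame matrix V (rounded entries from the text); columns are the frame vectors.
V = np.array([[-0.7746, 0.6109, -0.1637],
              [ 0.2582, 0.5417,  0.7999]])

# Parseval condition: V V^* = I_2, up to rounding of the printed entries.
assert np.allclose(V @ V.T, np.eye(2), atol=1e-3)

# Frame coefficients of x = (1, 2) are eta = V^* x.
x = np.array([1.0, 2.0])
eta = V.T @ x          # approx (-0.2582, 1.6943, 1.4361)

# Reconstruction: sum_j eta_j v_j recovers x.
assert np.allclose(V @ eta, x, atol=1e-3)
```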
When we embed the Parseval frame defined by $V$ into ${\mathbb C}^4$ and extend to the orthonormal basis in ${\mathbb C}^4$ defined by $G = [\bfg_1\ \bfg_2\ \bfg_3\ \bfg_4]$ then for each $\bfx = \xi_1 \bfe_1 + \xi_2 \bfe_2 \in {\mathbb C}^2$ we have
$$
\bfz = \left[ \begin{array}{c}
\bfx \\
\mbox{\boldmath $0$} \end{array} \right] = \sum_{j=1}^4 \langle \bfg_j, \bfz \rangle \bfg_j = \sum_{j=1}^3 \langle \bfv_j, \bfx \rangle \bfg_j = \sum_{j=1}^3 \eta_j \left[ \begin{array}{c}
\bfv_j \\
\bfr_j \end{array} \right]
$$
with coordinates $\eta_1,\eta_2, \eta_3$ given by $\bfeta = V^* \bxi$. Note that $\langle \bfg_4, \bfz \rangle = 0$ and $\sum_{j=1}^3 \eta_j \bfr_j = \mbox{\boldmath $0$}$. Thus we have
\begin{equation}
\label{rep2}
\left[\! \begin{array}{r}
1 \\
2 \\
0 \\
0 \end{array}\! \right] \approx -0.2582 \left[\! \begin{array}{r}
-0.7746 \\
0.2582 \\
0.5774 \\
0 \end{array}\! \right] + 1.6943 \left[\! \begin{array}{r}
0.6109 \\
0.5417 \\
0.5774 \\
0 \end{array}\! \right] + 1.4361 \left[\! \begin{array}{r}
-0.1637 \\
0.7999 \\
-0.5774 \\
0 \end{array}\! \right]
\end{equation}
which is essentially the same representation obtained in (\ref{rep1}) using the Parseval frame in ${\mathbb C}^2$. For convenience we will now use $\bfx = [x_j] \in {\mathbb C}^4$ to refer to the embedded coordinates and we will define new coordinates using the transformation $\bfy = P \bfx$ where
$$
P = FH \approx \left[ \begin{array}{rrrr}
-0.1637 & 0.7999 & 0.2887 & 0.5000 \\
-0.7746 & 0.2582 & -0.2887 & -0.5000 \\
0.0000 & -0.0000 & 0.8661 & -0.5000 \\
-0.6109 & -0.5417 & 0.2887 & 0.5000 \end{array} \right].
$$
If we multiply the previous representation (\ref{rep2}) on the left by $P$ we obtain
\begin{equation}
\label{rep3}
\left[ \begin{array}{r}
1.4361 \\
-0.2582 \\
-0.0000 \\
-1.6943 \end{array} \right] \approx -0.2582 \left[ \begin{array}{r}
0.5 \\
0.5 \\
0.5 \\
0.5 \end{array} \right] + 1.6943 \left[ \begin{array}{r}
0.5 \\
-0.5 \\
0.5 \\
-0.5 \end{array} \right] + 1.4361 \left[ \begin{array}{r}
0.5 \\
0.5 \\
-0.5 \\
-0.5 \end{array} \right]
\end{equation}
which is once again essentially the same representation. Note that the same numerical coefficients appear in each case. The columns of $F$ define a Parseval frame for the subspace with $y_3 = 0$ and $y_4 = -y_1 + y_2$. These conditions are easily obtained by putting $x_3 = x_4 = 0$ in the coordinate relationship $\bfx = P^*\bfy$.
Let us now consider the quadratic form defined by $V = [\bfv_1,\bfv_2,\bfv_3]$. The complete form
$$
s(x_1,x_2) = x_1^2 + x_2^2
$$
is made up as a sum of elementary rank $1$ quadratic forms defined by
$$
s_1(x_1,x_2) \approx ( -0.7746 x_1 + 0.2582 x_2)^2 \approx 0.6 x_1^2 - 0.4 x_1x_2 + 0.0667 x_2^2,
$$
$$
s_2(x_1,x_2) \approx (0.6109 x_1 + 0.5417 x_2)^2 \approx 0.3732 x_1^2 + 0.6618 x_1x_2 + 0.2934 x_2^2
$$
and
$$
s_3(x_1,x_2) \approx (-0.1637 x_1 + 0.7999 x_2)^2 \approx 0.0268 x_1^2 - 0.2619 x_1x_2 + 0.6398 x_2^2.
$$
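That the three rank $1$ forms sum to the complete form can again be verified from the rounded coefficients (an editorial check, not part of the original text):

```python
import numpy as np

# Coefficient triples (a, b, c) of a*x1^2 + b*x1*x2 + c*x2^2 for s_1, s_2, s_3.
s1 = np.array([0.6000, -0.4000, 0.0667])
s2 = np.array([0.3732,  0.6618, 0.2934])
s3 = np.array([0.0268, -0.2619, 0.6398])

total = s1 + s2 + s3   # should be (1, 0, 1), i.e. x1^2 + x2^2
assert np.allclose(total, [1.0, 0.0, 1.0], atol=2e-3)
```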
From the extended basis defined by $H$ and the associated extended matrix $G$ the original complete quadratic form could be seen as $s(x_1,x_2) = t(x_1,x_2,0,0)$ where the extended complete form
$$
t(x_1,x_2,x_3,x_4) = x_1^2 + x_2^2 + x_3^2 + x_4^2
$$
is made up as a sum of elementary rank $1$ extended quadratic forms
\begin{eqnarray*}
t_1(x_1,x_2,x_3,x_4) & \approx & (-0.7746 x_1 + 0.2582 x_2 + 0.5774 x_3)^2 \\
& \approx & 0.6 x_1^2 - 0.4 x_1x_2 - 0.8945 x_1x_3 + 0.0667 x_2^2 \\
& & \hspace{5cm} +\hspace{1mm} 0.2982 x_2x_3 + 0.3333 x_3^2,
\end{eqnarray*}
\begin{eqnarray*}
t_2(x_1,x_2,x_3,x_4) & \approx & (0.6109 x_1 + 0.5417 x_2 + 0.5774 x_3)^2 \\
& \approx & 0.3732 x_1^2 + 0.6618 x_1x_2 + 0.7055 x_1x_3 + 0.2934 x_2^2 \\
& & \hspace{5cm} +\hspace{1mm} 0.6256 x_2x_3 + 0.3333 x_3^2,
\end{eqnarray*}
\begin{eqnarray*}
t_3(x_1,x_2,x_3,x_4) & \approx & (-0.1637 x_1 + 0.7999 x_2 - 0.5774 x_3)^2 \\
& \approx & 0.0268 x_1^2 - 0.2619 x_1x_2 + 0.1890 x_1x_3 + 0.6398 x_2^2 \\
& & \hspace{5cm} -\hspace{1mm} 0.9237 x_2x_3 + 0.3333 x_3^2,
\end{eqnarray*}
and $t_4(x_1,x_2,x_3,x_4) = x_4^2$. When we transform to the new coordinates we define $q(y_1,y_2,y_3,y_4) = t(x_1,x_2,x_3,x_4)$. The transformed complete extended form
$$
q(y_1,y_2,y_3,y_4) = y_1^2 + y_2^2 + y_3^2 + y_4^2
$$
is made up as a sum of elementary rank $1$ transformed extended quadratic forms
\begin{eqnarray*}
q_1(y_1,y_2,y_3,y_4) & = & (0.5y_1 + 0.5y_2 + 0.5y_3 + 0.5y_4)^2 \\
& = & 0.25y_1^2 + 0.5y_1y_2 + 0.5y_1y_3 + 0.5y_1y_4 + 0.25y_2^2 \\
& & \hspace{2cm} + 0.5y_2y_3 + 0.5y_2y_4 + 0.25y_3^2 + 0.5y_3y_4 + 0.25 y_4^2,
\end{eqnarray*}
\begin{eqnarray*}
q_2(y_1,y_2,y_3,y_4) & = & (0.5y_1 - 0.5y_2 + 0.5y_3 - 0.5y_4)^2 \\
& = & 0.25y_1^2 - 0.5y_1y_2 + 0.5y_1y_3 - 0.5y_1y_4 + 0.25y_2^2 \\
& & \hspace{2cm} - 0.5y_2y_3 + 0.5y_2y_4 + 0.25y_3^2 - 0.5y_3y_4 + 0.25 y_4^2,
\end{eqnarray*}
\begin{eqnarray*}
q_3(y_1,y_2,y_3,y_4) & = & (0.5y_1 + 0.5y_2 - 0.5y_3 - 0.5y_4)^2 \\
& = & 0.25y_1^2 + 0.5y_1y_2 - 0.5y_1y_3 - 0.5y_1y_4 + 0.25y_2^2 \\
& & \hspace{2cm} - 0.5y_2y_3 - 0.5y_2y_4 + 0.25y_3^2 + 0.5y_3y_4 + 0.25 y_4^2
\end{eqnarray*}
and
\begin{eqnarray*}
q_4(y_1,y_2,y_3,y_4) & = & (0.5y_1 - 0.5y_2 - 0.5y_3 + 0.5y_4)^2 \\
& = & 0.25y_1^2 - 0.5y_1y_2 - 0.5y_1y_3 + 0.5y_1y_4 + 0.25y_2^2 \\
& & \hspace{2cm} + 0.5y_2y_3 - 0.5y_2y_4 + 0.25y_3^2 - 0.5y_3y_4 + 0.25 y_4^2.
\end{eqnarray*}
According to our splitting rule we take
\begin{eqnarray*}
q_a(y_1,y_2,y_3,y_4) & = & q_1(y_1,y_2,y_3,y_4) + q_2(y_1,y_2,y_3,y_4) \\
& = & 0.5y_1^2 + y_1y_3 + 0.5y_2^2 + y_2y_4 + 0.5y_3^2 + 0.5y_4^2
\end{eqnarray*}
and
\begin{eqnarray*}
q_b(y_1,y_2,y_3,y_4) & = & q_3(y_1,y_2,y_3,y_4) + q_4(y_1,y_2,y_3,y_4) \\
& = & 0.5y_1^2 - y_1y_3 + 0.5y_2^2 - y_2y_4 + 0.5y_3^2 + 0.5y_4^2.
\end{eqnarray*}
Now the conditions $x_3 = x_4 = 0$ are equivalent to $y_3 = 0$ and $y_4 = -y_1 + y_2$. Thus the condition $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1 \iff y_1^2 + y_2^2 + y_3^2 + y_4^2 = 1$ can be rewritten as $2y_1^2 + 2y_2^2 - 2y_1y_2 = 1$. Hence our original extended quadratic form is given in terms of the two component transformed extended quadratic forms by
\begin{eqnarray*}
t(x_1,x_2,0,0) & = & q(y_1,y_2,0,-y_1+y_2) \\
& = & q_a(y_1,y_2,0,-y_1+y_2) + q_b(y_1,y_2,0,-y_1+y_2) \\
& = & \left( y_1^2 + 2y_2^2 - 2y_1y_2 \right) + y_1^2 \\
& = & \left( 1 - y_1^2 \right) + y_1^2.
\end{eqnarray*}
We have $|1/2- (1 - y_1^2)| = |1/2 - y_1^2| \leq 1/2$ and so the discrepancy is at most $1/2$. Since
$$
y_1 \approx -0.1637x_1 + 0.7999 x_2 \quad \mbox{and} \quad y_2 \approx -0.7746 x_1 + 0.2582 x_2
$$
and since $x_1^2 + x_2^2 = 1$ we have
\begin{eqnarray*}
t(x_1,x_2,0,0) & \approx & \left( 0.9732 x_1^2 +0.2618 x_1x_2 + 0.3601x_2^2 \right) \\
& & \hspace{3cm} + \left( 0.0268 x_1^2 - 0.2619 x_1x_2 + 0.6398 x_2^2 \right).
\end{eqnarray*}
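The low-discrepancy splitting can be checked numerically over the constrained subspace, using the rounded entries of $P$ quoted above (an editorial sanity check, not part of the original derivation):

```python
import numpy as np

def qa(y):
    # q_1 + q_2 as expanded in the text
    y1, y2, y3, y4 = y
    return 0.5*y1**2 + y1*y3 + 0.5*y2**2 + y2*y4 + 0.5*y3**2 + 0.5*y4**2

def qb(y):
    # q_3 + q_4 as expanded in the text
    y1, y2, y3, y4 = y
    return 0.5*y1**2 - y1*y3 + 0.5*y2**2 - y2*y4 + 0.5*y3**2 + 0.5*y4**2

rng = np.random.default_rng(0)
for _ in range(1000):
    # random unit vector (x1, x2), mapped to the constrained coordinates
    t = rng.uniform(0.0, 2.0*np.pi)
    x1, x2 = np.cos(t), np.sin(t)
    y1 = -0.1637*x1 + 0.7999*x2      # rounded first row of P
    y2 = -0.7746*x1 + 0.2582*x2      # rounded second row of P
    y = (y1, y2, 0.0, -y1 + y2)
    assert abs(qa(y) + qb(y) - 1.0) < 2e-3   # the split recovers the full form
    assert abs(qb(y) - y1**2) < 1e-9         # q_b reduces to y1^2 exactly
    assert abs(0.5 - qb(y)) <= 0.5 + 2e-3    # discrepancy at most 1/2
```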
In this example we considered a Parseval frame defined by a pre-frame matrix operator $V = [\bfv_1,\ldots,\bfv_k] \in {\mathbb C}^{m \times k}$ where $m \leq k$ and $2^{r-1} < k < n = 2 ^r$ and where all $k$ frame vectors have length $\sqrt{m/k}$. We showed that the orthonormal set defined by the columns of $W = V^* \in {\mathbb C}^{k \times m}$ can be embedded into ${\mathbb C}^n$ and extended to an orthonormal basis for ${\mathbb C}^n$ defined by a matrix $H \in {\mathbb C}^{n \times n}$. We then used the columns of the orthogonal matrix $G = H^*$ to define a Parseval frame for an $m$-dimensional subspace $S_m$ of ${\mathbb C}^n$. Finally we defined an orthogonal matrix $P = FG^*$ to transform the embedded frame defined by $G$ into an embedded normalized Walsh frame defined by $F = PG \in {\mathbb C}^{n \times n}$. Thus we obtained a coordinate representation of the embedded frame using the columns of a normalized Walsh matrix. Subsequently we argued that this transformation makes no essential difference to vector representation in the frame but does provide a plausible rationale for a low discrepancy splitting of the quadratic form.} $\hfill \Box$
\end{example}
\section{Conclusions and future work}
We have argued that Parseval frames defined by vectors of equal length in finite-dimensional Euclidean space can be represented in coordinate form using the columns of a normalized Walsh matrix. We have supported our arguments by discussing the representation of individual vectors and by finding some general results about optimal splitting of the corresponding quadratic forms.
Although the results in this paper are not directly linked to our current research into inversion of perturbed linear operators on Banach space there is a basic philosophical connection in the following sense. Joel Anderson reduced the seemingly intractable infinite-dimensional Kadison--Singer problem (KSP) to an equivalent finite-dimensional problem \cite{and1} which was subsequently reformulated \cite{wea1} and eventually solved \cite{mar1,mar2} using a basic discrepancy theorem for quadratic forms defined by finite-dimensional frames. We have shown recently that solution of the fundamental equations for inversion of perturbed linear operators on infinite-dimensional Banach space \cite{alb1, how1} is necessary and sufficient for existence of an analytic resolvent. However there is no known systematic method for solving the fundamental equations in an infinite-dimensional setting. We would like to know if solution of the fundamental equations could be reduced to a finite-dimensional problem using Schauder frames \cite{cas1}.
\section{Acknowledgements}
\label{ack}
This research is funded by the Australian Research Council Discovery Grant DP 160101236 held by Phil Howlett, Amie Albrecht, Jerzy Filar and Konstantin Avrachenkov. Geetika Verma is employed by the project as a Research Associate. The authors would like to thank Dr Lalit Vashisht for helpful advice about preparation of the manuscript.
\section{Introduction}
In the context of Active Galactic Nuclei (AGN), outflows from the central region are commonly detected as absorption lines in several bands. In the UV/optical range, they are present in $\sim$70\% of type 1 AGN, with extensions up to kpc scales and velocities up to $\sim$1000 km/s (\citealt{Harrison}). A similar fraction ($\sim$60\%) has been found in the quasar (QSO) population (\citealt{Gan}).
In the X-ray band, more highly ionized outflows are detected as Ultra Fast Outflows (UFO) both in radio quiet (\citealt{Pounds}) and radio loud AGN (\citealt{Tombesi14}) with much higher velocities, in the range 0.03-0.4c.
A feedback effect of outflows on the host galaxy has been demonstrated by several authors in past years (e.g. \citealt{Feruglio,Wang,Sturm}), and outflows have more recently been shown to hamper star formation (\citealt{Tombesi}).
BAL QSOs are among the objects presenting the fastest outflows. These are detected in about 15\% of QSOs as broad absorption troughs in the UV spectrum, on the blue side of emission lines from ionised species, mainly C\,{\sc iv}~ and Mg\,{\sc ii}~. They can be either detached from or superimposed on the emission peak, and reach relativistic velocities of up to 0.2c (\citealt{Hewett}). \cite{Allen} found a redshift dependence of the BAL fraction, which decreases by a factor of 3.5$\pm$0.4 from z$\sim$4 to $\sim$2.
The mechanism at the origin of these violent outflows has not been unveiled yet. The two main scenarios discussed in the literature tend to ascribe the BAL phenomenon to: 1) young objects, in which the strong nuclear starburst activity is still expelling a dust cocoon (\citealt{Briggs, Sanders, Farrah}), or 2) normal QSOs, whose outflows intercept the line of sight of the observer (\citealt{Elvis}). In this case, relativistic outflows are supposed to be commonly present in QSOs, but detected only when orientation is favourable. Variability of the BAL troughs has been explored by many authors in the past years, thanks to the increasing amount of available spectroscopic surveys data (\citealt{Gibson08,Gibson10,Capellupo11,Capellupo12,Vivek12}), and a typical duty cycle of about a thousand years for the BAL-producing outflow has been found (\citealt{Filiz12,Filiz13}).
Several works have been published in recent years, collecting information across the different electromagnetic bands. In particular, emission in the radio band has been used to probe the orientation and age of these objects (\citealt{Montenegro,DiPompeo1,Bruni,Bruni2}).
No clear hints of a favoured scenario arose from these works, resulting in indications of different possible orientations and different ages for radio loud (RL) BAL QSOs. In this work, we present follow-up observations of sources from our previously studied sample (\citealt{Bruni}). We explored the emission properties at m- and mm-wavelengths, to complement the multi-wavelength view of these objects, already studied at cm-band in our previous work.
The detection of strong MHz emission can be safely interpreted as the presence of old extended radio plasma connected to a former AGN radio-active phase, which can be as old as 10$^7$-10$^8$ years (\citealt{Konar06,Konar13}). It has indeed been shown that jets in radio loud AGN can have multiple phases of activity (\citealt{Lara,Schoen,Saikia,Nandi}) with a duty cycle that depends on the source radio power (\citealt{Best2,Shabala}). Hints of these components in BAL QSOs were already found from literature data, for the same sample studied in this paper, and presented in \cite{Bruni}. This kind of emission adds a significant piece of information in the framework of the presented models for BAL QSOs. In particular, if the young scenario were the most realistic one, no further components than the one peaking in the MHz-GHz range should be present, since GigaHertz-Peaked Spectrum sources (GPS) and Compact Steep Spectrum sources (CSS, \citealt{ODea}), together with High Frequency Peakers (HFP, \citealt{Dallacasa}), are among the youngest radio sources. To date, systematic searches for diffuse emission around GPS and CSS sources yield detection rates of about 20\% (\citealt{Stan1,Stan2}). In light of this, we use low-frequency GMRT observations to probe possible extended emission around BAL QSOs, with the aim of confirming or discarding the youth scenario described above. The radio phase itself does not seem to introduce significant differences in RL BAL QSOs with respect to radio quiet (RQ) ones (\citealt{Bruni3,Rochais}), thus being a valid tool to study the general phenomenology of these objects.
The continuum emission of dust, in the rest-frame far-infrared domain (FIR), can be detected at mm-wavelengths (over 100 GHz) for objects with z$\sim$2. Objects enshrouded by gas and dust can host star-formation regions (\citealt{Zahid}), and thus show high star-formation rates, that may indicate a young age of the galaxy. A flux density excess in the FIR could be an indicator of a different age for BAL QSOs with respect to the non-BAL QSO population, and thus help in discriminating between the orientation and the evolutionary models. There are two major works presenting (sub)mm observations on samples of BAL QSOs. \cite{Willott} showed SCUBA measurements on a sample of 30 radio-quiet
BAL QSOs and conclude that there is no difference between BAL QSOs and a
comparison sample of non-BAL QSOs. Nevertheless, \cite{Priddey} based on SCUBA observations of 15 BAL QSOs found tentative evidence for a dependence of submm flux densities on the equivalent width of the characteristic C\,{\sc iv}~ BAL which \emph{`suggests that the BAL phenomenon is not a simple geometric effect [\dots] but that other variables, such as evolutionary phase, [\ldots] must be invoked'}. \cite{Cao} discuss the far-infrared properties and star-formation rates (SFR) of BAL QSOs (without distinguishing between RL and RQ ones), using data from the {\emph{Herschel}}-ATLAS project. They found no differences with respect to non-BAL QSOs, concluding that a scenario in which BAL QSOs are objects expelling a dust cocoon is improbable.
The main difference of the above samples compared to our target sample is
the radio-loudness of our sources.
With the radio-data presented in \cite{Bruni} we were able to characterise the synchrotron
spectra of our sources, and thus to study the peak frequency and spectral index distributions with respect to the `normal' QSO population. An upper limit on the synchrotron emission at mm-wavelengths can also be placed.
A search for HFP, i.e. the
youngest known radio sources with the highest turnover frequencies, shows
only a very small percentage of sources with peak frequencies close to 20 GHz
(\citealt{Dallacasa}) with the most extreme case at 25 GHz, leading to a
formal age of some 50 years only (\citealt{Orienti}).
As any upturn towards an even higher peak frequency would be visible in our Spectral Energy Distributions
(SEDs) we can safely assume that the extrapolated synchrotron emission
reflects its true contribution at 250 GHz and that any observed excess emission
can be attributed to the presence of cold dust. Moreover, the variability study presented in \cite{Bruni} excludes any possible significant variability even at high frequencies over a 3 years time scale, for this sample of objects.
The outline of the paper is as follows: in Sect.~\ref{sec:sample} we describe the BAL QSO sample. The radio observations are reported in Sect.~\ref{sec:observations}. In Sect.~\ref{sec:results} we present the results concerning morphology at MHz frequencies and dust abundance. Sect.~\ref{sec:discussion} is a discussion of the results, in the context of recent works about BAL QSOs.
The cosmology adopted throughout the work assumes a flat universe and the following parameters: $H_0=71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.73$, $\Omega_{M}=0.27$.
\section{The RL BAL QSO sample}
\label{sec:sample}
The radio-loud BAL QSO sample studied in this paper is presented in \cite{Bruni}. All sources were chosen among objects from the 4th edition of SDSS Quasar Catalogue (\citealt{Schneider07}), drawn from the 5th data release of the Sloan Digital Sky Survey (SDSS-DR5; \citealt{Adelman}). To select radio-loud objects, we cross-matched the SDSS with the FIRST survey (Faint Images of the Radio Sky at Twenty-cm; \citealt{Becker2}) and only those with a counterpart lying $<$2 arcsec away and having $S_{1.4\rm{GHz}}$ $>$ 30 mJy were considered. All of these satisfy the radio-loudness definition by \cite{Stocke}. Moreover, the selection has been limited to those objects whose redshifts lie in the range 1.7 $<z<$ 4.7, allowing the identification of both C\,{\sc iv}~ and Mg\,{\sc ii}~ absorption features in SDSS spectra. In order to select genuine BAL QSOs, only objects with an Absorption Index (AI) $>$ 100 $\rm{km/s}$ were considered, and only troughs broader than 1000 $\rm{km/s}$ were used for this calculation\footnote{i.e. we adopted an AI defined as ${\rm{AI}}=\int_{0}^{25000}(1-\frac{f(v)}{0.9})\cdot Cdv$, as in \citealt{Hall}, but with $C$=1 only for contiguous troughs $\ge$ 1000 km/s, and null otherwise.}. This resulted in 25 radio-loud BAL QSOs. For a complete description of the sample and the selection procedure refer to \cite{Bruni}.
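As an illustration, the modified absorption index defined in the footnote can be computed from a continuum-normalized spectrum as sketched below. This is a schematic editorial example, not the actual selection pipeline: the velocity grid and trough bookkeeping are illustrative.

```python
import numpy as np

def absorption_index(v, f, v_max=25000.0, min_width=1000.0):
    """Modified AI: integrate (1 - f(v)/0.9) over contiguous troughs with
    f(v) < 0.9 that are at least min_width km/s wide.

    v : velocity grid in km/s (increasing, blueward of the emission line)
    f : continuum-normalized flux on that grid
    """
    keep = (v >= 0) & (v <= v_max)
    v, f = v[keep], f[keep]
    absorbed = np.append(f < 0.9, False)   # sentinel closes a trailing trough
    ai, start = 0.0, None
    for i, flag in enumerate(absorbed):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if v[i - 1] - v[start] >= min_width:   # trough wide enough
                g = 1.0 - f[start:i] / 0.9
                ai += np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(v[start:i]))
            start = None
    return ai
```

A flat trough of depth $f=0.45$ spanning 2000 km/s, for example, contributes $0.5 \times 2000 = 1000$ km/s to the AI, while troughs narrower than 1000 km/s are ignored.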
%
\begin{table*}[]
\caption{Summary of the observations and setups presented in this paper. IRAM and APEX telescopes made use of bolometer receivers.}
\begin{center}
\scalebox{0.92}{
\begin{tabular}{clccccc}
\hline
\hline
Run & Date & Telescope & Frequency & Bandwidth & FWHM & Number of sources\\
&&& (GHz) & (MHz) & (arcsec) & \\
\hline
1& 9-11 Jan 2010 & GMRT & 0.235, 0.610 & 33 & 15-45, 4-10 & 5 \\
2& 26 Oct-23 Nov 2010 & IRAM-30m & 250 & - & 11 & 11 \\
3& 05-22 Aug 2010 & APEX & 850 & - & 19, 8 & 4 \\
4& 01-09 Sep, 05 Nov 2011 & APEX & 345, 850 & - & 19, 8 & 2 \\
5& 04-05 Jun, 13-14 Aug 2012 & APEX & 850 & - & 19, 8 & 3 \\
\hline
\end{tabular}}
\label{Summaryofobservations}
\end{center}
\end{table*}
\section{Radio observations and data reduction}
\label{sec:observations}
In this paper, we present observations complementary to the ones performed by \cite{Bruni} at cm wavelengths. The GMRT, the APEX single-dish and the IRAM 30-m telescope were used to extend the frequency coverage of the available SEDs.
Table ~\ref{Summaryofobservations} summarises the different runs and observing setups.
\subsection{Giant Metrewave Radio Telescope}
Observations at frequencies of 235 MHz and 610 MHz with the GMRT were performed for
five sources, during January 2010. We used the double frequency mode to observe
simultaneously at the two frequencies. The total bandwidth
for each band was 33 MHz, divided into 256 channels of 0.13 MHz each.
We observed in snapshot mode to improve the UV coverage for the sources.
Standard phase and amplitude calibration were performed, using 3C 286 as
primary calibrator every $\sim$4 hours and suitable phase calibrators near targets
every $\sim$30 minutes. Correlation was performed using the GSB software correlator at NCRA.
Data were reduced with the AIPS\footnote{http://www.aips.nrao.edu/index.shtml} package, using the standard procedures.
Flux densities were extracted from images via Gaussian fit of the components, using task JMFIT inside AIPS.
\subsection{IRAM-30m single dish}
We could observe 11 sources of the BAL QSO sample at 250 GHz with the IRAM-30m telescope, during the 2010 summer pool session. We used the MAMBO2 117-pixel bolometer in ON/OFF mode, since all of our sources are point-like for this telescope (HPBW=11 arcsec).
With average atmospheric conditions the detector could reach a noise of $\sim$1 mJy/beam in $\sim$40 min of observing time: we observed each source for this duration in order to obtain a detection or a 3-$\sigma$ upper limit.
Skydip, calibration and pointing scans were regularly performed during the runs and every time the observing direction in the sky significantly changed in elevation. Focus was repeated at sunrise and sunset. A standard reduction was done using the MOPSIC\footnote{http://www.iram.es/IRAMES/mainWiki/CookbookMopsic} script provided by IRAM.
\subsection{APEX}
From 2010 to 2012 we observed with APEX a total of 9 southern sources from the BAL QSO sample.
All of them were observed with the SABOCA bolometer array (\citealt{Siringo2})
at 850 GHz in photometry mode (HPBW $\sim$8 arcsec) and two of the sources (0044+00 and 1404+07) also
in mapping mode.
In addition, one of the sources (0044+00) was observed with the LABOCA bolometer array
(\citealt{Siringo1}) at 345 GHz in photometry mode (HPBW $\sim$19 arcsec).
APEX observations were carried out in service mode, with typical integration times
of $\sim$1 h per source, in order to reach RMS values around 20 mJy/beam. Calibration was based on
observations of primary calibrators (Mars and Uranus) as well as skydips measured at the same
azimuth of the targets to derive atmospheric opacity. A standard reduction was done using the
version 2.15-1 of the CRUSH\footnote{http://www.submm.caltech.edu/~sharc/crush/} software which offers an improved pipeline for photometric data as compared to earlier versions.
\subsection{Error determination}
In the flux density error calculation, different contributions were
considered for the GMRT interferometric observations:
\begin{itemize}
\item The thermal noise, $\Delta S _{\rm{noise}}$, which is estimated from the map, in empty regions of sky surrounding the target;\\
\item The fractional calibration error, $\Delta S_{\rm{calib}}$, estimated as the visibilities dispersion of the flux density calibrators;\\
\end{itemize}
In particular, we followed the approach proposed by Klein et al. (2003). The expression used is reported below:
\begin{equation}
\Delta I=\sqrt{(\Delta S_{\rm{calib}}\cdot S)^2 +(\Delta S _{\rm{noise}})^2 \cdot \frac{A_{\rm{src}}}{A_{\rm{beam}}}},
\end{equation}
where $A_{\rm{beam}}$ and $A_{\rm{src}}$ are respectively the area of the synthesised beam and the aperture used to extract the source flux density. From their ratio we determine the number of beams contained in the source.
For APEX and IRAM-30m data, obtained with bolometer receivers, the error was calculated using the respective packages for data reduction, estimating the noise from off-source subscans.
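The combined error expression above can be transcribed directly; the following helper (variable names are ours) takes the fractional calibration error, the map noise, and the aperture and beam areas:

```python
import math

def flux_density_error(S, calib_frac, noise_rms, area_src, area_beam):
    """Combined flux-density error: a calibration term scaling with S plus a
    noise term scaling with the number of beams in the extraction aperture."""
    return math.sqrt((calib_frac * S)**2 + noise_rms**2 * (area_src / area_beam))
```

For instance, a 100 mJy source with a 3\% calibration error and 1 mJy/beam noise over an aperture of 4 beams has a combined error of $\sqrt{9 + 4} \approx 3.6$ mJy.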
\subsection{The polarimetry campaign}
During 2011, we conducted a polarimetry campaign on this same sample using the EVLA and the Effelsberg-100m single dish, to complement the data presented in \cite{Bruni} and probe with deeper observations the polarisation of the faintest sources. The results from this campaign will be presented in a future paper, but we decided to use part of the obtained total flux density measurements to improve the SED coverage of this work (see Tab. \ref{errata}). Moreover, one of the sources observed with the EVLA turns out to have a resolved structure (0849+27), and we present its map here (see Sect. \ref{sec:morphology}). This same source was also observed during our mm campaign (see Tab. \ref{dust_flux}). Observations and data reduction were conducted as in \cite{Bruni}.
\begin{table}
\centering
\caption{Top lines: revised flux densities for sources presented in \cite{Bruni} and in this work. Bottom lines: flux densities from our polarimetry campaign, used for this work.}
\begin{tabular}{cccc}
\hline
\hline
ID & Frequency & S & Telescope \\
(J2000) & [GHz] & [mJy] & \\
\hline
0756+37 & 43 & 5.2$\pm$0.8 & VLA \\
0816+48 & 1.4 & 70.9$\pm$0.7 & VLA \\
1335+02 & 43 & 8.4$\pm$1.3 & VLA \\
\hline
0842+06 & 8.35 & 19.5$\pm$0.8 & Effelsberg-100m \\
0849+27 & 4.86 & 27.6$\pm$0.7 & EVLA \\
\hline
\end{tabular}
\label{errata}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=18cm]{./0849_multi.pdf}
\caption{Maps of BAL QSO 0849+27 from the FIRST survey (left panel - 1.4 GHz, beam 5.40$\times$5.40 arcsec), our EVLA observations (central panel - 4.86 GHz, beam 1.95$\times$1.25 arcsec), and our VLBA observations (right panel - 5 GHz, beam 2.99$\times$1.15 mas). Contours are multiples of 3-$\sigma$, according to label. Dashed contours are negative. The synthesised beam size is shown in the lower left corner of each map. Note that the scale of the right panel is in mas.}
\label{0849}
\end{center}
\end{figure*}
\section{Results}
\label{sec:results}
In the following, we present the results from our observing campaign: the morphology could be studied with the GMRT interferometer, while images from the IRAM and APEX bolometers were used for photometric measurements. The collected information is presented in Tab. \ref{GMRT_size}, while SEDs, including flux densities at cm wavelengths from \cite{Bruni} and our polarimetry campaign, are presented in Fig. \ref{dust} and discussed in Sect. \ref{sec:SED_section}.
\subsection{Morphology}
\label{sec:morphology}
From the GMRT and EVLA maps we were able to investigate the morphologies of the sources at arcsec scales.
The frequency range explored with the GMRT allowed us to place some constraints on
the presence of extended, old radio components.
\subsubsection*{GMRT maps}
We observed with the GMRT the five sources from our BAL QSO sample showing the strongest low-frequency
emission in the flux densities collected from archival surveys data. The goal was the detection of extended emission at 235 or 610 MHz, indicating a previous radio-activity period of the central AGN, and thus putting a constraint on the age. Maps show components with deconvolved dimensions greater than zero in most cases, corresponding to a fraction of the beam (see Fig. \ref{GMRT}).
One source (1159+01) presents an elongated structure at 235 MHz, confirmed by the detection of a second component (B) at higher resolution in the 610 MHz map. This structure is compatible with the one seen in the pc-scale maps obtained by \cite{Tak}, where a jet extension at a comparable position angle is visible. This could confirm the presence of two different radio phases, at different scales, as also highlighted by the SED of this object (see Sec. \ref{sec:SED_section}). Quantities for all sources at the two frequencies are presented in Table \ref{GMRT_size}, together with projected linear sizes.
These measurements confirm the presence of a low-frequency, older radio component in some BAL QSOs,
thus excluding that they are a subclass of young radio objects. The size of these components is significantly larger
than the values of a few kpc measured for the high-frequency, unresolved, counterparts (see \citealt{Bruni}), thus suggesting different emitting regions for the two. The flux densities found nicely fit with collected data from surveys (see Fig. \ref{dust}).
\begin{figure*}
\begin{center}
\includegraphics[width=18cm]{./GMRT.pdf}
\caption{Maps of 5 BAL QSOs observed with the GMRT at 235 and 610 MHz. Contours are multiples of 3-$\sigma$, according to the label. Dashed contours are negative. The synthesised beam size is shown in the lower left corner of each map.}
\label{GMRT}
\end{center}
\end{figure*}
\begin{table*}[]
\renewcommand\tabcolsep{3.0pt}
\caption{Information collected with the GMRT interferometer: Cols. 2 and 8 give the flux densities from the maps; Cols. 3-6 and 9-12 give the deconvolved major and minor axes at 235 and 610 MHz, respectively. The projected linear size in kpc is also given. When a null deconvolved size was found, we considered the corresponding beam size as an upper limit. For the only resolved source (1159+01) we give values for both components.}
\label{GMRT_size}
\centering
\begin{tabular}{cccccccccccc}
\hline
\hline
& \multicolumn{5}{c}{235 MHz}
& \multicolumn{5}{c}{610 MHz} \\
ID & $S$ &Maj. axis & Min. axis & Maj axis & Min. axis& Component& $S$ &Maj. axis & Min. axis & Maj axis & Min. axis\\
& (mJy) & (arcsec) & (arcsec) & (kpc) & (kpc) && (mJy) & (arcsec) & (arcsec) & (kpc) & (kpc) \\
\hline
0756+37 & $<$85.2 & - & - & - & - & &112$\pm$4 & 3.9&2.2&32.4&18.3\\
1159+01 & 720$\pm$65 & 19.6 &7.0 &164~~~~ &58.4 &A &~~312$\pm$10 &4.6&2.3&39.5&19.8\\
& & & & & &B &~~74.8$\pm$4.2 &10.2~~~&0.9&86.3&~~7.6\\
1159+06 & 872$\pm$81 & ~~6.9 &$<$10.1 &58.7 &$<$85.9 & &163$\pm$5 &6.7&2.4&57.0&20.4\\
1406+34 & 249$\pm$26 & 14.8 &$<$11.0 &122~~~~ &$<$90.9 & &168$\pm$5 &3.9&1.1&32.2&~~9.1\\
1624+37 & 104$\pm$16 & 11.1 &$<$11.0 &84.9 &$<$84.1 & &~~98.6$\pm$4.9 &4.3&3.7&32.9&28.3\\
\hline
\end{tabular}
\end{table*}
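The projected linear sizes in the table follow from the angular-diameter distance in the adopted cosmology ($H_0=71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.27$, $\Omega_\Lambda=0.73$). A minimal implementation is sketched below; the source redshifts themselves are taken from the SDSS and are not repeated here, so the quoted example uses an illustrative $z=2$.

```python
import numpy as np

H0, OM, OL = 71.0, 0.27, 0.73          # adopted cosmology
C_KM_S = 299792.458                    # speed of light [km/s]

def angular_diameter_distance(z, n=10000):
    """D_A in Mpc for a flat LCDM universe (simple trapezoidal integral)."""
    zz = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(OM * (1.0 + zz)**3 + OL)
    Dc = (C_KM_S / H0) * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zz))
    return Dc / (1.0 + z)

def linear_size_kpc(theta_arcsec, z):
    """Projected linear size in kpc subtended by theta_arcsec at redshift z."""
    theta_rad = theta_arcsec * np.pi / (180.0 * 3600.0)
    return angular_diameter_distance(z) * theta_rad * 1.0e3

# At z = 2, one arcsecond corresponds to roughly 8.5 kpc in this cosmology.
```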
\subsubsection*{EVLA map of 0849+27}
During our polarimetry campaign, we could observe the peculiar BAL QSO 0849+27 with the EVLA at 4.86 GHz. The map of the resolved structure of this source at 1.4 GHz, obtained from the FIRST survey, was already presented in a previous work from our group (\citealt{Bruni}). This is the most extended BAL QSO of our sample (44 arcsec, 382 kpc, between components A and C). We could obtain a map at 4.86 GHz from our subsequent EVLA observation in 2011, taking advantage of the improved performance of this instrument, and also a high-resolution (pc-scale) map of the core component from our VLBA program (see \citealt{Bruni2}). Results are presented in Fig. \ref{0849}. Four out of five components detected in the FIRST data are visible in our EVLA map (A, B, C, E), while the flux density of component D seems to drop below the 3-$\sigma$ significance level. Total flux densities as measured at 4.86 GHz are 27.6$\pm$0.7 mJy, 2.0$\pm$0.2 mJy, 0.49$\pm$0.05 mJy, and 0.41$\pm$0.05 mJy for components A, B, C, and E, respectively. Component E was not classified in our previous FIRST map, but given the clear detection we obtained in the EVLA map, we extracted the flux density at the corresponding position in the FIRST map, where a single contour was present. The resulting flux density is 2.36$\pm$0.15 mJy.
The spectral indices obtained for these components are $-$0.54$\pm$0.04, $-$1.10$\pm$0.17, $-$1.78$\pm$0.18, and $-$1.41$\pm$0.22 for components A, B, C, and E, respectively. A flat spectral index ($>-$0.5) usually identifies the core for non-Doppler-boosted components: component A shows a spectral index compatible with that value within the error, while B, C, and E have steep spectral indices ($<-$0.5). From our VLBA observations we could confirm that the core is component A, since at pc scale it shows a core-jet structure, with one of the two components (A1) having a flat spectral index of $-$0.12$\pm$0.20 between 5 and 8.4 GHz.
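As a cross-check, the two-point spectral index quoted above (with the convention $S_\nu \propto \nu^{\alpha}$) follows directly from the flux densities; a minimal sketch, reproducing the value for component E from its FIRST and EVLA measurements:

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index alpha, defined by S_nu ~ nu**alpha."""
    return math.log10(s1_mjy / s2_mjy) / math.log10(nu1_ghz / nu2_ghz)

# Component E of 0849+27: 0.41 mJy at 4.86 GHz (EVLA),
# 2.36 mJy at 1.4 GHz (FIRST)
alpha_E = spectral_index(0.41, 4.86, 2.36, 1.4)  # ~ -1.41, a steep spectrum
```

The quoted uncertainties (e.g. $\pm$0.22 for component E) come from propagating the flux-density errors, a step the sketch omits.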
The peculiar morphology of this source, explored at both kpc and pc scales, suggests jet precession, or radio-activity phases with different jet axes. In fact, the trajectories connecting the core (A) with the other components detected at kpc scale are all different, preventing us from identifying one component as the counter-jet of another. Moreover, the pc-scale structure shows a further launching direction for the most recent component (A2), which does not correspond to any of the directions seen at kpc scale. In this scenario, the BAL-producing outflow would be present in a source with multiple ongoing radio phases: this would not be easily explained by the young-source scenario.
\subsection{SEDs shape from 235 MHz to 850 GHz}
\label{sec:SED_section}
We present here a study of the SED shape for the objects in this work. We combined the new flux densities
with those from \cite{Bruni}, spanning from 74 MHz up to 43 GHz, in order to improve the overall frequency coverage.
For three sources (0756+37, 0816+48, 1335+02) we provide here a revised flux density for the VLA measurements presented in \cite{Bruni}: for source 0816+48, the vicinity of a strong source led to incorrect phase referencing during the previous data reduction, resulting in an overestimated flux density. For sources 0756+37 and 1335+02, our flux-extraction algorithm missed the source, which was displaced by a few arcsec from the map centre due to atmospheric effects. The revised values are given in Tab. \ref{errata}.
\subsubsection*{Flux densities at mm wavelengths}
Flux density from the dust grey-body thermal emission can be described by the following equation (\citealt{Hughes}):
\begin{equation}
S^{\rm{obs}}= \frac{(1+z)}{D_L^2} \times M_dk_d^{\rm{rest}}B(\nu^{\rm{rest}},T_d),
\label{Sdust}
\end{equation}
where $S^{\rm{obs}}$ is the observed flux density at a given frequency $\nu^{\rm{obs}}$, $\nu^{\rm{rest}}$ is the rest-frame frequency, $D_L$ is the luminosity distance, $z$ is the target
redshift, $k_d^{\rm{rest}}$ is the mass absorption coefficient at a given rest-frame frequency, $B$ is the black-body
Planck function, $T_d$ is the dust temperature, and $M_d$ is the dust mass. Typical AGN values for $T_d$ and $M_d$ from the latest works in the literature
are $10<T_d<60$ K and $M_d\sim10^8 M_\odot$ (\citealt{Kal}), while $k_d^{\rm{rest}}$ is usually calculated by scaling values estimated in previous works (e.g. $k_d^{850\rm{\mu m}}$=0.077 m$^2$kg$^{-1}$ in \citealt{Dunne00,Dunne11}) down to the desired wavelength, approximating the trend \emph{vs} wavelength as a power law:
\begin{equation}
k_d^{\rm{rest}}\propto\lambda^{-\beta},
\end{equation}
where $\beta$ is the dust emissivity index (\citealt{daCunha}).
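To make Eq. \ref{Sdust} concrete, the sketch below evaluates the expected grey-body flux density for the typical parameter values quoted above. It is an illustration only: the luminosity distance is passed in directly (the $\sim$18.7 Gpc used in the example is an assumed value for $z\sim2.3$ in a standard flat cosmology, not computed here), and $k_d^{\rm{rest}}$ is scaled from its 850-$\mu$m value with the usual convention $k_d\propto\lambda^{-\beta}$.

```python
import numpy as np

# Physical constants (SI)
H = 6.626e-34     # Planck constant [J s]
KB = 1.381e-23    # Boltzmann constant [J/K]
C = 2.998e8       # speed of light [m/s]
M_SUN = 1.989e30  # solar mass [kg]
MPC = 3.086e22    # megaparsec [m]
JY = 1e-26        # jansky [W m^-2 Hz^-1]

def planck(nu, t):
    """Black-body Planck function B(nu, T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu ** 3 / C ** 2 / np.expm1(H * nu / (KB * t))

def grey_body_flux(nu_obs, z, d_l_mpc, m_dust_msun, t_dust,
                   beta=1.5, k850=0.077):
    """Observed dust flux density [mJy], following Eq. (1):
    S_obs = (1+z)/D_L^2 * M_d * k_d(rest) * B(nu_rest, T_d)."""
    nu_rest = nu_obs * (1.0 + z)
    lam_rest = C / nu_rest
    k_rest = k850 * (850e-6 / lam_rest) ** beta   # k_d scaled as lambda^-beta
    d_l = d_l_mpc * MPC
    s_obs = ((1.0 + z) / d_l ** 2 * m_dust_msun * M_SUN
             * k_rest * planck(nu_rest, t_dust))
    return s_obs / JY * 1e3  # W m^-2 Hz^-1 -> mJy

# M_d ~ 1e8 M_sun, T_d = 40 K, observed at 250 GHz from z = 2.3
s250 = grey_body_flux(250e9, 2.3, 18700.0, 1e8, 40.0)
```

Such a routine can be inverted to turn the 3-$\sigma$ upper limits of Tab. \ref{dust_flux} into upper limits on the dust mass, for an assumed $T_d$ and $\beta$.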
We could collect flux densities at mm wavelengths for a total of 17 objects from the BAL QSO sample.
In most cases only upper limits could be derived (see Tab. \ref{dust_flux}), which nevertheless provide useful constraints on the dust abundance. Only one source (0756+37) shows a flux density at 250 GHz clearly distinguishable from the expected synchrotron-emission tail: this is the only genuine example of a dusty BAL QSO found among these objects.
Despite the fact that we could not collect enough detections in the mm-band to perform a full fit of the dust emission using Eq. \ref{Sdust}, we can still compare the obtained upper limits with the results from previous works in the literature (\citealt{Omont,Kal}, see Sec. \ref{sec:discussion:dust} for a discussion).
\subsubsection*{SED fitting}
For the SED fitting in the m-/cm-wavelength domain, we adopted the same basic approach as in our previous work, i.e. a linear and a parabolic fit in logarithmic scale, determining which of the two functions best fits the data. The former is a simple model of power-law synchrotron emission, commonly used to fit the optically-thin part of the emission above the peak, appropriate for old sources peaking at frequencies below the available ones. The latter is a first-order approximation of an optically-thick (on the left side of the peak) plus an optically-thin (on the right side of the peak) synchrotron emission, capable of fitting the SED of a young radio source peaking in the MHz-GHz range.
Upper limits were excluded from the dataset used for fitting, as were datapoints clearly belonging to a second component in the MHz range (if any). After introducing the new data, we obtained best fits with the same functions (line or parabola) as previously adopted, except for sources 0842+06, which turned out to have a parabolic shape, and 1335+02, which, after the introduction of the revised flux density from Tab. \ref{errata}, shows an HFP component peaking at $\sim$20 GHz (see Fig. \ref{dust}). Another source (1159+06) showed a second synchrotron component in the MHz range, peaking at about 100 MHz, while 0816+48, which was not fitted in \cite{Bruni}, now presents a parabolic component in the GHz range, again using the revised flux density at 1.4 GHz.
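The model comparison described above can be sketched as follows. This is an illustration, not the actual pipeline: both functions are fitted in log-log space, and the residual sum of squares per degree of freedom is used as an assumed tie-breaker, since the exact selection criterion is not specified here.

```python
import numpy as np

def best_sed_fit(freq_ghz, flux_mjy):
    """Fit log(S) vs log(nu) with a line (power-law synchrotron
    emission) and a parabola (first-order approximation to a peaked
    spectrum); return the preferred model."""
    x, y = np.log10(freq_ghz), np.log10(flux_mjy)
    best = None
    for name, deg in (("line", 1), ("parabola", 2)):
        coeff = np.polyfit(x, y, deg)
        # residual sum of squares per degree of freedom
        red = np.sum((np.polyval(coeff, x) - y) ** 2) / (len(x) - deg - 1)
        if best is None or red < best[2]:
            best = (name, coeff, red)
    return best

# A synthetic GHz-peaked spectrum: the parabola is preferred
nu = np.array([0.235, 0.61, 1.4, 4.86, 8.4, 22.0, 43.0])   # GHz
s = 10.0 ** (2.0 - 0.8 * (np.log10(nu) - np.log10(2.0)) ** 2)
name, coeff, red = best_sed_fit(nu, s)  # name == "parabola"
```

For a peaked fit, the turnover frequency follows from the vertex of the fitted parabola.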
In two of the three sources detected at 250 GHz (1237+47, 1406+34), the emission is most probably due to the synchrotron component, since it fits the shape of the emission tail well. For source 0756+37 we found a significant excess of emission at 250 GHz (2.0$\pm$0.5 mJy) with respect to the expected contribution of the synchrotron emission at the same frequency. Thus we can consider this emission as most probably produced by dust.
\begin{table}
\centering
\caption{Results for the 17 BAL QSOs observed in the mm-band: 250 GHz flux densities from IRAM-30m, 345 and 850 GHz from APEX. We give 3-$\sigma$ upper limits in case of non-detection.}
\label{dust_flux}
\renewcommand\tabcolsep{5pt}
\begin{tabular}{lccc}
\hline
\hline
\multicolumn{1}{c}{Name} &
\multicolumn{1}{c}{$S_{250}$} &
\multicolumn{1}{c}{$S_{345}$} &
\multicolumn{1}{c}{$S_{850}$} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{(mJy)} &
\multicolumn{1}{c}{(mJy)} &
\multicolumn{1}{c}{(mJy)} \\
\hline
0044+00 & - &$<$260 &$<$30 \\
0756+37 & 2.0$\pm$0.5 &- &- \\
0816+48 & $<$1.8 &- &- \\
0842+06 & - &- &$<$71 \\
0849+27 & $<$2.4 &- &- \\
1014+05 & $<$3.0 &- &- \\
1102+11 & $<$3.0 &- &- \\
1159+01 & - &- &$<$150 \\
1159+06 & $<$6.0 &- &$<$161 \\
1229+09 & $<$3.3 &- &$<$66 \\
1237+47 & 4.6$\pm$1.0 &- &- \\
1304+13 & $<$3.0 &- &- \\
1327+03 & $<$3.6 &- &$<$201 \\
1335+02 & - &- &$<$78 \\
1337$-$02 & - &- &$<$66 \\
1404+07 & - &- &$<$90 \\
1406+34 & 9.3$\pm$0.8 &- &- \\
\hline
\end{tabular}
\end{table}
\section{Discussion}
\label{sec:discussion}
In the following, we discuss our findings and put them in the context of the works present in the literature.
\subsection{Low-frequency components}
A considerable number of restarted radio sources has been found in the past 20 years, and hypotheses about their nature have been proposed (\citealt{Czerny, Wu, Marecki}). A fraction corresponds to GPS or CSS sources, known to be young radio sources. In some cases, up to three radio phases are detectable in the same object (\citealt{Brocksopp}). \cite{Czerny} associate the intermittent activity of the central engine with the radiation-pressure instability of the accretion disk.
Concerning our results in the MHz range, the fact that hints of old, extended radio components were found, despite the limited number of objects observed with the GMRT, could indicate that an age of $10^7$-$10^8$ years is not rare for BAL-hosting QSOs. This finding is in line with what emerged from our previous works (\citealt{Bruni,Bruni2}), where we discussed the presence of old components in the SED of our sample. In some cases a radio-restarting scenario, and the complex dynamics implied by it, could be invoked to explain a two-component SED (1159+01, 1159+06, 1335+02, 1406+34), showing both a GHz-peaked and a MHz component. In other cases (1014+04, 1229+09, 1304+13, 1327+03) a peak in the MHz range could still be present, but not seen because of insufficient frequency coverage. Considering this, and the SEDs of objects previously studied in \cite{Bruni}, $\sim$70\% of our sample could be in a GPS or GPS+CSS phase.
Once again this shows how BAL-producing outflows can be present not only in young radio sources, but also in more complex scenarios. For example, sources found to have multiple ongoing radio phases (CSS+GPS) should have gone through an unstable radiation-pressure phase, causing an intermittent BAL-producing outflow acceleration (favoured during the high-pressure phases). This could include young, just-started radio sources (GPS/HFP) as well as restarted radio sources (e.g. those with a CSS+GPS SED). If this link with the ignition/recollimation of the radio jet were confirmed, an outflow collimation to form a jet (as already proposed by \citealt{Elvis}) could be invoked to explain the BAL variability.
\subsection{Dust emission}
\label{sec:discussion:dust}
Several works in past years, especially since the advent of the \emph{Herschel} space observatory, have constrained the dust emission properties of AGN and verified the connection with the star-formation rate. Here we compare our results for BAL QSOs in the mm-band with the findings of other authors.
A study very similar to ours, in terms of instrumentation, sensitivity, and setup, is the one presented by \cite{Omont}. They performed 250-GHz observations of 35 optically luminous RQ QSOs ($M_{B}<-$27.0), in a redshift range similar to ours (1.8$<z<$2.8), with the MAMBO bolometer. They found that 26$\pm$9\% of the sources present emission at that frequency.
Since they reached an RMS very similar to that of our observations ($\sim$1 mJy), we can compare our detection rate with this percentage: only 1 out of 17 ($\sim$6\%) of our sources shows 250-GHz emission attributable to dust, a substantially smaller fraction than the one found by these authors.
More recently, \cite{Kal} published results of the {\emph{Herschel}}-ATLAS project regarding FIR properties of radio-loud and radio-quiet QSOs. Using five different bands (100, 160, 250, 350, 500 $\mu$m), they fitted the
dust emission for both groups. At the mean redshift of our sample ($z\sim$2.3) they found a mean flux density for radio-loud QSOs of 23.5$\pm$2.1 mJy at 350 $\mu$m (corresponding to our 850 GHz observations), and 21.3$\pm$2.8 mJy at 500 $\mu$m ($\sim$600 GHz). These values are below the sensitivity we reached with APEX, but extrapolating the expected mean flux density at 250 GHz from them with a simple power law yields $\sim$17 mJy, which would be well detectable with the RMS of $\sim$1 mJy we reached at the IRAM-30m. This could suggest that our sample of RL BAL QSOs is also poorer in dust content than normal RL QSOs, in addition to RQ ones (as seen in the comparison with \citealt{Omont}).
In the light of these works, our results support the idea of BAL QSOs not being specially dusty objects. In addition, although with modest statistical significance, our work suggests that dust emission in the RL BAL QSO sub-class could be weaker than expected.
\cite{Tombesi} found that fast outflows ($\sim$0.25c) from the accretion disk in AGN can hamper star formation, impacting the interstellar medium. BAL-producing outflows, although at a lower ionization stage than the ones considered by those authors, can reach velocities up to 0.2c and, in the light of these results, could have a similar effect on the SFR of the host galaxy.
This needs to be investigated with further observations of larger samples of RL BAL QSOs.
\section{Conclusions}
We performed observations with the GMRT in the m-band of 5 RL BAL QSOs that already showed hints of emission in the MHz range, and in the mm-band of 17 RL BAL QSOs from our previously studied sample. We aimed at exploring the emission in the low-frequency regime and the grey-body emission from dust, respectively. The conclusions are the following:
\begin{itemize}
\item All 5 objects observed at low frequencies present emission from extended components, indicating the presence of an old radio emission. In some cases a restarting radio-activity can be invoked to explain the double component, MHz and GHz-peaked, present in their SEDs. This could suggest an intermittent BAL phase, associated with periods of radio-restarting activity.
This is also supported by the morphology of some objects, both from this work (0849+27) and from \cite{Bruni2}, and by the fact that $\sim$70\% of our RL BAL QSOs sample can be considered in a GPS or CSS+GPS phase, thanks to the data presented here.
\item Only 1 out of 17 sources ($\sim$6\%) presents a clear contribution at 250 GHz from the dust grey-body emission. In the other cases 3-$\sigma$ upper limits have been derived. Comparing our results with the fraction of dust-rich RQ QSOs found by \cite{Omont}, amounting to $\sim$26\%, we found that RL BAL QSOs do not present a larger fraction of dust-rich objects with respect to the RQ QSO population. Also comparing with more recent works from the \emph{Herschel}-ATLAS collaboration (\citealt{Kal}), we found a lack of dust emission with respect to mean values for RL QSOs. Since the amount of dust can be connected to star formation, and thus to the age of the host galaxy, this result suggests that RL BAL QSOs are not hosted by particularly young galaxies. The fact that, despite their radio loudness, these objects present even less dust emission (and consequently a lower star-formation rate) than RQ QSOs could suggest that BAL-producing outflows are able to hamper star formation in the host galaxy.
\item Both the obtained results, from observations performed at m- and mm-wavelengths, suggest that BAL QSOs
are not commonly young radio objects, or objects still expelling their dust cocoon from the central region. They could be radio-restarting objects, which present relativistic outflows in conjunction with some periods of favourable emission/acceleration conditions.
\end{itemize}
\begin{acknowledgements}
Part of this work was supported by a grant of the Italian Programme for
Research of National Interest (PRIN No. 18/2007, PI: K.-H. Mack).
The authors acknowledge financial support from the Spanish Ministerio de Ciencia e Innovaci\'on under project
AYA2008-06311-C02-02.
We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut f\"ur Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
This work is partly based on observations carried out with the IRAM-30m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
\end{acknowledgements}
\begin{figure*}
\begin{center}
\includegraphics[width=18.5cm]{./SEDs.pdf}
\caption{SEDs of the 18 BAL QSOs observed during the m/mm-wavelengths campaign (x-axis: GHz; y-axis: mJy). 235 MHz and 610 MHz flux densities from the GMRT; 250 GHz flux densities from IRAM-30m; 345 and 850 GHz flux densities from APEX. Measurements at other frequencies are taken from \cite{Bruni}, or from our polarimetry campaign (see Sect. 3). Triangles are 3-$\sigma$ upper limits. Solid lines are parabolic or linear fits, according to the criteria discussed in Sec. 4.2.}
\label{dust}
\end{center}
\end{figure*}
\section{Introduction}
The Hubble Sequence (Hubble 1936) and its revisions (e.g., Sandage 1961;
de Vaucouleurs {\it et al.} 1959; van den Bergh 1976) all address the local
universe, in effect defining the mainstream of ``normal'' galaxies. It has been
known for a long time that some galaxies do not fit in these schemes. Arp
(1966) published an atlas of {\it peculiar} galaxies which shows an impressive
variety of morphologies deemed ``strange'' or ``interesting''. However,
quantifying the notion of peculiarity is still a challenge. This may be partly
due to the fact that peculiar galaxies were regarded as rare exceptions
unrelated to each other, rather than a coherent class (or classes) of galaxies.
The advent of the Hubble Space Telescope (HST) has allowed us to view the
universe of galaxies in much greater depth than ever before. Images in parallel
mode from fields of the Medium Deep Survey (MDS) key project and other
observations give us a first look at large numbers of galaxies residing at
moderate redshifts. The galaxy population at a redshift of about $0.5$ looks
quite different from that of the local universe : A significant population of
blue irregular galaxies was reported (Griffiths {\it et al.} 1994) which
appears to account for much of the increase in number counts at faint
magnitudes (Glazebrook {\it et al.} 1995; Driver {\it et al.} 1995). In
addition, the incidence of morphological peculiarities among these galaxies
appears to be higher than in nearby galaxies (e.g., Abraham {\it et al.} 1995,
hereafter A95). Further out at redshifts exceeding 1 a new morphological class,
called ``chain galaxies'', was recently reported by Cowie {\it et al.} (1995),
although their nature has already been disputed (Dalcanton \& Shectman 1996).
An inherent problem of such observations is that the rest frame of the
observed band is shifted bluewards with increasing redshift. It is difficult to
quantify the effect of this shift on the observed images. We discuss below
the effect of observing moderate redshift galaxies in the V band (filter F606W)
as compared with the I band (F814W). Nevertheless, even without accounting for
the shift in rest frame band it appears that the Hubble Sequence needs to be
extended or replaced by a more general scheme in order to accommodate the
diversity of shape found among moderate redshift galaxies. The numbers clearly
show that peculiar galaxies can no longer be regarded as rare exceptions.
So far, most published work in this field quoted numbers based on eyeball
classification of HST images. However, as was pointed out by Lahav \& Naim
(1996), this approach suffers from two major problems : First, the notion of
morphological peculiarity is not well defined. There is little agreement -
even among experts - on what qualifies as a peculiar galaxy. As a result it is
very difficult to compare results from papers by different authors, and in
particular a wide range of values are given for the fraction of peculiar
galaxies at moderate redshifts. A second, related problem is the difficulty in
obtaining consistent eyeball classifications for large quantities of galaxies.
A95 realised these problems and took the important first step of
introducing a quantitative measure of peculiarity which can be used to
automatically classify galaxies.
In this paper we answer two related, yet separate, questions : First, how can
one ``teach'' a computer to tell whether a given galaxy is peculiar or not
(given an accepted definition for peculiarity). Second, what is the fraction
of peculiar galaxies at moderate redshifts in the universe. Our premise is that
quantitative measures are crucial if one is to standardise the concept of
peculiar galaxies. We seek to put this quantitative distinction on firm
grounds. Since the notion of peculiarity is not well defined we
begin in \S~2 by describing the galaxy samples and discussing our criteria in
the light of several examples for various types of peculiarity. In \S~3 we
translate our criteria to four quantitative parameters (hereafter the
``4P-set'') and show through examples how they are measured. In \S~4 we train
{\it Artificial Neural Networks} (ANNs) to distinguish between normal and
peculiar galaxies. We compare the A95 set of parameters (hereafter the ``C-A
set'') to the 4P set by training three different ANNs. In \S~5 we study the
limitations of our parameters and examine the ANN success rates as a function
of magnitude, signal-to-noise ratio and image size, below which our method breaks
down. We then use the trained ANN to classify a larger set of images,
for which no eyeball classification is available. In \S~6 we examine the
frequency of peculiarity as a function of the relative importance of the bulge,
in an attempt to determine whether peculiar galaxies are all disk-dominated.
The discussion follows in \S~7.
\section{Galaxy Samples and Criteria for Peculiarity}
\subsection{Galaxy Samples}
Our ``sample 1'' consists of some 1059 images in 9 Groth-Westphal strip fields
(Groth {\it et al.} 1995), and is complete down to a limiting isophotal
magnitude of $24.0$ in the I band (filter F814W). Of these, 81 are rejected
either because they are identified by eye as stars or point sources or due to
low picture quality (e.g., too many bad pixels). The remaining 978 images are
deemed galaxies. Once the ANN is trained we apply it to the rest of the
Groth-Westphal strip, down to $I=24.0$. This larger ``sample 2'' contains 2319
images in 18 more fields. One problem with sample 2 is that it contains stars
as well as galaxies, and we need a quantitative criterion for identifying them.
We discuss such a criterion below.
\subsection {A Qualitative Discussion of Criteria for Peculiarity}
A peculiar galaxy is most easily characterised as any galaxy that is not
morphologically ``normal''. This, in turn, suggests using a representative set
of normal galaxies as templates and comparing any given image to this entire
set. One would then expect the more extreme peculiars to stand out as very
dissimilar to any of the templates. However, such an approach has already been
tried without much success by Lahav \& Naim (1996), and they conclude that
rather than use the whole image as a template one should look for certain
important {\it features}, those which change the most between normal and
peculiar galaxies.
In figure 1 we show twelve examples of I band images, taken from sample 1.
The top nine of these show galaxies with various peculiar features and the
bottom three look normal. The peculiar features of the top nine are (going
left to right, then top to bottom) : Very bright ``knots'' on an otherwise
quiet background; a polar ring galaxy; an apparent merger of two
galaxies; a weird overall shape; an asymmetric shape; a double core (or an
ongoing merger); a ``knot'' at the end of an edge-on disk; faint spiral arms
going in two different senses; an arc on one side of the bulge only. Some of
these features are not easy to see in the printed version of the images. From
an examination of these and other images in sample 1 we came up with two
characteristics of peculiarity :
\medskip
\begin{enumerate}
\item{The nature of bright, localised structure : Many galaxies are considered
peculiar due to the existence of very bright, round features (e.g.,
``knots''), as opposed to elongated arms or arm fragments in spirals}.
\item{The degree of symmetry of the image : A different kind of peculiarity
is an asymmetric shape. In some cases the asymmetry affects the fainter
isophotes (e.g., tidal tails), while in others the shape of the image as
a whole is rather symmetric, but superposed on it are bright asymmetric
features}.
\end{enumerate}
\subsection{Eyeball Classifications}
The images were examined in the I (F814W) and V (F606W) bands {\it separately}
by one of us (AN) on a computer screen, and assigned one of the following
morphological classes : E/S0; Se; Sl; P1; P2. ``Se'' stands for early type
spirals and includes all types in the range S0/a to Sbc. ``Sl'' stands for late
type spirals, and includes all types from Sc through to Im. A distinction
between two general types of peculiarity is also made : Galaxies with a
distorted shape are assigned type ``P1'', while those that exhibit localised
features (e.g., bright ``knots'') are assigned type ``P2''. If a galaxy
qualifies as peculiar on both counts, the assigned type is ``P2''. The galaxies
in the first three bins are collectively labeled ``normal'' and those in the
last two are labeled ``peculiar''. We would like to point out that there is
some confusion in the literature between ``Irregular'' and ``Peculiar''
galaxies. We define Irregulars as disk dominated galaxies with a small or no
bulge component, and a fairly flat intensity profile, thus lumping together in
this class the revised Hubble types Sdm, Sm and Im. Irregulars are therefore
normal galaxies as far as this paper is concerned and belong to the ``Sl'' bin
described above. On the other hand, for a galaxy to be labeled as peculiar at
least one of the criteria we specify above has to be met. Such galaxies can not
be assigned Hubble types by definition, although they may resemble normal
galaxies in some respects.
The approach adopted here is very liberal towards peculiars : even the
slightest distortions or a single faint knot are enough for a classification
as a peculiar. This means that galaxies that appear normal but for some small
feature are also called peculiar in our sample. We would like to emphasise that
regardless of the criteria employed, the morphologies of galaxies form
sequences, and where one draws the line between one type and another is always
subjective. In recognition of the many border line cases (especially at the
faint end), each peculiar galaxy is also assigned a certainty index, stating
simply whether its ``peculiar'' tag is certain or uncertain. The distribution of
sample 1 galaxies among the classes is shown in table 1 below.
Some $62\%$ of the galaxies receive exactly the same classification, and
the I and V classifications of $80\%$ differ by no more than one type within
each general (normal/peculiar) class. However, there is a ``migration''
of galaxies from earlier and normal types in I to later and peculiar types in
V. This can be seen from the much higher number of galaxies above the diagonal
in table 1 (297), compared with the number of galaxies below the diagonal (78).
This trend entails a significant increase in the number of peculiars detected
in V. One possible explanation for this effect is that in the range of
redshifts covered by our samples (the median redshift of galaxies down to $I
\sim 22$ is about 0.6) the V filter corresponds to a band that is {\it bluer}
than the rest-frame B band, while the I filter corresponds to a band slightly
redder than B in the rest frame. Practically all the existing classification
schemes for galaxies were defined in the B band for relatively nearby galaxies.
In the local universe the reliability of eyeball classification deteriorates
fast as one goes bluer than B, and UV morphologies can look very odd compared
to their B counterparts. Many of the images in our samples are fainter than $I
\sim 22$, which implies an even higher redshift than 0.6 and consequently an
even bluer pass band. Ideally, one would wish to have a classification scheme
for all images in the same rest wavelength, but in practice this is impossible.
On top of this problem, in the Groth-Westphal strip the V band exposures were
about 30\% shorter than the I band exposures. As a result many images in the V
band suffer from lower signal-to-noise ratio, which contributes the spurious
structure and in general misleads the eye to "see" more fragmented structure
and smaller bulges. Overall, 306 galaxies ($31\%$ of sample 1) are assigned
peculiar types in the I band and 432 galaxies ($44\%$ of sample 1) are
assigned peculiar types in the V band.
\section{Quantitative Parameters for Peculiarity}
Following the discussion above, we adopt the I band images for training
the ANN. All of the parameters described below are derived for I band
images. The images are first treated with reduction and fitting software
written by one of us (KUR), which performs a maximum-likelihood analysis of
the full 2-dimensional image in order to simultaneously fit the sky level, the
image center, size, position angle and axis ratio, and determine the best
fitting photometric model (bulge, disk or bulge+disk; see Ratnatunga {\it et
al., in preparation} for full details). The original images are used for calculating the
C-A set of A95. In calculating our 4P set we use both the original images and
the residual images, left after subtracting the best-fitting photometric
model from the original image.
\subsection{Light Concentration and Asymmetry}
These two parameters are measured following the concepts of Abraham {\it et al.} (1994) and A95. For the light concentration we use the somewhat different
definition :
\begin{equation}
C(\alpha) = \frac {\sum_{r < \alpha \cdot R} I(i,j)} {\sum_{i,j} I(i,j)},
\end{equation}
\noindent where the summation in the denominator is over all image pixels, and
that in the numerator is over all pixels (i,j) whose radius $r$ is less than
$\alpha$ times the radius of the image, $R$, which is taken to be the length of
the semi-major axis. $r$ is the distance of an image pixel from the image
center (corrected for ellipticity). $I(i,j)$ is the intensity of pixel (i,j).
The value $0.3$ is adopted for the parameter $\alpha$ (the same value was used
by A95, although its definition is slightly different there).
The asymmetry parameter is calculated as in A95 by rotating the image by
$180^{\circ}$ and self-subtracting, and is set equal to the sum of absolute
values of pixels in the self-subtracted image over the sum of pixels in the
original image :
\begin{equation}
A = \frac {\sum_{i,j} | I(i,j) - I^{rot}(i,j) |} {\sum_{i,j} I(i,j)}.
\end{equation}
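On a background-subtracted image array, both parameters reduce to a few lines of code. The sketch below assumes circular isophotes (no ellipticity correction, unlike the actual measurement) and an image centred on the galaxy:

```python
import numpy as np

def concentration(img, alpha=0.3):
    """Light concentration C(alpha): flux within radius alpha*R
    over total flux, with R taken as half the image size."""
    ny, nx = img.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - cy, xx - cx)
    R = max(ny, nx) / 2.0
    return img[r < alpha * R].sum() / img.sum()

def asymmetry(img):
    """Rotational asymmetry A: self-subtraction after a
    180-degree rotation about the image centre."""
    rot = img[::-1, ::-1]
    return np.abs(img - rot).sum() / img.sum()
```

A perfectly symmetric profile gives $A=0$, and a more centrally peaked profile gives a larger $C$.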
\subsection {Measuring Texture}
Many peculiar galaxies appear ``blobby'', while normal galaxies exhibit
ordered structures or no structure at all. We require that a measure of the
texture take into account the existence of a bright central region in most
galaxies, and be sensitive to localised structure. The measure we suggest for
HST WFPC2 images is defined as follows : start by binning the pixel intensities
of the raw image into ten linear intensity bins (excluding the lower $5\%$ and
upper $2\%$ of pixels, to avoid extreme values). Then examine in turn each of
the pixels whose binned intensity is larger than 5 (the upper half of
intensities, because it is bright structure that makes the image ``blobby''),
and denote its distance from the image center (corrected for ellipticity) by
$d$. Since a typical ``knot'' is no more than 3-4 pixels across, draw a circle
of radius 0.5 arcsec (5 pixels) around each such pixel, and consider only
points inside the circle whose (ellipticity-corrected) distance from the image
center is smaller than $d$. Now calculate what fraction of these points has a
lower binned intensity than that of the pixel of interest. Figure 2 shows a
schematic diagram for the process : The solid lines denote intensity contours
of an idealised elliptical galaxy and the ``X'' denotes the current reference
pixel. The circle of radius 0.5 arcsec (5 pixels) around it is depicted by the dashed line
and the open circles are the pixels we examine. In this idealised case it is
obvious that all of those pixels have intensity equal to or higher than that of
the reference pixel, and our measure would be zero. This measure should be
close to zero for real ellipticals (all pixels closer to the center are
necessarily brighter, to within the noise), and should increase as bright
localised structures become more dominant. Spiral arms are expected to yield
intermediate values, because some pixels closer to the center will be at least
as bright (belong to the same arm), while others will be fainter (dust lanes
by the side of the arm). Repeating this calculation for all pixels whose
binned intensity is larger than 5 and averaging this measure over all of them
gives the ``blobbiness'', our first parameter.
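The procedure above can be summarised in code. The following is a simplified sketch (the function name is ours): it measures distances from the pixel-grid centre and omits the ellipticity correction, which a real implementation would apply to both $d$ and the comparison distances.

```python
import numpy as np

def blobbiness(image, radius=5):
    """Sketch of the 'blobbiness' measure: for each bright pixel, the
    fraction of nearby, more central pixels that are fainter, averaged
    over all bright pixels.  Ellipticity correction is omitted."""
    # ten linear intensity bins, excluding the lower 5% / upper 2%
    lo, hi = np.percentile(image, [5, 98])
    clipped = np.clip(image, lo, hi)
    edges = np.linspace(lo, hi, 11)
    binned = np.digitize(clipped, edges[1:-1]) + 1   # bins 1..10
    cy, cx = (np.array(image.shape) - 1) / 2.0
    yy, xx = np.indices(image.shape)
    dist = np.hypot(yy - cy, xx - cx)                # distance from centre
    scores = []
    for y, x in zip(*np.where(binned > 5)):          # bright pixels only
        d = dist[y, x]
        near = (np.hypot(yy - y, xx - x) <= radius) & (dist < d)
        if near.sum() == 0:
            continue
        scores.append((binned[near] < binned[y, x]).mean())
    return float(np.mean(scores)) if scores else 0.0
```

For a smooth, centrally concentrated profile every more central pixel is at least as bright, so the measure is zero, as argued above for ellipticals.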
\subsection {Measuring Overall Asymmetries}
The asymmetry parameter of A95 is useful, but limited because the detection of
asymmetric features is a function of their brightness. This suppresses the
detection of galaxies with faint peculiar features (e.g., tidal tails). As a
more general approach we define five isophotal levels for the original image,
separated by equal logarithmic intervals. The faintest $15\%$ of image pixels
are not considered, because at the faint end pixels form shapes that are very
sensitive to noise and inaccurate sky subtraction. The pixels of the image are
then divided into five mutually exclusive isophotal groups, and the geometrical
center of each of these groups is calculated. The distances between the five
centers are worked out and the largest distance, normalised by the image
size, is taken as our second parameter (the isophotal center displacement).
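The isophotal centre displacement can be sketched as follows (names ours). The sketch assumes strictly positive pixel intensities, so that equal logarithmic intervals are well defined, and normalises by the larger image dimension:

```python
import numpy as np
from itertools import combinations

def isophote_displacement(image):
    """Sketch of the isophotal centre displacement: split pixels into
    five logarithmic intensity groups (faintest 15% dropped), find the
    geometric centre of each group, and return the largest pairwise
    centre distance normalised by the image size."""
    flat = image.ravel()
    floor = np.percentile(flat, 15)          # drop the faintest 15%
    lo = flat[flat > floor].min()
    hi = flat.max()
    levels = np.geomspace(lo, hi, 6)         # five logarithmic groups
    yy, xx = np.indices(image.shape)
    centres = []
    for a, b in zip(levels[:-1], levels[1:]):
        mask = (image >= a) & (image <= b)
        if mask.any():
            centres.append((yy[mask].mean(), xx[mask].mean()))
    size = max(image.shape)
    return max(np.hypot(c1[0] - c2[0], c1[1] - c2[1])
               for c1, c2 in combinations(centres, 2)) / size
```

A symmetric galaxy has all five centres coincident and a displacement of zero; lopsided or tidally distorted images move the faint-isophote centres away from the bright ones.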
\subsection{Measuring the Distortion of Shapes}
A distortion in the image of a galaxy need not be global. It can happen in
the outer regions, say, as a result of weak interactions with nearby
galaxies or accretion of gas; or it may only be apparent in the innermost
regions, say, after a merger has had time to relax but resulted in a double
core. The galaxy may look different at various isophotal levels, even if its
isophotes all share roughly the same center. To complement our second parameter,
we define the ``maximal elliptical envelope'' of each isophote as the smallest
ellipse containing all the pixels in the isophote. This can be referred to as
the ``elliptical hull'' of the isophote (in analogy with the ``convex hull'', e.g.,
Gonzalez and Woods, 1992). The ellipses all share the overall ellipticity and
position angle derived for the image as a whole. The distortion of structure
at a given isophotal level is then defined as the ``filling factor'' of its
envelope, or more precisely, as the ratio of the number of {\it isophote}
pixels within the envelope to the total number of pixels in it. This ratio
approaches $1$ for smooth, axisymmetric galaxies and tends to lower values for
galaxies with structure. The less ordered the structure, the lower this ratio
gets. We found the filling factor of the third (middle) isophote to be the
most useful of the five and chose it as our third parameter.
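The filling factor of a single isophote can be sketched as below (names ours). The envelope here is centred on the image centre and uses the global ellipticity and position angle, as described above; the isophote is passed in as a boolean mask:

```python
import numpy as np

def filling_factor(isophote_mask, ellipticity=0.0, pa=0.0):
    """Sketch of the 'elliptical hull' filling factor: the fraction of
    pixels inside the smallest enclosing ellipse (of fixed global
    ellipticity and position angle) that belong to the isophote."""
    cy, cx = (np.array(isophote_mask.shape) - 1) / 2.0
    yy, xx = np.indices(isophote_mask.shape)
    dy, dx = yy - cy, xx - cx
    # rotate into the ellipse frame and stretch the minor axis
    u = dx * np.cos(pa) + dy * np.sin(pa)
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    r = np.hypot(u, v / (1.0 - ellipticity))   # elliptical radius
    a = r[isophote_mask].max()                 # smallest enclosing ellipse
    envelope = r <= a
    return isophote_mask.sum() / envelope.sum()
```

A filled, axisymmetric isophote fills its own envelope exactly (ratio 1), while rings, arms or detached clumps leave much of the envelope empty and drive the ratio down.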
\subsection{Quantifying the Nature of Localised Structure}
Localised structure is best depicted in the residual images, after the
subtraction of the best-fitting smooth model. The major problem is to tell
residual structure of one kind (spiral arms) from another (e.g., knots).
In order to quantify the structure we start by binning the intensities of
the residual image into 10 logarithmic bins, which are defined ignoring
the faintest $5\%$ and the brightest $5\%$ of pixels (thus avoiding
extreme values which could be the result of inaccurate profile subtraction).
We then define as regions of interest all isolated groups of pixels whose
binned intensities are 7 and above, i.e., belonging to the upper $40\%$ of the
intensity scale. We reject regions that are either too small (less than
$1\%$ of the number of image pixels) or too large (more than $30\%$ of the
image). The former are likely to be just noise patches and the latter are
unrealistically large structures, which could result from the combined effect
of noise and low surface brightness.
For each region we then work out the ``skeleton'' (Gonzalez and
Woods 1992), which can be loosely described as the ``backbone'' of a
consecutive set of points. We briefly describe the algorithm we used in the
appendix. In figure 3 we demonstrate these ideas with an
example of four stages in the analysis of an image of a grand-design spiral :
the original image, the residual (following subtraction of the best fitting
smooth model), a map of the detected regions with binned pixel intensities
and a map of their skeleta. We measure the ratio of the number of pixels in
the skeleton to the number of pixels in the entire region. This ratio is
closest to zero for perfectly round shapes, where the skeleton shrinks to a
point, and grows towards 1 with the elongation of the shape. It is superior to
the axis ratio of a shape (which can be easily obtained from its second moments
matrix), because it can follow winding shapes like spiral arms and uncover
their true nature. This ratio is then averaged over all regions to get our
fourth parameter.
\subsection{Independence of the Chosen Parameters of each other}
Judging by the definition of our parameters, it may look as if some of them
should convey more or less the same information and could become redundant. We
check for this possibility by calculating the correlation matrix for our
parameters (table 2 below). It is clear that there are no strong linear
correlations between any two of them, although the first two appear more
correlated than the others.
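The check of table 2 is simply the Pearson correlation matrix of the parameter columns, one row per galaxy. A minimal version (function name ours):

```python
import numpy as np

def parameter_correlations(params):
    """Correlation matrix of the morphological parameters (cf. table 2).
    `params` has one row per galaxy and one column per parameter."""
    return np.corrcoef(params, rowvar=False)
```

Off-diagonal entries close to $\pm 1$ would signal redundant parameters; as table 2 shows, none of the 4P parameters reach that regime.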
\section{Training an ANN to Classify Normal/Peculiar Galaxies}
\subsection{Training and Testing the ANN}
For an overview of ANNs see Hertz, Krogh and Palmer (1991). Their application
to morphological classification of galaxies was discussed in detail
elsewhere (Storrie-Lombardi {\it et al.} 1992; Serra-Ricart {\it et al.} 1993;
Naim {\it et al.} 1995a; Naim 1995b; and especially Lahav {\it et al.}, 1996).
Briefly, an ANN can be viewed as a non-linear minimising machine which operates
in a multi-dimensional space. Nodes are arranged in layers, where input parameters are
fed into the nodes of the input layer and the result is read off the output nodes.
ANNs which utilise an intermediate (``hidden'') layer between the input and output
layers are in general more flexible and perform much better than those without it.
The so-called ``architecture'' of the ANN is the full specification of its nodes and
their interconnections. The space in which the ANN operates is spanned by the
connection-strengths (``weights'') between the various nodes. The weights are, in
effect, the degrees of freedom which are adjusted so as to minimise the error between
the actual output reading and the desired output. Training the ANN entails repeated
presentations of patterns from a training set, for which the input parameters
as well as the desired output values are known. Inputs can be, e.g.,
morphological parameters and output can be, e.g., morphological type. The
training phase ends when a certain stopping criterion is met. Usually one
imposes this criterion in terms of the error of the ANN on a different set of
patterns, known as the testing set. It is not recommended to train and test on the
same set, since this results in an overfitting of the data and practically
``memorising'' the particular patterns in that set. If this happens the ANN loses its
ability to generalise its ``knowledge'' and apply it successfully to patterns
it has never seen before.
The 978 galaxies in sample 1 are divided into two sets of approximately equal
sizes, keeping the ratio of normal to peculiar galaxies the same in both sets.
We use one set as a training set and the other as its testing set. Every ANN is run
ten times over, each time starting out with a different set of random weights, and its
resulting classifications are averaged over the ten runs. This is done because in every
run there is the risk of the ANN getting stuck in a local minimum of its error. Using
different sets of initial weights and running several times over makes it less likely
to get stuck in the same local minimum time after time, and any single local minimum
will stand out against the background of the other runs.
The architecture of the ANN depends first of all on the number of parameters
one uses to describe each galaxy. This number determines the number of
input nodes. We have six parameters in all :
\medskip
\begin{itemize}
\item{Light Concentration, following Abraham {\it et al.} 1994, A95}
\item{Asymmetry, following A95.}
\item{Blobbiness}
\item{Isophotal Center Displacement.}
\item{Filling Factor of Middle Isophote.}
\item{Ratio of Skeleton size to Region size.}
\end{itemize}
\medskip
We attempt several combinations of these parameters, as described below. We set the
number of hidden layer nodes to three and there are two output nodes, one denoting
normal galaxies and the other peculiars. A typical {\it desired} output vector is
either (1,0) or (0,1), stating that that galaxy is of the type whose node is set to 1.
The ANN ends up assigning fractional values to the nodes. These output readings can
be shown (e.g., Gish 1990; Richard \& Lippman 1991) to approximate the {\it
Bayesian a-posteriori} probability for each class given the data, in the limit
of very large training sets. The output node with the higher value ``wins'', and
the corresponding type is assigned to that galaxy. We ran three ANNs, which are
specified below in terms of their architecture ($N_{input}:N_{hidden}:N_{output}$)
and the parameters used as inputs.
\medskip
\begin{enumerate}
\item{Using the C-A set of A95 as inputs, architecture 2:3:2}
\item{Using the 4P set, architecture 4:3:2}
\item{Using both sets together, architecture 6:3:2}
\end{enumerate}
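For illustration, a network of the kind described above can be written compactly. The actual runs used Ripley's ANN code, so everything below, including the training details (batch gradient descent on the squared error of sigmoid outputs), is a hypothetical sketch rather than the method used:

```python
import numpy as np

def train_ann(X, y, hidden=3, classes=2, epochs=3000, lr=0.5, seed=0):
    """Illustrative N_input:3:2 feed-forward network with sigmoid nodes,
    trained by batch gradient descent on the squared error between the
    output readings and the (1,0)/(0,1) desired vectors."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, classes))
    T = np.eye(classes)[y]                    # desired output vectors
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1)                       # hidden-layer activations
        O = sig(H @ W2)                       # output readings
        dO = (O - T) * O * (1.0 - O)          # output-layer error signal
        dH = (dO @ W2.T) * H * (1.0 - H)      # back-propagated error
        W2 -= lr * (H.T @ dO) / len(X)
        W1 -= lr * (X.T @ dH) / len(X)
    return W1, W2

def classify(X, W1, W2):
    """Winning class and output readings (approximate probabilities)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    O = sig(sig(X @ W1) @ W2)
    return O.argmax(axis=1), O
```

The fractional output readings play the role of the approximate Bayesian a-posteriori probabilities discussed below, and the `argmax` implements the winning-node rule.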
Table 3 shows the resulting classification matrices of these runs, where
rows denote eyeball classes and columns denote the resulting classes. Also
shown are the ``hit rate'' (or completeness) and ``false alarm rate'' (or
contamination). The hit rate is defined as the fraction of correct
classifications out of the patterns in each eyeball class. The false alarm
rate is defined as the fraction of wrong classifications into each eyeball
class.
The 4P set appears to give a better result than the C-A set, as could be
expected since it utilises four morphological parameters compared with only
two in the C-A set. In table 4 we present the results of interchanging the
roles of training and testing sets for all three ANNs. The picture remains
essentially the same : Using the C-A pair the hit rate over normal galaxies
decreases a little, while for the 4P set the same effect is compensated by a
higher hit rate for peculiars. This is also what happens when we use all six
parameters.
\subsection {Improving the Detection of Peculiars}
We concentrate here on the ANNs which utilise the 4P set. Table 5 shows the
breakdown of successful and unsuccessful classifications of peculiars by the
ANN, as a function of the certain/uncertain classification index (see \S~2
above). It is obvious that the misclassifications are much more likely when
the eyeball classification is uncertain. This means that when one applies the
trained ANN to a set of fresh data (e.g., a catalogue of HST images), it
will identify most clear cases of peculiarity, but only some of
the less certain cases, as one would expect. It is interesting to note that
when the ANN is less certain it tends to make more mistakes. Let us define an
ANN classification as ``uncertain'' if the winning class has a probability of
less than $0.6$. Then only $12\%$ of the successes of both ANNs are
uncertain, while among the misclassifications the fractions rise to $36\%$
for the first ANN and $24\%$ for the second, i.e., at least twice as high.
Although the above statements are reassuring regarding the overall performance
of the ANN, one may still wish to do a better job of detecting peculiars, even
at a certain price. One way to push the detection level of peculiar galaxies
up is to change the proportions of normal and peculiar galaxies in the
training set. As was mentioned above, the ANN approximates Bayesian
a-posteriori probabilities. These are conditional probabilities for each class
given the data, which according to Bayes' theorem are proportional to the
class prior. In other words, if the ANN ``knows'' that there are many more
normal galaxies than peculiars, it will prefer to guess a normal type whenever
it is in doubt. If we make the fractions of normals and peculiars equal in the
training set we remove the class prior and thus give the peculiars a better
chance. This will come at the price of lowering the success rate for normal
galaxies and the resulting ANN would not be of use for predicting the overall
morphological mix. In Table 6 we show the results of doing this for both of
our datasets. The hit rates for peculiars are increased significantly, while
those of the normal galaxies drop, as expected.
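Removing the class prior, as done for table 6, amounts to equalising the class counts in the training set. A minimal way to do this (names ours) is to undersample the majority class at random:

```python
import numpy as np

def balance_classes(X, y, seed=0):
    """Equalise the numbers of normals (label 0) and peculiars (label 1)
    in the training set by randomly undersampling the majority class,
    thereby removing the class prior (cf. table 6)."""
    rng = np.random.default_rng(seed)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    n = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, n, replace=False),
                           rng.choice(idx1, n, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]
```

Training on the balanced set raises the hit rate for peculiars at the cost of the normals, exactly the trade-off reported in table 6.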
\subsection{Predicting the Morphological Mix}
Apart from classifying individual galaxies correctly, it is interesting to
see if the ANN is capable of predicting the overall frequencies of peculiar
and normal galaxies in a given dataset. This can be done by utilising
the ANN probabilities : summing the probabilities assigned to each class
over the entire dataset gives a more accurate and robust picture of the
morphological mix than counting how many individual galaxies are assigned to
each class. The reason is that the actual values of the probabilities convey
much more information than the ``all or nothing'' approach of choosing a winning
class and counting whole numbers. In effect, summing up probabilities is not
subject to the ``round-off'' errors that arise when one sums integer numbers.
The picture is not so simple, though : The ANN approximates the conditional
probability of each class given the data, which depends on the class prior
probability. As a result, if one presents the ANN with a set of galaxies in
which the fractions of normals and peculiars are very different from their
fractions in the training sets, the ANN will not give the correct morphological
mix. The practical implication is that so long as our training sets have {\it
roughly} the correct mix, we can use the ANN to predict the mix in any large
dataset. This is the single most important requirement, and a very natural
one : The training set must represent the ``true'' world faithfully. A
supervised ANN can therefore be trained to mimic the human decision process
(Naim {\it et al.} 1995a) and thus save much effort and time. The trained ANN
is a parametric, non-linear classifier, and is not limited to a small number of
parameters. In this respect it is superior to linear discriminatory analysis, and is more
repeatable than eyeball classification.
Even if the training set contains roughly the correct mix of normals and
peculiars, it is unlikely to give {\it exactly} the same mix. As can be seen
in the top panel of figure 4, when one presents the trained ANN with different
fractions of peculiars its predicted fraction changes almost linearly with the
actual fraction, but this linear relation has a slope much flatter than the
desired value of 1. We produced figure 4 by specifying nine different
fractions of peculiars, ranging from $20\%$ to $60\%$ in steps of $5\%$. For
each of these values we select four independent subsets of 40 galaxies each
from the testing set, and sum the probabilities given by the ANN to get the
predicted fractions for each of the four subsets. We then average the ANN
predictions over the four datasets and work out their standard deviations,
which are depicted by the error bars in the figure. Also shown in the top panel
of figure 4 is the straight line of slope 1 which represents the desired
relation between the predicted and the actual fractions of peculiars.
The question is then how to correct the mix predicted by the ANN to get a more
realistic prediction. Since the ANN prediction varies almost linearly with the
true fraction, the simplest thing to do is fit a straight line to the predicted
fractions and use its slope and zero crossing to correct the prediction. The
bottom panel of figure 4 shows the result, and the straight line of slope 1 is
again plotted just to guide the eye. The error bars are estimated from the
derived errors in the slope and zero-crossing of the fitted line. This
procedure gives us an estimate of the error in predicting the morphological
mix, and can be used for any set of galaxies with a peculiar fraction between
$20\%$ and $60\%$.
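The correction procedure of figure 4 is a straight-line fit followed by an inversion. A sketch (function names ours; the calibration points are the nine peculiar fractions described above):

```python
import numpy as np

def calibrate(true_fracs, predicted_fracs):
    """Fit the near-linear relation between the true and the
    ANN-predicted peculiar fractions (top panel of figure 4)."""
    slope, intercept = np.polyfit(true_fracs, predicted_fracs, 1)
    return slope, intercept

def corrected_fraction(peculiar_probs, slope, intercept):
    """Sum the ANN probabilities for the peculiar class to get the raw
    predicted fraction, then invert the fitted straight line to
    estimate the true morphological mix."""
    predicted = np.sum(peculiar_probs) / len(peculiar_probs)
    return (predicted - intercept) / slope
```

Because the slope is much flatter than 1, the inversion also amplifies the scatter, which is why the corrected error bars in the bottom panel of figure 4 are larger.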
\section{Limits of Applicability}
We now turn to study the accuracy of ANN classifications as a function of
various limiting quantities. In figure 5 we show three histograms describing
the hit rates of the ANN as a function of magnitude, signal-to-noise ratio and
image size. Magnitudes are isophotal (Ratnatunga {\it et al., in preparation}).
Signal-to-noise ratios are integrated over all pixels associated with the
image (as determined by the object detection algorithm), for which the
individual ratio is larger than 1. This limit is imposed in order to avoid
inclusion of large patches of sky in the image. Image size is the number of
pixels whose intensity is at least $5~\sigma$ above sky, i.e., the bright
pixels of the image. The success rate for normal galaxies drops almost steadily
as one goes fainter and as the ratio of signal-to-noise decreases. A similar
trend exists when the number of bright pixels decreases. However, the picture
is less clear with respect to the peculiar galaxies : The trend with magnitude
is ``bumpy'' as is the trend with signal-to-noise, though the dependence on the
number of bright pixels appears more stable. We conclude that our analysis does
not break down sharply at any point, although if one's concern is mainly with
the identification of peculiars we would advise keeping the signal-to-noise
ratio above 180 and the number of bright pixels above 50.
\section{The Distribution of Bulge-to-Disk Ratios}
We now turn to apply the trained ANN to the larger sample 2, which was not
classified by eye. This sample contains 2319 images and includes stars as well
as galaxies. A quantitative criterion that will allow us to reject stars
without resorting to eyeball classification is required. We use the maximum
likelihood software (Ratnatunga {\it et al., in preparation}) to fit a smooth
photometric model to each galaxy image. The model has 10 degrees of freedom,
among which is the log of the effective half light radius. This parameter is
expected to be very small for stars and larger for galaxies. We test this
expectation in figure 6, where we show two histograms depicting the
distribution of half light radii in the eyeballed sample 1. The dashed line
depicts images which were eyeballed as either stars or point sources, and the
solid line depicts those that were deemed galaxies. There is a sharp
distinction between the two populations at a half light radius of roughly 0.1
arcsec, which corresponds to a single pixel in WFPC2 images. This value is a
function of magnitude, and compact, high redshift galaxies may be excluded.
However, we do not have any other means for removing stars from our sample,
and regardless of the true nature of an object, if its half light radius is
less than one pixel it contains very little morphological information for our
analysis. We therefore reject from sample 2 those images whose best fitted half
light radius is less than this value. The number of rejected images is 320,
and we are left with 1999 galaxies in this sample. These are then classified
using the trained ANN.
One of the parameters fitted by the maximum likelihood algorithm is the
bulge-to-total ratio, which is defined as the ratio of light in the fitted
bulge component to the sum of the light in the fitted bulge and disk
components (integrated to infinity for both). This B/T ratio is a quantitative
measure of the dominance of the bulge in the underlying photometric model, and
by construction is less susceptible to bright overlying features than is the
cruder light concentration. The software makes a decision, based on the
signal-to-noise ratio of the image, whether to fit a full bulge+disk model or
to fit only a pure bulge or a pure disk. In these latter cases the B/T ratio is
automatically fixed at 1 or 0 (respectively). In figure 7 we plot the Light
Concentration index against the B/T ratio, and the large numbers of points at
B/T values of 0 and 1 are a testimony of the significant fractions of images
that were fitted as either pure bulges or pure disks. The two quantities are
not very correlated, which is surprising. Since the B/T has been checked for
simulated images and gave very good results (Ratnatunga {\it et al., in
preparation}), we conclude that the light concentration index is too sensitive
to structure superimposed on the underlying smooth model. We therefore
concentrate in what follows on the B/T ratio as the primary indicator of bulge
dominance. In figure 8 we show the distribution of the B/T ratios among normal
and peculiar galaxies, as classified by the ANN. The plot includes galaxies for
which the estimated error in the B/T ratio was less than $0.1$ (the width of a
single bin in the histograms). This excludes 122 out of the 1999 galaxies. It
is clear that the fraction of disk dominated peculiars is larger than the
corresponding fraction for normal galaxies. Nevertheless, about $17\%$ of all
peculiars have a B/T ratio larger than $0.5$, which we take as ``bulge dominated''.
While this is less than the corresponding $23\%$ among normals, it still implies a
significant population.
\section{Discussion}
The frequencies of various morphological types that are quoted in the
literature are almost invariably estimated on the basis of eyeball
classifications. As a result it is difficult to compare work done by different
observers because they have different criteria for peculiarity. A95 made the
crucial step of introducing a {\it quantitative} criterion, but as shown
above it gives only a low hit rate for peculiars, together with a significant
contamination by normal galaxies. The confusion between normal and peculiar
galaxies can be seen in figures 5 and 6 of their paper, where one also sees
several peculiars lying in the region dominated by ellipticals. Their use of
the light-concentration index as a second parameter precludes bulge dominated
peculiar galaxies from the outset. It might be better to define peculiarity on
the basis of shape alone, without any reference to the distribution of light.
We therefore adopt a purely morphological approach, concentrating on the
features that make a galaxy appear peculiar to the eye. This then gives us the
freedom to define the population of peculiars on a morphological basis and
examine the distribution of bulge-to-disk ratios in this population separately.
As we show above, there is indeed a population of bulge-dominated peculiars.
This population probably corresponds to the ``blue nucleated galaxies'' (BNGs)
found in the Canada-France Redshift Survey (CFRS, Schade {\it et al.} 1995;
Schade {\it et al.} 1996), although this correspondence is still to be
confirmed. Unlike CFRS we do not have redshifts for the galaxies in our sample
but our imaging is much better, coming from HST. The success of fitting a bulge
photometric model depends strongly on the point spread function, and therefore
it is not obvious that ground based images deemed bulge dominated will maintain
this quality when imaged by HST (see Ratnatunga {\it et al., in preparation} for
a fuller discussion of this point).
We apply the ANNs trained using the 4P set to a collection of 1999
galaxies taken from the other 18 fields of the Groth-Westphal strip (Groth {\it
et al.} 1995). Summing up the probabilities for peculiars over this entire
set and inverting to get the true frequency of peculiars at moderate redshifts,
we find this frequency to be $35 \pm 15 \%$. The error we obtain is the first
to date to rely on statistical estimates rather than on the much cruder
estimate of eyeball classification errors for individual galaxies. The fact
that the error bar is large is a reflection of the uncertainties involved in
the entire process.
It is quite clear now that so long as the automated procedure relies on
eyeball classifications it will be partly subjective and difficult to
standardise. What is called for at this stage is a set of quantitative
parameters that adequately describe the appearance of galaxies. In this paper
we provide such a set. Once the parameter space is fixed, one way to proceed is
to look for correlations with other sources of information (e.g., colours,
luminosities), and examine whether in this space the locus of galaxies of,
e.g., a certain colour is significantly different from the locus of galaxies
of a different colour.
Morphological classification is a good starting point for studying galaxy
evolution, by virtue of the large numbers of available images. However, it can
only serve as a first step, because galaxy shapes form a continuum rather than
break down to separate groups. In addition, the classes one defines by eye
might not always have a one-to-one correlation with the dynamics and chemistry
of galaxies. In this paper we use morphology as a tool for singling out
interesting galaxies for more detailed studies. The next logical step is to
examine where different populations of galaxies reside in the space of our
morphological parameters, without recourse to any kind of classification.
Work along these lines is currently in progress (Naim {\it et al., in
preparation}). Hopefully, once large
quantities of physical data (e.g., spectra, rotation curves) are available for
these galaxies, we will be able to quantify and understand galaxy evolution
much better.
\newpage
\bigskip
{\bf ACKNOWLEDGEMENTS}
\bigskip
We would like to thank Ofer Lahav for statistical insight and Brian Ripley for
allowing us to use his ANN code. Stefano Casertano and Myungshin Im read the
manuscript and raised valuable points and Eric Ostrander's contribution to
the MDS pipeline processing of the images was invaluable. The referee made useful
comments and we thank him for helping us make this paper more readable. This research was
supported by funding from the HST Medium Deep Survey under GO grants p2684 {\it et seq.}
\bigskip
{\bf APPENDIX }: Algorithm for Calculating the Skeleton of an Image
\medskip
The following algorithm appears in Gonzalez \& Woods (1992) and was adopted by
us for calculating the skeleta of galaxy features. This algorithm is suitable
for skeletoning binary regions, i.e., regions in which each pixel is either
part of the feature whose skeleton we seek (value 1) or does not belong to it
at all (value 0). No reference is made to the actual intensity of the pixels.
Denote a pixel in the image by $p1$ and its nearest eight neighbours by $p2,
\ldots,p9$, starting with the one directly above it and going clockwise
around it (figure 9). Define as contour points pixels whose value is 1, with at
least one neighbour whose value is zero. The skeletoning process is iterative
and consists of repeating a set of two steps. In each step we examine value 1
pixels and those that comply with a given set of criteria are flagged for
deletion. The values of pixels flagged for deletion are all changed to 0 at the
end of each step. The process stops once no more pixels are flagged for
deletion.
In step 1 of each iteration a pixel $p1$ is flagged for deletion if it fulfills
all of the following requirements :
\begin{itemize}
\item{the number of value 1 neighbours is in the range $[2,6]$}
\item{the number of changes from value 0 to value 1, as one goes clockwise
around $p1$, is 1.}
\item{the value of at least one of $p2, p4$ and $p6$ is zero.}
\item{the value of at least one of $p4, p6$ and $p8$ is zero.}
\end{itemize}
In step 2 the first two requirements remain unchanged, but the last two are
modified as follows :
\begin{itemize}
\item{the value of at least one of $p2, p4$ and $p8$ is zero.}
\item{the value of at least one of $p2, p6$ and $p8$ is zero.}
\end{itemize}
This procedure is somewhat dependent on the order of the steps and the
resolution of the image (a small number of pixels will normally imply a highly
discretised contour and consequently a less dependable skeleton).
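The two-step procedure above translates directly into code. The sketch below (function name ours) operates on a binary mask, pads it with a zero border so that every region pixel has eight neighbours, and repeats the two sub-steps until no pixel is flagged:

```python
import numpy as np

def skeletonize(mask):
    """The appendix thinning algorithm (after Gonzalez & Woods 1992) for
    a binary region: iteratively delete contour pixels in two sub-steps
    until no pixel changes."""
    img = np.pad(mask.astype(np.uint8), 1)     # zero guard border

    def neighbours(y, x):
        # p2..p9, clockwise from the pixel directly above p1
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            flagged = []
            for y, x in zip(*np.nonzero(img)):
                p = neighbours(y, x)
                b = sum(p)                     # number of value-1 neighbours
                # number of 0 -> 1 changes going clockwise around p1
                a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                if step == 0:                  # p2,p4,p6 and p4,p6,p8
                    cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                else:                          # p2,p4,p8 and p2,p6,p8
                    cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    flagged.append((y, x))
            for y, x in flagged:               # delete at end of each step
                img[y, x] = 0
            changed = changed or bool(flagged)
    return img[1:-1, 1:-1].astype(bool)
```

The skeleton-to-region pixel ratio of \S~2.4 is then just `skeletonize(mask).sum() / mask.sum()`.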
\newpage
\begin{table}
\centering
\caption{Breakdown of Morphological Eyeball Classifications in I and V bands.}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
{\bf I / V} & {\bf E/S0} & {\bf Se} & {\bf Sl} & & {\bf P1} & {\bf P2} & &
{\bf Total} \cr
{\bf E/S0} & 46 & 21 & 5 & & 0 & 0 & & 72 \cr
{\bf Se} & 12 &158 & 80 & & 37 & 28 & &315 \cr
{\bf Sl} & 0 & 17 &174 & & 46 & 48 & &285 \cr
\cr
{\bf P1} & 0 & 9 & 10 & & 91 & 32 & &142 \cr
{\bf P2} & 0 & 1 & 13 & & 16 &134 & &164 \cr
\cr
{\bf Total} & 58 &206 &282 & &190 &242 & & \cr}
\label{eyeball}
\end{table}
\begin{table}
\centering
\caption{Correlation Matrix for the Parameters of the 4P Set.}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil &\quad\hfil#\hfil \cr
& {\bf Blobb-} & {\bf Iso.} & {\bf Iso. } & {\bf Skel.} \cr
& {\bf iness} & {\bf Disp.} & {\bf Fill.} & {\bf Ratio} \cr
{\bf Blobbiness} & 1.00 & -0.47 & 0.14 & -0.21 \cr
{\bf Iso. Disp.} & -0.47 & 1.00 & 0.24 & -0.02 \cr
{\bf Iso. Fill.} & 0.14 & 0.24 & 1.00 & -0.22 \cr
{\bf Skel. Ratio} & -0.21 & -0.02 & -0.22 & 1.00 \cr}
\label{corr_mat}
\end{table}
\begin{table}
\centering
\caption{Results of ANN trained on set 1 and tested on set 2.}
\halign{\quad\hfil#\hfil \cr
{\bf Using the C-A Pair (Light Concentration and Asymmetry)} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 316 & 19 & 94\% & 28 \% \cr
{\bf Eyeball Peculiar} & 120 & 34 & 22\% & 36 \% \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf Using the 4P Set} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 291 & 44 & 87\% & 23 \% \cr
{\bf Eyeball Peculiar} & 87 & 67 & 44\% & 40 \% \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf Using All Six Parameters} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 299 & 36 & 89\% & 22 \% \cr
{\bf Eyeball Peculiar} & 83 & 71 & 46\% & 34 \% \cr}
\label{ann1}
\end{table}
\begin{table}
\centering
\caption{Results of ANN trained on set 2 and tested on set 1.}
\halign{\quad\hfil#\hfil \cr
{\bf Using the C-A Pair} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 307 & 28 & 92\% & 28 \% \cr
{\bf Eyeball Peculiar} & 117 & 36 & 24\% & 44 \% \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf Using the 4P Set} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 305 & 30 & 91\% & 22 \% \cr
{\bf Eyeball Peculiar} & 86 & 67 & 44\% & 31 \% \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf Using All Six Parameters} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 304 & 31 & 91\% &21 \% \cr
{\bf Eyeball Peculiar} & 80 & 73 & 48\% & 30 \% \cr
\cr
\cr}
\label{ann2}
\end{table}
\begin{table}
\centering
\caption{Breakdown of ANN Successes and Failures in classifying Peculiars,
as a function of the degree of Certainty in Eyeball Classifications.}
\halign{\quad\hfil#\hfil \cr
{\bf For the ANN trained on set 1.} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil \cr
& {\bf Eyeball} & {\bf Eyeball} \cr
& {\bf Certain} & {\bf Uncertain} \cr
{\bf ANN Successes} & 48 & 19 \cr
{\bf ANN Failures} & 40 & 47 \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf For the ANN trained on set 2.} \cr
\cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil \cr
& {\bf Eyeball} & {\bf Eyeball} \cr
& {\bf Certain} & {\bf Uncertain} \cr
{\bf ANN Successes} & 52 & 15 \cr
{\bf ANN Failures} & 35 & 51 \cr}
\label{ann_sf}
\end{table}
\begin{table}
\centering
\caption{Results of ANN Trained on Equal Numbers of Normals and Peculiars.
Compare with tables 3 and 4.}
\halign{\quad\hfil#\hfil \cr
{\bf Training on Set 1} \cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 108 & 46 & 70\% & 30 \% \cr
{\bf Eyeball Peculiar} & 46 & 108 & 70\% & 30 \% \cr
\cr
\cr}
\halign{\quad\hfil#\hfil \cr
{\bf Training on Set 2} \cr}
\halign{\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil &\quad\hfil#\hfil
&\quad\hfil#\hfil \cr
& {\bf ANN} & {\bf ANN} & {\bf Hit} & {\bf False} \cr
& {\bf Normal} & {\bf Peculiar} & {\bf Rate} & {\bf Alarm} \cr
{\bf Eyeball Normal} & 95 & 59 & 62\% & 27 \% \cr
{\bf Eyeball Peculiar} & 35 & 118 & 77\% & 33 \% \cr
\cr
\cr}
\end{table}
\newpage
\bigskip
In classical interactive proof, many results have been obtained
on the complexities of restricted verifiers. For example, Ref.~\cite{Con93}
surveys the studies of the case when the verifier is restricted to log-space computing.
In quantum interactive proof, on the other hand, we have more options for
restricting the verifier's ability. For example, we can
assume that the verifier can perform some restricted set of gates,
or even that the verifier is classical.
Most of the research so far on such restricted quantum interactive proofs
has been done for the multi-prover case or the case allowing
multiple rounds of communication between the prover and verifier~\cite{RUV,Ji15},
and therefore the simplest case, namely, a single prover and a single communication,
is not well understood.
The purpose of the present paper is to study the class QMA with a restricted verifier.
QMA (Quantum Merlin-Arthur) is a quantum analog of NP (or, more precisely,
MA (Merlin-Arthur)) defined by Kitaev~\cite{Kitaev}
and Watrous~\cite{Watrous} (also discussed by Knill~\cite{Knill}).
The prover, called Merlin, has unbounded
computational power and the verifier, called Arthur, can perform polynomial-time universal
quantum computing
by using a polynomial-size quantum state (so called a witness)
sent from Merlin. For a yes instance, Arthur accepts the witness
with high probability, and for a no instance, any Merlin's witness
is rejected by Arthur with high probability.
The formal definition of QMA is as follows:
\begin{definition}
A promise problem $A=(A_{yes},A_{no})$ is in QMA if and only if there exist polynomials
$p$, $q$, and a polynomial-time uniform family $\{Q_x\}$ of quantum circuits,
where $x\in A$ is the input with $|x|=n$,
$Q_x$ takes as input a $p(n)$-qubit quantum state (so called the witness),
and $q(n)$ ancilla qubits in state $|0\rangle^{\otimes q(n)}$,
such that
\begin{itemize}
\item[1.] Completeness: if $x\in A_{yes}$, then there exists a $p(n)$-qubit quantum
state $|w\rangle$ such that $Q_x$ accepts $|w\rangle$ with probability at
least $a$.
\item[2.]
Soundness: if $x\in A_{no}$, then for any $p(n)$-qubit quantum state $\xi$,
$Q_x$ accepts $\xi$ with probability at most $b$.
\end{itemize}
Here, $a-b\ge \frac{1}{poly(n)}$.
\end{definition}
In this definition, it is assumed that Arthur
can perform universal quantum computing.
In this paper, we investigate what happens if Arthur is restricted to
applying only Clifford gate operations (plus universal classical computing).
Here, Clifford gate operations are operations generated by
$H\equiv|+\rangle\langle 0|+|-\rangle\langle1|$,
$S\equiv|0\rangle\langle 0|+i|1\rangle\langle1|$,
and
$CZ\equiv|0\rangle\langle 0|\otimes I+|1\rangle\langle1|\otimes Z$,
where $|\pm\rangle\equiv\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)$,
$I\equiv|0\rangle\langle0|+|1\rangle\langle1|$ is the two-dimensional identity operator,
and $Z\equiv|0\rangle\langle 0|-|1\rangle\langle1|$ is the Pauli $Z$ operator.
Under this restriction, Arthur's computational power is, in some sense,
restricted to be classical~\cite{Nishimura_note},
since the Gottesman-Knill theorem~\cite{GK} says that
Clifford gate operations (plus universal classical computing) are classically simulable.
We show that such a restriction nevertheless does not change the power of QMA.
In other words, we show
\begin{theorem}
${\rm QMA}_{\rm Clifford}={\rm QMA}$.
\end{theorem}
Here, ${\rm QMA}_{\rm Clifford}$ is defined as follows:
\begin{definition}
\label{def:QMA_Clifford}
The definition of ${\rm QMA}_{\rm Clifford}$
is the same as that of QMA except that
``a polynomial-time uniform family $\{Q_x\}$ of quantum circuits"
is replaced with
``a polynomial-time uniform family $\{V_x\}$ of quantum circuits that consist
of
\begin{itemize}
\item[1.]
Preparation of $|0\rangle$.
\item[2.]
Measurements in the $Z$ basis (at any time during the computation).
\item[3.]
Clifford gates (that can be classically
controlled by the previous measurement results)."
\end{itemize}
Note that, for simplicity, we assume that Arthur can also perform
the classical XOR gate.
It is known that the generation of the three-qubit GHZ state (which
can be done with the preparation of $|0\rangle^{\otimes 3}$,
and applications of $H$ and $CZ$),
adaptive Pauli measurements (which can be done with the classically controlled
$H$, and the $Z$-basis measurements), and the classical XOR gate
are universal for classical computing~\cite{Janet}.
\end{definition}
Our idea to show the theorem is to use the fact that the preparation
of many (i.e., polynomial in the input size) copies of the single-qubit state,
\begin{eqnarray*}
|H\rangle\langle H|\equiv\frac{1}{2}\Big[
I-\frac{1}{\sqrt{2}}(X+Z)\Big]
=\Big(\sin\frac{\pi}{8}|0\rangle-\cos\frac{\pi}{8}|1\rangle\Big)
\Big(\sin\frac{\pi}{8}\langle0|-\cos\frac{\pi}{8}\langle1|\Big),
\end{eqnarray*}
so called a magic state, plus any Clifford gate operations are universal
for quantum computing~\cite{magic,Fujii_note}.
Here, $X\equiv|0\rangle\langle1|+|1\rangle\langle0|$ is the Pauli $X$ operator.
Therefore, Arthur needs only Clifford gate operations if he asks
Merlin to add magic states to the witness.
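As a quick numerical sanity check (not part of the argument), one can verify that the state $|H\rangle$ defined above is the $-1$ eigenstate of the reflection $(X+Z)/\sqrt{2}$, which is exactly the statement $|H\rangle\langle H|=\frac{1}{2}[I-(X+Z)/\sqrt{2}]$. The following Python sketch checks this with plain floating-point arithmetic:

```python
import math

# Amplitudes of |H> = sin(pi/8)|0> - cos(pi/8)|1> in the computational basis.
s, c = math.sin(math.pi / 8), math.cos(math.pi / 8)
a0, a1 = s, -c

# Apply (X + Z)/sqrt(2): X swaps the amplitudes, Z negates the |1> amplitude.
Xv = (a1, a0)
Zv = (a0, -a1)
Rv = ((Xv[0] + Zv[0]) / math.sqrt(2), (Xv[1] + Zv[1]) / math.sqrt(2))

# |H> is the -1 eigenstate, so Rv should equal -|H> componentwise.
print(Rv[0] + a0, Rv[1] + a1)  # both ~ 0
```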
One problem is that, for a no instance, Arthur cannot trust Merlin:
Merlin might send other states pretending to be sending magic states.
Therefore, Arthur has to perform a test
such that if the state sent from Merlin passes the test,
then the output of the test is guaranteed to be close to magic states
with a sufficiently small significance level.
We show that such a test does exist,
and therefore ${\rm QMA}_{\rm Clifford}={\rm QMA}$.
The idea can also be applied to QIP (Quantum Interactive Proof),
which is a generalization of QMA
where many quantum messages can be exchanged between Merlin (the prover)
and Arthur (the verifier).
QIP was defined by Watrous in Ref.~\cite{Wat99}, and it is known that
${\rm QIP}[3]={\rm QIP}={\rm PSPACE}$~\cite{KW00,JJUW09},
where QIP[$k$] means that
the prover and the verifier can exchange quantum messages $k$ times
(hence ${\rm QMA}={\rm QIP[1]}$),
and ${\rm QIP}\equiv\cup_{k=poly(n)} {\rm QIP}[k]$.
We show ${\rm QIP}[3]_{\rm Clifford}={\rm QIP[3]}$,
where ${\rm QIP}[3]_{\rm Clifford}$ is defined in a similar way
as ${\rm QMA}_{\rm Clifford}$:
the verifier of ${\rm QIP[3]}$ is restricted to only Clifford gate operations
(plus classical XOR gate).
Finally, it is interesting to compare our result on QMA
with QCMA.
QCMA is a variant of QMA where the witness sent from Merlin
is not a quantum state but a classical bit string.
Since the three operations in Definition~\ref{def:QMA_Clifford}
are classically simulable (the Gottesman-Knill theorem~\cite{GK}),
we obtain ${\rm QCMA}_{\rm Clifford}\subseteq{\rm MA}$.
Here, ${\rm QCMA}_{\rm Clifford}$ is defined in a similar way as ${\rm QMA}_{\rm Clifford}$:
Arthur is restricted to Clifford gates (plus the classical XOR gate).
\if0
By using this fact, it is immediate to show that
the class ${\rm BQP}_{\rm /qpoly}$ does not change even if the
verifier's computing ability is restricted to Clifford gate operations,
since the prover can add magic states to the advice.
Here, ${\rm BQP}_{\rm /qpoly}$ is defined by Nishimura and Yamakami~\cite{NY} as follows:
\begin{definition}
A language $L$ is in ${\rm BQP}_{\rm /qpoly}$ if and only if there exist a polynomial-size
quantum circuit family $\{Q_n\}_n$ and a polynomial-size family of quantum states
$\{|\psi_n\rangle\}_n$ such that
\begin{itemize}
\item[1.]
If $x\in L$ then $q(x)\ge \frac{2}{3}$, where $q(x)$ is the probability
that the first qubit is measured to be $|1\rangle$ after $Q_n$ is applied
to $|x\rangle\otimes|0...0\rangle\otimes|\psi_n\rangle$.
\item[2.]
If $x\notin L$ then $q(x)\le \frac{1}{3}$.
\end{itemize}
\end{definition}
The difference between an advice and a witness is that the latter is not trusted.
Therefore, in our situation, Merlin does not necessarily send correct
magic states to Arthur for a no instance. Therefore, Arthur needs some verification procedure
such that if the state sent from Merlin passes the test, then the output of the test is guaranteed
to be close to magic states. We show that such a test does
exist.
\fi
\section{Magic state test}
Let us consider the following test,
which we call {\it the magic state test}:
\begin{itemize}
\item[1.]
Let $\Omega_1$ be a $(2r(n)+s(n)+l(n)+p(n))$-qubit system, where
$r$, $s$, $l$, and $p$ are polynomials specified later.
\item[2.]
Let $\Omega_2$ be the subsystem of $\Omega_1$ consisting
of the first $(2r(n)+s(n)+l(n))$ qubits of $\Omega_1$.
\item[3.]
We randomly choose $(2r(n)+s(n))$ qubits from $\Omega_2$.
Let $\Omega_3$ be the system of thus chosen $(2r(n)+s(n))$ qubits.
\item[4.]
We further randomly choose $2r(n)$ qubits from $\Omega_3$,
and divide thus chosen $2r(n)$ qubits into two $r(n)$-qubit groups, $S_1$ and $S_2$.
We measure each qubit of $S_1$ ($S_2$, resp.) in
$X$ ($Z$, resp.) basis.
Let $x$ and $z$ be the number of obtaining $+1$ results for
$X$ and $Z$ measurements, respectively.
If $x$ and $z$ are larger than $F(\delta_1,\delta_2,r(n))$, the test is passed,
where $F(\delta_1,\delta_2,r(n))$ is the maximum value satisfying
\begin{eqnarray*}
\delta_1\ge \sum_{k=0}^{F(\delta_1,\delta_2,r(n))}{r(n)\choose k}
\Big(\frac{1}{2}-\frac{1}{2\sqrt{2}}+\delta_2\Big)^k
\Big(\frac{1}{2}+\frac{1}{2\sqrt{2}}-\delta_2\Big)^{r(n)-k}.
\end{eqnarray*}
Here, $\delta_1$ and $\delta_2$ are specified later.
\item[5.]
Let $\sigma$ be the state of $s(n)$ qubits of $\Omega_3$ that were not measured.
\end{itemize}
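The threshold $F(\delta_1,\delta_2,r(n))$ in step 4 is simply the largest $F$ whose binomial lower tail is still at most $\delta_1$, so it can be computed by a direct scan. The Python sketch below does this; the concrete parameter values at the end are purely illustrative and are not the asymptotic choices made later in the lemma:

```python
import math

def threshold_F(delta1, delta2, r):
    """Largest F with sum_{k=0}^{F} C(r, k) p^k (1-p)^(r-k) <= delta1,
    where p = 1/2 - 1/(2*sqrt(2)) + delta2; returns -1 if no such F exists."""
    p = 0.5 - 0.5 / math.sqrt(2) + delta2
    tail, F = 0.0, -1
    for k in range(r + 1):
        tail += math.comb(r, k) * p**k * (1 - p) ** (r - k)
        if tail > delta1:
            break
        F = k
    return F

# Illustrative values only (not the asymptotic choices of the lemma):
F = threshold_F(1 / 4000, 0.001, 1000)
print(F)
```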
We can show that the correct magic states pass the
magic state test with high probability,
and that if we pass the magic state test,
$\sigma$ is close to the correct magic states.
More precisely, we can show the following lemma. (Its proof is given in
Appendix~\ref{app1}.)
\begin{lemma}
\label{Hayashi_lemma}
We take
$
\delta_2=\frac{2\delta_1}{\sqrt{2}s(n)},
$
and choose $\delta_1$ as
$
\delta_1 \le\frac{1}{4000}.
$
We also take $r(n)$ as
\begin{eqnarray}
r(n)=
\Big(\frac{\sqrt{2}s(n)}{2\delta_1}
\sqrt{8} (\Phi^{-1}(\epsilon)+\Phi^{-1}(\delta_1))\Big)^2,
\label{BB}
\end{eqnarray}
where $\epsilon$ is any constant,
and
\begin{eqnarray*}
\Phi(x)\equiv\int_{x}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}dt.
\end{eqnarray*}
Finally, we take $l$ as
\begin{eqnarray}
\sqrt{\frac{2(2r(n)+s(n)-1)^2\log2}{l(n)}}\le\frac{1}{2000}.
\label{CC}
\end{eqnarray}
Then, we have the following items.
\begin{description}
\item[(i)]
If
\begin{eqnarray*}
\rho=|H\rangle\langle H|^{\otimes 2r(n)+s(n)+l(n)}\otimes \xi,
\end{eqnarray*}
we pass the magic state test with probability $1-\epsilon-o(1)$ for any $p(n)$-qubit state $\xi$.
\item[(ii)]
Furthermore, for any state $\rho$ of $\Omega_1$, if we pass the magic state test,
we can guarantee that
\begin{eqnarray}
\langle H^{\otimes s(n)}|\sigma|H^{\otimes s(n)}\rangle
\ge 1-\frac{1}{100} \label{ab}
\end{eqnarray}
with the significance level $\frac{1}{10}$.
\end{description}
\end{lemma}
(Note that the significance level is
the maximum passing probability
when malicious Merlin sends incorrect states
so that the resultant state $\sigma$ does not satisfy Eq.~(\ref{ab})~\cite{textbook}.
)
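The parameter choices in Lemma~\ref{Hayashi_lemma} are easy to evaluate numerically. The sketch below (with illustrative values $s(n)=100$, $\delta_1=1/4000$, $\epsilon=1/10$, none of which are prescribed beyond the stated constraints) computes $\delta_2$, then $r(n)$ from Eq.~(\ref{BB}), and the smallest $l(n)$ satisfying Eq.~(\ref{CC}); note that $\Phi^{-1}$ is the inverse of the upper-tail function $\Phi$:

```python
import math
from statistics import NormalDist

def lemma_parameters(s, delta1, eps):
    """Evaluate delta2, r(n) from Eq. (BB) and the smallest l(n) satisfying Eq. (CC)."""
    Phi_inv = lambda p: NormalDist().inv_cdf(1 - p)  # inverse of the upper tail Phi
    delta2 = 2 * delta1 / (math.sqrt(2) * s)
    r = ((math.sqrt(2) * s / (2 * delta1))
         * math.sqrt(8) * (Phi_inv(eps) + Phi_inv(delta1))) ** 2
    l = 2 * (2 * r + s - 1) ** 2 * math.log(2) * 2000 ** 2  # makes Eq. (CC) tight
    return delta2, r, l

delta2, r, l = lemma_parameters(100, 1 / 4000, 1 / 10)
print(delta2, r, l)
```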
\if0
Since
\begin{eqnarray*}
\frac{1}{2}\| \sigma- |H\rangle\langle H|^{\otimes s(n)}\|_1
\le
\sqrt{1- F(\sigma,|H\rangle\langle H|^{\otimes s(n)})^2},
\end{eqnarray*}
Eq.~\eqref{ab} implies that
\begin{eqnarray}
\frac{1}{2}\| \sigma- |H\rangle\langle H|^{\otimes s(n)}\|_1 \le
\sqrt{1-\Big(1-\frac{1}{100}\Big)}
= \sqrt{\frac{1}{100}}= \frac{1}{10}.
\label{l1}
\end{eqnarray}
\fi
We can also show the following lemma
(its proof is given in Appendix~\ref{app2}),
which will be used later:
\begin{lemma}
\label{Hayashi_lemma2}
Let $\rho$ be a state in ${\mathcal H}_1\otimes {\mathcal H}_2$,
where ${\mathcal H}_1$ and ${\mathcal H}_2$ are Hilbert spaces.
For any $|x\rangle\in {\mathcal H}_1$,
\begin{eqnarray}
\max_{\rho'\in {\mathcal H}_2}F(|x\rangle\langle x|\otimes \rho',\rho)^2=F(|x\rangle\langle x|,
{\rm Tr}_2(\rho))^2,
\label{l2}
\end{eqnarray}
where $F(\sigma_1,\sigma_2)\equiv{\rm Tr}|\sqrt{\sigma_1}\sqrt{\sigma_2}|$
is the fidelity between $\sigma_1$ and $\sigma_2$,
and ${\rm Tr}_2$ is the partial trace over ${\mathcal H}_2$.
\end{lemma}
\section{Proof of the Theorem}
Now let us show our main result.
{\it Proof of Theorem 2}:
${\rm QMA}_{\rm Clifford}\subseteq{\rm QMA}$
is obvious. Let us show
${\rm QMA}_{\rm Clifford}\supseteq{\rm QMA}$.
We assume that a promise problem $A$ is in
${\rm QMA}$.
Then, there exist a polynomial-time uniform family $\{Q_x\}$ of quantum circuits,
and the $p(n)$-qubit witness state $|w\rangle$ that is accepted by Arthur with
probability at least $a$ if $x\in A_{yes}$,
while any $p(n)$-qubit state is accepted with probability at most $b$
if $x\in A_{no}$. Here we can take $a=\frac{2}{3}$ and $b=\frac{1}{3}$.
Let $V_x$ be a quantum circuit satisfying the conditions of Definition~\ref{def:QMA_Clifford}
and simulating $Q_x$ exactly by the method in Ref.~\cite{magic}, and let $s(n)$
be the number of magic states consumed for this simulation.
Arthur runs the following protocol:
\begin{itemize}
\item[1.]
Arthur receives a $(2r(n)+s(n)+l(n)+p(n))$-qubit state $\rho$ from Merlin.
If Merlin is honest,
\begin{eqnarray*}
\rho=|H\rangle\langle H|^{\otimes 2r(n)+s(n)+l(n)}\otimes|w\rangle\langle w|.
\end{eqnarray*}
If he is malicious, $\rho$ can be any state.
\item[2.]
Arthur does the magic state test on $\rho$.
\item[3.]
If $\rho$ fails to pass the test, Arthur rejects.
\item[4.]
If $\rho$ passes the test, Arthur now has an $(s(n)+p(n))$-qubit state.
The first $s(n)$ qubits are used as magic states to simulate
$Q_x$ with $V_x$,
and the state of the last $p(n)$ qubits
is used as the witness for $Q_x$.
\end{itemize}
First, we consider the case when
$x\in A_{yes}$.
In this case, Merlin sends correct magic states, and therefore
the probability of passing the test
is $1-\frac{1}{10}$ from Lemma~\ref{Hayashi_lemma},
where we take $\epsilon=\frac{1}{10}$.
Therefore, Arthur's acceptance
probability $p_{acc}$ is
\begin{eqnarray*}
p_{acc}\ge a\times \Big(1-\frac{1}{10}\Big)=\frac{9a}{10}\equiv a'.
\end{eqnarray*}
Next let us consider the case when
$x\in A_{no}$.
Arthur's acceptance probability $p_{acc}$ is
\begin{eqnarray*}
p_{acc}=\mbox{Tr}(C_x\eta)\times P(\mbox{pass the test})
\end{eqnarray*}
for a certain POVM element $C_x$ such that the corresponding POVM depends on $x$
and is implementable with only Clifford gates,
where $\eta$ is the $(s(n)+p(n))$-qubit state after passing the magic state test,
and $P(\mbox{pass the test})$ is the probability of passing the
magic state test.
From Eqs.~(\ref{ab}) and (\ref{l2}), and the relation,
\begin{eqnarray*}
\frac{1}{2}\|\rho_1-\rho_2\|_1
\le\sqrt{1-F(\rho_1,\rho_2)^2},
\end{eqnarray*}
between the fidelity and the trace distance
(e.g., Eq.~(6.106) of Ref.~\cite{Hayashi_book}),
$\eta$ satisfies
\begin{eqnarray*}
\frac{1}{2}\Big\|\eta-|H\rangle\langle H|^{\otimes s(n)}\otimes\xi\Big\|_1
\le \frac{1}{10}
\end{eqnarray*}
with probability $1-\frac{1}{10}$
for a certain $p(n)$-qubit state $\xi$.
Then,
\begin{eqnarray*}
\mbox{Tr}(C_x\eta)
-\mbox{Tr}\Big[C_x(|H\rangle\langle H|^{\otimes s(n)}\otimes \xi)\Big]
&\le&
\Big|\mbox{Tr}(C_x\eta)
-\mbox{Tr}\Big[C_x(|H\rangle\langle H|^{\otimes s(n)}\otimes \xi)\Big]\Big|\\
&\le&
\frac{1}{2}\Big\|\eta-|H\rangle\langle H|^{\otimes s(n)}\otimes \xi\Big\|_1\\
&\le&\frac{1}{10}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
p_{acc}&=&
\mbox{Tr}(C_x\eta)P(\mbox{pass the test})\\
&\le&
\mbox{Tr}(C_x\eta)\\
&\le&\frac{9}{10}\Big(\mbox{Tr}\Big[
C_x(|H\rangle\langle H|^{\otimes s(n)}\otimes\xi)\Big]+\frac{1}{10}\Big)
+\frac{1}{10}\\
&\le&\frac{9b}{10}+\frac{9}{100}+\frac{1}{10}
\equiv b'.
\end{eqnarray*}
Since $a=\frac{2}{3}$ and $b=\frac{1}{3}$,
\begin{eqnarray*}
a'-b'&=&\frac{9a}{10}-\frac{9b}{10}-\frac{9}{100}-\frac{1}{10}\\
&=&\frac{11}{100}.
\end{eqnarray*}
Therefore,
$A$ is in ${\rm QMA}_{\rm Clifford}$.
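The closing bookkeeping can be checked mechanically with exact rational arithmetic; with $a=\frac{2}{3}$ and $b=\frac{1}{3}$, the thresholds $a'$, $b'$ and the gap $a'-b'=\frac{11}{100}$ come out as:

```python
from fractions import Fraction

a, b = Fraction(2, 3), Fraction(1, 3)
a_prime = Fraction(9, 10) * a                                       # a' = 9a/10
b_prime = Fraction(9, 10) * b + Fraction(9, 100) + Fraction(1, 10)  # b'
print(a_prime, b_prime, a_prime - b_prime)  # 3/5 49/100 11/100
```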
\if0
\subsection{QMA$_1$}
QMA$_1$ is a variant of QMA defined in the following way:
\begin{definition}
A promise problem $A$ is in QMA$_1$ iff Arthur's acceptance probability is 1
in the case of $x\in A_{yes}$.
\end{definition}
It seems that we cannot keep the class QMA$_1$ while
restricting Arthur's computation ability
to Clifford gate operations, since
even if Merlin sends correct magic states,
the magic state test can reject them with a small probability.
\fi
\section{QIP}
We can apply our idea to QIP.
Let us consider ${\rm QIP}[3]$.
First, the prover applies a unitary map ${\mathcal P_1}$ on
the $\alpha(n)$-qubit state $|0\rangle_P^{\otimes \alpha(n)}$, so
called the prover's private register, and the $\beta(n)$-qubit state
$|0\rangle_M^{\otimes \beta(n)}$, so called the message register,
where $\alpha$ and $\beta$ are some polynomials.
The prover sends the message register of
\begin{eqnarray*}
{\mathcal P}_1\Big(\fbox{0}_P^{\otimes\alpha(n)}\otimes\fbox{0}_M^{\otimes\beta(n)}\Big)
\end{eqnarray*}
to the verifier, where
we have used the notation $\fbox{$x$}\equiv|x\rangle\langle x|$.
The verifier applies a unitary map ${\mathcal V_1}$ on the message register
plus the $\gamma(n)$-qubit state $|0\rangle_V^{\otimes \gamma(n)}$,
so called the verifier's private register:
\begin{eqnarray*}
{\mathcal V}_1\Big({\mathcal P}_1
\big(\fbox{0}_P^{\otimes \alpha(n)}\otimes\fbox{0}_M^{\otimes\beta(n)}\big)
\otimes\fbox{0}_V^{\otimes\gamma(n)}\Big),
\end{eqnarray*}
where $\gamma$ is a polynomial.
The verifier sends the message register to the
prover, and the prover returns it
after applying a unitary map ${\mathcal P}_2$ on the message register plus
the prover's private register.
Now they share the state
\begin{eqnarray*}
{\mathcal P}_2{\mathcal V}_1\Big({\mathcal P}_1
\big(\fbox{0}_P^{\otimes\alpha(n)}\otimes\fbox{0}_M^{\otimes\beta(n)}\big)\otimes
\fbox{0}_V^{\otimes \gamma(n)}\Big),
\end{eqnarray*}
where the message register is possessed by the verifier.
Finally, the verifier performs a POVM measurement
on the verifier's private register plus the message register
in order to decide the acceptance or
rejection.
In the case of ${\rm QIP}[3]_{\rm Clifford}$,
the verifier performs the magic state test,
which we denote ${\mathcal T}$, on
(the message part of) the state
${\mathcal P}_1\big(\fbox{0}_P^{\otimes \alpha(n)}\otimes\fbox{0}_M^{\otimes \beta(n)}\big)$.
Let $C_x$ be the POVM element applied by the verifier
that corresponds to the acceptance.
(The corresponding POVM depends on $x$ and is implementable with only
Clifford gate operations.)
Then, in a similar way as in the case of QMA, we can show
\begin{eqnarray*}
&&\mbox{Tr}\Big[C_x {\mathcal P}_2{\mathcal V}_1\Big(
{\mathcal T}{\mathcal P}_1\big(\fbox{0}_P^{\otimes \alpha}\otimes\fbox{0}_M^{\otimes \beta}\big)
\otimes\fbox{0}_V^{\otimes \gamma}
\Big)\Big]
-
\mbox{Tr}\Big[C_x {\mathcal P}_2{\mathcal V}_1\Big(
\big(\fbox{$H$}^{\otimes s}\otimes \xi\big)\otimes\fbox{0}_V^{\otimes \gamma}
\Big)\Big]\\
&\le&
\frac{1}{2}\Big\|
{\mathcal P}_2{\mathcal V}_1\Big({\mathcal T}
{\mathcal P}_1\big(\fbox{0}_P^{\otimes \alpha}\otimes\fbox{0}_M^{\otimes \beta}\big)
\otimes\fbox{0}_V^{\otimes \gamma}
\Big)
-
{\mathcal P}_2{\mathcal V}_1\Big(
\big(\fbox{$H$}^{\otimes s}\otimes \xi\big)\otimes\fbox{0}_V^{\otimes \gamma}
\Big)\Big\|_1\\
&=&
\frac{1}{2}\Big\|
{\mathcal T}{\mathcal P}_1
\big(\fbox{0}_P^{\otimes \alpha}\otimes\fbox{0}_M^{\otimes \beta}\big)
-
\fbox{$H$}^{\otimes s}\otimes \xi
\Big\|_1\\
&\le&\frac{1}{10},
\end{eqnarray*}
where $\xi$ is a state of a part of the message register and
prover's private register.
Then, by using an argument similar to that for QMA, we can show
${\rm QIP}[3]\subseteq{\rm QIP}[3]_{\rm Clifford}$.
For ${\rm QIP}[2]$, we do not know whether
${\rm QIP}[2]={\rm QIP}[2]_{\rm Clifford}$ holds,
since in this case, the verifier has to perform a unitary map first,
and no magic state is available for the first unitary map.
\acknowledgements
TM is supported by the JSPS Grant-in-Aid for Young Scientists (B) No.26730003 and
the MEXT JSPS Grant-in-Aid for Scientific Research on Innovative Areas No.15H00850.
MH is partially supported by the
JSPS Grant-in-Aid for Scientific Research (A) No. 23246071 and the National
Institute of Information and Communication Technology
(NICT), Japan. The Centre for Quantum Technologies is
funded by the Singapore Ministry of Education and the National
Research Foundation as part of the Research Centres of Excellence programme.
HN is supported by the JSPS Grant-in-Aid for Scientific Research (A)
Nos.23246071, 24240001, 26247016,
and (C) No.25330012,
and the MEXT JSPS Grant-in-Aid for Scientific Research on
Innovative Areas No.24106009.
\label{S: Introduction}
Various mathematical models for the propagation of a rumour within a population have been developed in the past decades. Two classical models were introduced by Daley and Kendall~\cite{DK} and Maki and Thompson~\cite{MT}.
In the first model (which we shorten to [DK] model), a closed homogeneously mixing population of $N + 1$ individuals is considered.
People are subdivided into three classes: \textit{ignorants} (those not aware of the rumour), \textit{spreaders} (who are spreading it), and \textit{stiflers} (who know the rumour but have ceased communicating it after meeting somebody who has already heard it).
We adhere to the usual notation, denoting the number of ignorants, spreaders and stiflers at time $t$ by $X(t)$, $Y(t)$ and $Z(t)$, respectively.
Initially, $X(0) = N$, $Y(0) = 1$ and $Z(0) = 0$, and $X(t) + Y(t) + Z(t) = N + 1$ for all~$t$.
The process $\{(X(t), Y(t))\}_{t \geq 0}$ is a continuous-time Markov chain with transitions and corresponding rates given by
\begin{equation*}
\begin{array}{cc}
\text{transition} \quad &\text{rate} \\[0.1cm]
(-1, 1) \quad &X Y, \\[0.1cm]
(0, -2) \quad &\displaystyle\binom{Y}{2}, \\[0.2cm]
(0, -1) \quad &Y (N + 1 - X - Y).
\end{array}
\end{equation*}
People interact by pairwise contacts, and the three possible transitions correspond to spreader-ignorant, spreader-spreader and spreader-stifler interactions.
In the first case, the spreader tells the rumour to an ignorant, who becomes a spreader.
The two other transitions represent the transformation of the spreader(s) involved in the meeting into stifler(s), that is, the loss of interest in propagating the rumour derived from learning that it is already known by the other individual in the meeting.
In the model formulated by Maki and Thompson~\cite{MT} (which we shorten to [MT] model), differently from the [DK] model, the rumour is spread by \textit{directed contact} of the spreaders with other individuals, i.e., the initiator is distinguished from the recipient.
In addition, when a spreader contacts another spreader, only the initiating one becomes a stifler.
Thus, the continuous-time Markov chain $\{(X(t), Y(t))\}_{t \geq 0}$ evolves according to the following table
\begin{equation*}
\begin{array}{cc}
\text{transition} \quad &\text{rate} \\[0.1cm]
(-1, 1) \quad &X Y, \\[0.1cm]
(0, -1) \quad &Y (N - X).
\end{array}
\end{equation*}
We refer to Daley and Gani~\cite[Chapter~5]{DG} for an excellent account on the subject of rumour models.
We introduce a general stochastic rumour model with the following characteristics:
\begin{enumerate}
\item[(i)] Distinct rates for meetings between two spreaders in which both become stiflers or only one does.
\item[(ii)] A new class of individuals, which we call \textit{uninterested}.
In traditional models, once an ignorant is told the rumour, only the transformation into a spreader is allowed.
In our model, this individual has the choice (with a certain probability) of becoming uninterested, that is, of stifling right after hearing the rumour.
\end{enumerate}
We underline that a stifler is an ex-spreader who has lost interest in propagating the rumour, whereas an uninterested individual is an ex-ignorant whose lack of interest arose immediately upon hearing the rumour from a spreader.
To define the model, we consider the notation introduced so far and denote by $U(t)$ the number of uninterested individuals at time $t$.
Initially, $X(0) = N$, $U(0) = 0$, $Y(0) = 1$ and $Z(0) = 0$, and $X(t) + U(t) + Y(t) + Z(t) = N + 1$ for all~$t$.
Let $V(t) = (X(t), U(t), Y(t))$.
We suppose that $\{V(t)\}_{t \geq 0}$ is a continuous-time Markov chain with initial state $(N, 0, 1)$ and
\begin{equation*}
\begin{array}{cc}
\text{transition} \quad &\text{rate} \\[0.1cm]
(-1, 0, 1) \quad &\la \, \de \, X Y, \\[0.1cm]
(-1, 1, 0) \quad &\la (1 - \de) \, X Y, \\[0.1cm]
(0, 0, -2) \quad &\la \, \tu \displaystyle\binom{Y}{2}, \\[0.2cm]
(0, 0, -1) \quad &\la \, \td \, Y (Y - 1) + \la \, \ga \, Y (N + 1 - X - Y).
\end{array}
\end{equation*}
The first two cases correspond to a spreader telling the rumour to an ignorant, who decides to become a spreader or an uninterested (that is, an immediate stifler) with respective probabilities~$\de$ and $1 - \de$.
The interaction between two spreaders splits into two cases, in which both become stiflers or only one does.
These cases are represented by the third transition and by the first part of the rate of the fourth transition.
Finally, the second part of this rate corresponds to a meeting between a spreader and a stifler or an uninterested individual.
We define $\te = \tu + \td - \ga$ and assume throughout the paper that
\begin{equation}
\label{F: Hyp}
\la > 0, \, \ga > 0, \, \tu \geq 0, \, \td \geq 0, \, 0 < \de \leq 1 \, \text{ and } \, 0 \leq \te \leq 1.
\end{equation}
Notice that both [DK] and [MT] models can be obtained by suitably choosing these constants.
The family of Markov chains just defined includes the classical and other rumour models, some of which are presented in \S\ref{S: Main results}, showing the convenience of the adopted parametrization.
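Since only the state at absorption matters for the limit theorems below, the embedded jump chain of $\{V(t)\}$ can be simulated directly (the exponential holding times are irrelevant). The following Python sketch, an illustration rather than part of the paper, runs the chain with $\la=\de=\ga=\tu=1$ and $\td=0$ (so $\te=0$, a [DK]-type case) and compares the final ignorant fraction with the classical value $\xf\approx 0.2032$, the root of $2(1-x)+\log x=0$:

```python
import random

def run_chain(N, lam, delta, th1, th2, gamma, rng):
    """Embedded jump chain of the general rumour model; returns (X, U) at absorption."""
    X, U, Y = N, 0, 1
    while Y > 0:
        rates = [
            lam * delta * X * Y,                  # spreader meets ignorant -> new spreader
            lam * (1 - delta) * X * Y,            # spreader meets ignorant -> uninterested
            lam * th1 * Y * (Y - 1) / 2,          # two spreaders meet, both stifle
            lam * th2 * Y * (Y - 1)               # two spreaders meet, one stifles
            + lam * gamma * Y * (N + 1 - X - Y),  # spreader meets stifler/uninterested
        ]
        u = rng.random() * sum(rates)
        if u < rates[0]:
            X, Y = X - 1, Y + 1
        elif u < rates[0] + rates[1]:
            X, U = X - 1, U + 1
        elif u < rates[0] + rates[1] + rates[2]:
            Y -= 2
        else:
            Y -= 1
    return X, U

rng = random.Random(2024)
N = 2000
X_final, U_final = run_chain(N, 1, 1.0, 1.0, 0.0, 1.0, rng)
print(X_final / N)  # should be near 0.2032 for large N
```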
To state our results, it is convenient to write down explicitly the dependence of~$V$ on~$N$, so we use the notation $V^{(N)} = (X^{(N)}, U^{(N)}, Y^{(N)})$.
Observing that the process eventually terminates (when there are no spreaders in the population), we define
\[ \tau^{(N)} = \inf \{t: Y^{(N)}(t) = 0 \}. \]
In \S\ref{S: Main results}, we present a Weak Law of Large Numbers and a Central Limit Theorem for $ N^{-1} \, (X^{(N)}(\tau^{(N)}), U^{(N)}(\tau^{(N)})) $, that is, the ultimate fractions of the originally ignorant individuals who remained ignorant and who became uninterested.
Proofs are presented in \S\ref{S: Proofs}.
As far as we know, limit theorems were rigorously proved only for the basic [DK] and [MT] models
(see Pittel~\cite{Pittel}, Sudbury~\cite{Sudbury} and Watson~\cite{Watson}).
In addition, the main tools applied for stochastic rumour processes have been the analysis of the embedded Markov chain, martingale arguments, diffusion approximations, generating functions and the study of analogue deterministic versions.
A good survey of these methods can be found in Daley and Gani~\cite{DG}.
Briefly, the technique we use consists in defining a coupled process with the same transitions as the original process until absorption.
This construction is suitably defined, in such a way that the new process is a density dependent stochastic model for which the theory presented in Ethier and Kurtz~\cite{MPCC} can be applied.
This approach allows us to establish limit theorems for more general models.
To the best of our knowledge, this idea is used for the first time in the context of stochastic rumour models here and in Lebensztayn et al.~\cite{RPRS}, in which we study a generalization of the Maki--Thompson model with random stifling and general initial configuration.
\section{Main results}
\label{S: Main results}
We start off with a few definitions.
\begin{defn}
\label{D: xf}
For $0 < \te < 1$, consider the function $f: [0, 1] \rightarrow \bbR $ given by
\begin{equation*}
f(x) = \frac{(\ga + \de \te) x^{\te} - (\ga + \de) \te x - \ga (1 - \te)}{\te (1 - \te)}.
\end{equation*}
We define $\xf = \xf(\de, \ga, \te)$ as the unique root of $f(x) = 0$ in the interval $(0, 1)$.
\end{defn}
To justify the existence and uniqueness of the root, notice that $f(0) < 0$, $f(1) = 0$ and that $f$ is unimodal, with a global maximum at the point
\[ \left(\frac{\ga + \de \te}{\ga + \de}\right)^{\frac{1}{1 - \te}} \in (0, 1). \]
For $\te = 1/2$, an explicit formula for $\xf$ can be obtained, namely,
\[ \xf(\de, \ga, 1/2) = \left(\frac{\ga}{\ga + \de}\right)^2. \]
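Definition~\ref{D: xf} lends itself to a simple numerical evaluation: $f(0^+)<0$ and $f$ increases up to its maximizer, so $\xf$ can be found by bisection below the maximizer. A Python sketch, checked against the closed form above at $\te=1/2$:

```python
def f(x, delta, gamma, theta):
    return ((gamma + delta * theta) * x**theta
            - (gamma + delta) * theta * x
            - gamma * (1 - theta)) / (theta * (1 - theta))

def x_infinity(delta, gamma, theta, tol=1e-12):
    """Root of f in (0, 1) for 0 < theta < 1, by bisection below the maximizer."""
    x_star = ((gamma + delta * theta) / (gamma + delta)) ** (1 / (1 - theta))
    lo, hi = tol, x_star
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid, delta, gamma, theta) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# At theta = 1/2 the root has the closed form (gamma/(gamma + delta))**2:
print(x_infinity(1.0, 1.0, 0.5))  # ~ 0.25
```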
In the cases $\te = 0$ and $\te = 1$, $\xf$ is defined similarly.
\begin{defn}
\label{D: xf01}
Consider the functions $f_0$ and $f_1$ defined on $(0, 1]$ by
\begin{align*}
f_0(x) &= \lim_{\te \to 0^{+}} f(x) = (\ga + \de) (1 - x) + \ga \log x, \\
f_1(x) &= \lim_{\te \to 1^{-}} f(x) = - \ga (1 - x) - (\ga + \de) \, x \log x.
\end{align*}
For each $\te \in \{ 0, 1 \}$, we denote by $\xf(\de, \ga, \te)$ the unique root of $f_{\te}(x) = 0$ in $(0, 1)$.
\end{defn}
\begin{obs}
In the last case, $\xf$ can be written in terms of the Lambert $W$ function, which is the multivalued inverse of the function $ x \mapsto x \, e^x $.
Let $W_0$ and $W_{-1}$ be the principal and the lower real branches of the Lambert $W$ function, respectively; see Corless et al.~\cite{LW} for more details.
Defining $h = 1 + \de / \ga$, we have that
\begin{align*}
\xf(\de, \ga, 0) &= - {h}^{-1} \, \LW{- h \, e^{-h}} \quad \text{and} \\
\xf(\de, \ga, 1) &= - \left[{h} \, \LWR{- e^{-1 / h} / h}\right]^{-1}.
\end{align*}
\end{obs}
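The Lambert~$W$ representation can be cross-checked numerically without special libraries: a hand-rolled Newton iteration for the principal branch $W_0$ suffices, and for $\de=\ga=1$ (so $h=2$) both it and a direct bisection of $f_0$ recover the classical Daley--Kendall ignorant fraction $\approx 0.2032$:

```python
import math

def lambert_W0(y, iters=60):
    """Principal branch of w*e^w = y for y in (-1/e, 0), by Newton iteration.
    (Hand-rolled here to stay within the standard library.)"""
    w = -0.5
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - y) / (ew * (1 + w))
    return w

def x_inf_theta0_lambert(delta, gamma):
    h = 1 + delta / gamma
    return -lambert_W0(-h * math.exp(-h)) / h

def x_inf_theta0_bisect(delta, gamma, tol=1e-12):
    # root of f_0(x) = (gamma+delta)(1-x) + gamma*log(x) on (0, gamma/(gamma+delta))
    g = lambda x: (gamma + delta) * (1 - x) + gamma * math.log(x)
    lo, hi = tol, gamma / (gamma + delta)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# With delta = gamma = 1 (h = 2), both routes give the classical
# Daley--Kendall ignorant fraction, roughly 0.2032.
print(x_inf_theta0_lambert(1, 1), x_inf_theta0_bisect(1, 1))
```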
We prove that, whatever the value of $\te$, the following inequality holds:
\begin{equation}
\label{F: Ineq xf}
\xf(\de, \ga, \te) < \frac{\ga}{\ga + \de}.
\end{equation}
For this, it is enough to show that $f$ and $f_{\te}$ evaluated at the point $\ga / (\ga + \de)$ are strictly greater than zero.
For $\te \in \{ 0, 1 \}$, formula~\eqref{F: Ineq xf} is therefore an immediate consequence of a standard logarithm inequality:
\begin{equation}
\label{F: Log}
\frac{u - 1}{u} < \log u < u - 1 \, \text{ for all } 0 < u < 1.
\end{equation}
For $\te \in (0, 1)$, we use~\eqref{F: Log} to prove that the function $ \te \mapsto (\ga / (\ga + \de \te))^{1 / \te} $ is increasing, whence~\eqref{F: Ineq xf} follows.
We are now ready to state our main results.
\begin{teo}
\label{T: WLLN}
Assume~\eqref{F: Hyp} and let $\xf$ be given by Definition~\ref{D: xf} or~\ref{D: xf01} according as $\te \in (0, 1)$ or not.
Define $\uf = (1 - \de)(1 - \xf)$.
Then,
\begin{equation*}
\lim_{N \to \infty} \, \frac{X^{(N)}(\tau^{(N)})}{N} = \xf \, \text{ and }
\lim_{N \to \infty} \, \frac{U^{(N)}(\tau^{(N)})}{N} = \uf \quad \text{in probability}.
\end{equation*}
\end{teo}
\begin{teo}
\label{T: CLT}
Assume~\eqref{F: Hyp} and define
{\allowdisplaybreaks
\begin{align*}
\ka &= 3 \tu + 2 \td - 4 \ga, \quad
A = \frac{\xf}{\ga - (\ga + \de) \xf}, \quad
B = \frac{\ga \de \uf}{\ga + \de \te}, \\[0.1cm]
C &= (\ga + \de)^2 \left(4 \de \te^2 - \ka (\ga + 2 \de \te)\right) \xf + \ka \ga (\ga + \de) (\ga + \de (2 \te - 1)) - 4 \de \ga^2 (1 - \te)^2, \\[0.1cm]
D &=
\left\{
\begin{array}{cl}
\dfrac{C (1 - \xf)}{2 (2 \te - 1) (\ga + \de \te)^2} &\text{ if } \te \neq \dfrac{1}{2}, \\[0.2cm]
\dfrac{2 \ga \left[\ka \de (2 \ga + \de) - 2 \ga (\de - \ka (\ga + \de)) \log \left(\frac{\ga}{\ga + \de}\right)\right]}{(\ga + \de)^2} &\text{ if } \te = \dfrac{1}{2}.
\end{array} \right.
\end{align*}}
Then,
\begin{equation}
\label{F: Biv CLT}
\sqrt{N} \left(\frac{X^{(N)}(\tau^{(N)})}{N} - \xf, \frac{U^{(N)}(\tau^{(N)})}{N} - \uf\right)
\stackrel{\mathcal{D}}{\rightarrow} N_2(0, \Si) \, \text{ as } \, N \to \infty,
\end{equation}
where $ \stackrel{\mathcal{D}}{\rightarrow} $ denotes convergence in distribution, and $ N_2(0, \Si) $ is the bivariate normal distribution with mean zero and covariance matrix
\begin{equation*}
\Si =
\begin{pmatrix}
\Si_{11} & \Si_{12} \\
\Si_{21} & \Si_{22}
\end{pmatrix}
\end{equation*}
whose elements are given by
\begin{equation}
\label{F: Cov Matrix}
\begin{aligned}
\Si_{11} &= \xf (1 - \xf) + A^2 D, \\
\Si_{12} &= \Si_{21} = - (1 - \de) \Si_{11} + A B, \\
\Si_{22} &= (1 - \de)^2 \Si_{11} + (1 - \de) (\de (1 - \xf) - 2 A B).
\end{aligned}
\end{equation}
\end{teo}
Of course, when $\de = 1$, we can omit the second component of the vector in the left-hand side of~\eqref{F: Biv CLT}, and we denote by $\vn$ ($= \Si_{11}$) the variance of the asymptotic mean zero Gaussian distribution.
\begin{exa}
Let $\rho \in [0, 1]$ and consider our model with the choice $\la = \de = \ga = 1$, $\tu = \rho$ and $\td = 1 - \rho$, so $\te = 0$.
Thus, the limiting proportion of ignorants and the variance of the asymptotic normal distribution in the CLT are given respectively by
\begin{gather*}
\xf = \xf(1, 1, 0) = - \dfrac{\LW{- 2 \, e^{-2}}}{2} \approx 0.203188, \quad \text{and} \\[0.1cm]
\vn = \frac{\xf (1 - \xf) \left(1 - 2 \, \xf + 2 \, \rho \, \xf^2\right)}{(1 - 2 \, \xf)^2} \approx
0.272736 + 0.0379364 \, \rho.
\end{gather*}
We obtain the [MT] or [DK] model according to whether $\rho$ equals $0$ or $1$, showing that our theorems generalize classical results presented by Sudbury~\cite{Sudbury} and Watson~\cite{Watson}.
\end{exa}
\begin{exa}
\label{E: Hayes}
In an interesting confessional essay, Hayes~\cite{Hayes} describes a mistake committed when he attempted to simulate the basic [DK] model.
The author actually simulated the [MT] model, with the difference that, when two spreaders meet, both become stiflers.
This model is obtained by choosing $\la = \de = \ga = 1$, $\tu = 2$ and $\td = 0$, in which case $\te = 1$,
\begin{gather*}
\xf = \xf(1, 1, 1) = - \dfrac{1}{2 \, \LWR{- e^{-1 / 2} / 2}} \approx 0.284668, \quad \text{and} \\[0.1cm]
\vn = \frac{\xf (1 - \xf) \left(1 - 3 \, \xf + 3 \, \xf^2\right)}{(1 - 2 \, \xf)^2} \approx 0.427204.
\end{gather*}
This clarifies the numerical value of the proportion of ignorants remaining in the population that Hayes obtained in his simulations.
\end{exa}
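The numerical values quoted in the last two examples are easy to check directly from the defining equations $f_0(x) = 0$ and $f_1(x) = 0$ of Definition~\ref{D: xf01}. The following Python sketch (an illustration only; the starting points and tolerance are arbitrary choices, and the script is not part of the original analysis) recovers both roots with a few Newton iterations:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Basic Newton iteration; returns an approximate root of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# theta = 0, gamma = delta = 1:  f_0(x) = 2(1 - x) + log x
x0_hat = newton(lambda x: 2.0 * (1.0 - x) + math.log(x),
                lambda x: -2.0 + 1.0 / x, 0.2)

# theta = 1, gamma = delta = 1 (Hayes' model):  f_1(x) = -(1 - x) - 2 x log x
x1_hat = newton(lambda x: -(1.0 - x) - 2.0 * x * math.log(x),
                lambda x: -1.0 - 2.0 * math.log(x), 0.3)

print(x0_hat)  # ~ 0.203188
print(x1_hat)  # ~ 0.284668
```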
\begin{exa}
\label{E: apq DK}
Let $\al, p, q \in (0, 1]$.
We describe a new variant of the [DK] model, which we call the $\apq$-[DK] model.
Suppose that, independently for each pairwise meeting and each individual,
\begin{enumerate}
\item[(a)] A spreader involved in a meeting decides to tell the rumour with probability~$p$.
\item[(b)] Once such a decision is made, any spreader in a meeting with somebody informed of the rumour has probability~$\al$ of becoming a stifler.
\item[(c)] Upon hearing the rumour, an ignorant becomes a spreader or an uninterested individual with respective probabilities~$q$ and $1 - q$.
\end{enumerate}
This model with $q = 1$ (no uninterested individuals) was introduced by Daley and Kendall~\cite{DK}, who studied its deterministic analogue.
This analysis is also presented in Daley and Gani~\cite[Section~5.2]{DG}.
The basic [DK] model corresponds to $\al = p = q = 1$.
Observe that, for the $\apq$-[DK] model, the continuous-time Markov chain $\{V(t)\}_{t \geq 0}$ evolves according to
\begin{equation*}
{\allowdisplaybreaks
\begin{array}{cc}
\text{transition} \quad &\text{rate} \\[0.1cm]
(-1, 0, 1) \quad &p q \, X Y, \\[0.1cm]
(-1, 1, 0) \quad &p (1 - q) \, X Y, \\[0.1cm]
(0, 0, -2) \quad &\al^2 p (2 - p) \displaystyle\binom{Y}{2}, \\[0.2cm]
(0, 0, -1) \quad &\al (1 - \al) p (2 - p) \, Y (Y - 1) + \al p \, Y (N + 1 - X - Y).
\end{array}}
\end{equation*}
Therefore, Theorems~\ref{T: WLLN} and~\ref{T: CLT} yield novel bivariate limit theorems for this model, which are obtained by making the following substitutions:
\begin{equation*}
\la = p, \, \de = q, \, \tu = \al^2 (2 - p), \, \td = \al (1 - \al) (2 - p), \, \ga = \al \, \text{ and } \, \te = \al (1 - p).
\end{equation*}
Here are some important cases:
\smallskip
\textbf{(1)} For the $\apq$-[DK] model with $p < 1$, the asymptotic proportion of ignorants~$\xf$ is the unique root of
\[ f^{\ast}(x) = \frac{(1 + q (1 - p)) x^{\al (1 - p)} - (\al + q) (1 - p) x - 1 + \al (1 - p)}{(1 - p) (1 - \al (1 - p))} = 0 \]
in the interval $(0, 1)$.
The limiting fraction of uninterested individuals is $\uf = (1 - q)(1 - \xf)$.
When $q = 1$, the value $\xf$ coincides with that obtained for the deterministic analogue of the model in Daley and Gani~\cite[Equation~(5.2.8)]{DG}.
\smallskip
\textbf{(2)} The $(1, 1, q)$-[DK] model is the basic [DK] model with the additional rule that an ignorant is allowed not to have interest in spreading the rumour.
For this model, the limiting proportions of ignorant and uninterested individuals are expressed respectively as
\begin{equation}
\label{F: xu 11q DK}
\begin{aligned}
\xf &= \xf(q, 1, 0) = - \dfrac{\LW{- h \, e^{-h}}}{h} \text{ with } h = 1 + q \quad \text{and} \\[0.1cm]
\uf &= (1 - q)(1 - \xf).
\end{aligned}
\end{equation}
The entries of the covariance matrix~$\Si$ given in~\eqref{F: Cov Matrix} simplify to
{\allowdisplaybreaks
\begin{align*}
\Si_{11} &= \frac{\xf (1 - \xf) \left( 2 - (3 + q^2) \xf + (1 + q)^2 \xf^2 \right)}{2 (1 - (1 + q) \xf)^2}, \\[0.1cm]
\Si_{12} &= \frac{\xf \, \uf \left( -2 (1 - q) + (1 - q) (3 + q) \xf - (1 + q)^2 \xf^2 \right)}{2 (1 - (1 + q) \xf)^2}, \\[0.1cm]
\Si_{22} &= \frac{\uf \left( 2 q + 2 (1 - 5 q) \xf + (-3 + 9 q + 3 q^2 - q^3) \xf^2 + (1 - q) (1 + q)^2 \xf^3 \right)}{2 (1 - (1 + q) \xf)^2}.
\end{align*}}
When $q = 1$, these formulae reduce to the well-known results for the basic [DK] model.
\smallskip
\textbf{(3)} For the $(\al, 1, 1)$-[DK] model, we have that
\begin{gather*}
\xf = \xf(1, \al, 0) = - \dfrac{\LW{- h \, e^{-h}}}{h} \text{ with } h = 1 + \frac{1}{\al}, \quad \text{and} \\[0.1cm]
\vn = \frac{\xf (1 - \xf) \left(2 \al^2 + \left[2 (1 - \al) - \al (1 + \al)^2\right] \xf + \al (1 + \al)^2 \xf^2 \right)}{2 (\al - (1 + \al) \xf)^2}.
\end{gather*}
These formulae agree with those presented in Exercise~5.7 of Daley and Gani~\cite{DG} (in which the reader is asked to obtain $\vn$ by making use of Kendall's Principle of Diffusion of Arbitrary Constants).
\end{exa}
\begin{exa}
We define the $\apq$ version of the [MT] model in a similar way, with the rules (a) and (c) given in Example~\ref{E: apq DK} and
\begin{enumerate}
\item[(b$^\prime$)] Once a spreader decides to tell the rumour in a directed contact with somebody already informed, only this spreader chooses with probability~$\al$ to become a stifler
\end{enumerate}
instead of (b) in Example~\ref{E: apq DK}, holding independently for each directed contact between two individuals.
The basic [MT] model has $\al = p = q = 1$.
Since the $\apq$-[MT] model is obtained by the choice
\begin{equation*}
\la = p, \, \de = q, \, \tu = 0, \, \td = \ga = \al \, \text{ and } \, \te = 0,
\end{equation*}
we conclude that the limiting proportion of ignorants is given by
\[ \xf = \xf(q, \al, 0) = - \dfrac{\LW{- h \, e^{-h}}}{h} \text{ with } h = 1 + \frac{q}{\al}. \]
Two particular cases are:
\smallskip
\textbf{(1)} For the $(1, 1, q)$-[MT] model, the values of $\xf$ and $\uf$ are the same as those of the $(1, 1, q)$-[DK] model, given by~\eqref{F: xu 11q DK}.
However,
{\allowdisplaybreaks
\begin{align*}
\Si_{11} &= \frac{\xf (1 - \xf) \left( 1 - (1 + q^2) \xf \right)}{(1 - (1 + q) \xf)^2}, \\[0.1cm]
\Si_{12} &= - \frac{\xf \, \uf^2}{(1 - (1 + q) \xf)^2}, \\[0.1cm]
\Si_{22} &= \frac{\uf \left( q + (1 - 5 q) \xf + (-1 + 4 q + q^2) \xf^2 \right)}{(1 - (1 + q) \xf)^2}.
\end{align*}}
\smallskip
\textbf{(2)} The $(\al, 1, 1)$-[MT] model corresponds to the basic [MT] model in which the number of contacts with an informed individual by each spreader waiting to become a stifler has geometric distribution with parameter $\al$.
In this case, $\xf$ coincides with that of the $(\al, 1, 1)$-[DK] model, but
\[ \vn = \frac{\xf (1 - \xf) \left(\al^2 - \left(\al^2 + 2 \al - 1\right) \xf \right)}{(\al - (1 + \al) \xf)^2}. \]
This generalized form of the [MT] model in which a spreader becomes a stifler only after being involved in a random number of unsuccessful communications is further studied in Lebensztayn et al.~\cite{RPRS}.
\end{exa}
\begin{exa}
Other rumour models in the literature for which our results apply were proposed by Pearce~\cite{Pearce} and Kawachi~\cite{Kawachi}.
The first one has the dynamics of the [DK] model, with the following interaction rules.
A meeting between an ignorant and a spreader results in the ignorant turning into a spreader with probability~$p$.
When two spreaders interact, either both become stiflers with probability $q_2$ or only one of them does so with probability $q_1 \leq 1 - q_2$.
Finally, the interaction of a spreader with a stifler results in two stiflers with probability $r$.
Thus, this model is obtained by considering $\la = p$, $\de = 1$, $\tu = {q_2}/{p}$, $\td = {q_1}/{(2 p)}$ and $\ga = {r}/{p}$.
One of the deterministic models studied in Kawachi~\cite{Kawachi} evolves similarly to Hayes' model, as explained in Example~\ref{E: Hayes}.
The contacts are as in the [MT] model, but, when two spreaders meet, both become stiflers with probability~$\tilde{\beta}$; the transformation of only one of them into a stifler is not possible.
When a spreader encounters an ignorant, the first one transmits the rumour with probability~$\tilde{\alpha}$ and the latter joins the spreaders with probability~$\tilde{\theta}$.
Finally, in a meeting between a spreader and a stifler, the first individual becomes a stifler with probability~$\tilde{\gamma}$.
This model is obtained by the choice $\la = \tilde{\alpha}$, $\de = \tilde{\theta}$,
$\tu = 2 \, \tilde{\beta} / \tilde{\alpha}$, $\td = 0$ and $\ga = \tilde{\gamma} / \tilde{\alpha}$.
\end{exa}
\section{Proofs}
\label{S: Proofs}
The main method for proving Theorems~\ref{T: WLLN} and~\ref{T: CLT} is, by means of a random time change, to define a new process $ \{ \tilde V^{(N)}(t) \}_{t \geq 0} $ with the same transitions as $ \{ V^{(N)}(t) \}_{t \geq 0} $, so that they terminate at the same point.
This transformation is done in such a way that $ \{ \tilde V^{(N)}(t) \}_{t \geq 0} $ is a density dependent Markov chain to which we can apply Theorem~11.4.1 of Ethier and Kurtz~\cite{MPCC}.
The arguments are similar to those used in Kurtz et al.~\cite{EMCG} for the coverage of a random walks system on the complete graph.
\subsection{Random time change}
\label{SS: Random time change}
We define
\begin{align*}
\Theta^{(N)}(t) &= \int_0^t Y^{(N)}(s) \, ds, \, 0 \leq t \leq \tau^{(N)}, \\
\Upsilon^{(N)}(s) &= \inf \{t: \Theta^{(N)}(t) > s \}, \, 0 \leq s \leq \int_0^{\infty} Y^{(N)}(u) \, du,
\end{align*}
and let $ \tilde V^{(N)}(t) = V^{(N)}(\Upsilon^{(N)}(t)) $.
The time-changed process $ \{ \tilde V^{(N)}(t) \}_{t \geq 0} $ has the same transitions as $ \{ V^{(N)}(t) \}_{t \geq 0} $, hence if we define
\[ \tilde{\tau}^{(N)} = \inf \{t: \tilde{Y}^{(N)}(t) = 0 \}, \]
we have that $V^{(N)}(\tau^{(N)}) = \tilde{V}^{(N)}(\tilde{\tau}^{(N)})$.
Furthermore, $ \{ \tilde V^{(N)}(t) \}_{t \geq 0} $ is a continuous-time Markov chain with initial state $(N, 0, 1)$ and transition rates given by
{\allowdisplaybreaks
\begin{equation}
\label{F: Rates TCP}
\begin{array}{cc}
\text{transition} \quad &\text{rate} \\[0.1cm]
\ell_0 = (-1, 0, 1) \quad &\la \, \de \, \tilde X, \\[0.1cm]
\ell_1 = (-1, 1, 0) \quad &\la (1 - \de) \, \tilde X, \\[0.1cm]
\ell_2 = (0, 0, -2) \quad &\la \, \tu \displaystyle\frac{\tilde Y - 1}{2}, \\[0.2cm]
\ell_3 = (0, 0, -1) \quad &\la \, \td \, (\tilde Y - 1) + \la \, \ga \, (N + 1 - \tilde X - \tilde Y).
\end{array}
\end{equation}}
\subsection{Deterministic limit of the time-changed process}
\label{SS: Deterministic limit}
We define for $t \geq 0$,
\[ \tilde v^{(N)}(t) = \dfrac{\tilde V^{(N)}(t)}{N} = (\tilde x^{(N)}(t), \tilde u^{(N)}(t), \tilde y^{(N)}(t)), \]
and consider
\begin{align*}
\beta_{\ell_0} (x, u, y) &= \la \, \de \, x, & \beta_{\ell_1} (x, u, y) &= \la (1 - \de) \, x, \\
\beta_{\ell_2} (x, u, y) &= \la \, \tu \, \frac{y}{2}, & \beta_{\ell_3} (x, u, y) &= \la \, \td \, y + \la \, \ga \, (1 - x - y).
\end{align*}
Note that the rates in~\eqref{F: Rates TCP} can be written as
\[ N \left[ \beta_{\ell_i} \left( \dfrac{\tilde X}{N}, \dfrac{\tilde U}{N}, \dfrac{\tilde Y}{N} \right) + O \left( \dfrac{1}{N} \right) \right], \]
so $ \{ \tilde v^{(N)}(t) \}_{t \geq 0} $ is a density dependent Markov chain with possible transitions in the set $\{ \ell_0, \ell_1, \ell_2, \ell_3 \}$.
Now we use Theorem~11.2.1 of Ethier and Kurtz~\cite{MPCC} to conclude that the time-changed system converges almost surely as $ N \to \infty $ (on a suitable probability space).
The drift function is given by
\[ F(x, u, y) = \sum_i \ell_i \, \beta_{\ell_i} (x, u, y)
= (- \la x, \la (1 - \de) x, \la (\ga + \de) x - \la \te y - \la \ga), \]
hence the limiting deterministic system is governed by the following system of ordinary differential equations
\begin{equation*}
\begin{cases}
x^{\prime}(t) = - \la \, x(t), \\[0.1cm]
u^{\prime}(t) = \la (1 - \de) \, x(t), \\[0.1cm]
y^{\prime}(t) = \la (\ga + \de) \, x(t) - \la \te \, y(t) - \la \ga, \\[0.1cm]
x(0) = 1, u(0) = 0 \text{ and } y(0) = 0.
\end{cases}
\end{equation*}
The solution of this system is given by $ v(t) = (x(t), u(t), y(t)) $, where
\begin{equation*}
x(t) = e^{-\la t}, \, u(t) = (1 - \de)(1 - x(t)), \, y(t) = f(x(t)),
\end{equation*}
with $f$ replaced by $f_{\te}$ if $\te$ equals $0$ or $1$.
According to Theorem~11.2.1 of Ethier and Kurtz~\cite{MPCC},
\begin{lem}
\label{L: Conv v}
We have that $ \tilde v^{(N)}(t) $ converges almost surely to $ v(t) $, uniformly on bounded time intervals.
\end{lem}
We prove that for each of the first two components of $ \tilde v^{(N)} $, the convergence is uniform on the whole line.
\begin{lem}
\label{L: Conv xu}
We have that $ \tilde x^{(N)}(t) $ converges almost surely to $ x(t) $, uniformly on~$ \bbR $.
The analogous assertion holds for $ \tilde u^{(N)} $ and $u$.
\end{lem}
\begin{proof}
We prove the first statement.
Given any $ \eps > 0 $, we take $t_0 = t_0(\eps)$ such that
$ x(t) \leq \eps/2 $ for all $ t \geq t_0 $.
By the uniform convergence of $ \tilde x^{(N)}(t) $ to $ x(t) $ on the interval $ [0, t_0] $, there exists $N_0 = N_0(\eps)$ such that
\[ |\tilde x^{(N)}(t) - x(t)| \leq \eps/2 \text{ for all } N \geq N_0
\text{ and } t \in [0, t_0]. \]
Therefore, for all $ N \geq N_0 $ and $ t \geq t_0 $,
\[ \tilde x^{(N)}(t) \leq \tilde x^{(N)}(t_0) \leq x(t_0) + \eps/2 \leq \eps, \]
whence $ |\tilde x^{(N)}(t) - x(t)| \leq \eps $.
\end{proof}
\subsection{Proofs of Theorems \ref{T: WLLN} and \ref{T: CLT}}
\label{SS: Proofs of Theorems}
Both theorems follow from Theorem~11.4.1 of Ethier and Kurtz~\cite{MPCC}.
We adopt the notations used there, except for the Gaussian process $V$ defined on p.~458, which we denote instead by $\GV = (\GV_x, \GV_u, \GV_y)$.
Here $\varphi(x, u, y) = y$, and
\[ \tf = \inf \{t: y(t) \leq 0 \} = - \frac{1}{\la} \, \log \xf, \]
the second equality holding because $y(t) = f(x(t))$ vanishes for $t > 0$ precisely when $x(t) = e^{-\la t}$ reaches $\xf$, the unique zero of $f$ in $(0, 1)$.
Moreover, from~\eqref{F: Ineq xf},
\begin{equation}
\label{F: Der Neg}
\nabla \varphi(v(\tf)) \cdot F(v(\tf)) = y^{\prime} (\tf) = \la (\ga + \de) \, \xf - \la \ga < 0.
\end{equation}
Let us explain the argument leading to the proof of the Law of Large Numbers.
Although $y(0) = 0$, we have that $y^{\prime}(0) > 0$.
This and~\eqref{F: Der Neg} imply that $y(\tf - \eps) > 0$ and $y(\tf + \eps) < 0$ for $0 < \eps < \tf$.
The almost sure convergence of~$\tilde{y}^{(N)}$ to~$y$ uniformly on bounded intervals yields that
\begin{equation}
\label{F: Conv tf}
\lim_{N \to \infty} \, \tilde \tau^{(N)} = \tf \quad \text{almost surely}.
\end{equation}
Thus, recalling that $V^{(N)}(\tau^{(N)}) = \tilde{V}^{(N)}(\tilde{\tau}^{(N)})$, we obtain Theorem~\ref{T: WLLN} from formula~\eqref{F: Conv tf} and Lemma~\ref{L: Conv xu}.
With respect to the Central Limit Theorem, we get from Theorem~11.4.1 of Ethier and Kurtz~\cite{MPCC} that
$\sqrt{N} \, (\tilde x^{(N)}(\tilde \tau^{(N)}) - \xf, \tilde u^{(N)}(\tilde \tau^{(N)}) - \uf)$
converges in distribution as $N \to \infty$ to
\begin{equation}
\label{F: LD}
\left( \GV_x(\tf) - A \, \GV_y(\tf), \, \GV_u(\tf) + A \, (1 - \de) \, \GV_y(\tf) \right),
\end{equation}
where $A$ is the constant defined in Theorem~\ref{T: CLT}.
The asymptotic distribution is a mean zero bivariate normal distribution, so it remains to explain how formula~\eqref{F: Cov Matrix} for the covariance matrix~$\Si$ is obtained.
For this, we summarize the steps taken to compute $ \La = \Cov(\GV(\tf), \GV(\tf)) $, a task that can be better carried out with mathematical software.
First, we calculate the matrix of partial derivatives of the drift function $ F $ and the matrix $ G $, as defined in Ethier and Kurtz~\cite[p.~458]{MPCC}.
They are given by
\begin{equation*}
\begin{gathered}
\partial F(x, u, y) =
\begin{pmatrix}
-\la & 0 & 0 \\
\la (1 - \de) & 0 & 0 \\
\la (\ga + \de) & 0 & -\la \te
\end{pmatrix}
\, \text{ and } \\[0.1cm]
G(x, u, y) =
\begin{pmatrix}
\la x & -\la (1 - \de) x & -\la \de x \\
-\la (1 - \de) x & \la (1 - \de) x & 0 \\
-\la \de x & 0 & \la (\de - \ga) x + \la (\ka - \te + 2 \ga) y + \la \ga
\end{pmatrix}.
\end{gathered}
\end{equation*}
Next, we obtain the solution $\Phi$ of the matrix equation
\[ \frac{\partial}{\partial t} \, \Phi(t, s) = \partial F(x(t), u(t), y(t)) \, \Phi(t, s),
\quad \Phi(s, s) = I_3, \]
where $I_3$ is the $3 \times 3$ identity matrix.
Then, we compute
\begin{equation}
\label{F: Covariance}
\Cov(\GV(t), \GV(t)) = \int_0^{t} \Phi(t, s) \, G(x(s), u(s), y(s)) \, {[\Phi(t, s)]}^T \, ds.
\end{equation}
We emphasize that the computation of~$\La$ must be separated into four cases: $\te \in (0, 1) \setminus \{ 1/2 \}$, $\te = 1/2$, $\te = 0$ and $\te = 1$.
The value $\te = 1/2$ must be considered separately from the interval $(0, 1)$ owing to the appearance of the integral $\int_0^{t} e^{\la (2 \te - 1) s} \, ds$ in the element $(3, 3)$ of the matrix given in formula~\eqref{F: Covariance}.
This also explains why the constant~$D$ in Theorem~\ref{T: CLT} is defined differently for $\te = 1/2$.
The final step to obtain $\La$ is to set $t = \tf$ in the formula obtained from~\eqref{F: Covariance}.
This is accomplished by making suitable substitutions in this formula according to the value of~$\te$; for instance, for $\te \in (0, 1) \setminus \{ 1/2 \}$ we replace $e^{-\la t}$ and $e^{-\la \te t}$ respectively by
\[ \xf \, \text{ and } \, \frac{(\ga + \de) \, \te \, \xf + \ga (1 - \te)}{\ga + \de \te}. \]
The resulting formula (valid for any $\te$) is
\begin{equation}
\label{F: Final Cov}
\La =
\begin{pmatrix}
\xf \, (1 - \xf) & -(1 - \de) \, (1 - \xf) \, \xf & 0 \\
-(1 - \de) \, (1 - \xf) \, \xf & (1 - \de) \, (1 - \xf) \, ((1 - \de) \, \xf + \de) & -B \\
0 & -B & D
\end{pmatrix}.
\end{equation}
Using~\eqref{F: LD}, \eqref{F: Final Cov} and the well-known properties of the variance and covariance, we get formula~\eqref{F: Cov Matrix}.
\section*{Acknowledgments}
We thank the referees for the careful reading of the paper and helpful suggestions.
We are grateful to Tom Kurtz and Alexandre Leichsenring for earlier fruitful discussions.
\section{Introduction}
We present a boundary-integral method for the simulation of
time-harmonic acoustic-scattering by general surfaces in
three-dimensional space. As is well known, boundary integral methods
provide a number of advantages (notably, they only require
discretization of the scattering surfaces, and they inherently enforce
the condition of radiation at infinity), but they do give rise to
certain computational challenges, concerning the accurate evaluation
of the associated singular integrals, the $\mathcal{O}(N^2)$ computational
cost that results from straightforward numerical implementations (for
an $\mathcal{O}(N)$-sized grid, with, in particular, large $N$ for high
frequencies), the generation of high-fidelity surface grids, and well
known difficulties associated with their shared- and distributed-memory
parallelization.
This paper addresses all of these challenges. In particular, the
proposed approach provides high-order convergence (and thus, accurate
results with minimal computational grids), and it employs a certain
``IFGF'' acceleration technique discussed below which, not relying on
the Fast Fourier Transform (FFT), reduces the computing cost to
$\mathcal{O}(N\log N)$ operations in a manner that lends itself to effective
parallelization under both large and small shared memory and
distributed memory hardware
infrastructures~\cite{BauingerBruno2021,BauingerBruno_parallel_2021}. In
this work we focus on an OpenMP parallel implementation of this
method, suitable for computer systems containing a relatively small
number of computing cores; the development of a massively parallel
scattering solver is left for future work. A variety of numerical
examples presented in this paper demonstrate that the proposed methods
enable the efficient solution of large problems over complex
geometries on small parallel hardware infrastructures. Numerical
examples include acoustic scattering by a sphere of up to $128$
wavelengths, an $80$-wavelength submarine, and a turbofan nacelle that
is more than $80$ wavelengths in size, each one requiring, on a
28-core computer, computing times of the order of a few minutes per
iteration and a few tens of iterations of the GMRES iterative solver.
In this work the scattering surface is represented by means of a set
of non-overlapping surface patches. The associated integral
operators, which require careful treatment to ensure high-order
accuracy, are tackled by means of the Nystr\"{o}m approach presented
in~\cite{BrunoGarza2020}, at least in the case of singular and
near-singular Green function interactions, which, on the basis of a
rectangular-polar change of variables, operator kernel
precomputations, and Chebyshev expansions of the underlying densities,
achieves high-order accuracy in a manner that leads to seamless
integration with the aforementioned IFGF acceleration approach. The
IFGF acceleration method mentioned above (Interpolated Factored Green
Function), in turn, evaluates the action of Green function based
integral operators at a cost of $\mathcal{O}(N\log N)$ operations for an
$N$-point surface mesh. The efficiency of the IFGF approach is not
based on previously-employed acceleration elements such as the Fast
Fourier Transform (FFT), special-function expansions, high-dimensional
linear-algebra factorizations, translation operators, equivalent
sources, or parabolic
scaling~\cite{BebendorfRjasanow2003,Bleszynski1996,Borm2017,
BormMelenk2017,BrunoKunyansky2001JCP,
ChengEtAl2006,EngquistYing2007,
MichielssenBoag1996,PhillipsWhite1997,PoulsonEtAl2014,
Rokhlin1993,YingEtAl2003}. Instead, the IFGF method relies on an
interpolation scheme of factored forms of the operator kernels which,
when applied recursively to larger and larger groups of Green function
sources, gives rise to the desired $\mathcal{O}(N\log N)$ accelerated
evaluation.
This paper is organized as follows: integral equation formulations for
exterior Dirichlet boundary-value problems are presented in
Section~\ref{sec:IntEqn}. Section~\ref{sec:SurfRep} describes the
proposed non-overlapping patch surface representation approach, and
Section~\ref{sec:RecPolar} provides a brief description of the
rectangular-polar Chebyshev-based solver~\cite{BrunoGarza2020} we use
for the small subset of interactions that are not treated by the IFGF
method. Section~\ref{sec:IFGF} then presents the IFGF algorithm,
including the details necessary for its coupling with the
rectangular-polar approach. The numerical results presented in
Section~\ref{sec:NumRes} demonstrate the efficiency of the overall
IFGF-based solver by means of a variety of numerical experiments.
Section~\ref{sec:Conclusion}, finally, presents a few concluding
remarks.
\section{Integral equations for acoustic scattering}\label{sec:IntEqn}
\subsection{Scattering boundary-value problem}\label{sec:AcousticBVP}
We consider wave propagation in a homogeneous isotropic medium with
density $\rho$, speed of sound $c$, and no
damping~\cite{ColtonKress2013IEMethods}. Scattering obstacles are
represented by a bounded set $\Omega \subset \mathbb{R}^3$ which is the open
complement of an unbounded domain. For time-harmonic acoustic waves,
the wave motion can be obtained from the velocity potential
$U(x,t) = \Real{u(x) e^{-i\omega t}}$, where $\omega > 0$ is the angular
frequency, and the spatially-dependent complex-valued part $u(x)$
satisfies the Helmholtz equation
\begin{equation}\label{eq:ScalarHelmholtz}
\Delta u(x) + k^2 u(x) = 0, \qquad x \in \mathbb{R}^3 \setminus \bar{\Omega},
\end{equation}
with $k = \omega / c$; the corresponding acoustic wavelength is given by
$\lambda = 2\pi/k$. Denoting the boundary of $\Omega$ by $\del\Dom$, the
\emph{sound-soft} obstacle case that we consider requires that $u = 0$
on $\del\Dom$. Writing the total field $u(x) = u^i(x) + u^s(x)$, where
$u^i(x)$ is a given incident field which also satisfies the Helmholtz
equation, leads to an exterior Dirichlet boundary-value problem for
the scattered field $u^s(x)$
\begin{align}\label{eq:AcousticBVP}
\begin{cases}
\Delta u^s(x) + k^2 u^s(x) = 0, &x \in \mathbb{R}^3 \setminus \bar{\Omega}, \\
u^s(x) = -u^i(x), &x \in \del\Dom, \\
|x| \left( \frac{x}{|x|} \cdot \nabla u^s(x) - i k u^s(x) \right) = 0, & |x| \to \infty.
\end{cases}
\end{align}
\subsection{Integral representations and integral equations}\label{sec:IntRep}
The solutions to the acoustic scattering problem can be obtained in terms of an
integral equation posed on the obstacle boundary. To present the integral
formulation succinctly, we first define a few auxiliary boundary integral
operators.
Recall that the fundamental solution to the Helmholtz equation with positive
wave number $k$ is given by
\begin{equation}\label{eq:GreenFunction}
\Phi_k( x, y ) \coloneqq \frac{1}{4\pi} \frac{e^{\text{i} k |x-y|}}{|x-y|},
\quad x \neq y.
\end{equation}
Given a complex scalar function $\varphi \in C( \del\Dom, \mathbb{C} )$ and a point $x$ on
$\del\Dom$, we define the single- and double-layer integral operators,
respectively, as
\begin{align}
\opS_k [ \varphi ](x) &\coloneqq \int_{\del\Dom} \Phi_k(x,y) \varphi(y) \, dS(y),
\label{eq:scalarSop} \\
\opD_k [ \varphi ](x) &\coloneqq \int_{\del\Dom}
\frac{\partial \Phi_k(x,y)}{\partial \nu(y)} \varphi(y) \, dS(y).
\label{eq:scalarDop}
\end{align}
The solution to~\eqref{eq:AcousticBVP} can be expressed as a
combined-layer potential
\begin{equation}\label{eq:CombinedLayer}
u^s(x) = \int_{\del\Dom} \left\{ \frac{\partial \Phi_k(x,y)}{\partial \nu(y)}
- \text{i} \gamma \Phi_k(x,y) \right\} \varphi(y) \, dS(y),
\qquad x \in \mathbb{R}^3 \setminus \bar{\Omega},
\end{equation}
for a real \emph{coupling parameter} $\gamma \neq 0$, where the
density $\varphi$ is a solution to the integral equation
\begin{equation}\label{eq:CombLayIE}
\frac{1}{2} \varphi(x) + \opD_k [\varphi](x) - \text{i} \gamma \opS_k[\varphi](x) = f(x),
\qquad x \in \del\Dom,
\end{equation}
with $f(x) = -u^i(x)$.
\section{Surface representation}\label{sec:SurfRep}
Following~\cite{BrunoGarza2020}, we partition a scattering surface as
the disjoint union of a set of non-overlapping parametrized component
``patches''. We also refer to a surface patch as a ``logical
quadrilateral'' (LQ) since it is assumed to be the image of a
rectangular reference domain. Given a scattering surface $\del\Dom$, we
thus utilize a number $Q$ of smooth parametrizations
\[
y^q : R \to \mathbb{R}^3, \qquad (q=1,\dotsc,Q),
\]
from a $uv$-plane reference domain $R\coloneqq (-1,1)^2$ onto an LQ patch
$\Gamma^q \subset \mathbb{R}^3$ such that
\begin{equation}\label{eq:GammaCover}
\Gamma^q = y^q( R ) \quad \text{ and } \quad
\Gamma = \bigcup_{q=1}^Q \Gamma^q.
\end{equation}
A general integral operator defined over $\Gamma$ can then be evaluated
component-wise over each patch $\Gamma^q$.
We discretize the patch $\Gamma^q$ by means of a surface grid
containing $N_u^q \times N_v^q$ points given by the image of the
tensor-product discretization
\[
\left\{ u_i = s_i \,\,|\,\, i = 0,\dotsc, N_u^q-1 \right\} \times
\left\{ v_j = s_j \,\,|\,\, j = 0,\dotsc, N_v^q-1 \right\},
\]
under the parametrization $y^q$, where the nodes $s_j$ and associated
integration weights $w_j$ are given by Fej\'{e}r's first quadrature
rule:
\begin{align}\label{eq:FejersNodesWts}
s_j &= \cos \left( \pi \frac{2j+1}{2J} \right), \\
w_j &= \frac{2}{J} \left[1 - 2\sum_{\ell = 1}^{\floor{J/2}} \frac{1}{4\ell^2 -1}
\cos \left( \ell \pi \frac{2j+1}{J} \right) \right],
\end{align}
for $j = 0,\dotsc,J-1$, where $J$ is either $N_u^q$ or $N_v^q$. The set of all
surface discretization points will be denoted by
\begin{equation}\label{eq:AllSurfGrid}
\del\Dom_N \coloneqq \bigcup_{q=1}^Q \del\Dom^q_{N_u,N_v},
\end{equation}
where $N$ denotes the total number of grid points over all patches.
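As a quick numerical check of~\eqref{eq:FejersNodesWts}, the following Python sketch (illustrative only) assembles the nodes and weights for a given $J$ and verifies two basic properties of the rule: the weights sum to $2$, the length of $[-1, 1]$, and, being interpolatory at the Chebyshev points, the rule integrates polynomials of degree up to $J - 1$ exactly:

```python
import math

def fejer1(J):
    """Nodes s_j and weights w_j of Fejer's first quadrature rule on [-1, 1]."""
    nodes, weights = [], []
    for j in range(J):
        theta = math.pi * (2 * j + 1) / (2 * J)  # s_j = cos(theta)
        s = sum(math.cos(2 * ell * theta) / (4 * ell * ell - 1)
                for ell in range(1, J // 2 + 1))
        nodes.append(math.cos(theta))
        weights.append((2.0 / J) * (1.0 - 2.0 * s))
    return nodes, weights

nodes, weights = fejer1(8)
print(sum(weights))                                    # ~ 2
print(sum(w * x * x for x, w in zip(nodes, weights)))  # ~ 2/3, the integral of x^2
```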
\section{Chebyshev-based rectangular-polar integral equation solver}\label{sec:RecPolar}
This section presents a brief description of the high-order integral
equation solver introduced in~\cite{BrunoGarza2020}. In that approach,
a general integral operator $\mathcal{I}^q$ with singular kernel $K^q$ and
density $\varphi^q$ defined over a component patch $\Gamma^q$ is
expressed in the parametric form
\begin{equation}\label{eq:IntOpParametric}
(\mathcal{I}^q \varphi)(x) = \int_{R} K^q(x,u,v) \varphi^q(u,v) J^q(u,v) \, dudv,
\end{equation}
for $x \in \Gamma$, where $K^q(x,u,v) \coloneqq K(x,y^q(u,v))$ and
$\varphi^q(u,v) \coloneqq \varphi(y^q(u,v))$, and where $J^q(u,v)dudv$
denotes the element of area. We often refer to $x$ as the
``evaluation'' or ``target'' point.
To compute~\eqref{eq:IntOpParametric} accurately we use two different
high-order methods, depending on whether the distance from the target point $x$ to
the integration patch is less than or greater than some ``proximity distance'' $\delta$.
In detail, letting
\begin{equation}\label{eq:PointSetDist}
\dist{ x, \Gamma^q } \coloneqq
\inf \left\{\, |x - y| \,\, | \,\, y \in \Gamma^q \right\},
\end{equation}
denote the distance from a point $x$ to a patch $\Gamma^q$ (where
$|\cdot|$ denotes the Euclidean distance), the set of target points that
give rise to ``singular'' and ``nearly-singular'' interactions over $\Gamma^q$ is
defined by
\begin{equation}\label{eq:SingGridPts}
\Omega_q^{s,\delta} \coloneqq \left\{ x \in \Gamma \,\, | \,\,
\dist{ x, \Gamma^q } \leq \delta \right\}.
\end{equation}
In contrast, the set of regular (non-singular) target points is
defined by
\begin{equation}\label{eq:ReguGridPts}
\Omega_q^{r,\delta} \coloneqq \left\{ x \in \Gamma \,\, | \,\,
\dist{ x, \Gamma^q } > \delta \right\}.
\end{equation}
We say that the interaction of an integration patch $\Gamma^q$ with a target
point is \emph{singular} or \emph{regular/non-singular}, according to whether
the target point lies in $\Omega_q^{s,\delta}$ or $\Omega_q^{r,\delta}$,
respectively.
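As an illustration, the singular/regular splitting defined
by~\eqref{eq:SingGridPts} and~\eqref{eq:ReguGridPts} can be sketched in a few
lines of Python. In this sketch the continuous distance $\dist{x,\Gamma^q}$ is
approximated by the minimum distance to the patch's discretization points; that
discrete surrogate is an assumption of the sketch, not part of the solver's
definition.

```python
import math

def classify_targets(targets, patch_points, delta):
    # Split target points into the "singular" set (within delta of the
    # patch) and the "regular" set, mirroring Omega_q^{s,delta} and
    # Omega_q^{r,delta}; the patch is represented by its sample points.
    singular, regular = [], []
    for x in targets:
        d = min(math.dist(x, y) for y in patch_points)
        (singular if d <= delta else regular).append(x)
    return singular, regular
```

In the actual solver the patch points would be the parametrized surface samples
$y^q(u_i,v_j)$; here any point list serves to exercise the classification.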
\subsection{Integration algorithm for singular interactions}\label{sec:SingInter}
To evaluate~\eqref{eq:IntOpParametric} at a singular or near-singular target
point $x\in \Omega_p^{s,\delta}$, we proceed as follows. First, we form the
Chebyshev expansion of the density $\varphi^q$ over $\Gamma^q$:
\begin{equation}\label{eq:ChebyExpansion}
\varphi^q(u,v) \approx \sum_{m=0}^{N_v^q-1} \sum_{n=0}^{N_u^q-1}
a_{n,m}^q T_n(u) T_m(v),
\end{equation}
where, in view of the discrete orthogonality property satisfied by
Chebyshev polynomials at the Fej\'{e}r nodes, we have
\begin{equation}\label{eq:ChebyCoeff}
a_{n,m}^q = \frac{\alpha_n \alpha_m}{N_u^q N_v^q}
\sum_{j=0}^{N_v^q-1} \sum_{i=0}^{N_u^q-1} \varphi^q(u_i,v_j)
T_n(u_i) T_m(v_j),
\qquad \alpha_n \coloneqq \begin{cases} 1, & n=0 \\
2, & n \neq 0
\end{cases}.
\end{equation}
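The coefficient formula~\eqref{eq:ChebyCoeff} can be verified numerically: by
the discrete orthogonality of Chebyshev polynomials at the Fej\'{e}r nodes, the
transform recovers the coefficients of a polynomial density exactly. The
following plain-Python sketch takes the density as a callable (an illustrative
choice; the solver works with point samples $\varphi^q(u_i,v_j)$).

```python
import math

def fejer_nodes(N):
    # Fejer first-kind (open Chebyshev) nodes on [-1, 1]
    return [math.cos((2*i + 1) * math.pi / (2*N)) for i in range(N)]

def T(n, x):
    # Chebyshev polynomial of the first kind via the cosine definition
    return math.cos(n * math.acos(max(-1.0, min(1.0, x))))

def cheb_coeffs_2d(phi, Nu, Nv):
    # Discrete Chebyshev transform of samples phi(u_i, v_j) at the Fejer
    # nodes, with normalization alpha_n = 1 for n = 0 and 2 otherwise
    u, v = fejer_nodes(Nu), fejer_nodes(Nv)
    a = [[0.0] * Nu for _ in range(Nv)]   # a[m][n] stores a_{n,m}
    for m in range(Nv):
        for n in range(Nu):
            alpha = (1 if n == 0 else 2) * (1 if m == 0 else 2)
            s = sum(phi(u[i], v[j]) * T(n, u[i]) * T(m, v[j])
                    for j in range(Nv) for i in range(Nu))
            a[m][n] = alpha * s / (Nu * Nv)
    return a
```

For the density $\varphi(u,v) = T_1(u)T_2(v)$ the transform yields
$a_{1,2} = 1$ and all other coefficients vanish to machine precision.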
Replacing the density $\varphi^q$ by its Chebyshev
expansion~\eqref{eq:ChebyExpansion}, in the proposed scheme the
integral~\eqref{eq:IntOpParametric} is numerically approximated by
\begin{subequations}\label{eq:IOCheExpansion}
\begin{align}
(\mathcal{I}^q \varphi)(x) &\approx \int_{R} K^q(x,u,v)
\left( \sum_{m=0}^{N_v^q-1} \sum_{n=0}^{N_u^q-1}
a_{n,m}^q T_n(u) T_m(v) \right)
J^q(u,v) \, dudv \label{eq:IOCheExpA} \\
&= \sum_{m=0}^{N_v^q-1} \sum_{n=0}^{N_u^q-1} a_{n,m}^q
\left( \int_{R} K^q(x,u,v)
T_n(u) T_m(v) J^q(u,v) \, dudv \right). \label{eq:IOCheExpB}
\end{align}
\end{subequations}
Note that the double integral in~\eqref{eq:IOCheExpB} does not depend
on the density; it depends only on the kernel, a product of Chebyshev
polynomials, and the geometry. Once this integral has been computed to
the desired accuracy, the proposed method stores its value and uses it
as needed.
We write the value of $\mathcal{I}^q$ at all target points $x_{\ell} \in
\Omega_p^{s,\delta}$ succinctly as
\begin{equation}\label{eq:IntOpAtSing}
(\mathcal{I}^q \varphi)(x_{\ell}) = \sum_{m=0}^{N_v^q-1} \sum_{n=0}^{N_u^q-1} a_{n,m}^q
\, \beta_{n,m}^{q,\ell},
\end{equation}
where
\begin{equation}\label{eq:BetaAtSing}
\beta_{n,m}^{q,\ell} \coloneqq \int_{R} K^q(x_{\ell},u,v)
T_n(u) T_m(v) J^q(u,v) \, dudv.
\end{equation}
To compute~\eqref{eq:BetaAtSing} at an evaluation point $x_{\ell}$, we first
identify its corresponding integration patch node $(\bar{u}_{\ell}^q,
\bar{v}_{\ell}^q)$. If the target point $x_{\ell}$ is itself a grid point of
$\Gamma^q$, then finding its node is straightforward: $x_{\ell} =
y^q(\bar{u}_{\ell}^q, \bar{v}_{\ell}^q)$ for some point $(\bar{u}_{\ell}^q,
\bar{v}_{\ell}^q)$ in the $uv$-plane reference domain for $\Gamma^q$. On the
other hand, if $x_{\ell} \in \Omega_p^{s,\delta} \setminus \Gamma^q$, then we
search for a $\Gamma^q$ node such that
\begin{equation}\label{eq:MinNode}
(\bar{u}_{\ell}^q, \bar{v}_{\ell}^q) = \argmin_{(u,v)\in [-1,1]^2}
\norm{ x_{\ell} - y^q(u,v) }.
\end{equation}
As in~\cite{BrunoGarza2020}, for robustness and simplicity we solve
the minimization problem~\eqref{eq:MinNode} by means of the golden
section search algorithm.
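The minimization~\eqref{eq:MinNode} is two-dimensional over $(u,v)$; the
one-dimensional golden section search that serves as its building block can be
sketched as follows (tolerance and bracketing are illustrative choices of this
sketch).

```python
import math

def golden_section_min(f, a, b, tol=1e-10):
    # Golden-section search for the minimizer of a unimodal f on [a, b];
    # each iteration shrinks the bracket by the inverse golden ratio and
    # reuses one of the two interior function values.
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - gr * (b - a), a + gr * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - gr * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + gr * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

Applied to $f(u) = \|x_{\ell} - y^q(u, v)\|$ for fixed $v$ (and alternated over
the two coordinates, or embedded in a 2D variant), this yields the node
$(\bar{u}_{\ell}^q, \bar{v}_{\ell}^q)$ robustly without derivative information.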
Next, we apply a one-dimensional change of variables to each
coordinate in the $uv$-parameter space to construct a clustered grid
around each given target node. To this end we consider the following
one-to-one, strictly monotonically increasing, and infinitely
differentiable function $w:[0,2\pi] \to [0,2\pi]$, with parameter
$d \geq 2$ (proposed in~\cite[Section 3.5]{ColtonKress2013InvAcouEM}),
\begin{equation}\label{eq:wMap}
w( \tau; d ) \coloneqq 2\pi \frac{[\nu(\tau)]^d}
{[\nu(\tau)]^d + [\nu(2\pi - \tau)]^d},
\qquad 0 \leq \tau \leq 2\pi,
\end{equation}
where
\begin{equation}\label{eq:vMap}
\nu( \tau; d ) \coloneqq \left( \frac{1}{d} - \frac{1}{2} \right)
\left( \frac{\pi - \tau}{\pi} \right)^3
+ \frac{1}{d} \left( \frac{\tau - \pi}{\pi} \right)
+ \frac{1}{2}.
\end{equation}
It can be shown that $w$ has vanishing derivatives up to order $d-1$ at the
interval endpoints. Then, the following change of variables
\begin{equation}\label{eq:CoV}
\xi_{\alpha}( \tau; d ) \coloneqq
\begin{cases}
\alpha + \left( \frac{\sgn(\tau) - \alpha}{\pi} \right) w( \pi |\tau|; d ),
& \text{ for } \alpha \neq \pm 1, \\
\alpha - \left( \frac{1+\alpha}{\pi} \right) w( \pi \left| \frac{\tau - 1}{2} \right|; d ),
& \text{ for } \alpha = 1, \\
\alpha + \left( \frac{1-\alpha}{\pi} \right) w( \pi \left| \frac{\tau + 1}{2} \right|; d ),
& \text{ for } \alpha = -1,
\end{cases}
\end{equation}
has the effect of clustering points around $\alpha$. Fej\'{e}r's rule applied
to the integral~\eqref{eq:BetaAtSing}, transformed using the change of
variables~\eqref{eq:CoV}, yields the approximation
\begin{equation}\label{eq:BetaCov}
\beta_{n,m}^{q,\ell} \approx \sum_{j=0}^{N_{\beta}-1} \sum_{i=0}^{N_{\beta}-1}
K^q(x_{\ell}, u_i^{q,\ell}, v_j^{q,\ell} )
T_n(u_i^{q,\ell}) T_m(v_j^{q,\ell})
J^q(u_i^{q,\ell},v_j^{q,\ell}) \,
w_i^{u,q,\ell} w_j^{v,q,\ell},
\end{equation}
where
\begin{align}\label{eq:CovNodesWeights}
u_i^{q,\ell} &= \xi_{\bar{u}_{\ell}^q} ( s_i; d ), \quad
w_i^{u,q,\ell} = \frac{ d\xi_{\bar{u}_{\ell}^q} }{d\tau}( s_i; d ) \, w_i, \\
v_j^{q,\ell} &= \xi_{\bar{v}_{\ell}^q} ( s_j; d ), \quad
w_j^{v,q,\ell} = \frac{ d\xi_{\bar{v}_{\ell}^q} }{d\tau}( s_j; d ) \, w_j,
\end{align}
for $i,\, j = 0, \dotsc, N_{\beta} - 1$. To avoid division by zero, we set the
kernel $K^q$ to zero at integration points where the distance to the target
point is less than some prescribed tolerance, usually on the order of
$10^{-14}$.
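As a concrete check of the graded map~\eqref{eq:wMap}, the following Python
sketch implements $w$ and $\nu$ directly from the formulas above and verifies
the fixed points $w(0)=0$, $w(\pi)=\pi$, $w(2\pi)=2\pi$ together with strict
monotonicity; the sampling resolution used in the check is an arbitrary choice
of this sketch.

```python
import math

def nu(tau, d):
    # Auxiliary cubic of eq. (vMap); nu(0) = 0 and nu(2*pi) = 1
    return ((1.0 / d - 0.5) * ((math.pi - tau) / math.pi) ** 3
            + (1.0 / d) * ((tau - math.pi) / math.pi) + 0.5)

def w(tau, d):
    # Graded map of [0, 2*pi] onto itself (eq. wMap), with derivatives
    # vanishing up to order d-1 at both endpoints
    num = nu(tau, d) ** d
    return 2.0 * math.pi * num / (num + nu(2.0 * math.pi - tau, d) ** d)
```

Since $\nu(2\pi - \tau) = 1 - \nu(\tau)$, one obtains $w(\pi) = \pi$ directly
from the definition, which the test below confirms numerically.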
\subsection{Integration algorithm for non-singular interactions}\label{sec:ReguInter}
Together with the singular integration method discussed in the
previous subsection, the (non-accelerated) high-order
solver~\cite{BrunoGarza2020} evaluates the integral
operator~\eqref{eq:IntOpParametric} at all regular target points
$x_{\ell} \in \Omega_q^{r,\delta}$ simply by means of Fej\'{e}r's
first quadrature rule:
\begin{equation}\label{eq:ReguIntOpQuadrature}
(\mathcal{I}^q \varphi)(x_{\ell}) \approx
\sum_{j=0}^{N_v^q-1} \sum_{i=0}^{N_u^q-1} K^q(x_{\ell},u_i,v_j)
\varphi^q(u_i,v_j) J^q(u_i,v_j) \, w_i w_j.
\end{equation}
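A minimal sketch of this regular-interaction quadrature follows, with the
kernel, density and Jacobian folded into a single generic integrand $f(u,v)$
(an illustrative simplification). The weights are those of Fej\'{e}r's first
rule, the same $w_i$ appearing in~\eqref{eq:CovNodesWeights}.

```python
import math

def fejer1(N):
    # Nodes and weights of Fejer's first quadrature rule on [-1, 1]
    nodes, weights = [], []
    for i in range(N):
        theta = (2 * i + 1) * math.pi / (2 * N)
        nodes.append(math.cos(theta))
        s = sum(math.cos(2 * k * theta) / (4 * k * k - 1)
                for k in range(1, N // 2 + 1))
        weights.append((2.0 / N) * (1.0 - 2.0 * s))
    return nodes, weights

def tensor_quad(f, N):
    # Tensor-product Fejer rule over the reference square [-1, 1]^2, as
    # used for the regular interactions in eq. (ReguIntOpQuadrature)
    u, wu = fejer1(N)
    return sum(wu[i] * wu[j] * f(u[i], u[j])
               for i in range(N) for j in range(N))
```

The rule integrates polynomials of degree up to $N-1$ exactly in each variable,
e.g. $\int_{-1}^1\!\int_{-1}^1 u^2 v^2 \, du\, dv = 4/9$.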
It is not difficult to show that, asymptotically, the regular
interactions dominate the integral operator computation
(see~\cite[Section 4.4]{BrunoGarza2020}). Evaluating all non-singular
interactions using~\eqref{eq:ReguIntOpQuadrature} leads to an
algorithm with complexity $O(N^2)$ operations. For acoustically-large
problems such a computational cost becomes prohibitively expensive. To
deal with this difficulty we instead use the recent IFGF acceleration
method~\cite{BauingerBruno2021}, described in the following section, to
accelerate the evaluation. As indicated in~\cite{BauingerBruno2021} for
simple discrete operators, and as confirmed for full scattering problems
by the numerical results in Section~\ref{sec:NumRes}, the IFGF method
leads to an overall algorithm that runs at a computing cost of
$O(N \log N)$ operations.
\section{IFGF method for fast evaluation of non-singular interactions}\label{sec:IFGF}
\subsection{General overview of IFGF method}\label{sec:GenIFGF}
The IFGF method provides a fast ($\mathcal{O}(N \log N)$) method for the
accelerated evaluation of discrete integral operators of the form
\begin{equation} \label{eq:field1}
I(x_\ell) \coloneqq \sum \limits_{\substack{m = 1 \\ m \neq \ell}}^N a_m G(x_\ell, x_m)
,\quad \ell = 1, \ldots, N,
\end{equation}
where, letting $\del\Dom_N \subset \mathbb{R}^3$ denote an $N$-point
discretization of the surface $\Gamma$, $x_{\ell} \in \del\Dom_N$
($\ell, m = 1,\dotsc, N$) are pairwise different surface
discretization points, $a_m \in \mathbb{C}$ are arbitrary complex coefficients
and $G$ denotes a Green function such as the one displayed
in~\eqref{eq:GreenFunction}.
Given $D \in \mathbb{N}$, the IFGF method is based on a $D$-level octree
partition of the discrete surface $\del\Dom_N$, where the first octree
level consists of a single box that contains all of the surface
discretization points. Starting from the first level, the algorithm
partitions $\del\Dom_N$ recursively into axis-aligned boxes
$B_\mathbf{k}^d \subset \mathbb{R}^3$ ($d=1,\dotsc,D$) where
$\mathbf{k} \in \mathbb{N}^3$ is a multi-index that describes the
three-dimensional position of the box. Using this notation, the first
octree level $(d=1)$ consists of the single box
$B_{(1,1,1)}^1 \supset \del\Dom_N$ of side-length $H_1$. The boxes on
subsequent levels $d = 2, \dotsc, D$, are defined
through a partition of each of the level $(d-1)$ boxes into eight
equi-sized and disjoint boxes of side $H_d = H_{d-1}/2$ resulting in
the level $d$ boxes $B_\mathbf{k}^d$
($\mathbf{k} \in \{1, \ldots, 2^{d-1}\}^3 =: I^d_B$). Each box
$B_{\mathbf{k}}^d$ on level $d$ ($2 \leq d \leq D$) therefore is
contained in a parent box on level $d-1$, which we denote by
$\mathcal{P}B_{\mathbf{k}}^d$.
Figure~\ref{fig:domaindecomposition}(a) illustrates the
two-dimensional equivalent hierarchical octree structure in the
three-level ($D = 3$) case.
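The box bookkeeping above is elementary to implement. The sketch below, which
assumes an axis-aligned level-1 box with a given corner `origin` and side
length `H1` (names chosen here for illustration), computes the multi-index of
the level-$d$ box containing a point and the index of its parent
$\mathcal{P}B_{\mathbf{k}}^d$; the clamping at the upper boundary is a
convention of this sketch.

```python
def box_index(x, origin, H1, d):
    # Multi-index k in {1,...,2^(d-1)}^3 of the level-d box containing x,
    # for a level-1 box with corner `origin` and side length H1
    h = H1 / 2 ** (d - 1)          # side length H_d of a level-d box
    return tuple(min(2 ** (d - 1), int((xi - oi) // h) + 1)
                 for xi, oi in zip(x, origin))

def parent_index(k):
    # Index of the parent box on level d-1 (integer halving per axis)
    return tuple((ki + 1) // 2 for ki in k)
```

Consistency of the two routines, namely that the parent of a point's level-$d$
box is the point's level-$(d-1)$ box, follows from the dyadic refinement
$H_d = H_{d-1}/2$.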
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/IFGFPartitionAndCone2D.png}
\caption{(a) Two-dimensional sketch of the IFGF domain
decomposition for the three-level $(D=3)$ case; a scatterer is
sketched in blue. In particular, the figure displays the
neighbors (white boxes) and cousins (gray boxes) of the box
$B_{(2, 1)}^3$. (b) Illustration of typical cone segments for a
box $B_\mathbf{k}^d$ (shown in red and centered at the southeast
red box) and its parent box with some associated cone segments
(drawn in black).}
\label{fig:domaindecomposition}
\end{figure}
To achieve the desired acceleration, the IFGF method only considers
interactions involving the set $\mathcal{R}_B$ of \textit{relevant
boxes}, which are defined as the boxes that contain at least one surface
discretization point. More precisely, the relevant boxes are given by
\begin{equation}\label{eq:RelevantBoxes}
\mathcal{R}_B := \{ B_{\mathbf{k}}^d : \Gamma_N \cap B_{\mathbf{k}}^d
\neq \emptyset, 1 \leq d \leq D, \mathbf{k} \in I_B^d \}.
\end{equation}
Furthermore, the following notation is introduced for the \textit{neighbors}
$\mathcal{N} B_\mathbf{k}^d$ and the \textit{cousins} (non-neighboring boxes
that are children of the parent's neighbors) $\mathcal{M} B_\mathbf{k}^d$ of a
box $B_\mathbf{k}^d$ on level $d$ together with the sets of \textit{neighbor
points} $\mathcal{U} B_\mathbf{k}^d$ and \textit{cousin points} $\mathcal{V}
B_\mathbf{k}^d$
\begin{subequations}\label{eq:NeighborsCousins}
\begin{align}
\mathcal{N} B_\mathbf{k}^d &:= \{ B_\mathbf{j}^d \in \mathcal{R}_B : ||\mathbf{j} - \mathbf{k}||_\infty \leq 1 \}, \\
\mathcal{M} B_\mathbf{k}^d &:= \{ B_\mathbf{j}^d \in \mathcal{R}_B : B_\mathbf{j}^d \notin \mathcal{N} B_\mathbf{k}^d \wedge \mathcal{P}B_{\mathbf{j}}^d \in \mathcal{N} \mathcal{P} B_\mathbf{k}^d \} \\
\mathcal{U} B_\mathbf{k}^d &:= \left( \bigcup \limits_{B \in \mathcal{N} B_\mathbf{k}^d} B \right) \cap \Gamma_N \\
\mathcal{V} B_\mathbf{k}^d &:= \left( \bigcup \limits_{B \in \mathcal{M} B_\mathbf{k}^d} B \right) \cap \Gamma_N.
\end{align}
\end{subequations}
Figure \ref{fig:domaindecomposition}(a) illustrates the concept of neighbors
(white boxes) and cousins (gray boxes) in two dimensions for the single box
$B_{(2, 1)}^3$.
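The neighbor and cousin relations in~\eqref{eq:NeighborsCousins} reduce to
simple predicates on the box multi-indices, as the following sketch shows
(relevance filtering is omitted here for brevity).

```python
def is_neighbor(j, k):
    # Neighbor relation: Chebyshev distance of the multi-indices is at
    # most 1, i.e. ||j - k||_inf <= 1 (a box neighbors itself)
    return max(abs(a - b) for a, b in zip(j, k)) <= 1

def parent(k):
    # Multi-index of the parent box on the coarser level
    return tuple((ki + 1) // 2 for ki in k)

def is_cousin(j, k):
    # Cousins: non-neighboring boxes whose parents are neighbors
    return (not is_neighbor(j, k)) and is_neighbor(parent(j), parent(k))
```

For instance, on a given level the boxes $(1,1,1)$ and $(4,1,1)$ are cousins:
they are not neighbors, but their parents $(1,1,1)$ and $(2,1,1)$ are.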
The IFGF algorithm accelerates the evaluation of \eqref{eq:field1} by
evaluating the pairwise interactions of level-$d$ cousin boxes, for
$d = D, \ldots, 3$. (Indeed, the algorithm stops at level $d = 3$
since, per construction of the boxes and definition of cousins, at
that stage all boxes are cousins and therefore all interactions have
already been performed.) The IFGF method evaluates these interactions
by means of simple piece-wise interpolation of the Green function
analytic factor $g_\mathbf{k}^d(x, y)$, defined below, in box-centered
spherical coordinates, resulting in a set of so-called \textit{cone
segments} $C_{\mathbf{k}; \bf{\nu}}^d$ for each box
$B_\mathbf{k}^d$---each one of which represents the piece-wise
interpolation domain in spherical coordinates---and a set of
\textit{interpolation points} $\mathcal{X} C_{\mathbf{k}; \bf{\nu}}^d$
within the cone segment $C_{\mathbf{k}; \bf{\nu}}^d$. (The
problem-dependent multi-index
$\bf{\nu} \in I_C^d \subset \mathbb{N}^3$ labels the cone segments
associated with a single box.) More precisely, to achieve fast
computation times, a certain factorization
$G(x, y) = G(x, y_\mathbf{k}^d) g_\mathbf{k}^d(x, y)$ of the Green
function, into a \textit{centered} factor $G(x, y_\mathbf{k}^d)$
(centered at the center $y_\mathbf{k}^d$ of the box $B_\mathbf{k}^d$)
and an \textit{analytic} factor $g_\mathbf{k}^d(x, y)$, is used. The
field $I(x)$ in \eqref{eq:field1} can be expressed for each level $d$
as the sum over all multi-indices $\mathbf{k} \in I_B^d$ of fields
$I_\mathbf{k}^d(x)$ generated by point sources within the box
$B_\mathbf{k}^d$. Using the aforementioned factorization centered at
$y_\mathbf{k}^d$ yields
\begin{equation}\label{eq:IFGF-factor}
I_\mathbf{k}^d(x) = \sum \limits_{y \in B_\mathbf{k}^{d} \cap \Gamma_N}
a(y) G(x, y) = G(x, y_\mathbf{k}^d) F_\mathbf{k}^d (x), \qquad
F_\mathbf{k}^d(x) \coloneqq \sum \limits_{y \in B_\mathbf{k}^{d} \cap \Gamma_N}
a(y) g_\mathbf{k}^d(x, y),
\end{equation}
where $a(y)$ denotes the coefficient $a_m$ in \eqref{eq:field1} that
corresponds to the point $y \in \Gamma$.
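The factorization in~\eqref{eq:IFGF-factor} is an exact algebraic identity,
which can be checked numerically. The sketch below assumes the free-space
Helmholtz Green function with the normalization
$G(x,y) = e^{\mathrm{i}k|x-y|}/(4\pi|x-y|)$ (taken here as an assumption;
the precise form is the one displayed in~\eqref{eq:GreenFunction}).

```python
import cmath, math

def helm_green(k, x, y):
    # Free-space Helmholtz Green function exp(ikr)/(4*pi*r)
    r = math.dist(x, y)
    return cmath.exp(1j * k * r) / (4.0 * math.pi * r)

def analytic_factor(k, x, y, yc):
    # g_k^d(x, y) = G(x, y) / G(x, y_c): smooth and slowly oscillating
    # for x outside the neighborhood of the box centered at yc
    r, rc = math.dist(x, y), math.dist(x, yc)
    return (rc / r) * cmath.exp(1j * k * (r - rc))

def field_factored(k, x, sources, coeffs, yc):
    # I_k^d(x) = G(x, y_c) * F_k^d(x), where F sums the analytic factors
    # weighted by the source coefficients a(y)
    F = sum(a * analytic_factor(k, x, y, yc)
            for y, a in zip(sources, coeffs))
    return helm_green(k, x, yc) * F
```

The factored evaluation reproduces the direct sum to machine precision; the
gain in the IFGF method comes from interpolating the slowly varying $F$
rather than the oscillatory field itself.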
The IFGF interpolation procedure is then only performed to evaluate
$F_\mathbf{k}^d$. In~\cite{BauingerBruno2021} it was shown---for the special
case of Helmholtz Green function---that the analytic factor is analytic up to
and including infinity in $\mathbb{R}^3\setminus \mathcal{N} B_\mathbf{k}^d$
and can therefore be interpolated accurately everywhere in
$\mathbb{R}^3\setminus \mathcal{N} B_\mathbf{k}^d$ using only a finite and
small number of interpolation points. It follows that the analyticity
properties and the interpolability transfer to $F_\mathbf{k}^d$ since it is a
finite sum of analytic factors.
The cone segments $C_{\mathbf{k}; \bf{\nu}}^d$ are defined iteratively for each
box---similarly to the boxes---but in reverse order, starting from $d=D$ and
moving up the tree to $d=3$. The choice of the cone segments depends on the
level $d$, the surface $\Gamma_N$, the wavenumber and, possibly, the Green
function $G$. While the procedure is straightforward, it requires a lengthy discussion
which is not presented here but which was covered in detail
in~\cite{BauingerBruno2021}. A two-dimensional sketch of two box-centered cone
segments for a box $B_\mathbf{k}^d$ and its parent $\mathcal{P}B_\mathbf{k}^d$
is illustrated in Figure \ref{fig:domaindecomposition}(b).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/IFGFAndLocalInteraction.png}
\caption{(a) IFGF four level $(D=4)$ domain decomposition for a
sphere (only the finest level boxes are shown); a source surface
patch and its neighboring patches are highlighted in red and
yellow, respectively. (b) Near-singular and regular points
neighboring the red patch are depicted in blue and green,
respectively. All evaluations from the red patch to itself are
singular and, together with the near-singular and neighboring
regular interactions are computed by means of the
rectangular-polar algorithm described in
Section~\ref{sec:RecPolar}.}
\label{fig:LocalInter}
\end{figure}
To achieve the stated acceleration and perform the computation of
\eqref{eq:field1} in $\mathcal{O}(N \log N)$ operations, the IFGF algorithm
uses iterated interpolation to evaluate the analytic factor at the
interpolation points in consecutive levels. Moreover, analogously to
the set of relevant boxes, the method only considers the set of
\textit{relevant cone segments} $\mathcal{R}_C B_\mathbf{k}^d$
centered at the box $B_\mathbf{k}^d$,
\begin{subequations}\label{eq:RelevantCones}
\begin{align}
\mathcal{R}_C B_\mathbf{k}^d &:= \emptyset \quad \text{for} \quad d \in \{1, 2\}, \\
\mathcal{R}_C B_\mathbf{k}^d &:= \left\{ C_{\mathbf{k}; \mathbf{\nu}}^d \, : \, \nu \in I_C^d\, , \, C_{\mathbf{k}; \mathbf{\nu}}^d \cap \mathcal{V} B_\mathbf{k}^{d} \neq \emptyset \text{ or } C_{\mathbf{k}; \mathbf{\nu}}^d \cap \left( \bigcup \limits_{C \in \mathcal{R}_C \mathcal{P} B_\mathbf{k}^{d}} C \right) \neq \emptyset \right \} \quad \text{for} \quad d \geq 3.
\end{align}
\end{subequations}
The IFGF method is summarized in Algorithm \ref{alg:ifgf}. Note that
the algorithm does not compute the ``local interactions'', that is,
the interactions between neighboring boxes on the finest level
$(d=D)$. For the scattering solver proposed in this paper, such
interactions are evaluated by means of two separate methods, as
illustrated in Figure~\ref{fig:LocalInter} and described in what
follows. If the distance from a box's source points to a neighboring
box's target point $x_{\ell}$ is less than or equal to the proximity
distance $\delta$ defined in Section~\ref{sec:SingInter}, the local
interactions are evaluated by means of the algorithm described in that
section. In case the source points and the target point in neighboring
boxes are a distance greater than $\delta$, in turn, the quadrature
presented in Section~\ref{sec:ReguInter} is used instead. We emphasize
that local interactions are computed \emph{only in the finest level}
$(d=D)$. Figure~\ref{fig:LocalInter}(a) shows a four-level $(D=4)$
IFGF domain decomposition for a sphere, where a source patch and its
neighbors patches are highlighted in red and yellow, respectively. A
close-up view of these patches is shown in
Figure~\ref{fig:LocalInter}(b); source-to-neighbor near-singular
evaluation points are drawn in blue, while regular evaluation points
are depicted in green. All evaluations from the source patch to itself
are singular and are also computed using the algorithms from
Section~\ref{sec:SingInter}.
\begin{algorithm}[htp!]
\begin{algorithmic}[1]
\small
\State \textbackslash \textbackslash Direct evaluation of the analytic
factor $F$ at the finest-level $(d=D)$ interpolation points.
\For{$B_\mathbf{k}^D \in
\mathcal{R}_B$} \label{algstate:looprelevantboxeslevelD}
\For{$C_{\mathbf{k}; \mathbf{\nu}}^D \in \mathcal{R}_C
B_\mathbf{k}^D$} \label{algstate:looprelevantconeslevelD}
\Comment{Evaluate $F$ at all relevant interpolation points}
\For{$x \in \mathcal{X}
C_{\mathbf{k};\mathbf{\nu}}^D$} \label{algstate:loopinterpointslevelD}
\State Evaluate and store $F_\mathbf{k}^D(x)$. \EndFor \EndFor
\EndFor
\State
\State \textbackslash \textbackslash Interpolation onto surface discretization points and parent
interpolation points.
\For{$d = D, \ldots, 3$}\label{algstate:loopd}
\For{$B_\mathbf{k}^d \in \mathcal{R}_B$} \label{algstate:looprelevantboxes}
\For{$x \in \mathcal{V} B_\mathbf{k}^d$} \label{algstate:loopnearestneighbouringsurfacepoints}
\Comment{Interpolate at cousin surface points}
\State Evaluate $I_\mathbf{k}^d(x)$ by interpolation \label{algstate:interpolationtosurfacepoints}
\EndFor
\If {$d > 3$}
\Comment{Evaluate $F$ on parent interpolation points} \State Determine parent $B_\mathbf{j}^{d-1} = \mathcal{P} B_\mathbf{k}^d$
\For{$C_{\mathbf{j}; \mathbf{\nu}}^{d-1} \in \mathcal{R}_C B_\mathbf{j}^{d-1}$} \label{algstate:looprelevantcones}
\For{$x \in \mathcal{X} C_{\mathbf{j};\mathbf{\nu}}^{d-1}$} \label{algstate:loopinterppoints}
\State Evaluate and add $F_\mathbf{k}^d(x) G(x, y_\mathbf{k}^d)/ G(x, y_\mathbf{j}^{d-1})$
\EndFor
\EndFor
\EndIf
\EndFor
\EndFor
\caption{IFGF Algorithm}
\label{alg:ifgf}
\end{algorithmic}
\end{algorithm}
\section{Numerical results}\label{sec:NumRes}
This section presents numerical results that demonstrate the accuracy
and efficiency of the proposed IFGF-accelerated acoustic scattering
solvers. For comparison, results obtained using the $\mathcal{O}(N^2)$
nonaccelerated Chebyshev-based scattering solvers introduced
in~\cite{BrunoGarza2020} are also included. Both the accelerated and
nonaccelerated solvers are implemented using OpenMP for shared-memory
parallelism.
After solving~\eqref{eq:CombLayIE} for the density $\varphi$, the far field
pattern $u^{\infty}$ can be obtained from
\begin{equation}\label{eq:FarField}
u^{\infty}(\hat{x}) = \frac{1}{4\pi}
\int_{\del\Dom} \left\{ \frac{\partial}{\partial \nu(y)} e^{-\text{i} k \hat{x} \cdot y}
- \text{i} \gamma e^{-\text{i} k \hat{x} \cdot y} \right\}
\varphi(y) \, dS(y), \quad \hat{x} \in \mathbb{S}^2,
\end{equation}
where $\mathbb{S}^2$ denotes the unit sphere and $\del\Dom$ is the scatterer's
boundary. The far field is computed over a uniformly-spaced unit spherical grid
\begin{equation}\label{eq:SphereGrid}
\mathbb{S}^2_N \coloneqq \left\{ (\phi_m,\theta_n) \in [0,\pi]\times[0,2\pi]
\,\,|\,\, 1 \leq m \leq N_{\phi}, \,
1 \leq n \leq N_{\theta} \right\},
\end{equation}
with $\phi_m = (m-1)\Delta \phi, \, \theta_n = (n-1) \Delta \theta$
and where the spacings are defined as $\Delta \phi = \pi/(N_{\phi}-1)$
and $\Delta \theta = 2\pi/(N_{\theta}-1)$, respectively; specific
values of $N_{\phi}$ and $N_{\theta}$ are given in each example's
subsection. Given the exact (or reference) far field modulus
$|u^{\infty}|$ and an approximate far field modulus
$|\tilde{u}^{\infty}|$, the maximum far field relative error
$\varepsilon_{far}$ over $\mathbb{S}^2_N$,
\begin{equation}\label{eq:FarFieldRelErr}
  \varepsilon_{far} = \max_{(m,n) \in \mathbb{S}^2_N} \left\{
    \frac{\left| |u_{m,n}^{\infty}| - |\tilde{u}_{m,n}^{\infty}| \right|}
         {|u_{m,n}^{\infty}|} \right\},
\end{equation}
is reported in each case.
Similarly, using the solution $\varphi$ in the combined-layer
representation~\eqref{eq:CombinedLayer} we evaluate and display the
scattered field $u^s$ over near field planes that are parallel to the
$xy$-, $xz$-, or $yz$-planes. For example, we evaluate fields
(incident, scattered, and total) at every point of a uniformly-spaced
two-dimensional $xy$-planar grid $\mathbb{P}_N^{xy}(z_0)$ at $z = z_0$
defined by
\begin{equation}\label{eq:PlanarGrid}
\mathbb{P}_N^{xy}(z_0) \coloneqq \left\{ (x_m,y_n,z_0) \in [x_{min},x_{max}] \times
[y_{min},y_{max}] \times \{ z_0 \}
\,\,|\,\, 1 \leq m \leq N_x, \,
1 \leq n \leq N_y \right\},
\end{equation}
where the grid points are given by $x_m = (m-1) \Delta x, \, y_n = (n-1) \Delta
y$ and the grid spacings are $\Delta x = (x_{max} - x_{min}) /(N_x-1)$ and
$\Delta y = (y_{max} - y_{min}) /(N_y-1)$. Near field planar grids parallel to
the $xz$- and $yz$-plane are defined analogously. Denoting the exact (or
reference) and approximate modulus of the total field at each point of
$\mathbb{P}_N^{xy}(z_0)$ by $v_{m,n}$ ($=|u_{m,n}^s+u^i_{m,n}|$) and
$\tilde{v}_{m,n}$ ($=|\tilde{u}_{m,n}^s+u^i_{m,n}|$), respectively, we compute
the near field (total magnitude) relative error $\varepsilon_{near}$ over
$\mathbb{P}_N^{xy}(z_0)$ as
\begin{equation}\label{eq:NearFieldRelErr}
\varepsilon_{near} = \max_{(m,n) \in \mathbb{P}_N^{xy}(z_0)}
\left\{ \frac{|v_{m,n} - \tilde{v}_{m,n}|} {|v_{m,n}|} \right\}.
\end{equation}
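Both error measures~\eqref{eq:FarFieldRelErr} and~\eqref{eq:NearFieldRelErr}
are instances of the same maximum relative error over a sampling grid, which
can be sketched as follows (the grid values are assumed to be supplied as flat
sequences of moduli).

```python
def max_rel_error(ref, approx):
    # Maximum pointwise relative error of |approx| against the reference
    # moduli |ref| over a sampling grid, as in eqs. (FarFieldRelErr)
    # and (NearFieldRelErr)
    return max(abs(a - b) / abs(a) for a, b in zip(ref, approx))
```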
The numerical results presented in what follows were obtained using a
single Intel Xeon Platinum 8276 2.20 GHz computer using 28
cores. Solutions to the complex-coefficient linear systems that arise
from discretizations of the boundary integral
equation~\eqref{eq:CombLayIE} were obtained with a complex-arithmetic
GMRES iterative solver~\cite{Saad1986}.
Following~\cite{BrunoKunyansky2001JCP}, we set the combined-layer
equation~\eqref{eq:CombLayIE} coupling parameter
$\gamma = \max\{ 3, A/\lambda \}$, where $A$ is the diameter of the
scatterer; computational results indicate that, to reach a given
residual tolerance, this value reduces the number of GMRES iterations
by a factor of $5-10$ compared with $\gamma = k$. Plots were
generated using the visualization software VisIt~\cite{HPV:VisIt}.
\subsection{Scattering by a sphere}
We consider plane wave scattering by a sphere of various acoustical
sizes. For a sound-soft acoustic sphere, the well-known closed-form
far field expression is used to compute relative
errors~\cite{BowmanEtAl1988}. Table~\ref{table:AcouSphere} summarizes
the accuracy and efficiency of the IFGF-accelerated solver and
nonaccelerated solver for a sphere of diameter ranging from $4$ to
$128$ wavelengths. For each problem, the number of IFGF levels is
selected so that the finest-level IFGF box side length is
approximately $0.5\lambda$. All computations are performed using a
GMRES residual tolerance set to $10^{-4}$. We report the total number
of unknowns, the size of the sphere in wavelengths, the time required
to compute one GMRES iteration as well as the total number of
iterations required to achieve the prescribed residual, and the far
field relative error. Far field relative errors are computed over the
spherical grid~\eqref{eq:SphereGrid} with
$(N_{\phi},N_{\theta}) = (200,200)$.
\begin{table}[htb!]
\caption{Comparison of IFGF-accelerated solver and nonaccelerated solver for
acoustic scattering by a sphere of acoustical sizes ranging from $4$ to $128$
wavelengths. The table summarizes the total number of surface unknowns, sphere
size in wavelengths, maximum number of IFGF levels, time required to compute one
GMRES iteration, total number of iterations, and far field relative error
$\varepsilon_{far}$. In all cases the GMRES residual tolerance was set to $10^{-4}$.}
\centering
\begin{tabular}{lcccccccc}\toprule
& & \multicolumn{3}{c}{Nonaccelerated} & \multicolumn{4}{c}{Accelerated}
\\ \cmidrule(lr){3-5} \cmidrule(lr){6-9}
Unknowns &Size &Time (1 iter.) &Tot. iter. &$\varepsilon_{far}$
&IFGF levels &Time (1 iter.) &Tot. iter. &$\varepsilon_{far}$ \\
\midrule
13,824 &4$\lambda$ &0.5 s &12 &1.1e-4
&4 &0.2 s &12 &1.3e-4 \\
55,296 &8$\lambda$ &7.4 s &14 &8.9e-5
&5 &1.0 s &14 &1.1e-4 \\
221,184 &16$\lambda$ &116.4 s &14 &2.6e-5
&6 &4.6 s &14 &6.3e-5 \\
884,736     &32$\lambda$  &1862.4 s (est.)  &$-$  &$-$
            &7  &19.4 s  &16  &2.9e-5 \\
3,538,994   &64$\lambda$  &8.3 h (est.)  &$-$  &$-$
            &8  &83.1 s  &18  &6.0e-5 \\
14,155,776  &128$\lambda$ &132.8 h (est.)  &$-$  &$-$
            &9  &443.2 s  &21  &3.8e-4 \\
\bottomrule
\end{tabular}
\label{table:AcouSphere}
\end{table}
Table~\ref{table:AcouSphere} shows that the time per iteration
required by the nonaccelerated algorithm grows by a factor of around
$14.8-15.7$ as the number of points per dimension in each surface
patch is doubled (so that the overall number of unknowns is
quadrupled), which is consistent with the expected quadratic
complexity of the algorithm. For the IFGF-based solver, on the other
hand, the computing costs scale like $\mathcal{O}(N\log N)$, where $N$ is the
total number of discretization points. The reduced complexity of the
IFGF-based algorithm has a significant impact on computing times. At
$128$ wavelengths, the nonaccelerated solver takes more than $1000$
times longer than the accelerated method for each GMRES iteration; for
larger problems, the difference in compute times grows as expected
from the complexity estimates for the two methods. Note that the total
number of GMRES iterations necessary to satisfy the residual tolerance
is the same for both the nonaccelerated and accelerated solvers.
Additionally, the errors for both algorithms are comparable: the
nonaccelerated solver yields solutions for the $4,\,8$ and $16$
wavelength problems with an average relative error of
$1.1 \cdot 10^{-4}$, while errors obtained with the accelerated method
average $1.3 \cdot 10^{-4}$ across the entire $4$ to $128$ wavelength
range.
\subsection{Scattering by a submarine geometry}
In this section we present acoustic scattering simulations for a
realistic submarine configuration of up to $80$ wavelengths in
acoustical size. Owing to the importance of such configurations in
detection and tracking applications, methods for efficient and accurate
scattering simulations are the subject of ongoing
research~\cite{NellGilroy2003,SchneiderEtAl2003,MerzEtAl2009,Karasalo2012,WeiEtAl2016,TestaGreco2018,VenaasKvamsdal2020}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{SubmarineMesh}
\caption{Submarine model and surface mesh with a total of $164,160$ points.
The submarine hull is aligned with the $z$-axis and the sail is parallel
to the $+y$-axis; the front of the vessel points in the $+z$-direction.}
\label{fig:SubmarineModel}
\end{figure}
The submarine model used in subsequent simulations, which is comprised of the
main hull, sail, diving planes, rudders, and a five-blade propeller, is
depicted in Figure~\ref{fig:SubmarineModel}. The complete submarine geometry
is contained in the bounding region $[-3.2,3.2] \times [-1.9,2.8] \times
[-19.2,10.9]$. Figures~\ref{fig:SubmarineModel}(b)
and~\ref{fig:SubmarineModel}(c) show a surface mesh of $4,560$ patches, each of
which is represented by $6\times 6$ points.
We consider plane wave scattering for two cases: a) head-on incidence and b)
oblique incidence. The incident field is a plane wave $u^i$ that travels along
the wave direction $\hat{k}$ and is given by
\begin{equation}\label{eq:PlaneWave}
u^i(x) = e^{ \text{i} k \hat{k} \cdot x}, \qquad
\hat{k} = \begin{pmatrix}
\cos \theta \sin \phi \\
\sin \theta \sin \phi \\
\cos \phi
\end{pmatrix},
\end{equation}
where the position vector $x = (x_1,x_2,x_3)$, $k>0$ is a given wavenumber, and
$(\theta,\phi) \in [0,2\pi) \times [0,\pi]$. Since the bow of the submarine
points in the $+z$-direction, ``head-on'' incidence corresponds to
$(\theta,\phi) = (0,\pi)$ in~\eqref{eq:PlaneWave}. For the oblique incidence
case we set $(\theta,\phi) = (0,5\pi/4)$.
To verify the accuracy of the IFGF-accelerated solver in the present
case, we conducted convergence studies for the submarine structure at
$10\lambda,\,20\lambda$, and $40\lambda$ in acoustical size (measured
from the bow to the propeller cap). In all cases the number of IFGF
levels was chosen so that the side length of the smallest,
finest-level, boxes is around $0.8\lambda$. The GMRES residual
tolerance was set to $10^{-3}$ in all cases. We start with a
$10\lambda$ vessel whose geometry is represented by $1,140$ surface
patches, each of which has $6 \times 6$ points. As the size of the
problem is doubled, the geometry is partitioned from the previous size
so that every patch is split into four subpatches while keeping the
same number of points per patch. Thus, for example, the $20\lambda$
problem uses four times as many surface points as the $10\lambda$
case. This strategy is admittedly suboptimal in this case (since the
smaller patches on the propeller, rudders and diving planes, which
already fully resolve the wavelength, do not require additional
partitioning), but it simplifies the code implementation.
Additionally, this distribution of surface points makes suboptimal use
of the present version of the IFGF algorithm. As indicated
in~\cite{BauingerBruno2021}, the IFGF method can be extended to
incorporate a box octree algorithm that adaptively partitions a
geometry until each box contains a (small) prescribed number of
points, thus eliminating this difficulty. While such an addition is
left for future work, as demonstrated in Table~\ref{table:Submarine},
even the simple uniform-partition IFGF algorithm we use in this
contribution is sufficient to simulate scattering by a realistic
submarine geometry for up to $80$ wavelengths in size with several
digits of accuracy and using only modest computational resources. For
example, the $656,640$ unknowns, $40\lambda$ run for head-on
incidence, required a computing time of $313$ seconds per iteration
and a total of $78$ iterations. The fully adaptive version of the IFGF
algorithm, which, as mentioned above, is not pursued in this paper,
should yield for the submarine geometry computing times consistent
with those shown in Tables~\ref{table:AcouSphere}
and~\ref{table:Nacelle} for the sphere and nacelle geometries.
\begin{table}[htb!] \caption{Convergence study of IFGF-accelerated
acoustic solver for the submarine geometry, with acoustical sizes
ranging from $10$ to $40$ wavelengths. In all cases the residual
tolerance was set to $10^{-3}$.}
\centering
\begin{tabular}{lcccc}\toprule
& & & \multicolumn{1}{c}{Front Incidence} & \multicolumn{1}{c}{Oblique Incidence}
\\ \cmidrule(lr){4-4} \cmidrule(lr){5-5}
Unknowns &Size &IFGF levels &$\varepsilon_{near}$ &$\varepsilon_{near}$ \\
\midrule
41,040 &10$\lambda$ &5 &2.4e-4 &6.8e-4 \\
164,160 &20$\lambda$ &6 &1.8e-4 &6.0e-4 \\
656,640 &40$\lambda$ &7 &2.1e-4 &1.7e-4 \\
2,626,560 &80$\lambda$ &8 &2.1e-4 (est.) &4.8e-4 (est.) \\
\bottomrule
\end{tabular}
\label{table:Submarine}
\end{table}
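As a quick consistency check (illustrative only; the values are those of Table~\ref{table:Submarine}), the uniform refinement strategy described above quadruples the number of unknowns at each doubling of the acoustical size:

```python
# Unknown counts from Table "Submarine"; each refinement splits every patch
# into four subpatches with a fixed number of points per patch, so the
# number of unknowns quadruples at each doubling of the acoustical size.
unknowns = [41040, 164160, 656640, 2626560]
ratios = [b // a for a, b in zip(unknowns, unknowns[1:])]
print(ratios)  # [4, 4, 4]
```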
Near field relative errors for front (head-on) and oblique plane wave incidence
are shown in Table~\ref{table:Submarine}. For each problem, we estimate
$\varepsilon_{near}$ over $\mathbb{P}_N^{xy}(z_0)$, where $[x_{min},x_{max}] \times
[y_{min},y_{max}] = [-12,12]^2,\, z_0 = -25$ and $N_x = N_y = 260$,
using~\eqref{eq:NearFieldRelErr} with a reference solution obtained with the
same number of surface patches as the target discretization but using $8 \times
8$ points per patch and a residual tolerance of $10^{-5}$. (Thus, the reference
solution uses nearly twice as many discretization points and it satisfies a
more stringent convergence condition.) The numerical results indicate that the
solution accuracy is consistent for both front and oblique incidence and for
all acoustical sizes considered. For front and oblique incidence, the relative
errors for the $10\lambda,\, 20\lambda$, and $40\lambda$ problems achieve an
average accuracy of $2.1\cdot 10^{-4}$ and $4.8 \cdot 10^{-4}$, respectively,
and we use these values to estimate the expected relative errors in the
$80$-wavelength case.
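The averages quoted above follow directly from the tabulated per-size errors (a trivial check, included for clarity):

```python
# Near-field relative errors from Table "Submarine" for the 10-, 20-, and
# 40-wavelength problems; their averages give the error estimates used for
# the 80-wavelength case.
front = [2.4e-4, 1.8e-4, 2.1e-4]      # head-on incidence
oblique = [6.8e-4, 6.0e-4, 1.7e-4]    # oblique incidence
avg_front = sum(front) / len(front)
avg_oblique = sum(oblique) / len(oblique)
print(f"{avg_front:.1e} {avg_oblique:.1e}")  # 2.1e-04 4.8e-04
```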
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{SubmarineNearField80Lam0AngPlWav}
\caption{Total field magnitude $|u(x)| = |u^i(x) + u^s(x)|$ pseudocolor
plots for an $80$-wavelength submarine. The field is plotted over a uniform
grid of $1040\times 1760$ points for $(x,z) \in [-12,12]\times[-25,15]$.
In this case the incident plane wave impinges on the vessel head-on, which
corresponds to the wave direction $\hat{k}$ in~\eqref{eq:PlaneWave}
with $(\theta,\phi) = (0,\pi)$.} \label{fig:SubNF0Ang}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{SubmarineNearField80Lam45AngPlWav}
\caption{Total field magnitude $|u(x)| = |u^i(x) + u^s(x)|$ pseudocolor plots
for an $80$-wavelength submarine. The field is plotted over a uniform grid of
$1040\times 1760$ points for $(x,z) \in [-12,12]\times[-25,15]$. In this case
the incident plane wave impinges on the vessel at an oblique angle, which
corresponds to the wave direction $\hat{k}$ in~\eqref{eq:PlaneWave} with
$(\theta,\phi) = (0,5\pi/4)$.}
\label{fig:SubNF45Ang}
\end{figure}
In Figure~\ref{fig:SubNF0Ang} we present pseudocolor near field plots of the
total field magnitude $|u(x)| = |u^i(x) + u^s(x)|$ for front plane wave
incidence for an $80$-wavelength submarine. The field is plotted over a uniform
$1040\times 1760$ point planar grid $\mathbb{P}_N^{xz}(y_0)$ for $(x,z) \in
[-12,12]\times[-25,15]$ and $y_0 = 0$. The incident plane wave impinges on the
vessel head-on and we see in Figures~\ref{fig:SubNF0Ang}(a)
and~\ref{fig:SubNF0Ang}(b) that the strongest interaction occurs around the bow
and diving planes (also known as hydroplanes) of the ship. Shadow regions are
visible immediately behind the hydroplanes as well as along the hull,
particularly in the aft of the ship where the body tapers.
Figure~\ref{fig:SubNF0Ang}(c) shows that the wider sections of the ship
obstruct the propeller from most incoming waves and, as a consequence, there is
minimal interaction in this region.
Figure~\ref{fig:SubNF45Ang} shows near field pseudocolor plots for the same
$80$-wavelength submarine but this time for oblique plane wave incidence. The
total field magnitude is plotted over the uniform grid $\mathbb{P}_N^{xz}(y_0)$
described in the previous paragraph. In this case the wave interaction is
markedly different. We see the expected shadow region on the opposite side of
the incoming wave but there is now clear evidence of wave interaction between
the hull and diving planes as well as around the rudders and propeller. In
addition to multiple scattering, the close-up views of
Figures~\ref{fig:SubNF45Ang}(b) and~\ref{fig:SubNF45Ang}(c) show the formation
of bright spots near the junction of the left hydroplane and hull and in the
vicinity of the propeller.
\subsection{Scattering by an aircraft nacelle}
The simulation of aircraft engine noise has been the subject of intense
research for the past several decades due to its importance in civil aviation
applications~\cite{Nayfeh1975,Eversman1991,Casalino2008,Kewin2013,PyatuninEtAl2016}.
In this section we present simulations of sound propagation in and around the
turbofan engine nacelle model shown in Figure~\ref{fig:NacelleAndMesh}.
According to the nacelle wall liner case study~\cite{MSCSoft2021},
under typical operating conditions, engine nacelle noise occurs in the
$125$--$5650$~Hz frequency range. For a typical airliner engine that is around
5 m long, these frequencies correspond to acoustical sizes between $2$ and $82$
wavelengths.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{NacelleModelAnd20LamMeshInset}
\caption{(a) Aircraft engine nacelle model, (b) translucent view of the
geometry where the center shaft is visible, and (c) $8,576$-surface-patch
discretization with $6\times6$ points per patch (for clarity, only every
other mesh point is plotted).}
\label{fig:NacelleAndMesh}
\end{figure}
The engine nacelle geometry used in the simulations that follow is depicted in
Figures~\ref{fig:NacelleAndMesh}(a) and~\ref{fig:NacelleAndMesh}(b); it is
comprised of an outer housing and a center shaft. The entire two-piece nacelle
structure is contained inside the bounding region $[-1.5,1.5] \times [-1.5,1.5]
\times [-3.27,3.27]$. The center shaft is aligned with the $z$-axis, with the
tip of the shaft pointing in the positive $z$-direction. A discretization
with $8,576$ surface patches and $6 \times 6$ points per patch is shown in
Figure~\ref{fig:NacelleAndMesh}(c); for future reference, note that the inset
image shows that the mesh is not rotationally symmetric near the tip of the
shaft.
Two types of incident fields are used in simulations: a) a plane wave that
travels towards the $-z$-axis, so that it impinges on the nacelle head-on and
b) a set of eight point sources placed inside the housing around the center
shaft. As in the submarine example, the plane wave incident field is given
by~\eqref{eq:PlaneWave} with $(\theta,\phi) = (0,\pi)$. The incident field b),
on the other hand, serves as a simple model for fan noise generation inside the
nacelle and is given by
\begin{equation}\label{eq:PointSource}
u^i(x) = \sum_{j=1}^{8} \frac{e^{\text{i} k |x-x^j|}}{|x-x^j|},
\quad \text{with point source locations} \quad
x^j = (x^j_1, x^j_2, x^j_3) = (\cos \alpha_j, \sin \alpha_j, 2),
\end{equation}
where $\alpha_j = (j-1) \Delta \alpha + \pi / 8$, for $j = 1,\dotsc, 8$, and
$\Delta \alpha = \pi / 4$.
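A direct evaluation of the incident field~\eqref{eq:PointSource} requires only a few lines of code; the sketch below (not the solver's implementation, and with an arbitrary wavenumber in the usage example) may clarify the source placement:

```python
import numpy as np

# Incident field of Eq. (PointSource): eight monopole sources on a unit
# circle at height z = 2 around the center shaft, with angles
# alpha_j = (j - 1) * pi/4 + pi/8.
def incident_field(x, k):
    alphas = np.arange(8) * (np.pi / 4) + np.pi / 8
    sources = np.stack([np.cos(alphas), np.sin(alphas),
                        np.full(8, 2.0)], axis=1)
    r = np.linalg.norm(x - sources, axis=1)     # distances |x - x^j|
    return np.sum(np.exp(1j * k * r) / r)

# Usage: field at the origin (all eight sources are equidistant there)
u = incident_field(np.array([0.0, 0.0, 0.0]), k=2.0 * np.pi)
```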
\begin{table}[htb!]
\caption{Convergence study of IFGF-accelerated acoustic solver for a nacelle
geometry of $10.2,\, 20.5$ and $40.9$ wavelengths for plane wave and point
source scattering. (The table also includes data for an $81.8\lambda$ nacelle
but in this case the near field relative error is estimated using the
average relative errors of the three previous problems.) The table summarizes
the total number of surface unknowns, nacelle size in wavelengths, maximum
number of IFGF levels, time required to compute one GMRES iteration, total
number of iterations, and near field relative error $\varepsilon_{near}$. In all cases
the GMRES residual tolerance was set to $10^{-3}$.}
\centering
\begin{tabular}{lccccccc}\toprule
& & & & \multicolumn{2}{c}{Plane Wave Incidence} & \multicolumn{2}{c}{Point Source Incidence}
\\ \cmidrule(lr){5-6} \cmidrule(lr){7-8}
Unknowns &Size &IFGF levels &Time (1 iter.)
&Tot. iter. &$\varepsilon_{near}$ &Tot. iter. &$\varepsilon_{near}$ \\
\midrule
77,184 &10.2$\lambda$ &6 &2.0 s
&33 &1.5e-3 &39 &4.1e-3 \\
308,736 &20.5$\lambda$ &7 &8.7 s
&47 &2.2e-3 &59 &4.0e-3 \\
1,234,944 &40.9$\lambda$ &8 &40.4 s
&55 &6.7e-4 &115 &2.4e-3 \\
4,939,776 &81.8$\lambda$ &9 &176.4 s
&65 &1.5e-3 (est.) &219 &3.5e-3 (est.) \\
\bottomrule
\end{tabular}
\label{table:Nacelle}
\end{table}
Table~\ref{table:Nacelle} tabulates the results of a convergence study
for both plane wave and point source incidence for a nacelle of
$10.2,\, 20.5$ and $40.9$ wavelengths in size. We also include results
for an $81.8$-wavelength nacelle. The number of IFGF levels is
selected so that the finest-level IFGF box side length is
approximately $0.6\lambda$ in all cases. All computations were
performed with a GMRES residual tolerance equal to $10^{-3}$. The
total near field magnitude relative error $\varepsilon_{near}$ was estimated
over a near field planar grid $\mathbb{P}_N^{xy}(z_0)$, where
$[x_{min},x_{max}] \times [y_{min},y_{max}] = [-4,4]^2, \,z_0 = -5$
and $N_x = N_y = 400$, by computing~\eqref{eq:NearFieldRelErr} with a
reference solution obtained with the same number of surface patches as
the target discretization but using $8 \times 8$ points per patch and
a residual tolerance of $10^{-5}$. In addition to the near field
relative error, Table~\ref{table:Nacelle} also includes the total
number of unknowns, the size of the nacelle in wavelengths, the time
required to compute one GMRES iteration and the total number of GMRES
iterations required to satisfy the $10^{-3}$ residual tolerance. Thus,
as the problem size increases from $10.2\lambda$ to $20.5\lambda$,
$20.5\lambda$ to $40.9\lambda$, and $40.9\lambda$ to $81.8\lambda$,
and the number of unknowns is quadrupled in each case, the computing
cost per iteration increases by a factor of only $4.4,\,4.6$ and
$4.4$, respectively (which is consistent with an $\mathcal{O}(N\log N)$
complexity), and not the 16-fold cost increase per wavelength doubling
that would result from a nonaccelerated algorithm with quadratic
complexity. This scaling of the IFGF-accelerated combined-layer solver
is consistent with the IFGF method computations presented
in~\cite{BauingerBruno2021}, which did not include singular local
interactions, and suggests that the partitioning and discretization of
the geometry make optimal use of the IFGF algorithm. The results
also indicate that the discretization and the $10^{-3}$ residual tolerance
are sufficient to produce solutions for the $10.2,\,20.5$ and $40.9$
wavelength cases with an average error of $1.5\cdot 10^{-3}$ for plane
wave scattering and $3.5\cdot 10^{-3}$ for point source
scattering. The average relative error values are used to estimate the
accuracy of the $81.8\lambda$ simulation, which also converged to the
same GMRES tolerance as the smaller problems. Note that, as reported
in Table~\ref{table:Nacelle}, in the plane-wave case the number of
iterations required for convergence increases by only $8$--$14$
iterations per doubling of the acoustical size.
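The $\mathcal{O}(N\log N)$ consistency noted above can be checked with a short script (an informal comparison using the values of Table~\ref{table:Nacelle}, not part of the solver):

```python
import math

# Per-iteration times and unknown counts from Table "Nacelle". If the cost
# scales as N log N, quadrupling N should increase the time per iteration
# by a factor 4 * log(4 N) / log(N), roughly 4.4 for these problem sizes.
unknowns = [77184, 308736, 1234944, 4939776]
times = [2.0, 8.7, 40.4, 176.4]               # seconds per GMRES iteration

observed = [t1 / t0 for t0, t1 in zip(times, times[1:])]
predicted = [(n1 * math.log(n1)) / (n0 * math.log(n0))
             for n0, n1 in zip(unknowns, unknowns[1:])]
for o, p in zip(observed, predicted):
    print(f"observed {o:.2f}  predicted {p:.2f}")
```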
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{NacelleNearField82LamPlaneWave}
\caption{Total field magnitude $|u(x)| = |u^i(x) + u^s(x)|$
pseudocolor plots for an $82$-wavelength aircraft nacelle. The
field is plotted over a uniform grid of $2800\times 4000$ points
for $(x,z) \in [-4,4]\times[-5,6]$. The nacelle is aligned with
the $z$-axis and the front points towards the
$+z$-direction. The incident plane wave impinges on the geometry
head-on and travels towards the negative $z$-axis.}
\label{fig:NacNFPlWav}
\end{figure}
The total near field magnitude $|u(x)| = |u^i(x) + u^s(x)|$ for the
$81.8$-wavelength plane wave scattering case is displayed in
Figure~\ref{fig:NacNFPlWav}. The field magnitude is plotted over the
$xz$-planar grid $\mathbb{P}_N^{xz}(y_0)$ (recall the planar grid
definition~\eqref{eq:PlanarGrid}), where
$[x_{min},x_{max}] \times [z_{min},z_{max}] = [-4,4] \times [-5,6],
\,y_0 = 0$, with $N_x = 2800$ and $N_z = 4000$. Along most of the
exterior circumference of the nacelle housing, the total field forms a
relatively uniform stratified pattern. In other regions, intricate
multiple-scattering patterns develop, particularly in the region
around the intake and throughout the inside of the nacelle. For a
closer examination, Figures~\ref{fig:NacNFPlWav}(b)
and~\ref{fig:NacNFPlWav}(c) display top views of the field but with the
scattering surfaces removed. It is evident that the strongest
reflection occurs directly in front of the tip of the nacelle shaft.
Note the symmetry in the detail of the near field shown in
Figure~\ref{fig:NacNFPlWav}(b), which arises despite the lack of
symmetry in the geometry discretization illustrated in the inset of
Figure~\ref{fig:NacelleAndMesh}(c).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{NacelleNearField82LamPointSource}
\caption{Total field magnitude $|u(x)| = |u^i(x) + u^s(x)|$
pseudocolor plots for an $82$-wavelength aircraft nacelle. The
field is plotted over a uniform grid of $2800\times 4000$ points
for $(x,z) \in [-4,4]\times[-5,6]$. The nacelle is aligned with
the $z$-axis and the front points towards the
$+z$-direction. The incident field is given by the
sum~\eqref{eq:PointSource} of eight point sources within the
nacelle around the center shaft, four of which are shown as
small red spheres in panel~(b).}
\label{fig:NacNFPtSrc}
\end{figure}
Near fields for the eight-point-source, $81.8$-wavelength incident
field are displayed in Figure~\ref{fig:NacNFPtSrc}. The total field
magnitude is plotted over the same $xz$-planar grid $\mathbb{P}_N^{xz}(y_0)$
used for the plane wave scattering case. In
Figure~\ref{fig:NacNFPtSrc}(a), the point-source generated fields can
be seen to scatter and exit the front inlet and rear exhaust. The
close-up view in Figure~\ref{fig:NacNFPtSrc}(b) highlights the
location of four of the eight point sources, drawn as red spheres for
emphasis; the remaining four sources are obstructed from view by the
near field plane. In Figures~\ref{fig:NacNFPtSrc}(c)
and~\ref{fig:NacNFPtSrc}(d) the geometry is removed so that we can examine
the field interaction within the scatterer in greater detail. Both
images exhibit complex multiple scattering and a high degree of
symmetry throughout the interior of the structure and in the regions
outside that surround the nacelle assembly. In contrast to the plane
wave scattering case, where the incident wave travels mostly parallel
to the housing and shaft, placing sources between the shaft and
nacelle walls guarantees that most waves scatter multiple times before
exiting the geometry.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{NacelleFarFieldAll}
\caption{Far-field magnitudes for the nacelle geometry under plane
wave and eight point-source incident fields. Panels (a) and~(b)
present the far field for a $40.9\lambda$ plane wave and panels (c)
and~(d) present the far field for an $81.8$-wavelength plane
wave. Panels~(e)-(g) display far fields for the eight-point
source incident field defined in~\eqref{eq:PointSource}, for
$40.9\lambda$ in panels~(e) and~(f) and for $81.8\lambda$ in
panel~(g).}
\label{fig:NacelleFF}
\end{figure}
The far field magnitudes are shown in Figure~\ref{fig:NacelleFF} for
both plane wave and point source incident
fields. Figures~\ref{fig:NacelleFF}(a) and~\ref{fig:NacelleFF}(b)
present the far field for a $40.9\lambda$ plane wave, while
Figures~\ref{fig:NacelleFF}(c) and~\ref{fig:NacelleFF}(d) present the
far field for an $81.8$-wavelength plane wave.
Using~\eqref{eq:FarField}, the far field $\tilde{u}^{\infty}$ is
computed over $\mathbb{S}^2_N$ (recall the spherical grid
definition~\eqref{eq:SphereGrid}) with
$(N_{\phi},N_{\theta}) = (2000,600)$ in the $40.9\lambda$ case and
$(N_{\phi},N_{\theta}) = (3200,800)$ for the $81.8$ wavelength plane
wave. More points are used at higher frequencies to resolve the far
field lobes that are visible in Figures~\ref{fig:NacelleFF}(b)
and~\ref{fig:NacelleFF}(d). The far field plots for both $40.9\lambda$
and $81.8\lambda$ plane wave scattering once again show that most of
the wave reflection occurs in the region directly in front of the
nacelle intake; note that this reflection intensifies as the
wavelength decreases. The maximum magnitude of the far field increases
by a factor of approximately $1.5$ for the $81.8\lambda$ wave compared
with the $40.9\lambda$ case. Figures~\ref{fig:NacelleFF}(e-g) show
the far field magnitude for the eight-point source incident field
defined in~\eqref{eq:PointSource} at $40.9$ and $81.8$
wavelengths. Figure~\ref{fig:NacelleFF}(e) displays a side view of the
$40.9\lambda$ far field magnitude $|\tilde{u}^{\infty}|$ including the
nacelle geometry, for reference, with the intake pointing
left. Figures~\ref{fig:NacelleFF}(f) and~\ref{fig:NacelleFF}(g), where
the geometry is not included, present the far field, with the positive
$z$ direction pointing out of the page, for the $40.9\lambda$ and
$81.8\lambda$ cases, respectively.
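For readers implementing a similar far-field evaluation, a uniform spherical sampling grid of the kind used above can be built as follows (a sketch only; the precise grid $\mathbb{S}^2_N$ is given by Eq.~\eqref{eq:SphereGrid}, and a uniform angular spacing is assumed here):

```python
import numpy as np

# Uniform (N_phi x N_theta) grid of unit direction vectors on the sphere,
# suitable for sampling the far field; the exact grid S^2_N used in the
# paper is defined in Eq. (SphereGrid), so this is only an approximation
# of that construction.
def sphere_grid(n_phi, n_theta):
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    theta = np.linspace(0.0, np.pi, n_theta)
    P, T = np.meshgrid(phi, theta, indexing="ij")
    return np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1)

dirs = sphere_grid(2000, 600)   # resolution used for the 40.9-wavelength case
```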
\section{Conclusions}\label{sec:Conclusion}
We presented an accelerated acoustic scattering boundary integral
solver based on a novel Interpolated Factored Green Function method
(IFGF) for the efficient evaluation of regular interactions and a
high-order rectangular-polar quadrature algorithm for local singular
and near-singular operator evaluations. Unlike standard nonaccelerated
methods which, for an $N$-point surface discretization, evaluate the
action of Green function-based integral operators at a complexity of
$\mathcal{O}(N^2)$, the IFGF method performs the same computation at a cost
of $\mathcal{O}(N\log N)$ operations. The IFGF accelerator relies on a
recursive interpolation scheme that enables the fast evaluation of
slowly-varying factored Green functions for groups of sources. The
IFGF-based acceleration approach does not rely on Fast Fourier
Transforms and this, in turn, gives rise to efficient shared- and
distributed-memory parallelization of the underlying algorithms. The
rectangular-polar local interaction algorithm, on the other hand,
evaluates integral operators at singular and near-singular target
points accurately and independently of the IFGF accelerator.
Although in this work we considered only a shared-memory implementation of
acoustic IFGF-based integral equation solvers, the numerical examples presented
show that even in this case our methods enable the efficient solution of
problems with millions of unknowns over complex geometries using only a single
computer. In addition to simulations of acoustic scattering by a sphere of up
to $128$ wavelengths in diameter, we demonstrated the versatility of our
numerical algorithms with computational results presented for realistic
geometries of relevance in acoustic applications, including acoustic scattering
by a submarine geometry and a turbofan nacelle.
\section*{CRediT authorship contribution statement}
\textbf{Oscar P. Bruno:} Conceptualization, Methodology, Validation,
Investigation, Resources, Writing, Supervision, Funding acquisition.
\textbf{Edwin Jimenez:} Conceptualization, Methodology, Software, Validation,
Investigation, Writing, Visualization. \textbf{Christoph Bauinger:}
Conceptualization, Methodology, Software, Validation, Investigation, Writing,
Visualization.
\section*{Declaration of competing interest}
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
\section*{Acknowledgments}
This work was supported by NSF, DARPA and AFOSR through contracts
DMS-2109831, HR00111720035, and FA9550-21-1-0373, and by the NSSEFF
Vannevar Bush Fellowship under contract number N00014-16-1-2808.
\bibliographystyle{amsplain}
\section{Introduction}
During the last few years, an intensive femtoscopy study of proton-proton collisions at the LHC has been provided by the CMS \cite{CMS}, ATLAS \cite{ATLAS}, ALICE \cite{ALICE}, and LHCb \cite{LHCb} Collaborations. Some interesting results, such as the saturation of the femtoscopy scales for increasing particle multiplicities, peculiarities of the intercept behavior for the correlation function, and anticorrelations of identical pions were observed. A decrease of the interferometry radii with an increase of pair transverse momentum in $p+p$ collisions was found if a specific selection of events (e.g., according to sphericity criteria \cite{ALICE}) is not performed. One interpretation of the radii behavior is the hydrodynamization of the systems created in very high-energy $p+p$ events with large multiplicities \cite{CMS}. In this way a successful description of the femtodata on $p+p$ collisions at $\sqrt{s} = 7$ TeV in the hydrokinetic model (HKM) has been reached in Ref. \cite{sinPLB}. This is one of the points that allows the CMS Collaboration to interpret the obtained results \cite{CMS} for $\sqrt{s} = 13$ TeV as a consequence of hydrodynamic expansion of the thermal systems formed in $p+p$ collisions at LHC energies.
At the same time, even if one admits the hydrodynamic scenarios, the description of the spectra and correlations in $p+p$ collisions requires an accounting of additional principal aspects compared to the case of $A+A$ collisions \cite{sinPLB}. A description of the latter needs neither an uncertainty principle explicitly, nor a hypothesis about the presence of a Bose-Einstein condensate (as for the latter, see, e.g., Refs.~\cite{Rusk, Wong, Flor}), nor any other ``nontrivial'' physics. The particle yields and their ratios, hadron and photon spectra, anisotropic flows $v_n$, quantum statistical correlation functions that bring information about the chaoticity parameter and interferometry radii, and other observables in $A+A$ collisions are quite successfully described at the top RHIC energy and all the available LHC energies on the basis of relativistic viscous hydrodynamics, in particular, within the integrated hydrokinetic model (iHKM), see Refs. \cite{13,14,15,16,17,18,19}. The reason for the success of standard hydrodynamic and kinetic methods is that at the active stage of spectra formation in $A+A$ collisions, the thermal/effective particle wavelengths are much smaller than the sizes of the system, or, more precisely, than the corresponding homogeneity lengths \cite{hl1,hl2}.
One of the peculiarities of the correlation femtoscopy for $p+p$ collisions is the smallness of homogeneity lengths in the strongly interacting system created in these processes: their typical effective sizes are about 1~fm, which is comparable with the mean wavelengths of emitted particles. As was considered in Ref. \cite{sinSmall}, the standard method of independent sources \cite{LLP} is violated because of the uncertainty principle: one cannot consider the emission of the particles from different parts of a small system as independent if the particle wave packets (or the regions associated with the effective wavelengths of the quanta) are essentially overlapping. As a result, in such cases the visible interferometry scale is reduced as compared to the geometrical system's size, and the correlation function is suppressed: its intercept decreases \cite{sinSmall}. It is worth noting that the approach to the problem of correlation femtoscopy for small systems developed in Ref.~\cite{sinSmall} (the approach which brings a good description of the 7~TeV $p+p$ data~\cite{sinPLB}) deals, however, with events having small and fixed multiplicity, and does not use the hypothesis of thermalization.
In this paper, we present results for inclusive correlation femtoscopy in an analytically solvable model of small thermal quantum systems. These findings could be applied to correlation measurements of the homogeneity lengths \cite{hl1,hl2} in $p+p$ collisions at large mean multiplicities in a way similar to what is used in Ref. \cite{sinPLB}.
\section{Statement of the problem and basic equations}
The main goal of the paper is to investigate the features of the inclusive spectra and correlations, which appear due to the smallness of considered quantum systems. For this purpose we apply the method of a local-equilibrium statistical operator~\cite{Zubarev}, which is a tool to obtain the density matrix $\hat{\rho}$ on the freeze-out hypersurface using the principle of maximal entropy $S(\sigma)$. Then the density matrix is defined by
\begin{equation}\label{rho0}
\hat{\rho}=\frac{1}{Z}e^{-S_{max}(\sigma)},
\end{equation}
where $S_{max}(\sigma)$ is a maximum of entropy on the hypersurface $\sigma$ (with timelike normal vector $n_{\mu}$) under conditions fixed by the local distributions of energy, momentum, and charge density (see Ref.~\cite{2} for details). These constraints must be taken into account, for example, by the method of Lagrange multipliers. For simplicity, we consider a real free scalar field in a~$(d+1)$-dimensional space-time which is associated with the stress–energy tensor $\hat{T}^{\mu \nu}(x)=\partial^{\mu}\hat{\phi}\partial^{\nu}\hat{\phi}-\frac{1}{2}g^{\mu\nu}\left( \partial^{\rho}\hat{\phi}\partial_{\rho}\hat{\phi} - m^2 \hat{\phi}^2 \right) $ and the current of particle number density $\hat{J}^{\mu}(x)=-i\hat{\phi}^{+}(x) \overset{\longleftrightarrow}{\partial^{\mu}} \hat{\phi}^{-}(x)$, where $\hat{\phi}^{\pm}(x)$ are the positive- and negative-frequency parts of the field, which are defined as follows:
\begin{equation}\label{scfield}
\hat{\phi}(x)=\hat{\phi}^{+}(x)+\hat{\phi}^{-}(x)=\frac{1}{(2\pi)^{d/2}}\int\frac{d^dk}{\sqrt{2k^{0}} } \left( a^{\dagger}_{k}e^{ikx} + a^{}_{k}e^{-ikx}\right).
\end{equation}
Then the statistical operator [Eq. (\ref{rho0})] takes the form~\cite{2, Zubarev, Weert}:
\begin{equation}\label{rho}
\hat{\rho}=\frac{1}{Z}e^{ -\int d\sigma_{\nu}(x)\beta(x) n_{\mu}(x)\hat{T}^{\mu\nu}(x)+\int d\sigma_{\nu}(x)\mu(x)\beta(x) \hat{J}^{\nu}(x) },
\end{equation}
where $\beta(x)=\frac{1}{T(x)}$ and $\mu(x)$ are Lagrange multipliers, corresponding to the inverse temperature and the chemical potential, respectively, and $Z$ is a corresponding partition function such that $Tr[\hat{\rho}]=1$. The creation and annihilation operators obey the commutation relations:
\begin{equation}\label{commutation}
[a^{}_{k_1},a^{\dagger}_{k_2}]=\delta^d(\vec{k}_1-\vec{k}_2), \qquad [a^{\dagger}_{k_1},a^{\dagger}_{k_2}]=[a^{}_{k_1},a^{}_{k_2}]=0.
\end{equation}
Further, we consider an exactly solvable model without internal flows on the hypersurface $\sigma_{\mu}$, with a uniform temperature distribution $T(x)=T$ at the moment of time $t=0$. The normal vector to $\sigma$ is $n_{\mu}=(1,\vec{0})$, so $d\sigma_{\mu}=n_{\mu}d^dx$. Thus, using Eqs. (\ref{rho}) and (\ref{scfield}), we obtain:
\begin{equation}\label{rho2}
\hat{\rho}=\frac{1}{Z}\exp \left\lbrace -\beta \int d^{d}p p^{0}a^{\dagger}_{p}a^{}_{p}+ \frac{\beta}{(2\pi)^d}\int d^{d }x\mu(x) \frac{d^dk^{}}{\sqrt{2k^{0}}} \frac{d^dp^{}}{\sqrt{2p^{0}}} (k^0+p^{0})e^{-i(\vec{k}-\vec{p})\vec{x}} a^{\dagger}_{p} a^{}_{k} \right\rbrace.
\end{equation}
In the nonrelativistic limit, the energy and chemical potential can be decomposed as $p^0=m+\frac{\mathbf{p}^2}{2m}$, $\mu(x)=m+\mu_0+\mu^{'}(x)$ (restrictions on the chemical potential value will be discussed later). It is easy to see that the terms containing the mass $m$ cancel in the nonrelativistic limit of Eq.~(\ref{rho2}). For simplicity, we take the chemical potential in the parabolic form $\mu^{'}(x)=-\sum_{i=1}^{d}\frac{{x_i}^2}{2\beta R_{i}^2}$.
At this point, we are obliged to mention the paper \cite{Wong} which, unfortunately, we initially missed while working on the manuscript. In that article, the authors consider a system of bosons in a self-consistent field of an oscillatory type. Then a ``hybrid'' model is constructed, where the lowest energy level is occupied by a coherent condensate with a fixed number of particles, while the distribution of particles over the remaining levels obeys the condition of the grand canonical ensemble. Despite the similarity in mathematical formalism, such a formulation of the problem and the solution method are different from our approach of the quasiequilibrium statistical operator corresponding to the entropy maximum under given conditions (physical density distributions). In such a case the thermodynamic Wick theorem takes place in the system, and the chaoticity parameter is unity, which excludes a coherent condensate. The introduction of such a condensate into consideration is a specific separate problem, which we will discuss in this article later. Further, where appropriate, we will compare the results in both approaches.
Following Gaudin's idea~\cite{Gaudin}, modified for the case of local-equilibrium systems \cite{1}, we introduce new operators which depend on the dimensionless parameter $\alpha$:
\begin{equation}\label{rhoalpha}
\hat{\rho}(\alpha)=\frac{1}{Z}e^{-\alpha \beta \hat{A}}, \qquad a^{\dagger}_{k}(\alpha)=\hat{\rho}(\alpha)a^{\dagger}_{k}\hat{\rho}(\alpha)^{-1}.
\end{equation}
The operator $\hat{A}$ here is defined in the following way:
\begin{equation*}
\hat{\rho}=\frac{1}{Z}e^{-\beta \hat{A}},
\end{equation*}
\begin{equation}\label{A}
\hat{A}=\int d^{d}p \sum_{i=1}^{d}\frac{p_{i}^2}{2m}a^{\dagger}_{p}a^{}_{p}+ \frac{1}{(2\pi)^d}\int d^{d }x\left\lbrace -\mu_0+\sum_{i=1}^{d}\frac{x_{i}^2}{2\beta R_i^2} \right\rbrace d^dk d^dp e^{-i(\vec{k}-\vec{p})\vec{x}} a^{\dagger}_{p} a^{}_{k}
\end{equation}
Using the new operators in Eq.~(\ref{rhoalpha}), an inclusive spectrum can be calculated~\cite{Gyulassy}:
\begin{equation}\label{1inclusivespectra}
n(p)=p^{0}\frac{d^dN}{dp^d}=Tr[\hat{\rho} a^{\dagger}_{p}a^{}_{p}]=\left. Tr[a^{\dagger}_{p}(\alpha)\hat{\rho}(\alpha)a_{p}] \right|_{\alpha=1} =Tr[\hat{\rho} a^{}_{p}a^{\dagger}_{p}(\alpha=1)].
\end{equation}
The explicit $\alpha$ dependence of the operator $a^{\dagger}_{p}(\alpha)$ can be obtained from the following equation, which follows from definition (\ref{rhoalpha}):
\begin{equation}\label{eqforaalpha}
\frac{\partial a^{\dagger}_{p}(\alpha)}{\beta\partial \alpha}=[a^{\dagger}_{p}(\alpha),\hat{A}].
\end{equation}
Substituting here the expression (\ref{A}) and taking into account the commutation relations~(\ref{commutation}), we get:
\begin{equation}\label{indifeq1}
-\frac{\partial a^{\dagger}_{k}(\alpha)}{\beta \partial \alpha}=\left( \sum_{i=1}^{d}\frac{k_{i}^2}{2m}-\mu_0\right) a^{\dagger}_{k}(\alpha)+\frac{1}{(2\pi)^d}\int d^d x \int d^d k^{'} \sum_{i=1}^{d}\frac{x_i^2}{2 R_i^2\beta } e^{i(\vec{k}^{'}-\vec{k})\vec{x}}a^{\dagger}_{k^{'}}(\alpha).
\end{equation}
Here it is useful to represent the coordinate factor $x_i^2$ as a second derivative of the exponent with respect to the momentum:
\begin{equation}
x_i^2 e^{i(\vec{k}^{'}-\vec{k})\vec{x}}=-\frac{\partial^2}{\partial k^{'2}_i}e^{i(\vec{k}^{'}-\vec{k})\vec{x}}.
\end{equation}
Then, integrating by parts twice over $k^{'}_i$ allows us to integrate over $x_i$:
\begin{equation}
-\frac{\partial a^{\dagger}_{k}(\alpha)}{\beta \partial \alpha}=\left( \sum_{i=1}^{d}\frac{k_{i}^2}{2m}-\mu_0\right) a^{\dagger}_{k}(\alpha) - \frac{1}{(2\pi)^d} \int d^d k^{'}\int d^d x e^{i(\vec{k}^{'}-\vec{k})\vec{x}} \left(\sum_{i=1}^{d} \frac{1}{2 R_i^2\beta } \frac{\partial^2}{\partial k^{'2}_i} \right) a^{\dagger}_{k^{'}}(\alpha),
\end{equation}
\begin{equation}\label{finaleq}
-\frac{\partial a^{\dagger}_{k}(\alpha)}{\beta \partial \alpha}+\mu_0 a^{\dagger}_{k}(\alpha)= \int d^d k^{'}\delta^d(\vec{k^{'}}-\vec{k}) \sum_{i=1}^{d} \left( - \frac{1}{2 R_i^2\beta } \frac{\partial^2}{\partial k^{'2}_i} +\frac{k_i^2}{2m} \right) a^{\dagger}_{k^{'}}(\alpha).
\end{equation}
This is our basic equation; it allows us to find solutions for the inclusive thermal mean values $\left\langle a^{\dagger}_{k_1} a^{}_{k_2}\right\rangle$ that define single- and double-particle spectra in local-equilibrium systems with a parabolically falling chemical potential.
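The coordinate-to-derivative identity employed in this derivation can be verified symbolically (a quick check with sympy, included for the reader's convenience):

```python
import sympy as sp

# Verify x^2 * exp(i (k' - k) x) = -d^2/dk'^2 exp(i (k' - k) x), the identity
# used above to trade the coordinate factor for momentum derivatives.
x, k, kp = sp.symbols("x k kprime", real=True)
phase = sp.exp(sp.I * (kp - k) * x)
lhs = x**2 * phase
rhs = -sp.diff(phase, kp, 2)
print(sp.simplify(lhs - rhs))  # 0
```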
\section{Analytic solution of the problem}
Since the density matrix $\hat{\rho}$~[Eq. (\ref{rho2})] acting on any state does not change its particle number, the solution of Eq. (\ref{finaleq}) can be expressed as an integral over all creation operators. Moreover, due to its linearity, the general solution can be written as
\begin{equation}\label{formalsolution}
a^{\dagger}_k(\alpha)=\int d^dk^{'} \sum_{n}e^{-\alpha \beta \lambda_n}C_n(\vec{k},\vec{k^{'}})a^{\dagger}_{k^{'}},
\end{equation}
where ${C_n(\vec{k},\vec{k^{'}})}$ are solutions of an oscillator-like equation:
\begin{equation}\label{Cneq}
(\lambda_n+\mu_0)C_n(\vec{k},\vec{k^{'}})=\sum_{i=1}^{d}\left( - \frac{1}{2 R_i^2\beta } \frac{\partial^2}{\partial k^{2}_i} +\frac{k_i^2}{2m} \right) C_n(\vec{k},\vec{k^{'}}).
\end{equation}
Since $a^{\dagger}_k(\alpha=0)=a^{\dagger}_k$, $C_{n}(\vec{k},\vec{k^{'}})$ satisfy the additional condition:
\begin{equation}\label{Cncondition}
\sum_{n}C_{n}(\vec{k},\vec{k^{'}})=\delta^d(\vec{k}-\vec{k^{'}}).
\end{equation}
From Eqs. (\ref{Cneq}) and (\ref{Cncondition}) it follows that $C_n$ can be factorized:
\begin{equation}\label{Ci}
C_n(\vec{k},\vec{k^{'}})=\sum_{\{n_i\}=0}^{\infty}\delta_{ n_1+n_2+..+n_d ,n}\prod_{i=1}^d C_{n_i}(k_i,k^{'}_i),
\end{equation}
where $\delta_{i,j}$ is the Kronecker delta.
Besides this, Eq.~(\ref{Cneq}) allows the separation of variables
$C_{n_i}(k_i,k^{'}_i)=A_{n_i }(k_i^{'})f_{n_i}(k_i)$. In terms of the variable $k_i$, it is then the Schrödinger equation for a harmonic oscillator. Its solutions are represented by the Hermite functions
\begin{equation}\label{psi}
\psi_n(bx)=\frac{1}{\sqrt{2^n n!}}\left( \frac{b}{\sqrt{\pi}}\right)^{1/2}e^{-b^2x^2/2}H_{n}(bx), \qquad H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}\left( e^{-x^2}\right),
\end{equation}
while Eq.~(\ref{Cncondition}) expresses the completeness of this orthonormal basis
\begin{equation}\label{ortonorm}
\sum_{n_i=0}^{\infty} \psi_{n_i}(b_ik_i)\psi_{n_i }(b_ik_i^{'})=\delta(k_i-k_i^{'}).
\end{equation}
Then, Eqs. (\ref{Cncondition}), (\ref{Ci}), (\ref{psi}), (\ref{ortonorm}) yield
\begin{equation}
C_{n_i}(k_i,k^{'}_i)=\psi_{n_i}(b_ik_i)\psi_{n_i }(b_ik_i^{'}),
\end{equation}
where $b_i^2=R_i\Lambda_T=R_i/\sqrt{mT}$, and $\Lambda_T$ is the thermal (Compton) wavelength. The index $n$ in Eq.~(\ref{Cneq}) consists of $d$ components $\left(n=\left\lbrace n_1,n_2,...,n_d \right\rbrace \right)$, each running from $0$ to infinity, and $\lambda_n=-\mu_0+\sum_{i=1}^{d}\lambda_{n_i}$. Altogether, the following notations are used in the paper:
\begin{equation}
\lambda_{n_i}=\omega_i\left(n_i+\frac{1}{2}\right), \qquad \beta\omega_i=\frac{\Lambda_T}{R_i}=\frac{1}{R_i\sqrt{mT}},\qquad b_i^2=\Lambda_TR_i=\frac{R_i}{\sqrt{mT}}.
\end{equation}
Now we are ready to write the solution in Eq.~(\ref{formalsolution}) explicitly:
\begin{equation}\label{a+}
a^{\dagger}_p(\alpha)=e^{\alpha\beta\mu_0}\prod_{i=1}^{d}\left( \int d k_i \sum_{n_i=0}^{\infty} e^{-\alpha \omega_i \left( n_i+\frac{1}{2}\right) } \psi_{n_i}(b_ip_i)\psi_{n_i}(b_ik_i)\right) a^{\dagger}_k,
\end{equation}
which allows us to calculate the inclusive spectrum [Eq.~(\ref{1inclusivespectra})]
\begin{equation}\label{2inclusivespectra}
\left\langle a^{\dagger}_{k_1}a^{}_{k_2}\right\rangle =\left\langle a^{}_{k_2} a^{\dagger}_{k_1}\left( \alpha=1\right) \right\rangle = e^{\beta\mu_0} \prod_{i=1}^d \left( \int dk_i M_i\left( k_{1i},k_i\right) \right) \left\langle a_{k_2}a^{\dagger}_{k}\right\rangle.
\end{equation}
Here we introduce a kernel $M(\vec{k}_{1},\vec{k}_2)$
\begin{equation}\label{Mi}
M_i(k_{1i},k_i)=\sum_{n_i=0}^{\infty} e^{- \beta \omega_i \left( n_i+\frac{1}{2}\right) } \psi_{n_i}(b_ik_{1i})\psi_{n_i}(b_ik_i), \qquad M(\vec{k}_1,\vec{k})=\prod_{i=1}^d M_i(k_{1i},k_i).
\end{equation}
Equation~(\ref{2inclusivespectra}), together with the commutation relations in Eq.~(\ref{commutation}), leads to an integral equation with a separable kernel with respect to the spatial components of momenta:
\begin{equation}\label{forapendix}
\left\langle a^{\dagger}_{k_1}a^{}_{k_2}\right\rangle e^{-\beta\mu_0} = \int d^dk \prod_{i=1}^d M_i\left( k_{1i},k_i\right) \left\langle a^{\dagger}_{k} a_{k_2}\right\rangle + \prod_{i=1}^d M_i\left( k_{1i},k_{2i}\right) .
\end{equation}
The solution of this equation can be found in the form \footnote{The index $s$ should not be interpreted as the number of particles in the system; the decomposition over particle number is derived in Appendix~A.}
\begin{equation}\label{3inclusive}
\left\langle a^{\dagger}_{k_1}a^{}_{k_2}\right\rangle= \sum_{s=1}^{\infty} e^{s \beta \mu_0}\prod_{i=1}^d K_i^{(s)} (k_{1i},k_{2i})=\sum_{s=1}^{\infty} e^{s \beta \mu_0}K^{(s)} (\vec{k}_{1},\vec{k}_{2})
\end{equation}
where $ K_i^{(s)} (k_{1},k_{2})$ satisfies the recurrence relation:
\begin{equation*}
K_i^{(1)}(k_1,k_2)=M_{i}(k_1,k_2),
\end{equation*}
\begin{equation}\label{inteq}
K_i^{(s)} (k_{1i},k_{2i}) =\int dk M_i(k_{1i},k)K_i^{(s-1)}(k,k_{2i}).
\end{equation}
The kernel $M_i(k_{1},k)$ can be calculated from its definition in Eq.~(\ref{Mi}) using Mehler's formula \cite{Mehler,Magnus}:
\begin{equation}
\sum_{s=0}^{\infty}u^s\psi_s(x)\psi_s(y)=\frac{1}{\sqrt{\pi (1-u^2)}}\exp\left( -\frac{1-u}{1+u}\frac{(x+y)^2}{4}-\frac{1+u}{1-u}\frac{(x-y)^2}{4} \right).
\end{equation}
which gives
\begin{equation}
M_i(k_{1i},k_{2i})=\sqrt{\frac{b_i^2}{2\pi \sinh(\beta \omega_i)}} \exp\left[ -(k_{1i}^2+k_{2i}^2)\frac{b_i^2\coth(\beta\omega_i)}{2}+k_{1i}k_{2i}\frac{b_i^2}{\sinh(\beta\omega_i)}\right].
\end{equation}
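As a cross-check, the closed form of $M_i$ can be compared numerically with the truncated sum over oscillator states in Eq.~(\ref{Mi}). The sketch below is a minimal verification in plain Python, in dimensionless units with $\beta=1$; the parameter values are illustrative, and the Hermite functions are generated by their standard three-term recurrence.

```python
import math

def psi(n_max, x):
    """Orthonormal Hermite functions psi_0..psi_{n_max} at the point x."""
    out = [math.pi ** -0.25 * math.exp(-x * x / 2)]
    if n_max >= 1:
        out.append(math.sqrt(2.0) * x * out[0])
    for n in range(2, n_max + 1):
        out.append(math.sqrt(2.0 / n) * x * out[n - 1]
                   - math.sqrt((n - 1) / n) * out[n - 2])
    return out

def M_series(k1, k2, b, bw, n_max=200):
    """Kernel M_i as the truncated series over oscillator states.
    The overall factor b comes from the normalization of psi_n(b*k)."""
    p, q = psi(n_max, b * k1), psi(n_max, b * k2)
    return b * sum(math.exp(-bw * (n + 0.5)) * p[n] * q[n]
                   for n in range(n_max + 1))

def M_closed(k1, k2, b, bw):
    """Closed form of M_i obtained via Mehler's formula."""
    pref = math.sqrt(b * b / (2 * math.pi * math.sinh(bw)))
    return pref * math.exp(-(k1 * k1 + k2 * k2) * b * b / (2 * math.tanh(bw))
                           + k1 * k2 * b * b / math.sinh(bw))
```

For $\beta\omega_i$ of order one the series converges within a few dozen terms, so the two representations agree to machine precision.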
One can verify that the solution of Eq.~(\ref{inteq}) takes the form
\begin{equation}\label{Mi2}
K^{(s)}_i(k_{1i},k_{2i})=\sqrt{\frac{b_i^2}{2\pi \sinh(s\beta \omega_i)}} \exp\left[ -(k_{1i}^2+k_{2i}^2)\frac{b_i^2\coth(s\beta\omega_i)}{2}+k_{1i}k_{2i}\frac{b_i^2}{\sinh(s\beta\omega_i)}\right].
\end{equation}
Equations (\ref{3inclusive}), (\ref{inteq}), and (\ref{Mi}) lead to the expression for the inclusive spectrum, which in the variables $\vec{k}=\frac{\vec{p}_{1}+\vec{p}_{2}}{2}$ and $\vec{q}={\vec{p}_{1}-\vec{p}_{2}}$ takes the form:
\begin{equation}\label{finalspectrum}
\left\langle a^{\dagger}_{p_1} a^{}_{p_2}\right\rangle =\sum_{s=1}^{\infty}e^{\beta \mu_0 s} \prod_{i=1}^{d} \sqrt{\frac{b_i^2}{2\pi \sinh\left( s\beta \omega_i\right) }}e^{-b_i^2 k_i^2 \tanh\left(\frac{s\beta \omega_i}{2} \right) -\frac{b_i^2 q_i^2}{4} \coth\left( \frac{s\beta \omega_i}{2}\right)}.
\end{equation}
This equation corresponds to the one derived in Ref.~\cite{Naraschewski} in the configuration representation for the trapped Bose gas. The average number of particles in the system is obtained after integration over momentum:
\begin{equation}\label{Ntot}
\left< N \right>=\int d^dp \left\langle a^{\dagger}_{p} a^{}_{p}\right\rangle=\sum_{s=1}^{\infty}e^{\beta\mu_0s} \prod_{i=1}^{d}\frac{1}{2\sinh \left( \frac{s\beta \omega_i}{2} \right) }.
\end{equation}
A necessary condition for the convergence of the series is
\begin{equation}
\lim_{s\to\infty}e^{\beta\mu_0 s} \prod_{i=1}^{d}\frac{1}{2\sinh\left(\frac{s\beta\omega_i}{2}\right)}=\lim_{s\to\infty}e^{\beta(\mu_0-\frac{d \bar{\omega}}{2}) s}=0,
\end{equation}
which gives a restriction for the maximum value of the chemical potential $\mu_0$:
\begin{equation}\label{mumax}
\mu_0<\mu_{max}=\frac{d \bar{\omega}}{2}=\frac{\omega_1+\omega_2+...+\omega_d}{2}.
\end{equation}
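Equation~(\ref{Ntot}) can also be verified against the direct Bose-Einstein sum over the oscillator levels $\{n_i\}$, since the $s$-series is the geometric expansion $1/(e^{x}-1)=\sum_{s\geq1}e^{-sx}$ applied level by level. A minimal sketch for the isotropic case $d=3$ in units $\beta=1$, with illustrative values $\omega=0.5$ and $\mu_0=0.2<\mu_{max}=3\omega/2$:

```python
import math

def n_series(mu0, w, s_max=400):
    """<N> from the s-series [Eq. (Ntot)], isotropic d = 3, beta = 1 units."""
    return sum(math.exp(mu0 * s) / (2 * math.sinh(s * w / 2)) ** 3
               for s in range(1, s_max + 1))

def n_levels(mu0, w, n_max=60):
    """Direct Bose-Einstein sum over the oscillator levels {n1, n2, n3}."""
    total = 0.0
    for n1 in range(n_max):
        for n2 in range(n_max):
            for n3 in range(n_max):
                energy = w * (n1 + n2 + n3 + 1.5)  # beta * sum of lambda_{n_i}
                total += 1.0 / (math.exp(energy - mu0) - 1.0)
    return total
```

Both truncations agree to high accuracy as long as $\mu_0<\mu_{max}$; as $\mu_0\to\mu_{max}$ the ground-state term dominates and the series converges only slowly.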
The corresponding Wigner function can be obtained in the following way:
\begin{equation}\label{WFdef}
f_{W}(k,x)=\frac{1}{(2\pi)^d}\int d^d q \left\langle a^{\dagger}_{k+\frac{q}{2}} a^{}_{k-\frac{q}{2}} \right\rangle e^{-i \vec{q}\vec{x}},
\end{equation}
\begin{equation}\label{WF}
f_{W}(k,x)=\frac{1}{(2\pi)^d}\sum_{s=1}^{\infty}e^{\beta \mu_0 s} \prod_{i=1}^{d} \frac{1}{\cosh \left( \frac{s\beta \omega_i}{2} \right)} \exp \left( -\left( b_i^2 k_i^2 + x_i^2/b_i^2 \right) \tanh\left(\frac{s\beta \omega_i}{2} \right)\right) .
\end{equation}
This result, which follows directly from Eq.~(\ref{finalspectrum}), was earlier presented in Ref. \cite{Wong}.
One can expect that in some kind of thermodynamic limit, when the thermal wavelength of the emitting bosons is much smaller than the homogeneity length -- the source size in our case -- a quasiclassical limit for the Wigner function~(\ref{WF}) should be reached. The naive expectation is that such a function takes the form of the Bose-Einstein distribution with the corresponding coordinate-dependent chemical potential. To demonstrate this, one has to consider the thermal (Compton) wavelengths of the boson quanta $\Lambda_{T}$ to be much smaller than the size of the system, $\Lambda_{T }/R=\beta\omega=\frac{1}{R\sqrt{mT}} \ll 1$. For simplicity, we investigate the isotropic case ($R_1=\dotsb=R_d=R$). In this case, a linear approximation to the hyperbolic functions in Eq.~(\ref{WF}) can be applied for $s$ less than some value -- $s_0$, say -- such that $s_0\beta\omega \approx \frac{1}{2}$. Another criterion for reaching the quasiclassical limit is the negativity of the chemical potential, $\mu_0 < 0$. Then, one can get from Eq.~(\ref{WF})
\begin{equation*}
f_{W}(k,x)=\frac{1}{(2\pi)^d}\left[\sum_{s=1}^{s_0} e^{ - \left(\frac{k^2}{2mT} + \frac{x^2}{2R^2}-\frac{\mu_0}{T} \right)s }+ O\left(\frac{\Lambda_T}{R}\right) + \right.
\end{equation*}
\begin{equation}
+\left. \sum_{s=s_0+1}^{\infty}e^{\beta \mu_0 s} \frac{1}{\cosh^d( s \beta \omega /2)} \exp \left( -\left( b^2 k^2 + x^2/b^2 \right) \tanh\left(\frac{s\beta \omega}{2} \right)\right) \right].
\end{equation}
Extending the first sum up to infinity (and subtracting the added terms), we obtain a quasiclassical approximation with corrections that vanish in the thermodynamic limit when $\beta\mu_0 =$ const $< 0$ and $\beta \omega=\frac{\Lambda_T}{R} \to 0$:
\begin{equation}
f_{W,~qc}(k,x)=\frac{1}{(2\pi)^d}\sum_{s=1}^{\infty} \left( e^{\frac{\mu_0}{T}-\frac{k^2}{2m T}-\frac{x^2}{2R^2}} \right) ^s =\frac{1}{(2\pi)^d} \frac{1}{e^{\frac{k^2}{2m T}+\frac{x^2}{2R^2}-\frac{\mu_0}{T}} - 1}.
\end{equation}
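The approach to the quasiclassical limit can be illustrated numerically. In the dimensionless combinations $a=k^2/2mT$, $c=x^2/2R^2$, $\bar\mu=\beta\mu_0$, and $\varepsilon=\beta\omega$ (the particular values below are illustrative), the full sum of Eq.~(\ref{WF}) should approach the Bose-Einstein form when $\varepsilon\ll1$ and $\bar\mu<0$:

```python
import math

def wigner_quantum(a, c, mu, eps, s_max=2000):
    """Wigner function sum [Eq. (WF)], isotropic d = 3, up to the common
    (2*pi)^-3 factor.  a = k^2/(2mT), c = x^2/(2R^2), mu = beta*mu_0,
    eps = beta*omega; note b^2 k^2 = 2a/eps and x^2/b^2 = 2c/eps."""
    total = 0.0
    for s in range(1, s_max + 1):
        t = math.tanh(s * eps / 2)
        total += (math.exp(mu * s) / math.cosh(s * eps / 2) ** 3
                  * math.exp(-2 * (a + c) * t / eps))
    return total

def wigner_bose_einstein(a, c, mu):
    """Quasiclassical limit: Bose-Einstein form with an x-dependent
    effective chemical potential, same (2*pi)^-3 factor dropped."""
    return 1.0 / (math.exp(a + c - mu) - 1.0)
```

At $\varepsilon=10^{-2}$ the two expressions already agree at the permille level.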
\section{Femtoscopy analysis}
\subsection{Basic notations}
To investigate correlations in our model we have to calculate the two-particle inclusive spectra \cite{Gyulassy} on the freeze-out hypersurface:
\begin{equation}\label{twoinclusive}
n(p_1,p_2)=p_1^0p_2^0\frac{d^{6}N}{dp_1^3dp_2^3}=Tr\left[ \hat{\rho} a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right], \qquad n(p)= p^0\frac{d^{3}N}{dp^3}=Tr\left[ \hat{\rho} a^{\dagger}_{p} a^{}_{p} \right]
\end{equation}
\begin{equation}\label{cfdefinition}
C\left(k,q\right)=\frac{n(p_1,p_2)}{n(p_1)n(p_2)}, \qquad k=\frac{p_1+p_2}{2}, \qquad q=p_1-p_2,
\end{equation}
where $C\left(k,q\right)$ is a correlation function (CF), which carries information about the femtoscopy scales of the system $R_{side}, R_{out}, R_{long}$. The extraction of these radii can be performed by the Gaussian fit of the CF in the low-$q$ region~\cite{Makhlin}:
\begin{equation}
C(k,q)=1+\lambda(k)e^{-R^2_{out}q^2_{out}-R^2_{side}q^2_{side}-R^2_{long}q^2_{long}}.
\end{equation}
The value of the CF at zero relative momentum $q=0$ is usually called the intercept, $C(k,0)=1+\lambda(k)$, where $\lambda(k)$ is the chaoticity parameter. In the numerical calculations we consider only isotropic systems ($R_1=R_2=R_3=R$) and find the interferometry radius by fitting the one-dimensional projection of the CF (i.e., $q_1=q$, $q_2=q_3=k_2=k_3=0$) in the range of $q$ limited by the condition $1+\lambda(k)>C(k,q)>1+0.7\lambda(k)$. The obtained interferometry radius will be referred to as $R_{HBT}$.
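This fit procedure can be sketched in a few lines of plain Python. A synthetic Gaussian CF is used as hypothetical input; the points inside the window $1+0.7\lambda<C(k,q)<1+\lambda$ are fitted by log-linear least squares, $\ln[(C-1)/\lambda]=-R^2q^2$, which for an exactly Gaussian CF returns the input radius:

```python
import math

def fit_r_hbt(qs, cf):
    """Gaussian fit of a 1D CF projection in the range 1+0.7*lam < C < 1+lam."""
    lam = cf[0] - 1.0                       # intercept C(k, 0) = 1 + lam
    lo, hi = 1.0 + 0.7 * lam, 1.0 + lam
    pts = [(q, c) for q, c in zip(qs, cf) if q > 0 and lo < c < hi]
    xs = [q * q for q, _ in pts]            # regressor: q^2
    ys = [math.log((c - 1.0) / lam) for _, c in pts]
    n, sx, sy = len(pts), sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.sqrt(-slope)                # slope = -R_HBT^2

# synthetic Gaussian CF with R = 3 fm and lambda = 1 (hypothetical input)
qs = [0.002 * i for i in range(500)]        # q in 1/fm
cf = [1.0 + math.exp(-9.0 * q * q) for q in qs]
```

For a non-Gaussian CF, such as the quantum ones discussed below, the same window selection yields an effective $R_{HBT}$ that depends on the fit range.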
\subsection{Ideal Bose gas femtoscopy}
The thermal average of four operators in a noninteracting boson system in the grand canonical ensemble is reduced to a sum of products of two-operator averages by means of Wick's theorem:
\begin{equation}\label{wicktheorem}
\left< a^{\dagger}_{p_1} a^{\dagger}_{p_2}a^{}_{p_3} a^{}_{p_4}\right> = \left< a^{\dagger}_{p_1} a^{}_{p_3} \right>\left< a^{\dagger}_{p_2} a^{}_{p_4}\right>+\left< a^{\dagger}_{p_1} a^{}_{p_4} \right>\left< a^{\dagger}_{p_2} a^{}_{p_3}\right>
\end{equation}
Moreover, for the grand canonical ensemble of ideal Bose gas in a finite volume, the partition function of the whole ensemble factorizes over all possible energy levels which means that Wick's theorem is applicable even for each of these levels independently. Consequently, to examine the correlation of the system, one needs to calculate only two-operator averages:
\begin{equation}\label{CF}
C(k,q)=\frac{\left< a^{\dagger}_{k_1}a^{}_{k_2} \right> \left< a^{\dagger}_{k_2}a^{}_{k_1} \right> } {\left< a^{\dagger}_{k_1}a^{}_{k_1} \right> \left< a^{\dagger}_{k_2}a^{}_{k_2} \right>}+1.
\end{equation}
As a result, in contrast to Ref.~\cite{Wong}, where the coherent condensate is postulated from the very beginning, in a pure thermal system, which is represented at the freeze-out stage by the local-equilibrium free Bose gas, the chaoticity parameter $\lambda(p)\equiv 1$.
In our approach the ground state of the system is described by the grand canonical ensemble, which allows any number of particles to occupy this state, so the consideration of a coherent condensate (if it appears) should be different from just postulating its existence with a fixed particle number as in Ref.~\cite{Wong}. We will discuss the possibility of a scenario with a coherent condensate in the next subsection.
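For the fully chaotic ideal gas, the CF of Eq.~(\ref{CF}) can be evaluated directly from the iterated kernels $K_i^{(s)}$ of Eq.~(\ref{Mi2}). The sketch below treats an isotropic three-dimensional system in dimensionless units ($\beta=b_i=1$; the values of $\beta\omega$ and $\beta\mu_0$ are illustrative) and reproduces the intercept $C(k,0)=2$, i.e., $\lambda=1$:

```python
import math

def K_s(k1, k2, b, bw, s):
    """Iterated kernel K_i^{(s)} [Eq. (Mi2)] for one momentum component."""
    sh = math.sinh(s * bw)
    return (math.sqrt(b * b / (2 * math.pi * sh))
            * math.exp(-(k1 * k1 + k2 * k2) * b * b / (2 * math.tanh(s * bw))
                       + k1 * k2 * b * b / sh))

def spectrum(p1, p2, b, bw, mu0, s_max=80):
    """<a^+_{p1} a_{p2}> [Eq. (3inclusive)] for an isotropic 3D system."""
    total = 0.0
    for s in range(1, s_max + 1):
        prod = 1.0
        for x, y in zip(p1, p2):
            prod *= K_s(x, y, b, bw, s)
        total += math.exp(mu0 * s) * prod
    return total

def corr(k, q, b, bw, mu0):
    """Ideal-gas CF [Eq. (CF)]; k and q are 3-vectors (tuples)."""
    p1 = tuple(ki + qi / 2 for ki, qi in zip(k, q))
    p2 = tuple(ki - qi / 2 for ki, qi in zip(k, q))
    cross = spectrum(p1, p2, b, bw, mu0)
    return 1.0 + cross * cross / (spectrum(p1, p1, b, bw, mu0)
                                  * spectrum(p2, p2, b, bw, mu0))
```

By the Cauchy-Schwarz inequality, the cross spectrum never exceeds the geometric mean of the diagonal ones, so $1<C(k,q)\leq2$, with the maximum reached only at $q=0$.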
Aiming to show the importance of quantum effects in small systems for the femtoscopy analysis, we compare the correlation functions in the quantum and quasiclassical approaches. For this purpose, pointlike bosons with the masses of $K$ and/or $\pi$ mesons are considered on the freeze-out hypersurface with the temperature $T=T_{f.o.}=155$~MeV ($\sim 10^{12}$~K); then, the thermal wavelengths of quanta are $\Lambda_{T}^{K}=\frac{1}{\sqrt{m_{K} T}}\approx 0.75$ fm for kaons and $\Lambda_{T}^{\pi}\approx$1.35~fm for pions. Figure~\ref{fig:1} shows the dependence of the correlation functions on the relative momentum $q$ of kaon pairs at the half-momentum $k=0.15$~GeV/$c$ and negative chemical potential $\mu_0=-0.1 \mu_{max}$ in both approaches. Three different homogeneity scales are considered: $R=0.75$~fm, $R=1.25$~fm, and $R=3$~fm. For sizes of about 1~fm, which are typical for $p+p$ collisions \cite{ppALICE}, the quantum corrections are substantial, as one can see. At the same time, for $R=3$~fm the corrections are fairly small, so that for the sizes typical for $A+A$ collisions they can be ignored, except for the case when $\mu_0 \to \mu_{max}$ [see Eq.~(\ref{mumax})].
\begin{figure}[h]
\center{\includegraphics[scale=1]{plot1.eps}}
\caption{Quantum (solid lines) and quasiclassical (dashed lines) kaon correlation functions for different sizes of the system at $T=155$ MeV and $k=0.15$~GeV/$c$. The chemical potential is negative, $\mu_{0}=-0.1 \mu_{max}$ ($N\simeq 1$). Blue lines correspond to $R=0.75$~fm, red lines to $R=1.25$~fm, and green lines to $R=3$~fm.}
\label{fig:1}
\end{figure}
The femtoscopic analysis of high-energy $p+p$ collisions accompanied by relatively large multiplicities has to be carried out more carefully, since the particle density is high ($\mu \to \mu_{max}$), and most bosons occupy the lowest energy level. In Appendix~B the average number of particles on this level is found to be [Eq.~(\ref{N0})]:
\begin{equation}\label{44}
\left< N_0 \right> = \frac{1}{e^{\beta(\mu_{max}-\mu_0)}-1}.
\end{equation}
Note that the same formula follows from the general Eq.~(\ref{Ntot}) when the thermal wavelengths of quanta exceed the geometrical size of the system, $\Lambda_{T}\gtrsim R$ [e.g., $T\to 0$ or $R \to 0$, after the linearization of~$\sinh (s\beta\omega)$]. The total number of bosons on this level still strongly depends on the constant part of the chemical potential $\mu_0$. It is worth noting that the momentum spectrum $\left< a^{\dagger}_{k_1}a^{}_{k_2}\right>$ of the lowest energy level [Eq.~(\ref{f0}); see Appendix~B] factorizes over $k_1$ and $k_2$, so that the correlation function (\ref{CF}) in the one-level approximation is constant, $C(q)=2$. That leads to an important consequence shown in Fig.~\ref{fig:2}(a) -- specifically, a broadening of the complete correlation function with the increase of the chemical potential (when the impact of the ground state increases). It means that the interferometry radii obtained from the Gaussian fit of the real correlation function can be noticeably smaller than those formally related to the isotropic Gaussian source, $f_{G}\sim \exp(-x^2/2R^2)$, with a naive correlation function for an independent boson emission, $C(q)=1+\lambda\exp{(- R^2q^2)}$, and $\lambda=1$ for a fully chaotic emission.
At the end of this section, let us emphasize again that even a large number of bosons at the lowest level of an ideal Bose gas, concentrated in an effectively limited volume and considered in the grand canonical ensemble, does not bring coherence into the system.
\subsection{Coherence state approach}
Now we approach a very important point: when occupation numbers at the ground state become dominant, it can lead to significant overlap between the wave packets of bosons, and coherence in the system may develop \cite{sinSmall, Csorgo, Glauber}. The strongly overlapping bosons can hardly be considered as fully independently emitted, and even rather small interactions between them can bring correlations of the phases of the wave packets \cite{sinSmall}. In that case, systems with high multiplicities, $\mu_0$ close to $\mu_{max}$, and $ R \lesssim \Lambda_T = \frac{1}{\sqrt{mT}}$, have to be described by a density matrix of partially coherent thermal states\footnote{ To distinguish averages with this new density matrix from those of the grand canonical ensemble, we will use the ${pce}$ subscript. (Where important, averages in the grand canonical ensemble are labeled with the ${gce}$ subscript.) }.
According to the general idea described in Ref.~\cite{akk_sin_ledn} (see also Ref.~\cite{Thirring}), creation (annihilation) operators at the freeze-out stage split into a quantum part associated with $q$-numbers (operators) $b_p$, and a classical part given by $c$-numbers $d_p(\Delta t_f)$, where $\Delta t_f$ is the freeze-out duration time. In the case of slow adiabatic freeze-out $d_p (\Delta t_f \rightarrow \infty) \rightarrow 0$, while in the fast freeze-out scenario that takes place in $p+p$ collisions, the coherent condensate, if it appears, might give a nonzero contribution to the observed spectra (see details in Ref.~\cite{akk_sin_ledn}). Since our area of interest is small systems, it is reasonable to consider a fast freeze-out scenario with a nearly simultaneous decay of the boson coherent field into free particles: $d_p(\Delta t_f \rightarrow 0)=d_p \neq 0$.
The description of a trapped Bose condensed gas in atomic physics is usually accompanied by the problem of fluctuations in the occupancy of the ground state $\bar{N}_0$ \cite{Naraschewski, Politzer}. Specifically, the description of such systems using the grand canonical ensemble predicts a variance proportional to the square of this quantity, $\left< \left(N_0- \bar{N}_0\right)^2\right>_{gce}=\bar{N}_0\left(\bar{N}_0+1\right)$, which can be derived from the distribution in Eq.~(\ref{C13}). The problem arises from a discrepancy with the experimental setting: in the process of cooling, the number of particles in the system is conserved, and an appropriate description should be made in the canonical ensemble. Usually, the energy exchange of the system with the environment is also small, and the microcanonical ensemble has to be used instead (see, for example, Ref.~\cite{Tran}).
In high-energy $p+p$ collisions at the LHC, we describe not a single event (single collision) with some known number of particles, but rather millions of them in a wide range of event-by-event multiplicities (from a few to hundreds). In addition, detectors typically cannot detect the whole system formed in $p+p$ collisions, but only part of it. In this open subsystem the energy and even the net quantum numbers fluctuate quite significantly. Moreover, the temperature at the freeze-out in these processes is about $10^{12}$ K. Of course, this forces us to base the inclusive measurements on the grand canonical ensemble, possibly with some modifications.
Instead of the GCE, we propose to use a new conception of the partially coherent ensemble (PCE) that is applied if the {\it mean} occupancy of the ground state exceeds some critical value $N_{c}$. The latter depends on the peculiarity of the (weak) interaction in the Bose gas, which makes it not quite ideal. Depending on whether the condition $\left\langle N_0\right\rangle > N_{c}$ is satisfied, we attribute the ensemble either to the PCE or to the GCE. In the case of the PCE, the lowest state transforms into a Glauber coherent state in such a way that all the mean values in the GCE and PCE coincide.\footnote{ A similar, to some extent, approach was proposed in Ref.~\cite{Wong}, where the number of bosons in the lowest energy state is {\it fixed}, while the occupancies of the excited states obey GCE statistics. In that model, however, the particle number variance at the ground state and that at $N_c$ are both zero.}
In the partially coherent ensemble, the ground state is a Glauber coherent state, and its wave function is given in the Fock representation by
\begin{equation}\label{cohpart}
\left|\gamma\right> =\exp\left( -\frac{\left|\gamma \right|^2}{2}\right)\sum_{n_{\bf 0}=0}^{\infty}\frac{\gamma^{n_{\bf 0}}}{\sqrt{n_{\bf 0}!}}\left|n_{\bf 0}\right>.
\end{equation}
The description of all excited states remains the same as in the GCE. Then the action of the annihilation (creation) operator on a single ensemble element splits into the two parts discussed above:
\begin{equation}
a^{}_{p}\left| {\bf i}\right> = \left( b^{}_{p}+d^{}_p \right)\left| \gamma \right>\left| {\bf i}\right>_{ex} = \left( b^{}_{p}+d^{}_p \right)\left| {\bf i}\right>,
\end{equation}
where the $c$-number $d_{p}$ is an eigenvalue of the annihilation operator $a^{}_{p}$ corresponding to the coherent state in Eq.~(\ref{cohpart}), and the quantum operator $b^{}_{p}$ acts only on the excited states. To define the number $d_{p}$, it is more natural to use the annihilation (creation) operators $a_{\bf j}$, which decrease (increase) the occupancy numbers of a three-dimensional harmonic oscillator state. Such operators are connected with those in the momentum space, $a^{}_{p}$, through the Hermite functions [Eq.~(\ref{psi})]:
\begin{equation}\label{cohpart2}
a^{}_{p}=\sum_{j_1,j_2,j_3=0}^{\infty}\psi_{\bf j}(p)a_{\bf j}, \qquad \psi_{\bf j}=\psi_{j_1}(b_1p_1)\psi_{j_2}(b_2p_2)\psi_{j_3}(b_3p_3).
\end{equation}
Then, according to Eqs.~(\ref{cohpart}) and (\ref{cohpart2}), the absolute value of $d_{p}$ can be expressed via the following average:
\begin{equation}\label{gamma}
\left<\gamma\right|a^{\dagger}_{p_1}a^{}_{p_2}\left| \gamma \right> =d^{*}_{p_1}d^{}_{p_2}= \psi_{\bf 0}(p_1)\psi_{\bf 0}(p_2) \left| \gamma \right|^2.
\end{equation}
We have not yet fixed the value of $\left| \gamma \right|^2$, however, and to do that we postulate that the one-particle inclusive spectra [Eq.~(\ref{finalspectrum})] and the Wigner function [Eq.~(\ref{WF})] in both ensembles must be the same. This condition is satisfied if we fix $\left| \gamma \right|^2=\left<N_0\right>$ from Eq.~(\ref{N0}). When this is done, it is possible to calculate the contribution of the excited states to the inclusive spectra, $\left<b^{\dagger}_{p_1}b^{}_{p_2}\right>$:
\begin{equation}\label{qp_cpnd}
\begin{matrix}
\left\langle a^{\dagger}_{p_1}a^{}_{p_2} \right\rangle_{gce}=\left\langle a^{\dagger}_{p_1}a^{}_{p_2} \right\rangle_{pce} =\left\langle \left( b^{\dagger}_{p_1}+ d_{p_1}^{*}\right) \left( b^{}_{p_2}+d^{}_{p_2} \right) \right\rangle _{pce} = d^{*}_{p_1}d^{}_{p_2} + \left\langle b^{\dagger}_{p_1}b^{}_{p_2}\right\rangle _{pce}, \\
\left< b^{\dagger}_{p_1}b^{}_{p_2}\right>_{pce}=\sum_{{\bf i}\neq {\bf 0}}\psi_{\bf i}(p_1)\psi_{\bf i}(p_2)\left<a^{\dagger}_{\bf i}a^{}_{\bf i}\right>_{pce}=\sum_{{\bf i}\neq {\bf 0}}\psi_{\bf i}(p_1)\psi_{\bf i}(p_2)\left<a^{\dagger}_{\bf i}a^{}_{\bf i}\right>_{gce}.
\end{matrix}
\end{equation}
As we already mentioned, it is reasonable to expect that coherence develops only if the number of bosons occupying the ground state exceeds some critical value $N_{c}$. Indeed, it is hard to imagine a coherent state of one (on average) particle.\footnote{For the coherent state described by Eq.~(\ref{cohpart}), the probability of detecting $m$ particles obeys the Poisson distribution $P(m,n)=e^{-n}\frac{n^m}{m!}$ with the average $n=\left| \gamma \right|^2$.} In this paper, we do not discuss the exact dependencies of this number on different parameters (such as the size of the system); in our numerical examples we keep it fixed at $N_{c}=2$. That means that the consideration described in this subsection is applicable, as we suggest, only when $\left< N_0 \right>>N_{c}$ [see Eq.~(\ref{N0average}) in Appendix~B]. This condition creates some restriction on the chemical potential $\mu_0$ (or the average number of particles in the whole system $\left< N\right>$). For example, in Figs.~\ref{fig:2}(b), \ref{fig:3}(b), and \ref{fig:4}(b), where the size ($R=1.5$~fm) and temperature ($T=155$~MeV) are fixed, we start our description from $\left<N\right>=5$ as the minimal value which satisfies the mentioned condition. One can extract the values of $N_0$ from Table~{\ref{table:1}} using the relation $f_0=\left<N_0\right>/\left<N\right>$ [the analytic form for $f_0$ follows from Eqs.~(\ref{Ntot}) and (\ref{44})]. Indeed, from the first column, it follows that $\left<N_0\right>\approx 0.4\times5=2=N_c$ particles. For the larger multiplicities, that number only grows.
\subsection{Femtoscopy in the coherent approach}
By introducing the new ensemble in the previous subsection, we break Wick's theorem of the grand canonical ensemble, which means that we have to modify the correlation function defined by Eq.~(\ref{cfdefinition}). Let us rewrite the two-particle inclusive spectra of Eq.~(\ref{twoinclusive}) in the representation described by Eq.~(\ref{cohpart2}):
\begin{equation}
\left\langle a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right\rangle_{pce}=\sum_{\bf i,j,k,l}\psi_{\bf i}(p_1)\psi_{\bf j}(p_2)\psi_{\bf k}(p_1)\psi_{\bf l}(p_2)\left\langle a^{\dagger}_{\bf i} a^{\dagger}_{\bf j} a^{}_{\bf k}a^{}_{\bf l} \right\rangle_{pce}.
\end{equation}
To simplify this expression, we take a few steps: we distinguish the terms which involve the $a^{\dagger}_{\bf 0}, a^{}_{\bf 0}$ operators, apply Wick's theorem to the other terms,\footnote{For the ideal gas each energy level can be considered as an independent grand canonical ensemble; since the excited states in the introduced partially coherent ensemble and in the grand canonical ensemble are the same, we can apply Wick's theorem for them, but not for the ground state.} and then express the two-operator averages of the excited states ($a_{\bf i}, {\bf i}\neq {\bf 0}$) through the one-particle inclusive spectra by means of Eq.~(\ref{qp_cpnd}). After these calculations, we get
\begin{equation}
\begin{matrix}
\left\langle a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right\rangle_{pce} = \left\langle a^{\dagger}_{p_1} a^{}_{p_1} \right\rangle_{gce} \left\langle a^{\dagger}_{p_2} a^{}_{p_2} \right\rangle_{gce} + \left\langle a^{\dagger}_{p_1} a^{}_{p_2} \right\rangle_{gce} \left\langle a^{\dagger}_{p_2} a^{}_{p_1} \right\rangle_{gce} + \\
+ \left|\psi_{\bf 0}(p_1)\right|^2\left|\psi_{\bf 0}(p_2)\right|^2 \left\langle a^{\dagger}_{\bf 0} a^{\dagger}_{\bf 0} a^{}_{\bf 0}a^{}_{\bf 0} \right\rangle_{pce} -2 \left|d^{*}_{p_1}d_{p_2}\right|^2 = \\
= \left\langle a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right\rangle_{gce} + \left|\psi_{\bf 0}(p_1)\right|^2\left|\psi_{\bf 0}(p_2)\right|^2 \left\langle a^{\dagger}_{\bf 0} a^{\dagger}_{\bf 0} a^{}_{\bf 0}a^{}_{\bf 0} \right\rangle_{pce} -2 \left|d^{*}_{p_1}d_{p_2}\right|^2.
\end{matrix}
\end{equation}
The average of four operators on the right side of this equation is an expectation value of $n_0^2$ taken over the coherent state [Eq.~(\ref{cohpart})]. It is known that this state is described by the Poisson distribution with both average and variance equal to $\left| \gamma\right|^2=\left<N_0\right>$; then
\begin{equation}\label{cohcf}
\begin{matrix}
\left\langle a^{\dagger}_{\bf 0} a^{\dagger}_{\bf 0} a^{}_{\bf 0}a^{}_{\bf 0} \right\rangle_{pce}=\left<N_0\right>\left( \left<N_0\right>+1 \right) \\
\left\langle a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right\rangle_{pce} = \left\langle a^{\dagger}_{p_1} a^{\dagger}_{p_2} a^{}_{p_1}a^{}_{p_2} \right\rangle_{gce} -\left|d^{*}_{p_1}d_{p_2}\right|^2\left(1-\frac{1}{\left<N_0\right>} \right).
\end{matrix}
\end{equation}
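The suppression of the intercept can be made explicit: Eqs.~(\ref{wicktheorem}), (\ref{gamma}), and (\ref{cohcf}) at $p_1=p_2=k$ give $\lambda(k)=1-\left(|d_k|^2/\left\langle a^{\dagger}_{k}a^{}_{k}\right\rangle\right)^2\left(1-1/\left\langle N_0\right\rangle\right)$. A minimal numerical sketch (isotropic $d=3$, dimensionless units $\beta=b_i=1$; the parameter values are illustrative, chosen so that $\left\langle N_0\right\rangle>N_c$) reproduces the qualitative behavior of Fig.~\ref{fig:3}(b): $\lambda$ is suppressed at low $k$ and tends to unity at high $k$.

```python
import math

def K_s(k1, k2, bw, s):
    """Iterated kernel K_i^{(s)} [Eq. (Mi2)] for one component, b = 1."""
    sh = math.sinh(s * bw)
    return (math.sqrt(1.0 / (2 * math.pi * sh))
            * math.exp(-(k1 * k1 + k2 * k2) / (2 * math.tanh(s * bw))
                       + k1 * k2 / sh))

def n_k(kvec, bw, mu0, s_max=200):
    """Diagonal spectrum <a^+_k a_k> [Eq. (3inclusive)], isotropic d = 3."""
    return sum(math.exp(mu0 * s) * math.prod(K_s(x, x, bw, s) for x in kvec)
               for s in range(1, s_max + 1))

def lam(kmag, bw, mu0):
    """Chaoticity parameter lambda(k) implied by Eqs. (gamma) and (cohcf)."""
    n0 = 1.0 / (math.exp(3 * bw / 2 - mu0) - 1.0)        # Eq. (44), beta = 1
    kvec = (kmag, 0.0, 0.0)
    d2 = n0 * math.pi ** -1.5 * math.exp(-kmag * kmag)   # |d_k|^2, Eq. (gamma)
    return 1.0 - (d2 / n_k(kvec, bw, mu0)) ** 2 * (1.0 - 1.0 / n0)

bw, mu0 = 0.9, 1.2    # illustrative; here <N_0> is about 6 > N_c
```

Since $|d_k|^2$ is one (weighted) term of the full sum giving $\left\langle a^{\dagger}_{k}a^{}_{k}\right\rangle$, the ratio never exceeds unity and $0<\lambda(k)<1$ whenever $\left\langle N_0\right\rangle>1$.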
The last equation together with Eqs.~(\ref{twoinclusive}), (\ref{cfdefinition}), (\ref{wicktheorem}), and (\ref{gamma}) was used in numerical calculations in Figs.~\ref{fig:2}(b), \ref{fig:3}(b), and \ref{fig:4}(b).
\subsection{Comparison of results}
In Fig.~\ref{fig:2} one can see how the condensation affects the CF. For small numbers of particles, the contribution to the CF from the condensate is negligible, and the CF behaves in the same way as in chaotic systems [compare Figs.~\ref{fig:2}(a) and~\ref{fig:2}(b)]. It is easy to see that in the case where a condensate occurs, the intercept is less than 2 and is determined by the condensate contribution to the inclusive spectrum. The latter is controlled by the constant part of the chemical potential $\mu_0$, the ratio of the thermal wavelength to the size of the system $\frac{\Lambda_T}{R}=\beta\omega$, and the average momentum of the pair $k$.
Correlation functions in both approaches were built according to the procedure described in the previous subsections. Chemical potentials (see Table~\ref{table:1}) were found numerically to guarantee the proper values of $\left<N\right>$. Additionally, we give the corresponding condensate contributions to the spectrum, $f_0=\left<N_0\right>/\left<N\right>$, which grow with the increase of multiplicity.
\begin{table}[h!]
\centering
\begin{tabular}{| c || c | c | c | c| c| c|}
\hline
$\left< N \right>$ & 5 & 20 & 40 & 80 & 160 & 250\\
\hline
$\frac{\mu_{max}-\mu_{0}}{\mu_{\max}}$ & $3.02\times10^{-1}$ & $4.69 \times10^{-2}$ & $2.09\times10^{-2}$& $9.87\times10^{-3}$ & $4.80\times10^{-3}$ & $3.04\times10^{-3}$\\
\hline
$f_{0}$ & 0.40& 0.77& 0.88& 0.94& 0.97& 0.98\\
\hline
\end{tabular}
\caption{Relative chemical potentials $\frac{\mu_{max}-\mu_{0}}{\mu_{\max}}$ and ground state occupancies $f_0$ at different multiplicities $\left<N\right>$ of the pion systems with $R=1.5$~fm and $T=155$~MeV. Such parameters correspond to the value $T/\omega\approx1.12$ ($\mu_{max}\approx207.6$~MeV) in Fig.~\ref{fig:3}(a).}
\label{table:1}
\end{table}
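The entries of Table~\ref{table:1} follow from Eqs.~(\ref{Ntot}) and (\ref{44}) alone. The sketch below (plain Python; $\hbar c=197.327$~MeV\,fm and $m_{\pi}=139.57$~MeV are standard values, while the bisection bounds are arbitrary) solves $\left<N\right>(\mu_0)=5$ for the first column and recovers $f_0\approx0.40$ and $\mu_{max}\approx207.6$~MeV:

```python
import math

HBARC = 197.327                          # MeV fm
M_PI, T, R = 139.57, 155.0, 1.5          # pion mass and T in MeV, R in fm
bw = HBARC / (R * math.sqrt(M_PI * T))   # beta*omega = Lambda_T / R
mu_max = 3 * bw / 2                      # beta*mu_max, isotropic d = 3

def n_total(mu, s_max=200):
    """<N> [Eq. (Ntot)] for the isotropic pion system; mu = beta*mu_0."""
    return sum(math.exp(mu * s) / (2 * math.sinh(s * bw / 2)) ** 3
               for s in range(1, s_max + 1))

def solve_mu(n_target):
    """Bisection in beta*mu_0 for a given mean multiplicity <N>."""
    lo, hi = -50.0, mu_max - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if n_total(mid) < n_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mu = solve_mu(5.0)                        # first column of Table 1
n0 = 1.0 / (math.exp(mu_max - mu) - 1.0)  # <N_0>, Eq. (44)
f0 = n0 / 5.0                             # condensate fraction
```

The same routine with the other target multiplicities reproduces the remaining columns of the table.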
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{corfunctions1.eps} \includegraphics[scale=0.5]{corfunctions2.eps}
\end{center}
\caption{(a) Quantum statistical CFs for the different chemical potentials
with a disordered condensate (solid lines) and for a Gaussian source related to the geometrical size $R$ (blue dashed line) at $k=0.3$~GeV/$c$, $R=1.5$~fm, and $T=155$~MeV. The solid lines correspond to different chemical potentials, and therefore to different average numbers of particles in the system $\left< N\right>$. (b) CFs of the systems with the same $k,R,$ and $T$ as in (a) in the partially coherent state approach.}
\label{fig:2}
\end{figure}
In Fig.~\ref{fig:3}(b) one can see that at low $k$, the chaoticity parameter $\lambda(k)$ decreases in systems with a coherent condensate when $\mu_0$ approaches $\mu_{max}$, and so $\left< N \right>$ grows. It might be associated with similar experimental observations for $p+p$ collisions reported by the CERN ATLAS \cite{ATLAS} and LHCb \cite{LHCb} Collaborations. At small multiplicities the condensate contribution is small [see Fig.~\ref{fig:3}(a)], and $\lambda(k)$ stays close to unity, which is typical for chaotic systems. One can see from Fig.~\ref{fig:3}(b), as was also mentioned in Ref.~\cite{Wong}, that the difference between the chaoticity parameters in systems with high and low levels of coherence vanishes quite quickly with the increase of the momentum $k$ of the measured boson pair. This happens, as follows from Eq.~(\ref{f0}), due to the localization of the condensate in the low kinematic region of $\sqrt{\left<k^2\right>} \sim \frac{1}{\sqrt{R\Lambda_{T}}} $, whereas the excited states shift the same average to higher momenta. Let us mention that the colored lines on the plot correspond to fixed values of $\left< N\right>$, which means that the chemical potential $\mu_{0}(\left<N\right>, \beta\omega)$ has to be defined numerically for each point of the plot independently.
Figure~\ref{fig:3}(a) demonstrates the fraction $f_0$ of the average number of particles in the coherent condensate $\left< N_{0} \right>$ compared to $\left<N\right>$ for the different multiplicities available in $p+p$ collisions at the LHC, and different sizes and temperatures of the system. For the real experimental data we expect to consider sizes $R\approx 1.5$~fm, freeze-out temperatures $T=150-165$~MeV, and multiplicities $\left< N\right> \approx 5 - 20$ identical bosons ($\pi^{\pm}$ mesons). We, however, demonstrate a much wider set of parameters in order to compare results with $N$ in Ref.~\cite{Wong},\footnote{In Ref.~\cite{Wong}, the ground state is occupied by a fixed number of particles.} which should coincide, since mathematically both approaches provide the same number of particles in the ground state (or the average number in our case), which certainly cannot be said about the fluctuations.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{f0.eps} \includegraphics[scale=0.5]{lambda.eps}
\end{center}
\caption{(a) The fraction of coherent condensate $f_0= \left\langle N_{0}^{coh}\right\rangle /\left\langle N \right\rangle$ as a function of $1/\beta\omega=R/\Lambda_T$ at different mean boson numbers $ \left\langle N \right\rangle$; (b) the $k$-dependence of the chaoticity parameter $\lambda(k)$ in the grand canonical ensemble with a coherent condensate for different $\left\langle N \right\rangle$.}
\label{fig:3}
\end{figure}
Since $T/\omega=R/\Lambda_T=R\sqrt{mT}$, the plot in Fig.~\ref{fig:3}(a) can be applied to both $K$ and $\pi$ mesons, and one can fix $R$ to find the ``critical'' temperature $T_c(\mu)$ where $f_{0}$ becomes substantial; or fix $T$, for example, at the typical freeze-out temperature $T_{f.o.}=155$~MeV and read the plot as $f_0(R)$ to determine whether coherence could develop in the system.
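For orientation, the mapping between the dimensionless abscissa $R/\Lambda_T$ of Fig.~\ref{fig:3}(a) and physical parameters can be evaluated directly. A minimal numerical sketch in natural units, with $\hbar c = 197.327$~MeV\,fm and the relation $\Lambda_T = 1/\sqrt{mT}$ read off from $R/\Lambda_T=R\sqrt{mT}$ above:

```python
# R/Lambda_T = R*sqrt(m*T)/(hbar*c) for R in fm and m, T in MeV.
HBARC = 197.327  # MeV fm

def thermal_wavelength(m_mev, t_mev):
    """Thermal wavelength Lambda_T = 1/sqrt(m*T), returned in fm."""
    return HBARC / (m_mev * t_mev) ** 0.5

M_PION, M_KAON = 139.57, 493.68  # MeV
T, R = 155.0, 1.5                # freeze-out temperature (MeV), source size (fm)

lam_pi = thermal_wavelength(M_PION, T)  # ~1.34 fm
lam_k = thermal_wavelength(M_KAON, T)   # ~0.71 fm
print(R / lam_pi, R / lam_k)            # abscissa values R/Lambda_T in Fig. 3(a)
```

For the typical parameters quoted in the text, pions at $R=1.5$~fm sit near $R/\Lambda_T\approx 1.1$ and kaons near $R/\Lambda_T\approx 2.1$.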
As we see in Fig.~\ref{fig:2}, the presence of a coherent condensate changes not only the intercept but also the shape of the correlation functions. The latter affects the femtoscopy radii $R_{HBT}$. In the systems with a coherent condensate, the radii $R_{HBT}$, as one can see from Fig.~\ref{fig:4}, are higher than in those (with the same multiplicities) where the coherence does not develop. Also, as is demonstrated in Fig.~\ref{fig:4}(b), the $R_{HBT}(k)$ in the partial coherent approach oscillate near some fixed value which can be defined from the asymptotic behavior of this dependence, while in a fully chaotic system, the femtoscopy radii in the low- and high-$k$ regions can differ a lot. This happens because in the low-$k$ region, the correlation function is suppressed in the presence of the condensate, while this is not the case in a pure ideal gas, where the contribution from the lowest level at small $k$ reduces the interferometry radius. At high momenta $k$, the plots in both Fig.~\ref{fig:4}(a) and \ref{fig:4}(b) tend to the same constant value for all multiplicities, which can be found from Eq.~(\ref{finalspectrum}) if one truncates the series at the first term and neglects the condensate terms in the CF [Eq.~(\ref{cohcf})]. Then, in this approximation
\begin{equation}
C(k,q)=C(q)=1+e^{-\frac{q^2R\Lambda_{T}}{\sinh\left( \frac{\Lambda_{T}}{R}\right)}},
\end{equation}
which corresponds to $R_{HBT}=\sqrt{\frac{R\Lambda_{T}}{\sinh\left( \frac{\Lambda_{T}}{R}\right)}}$ and $\lambda(k)=1$ [see Fig.~\ref{fig:3}(b)]. For the large systems with $R \gg \Lambda_T$, this limit can be simplified to $R_{HBT}=\sqrt{R\Lambda_{T} \left( \frac{\Lambda_{T}}{R} \right)^{-1}}=R$.
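This limiting behavior is easy to check numerically. A minimal sketch (`r_hbt` is a hypothetical helper implementing the expression above, with $R$ and $\Lambda_T$ in fm):

```python
import math

def r_hbt(R, lam_T):
    """High-k HBT radius R_HBT = sqrt(R*Lambda_T / sinh(Lambda_T/R)),
    from the Gaussian form C(q) = 1 + exp(-q^2 R Lambda_T / sinh(Lambda_T/R))."""
    return math.sqrt(R * lam_T / math.sinh(lam_T / R))

# Small thermal source: R_HBT falls visibly below the geometric size R.
print(r_hbt(1.5, 1.34))   # ~1.41 fm for R = 1.5 fm
# Large source, R >> Lambda_T: sinh(x) -> x, so R_HBT -> R.
print(r_hbt(100.0, 1.34)) # ~100 fm
```

The first case illustrates the paper's point that for sources comparable to the thermal wavelength the interferometry radius is smaller than the geometric size.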
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{HBT3.eps} \includegraphics[scale=0.5]{HBT4.eps}
\end{center}
\caption{Results of the HBT fit of the pion CF at low $q$ for the small source size of $R=1.5$~fm at $T=T_{f.o.}=155$~MeV. The plot in (a) corresponds to the fully chaotic systems, and (b) shows systems with the coherent condensate.}
\label{fig:4}
\end{figure}
\section{Conclusions}
In this paper, we have studied the Bose-Einstein correlations in small local-equilibrium systems in a simple model having an exact analytic solution. We have considered a free scalar field on the freeze-out hypersurface with a uniform temperature. It is shown that in systems comparable in size to the thermal wavelength of the emitted bosons, quantum corrections to the two-particle correlation functions of identical particles are substantial. Qualitatively, the interferometry radii of the considered systems are smaller than those formally obtained for a Gaussian source whose radius equals the geometrical size of the system. This difference increases in systems with higher multiplicities.
In the case of strong overlap of the wave packets in the ground state in most of the events -- the overlap that happens because the thermal wavelengths of the quanta are larger than or comparable to the geometric size of the system and/or because the chemical potential in the center of the system approaches its maximal value -- a coherent Bose-Einstein condensate can appear. This leads to a reduction of the intercept of the inclusive correlation function.
Note that the effect of the reduction of the femtoscales and suppression of the correlation functions compared with a naive picture of independent boson emission from a Gaussian source of the same effective size was found in a nonthermal model in Ref.~\cite{sinSmall}. Now, in the local-equilibrium thermal model, we have demonstrated in addition that the chaoticity parameter decreases, when the multiplicity grows. It might be associated with similar experimental observations for $p+p$ collisions reported by the CERN ATLAS \cite{ATLAS} and LHCb \cite{LHCb} Collaborations.
The results found in this paper for the model of a small thermal source are planned to be applied for the analysis of femtoscopic phenomena in $p+p$ collisions at the LHC.
\section{Acknowledgements}
This research was carried out within the project ``Spatiotemporal dynamics and properties of superdense matter in relativistic collisions of nuclei, and their signatures in current experiments at the LHC, RHIC and planned FAIR, NICA'', Agreement No.~7/2020 with the NAS of Ukraine. It is partially supported by the Tomsk State University Competitiveness Improvement Program.
\section{Introduction} \label{sec:intro}
Most supernovae (SNe) except Type Ia SNe are produced by core-collapse in massive stars at the end of their lives. The majority of core-collapse SNe belong to Type II SNe (SNe II) and their progenitors are hydrogen-rich supergiants \citep[e.g.,][]{Smith2011}. Type Ib and Ic SNe (SNe Ib/Ic) lack hydrogen lines in their spectra. This implies that their progenitors are single Wolf-Rayet (WR) stars which lost their hydrogen-rich envelopes by stellar wind mass-loss, and/or hydrogen-deficient stars produced in binary systems by interactions with their companion stars \citep[see][for a review]{Yoon2015}. Recent observational studies indicate that the ejecta mass distribution of SNe Ib/Ic is concentrated in the range of $1-4~M_\odot$ \citep[e.g.,][]{Drout2011, Cano2013, Lyman2014, Taddia2015}. The implied progenitor masses at explosion (i.e., about $2.4 - 5.4~M_\odot$) are significantly lower than the observed WR star masses and the predictions of stellar evolutionary models of single massive stars (i.e., $M \gtrsim 8~M_\odot$; e.g., \citealt{Hamann2006, Meynet2005, Eldridge2006, Georgy2012, Yoon2015, Hamann2019, Sander2012, Sander2019}). This favors the binary star scenario for SN Ib/Ic progenitors \citep[e.g.,][]{Podsiadlowski1992a,Vanbeveren1998, Wellstein1999, Yoon2010,Eldridge2008, Yoon2015, Yoon2017a}.
One of the best methods to study the exact nature of SN progenitors would be to directly identify the progenitor stars in pre-explosion images. Unlike the SN II progenitors that have been observed as optically bright supergiants \citep[e.g.,][]{Smartt2009b, Smartt2015}, progenitors of SNe Ib/Ic turn out to be harder to detect. For more than 10 recently observed SNe Ib/Ic, no direct evidence for their progenitors has been found despite fairly deep observations using the \textit{Hubble Space Telescope (HST)} or other ground-based telescopes \citep[e.g.,][]{Eldridge2013, Smith2014, Smartt2015,Vandyk2017}. So far, only three cases have been reported as candidates of SN Ib/Ic progenitors: iPTF13bvn \citep[Type Ib;][]{Cao2013, Bersten2014, Eldridge2015}, SN 2019yvr \citep[Type Ib;][]{Kilpatrick2021}, and SN 2017ein \citep[Type Ic;][]{Vandyk2018, Kilpatrick2018}, among which only the progenitor of iPTF13bvn has been conclusively identified by its disappearance \citep{Eldridge2016,Folatelli2016}.
This implies that the majority of SN Ib/Ic progenitors are fainter in the optical than the observed WR stars in the local universe. This would be because most SN Ib/Ic progenitors are produced in binary systems and/or because SN Ib/Ic progenitor properties at the pre-SN stage are significantly different from those of observed WR stars due to evolutionary effects \citep{Yoon2012}. Several authors have made predictions on the optical properties of SN Ib/Ic progenitors using stellar evolution models of single and binary stars under the black body approximation or using detailed non-local thermodynamic equilibrium (non-LTE) stellar atmospheric models \citep{Yoon2012, Eldridge2013, Groh2013a, Groh2013b,Kim2015}. One of the important findings in these studies is that the location of the photosphere formed in an optically thick wind is an important factor in determining the optical brightness of a SN Ib/Ic progenitor \citep{Kim2015}. This means that the uncertain wind mass-loss rate at the pre-SN stage would be critical for the optical properties of SN Ib/Ic progenitors.
In this study, we systematically explore the effects of winds on the optical brightness and spectra of SN Ib/Ic progenitors for a wider parameter space than considered in \citet{Kim2015}, in terms of progenitor mass, chemical composition, wind mass-loss rate and wind terminal velocity. For this purpose we use non-LTE stellar atmospheric models calculated with the radiative transfer code CMFGEN \citep{Hillier1998a,Hillier2003}. In Section \ref{sec:model}, we explain the detailed physical assumptions and SN Ib/Ic progenitor models used in this study. In Section \ref{sec:result}, we present our calculation results and discuss observational properties resulting from the various assumptions on the mass-loss rate and wind terminal velocity. In Section \ref{sec:implication}, we discuss constraints on the mass-loss rate of SN Ib/Ic progenitors by comparing our models with the detection limits of undetected SN Ib/Ic progenitors provided by previous studies on pre-explosion images. We also discuss the implications of our results for the directly observed SN Ib/Ic progenitor candidates. We conclude and summarize our study in Section \ref{sec:conclusion}.
\section{Models and Physical Assumptions} \label{sec:model}
\subsection{Input Progenitor Models} \label{subsec:modelproperty}
Most of our progenitor models are based on the study of \citet[][hereafter
\citetalias{Yoon2017b}]{Yoon2017b}. These models are constructed by
calculating the evolution of pure helium stars at solar metallicity ($Z =
0.02$) using the \texttt{BEC} code \citep[see][and its references]{Yoon2010},
until the end of core oxygen burning, for 9 different initial helium star
masses ($M_\mathrm{He,i}$ = 3.9, 4, 6, 8, 10, 12, 15, 20 and 25~$M_\odot$). The
adopted WR wind mass-loss rate is given by the prescription of
\citetalias{Yoon2017b}.
This prescription is based on the empirical mass-loss rates by the Potsdam
group~\citep{Hamann2006, Hainich2014} for WNe type WR stars ($\dot{M}_\mathrm{WNE}$) and by
\citet{Tramper2016b} for WC/WO type WR stars ($\dot{M}_\mathrm{WC}$), respectively, as the following:
\begin{equation}
\begin{split}
\dot{M}_\mathrm{WNE}=f_\mathrm{WR}\left(\frac{L}{L_\odot}\right)^{1.18} \left(\frac{Z_\mathrm{init}}{Z_\odot}\right)^{0.60} 10^{-11.32} \\
\mathrm{for}\: Y=1-Z_\mathrm{init}~,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\dot{M}_\mathrm{WC}= f_\mathrm{WR} \left(\frac{L}{L_\odot}\right)^{0.83} \left(\frac{Z_\mathrm{init}}{Z_\odot}\right)^{0.25} Y^{0.85} 10^{-9.20}
\\ \mathrm{for}\:Y<0.90~~,
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\dot{M}=(1-x)\dot{M}_\mathrm{WNE}+x\dot{M}_\mathrm{WC} \\
\mathrm{for}\:0.90 \le Y<1-Z_\mathrm{init}~,
\end{split}
\end{equation}
where
\begin{equation}\label{eq:Mdot_Y17_x}
x=(1-Z_\mathrm{init}-Y)/(1-Z_\mathrm{init}-0.9)~.
\end{equation}
Here the mass-loss rate is given in units of $M_\odot~\mathrm{yr^{-1}}$, and
$Y$ and $Z_\mathrm{init}$ denote the surface helium mass fraction and the initial metallicity of WR stars, respectively.
The wind factor $f_\mathrm{WR}$ is a free parameter and \citetalias{Yoon2017b} suggests $f_\mathrm{WR}= 1.58$ to explain the
luminosity distribution of WR stars of different types. See
\citetalias{Yoon2017b} for more details.
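For reference, the piecewise prescription of Eqs.~(1)--(4) can be sketched as a small function. This is a non-authoritative sketch, not the evolution code's implementation; `L` is in solar luminosities and the default $f_\mathrm{WR}=1.58$ is the value suggested by \citetalias{Yoon2017b}:

```python
def wr_mass_loss_rate(L, Y, Z_init=0.02, Z_sun=0.02, f_wr=1.58):
    """WR mass-loss rate in Msun/yr following Eqs. (1)-(4):
    the WNE rate for Y = 1 - Z_init, the WC/WO rate for Y < 0.90,
    and a linear blend in the surface helium fraction in between."""
    mdot_wne = f_wr * L**1.18 * (Z_init / Z_sun)**0.60 * 10**-11.32
    mdot_wc = f_wr * L**0.83 * (Z_init / Z_sun)**0.25 * Y**0.85 * 10**-9.20
    if Y >= 1.0 - Z_init:
        return mdot_wne
    if Y < 0.90:
        return mdot_wc
    x = (1.0 - Z_init - Y) / (1.0 - Z_init - 0.90)
    return (1.0 - x) * mdot_wne + x * mdot_wc

# Helium-rich (WNE branch) models of Table 1, e.g. HE5.05 (log L = 5.11, Y = 0.98):
print(wr_mass_loss_rate(10**5.11, 0.98))  # ~8.1e-6 Msun/yr, cf. 8.13e-6 in Table 1
```

The helium-rich entries of Table \ref{tab:input} are reproduced to within the rounding of the tabulated luminosities.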
We also take two helium-poor SN Ic progenitor models (CO2.16 \& CO3.93) from \citet[][hereafter \citetalias{Yoon2019}]{Yoon2019}, which are calculated until the pre-SN stage with the \texttt{MESA} code \citep{Paxton2011,Paxton2013,Paxton2015,Paxton2018,Paxton2019}. For these two models, the standard mass-loss rate of \citet[][hereafter \citetalias{Nugis2000}]{Nugis2000} is adopted until core helium exhaustion, and thereafter an artificially increased mass-loss rate of $\dot{M}=10^{-4}~M_\odot ~\mathrm{yr^{-1}}$ is used. However, because we are interested in exploring the effects of different wind properties on the resulting spectra for a given progenitor structure, the details of how each progenitor model is constructed are not important in this study.
The physical properties of the progenitor models are summarized in Table \ref{tab:input}. The model name starting with `HE' refers to the helium-rich models, where the surface mass fraction of helium is $Y = 0.98$ and the integrated helium mass ($m_\mathrm{HE}=\int X_\mathrm{He}~dM_r$) is fairly large ($m_\mathrm{He} \ge 0.88~M_\odot$). The model name starting with `CO' refers to the helium-poor models with $Y < 0.5$ and $m_\mathrm{He} \le 0.23~M_\odot$. The numbers after HE or CO denote the progenitor masses in units of solar mass at the pre-SN phase. Recent studies indicate that not much helium can be hidden in SNe Ic spectra and here we assume that the HE and CO models represent SN Ib and Ic progenitors, respectively \citep[see][for related discussions]{Hachinger2012,Yoon2017b,Yoon2019,Dessart2020,Williamson2021}. The considered progenitor masses are $2.91 - 5.05~M_\odot$ and $2.16 - 9.09~M_\odot$ for helium-rich and helium-poor models, respectively. This can cover most of the SN Ib and Ic progenitor mass range inferred from SN observations (i.e., the SN ejecta masses of 1.0 - 4.0~$M_\odot$) assuming that the neutron star remnant mass is about 1.4~$M_\odot$.
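The bookkeeping behind the quoted mass ranges is a one-liner (a sketch; the fixed $1.4~M_\odot$ neutron-star remnant is the assumption stated above):

```python
def progenitor_mass_from_ejecta(m_ejecta, m_remnant=1.4):
    """Pre-SN progenitor mass (Msun) implied by an observed ejecta mass,
    assuming the remaining mass ends up in the neutron-star remnant."""
    return m_ejecta + m_remnant

# Observed SN Ib/Ic ejecta masses of 1-4 Msun map onto progenitors of 2.4-5.4 Msun,
# within the 2.16-9.09 Msun range spanned by the model grid in Table 1.
print(progenitor_mass_from_ejecta(1.0), progenitor_mass_from_ejecta(4.0))
```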
\begin{deluxetable*}{hlrccrrcrccccc}
\tablenum{1}
\tablecaption{Input Physical Parameters of SN Ib/Ic Progenitor Models\label{tab:input}}
\tablehead{
\nocolhead{Model} & \colhead{Name} & \colhead{$M_\mathrm{He,i}$} & \colhead{$M$} &
\colhead{log $L$} & \colhead{$T_\star$} & \colhead{$R_\star$} &\colhead{$\dot{M}_\mathrm{fid}$} & \colhead{$v_{\infty,\mathrm{fid}}$} &
\colhead{$m_\mathrm{He}$} & \colhead{$Y$} & \colhead{log $X_\mathrm{C}$} & \colhead{log $X_\mathrm{N}$} & \colhead{log $X_\mathrm{O}$}\\
\nocolhead{Reference} & \colhead{} & \colhead{($M_\odot$)} & \colhead{($M_\odot$)} & \colhead{($L_\odot$)} & \colhead{(K)} & \colhead{($R_\odot$)}
& \colhead{($M_\odot \mathrm{\ yr^{-1}}$)} & \colhead{$\mathrm{(km\ s^{-1})}$} & \colhead{($M_\odot$)}
& \colhead{} & \colhead{} & \colhead{}& \colhead{}
}
\startdata
\multirow{9}{*}{\citetalias{Yoon2017b}}
& HE2.91 & 3.9 & 2.91 & 4.66 & 16850 & 25.04 & 2.37e-06 & 184.49 & 1.06 & 0.98 & -3.31 & -1.82 & -3.44\\
& HE2.97 & 4.0 & 2.97 & 4.68 & 19060 & 19.98 & 2.51e-06 & 206.87 & 1.07 & 0.98 & -3.31 & -1.82 & -3.44\\
& HE4.09 & 6.0 & 4.09 & 4.97 & 36310 & 7.67 & 5.50e-06 & 335.08 & 1.11 & 0.98 & -3.66 & -1.86 & -3.09\\
& HE5.05 & 8.0 & 5.05 & 5.11 & 47960 & 5.22 & 8.13e-06 & 415.33 & 0.88 & 0.98 & -3.69 & -1.86 & -3.08\\
& CO5.18 & 10.0 & 5.18 & 5.12 & 95080 & 1.34 & 1.55e-05 & 728.36 & 0.23 & 0.42 & -0.33 & - & -1.10\\
& CO5.50 & 12.0 & 5.50 & 5.18 & 117300 & 0.94 & 1.26e-05 & 942.76 & 0.17 & 0.21 & -0.27 & - & -0.65\\
& CO6.17 & 15.0 & 6.17 & 5.22 & 133300 & 0.76 & 1.41e-05 & 1157.21 & 0.20 & 0.23 & -0.27 & - & -0.68\\
& CO7.50 & 20.0 & 7.50 & 5.33 & 172800 & 0.52 & 1.70e-05 & 1715.16 & 0.18 & 0.21 & -0.28 & - & -0.62\\
& CO9.09 & 25.0 & 9.09 & 5.43 & 191700 & 0.47 & 1.95e-05 & 2171.91 & 0.16 & 0.19 & -0.29 & - & -0.56\\
\hline
\multirow{2}{*}{\citetalias{Yoon2019}}
& CO2.16 & & 2.16 & 4.40 & 198900 & 0.13 & 3.07e-06 & 414.08 & 0.06 & 0.31 & -0.27 & - & -0.89\\
& CO3.93 & & 3.93 & 4.26 & 76630 & 0.77 & 3.25e-06 & 798.00 & 0.10 & 0.49 & -0.36 & - & -1.19\\
\enddata
\tablecomments{Input model parameters of SN Ib/Ic progenitors which are given by the stellar evolution codes (\texttt{BEC} or \texttt{MESA}). Upper nine models are based on \citetalias{Yoon2017b}, and the other two models are from \citetalias{Yoon2019}. From left to right, each column has the meaning of\newline
$-M_\mathrm{He,i}$ : the initial mass of the He star at the beginning of the stellar evolution code, \newline
$-M$ : the SN progenitor mass after finishing the stellar evolution code, \newline
$-L$ : the total bolometric luminosity of the progenitor, \newline
$-T_\star$, $R_\star$ : the hydrostatic surface temperature and radius (without correcting the optical depth effects from the wind), \newline
$-\dot{M}_\mathrm{fid}$ : the mass-loss rate calculated with the \citetalias{Yoon2017b} prescription
for the given surface properties of the progenitor model,\newline
$-v_{\infty,\mathrm{fid}}$ : the wind terminal velocity calculated with the equations \ref{eq:vinf_WN} \& \ref{eq:vinf_WC},\newline
$-m_\mathrm{He}$ : the integrated helium mass,\newline
$-Y$, $X_\mathrm{C}$, $X_\mathrm{N}$, $X_\mathrm{O}$ : the surface mass fraction of helium, carbon, nitrogen and oxygen of the progenitor.
}
\end{deluxetable*}
We do not consider the rotational line broadening effect because the majority of ordinary SN Ib/Ic progenitors would be slow rotators as they lose a large amount of angular momentum via mass-loss either by winds or binary interactions \citep[see][for a detailed discussion on this issue]{Yoon2010, Yoon2015}. We also do not consider the effects of different metallicity. The metallicity can have a variety of effects on the progenitor evolution and structure, the wind mass-loss rate, and the resulting spectra. Our preliminary results indicate that the wind density is the major factor that determines
spectral properties for a given progenitor structure and this study covers a wide range of the mass-loss rate (see Section \ref{subsec:windmodel}). Considering the full evolutionary effects
of metallicity on the progenitor evolution and structure would be a subject of future work.
Stellar evolution models predict that the surface properties (i.e., luminosity and temperature) of SN Ib/Ic progenitors do not change significantly during the last $\sim$ 10 years before the explosion~\citep[see][]{Yoon2017a}. However, as discussed in several studies \citep[e.g.,][etc]{Fuller2018}, energy injection from the neon or oxygen burning convective region during the last evolutionary stage might affect the structure of the outermost layers of progenitor stars and induce a mass-loss enhancement. Full consideration of these effects is beyond the scope of this paper, and here we only implicitly consider possible mass-loss enhancements by assuming various mass-loss rates as explained in Section~\ref{subsec:windmodel}.
Figure \ref{fig:LM} shows the luminosity-mass ($L-M$) relation and the temperature-luminosity ($T-L$) relation of our SN Ib/Ic progenitor models. More massive SN Ib/Ic progenitor models by \citet{Groh2013b} and observed Galactic WR stars \citep{Hamann2019,Sander2019} are also plotted in the figure for comparison. The comparison models from \citet{Groh2013b} are self-stripped progenitor models that have solar metallicity and initial masses of $25-120~M_\odot$ and $32-120~M_\odot$, and final masses of $10.9-30.8~M_\odot$ and $9.6-26.2~M_\odot$ for rotating and non-rotating models, respectively. These models are calculated until the end of core carbon burning with the Geneva stellar evolution code \citep{Ekstrom2012,Georgy2012} for which the WR mass-loss rates of \citetalias{Nugis2000} and \citet{Grafener2008} are used. These models are systematically more massive and luminous than our models, and the final masses are comparable to those of Galactic WR stars. The temperatures of the progenitor models in the right panel of Figure \ref{fig:LM} are the hydrostatic surface temperatures ($T_\star$) which are given by the stellar evolution models. However, the temperatures of Galactic WR stars are given by the temperatures at the location where the Rosseland mean optical depth ($\tau_\mathrm{ross}$) is 20 in stellar atmosphere models \citep{Hamann2019,Sander2019}, which might not necessarily correspond to the hydrostatic core surface temperatures.
\begin{figure*}[htb]
\includegraphics[width=\textwidth]{fig01_LM_TL.pdf}
\centering
\caption{The luminosity-mass relation (left) and the temperature-luminosity relation (right) of SN Ib/Ic progenitor models. The upright and inverted stars denote the SN Ib and Ic progenitor models of \citetalias{Yoon2017b}. The SN Ic progenitor models of \citetalias{Yoon2019} are denoted by filled circles. The rotating and non-rotating self-stripped SN Ib/Ic progenitor models of \citet{Groh2013b} are denoted by plus and X markers for comparison. The observed Galactic WR stars from \citet{Sander2019} and \citet{Hamann2019} are denoted by polygons. The dashed and dash-dotted lines show the luminosity-mass relations for WNE and WC ($Y=0.3$) stars given by \citet{Langer1989b}.\label{fig:LM}}
\end{figure*}
The observed Galactic WR stars in the left panel of Figure \ref{fig:LM} follow the theoretical $L-M$ relation for core He-burning WR stars given by \citet{Langer1989b}. This is simply because the masses of the observed WR stars given by \citet{Sander2019} and \citet{Hamann2019} are based on Langer's $L-M$ relation. However, all the progenitor models except CO3.93 are several times more luminous than the predictions of Langer's $L-M$ relation for a given mass, due to a rapid luminosity increase ($\sim0.2-0.3$ dex) at the last evolutionary stages compared to the core helium burning stage \citep{Yoon2012, Groh2013b,Yoon2017b}. This implies that these evolutionary effects should be properly considered when inferring WR star masses from their luminosities.
\subsection{Atmospheric Modeling} \label{subsec:windmodel}
To compute our model spectra of SN Ib/Ic progenitors, we use the non-LTE atmospheric radiative transfer code CMFGEN \citep{Hillier1998a,Hillier2003}. The CMFGEN code determines the temperature distribution of the expanding atmosphere by solving the statistical and radiative equilibrium equations, and computes line and continuum formation under the assumption of spherically symmetric geometry \citep{Hillier1990c}.
While the CMFGEN code computes the radiative transfer self-consistently, it does not solve the momentum equation and we need to specify the hydrodynamic structure of the wind. We assume the standard $\beta$ law for the wind velocity profile with $\beta=1$, which is given by
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:betalaw}
\begin{split}
v(r)=v_0+(v_\infty-v_0)\left(1-\frac{R_\star}{r}\right)^\beta~~,
\end{split}
\end{equation}
\end{fleqn}
where $v_0$ and $v_\infty$ denote the velocity at the hydrostatic stellar surface and the terminal velocity, respectively, and $R_\star$ denotes the hydrostatic stellar surface radius \citep{Lamers1999}.
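For reference, the velocity law above can be sketched numerically; the values below are purely illustrative and are not taken from our model grid:

```python
# Illustrative sketch of the beta-law wind velocity profile (Eq. 1)
# with beta = 1 as adopted in this work. All values are hypothetical,
# not taken from the models in this paper.
R_star = 2.0      # hydrostatic surface radius [R_sun]
v0 = 10.0         # velocity at the hydrostatic surface [km/s]
v_inf = 1000.0    # terminal velocity [km/s]
beta = 1.0

def v_wind(r):
    """Wind velocity [km/s] at radius r [R_sun], for r >= R_star."""
    return v0 + (v_inf - v0) * (1.0 - R_star / r) ** beta

# v0 at the surface, halfway to v_inf at r = 2 R_star, approaching v_inf far out
for r in (R_star, 2 * R_star, 100 * R_star):
    print(r, v_wind(r))
```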
We take the mass-loss rate given by \citetalias{Yoon2017b} with $f_\mathrm{WR} = 1.58$ (see above)
as our fiducial mass-loss rate $\dot{M}_\mathrm{fid}$ for all our progenitor models (see Table \ref{tab:input}).
We determine the wind terminal velocities of our fiducial models ($v_{\infty,\mathrm{fid}}$) using the equations for WN and WC stars from \citetalias{Nugis2000}, as follows. For WN stars,
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:vinf_WN}
\begin{split}
\mathrm{log}\: v_\infty/v_\mathrm{esc}\mathrm{(core)}=\:&0.61-0.13(\pm0.09)\:\mathrm{log}\:L \\
&+0.30(\pm0.77)\:\mathrm{log}\:Y~.
\end{split}
\end{equation}
\end{fleqn}
For WC stars,
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:vinf_WC}
\begin{split}
\mathrm{log}\: v_\infty/v_\mathrm{esc}\mathrm{(core)}=\:&-2.37+0.43(\pm0.13)\:\mathrm{log}\:L \\
&-0.07(\pm0.27)\:\mathrm{log}\:Z~.
\end{split}
\end{equation}
\end{fleqn}
The escape velocity at the stellar surface is defined as
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:v_esc}
\begin{split}
v_\mathrm{esc}\mathrm{(core)} =\sqrt{\frac{2GM(1-\Gamma_e)}{R_\star}}~.
\end{split}
\end{equation}
\end{fleqn}
The Eddington factor $\Gamma_e$ is defined as
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:gamma_e}
\begin{split}
\Gamma_e = 7.66 \times 10^{-5} \sigma_{e} (L/L_\odot)/(M/M_\odot),
\end{split}
\end{equation}
\end{fleqn}
where $\sigma_{e}$ denotes the electron scattering opacity \citepalias[see][]{Nugis2000}. When calculating the wind terminal velocity, we treat the HE models as WN stars and the CO models as WC stars. The spectrum of the most massive CO model (CO9.09) might look like that of a WO star rather than a WC star, depending on the adopted wind parameters (see Section \ref{subsec:fiducial_models}). The observed WO stars have a very high wind terminal velocity of $\sim5000~\mathrm{km~s^{-1}}$ \citep{Drew2004a,Sander2012}. However, we consider various mass-loss rates ($\dot{M}_\mathrm{fid}\times0.1,\;0.5,\;2.0,\;5.0\;\mathrm{and}\;10.0$) and wind terminal velocities ($v_{\infty,\mathrm{fid}}\times0.1,\;0.5,\;2.0,\;\mathrm{and}\;3.0$) in our model grid, and a wind terminal velocity of $\sim5000~\mathrm{km~s^{-1}}$ is covered within our parameter space.
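As a concrete illustration, Equations (2)--(4) can be evaluated numerically. The sketch below uses purely hypothetical input values for the mass, luminosity, radius, helium fraction, and electron-scattering opacity, not the actual parameters of our models:

```python
import math

# Illustrative evaluation of Eqs. (2)-(4) for a WN-type (helium-rich) model.
# All input values below are hypothetical, not taken from Table 1.
G = 6.674e-8                  # gravitational constant [cgs]
M_sun, R_sun = 1.989e33, 6.957e10
M, L, R = 5.0, 1.0e5, 2.0     # mass [M_sun], luminosity [L_sun], radius [R_sun]
Y = 0.98                      # surface helium mass fraction
sigma_e = 0.2                 # electron-scattering opacity [cm^2/g], H-free

# Eddington factor (Eq. 4)
gamma_e = 7.66e-5 * sigma_e * L / M

# Escape velocity at the hydrostatic surface (Eq. 3), in cm/s
v_esc = math.sqrt(2 * G * (M * M_sun) * (1 - gamma_e) / (R * R_sun))

# Terminal velocity for a WN star (Eq. 2, Nugis & Lamers 2000), in cm/s
log_ratio = 0.61 - 0.13 * math.log10(L) + 0.30 * math.log10(Y)
v_inf = 10 ** log_ratio * v_esc

print(f"Gamma_e = {gamma_e:.2f}, v_esc = {v_esc/1e5:.0f} km/s, "
      f"v_inf = {v_inf/1e5:.0f} km/s")
```

With these illustrative numbers the result is a terminal velocity of several hundred $\mathrm{km~s^{-1}}$, in the range typical of WN-star winds.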
The assumed volume filling factor $f_c$ of the wind matter, which is the inverse of the wind clumping factor, is 0.1.
Note that WR spectra would look similar for a given value of $\dot{M}/\sqrt{f_c}v_\infty$ \citep[see][etc]{Hamann1998}.
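This degeneracy can be illustrated with a short sketch (illustrative numbers only): a smooth-wind model ($f_c=1$) and a clumped model ($f_c=0.1$) yield the same wind-strength parameter if the clumped model's mass-loss rate is lower by a factor of $\sqrt{0.1}\approx0.32$:

```python
import math

# Wind-strength degeneracy: WR spectra look similar for a given value of
# Mdot / (sqrt(f_c) * v_inf). All values below are illustrative.
def wind_strength(mdot, f_c, v_inf):
    return mdot / (math.sqrt(f_c) * v_inf)

smooth = wind_strength(1.0e-5, 1.0, 1000.0)                     # smooth wind
clumped = wind_strength(1.0e-5 * math.sqrt(0.1), 0.1, 1000.0)   # clumped, lower Mdot
print(math.isclose(smooth, clumped))
```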
We take some WR and O type star models provided by the CMFGEN code as our initial trial models, which have the most similar temperature and surface gravity values to those of our input models. We first obtain the fiducial models with the fiducial mass-loss rate ($\dot{M}_\mathrm{fid}$) and the fiducial wind terminal velocity ($v_{\infty,\mathrm{fid}}$), which are determined with $L$, $R_\star$ and chemical abundances of each model (see Table \ref{tab:input}). Then we calculate other models with various mass-loss rates or wind terminal velocities from the fiducial models.
\section{Results of the atmospheric models} \label{sec:result}
\subsection{The fiducial models} \label{subsec:fiducial_models}
In Table \ref{tab:output}, we present the CMFGEN output results of our fiducial models. The photospheric values of the temperature and radius are obtained at the photosphere defined by the Rosseland mean opacity, which is located in the wind matter above the hydrostatic surface. The corresponding absolute optical magnitudes in different $HST$ Wide Field and Planetary Camera 2 (WFPC2) filters are also given in the table. We choose the $HST$/WFPC2 filter system to compare the progenitor model magnitudes with the progenitor detection limits of previous searches (see Section \ref{subsec:detectionlimit}). Given that the instrument was replaced with the $HST$ Wide Field Camera 3 (WFC3) in 2009, we also provide the progenitor magnitudes in the $HST$/WFC3 filter system in Appendix \ref{subsec:Appendix1} to be compared with recent and future data. We find that the magnitude difference between the two filter systems is minor for the HE models but significant for the CO models (i.e., up to $\sim$1 mag).
\begin{deluxetable*}{hlcrcccc}
\tablenum{2}
\tablecaption{CMFGEN output parameters and optical magnitudes of SN Ib/Ic progenitors\label{tab:output}}
\tablehead{
\nocolhead{Model} & \colhead{Name} & \colhead{$T_\mathrm{eff}$} & \colhead{$R_\mathrm{phot}$}
& \colhead{$M_{F450W}$} & \colhead{$M_{F555W}$} & \colhead{$M_{F606W}$} & \colhead{$M_{F814W}$} \\
\nocolhead{Reference} & \colhead{} & \colhead{(K)} & \colhead{($R_\odot$)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)}
}
\startdata
\multirow{9}{*}{\citetalias{Yoon2017b}}
& HE2.91 & 16330 & 26.65 & -5.26 & -5.19 & -5.19 & -5.16\\
& HE2.97 & 18390 & 21.46 & -5.11 & -5.01 & -5.00 & -4.92\\
& HE4.09 & 34730 & 8.38 & -4.73 & -4.54 & -4.48 & -4.31\\
& HE5.05 & 41910 & 6.83 & -4.56 & -4.39 & -4.34 & -4.29\\
& CO5.18 & 49850 & 4.88 & -4.83 & -4.84 & -4.76 & -4.74\\
& CO5.50 & 68200 & 2.78 & -4.54 & -4.42 & -4.24 & -4.27\\
& CO6.17 & 73450 & 2.51 & -4.59 & -4.46 & -4.22 & -4.22\\
& CO7.50 & 89770 & 1.91 & -4.53 & -4.52 & -4.25 & -4.11\\
& CO9.09 & 99710 & 1.73 & -4.42 & -4.39 & -4.10 & -3.97\\
\hline
\multirow{2}{*}{\citetalias{Yoon2019}}
& CO2.16 & 74890 & 0.94 & -2.66 & -2.55 & -2.40 & -2.45\\
& CO3.93 & 48080 & 1.95 & -2.83 & -2.81 & -2.78 & -2.78\\
\enddata
\tablecomments{The meaning of each model name is the same as that in Table \ref{tab:input}. From left to right, the columns are: \newline
$-T_\mathrm{eff}$, $R_\mathrm{phot}$ : the effective temperature and the photospheric radius defined at the Rosseland optical depth $\tau_\mathrm{ross}$ = 2/3.\newline
$-M_{F450W,F555W,F606W,F814W}$ : the absolute magnitude in $F450W$, $F555W$, $F606W$ and $F814W$ filters in $\textit{HST}$/WFPC2.
}
\end{deluxetable*}
We present four examples of spectral energy distributions (SEDs) in Figure \ref{fig:sed}. Here, the HE2.91 and CO9.09 progenitor models have the lowest and highest surface temperatures among our input models from \citetalias{Yoon2017b}. The HE5.05 and CO5.18 progenitor models have a similar mass and bolometric luminosity, but very different surface compositions and hydrostatic surface temperatures (see Table~\ref{tab:input}).
\begin{figure*}[htp]
\centering
\includegraphics[width=0.98\textwidth]{fig02_SED.pdf}
\caption{Spectral energy distributions from different SN Ib/Ic progenitor models (HE2.91, HE5.05, CO5.18 and CO9.09). Continuum fluxes are presented by the purple line. The black body fluxes at the photosphere (BB-phot) and the hydrostatic core (BB-core) are plotted with the dashed and dotted lines, respectively. Transmission curves of $F450W$, $F555W$, $F606W$ and $F814W$ filters in $HST$ Wide Field and Planetary Camera 2 (WFPC2) are plotted in the lowest panel. The absolute $F555W$ filter magnitude ($M_{F555W}$) of each case is presented in the inner table. All fluxes are scaled to a distance of 10 pc.}\label{fig:sed}
\end{figure*}
Note that strong line blanketing in the ultraviolet wavelength region and many emission lines in the optical region are observed in the full spectra of the models. For comparison, we also plot the continuum flux given by CMFGEN, the black body flux at the photosphere (BB-phot) and the black body flux at the hydrostatic stellar surface (BB-core) of each model together. The difference between the black body flux at the photosphere and that at the hydrostatic stellar surface is relatively large for the CO models compared to the HE models. This is because the CO models are more compact and have denser winds, leading to a significant lifting-up of the photosphere compared to the HE models (see Tables \ref{tab:input} and \ref{tab:output}). As a result, the BB-phot cases of the CO models are significantly brighter in the optical than the corresponding BB-core cases, and the full-spectrum cases are even brighter than the BB-phot cases (e.g., $HST$/WFPC2 $F555W$ filter magnitudes $M_{F555W} = -4.39$ and $M_{F555W} = -0.55$ for the full spectrum and BB-core cases of CO9.09, respectively; see Figure \ref{fig:sed}), while the differences are relatively small for the HE models.
Among the fiducial models in Figure \ref{fig:sed}, the full spectrum model of HE2.91 is fainter than the corresponding BB-core and BB-phot cases in the optical, in contrast to the other cases. This progenitor model has a surface helium mass fraction of $Y = 0.98$ and an effective temperature of $T_\mathrm{eff}=16330$ K. Its helium-rich and hydrogen-free surface causes the He bound-free discontinuity at 3422~$\text{\AA}$, which corresponds to ionization from the 2$^3$P state of \ion{He}{1}. This discontinuity is also seen in previous spectral models and observations of He-rich stars in a similar temperature range ($T_\mathrm{eff}\sim16000-24000$ K) \citep[e.g.,][]{Popper1947,Hunger1969,Rosendhal1973,Cidale2007}. This continuous helium absorption makes the full spectrum fainter than the BB-core and BB-phot fluxes in the optical for the HE2.91 model. The same phenomenon is also found for the HE2.97 model, which has a similar surface temperature to the HE2.91 model.
In Figure \ref{fig:sed}, we find that the continuum flux exceeds the BB-phot
flux in a relatively long wavelength range for the HE5.05, CO5.18 and CO9.09
models. For example, such an excess is found for $\lambda > 5000~\text{\AA}$ in the
CO5.18 model. This excess is due to the free-free emission from the ionized
stellar winds, and can significantly affect optical magnitudes, especially in
red filters.
In Figure \ref{fig:windeffect}, we show $M_{F555W}$ of our fiducial models calculated
with the full spectra (black markers), compared to the continuum flux (magenta), BB-phot (green) and
BB-core cases (cyan). It is clearly seen that for the CO models with $\log T_\star~\mathrm{[K]} > 4.8$, the optical brightness becomes significantly higher when the full SEDs are taken into account than in the BB-core case. This is mainly because of the lifting-up of the photosphere by an optically thick wind, as discussed above.
\begin{figure}[htp]
\includegraphics[width=0.45\textwidth]{fig03_BB.pdf}
\centering
\caption{Absolute magnitudes of SN Ib/Ic progenitors in the $F555W$ ($HST$/WFPC2) filter. The upright and inverted stars denote the HE and CO progenitor models from \citetalias{Yoon2017b}, respectively. The CO progenitor models from \citetalias{Yoon2019} are denoted by filled circles. Different colors give the optical magnitudes calculated with the full SED, the continuum flux, and the black body flux at the photosphere (BB-phot) and at the hydrostatic surface (BB-core), as indicated by the labels. Galactic WR star samples from \citet{Sander2019} and \citet{Hamann2019} are plotted together for comparison (note that the temperatures of Galactic WR stars are calculated at the radius of $\tau_\mathrm{ross}=20$, and their absolute magnitudes are obtained in the Smith $v$ filter; $\lambda_\mathrm{peak}=5160~\text{\AA}$, \citealt{Smith1968a}). \label{fig:windeffect}}
\end{figure}
On the other hand, the magnitudes $M_{F555W}$ of the HE models with $\log T_\star~\mathrm{[K]} < 4.8$ do not change much among the full SED, continuum flux, BB-phot, and BB-core cases. This is because the HE models have a relatively large radius at the hydrostatic surface, and the resulting wind density with $\dot{M}_\mathrm{fid}$ is not high enough to cause a significant lifting-up of the photosphere. For HE2.91 and HE2.97, the continuum flux is fainter than the BB-core flux in the optical due to the continuous He absorption discussed above.
Figure \ref{fig:normalizedSED} shows the optical spectra normalized by continuum,
which can be used for WR classification~\citep{Smith1968b}.
In HE2.91, most lines appear as absorption lines except for
a few P Cygni profiles of He I. In HE5.05, many P Cygni profiles and emission
lines are found.
HE5.05 may be classified as WN7-8 due to its strong \ion{N}{3} $\lambda4634$ emission line and the comparable strengths of its \ion{He}{1} and \ion{He}{2} lines. In CO5.18, a strong \ion{C}{3} $\lambda5696$
emission line and a relatively weak \ion{C}{4} $\lambda5805$ emission line are
seen. In CO9.09, a strong \ion{C}{4} line is seen but the \ion{C}{3}
$\lambda5696$ line is absent. The \ion{O}{4} and \ion{O}{5} emission lines are
also found. CO5.18 and CO9.09 may correspond to WC8-10 and WC4/WO4 stars, respectively. Normalized spectra of all the models with the fiducial mass-loss
rate are presented in Appendix \ref{subsec:Appendix2}. The CO3.93 and CO2.16 models of
\citetalias{Yoon2019} show very similar normalized spectra to those of CO5.18 and CO5.50, respectively.
\begin{figure*}[htp]
\gridline{\fig{fig04_Normalized_flux1.pdf}{\textwidth}{(a)}}
\gridline{\fig{fig04_Normalized_flux2.pdf}{\textwidth}{(b)}}
\caption{Normalized optical spectra of some selected SN Ib/Ic progenitor models in the wavelength
ranges (a) $4400~\text{\AA} \leqslant \lambda \leqslant 7000~\text{\AA}$ and (b) $7000~\text{\AA} \leqslant \lambda \leqslant 10000~\text{\AA}$, which correspond to the wavelength ranges of the $F555W$ and $F814W$ filters ($HST$/WFPC2), respectively. \label{fig:normalizedSED}}
\end{figure*}
The effect of emission lines on the optical brightness can be seen by comparing the full SED flux and the continuum flux. The $M_{F555W}$ magnitudes calculated with both are summarized in the inner tables of Figure \ref{fig:sed}. The HE models have similar $M_{F555W}$ values for the full spectra and the continuum. For the CO models, however, the optical brightness with the full spectra is significantly higher than that with the continuum alone.
This implies that
the effect of emission lines is more important for SN Ic progenitors than SN
Ib progenitors. The degree of this brightening by emission lines varies
non-linearly with the wind density and the effective temperature, as discussed
in Section \ref{subsec:mdotvinfeffect}.
Note also that the optical magnitudes ($M_{F450W}$, $M_{F555W}$, $M_{F606W}$ and $M_{F814W}$) of the CO models are similar to or somewhat fainter than those of the HE models, although the bolometric luminosities of most CO models are systematically higher than those of the HE models (Tables \ref{tab:input} and \ref{tab:output} and the right panel of Figure \ref{fig:LM}). This qualitatively confirms the conclusion of \citet{Yoon2012}, who argued that SN Ic progenitors would be fainter than SN Ib progenitors. However, \citet{Yoon2012} adopted the black body approximation with the hydrostatic surface temperature (i.e., the BB-core case) and thus significantly underestimated the optical brightness of SN Ic progenitors.
In Figure \ref{fig:windeffect}, we find that our fiducial models are
systematically fainter than Galactic WR stars. This is mainly because our
models have a lower mass and bolometric luminosity than the Galactic WR stars,
on average. However, the lowest mass HE models
(HE2.91 and HE2.97) have optical magnitudes comparable to WR stars of $M \simeq 20
~M_\odot$ (i.e., Smith $v$ filter magnitude $M_{v} \simeq -5.2$), despite their
low bolometric luminosities (see Figure \ref{fig:LM}). This is because these
models have a large radius ($R_\star\simeq 20 - 25~R_\odot$) and a low surface
temperature ($T_\star \simeq 17000 - 19000~\mathrm{K}$), compared to those of
Galactic WR stars. These models are also the brightest in the optical among our
models, implying that relatively low-mass helium star progenitors ($M \lesssim 3.0~
M_\odot$) would be relatively easy to detect in the visual wavelength range,
compared to more massive SN Ib/Ic progenitors \citep{Yoon2012, Kim2015, Folatelli2016, Eldridge2016, Yoon2017b}.
\subsection{Effects of the mass-loss rate and the wind terminal velocity} \label{subsec:mdotvinfeffect}
As discussed above in Section \ref{subsec:fiducial_models}, the effects of
winds on the optical brightness can be significant. However, the mass-loss rate
of SN Ib/Ic progenitors at the pre-SN stage is uncertain and might be different
from that of the observed WR stars, which are mostly at the core helium burning stage \citep[cf.][]{Yoon2017b}.
We present the result of a parameter study with various mass-loss rates ($\dot{M} = 0.1~\dot{M}_\mathrm{fid} - 10~\dot{M}_\mathrm{fid}$) and wind terminal velocities ($v_\infty = 0.1~v_{\infty,\mathrm{fid}} - 3~v_{\infty,\mathrm{fid}}$) in Figure \ref{fig:mdot}. Increasing the mass-loss rate or decreasing the wind terminal velocity implies a denser wind, which leads to a lower effective temperature and a higher brightness in the optical. As seen in the figure, this effect is more prominent in the CO models (i.e., the models with $\log T_\star~\mathrm{[K]} > 4.8$ in the figure) than the HE models ($\log T_\star~\mathrm{[K]} < 4.8$ in the figure). For example, an increase in $\dot{M}$ by a factor of 100 can lead to a 4 -- 6 mag difference in $M_{F555W}$ for CO models. This dependency of the optical brightness on the mass-loss rate allows us to find a constraint on the mass-loss rate of some SN Ib/Ic progenitors as discussed below (Section \ref{sec:implication}). Normalized optical spectra of HE5.05 and CO5.18 with various mass-loss rates are presented in Appendix \ref{subsec:Appendix3}. The figures show that optical properties and emission lines of CO5.18 are more sensitively dependent on the adopted mass-loss rate than the corresponding HE5.05 case. This is because the CO5.18 progenitor model is much more compact than HE5.05, making the effects of winds on the spectra more important.
\begin{figure*}[htp]
\includegraphics[width=0.8\textwidth]{fig05_mdot_vinf.pdf}
\centering
\caption{$F555W$ ($HST$/WFPC2) filter magnitudes of SN Ib/Ic progenitors with various mass-loss rates (left) and wind terminal velocities (right). The upright and inverted stars denote the HE and CO progenitor models from \citetalias{Yoon2017b}, respectively. The CO progenitor models from \citetalias{Yoon2019} are presented with filled circles. Smith $v$ filter ($\lambda_\mathrm{peak}=5160\text{\AA}$) magnitudes of Galactic WR stars are plotted together for comparison \citep{Sander2019,Hamann2019}. Each color of the markers gives the adopted mass-loss rate and the wind terminal velocity as indicated by the labels.
\label{fig:mdot}}
\end{figure*}
In Figure \ref{fig:color}, we present the colors of the models for various wind mass-loss rates as functions of the hydrostatic surface temperature and the effective temperature. The color would be a monotonic function of the effective temperature if the stars emitted black body radiation, as shown in the lower right panel of the figure. However, the continuum can be greatly affected by free-free emission as discussed above. The difference between the middle and right panels in the figure shows the effect of free-free emission. The continuum flux colors of the CO models ($\log T_\star~\mathrm{[K]} > 4.8$ in the figure) are redder than those of the HE models ($\log T_\star~\mathrm{[K]} < 4.8$ in the figure) for the fiducial mass-loss rate, despite their higher $T_\mathrm{eff}$. For a given $T_\mathrm{eff}$, the continuum color is redder for a higher mass-loss rate due to the higher wind density and stronger free-free emission. The same result can also be found in \citet{Kim2015}.
\begin{figure*}[htp]
\includegraphics[width=0.95\textwidth]{fig06_color_effect.pdf}
\centering
\caption{Temperature-color relations for the SN Ib/Ic progenitor models from \citetalias{Yoon2017b} (see Table \ref{tab:input}, \ref{tab:output} and \ref{tab:Appendix1}). The left panels show the $M_{F555W}-M_{F814W}$ ($HST$/WFPC2) colors calculated with full SEDs. The middle and right panels show the same colors calculated only with the continuum and black body fluxes at the photosphere, respectively. The circle and star markers denote SN Ib and Ic progenitor models, respectively. The color gives the adopted mass-loss rate as indicated by the labels. $\dot{M}_\mathrm{fid}$ means the fiducial mass-loss rate used in this paper. \label{fig:color}}
\end{figure*}
Moreover, emission lines from the wind matter can also play an important role in determining the optical brightness in a specific filter. The difference between the left and middle panels of Figure \ref{fig:color} shows the effect of emission lines. In many of our models with the fiducial mass-loss rate, stronger emission lines are found in the spectral range of the $F555W$ filter than in the $F814W$ filter range (see Figure \ref{fig:normalizedSED} and the figures in the Appendix), which generally makes the full-SED color of the fiducial models bluer than the continuum color. The CO5.50, CO6.17 and CO7.50 models with a very high mass-loss rate (i.e., 5 and 10 times the fiducial value), however, have a redder color with the full spectrum compared to the continuum color. This is because of strong \ion{C}{2}/\ion{C}{3} emission lines in the $F814W$ filter wavelength range in these models.
In short, the optical magnitude variation of SN Ib/Ic progenitors according to the wind density can be up to $\sim6$ mag for our considered parameter space (see Figure~\ref{fig:mdot}). The corresponding color change according to the effective temperature and the wind density is highly non-monotonic and the progenitor color in terms of $M_{F555W}-M_{F814W}$ can be significantly redder (up to $\sim0.7$ mag) or bluer (up to $\sim0.2$ mag) depending on the progenitor structure and mass-loss rate, compared to the corresponding prediction under the black body approximation (see Figure \ref{fig:color}). This result indicates that prediction of the optical brightness and color of SN Ib/Ic progenitors using the black body approximation can lead to a significant error, and the effects of winds need to be properly considered to infer/constrain the progenitor properties from observations.
\subsection{Comparison with the Analytic Prediction} \label{subsec:theorycomparison}
\citet{deLoore1982} present a simple analytic prediction
for the photospheric radius and the effective temperature of a WR star having an optically thick wind.
For example, assuming a constant opacity $\kappa$, $\beta = 2$ and $v(r = R_\star) = 0$ in the standard $\beta$-law wind velocity profile,
the photospheric radius can be given as follows \citep{Langer1989b}:
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:Reff_beta2}
\begin{split}
&R_\mathrm{phot}=R_\star+\frac{3\kappa|\dot{M}|}{8\pi v_\infty}~.
\end{split}
\end{equation}
\end{fleqn}
For $\beta=1$ as assumed in this study, we get
\begin{fleqn}[\parindent]
\begin{equation}\label{eq:Reff_beta1}
\begin{split}
&R_\mathrm{phot}={R_\star}\left[1-\mathrm{exp}\left(-\frac{8\pi R_\star v_\infty}{3\kappa |\dot{M}|}\right)\right]^{-1}~.
\end{split}
\end{equation}
\end{fleqn}
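This expression can be obtained by placing the photosphere at the wind optical depth $\tau=2/3$ for a constant opacity $\kappa$, with $\rho = |\dot{M}|/(4\pi r^2 v)$ and $v(r)=v_\infty(1-R_\star/r)$ (a sketch assuming $v_0=0$):
\begin{fleqn}[\parindent]
\begin{equation*}
\begin{split}
\tau(R_\mathrm{phot})&=\int_{R_\mathrm{phot}}^{\infty}\frac{\kappa|\dot{M}|}{4\pi r^2 v_\infty(1-R_\star/r)}\,dr \\
&=-\frac{\kappa|\dot{M}|}{4\pi v_\infty R_\star}\,\mathrm{ln}\left(1-\frac{R_\star}{R_\mathrm{phot}}\right)=\frac{2}{3}~,
\end{split}
\end{equation*}
\end{fleqn}
and solving for $R_\mathrm{phot}$ recovers the expression for $\beta=1$; the $\beta=2$ case follows analogously with $v(r)=v_\infty(1-R_\star/r)^2$.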
With this $R_\mathrm{phot}$,
we can compute $T_\mathrm{eff}$ from the Stefan-Boltzmann
law (i.e., $T_\mathrm{eff}=T_\star(R_\star/R_\mathrm{phot})^\mathrm{1/2}$).
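The analytic estimates above can be evaluated numerically as in the following sketch; the adopted opacity, mass-loss rate, terminal velocity, radius, and temperature are purely illustrative, not the parameters of our models:

```python
import math

# Illustrative evaluation of the analytic photosphere model (Eqs. 5-6) and of
# T_eff from the Stefan-Boltzmann relation. All input values are hypothetical.
M_sun_per_yr = 1.989e33 / 3.156e7   # [g/s]
R_sun = 6.957e10                    # [cm]

kappa = 0.2                   # constant opacity [cm^2/g]
mdot = 1.0e-5 * M_sun_per_yr  # mass-loss rate [g/s]
v_inf = 1.0e8                 # terminal velocity [cm/s] (1000 km/s)
R_star = 2.0 * R_sun          # hydrostatic surface radius [cm]
T_star = 1.0e5                # hydrostatic surface temperature [K]

# beta = 2 (Eq. 5): photosphere lifted by a constant offset
R_phot_b2 = R_star + 3 * kappa * mdot / (8 * math.pi * v_inf)

# beta = 1 (Eq. 6, with the negative exponent)
x = 8 * math.pi * R_star * v_inf / (3 * kappa * mdot)
R_phot_b1 = R_star / (1 - math.exp(-x))

# Effective temperature at the lifted photosphere for a fixed luminosity
T_eff_b1 = T_star * math.sqrt(R_star / R_phot_b1)
print(R_phot_b1 / R_star, T_eff_b1)
```

With these illustrative values, the photosphere is lifted to $\sim1.7\,R_\star$ and the effective temperature is reduced accordingly.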
In Figure \ref{fig:theorycomparison}, we compare these analytically obtained
$R_\mathrm{phot}$ and $T_\mathrm{eff}$ for $\beta=1$ with the CMFGEN results.
We find that the two agree relatively well, within 20 per cent, implying that the analytic prescription for $R_\mathrm{phot}$ and $T_\mathrm{eff}$ is useful for a rough estimate, particularly for SN Ib progenitors. The caveat is that any prediction of
optical magnitudes and color from this $T_\mathrm{eff}$ under the black body
assumption could result in a significant error, as discussed above in
Sections \ref{subsec:fiducial_models} and \ref{subsec:mdotvinfeffect}.
\begin{figure*}[htp]
\includegraphics[width=\textwidth]{fig07_simple_analytic_model.pdf}
\centering
\caption{Comparison of photospheric parameters of the SN Ib/Ic progenitor models calculated by the simple analytic model and the non-LTE CMFGEN code. The green triangle ($\beta=1$) and red circle ($\beta=2$) give the photospheric radius and the effective temperature calculated by the simple analytic model. The CMFGEN results are plotted with blue inverted triangles. The radius and temperature at the hydrostatic surface are denoted by black squares for comparison. \label{fig:theorycomparison}}
\end{figure*}
\section{Implications for Supernova Progenitors} \label{sec:implication}
\subsection{Comparison with the detection limits of pre-explosion images} \label{subsec:detectionlimit}
There are only a limited number of candidates for SN Ib/Ic progenitors: SN Ib progenitor candidates for iPTF13bvn \citep{Cao2013} and SN 2019yvr \citep{Kilpatrick2021}, and a SN Ic progenitor candidate for SN 2017ein \citep{Vandyk2018, Kilpatrick2018}. In cases without direct identification, we can obtain detection limits from the pre-explosion images. \citet{Eldridge2013} report the detection limits for 12 SN Ib/Ic progenitors from the observations (see also \citealt{Smartt2015} and \citealt{Vandyk2017}). We add the detection limit from the pre-explosion image of SN Ic 2020oi \citep{Gagliano2021}, which is $-4.2$ mag in the $F555W$ filter ($HST$/WFC3) and is obtained from the $3\sigma$ limit of the pre-explosion image.
In Figure \ref{fig:obs},
we present the detection limits in various filters.
We compare the absolute magnitudes of our SN Ib/Ic progenitor models
calculated with the same filter system. Note that the host galaxy extinction has not been considered for some of these events~\citep{Eldridge2013, Gagliano2021}, in which case the actual detection limits might be brighter (i.e., less constraining) than the given values. Even for the cases where the host galaxy extinction was inferred, its uncertainty can be significant, which might also affect the estimated detection limits.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.95\textwidth]{fig08_detection_limit2.pdf}
\caption{Optical magnitudes of progenitor models and the detection limits from the pre-explosion images of SNe Ib/Ic. Each panel shows the absolute magnitudes of progenitor models in different filters: $F450W$, $F555W$, $F606W$ and $F814W$ in $HST$/WFPC2, Johnson-B, and $F555W$ in $HST$/WFC3. The detection limits from the pre-explosion images of SNe Ib, Ic and Ib/Ic are plotted with dashed, dash-dotted and dotted lines, respectively \citep{Eldridge2013,Gagliano2021}. Optical magnitudes of O/B type stars with 9, 12, 15, 20, 25, 32, 40, 60, 85 and 120 $M_\odot$ are plotted together with green (ZAMS) and purple (evolved) markers for comparison \citep{Fierro2015}. Each color gives the adopted mass-loss rate as indicated by the labels.
\label{fig:obs}}
\end{figure*}
In principle, we can roughly infer the upper limit of the mass-loss rate of
these non-detected progenitors because the optical brightness becomes
systematically higher for a higher mass-loss rate, in particular for SN Ic
progenitors. We find that the observational detection limits except for
SN 2002ap are located above the CO2.16/CO3.93 model predictions (circle symbols) with
$\dot{M}_\mathrm{fid}\times10.0$ in Figure \ref{fig:obs}.
When compared to the relatively massive CO model predictions ($M>5.0~M_\odot$,
inverted star symbols), the observational detection limits except
for 3 cases (SN 2002ap, SN 2010br and SN 2020oi) are located above the model
predictions with $\dot{M}_\mathrm{fid}\times2.0$. For SN 2010br and SN 2020oi, the mass-loss rate of the progenitor would not exceed twice the fiducial mass-loss rate if their final masses were relatively high ($M\gtrsim5~M_\odot$), while the upper limit of the mass-loss rate would be higher than $\dot{M}_\mathrm{fid}\times10$ if their final masses were relatively low ($M\lesssim4~M_\odot$). The detection limit of SN 2002ap is $-4.4$ mag in the Johnson-B filter, which is fainter than the fiducial model predictions of relatively massive CO progenitors ($M\gtrsim5~M_\odot$) and brighter than the CO2.16/CO3.93 model cases. This implies
either that the SN 2002ap progenitor did not have a strong wind or that the
progenitor final mass was relatively low ($M \lesssim 4~M_\odot$). The
observationally inferred ejecta mass of SN 2002ap is about 2.5 $M_\odot$ \citep{Mazzali2007},
which is consistent with the latter interpretation.
There is another case without direct identification, the Ca-rich SN Ib 2019ehk \citep{JacobsonGalan2020}, which is not presented in Figure \ref{fig:obs} and Table \ref{tab:detection_limit}. Its absolute detection limit from the pre-explosion image is $-2.4$ mag in the $F555W$ filter ($HST$/WFPC2) under the assumed host galaxy distance of $d=16.2$ Mpc. The progenitor of SN 2019ehk is not detected although the HE model predictions ($M_{F555W}=-4$ to $-6$) are far brighter than this detection limit. This result is consistent with the scenario that Ca-rich SNe Ib have a white dwarf origin instead of massive stars~\citep[e.g.,][]{Perets2011,Kasliwal2012}.
\begin{deluxetable*}{lllll}
\tablenum{3}
\tablecaption{Upper limits of the mass-loss rate of the non-detected SN Ib/Ic progenitors in previous searches and constraints on the possible companion stellar mass\label{tab:detection_limit}}
\tablehead{
\colhead{Type} & \colhead{Name} & \colhead{$\dot{M}_\mathrm{max}$$^\mathrm{a}$} & \colhead{$M_\mathrm{comp,max}$$^\mathrm{b}$} & \colhead{$M_\mathrm{comp,max}$$^\mathrm{b}$} \\
\colhead{} & \colhead{} & \colhead{} & \colhead{(ZAMS)} & \colhead{(Evolved)}
}
\startdata
\multirow{12}{*}{}
SN Ib & SN 2001B & & & \\
& SN 2011am & & & $<85~M_\odot$ \\
& SN 2012au & & & $<85~M_\odot$ \\
\hline
SN Ib/Ic & SN 2004gt & & $<120~M_\odot$ & $<40~M_\odot$ \\
& SN 2005V & & & \\
& SN 2010br & $<\dot{M}_\mathrm{fid}\times5.0$ & $<60~M_\odot$ & $<20~M_\odot$ \\
\hline
SN Ic & SN 2000ew & & & $<60~M_\odot$ \\
& SN 2002ap & $<\dot{M}_\mathrm{fid}\times1.0$ & $<32~M_\odot$ & $<15~M_\odot$ \\
& SN 2003jg & $<\dot{M}_\mathrm{fid}\times10.0$ & $<120~M_\odot$ & $<40~M_\odot$ \\
& SN 2004gn & $<\dot{M}_\mathrm{fid}\times5.0$ & $<80~M_\odot$ & $<32~M_\odot$ \\
& SN 2007gr & & & $<40~M_\odot$ \\
& SN 2011hp & $<\dot{M}_\mathrm{fid}\times10.0$ & $<120~M_\odot$ & $<40~M_\odot$ \\
& SN 2020oi & $<\dot{M}_\mathrm{fid}\times2.0$ & $<32~M_\odot$ & $<15~M_\odot$ \\
\enddata
\tablecomments{Detection limits are from $HST$/WFPC2 images, except those of SN 2002ap and SN 2020oi, which are from a Johnson-B image and an $HST$/WFC3 image, respectively. \newline
$^\mathrm{a}$Upper limits of the mass-loss rates are obtained by comparing with HE and relatively massive ($M\gtrsim5~M_\odot$) CO model predictions. Blank if a detection limit is brighter than the progenitor model with $\dot{M}_\mathrm{fid}\times10.0$.\newline
$^\mathrm{b}$Blank if a detection limit is brighter than the ZAMS or evolved $120M_\odot$ O type star.}
\end{deluxetable*}
We obtain the upper limit of the progenitor mass-loss rate
($\dot{M}_\mathrm{max}$) by comparing various mass-loss rate models with
detection limits of pre-explosion images, and summarize the results in Table
\ref{tab:detection_limit}. These upper limits are obtained by comparing
the detection limits with HE and relatively massive ($M\gtrsim5~M_\odot$) CO model predictions.
All of our relatively low-mass CO2.16/CO3.93 models are fainter than the detection limits and cannot provide an upper limit for our considered parameter space.
This result implies that if the progenitors of observed SNe Ib/Ic had a mass-loss rate comparable
to $\dot{M}_\mathrm{fid}$, non-detection would be a natural consequence for most cases, and
that deeper observations with an optical absolute magnitude larger than about $-4
\cdots -5$ are needed to directly observe the progenitors and to make a better
constraint on their properties. The point source magnitude limit with
an optical wide band filter in $HST$/WFC3 for 1 hour observation is $\sim27.1-27.9$ mag
(at a signal-to-noise ratio of 10). This means that a progenitor with the absolute
optical magnitude of $-4$
mag would be detected only if it were in a galaxy closer than $\sim16-24$ Mpc.
The limiting distance can be decreased to $\sim10-15$ Mpc for 10 min observation.
The host galaxy extinction should be additionally considered for a more precise distance limit.
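The limiting distances quoted above follow directly from the distance modulus relation $m - M = 5\log_{10}(d/10~\mathrm{pc})$. A minimal sketch of this arithmetic (the function name is ours; extinction is neglected, as noted above):

```python
import math


def limiting_distance_mpc(m_lim, m_abs):
    """Distance (Mpc) out to which a source of absolute magnitude m_abs
    stays brighter than the apparent-magnitude limit m_lim, ignoring
    extinction: m - M = 5 log10(d / 10 pc)."""
    mu = m_lim - m_abs                  # distance modulus at the limit
    d_pc = 10.0 ** (1.0 + mu / 5.0)     # invert the distance modulus
    return d_pc / 1.0e6                 # pc -> Mpc


# HST/WFC3 wide-band point-source limits for a 1-hour exposure (S/N = 10),
# quoted in the text as ~27.1-27.9 mag, against an M = -4 progenitor:
d_shallow = limiting_distance_mpc(27.1, -4.0)   # ~16.6 Mpc
d_deep = limiting_distance_mpc(27.9, -4.0)      # ~24.0 Mpc
```

The same relation applied to shallower 10-minute limits reproduces the quoted $\sim10-15$ Mpc range.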
Many SN Ib/Ic progenitors would be produced in binary systems and the
contribution of the companion star to the optical brightness would also be
significant. We take CMFGEN model spectra of O/B type stars at zero-age main
sequence (ZAMS) and an evolved phase on the main sequence having the initial
mass range of $9-120~M_\odot$ from \citet{Fierro2015}, and present their optical
magnitudes in Figure \ref{fig:obs}. Our fiducial progenitor models have
optical magnitudes comparable to those of $15-25~M_\odot$ evolved O stars or
$25-60~M_\odot$ O stars at ZAMS. For each progenitor, we get the upper limit of
the possible companion star mass ($M_\mathrm{comp,max}$) by comparing the
detection limit with the O/B star magnitudes, ignoring the contribution from
the SN progenitor. The result is summarized in Table \ref{tab:detection_limit}.
Even for the deepest detection limit cases, we can
only exclude a companion star more massive than $15~M_\odot$.
\citet{Zapartas2017} predict the possible companions of stripped-envelope SNe
at the explosion time using the population synthesis simulation. The result of
the study shows that only $\sim$20\% of stripped-envelope SN progenitors
would have a massive companion with $>20~M_\odot$. This result also implies
that most SN Ib/Ic progenitors would not have a massive companion star that is
bright enough to exceed the previous detection limits.
\subsection{Comparison with the progenitor candidate of SN Ib iPTF13bvn} \label{subsec:iPTF13bvn}
The progenitor candidate of Type Ib SN iPTF13bvn in NGC 5806 was reported by \citet{Cao2013} and later convincingly identified as the progenitor by follow-up observations \citep{Eldridge2016, Folatelli2016}. \citet{Cao2013} measure the Milky Way and host galaxy reddening as $E(B-V)_{MW}=0.0278$ and $E(B-V)_{host}=0.0437$, respectively, from the \ion{Na}{1} D lines and use the adopted distance modulus of $\mu=31.76\pm0.36~(22.5\pm2.4~$Mpc)
\citep{Tully2009}. Their photometry of the progenitor candidate gives the values of $m_{F435W}=26.50~\pm~0.15$, $m_{F555W}=26.40~\pm~0.15$ and $m_{F814W}=26.10~\pm~0.20$ mag in the $HST$ Advanced Camera for Surveys (ACS) filter system.
On the other hand, \citet{Bersten2014} suggest higher extinction values of $E(B-V)_{MW}=0.0447$ and $E(B-V)_{host}=0.17$ using the SN light curve and \ion{Na}{1} lines. Moreover, \citet{Eldridge2015} give brighter photometry values of $m_{F435W}=25.80\pm0.12$, $m_{F555W}=25.80\pm0.11$ and $m_{F814W}=25.88\pm0.24$ mag for the same progenitor candidate. In this study, we present the results of every combination of different extinctions and photometry values.
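As an illustration of how the inferred absolute magnitude depends on the adopted extinction, one can convert the \citet{Cao2013} $F555W$ photometry under both reddening assumptions. This is only a rough sketch of ours: we treat $F555W$ as $V$-like and adopt a standard $R_V=3.1$, whereas the published values may use band-dependent extinction corrections:

```python
def absolute_mag(m_app, mu, ebv, r_v=3.1):
    """Extinction-corrected absolute magnitude, M = m - mu - R_V * E(B-V).
    Treats the filter as V-like (an approximation for F555W)."""
    return m_app - mu - r_v * ebv


mu = 31.76  # adopted distance modulus of NGC 5806 (22.5 Mpc)

# Cao et al. F555W photometry with the low (Cao) and high (Bersten)
# total reddening, E(B-V)_MW + E(B-V)_host:
M_low = absolute_mag(26.40, mu, 0.0278 + 0.0437)   # ~ -5.58 mag
M_high = absolute_mag(26.40, mu, 0.0447 + 0.17)    # ~ -6.03 mag
```

The $\sim0.4$ mag spread between the two assumptions illustrates why both are plotted (in dark and light colors) in Figure \ref{fig:Ib_CMD}.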
In Figure \ref{fig:Ib_CMD}, we show our HE progenitor models and the iPTF13bvn
progenitor candidate in the color-magnitude diagram and the color-color
diagram. We present four observational results together. For the photometry
result of \citet{Cao2013}, the magnitude and the color of iPTF13bvn progenitor can be
explained by the relatively low-mass fiducial models HE2.91 and HE2.97
or by most HE models with $\dot{M}_\mathrm{fid}\times5\cdots10$.
For the photometry result of \citet{Eldridge2015}, however, the progenitor candidate of the
iPTF13bvn is brighter than most of our considered HE models, being about 1 mag
brighter than the fiducial models in the optical. Only the
HE4.09 and HE5.05 models with $\dot{M}_\mathrm{fid}\times10.0$ can explain the optical brightness.
The inferred ejecta mass ($\sim2.0-2.3~M_\odot$) and helium core mass ($\sim3.4-3.5~M_\odot$)
of iPTF13bvn \citep{Bersten2014, Fremling2014} are in range of our HE models.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig09_Ib_CMD_CCD.pdf}
\caption{Color-magnitude diagram (left) and color-color diagram (right) of the HE models. The absolute optical magnitudes and colors of the iPTF13bvn progenitor candidate for two different extinction and photometry values are presented together. Magenta and green markers denote the values calculated with the photometry results of \citet{Cao2013} and \citet{Eldridge2015}, respectively. The values with high and low extinction assumptions are plotted in dark and light colors, respectively.
\label{fig:Ib_CMD}}
\end{figure*}
To consider the possible contribution from a companion star of the progenitor,
we also calculate the composite magnitudes from the spectral models of both the
HE progenitors and the evolved main sequence stars of $M_\mathrm{init}=9$ and
$25~M_\odot$ from \citet{Fierro2015}. As seen in Figure \ref{fig:Ib_companion},
the optical properties of the iPTF13bvn progenitor candidate are consistent
with the models with a $9~M_\odot$ or a $25~M_\odot$ companion star for the photometry
results of \citet{Cao2013} and \citet{Eldridge2015}, respectively. A $9~M_\odot$
companion would be a reasonable solution within the standard binary scenario of
SN Ib progenitors \citep[e.g.,][]{Bersten2014, Yoon2017b} and agrees well with
other previous studies on iPTF13bvn \citep[e.g.,][]{Fremling2014,
Srivastav2014, Kuncarayakti2015, Fremling2016, Folatelli2016, Eldridge2016}. A
companion star as massive as $25~M_\odot$ is somewhat hard to explain. Given
that the inferred ejecta mass of iPTF13bvn is only about $2.0~M_\odot$,
such a massive companion in a binary system
could be produced only with conservative mass transfer and an unusually high
mass-loss rate from the naked primary star during the post-mass transfer phase
\citep{Wellstein1999}. On the other hand, \citet{Groh2013a} suggest a single
WR star progenitor (initial mass $M_\mathrm{init}=31-35~M_\odot$) by comparing
the Geneva single star evolution model with the observations, but the
predicted pre-explosion mass of $M\simeq10.9~M_\odot$ is incompatible with the
inferred ejecta mass of iPTF13bvn.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig10_Ib_companion.pdf}
\caption{Color-magnitude diagrams of the HE progenitor models. The left and right panel give the composite results with a 9$M_\odot$ and a 25$M_\odot$ companion star, respectively. The diamond markers without a black box denote the progenitor models without a companion star, and the filled diamonds with black border denote the composite models with a companion star. Each color of the markers gives the adopted mass-loss rate as indicated by the labels. The values of the iPTF13bvn progenitor candidate are plotted together as in Figure \ref{fig:Ib_CMD}.
\label{fig:Ib_companion}}
\end{figure*}
\subsection{Comparison with the progenitor candidate of SN Ib 2019yvr} \label{subsec:SN2019yvr}
\citet{Kilpatrick2021} report the SN Ib progenitor candidate of SN 2019yvr,
which was found $\sim980-870$ days ($\sim 2.6$ years) before the explosion in an $HST$/WFC3 observation.
The average photometry across all epochs in four bands gives the results of
$m_{F438W}=26.138~\pm~0.162$, $m_{F555W}=25.351~\pm~0.032$,
$m_{F625W}=24.897~\pm~0.022$ and $m_{F814W}=24.253~\pm~0.032$ mag. The color
corrected only with the Milky Way extinction $E(B-V)_{MW}=0.02$ is
$m_{F555W}-m_{F814W}=1.065~\pm~0.045$ mag, which corresponds to the
temperature of $T_\mathrm{eff}=3340$ K. Without a fairly high host galaxy
extinction, the color is too red to be explained by our HE progenitor models (see
the values in Figure \ref{fig:Ib_CMD}).
\citet{Kilpatrick2021} obtain the high host galaxy extinction values of
$A_V=2.4^{+0.7}_{-1.1}$ and $R_V=4.7^{+1.3}_{-3.0}$ from \ion{Na}{1} D spectra
and color curves of SN 2019yvr. They infer the physical parameters of the
progenitor candidate by fitting with the single star SEDs from
\citet{Pickles2010}. The best-fitting values are $T_\mathrm{eff}=6800$ K,
log($L/L_\odot$)$=5.3~\pm~0.2$ and $R_\mathrm{phot}=320~R_\odot$ from F2I star
model, and they argue that this large radius of the photosphere might be
explained by a very high mass-loss rate of $\sim10^{-4}~M_\odot~\mathrm{yr^{-1}}$. To
check this possibility, we construct spectra of our most massive HE model
(HE5.05) with mass-loss rates up to $100 \times \dot{M}_\mathrm{fid}$ and
present the results in Table \ref{tab:2019yvr_tab} and
Figure~\ref{fig:2019yvr}. We find that the models with very high mass-loss
rates (i.e., $\dot{M} = 50\cdots 100\times \dot{M}_\mathrm{fid}$) can explain
the optical color and magnitudes of the progenitor candidate.
The SN 2019yvr progenitor
candidate was observed between 980 and 870 days before the explosion.
Although stellar evolution models predict that the surface properties of SN Ib progenitors
would not undergo a rapid change during the pre-explosion phase
\citep{Yoon2017a},
the inferred high mass-loss rate implies that the progenitor underwent
a mass eruption shortly before the explosion, for which energy injection from the inner convective layers via wave heating
during core neon/oxygen burning might be a possible explanation
\citep{Fuller2018, Leung2021}. Note also
that evidence for pre-SN mass eruption is found in the early-time light curves
of some stripped-envelope supernovae~\citep[e.g.,][]{Jin2021}.
However, the observational signature of late time interaction of SN 2019yvr~\citep{Kilpatrick2021}
is not likely to be related to this mass eruption because the matter emitted during the last several years
would be confined to a small radius ($\lesssim 10^{15}$ cm) from the progenitor at the time of explosion and
would affect only the early-time SN light curve~\citep[see][]{Jin2021}.
\begin{figure*}[ht]
\includegraphics[width=0.8\textwidth]{fig11_2019yvr_CMD.pdf}
\centering
\caption{Color-magnitude diagrams of SN 2019yvr progenitor candidate comparing with SN Ib progenitor models. SN 2019yvr progenitor candidate and HE5.05 models are plotted in diamond and circles, respectively. The color denotes the adopted mass-loss rate of HE5.05 models as indicated by the labels. Color and absolute magnitude of SN 2019yvr progenitor are corrected under the assumption of $A_\mathrm{v}=2.4$ and $R_\mathrm{v}=4.7$. \label{fig:2019yvr}}
\end{figure*}
\begin{deluxetable*}{cccccccc}[ht!]
\tablenum{4}
\tablecaption{Physical parameters of the progenitor candidate of SN 2019yvr and SN Ib progenitor models \label{tab:2019yvr_tab}}
\tablehead{
\colhead{} & \colhead{} & \colhead{SN 2019yvr} & \colhead{HE5.05} & \colhead{HE5.05} & \colhead{HE5.05} & \colhead{HE5.05} & \colhead{HE5.05}\\
\colhead{} & \colhead{} & \colhead{\citep{Kilpatrick2021}} & \colhead{($\dot{M}_{\mathrm{fid}}$)} & \colhead{($\dot{M}_{\mathrm{fid}}\times10$)}
& \colhead{($\dot{M}_{\mathrm{fid}}\times20$)} & \colhead{($\dot{M}_{\mathrm{fid}}\times50$)} & \colhead{($\dot{M}_{\mathrm{fid}}\times100$)}
}
\startdata
\multirow{4}{*}{}
log $L/L_\odot$ & & 5.3 & 5.1 & 5.1 & 5.1 & 5.1 & 5.1 \\
$\dot{M}$ & [$M_\odot~\mathrm{yr^{-1}}$] & $>1.4\times10^{-4}$ & $8.13\times10^{-6}$ & $8.13\times10^{-5}$ & $1.63\times10^{-4}$ & $4.06\times10^{-4}$ & $8.13\times10^{-4}$\\
$T_{\mathrm{eff}}$ & [K] & 6800 & 41910 & 18660 & 14260 & 10230 & 7852\\
$R_{\mathrm{phot}}$ & [$R_{\odot}$] & 320 & 6.834 & 34.49 & 59.03 & 114.7 & 194.7 \\
\enddata
\tablecomments{First column shows the physical parameters of the best-fit single star progenitor model (F2I model from \citet{Pickles2010}) for SN 2019yvr from \citet{Kilpatrick2021}. The other columns show the physical parameters of the HE5.05 model with various mass-loss rates. The HE5.05 model is evolved from an 8~$M_\odot$ helium star, which corresponds to a $\sim25~M_\odot$ star at ZAMS.
}
\end{deluxetable*}
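The $T_\mathrm{eff}$ and $R_\mathrm{phot}$ entries of Table \ref{tab:2019yvr_tab} can be cross-checked against the quoted model luminosity through the Stefan--Boltzmann law, $L \propto R^2 T^4$. A short consistency check of ours (using the IAU nominal solar effective temperature) recovers $\log L/L_\odot \simeq 5.1$ for every HE5.05 row:

```python
import math

T_SUN = 5772.0  # IAU nominal solar effective temperature [K]


def log_luminosity(t_eff, r_phot):
    """log10(L/Lsun) from the Stefan-Boltzmann law, L = 4 pi R^2 sigma T^4,
    written in solar units: L/Lsun = (R/Rsun)^2 (T/Tsun)^4."""
    return math.log10(r_phot ** 2 * (t_eff / T_SUN) ** 4)


# (T_eff [K], R_phot [Rsun]) pairs for the HE5.05 columns of Table 4:
rows = [(41910, 6.834), (18660, 34.49), (14260, 59.03),
        (10230, 114.7), (7852, 194.7)]
logL = [log_luminosity(t, r) for t, r in rows]   # each ~5.1
```

The nearly constant luminosity across mass-loss rates reflects that the wind lifts the photosphere (larger $R$, lower $T_\mathrm{eff}$) without changing the underlying stellar luminosity.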
Here we obtain the extinction corrected absolute magnitude of the progenitor
candidate with the host galaxy extinction ($A_V=2.4$ and $R_V=4.7$) and distance
modulus ($m-M=30.8~\pm~0.2$ mag) adopted from \citet{Kilpatrick2021}. The
corrected values are $M_{F438W}=-7.7~\pm~0.3$, $M_{F555W}=-7.8~\pm~0.2$,
$M_{F625W}=-8.0~\pm~0.2$ and $M_{F814W}=-8.0~\pm~0.2$ mag for each \textit{HST}/WFC3
filter. The adopted host galaxy extinction values, however, are obtained from
the observation after the SN explosion, so we have to consider the possibility
of additional extinction sources (e.g. dusty circumstellar matter) before the
explosion. If there were additional extinction sources, the corrected
magnitudes and the color would move to the brighter and bluer direction away
from the model predictions.
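The quoted $F555W$ correction can be reproduced to within the stated uncertainty by a rough check of ours (treating $F555W$ as $V$-like so that $A_{F555W}\approx A_V$; the small Milky Way contribution and a band-dependent extinction law are neglected here):

```python
mu_19yvr = 30.8   # distance modulus adopted from Kilpatrick et al. (2021)
a_v = 2.4         # host-galaxy extinction A_V

# Approximate absolute magnitude: M = m - mu - A_band, with A_F555W ~ A_V.
# m_F555W = 25.351 is the average pre-explosion photometry quoted above.
M_F555W = 25.351 - mu_19yvr - a_v   # ~ -7.85, matching the quoted -7.8 +/- 0.2
```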
\subsection{Comparison with the progenitor candidate of SN Ic 2017ein} \label{subsec:SN2017ein}
There is a progenitor candidate for SN Ic 2017ein. \citet{Kilpatrick2018} suggest the Milky Way extinction of $A_V=0.058$ $(R_V=3.1)$ and the host galaxy extinction of $A_V=1.2$ $(R_V=2.6)$. With the distance modulus $\mu=31.17~\pm~0.10$ for NGC 3938 \citep{Tully2009}, the extinction corrected magnitudes are obtained as $M_{F555W}=-7.5~\pm~0.2$ mag and $M_{F814W}=-6.7~\pm~0.2$ mag for each $HST$/WFPC2 filter.
\citet{Vandyk2018} independently analyze the $HST$ photometric data before the explosion, and obtain a similar result with various extinction assumptions,
as shown in Figure \ref{fig:CMD_Ic}. In the figure, this candidate is compared with our CO progenitor models: the candidate is much more luminous than our fiducial models in the optical (by about 2 mag), and only a very high mass-loss rate of $\dot{M}_\mathrm{fid}\times10.0$ results in a barely comparable magnitude in the $F814W$ filter. The observed color ($M_{F555W}-M_{F814W}$) is very blue compared to our progenitor models. No model within our parameter space satisfies both the observed magnitude and color of this candidate.
\begin{figure*}[thb!]
\includegraphics[width=\textwidth]{fig12_Ic_CMD2.pdf}
\caption{Color-magnitude diagrams of CO progenitor models. The upper left panel gives the results without considering a companion star. The upper right, lower left, and lower right panels present the composite results with a 120, 25, and 9~$M_\odot$ companion star, respectively. The diamond and circle marker with/without a black border denotes the progenitor models with/without a companion star. The reported magnitude and color of SN 2017ein progenitor candidate are plotted together for comparison. Each color gives the adopted mass-loss rate as indicated by the labels.\label{fig:CMD_Ic}}
\end{figure*}
We also consider O/B type evolved main-sequence companion stars of 9, 25, and
120~$M_\odot$, whose spectra are taken from \citet{Fierro2015}, and the composite magnitudes and colors with the CO progenitor models are plotted in Figure \ref{fig:CMD_Ic}. Since our most massive CO progenitor
model (CO9.09) is evolved from a 25~$M_\odot$ He star, which corresponds to
$M_\mathrm{init}\approx65~M_\odot$, a companion star with 120~$M_\odot$ is not realistic.
The upper right panel of Figure \ref{fig:CMD_Ic} shows, however, that even
this unrealistically massive companion cannot explain the blue color of the
progenitor candidate.
On the other hand, all of the composite models with 9 and 25~$M_\odot$
companion stars are fainter and redder than the candidate (the lower panels of
Figure \ref{fig:CMD_Ic}). Under the assumption that the values inferred from
the observation are reliable, we conclude that the reported candidate of SN
2017ein is too blue and too bright in the optical to be explained by a typical
SN Ic progenitor.
\citet{Kilpatrick2018} suggest a 55~$M_\odot$ single star or an $80+48~M_\odot$ binary at ZAMS
as the progenitor. \citet{Vandyk2018} suggest a $\sim47-48~M_\odot$ single star or a $\sim60-80~M_\odot$ star in a binary at ZAMS.
Our SN Ic progenitor models cover this initial
mass range since our most massive model (CO9.09) corresponds to a $60-70~M_\odot$ star at ZAMS.
\citet{Kilpatrick2018} obtain a host galaxy extinction value of
$A_\mathrm{V}=1.2-1.9$ using \ion{Na}{1} lines. The extinction corrected color
and magnitude of the SN 2017ein progenitor candidate are calculated using the lowest value of
$A_\mathrm{V}=1.2$, but the candidate is already too bright and blue.
Moreover, there could be additional extinction sources before the explosion,
which would move it to an even brighter and bluer region.
\citet{Vandyk2018} present a similar result in spite of the various
extinction assumptions.
To confirm whether the observed candidate was the real progenitor,
additional observations are needed to check the disappearance of the
candidate.
\section{Conclusions} \label{sec:conclusion}
We present stellar atmosphere models of SN Ib/Ic progenitors having different
chemical compositions for the pre-SN mass range of $2.16 - 9.09~M_\odot$, which are
calculated by the non-LTE radiative transfer code CMFGEN. We investigate the
effects of various wind parameters on the resulting spectra and optical
properties and discuss implications of our results for direct identification of
SN Ib/Ic progenitors.
Our results indicate that optical properties of SN Ib/Ic progenitors can be
greatly affected by the presence of optically thick wind, and inferring SN
Ib/Ic progenitor masses and bolometric luminosities from optical brightness and
colors observed in pre-SN images could be misleading. More specifically, we
draw the following conclusions from our discussions.
\begin{enumerate}
\item
The presence of sufficiently dense wind material around a SN Ib/Ic
progenitor can lift the photosphere significantly above the
hydrostatic stellar surface. Strong emission lines and free-free emission
from the optically thick wind can also critically affect optical magnitudes.
These effects would generally make SN Ib/Ic
progenitors brighter in the optical compared to the corresponding case
without winds, in particular for helium-deficient compact SN Ic progenitors.
Our models predict that SN Ic progenitors would be brighter by more than 3 mag
in the optical than the case without the wind effects for the fiducial WR
mass-loss rate. On the other hand, optical
properties of SN Ib progenitors having an extended helium-rich envelope are not
greatly affected by the effects of winds for the fiducial WR mass-loss rate.
The optical brightness becomes higher for a higher mass-loss rate (or a lower
wind terminal velocity), and this tendency is found more clearly for a more
compact progenitor (Sections \ref{subsec:fiducial_models} and
\ref{subsec:mdotvinfeffect}).
\item
The wind effects also greatly affect the color of SN Ib/Ic progenitors. In
general, the color becomes redder for a higher mass-loss rate (or a lower wind
terminal velocity) because of the lifting-up of the photosphere and the
continuum excess due to free-free emission in a relatively long wavelength
range. However, the optical colors are not well correlated with the effective
temperature, nor with the hydrostatic surface temperature. This is because strong
emission lines found in specific filters and the free-free emission make the
color-temperature relations highly non-monotonic (Sections
\ref{subsec:fiducial_models} and \ref{subsec:mdotvinfeffect}).
\item For a typical mass-loss rate of WR stars, the optical brightness of most
SN Ib/Ic progenitors would not be high enough to be detected within the
detection limits of previous searches in pre-SN images. Also, these detection
limits were not deep enough to observe companion stars of
$M_\mathrm{comp}<15~M_{\odot}$, which would be the typical case for SN Ib/Ic
progenitors. A deep search with an optical absolute magnitude larger than $\sim -4$
would be needed for the identification of most ordinary SN Ib/Ic progenitors.
The distance limit for a direct progenitor detection is $\sim10-15$ Mpc
and $\sim16-24$ Mpc for 10 min and 1 hour observation, respectively,
for an optical wide band filter of $HST$/WFC3.
(Section \ref{subsec:detectionlimit}).
\item The optical brightness and color of the observed SN Ib iPTF13bvn
progenitor can be explained by our models within the considered parameter
space. The photometry result of \citet{Cao2013} is consistent with the
relatively low-mass fiducial HE models (HE2.91 and HE2.97), HE4.09/HE5.05
models with a relatively high mass-loss rate
($\dot{M}_\mathrm{fid}\times5\cdots10$) or existence of a $\sim 9~M_\odot$
companion star. For the photometry result of \citet{Eldridge2015}, the optical
brightness and color of the progenitor can be explained by HE4.09/HE5.05 models
with a very high mass-loss rate ($\dot{M}_\mathrm{fid}\times10$) or with the
existence of a $\sim 25~M_\odot$ companion star (Section \ref{subsec:iPTF13bvn}).
\item The SN Ib 2019yvr progenitor candidate is very bright and red in the optical
compared to our fiducial HE models. Its optical brightness and color can be
explained by the HE5.05 model with an unusually high mass-loss rate of
$\dot{M}_\mathrm{fid}\times50\cdots100$. This implies that the SN 2019yvr progenitor
experienced mass-loss enhancement shortly before the SN explosion (Section
\ref{subsec:SN2019yvr}).
\item The SN Ic 2017ein progenitor candidate is too blue and bright to be
explained by any of our models, whether we consider a companion star or not.
No known type of massive star can explain both the optical brightness and the
blue color of this progenitor candidate (Section \ref{subsec:SN2017ein}).
\end{enumerate}
\begin{acknowledgements}
This work has been supported by the National Research Foundation of Korea (NRF)
grant (NRF-2019R1A2C2010885). We are grateful to John Hillier for making the
CMFGEN code publicly available and for the helps he gave us at the beginning of
this project.
\end{acknowledgements}
\section{Introduction}
This paper studies the Gauss hypergeometric differential equation,
\begin{eqnarray} x(1-x) \ \frac{d^{2}y}{dx^{2}} + (\gamma - (\alpha+\beta+1)x) \ \frac{dy}{dx} - \alpha \beta \ y = 0, \label{eq:gauss} \end{eqnarray}
where $x \in \mathbb{C}$, and the Kummer confluent hypergeometric differential equation,
\begin{eqnarray} z \ \frac{d^{2}\tilde{y}}{dz^{2}} + (\gamma - z ) \ \frac{d\tilde{y}}{dz} - \beta \ \tilde{y} = 0, \label{eq:kummer} \end{eqnarray}
where $z \in \mathbb{C}$.
For brevity, in this paper, these equations are simply called the Gauss equation and the Kummer equation, respectively.
The aim of the paper is to give rigour to the confluence of two regular singularities of the Gauss equation to produce
the Kummer equation with an irregular singularity at infinity. In particular, the monodromy data of the confluent equation (Kummer), including Stokes data, are produced as limits of the monodromy data of the original equation (Gauss) using explicit formulae.
One of the main difficulties addressed in this paper is how to make sense of the confluence limits by understanding how to pass from the solutions of the original system to the solutions of the confluent system. This is a non-trivial question because it involves passing from a solution with power-like behaviour which converges in a disk to solutions with exponential behaviour which are analytic in a sector and asymptotic to a divergent series.
The procedure of this paper is based on an existence theorem by Glutsyuk \cite{glutsyuk}. Essentially, this states that there exist certain diagonal matrices $K_{\varepsilon}$ and $K_{-\varepsilon}$ such that the limit,
\[\lim_{\varepsilon \rightarrow 0} K_{-\varepsilon}^{-1} \ C \ K_{\varepsilon},\]
where $C$ is the connection matrix between the merging simple poles of the original system, exists. Moreover, this limit gives one of the Stokes matrices if $\varepsilon\to 0$ is taken along a certain ray. However, this existence theorem does not prescribe how to calculate the diagonal matrices $K_{\varepsilon}$ and $K_{-\varepsilon}$. The main result of this paper is to calculate such diagonal matrices explicitly and thus produce both Stokes matrices in terms of limits of the connection matrix of the original equation. In particular, we calculate how one Stokes matrix is produced as the limit along a certain ray and the other one as the limit along the opposite ray.
Despite the fact that the analytic theory of the Gauss and Kummer equations was developed more than a hundred years ago, the question of producing the Stokes data of the Kummer equation in terms of limits of the monodromy data of the Gauss one has only been approached rather recently \cite{lambert,watanabe}. In particular, in \cite{watanabe}, the Mellin-Barnes integral representations of the solutions of the Kummer equation are produced as limits of the ones for the Gauss equation, and then the Stokes data are deduced from the Mellin-Barnes integral representations (this last calculation is reported here in Appendix B for completeness).
In \cite{lambert}, the confluence problem is solved by observing that one of the Fuchsian singularities remains Fuchsian under the confluence, so that the corresponding local fundamental matrix of the Gauss equation admits an analytic limit under the confluence, thus allowing one to compute explicitly the monodromy of the Kummer equation around $0$. The Stokes matrices are then determined by the fact that loops around $0$ are homotopic to loops around $\infty$ in the Riemann sphere with two punctures.
The approach of the current paper does not require closed form expressions such as Mellin-Barnes integrals. Indeed, in \cite{HM}, we use this procedure to calculate the Stokes matrices of the linear problem associated to the fifth Painlev\'e equation (and its higher order analogues) in terms of limits of the connection matrix between $1$ and $\infty$ in the linear problem associated to the sixth Painlev\'e equation (and its higher order analogues), for which closed form fundamental matrices are unknown.
Another advantage of the procedure of the current paper is that it does not rely on the existence of an additional simple pole which survives the confluence limit, and therefore it can be applied to the confluence from the Bessel differential equation to the Airy one for example, or even more ambitiously, in the confluence from the fifth to the third Painlev\'e equation - this challenging work is postponed to subsequent publications.
This paper is organised as follows:
In Sections \ref{sec:gg} and \ref{sec:kummer}, we recall some background on the Gauss and Kummer hypergeometric differential equations, respectively. In Section \ref{sec:hgconf} the confluence procedure is explained, and the main result of this paper, Theorem \ref{main:top}, is proved.
In Appendices A and B, the monodromy data for the Gauss and Kummer hypergeometric differential equations, respectively, are derived classically using Mellin-Barnes integrals.
\vskip 3mm
{\it This paper is inspired by some of the facets of Nalini's mathematical taste and style, because tackling a seemingly simple problem requires an unexpected depth that opens a Pandora's box of beautiful mathematical problems. For this reason, we wish to dedicate this paper to her.} [Calum Horrobin and Marta Mazzocco]
\vskip 3mm
{\it I wish to thank Nalini for her friendship of more than twenty years. Throughout her career, Nalini has mentored, supported and sponsored a huge number of
early career mathematicians, some formally as her PhD students and post docs, others informally, like myself and many others.} [Marta Mazzocco]
\vskip 2mm \noindent{\bf Acknowledgements.} We thank D. Guzzetti for many helpful conversations. This research was supported by the EPSRC Research Grant $EP/P021913/1$ and EPSRC DTA allocation to the Mathematical Sciences Department at Loughborough University.
\section{Gauss hypergeometric differential equation} \label{sec:gg}
Throughout the paper we work in the non-resonance assumption: $\gamma$, $\gamma-\alpha-\beta$, $\alpha-\beta \not \in \mathbb{Z}$.
To define monodromy data, it is easier to deal with a system of first order ODEs by using the following trivial lemma:
\begin{lemma} \label{lemma:hglemma} Under the assumptions $\alpha \neq 0$, $\beta \neq 1$, $\beta \neq \gamma$ and $\alpha \neq \beta-1$, the matrix
\begin{eqnarray}Y(x) = \left( \begin{array}{cc} y_{1}(x) & y_{2}(x) \\ \Psi \left( y_{1} , y_{1}' ; x \right) & \Psi \left( y_{2}, y_{2}' ; x \right) \end{array} \right) , \label{eq:Y} \end{eqnarray}
where
\begin{eqnarray} \Psi \left( y_{k}, y_{k}' ; x \right) = \frac{ \alpha \left(\beta - \gamma + (\alpha+1-\beta) x \right) y_{k}(x) + x(x-1)(\alpha+1-\beta) y_{k}'(x)}{\alpha (\beta-1)(\beta-\gamma)}, \label{eq:psi1} \end{eqnarray}
is a fundamental solution of the equation
\begin{eqnarray}\frac{{\rm d} Y}{{\rm d} x} = \left( \frac{A_{0}}{x} + \frac{A_{1}}{x-1}\right)Y, \label{eq:hg1} \end{eqnarray}
\begin{eqnarray} &\ & A_{0} = \frac{1}{\alpha+1-\beta} \left( \begin{array}{cc} \alpha(\beta-\gamma) & \alpha(1-\beta)(\beta-\gamma) \\ \alpha+1-\gamma & (1-\beta)(\alpha+1-\gamma) \end{array} \right), \nonumber \\
&\ & A_{1} = \frac{1}{\alpha+1-\beta} \left( \begin{array}{cc} \alpha(\gamma-\alpha-1) & \alpha(\beta-1)(\beta-\gamma) \\ \gamma-\alpha-1 & (\beta-1)(\beta-\gamma) \end{array} \right), \nonumber \end{eqnarray}
if and only if $y_{1}(x)$ and $y_{2}(x)$ are linearly independent solutions of Gauss hypergeometric equation (\ref{eq:gauss}).
\end{lemma}
So, from now on, we stick to the system of first order ODEs \eqref{eq:hg1}.
We define the following disks with chosen branches, as illustrated in Figure \ref{fig:123} below:
\begin{align} \Omega_{0} &= \left\{ x : |x|<1 , \ -\pi \leq \text{arg}(x) < \pi \right\}, \nonumber \\
\Omega_{1} &= \left\{ x : |x-1|<1, \ - \pi \leq \text{arg}(1-x) < \pi \right\}, \nonumber \\
\Omega_{\infty} &= \left\{ x : |x| > 1 , \ -\pi \leq \text{arg}(-x) < \pi \right\}, \nonumber \end{align}
\begin{figure}\begin{center}
\includegraphics[scale=.8]{branchcuts5}
\caption{\label{fig:123}Chosen disks with branch cuts. Note that $\Omega_{\infty}$ is a disk in the complement of $\overline\Omega_0\cup \overline\Omega_1$.}
\end{center} \end{figure}
It is well-known that the solutions of equation (\ref{eq:gauss}) are expressible in terms of Gauss hypergeometric $\ _{2}F_{1}$ series, in particular the following
three pairs of linearly independent local solutions $y_{1}^{(k)}(x)$ and $y_{2}^{(k)}(x)$ of (\ref{eq:gauss}) defined in the neighbourhoods $\Omega_{k}$ form a basis around each singular point:
\begin{align} &\begin{matrix*}[l] y_{1}^{(0)}(x) = x^{1-\gamma} \ _{2}F_{1} \left( \begin{array}{c} \alpha+1-\gamma , \ \beta+1-\gamma \\ 2-\gamma \end{array} ; x \right), \\ y_{2}^{(0)}(x) = \ _{2}F_{1} \left(\begin{array}{c} \alpha , \ \beta \\ \gamma \end{array} ; x \right), \end{matrix*} &&x \in \Omega_{0}, \label{eq:y0} \\
&\begin{matrix*}[l] y_{1}^{(1)}(x) = (1-x)^{\gamma-\alpha-\beta} \ _{2}F_{1} \left(\begin{array}{c} \gamma-\alpha , \ \gamma-\beta \\ \gamma+1-\alpha-\beta \end{array} ; 1-x \right), \\
y_{2}^{(1)}(x) = \ _{2}F_{1} \left(\begin{array}{c} \alpha , \ \beta \\ \alpha+\beta+1-\gamma \end{array} ; 1-x \right), \end{matrix*} &&x \in \Omega_{1}, \label{eq:y1} \\
&\begin{matrix*}[l] y_{1}^{(\infty)}(x) = (-x)^{-\alpha} \ _{2}F_{1} \left( \begin{array}{c} \alpha , \ \alpha+1-\gamma \\ \alpha+1-\beta \end{array} ; x^{-1} \right), \\
y_{2}^{(\infty)}(x) = (-x)^{-\beta} \ _{2}F_{1} \left( \begin{array}{c} \beta , \ \beta+1-\gamma \\ \beta+1-\alpha \end{array} ; x^{-1} \right), \end{matrix*} &&x \in \Omega_{\infty}. \label{eq:yinf} \end{align}
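As a numerical sanity check (not part of the derivation above), one can verify directly that the truncated Gauss series $y_{2}^{(0)}(x) = {}_{2}F_{1}(\alpha,\beta;\gamma;x)$ satisfies equation (\ref{eq:gauss}), using only the Python standard library; the parameter values below are arbitrary non-resonant test values.

```python
import math

def gauss_series_derivs(a, b, c, x, terms=200):
    """Value, first and second derivative of _2F_1(a,b;c;x) via the
    term-wise differentiated Gauss series (valid for |x| < 1)."""
    y = dy = d2y = 0.0
    coeff = 1.0  # (a)_n (b)_n / ((c)_n n!)
    for n in range(terms):
        y += coeff * x**n
        if n >= 1:
            dy += n * coeff * x**(n - 1)
        if n >= 2:
            d2y += n * (n - 1) * coeff * x**(n - 2)
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1))
    return y, dy, d2y

a, b, c, x = 0.3, 0.7, 1.1, 0.3   # arbitrary non-resonant test values
y, dy, d2y = gauss_series_derivs(a, b, c, x)
# Residual of x(1-x) y'' + (c - (a+b+1)x) y' - ab y = 0 at the test point.
residual = x * (1 - x) * d2y + (c - (a + b + 1) * x) * dy - a * b * y
assert abs(residual) < 1e-10
```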
\begin{lemma} \label{lemma:cup} The local fundamental solutions of the matrix hypergeometric equation (\ref{eq:hg1}) have the following form
\begin{align} Y^{(0)}(x) &= R_{0} G_{0}(x) x^{\Theta_{0}}, &&x \in \Omega_{0}, \label{eq:ak} \\
Y^{(1)}(x) &= R_{1} G_{1}(x) (1-x)^{\Theta_{1}}, &&x \in \Omega_{1}, \label{eq:ak2} \\
Y^{(\infty)}(x) &= R_{\infty} G_{\infty}(x) (-x)^{-\Theta_{\infty}}, &&x \in \Omega_{\infty}, \label{eq:infinity} \end{align}
where $R_{k}$ and $\Theta_{k}$ are the following matrices:
\begin{align} R_{0} = \left(\begin{array}{cc} 1 & 1 \\ \frac{\alpha+1-\gamma}{\alpha(\beta-\gamma)} & \frac{1}{\beta-1} \end{array} \right), \ R_{1} = \left(\begin{array}{cc} 1 & 1 \\ \frac{1}{\alpha} & \frac{\alpha+1-\gamma}{(\beta-1)(\beta-\gamma)} \end{array} \right), \ R_{\infty} = \left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{(\beta-\alpha)(\alpha+1-\beta)}{\alpha(\beta-1)(\beta-\gamma)} \end{array} \right), \nonumber \end{align}
\begin{eqnarray} \Theta_{0} = \left(\begin{array}{cc} 1-\gamma & 0 \\ 0 & 0 \end{array} \right) , \ \Theta_{1} = \left(\begin{array}{cc} \gamma-\alpha-\beta & 0 \\ 0 & 0 \end{array} \right) , \ \Theta_{\infty} = \left(\begin{array}{cc} \alpha & 0 \\ 0 & \beta-1 \end{array} \right), \nonumber \end{eqnarray}
which satisfy $R_{k}^{-1}A_{k}R_{k} = \Theta_{k}$, and $G_{k}(x)$ are the following series: \\
$G_{0}(x) = \left( \begin{matrix*}[l] \ _{2}F_{1} \left( \begin{array}{c} \alpha+1-\gamma,\ \beta-\gamma \\ 1-\gamma \end{array} ;x\right) \text{\LARGE ,} \\ \frac{x(\alpha+1-\gamma)(1-\beta)}{(1-\gamma)(2-\gamma)} \ _{2}F_{1} \left(\begin{array}{c} \alpha+2-\gamma, \ \beta+1-\gamma \\ 3-\gamma \end{array} ; x \right) \text{\LARGE ,} \end{matrix*} \right.$ \\
\begin{flushright}$\left. \begin{matrix*}[r] \frac{x \alpha (\gamma-\beta)}{\gamma(\gamma-1)} \ \ _{2}F_{1} \left(\begin{array}{c} \alpha+1, \ \beta \\ \gamma+1 \end{array} ; x\right) \\ \ _{2}F_{1} \left(\begin{array}{c}\alpha , \ \beta-1 \\ \gamma - 1 \end{array} ; x \right) \end{matrix*} \right),$ \end{flushright}
$G_{1}(x) = \left( \begin{matrix*}[l] \ _{2}F_{1} \left(\begin{array}{c}\gamma-\alpha-1 , \ \gamma-\beta \\ \gamma-\alpha-\beta \end{array} ; 1-x \right) \text{\LARGE ,} \\ \frac{(1-x)(\beta-1)(\beta-\gamma)}{(\alpha+\beta-\gamma-1)(\alpha+\beta-\gamma)} \ _{2}F_{1}\left(\begin{array}{c} \gamma-\alpha , \ \gamma+1-\beta \\ \gamma+2-\alpha-\beta \end{array} ; 1-x \right) \text{\LARGE ,} \end{matrix*} \right.$ \\
\begin{flushright} $\left. \begin{matrix*}[r] \frac{(1-x)\alpha(\alpha+1-\gamma)}{(\alpha+\beta-\gamma)(\alpha+\beta+1-\gamma)} \ _{2}F_{1} \left( \begin{array}{c} \alpha+1 , \ \beta \\ \alpha+\beta+2-\gamma \end{array} ; 1-x \right) \\ \ _{2}F_{1} \left( \begin{array}{c} \alpha , \ \beta-1 \\ \alpha+\beta-\gamma \end{array} ; 1-x \right) \end{matrix*} \right),$ \end{flushright}
$G_{\infty}(x) = \left( \begin{matrix*}[l] \ _{2}F_{1} \left(\begin{array}{c} \alpha , \ \alpha+1-\gamma \\ \alpha+1-\beta \end{array} ; x^{-1} \right) \text{\LARGE ,} \\ \frac{\alpha(\beta-1)(\beta-\gamma)(\gamma-\alpha-1)}{(\alpha-\beta)(\alpha+1-\beta)^{2}(\alpha+2-\beta)}\frac{1}{x} \ _{2}F_{1} \left( \begin{array}{c} \alpha+1, \ \alpha+2-\gamma \\ \alpha+3-\beta \end{array} ; x^{-1} \right) \text{\LARGE ,} \end{matrix*} \right.$ \\
\begin{flushright} $\left. \begin{matrix*}[r] -\frac{1}{x} \ _{2}F_{1} \left( \begin{array}{c} \beta , \ \beta+1-\gamma \\ \beta+1-\alpha \end{array} ; x^{-1} \right) \\ \ _{2}F_{1} \left(\begin{array}{c} \beta-1 , \ \beta-\gamma \\ \beta-\alpha-1 \end{array} ; x^{-1} \right) \end{matrix*} \right).$ \end{flushright} \end{lemma}
\begin{proof} This result can be proved in two ways: either by reducing equation (\ref{eq:hg1}) to Birkhoff normal form near each singularity and computing the corresponding gauge transformations $R_0G_0(x)$, $R_1G_1(x)$ and $G_\infty(x)$ recursively or by direct substitution of the local solutions (\ref{eq:y0})-(\ref{eq:yinf}) into expression (\ref{eq:Y}) and using Gauss contiguous relations. \end{proof}
\begin{remark} \label{remark:lm} The matrices $R_{k}$, $k=0,1$ and $\infty$, in the above solutions (\ref{eq:ak}), (\ref{eq:ak2}) and (\ref{eq:infinity}) have been chosen to satisfy $R_{k}^{-1}A_{k}R_{k} = \Theta_{k}$, where $A_{\infty} := -A_{0}-A_{1}$. The matrices $G_0,G_1,G_\infty$ have leading term given by the identity.\end{remark}
We now define the monodromy data of Gauss hypergeometric equation (\ref{eq:gauss}) and recall how to express them in explicit form \cite{bateman, ww}. In Appendix A we
derive these classical formulae by following the approach of representing solutions using Mellin-Barnes integrals.
When defining local solutions, we have been specific about identifying which sheet of the Riemann surface of the logarithm we are restricting our local solutions to at each singular point. We may extend the definitions of our local fundamental solutions $Y^{(k)}(x)$ to other sheets $e^{2 m \pi i} \Omega_{k}$, $k=0,1,\infty$, by analytically continuing along a closed loop encircling the singularity $x=0,1,\infty$. This action simply means that our solution becomes multiplied by the corresponding exponent $e^{2 m \pi i \Theta_{k}}$, for $k=0,1$ and $\infty$, $m \in \mathbb{Z}$. Note that, for $k=0$ and $1$, the analytic continuation of $Y^{(k)}(x)$ around its singularity in the positive direction means $m>0$ in the previous sentence; while, for $k=\infty$, it means $m < 0$. The diagonal matrices $e^{2 \pi i \Theta_{k}}$ are called the local monodromy exponents of the singularities. \\
We proceed with the global analysis of solutions. Let $Y^{(0)}(x)$, $Y^{(1)}(x)$ and $Y^{(\infty)}(x)$ be the fundamental solutions of the hypergeometric equation as defined in the previous section. Denote by $\gamma_{j,k}\left[Y^{(j)}\right](x)$ the analytic continuation of $Y^{(j)}(x)$ along an orientable curve $\gamma_{j,k} : [0,1] \rightarrow \mathbb{C}$ with $\gamma_{j,k}(0) \in \Omega_{j}$ and $\gamma_{j ,k}(1) \in \Omega_{k}$, for $j,k=0,1,\infty$. We have the following connection formulae (see Appendix A for the detailed derivation of these):
\begin{align} \gamma_{j,k}\left[Y^{(j)}\right](x) = Y^{(k)}(x) C^{kj}, \label{eq:cont} \end{align}
where:
\begin{align} C^{0 \infty} &= \left(\begin{array}{cc} e^{i \pi (\gamma-1)} \frac{\Gamma(\alpha+1-\beta)\Gamma(\gamma-1)}{\Gamma(\alpha)\Gamma(\gamma-\beta)} & e^{i \pi(\gamma-1)} \frac{\Gamma(\beta+1-\alpha)\Gamma(\gamma-1)}{\Gamma(\beta)\Gamma(\gamma-\alpha)} \\ \frac{\Gamma(\alpha+1-\beta)\Gamma(1-\gamma)}{\Gamma(1-\beta)\Gamma(\alpha+1-\gamma)} & \frac{\Gamma(\beta+1-\alpha)\Gamma(1-\gamma)}{\Gamma(1-\alpha)\Gamma(\beta+1-\gamma)} \end{array} \right), \label{eq:ci0} \\
C^{1 \infty} &= \left(\begin{array}{cc} e^{i \pi (\gamma-\beta)} \frac{\Gamma(\alpha+1-\beta)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)\Gamma(\alpha+1-\gamma)} & e^{i \pi (\gamma-\alpha)} \frac{\Gamma(\beta+1-\alpha) \Gamma(\alpha+\beta-\gamma)}{\Gamma(\beta)\Gamma(\beta+1-\gamma)} \\ e^{i \pi \alpha} \frac{\Gamma(\alpha+1-\beta)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} & e^{i \pi \beta} \frac{\Gamma(\beta+1-\alpha)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\alpha)\Gamma(\gamma-\alpha)} \end{array} \right), \label{eq:ci1} \\
C^{0 1} &= \left(\begin{array}{cc} \frac{\Gamma(\gamma+1-\alpha-\beta)\Gamma(\gamma-1)}{\Gamma(\gamma-\alpha)\Gamma(\gamma-\beta)} & \frac{\Gamma(\alpha+\beta+1-\gamma)\Gamma(\gamma-1)}{\Gamma(\alpha)\Gamma(\beta)} \\ \frac{\Gamma(\gamma+1-\alpha-\beta)\Gamma(1-\gamma)}{\Gamma(1-\alpha)\Gamma(1-\beta)} & \frac{\Gamma(\alpha+\beta+1-\gamma)\Gamma(1-\gamma)}{\Gamma(\alpha+1-\gamma)\Gamma(\beta+1-\gamma)} \end{array} \right). \label{eq:c10} \end{align}
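The entries of these matrices encode the classical scalar connection formulae for $_{2}F_{1}$. As a numerical sanity check (not part of the text's derivation), the standard connection formula between the bases at $x=0$ and $x=1$, which underlies $C^{0 1}$, can be verified with the Python standard library; the parameter values below are arbitrary non-resonant test values.

```python
import math

def hyp2f1(a, b, c, x, terms=200):
    """Truncated Gauss series _2F_1(a, b; c; x), valid for |x| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

# Arbitrary test values with gamma, gamma-alpha-beta, alpha-beta non-integer.
a, b, c, x = 0.3, 0.7, 1.1, 0.3
g = math.gamma

lhs = hyp2f1(a, b, c, x)
# Classical 0 <-> 1 connection formula (Abramowitz-Stegun 15.3.6),
# the scalar identity behind the entries of C^{01}.
rhs = (g(c) * g(c - a - b) / (g(c - a) * g(c - b)) * hyp2f1(a, b, a + b - c + 1, 1 - x)
       + g(c) * g(a + b - c) / (g(a) * g(b))
       * (1 - x) ** (c - a - b) * hyp2f1(c - a, c - b, c - a - b + 1, 1 - x))

assert abs(lhs - rhs) < 1e-10
```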
We choose to normalise the monodromy data of Gauss hypergeometric equation with the fundamental solution $Y^{(\infty)}(x)$. Denote by $\gamma_{k}\left[Y^{(\infty)}\right](x)$ the analytic continuation of $Y^{(\infty)}(x)$ along an orientable, closed curve $\gamma_{k} : [0,1] \rightarrow \mathbb{C}$ with $\gamma_{k}(0) = \gamma_{k}(1) \in \Omega_{\infty}$, $k=0,1$, which encircles the singularity $x=0,1$ respectively in the positive (anti-clockwise) direction. The curves $\gamma_{0}$ and $\gamma_{1}$ are illustrated in Figure \ref{fig:hgloops} below; note that $\gamma_{\infty} := \gamma_{1}^{-1}\gamma_{0}^{-1}$. We have:
\begin{align} \gamma_{k}\left[Y^{(\infty)}\right](x) = Y^{(\infty)}(x) M_{k}, \quad \quad k=0,1,\infty, \nonumber \end{align}
where,
\begin{align} M_{0} = \left(C^{0 \infty}\right)^{-1} e^{2 \pi i \Theta_{0}} C^{0 \infty}, \quad M_{1} = \left(C^{1 \infty}\right)^{-1} e^{2 \pi i \Theta_{1}} C^{1 \infty}, \quad M_{\infty} = e^{2 \pi i \Theta_{\infty}}. \label{eq:hgm} \end{align}
These matrices satisfy the cyclic relation,
\begin{align} M_{\infty}M_{1}M_{0} = I. \label{eq:cyc1} \end{align}
\begin{figure}[H] \begin{center}
\includegraphics[scale=0.6]{hgloops3}
\caption{\label{fig:hgloops}Curves defining the monodromy matrices $M_{k}$ of Gauss hypergeometric differential equation.}
\end{center} \end{figure}
\begin{definition}We define the monodromy data of Gauss hypergeometric equation (\ref{eq:gauss}) as the set,
\begin{align} \mathcal{M} := \left\{ \left(M_{0},M_{1},M_{\infty}\right) \in \left(\text{GL}_{2}(\mathbb{C})\right)^{3} \left| \begin{array}{c} M_{\infty}M_{1}M_{0} = I, \ M_{\infty} = e^{2 \pi i \Theta_{\infty}} \\ \text{eigenv}(M_{k}) = e^{2 \pi i \Theta_{k}}, \text{ k=0,1} \end{array} \right. \right\} _{\text{\Huge/\normalsize GL$_{2}(\mathbb{C})$}} \label{eq:hgmono} \end{align}
where eigenv$(M_{k}) = e^{2 \pi i \Theta_{k}}$ means that the eigenvalues of $M_{k}$ are given as the elements of the diagonal matrix $e^{2 \pi i \Theta_{k}}$ and the quotient is by global conjugation by a diagonal matrix. \end{definition}
\section{Kummer confluent hypergeometric equation} \label{sec:kummer}
We use $z$ as the variable of Kummer confluent hypergeometric equation; we also write a tilde above some of the functions and parameters to distinguish them from those of the Gauss hypergeometric equation. We recall the following lemma.
\begin{lemma} \label{lemma:l2} Under the assumption $(\beta-1)(\beta-\gamma) \neq 0$, the matrix
\begin{eqnarray} \widetilde{Y}(z) = \left( \begin{array}{cc} \tilde{y}_{1}(z) & \tilde{y}_{2}(z) \\ \widetilde{\Psi}\left(\tilde{y}_{1},\tilde{y}_{1}';z\right) & \widetilde{\Psi}\left(\tilde{y}_{2},\tilde{y}_{2}';z\right) \end{array} \right), \label{eq:y2} \end{eqnarray}
where,
\[\widetilde{\Psi}\left(\tilde{y}_{k} , \tilde{y}_{k}' ; z \right) = \frac{\left(z + \beta - \gamma \right)\tilde{y}_{k}(z) - z \tilde{y}_{k}'(z)}{(\beta-1)(\beta-\gamma)}, \]
is a fundamental solution of the equation
\begin{eqnarray}
\frac{\partial \widetilde{Y}}{\partial z} = \left( \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) + \frac{\widetilde{A}_{0}}{z}\right) \widetilde{Y}, \text{ where } \widetilde{A}_{0} = \left( \begin{array}{cc} \beta-\gamma & (1-\beta)(\beta-\gamma) \\ 1 & 1-\beta \end{array} \right), \label{eq:chg1}
\end{eqnarray}
if and only if $\tilde{y}_{1}(z)$ and $\tilde{y}_{2}(z)$ are linearly independent solutions of Kummer confluent hypergeometric equation (\ref{eq:kummer}),
\begin{align} z \ \tilde{y}'' + (\gamma-z) \ \tilde{y}' - \beta \ \tilde{y}=0. \nonumber \end{align}\end{lemma}
Kummer confluent hypergeometric equation (\ref{eq:kummer}) has one Fuchsian singularity at $z=0$, since $\frac{\gamma-z}{z}$ and $\frac{-\beta}{z}$ have simple poles at $z=0$, and an irregular singularity at $z=\infty$ of Poincar\'{e} rank one. The exponents of the singularity $z=0$ are $1-\gamma$ and $0$ and at $z=\infty$ are $\gamma-\beta$ and $\beta-1$. We make the non-resonance assumption $\gamma \notin \mathbb{Z}$.
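For completeness, the exponents at the Fuchsian singularity $z=0$ follow from the indicial equation: substituting the ansatz $\tilde{y}(z) = z^{r}\left(1 + O(z)\right)$ into (\ref{eq:kummer}) and collecting the lowest power $z^{r-1}$ gives

```latex
z\,\tilde{y}'' + (\gamma - z)\,\tilde{y}' - \beta\,\tilde{y}
  = \bigl( r(r-1) + \gamma r \bigr) z^{r-1} + O\left(z^{r}\right) = 0
  \quad \Longrightarrow \quad r\,(r + \gamma - 1) = 0,
```

so that $r = 0$ or $r = 1-\gamma$, in agreement with the exponents stated above.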
\subsection{Local behaviour of the solutions} \label{sec:kumsol}
Kummer confluent hypergeometric equation has an irregular singularity at $z=\infty$ of Poincar\'{e} rank one and, as such, solutions around this point exhibit Stokes phenomenon. In this sub-section, we will state some definitions and theorems which precisely describe fundamental solutions of Kummer equation at the irregular point and the monodromy data, including Stokes matrices.
We first fix the pair of linearly independent local solutions of (\ref{eq:kummer}) as follows:
\begin{align}
&\begin{matrix*}[l] \tilde{y}_{1}^{(0)}(z) = z^{1-\gamma} \ _{1}F_{1} \left( \begin{array}{c} \beta+1-\gamma \\ 2-\gamma \end{array} ; z \right), \\ \tilde{y}_{2}^{(0)}(z) = \ _{1}F_{1} \left( \begin{array}{c} \beta \\ \gamma \end{array} ; z \right), \end{matrix*} &&z \in \widetilde{\Omega}_{0}. \label{eq:yt0}
\end{align}
where
\[\widetilde{\Omega}_{0} := \left\{z: - \frac{3}{2}\pi \leq \text{arg}(z) < \frac{\pi}{2}\right\},\]
is a punctured disk around $0$ with branch cut along the positive imaginary axis.
In terms of the linear system \eqref{eq:y2}, these solutions correspond to the following local fundamental solution of the matrix hypergeometric equation (\ref{eq:chg1}):
\begin{align}
\widetilde{Y}^{(0)}(z) = \widetilde{R}_{0} H_{0}(z) z^{\widetilde{\Theta}_{0}}, &&z \in \widetilde{\Omega}_{0}, \label{eq:aroundzero}
\end{align}
where $\widetilde{R}_{0}$ and $\widetilde{\Theta}_{0}$ are the following matrices:
\begin{eqnarray}
\widetilde{R}_{0} = \left( \begin{array}{cc} 1 & 1 \\ \frac{1}{\beta-\gamma} & \frac{1}{\beta-1} \end{array} \right) \quad \text{and} \quad \widetilde{\Theta}_{0} = \left(\begin{array}{cc} 1-\gamma & 0 \\ 0 & 0 \end{array} \right), \nonumber
\end{eqnarray}
which satisfy $\widetilde{R}_{0}^{-1} \widetilde{A}_{0} \widetilde{R}_{0} = \widetilde{\Theta}_{0}$, and $H_{0}(z)$ is the following series:
$$
H_{0}(z) = \left(\begin{array}{cc} _{1}F_{1} \left( \begin{array}{c} \beta-\gamma \\ 1-\gamma \end{array} ;z\right) & \frac{z (\gamma-\beta)}{\gamma(\gamma-1)} \ \ _{1}F_{1} \left(\begin{array}{c} \beta \\ \gamma+1 \end{array} ; z\right) \\ & \\ \frac{z(1-\beta)}{(1-\gamma)(2-\gamma)} \ _{1}F_{1} \left(\begin{array}{c} \beta+1-\gamma \\ 3-\gamma \end{array} ; z \right) & _{1}F_{1} \left(\begin{array}{c}\beta-1 \\ \gamma - 1 \end{array} ; z \right) \end{array} \right).
$$
We now turn our attention to the irregular singularity $z=\infty$.
\begin{definition}
The rays $\{z : \text{Re}(z) = 0 , \ \text{Im}(z) >0\}$ and $\{z : \text{Re}(z) = 0 , \ \text{Im}(z) <0\}$ are called the Stokes rays of Kummer equation (\ref{eq:kummer}).
\end{definition}
We note that these rays constitute the borderline where the behaviour of $e^{z}$ changes, as $z \rightarrow \infty$; that is to say, on one side of each of these rays we have $e^{z} \rightarrow 0$, whereas on the other side of each ray we have $e^{z} \rightarrow \infty$. This is a key aspect of Stokes phenomenon and plays a role in understanding the following classical theorem.
\begin{theorem} \label{theorem:kummerst}
Let
\[\widetilde{\Sigma}_{k} = \left\{ z : -\frac{\pi}{2} < \text{arg}(z) - k \pi < \frac{3\pi}{2} \right\}.\]
For all $k \in \mathbb{Z}$, there exists a solution $\widetilde{Y}^{(\infty,k)}(z)$ of equation (\ref{eq:chg1}) analytic in the sector $\widetilde{\Sigma}_{k}$ such that,
\begin{align} \widetilde{Y}^{(\infty,k)}(z) \sim \widetilde{R}_{\infty}\left(\sum_{n=0}^{\infty}h_{n,\infty}z^{-n}\right) \left(\begin{array}{cc} e^{z} z^{\beta-\gamma} & 0 \\ 0 & z^{1-\beta} \end{array} \right), \quad \quad \text{as } z \rightarrow \infty, \ z \in \widetilde{\Sigma}_{k}, \label{eq:chgasy} \end{align}
where $\widetilde{R}_{\infty}$ is the following matrix,
\begin{eqnarray} \widetilde{R}_{\infty} = \left(\begin{array}{cc} 1 & 0 \\ 0 & \frac{-1}{(\beta-1)(\beta-\gamma)} \end{array} \right), \nonumber \end{eqnarray}
and $H_{\infty}(z) := \sum_{n=0}^{\infty}h_{n,\infty}z^{-n}$, the series appearing in (\ref{eq:chgasy}), is given explicitly by
$$
H_{\infty}(z) = \left( \begin{array}{cc} \ _{2}F_{0} \left(1-\beta , \gamma-\beta ; z^{-1}\right) & \frac{-1}{z} \ _{2}F_{0} \left(\beta , \beta+1-\gamma ; -z^{-1} \right) \\ \frac{(1-\beta)(\beta-\gamma)}{z} \ _{2}F_{0} \left( 2-\beta , \gamma+1-\beta ; z^{-1} \right) & \ _{2}F_{0} \left(\beta-1, \beta-\gamma ; -z^{-1} \right) \end{array} \right).
$$
Moreover, each solution $\widetilde{Y}^{(\infty,k)}(z)$ is uniquely specified by the relation (\ref{eq:chgasy}). \end{theorem}
\begin{proof} A proof of the existence of fundamental solutions $\widetilde{Y}^{(\infty,k)}(z)$ which are analytic on sectors $\widetilde{\Sigma}_{k}$ may be found in \cite{BJL}.
To find the asymptotic behaviour \eqref{eq:chgasy}, we make the following ansatz
\begin{align}
&\widetilde{Y}^{(\infty,k)}(z) \sim \widetilde{R}_{\infty} H_{\infty}(z) \text{exp}\left( \int_{-\infty}^{z} \left(\Lambda_{0} + \frac{\Lambda_{1}}{z'}\right)\ dz' \right), &&\text{as } z \rightarrow \infty, \ z \in \widetilde{\Sigma}_{k}, \nonumber
\end{align}
where,
\[ \widetilde{R}_{\infty} = \left(\begin{array}{cc} 1 & 0 \\ 0 & \frac{-1}{(\beta-1)(\beta-\gamma)} \end{array} \right),\]
$\Lambda_{0}$ and $\Lambda_{1}$ are constant, diagonal matrices to be determined and $H_{\infty}(z)$ is a formal series
\[H_{\infty}(z) = \sum_{n=0}^{\infty} h_{n,\infty}z^{-n},\]
where the coefficients $h_{n,\infty}$ are to be determined.
By substitution in the equation (\ref{eq:chg1}), we obtain\begin{align} -\sum_{n=1}^{\infty} n h_{n,\infty} z^{-n-1} &+ \left(\sum_{n=0}^{\infty} h_{n,\infty} z^{-n}\right)\left( \Lambda_{0} + \frac{\Lambda_{1}}{z} \right) \nonumber \\
&= \widetilde{R}_{\infty}^{-1} \left( \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) + \frac{\widetilde{A}_{0}}{z} \right)\widetilde{R}_{\infty} \left( \sum_{n=0}^{\infty} h_{n,\infty} z^{-n} \right). \nonumber \end{align}
By setting $h_{0,\infty} = I$ and equating powers of $z^{-n}$ in this equation, for $n=0$ and $1$, we find:
\[\Lambda_{0} = \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \text{ and } \Lambda_{1} = \left(\begin{array}{cc} \beta-\gamma & 0 \\ 0 & 1-\beta \end{array}\right),\]
and, for $n \geq 1$, we find the recursion equation,
\[\left[ h_{n,\infty} , \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \right] = (n-1) h_{n-1,\infty} + h_{n-1,\infty} \left(\begin{array}{cc} \gamma-\beta & 0 \\ 0 & \beta-1 \end{array} \right) + \widetilde{R}_{\infty}^{-1}\widetilde{A}_{0}\widetilde{R}_{\infty}h_{n-1,\infty}.\]
It can be verified that the general solution of this equation is,
\begin{align}h_{n,\infty} &= \left(\begin{array}{cc} \frac{(1-\beta)_{n}(\gamma-\beta)_{n}}{n!} & \frac{(\beta)_{n-1}(\beta+1-\gamma)_{n-1}}{(-1)^{n}(n-1)!} \\ \frac{(1-\beta)(\beta-\gamma)(2-\beta)_{n-1}(\gamma+1-\beta)_{n-1}}{(n-1)!} & \frac{(\beta-1)_{n}(\beta-\gamma)_{n}}{(-1)^{n}n!} \end{array}\right), \label{eq:hninf} \end{align}
which are indeed the coefficients in the asymptotic series given.
To prove uniqueness of solutions, let $\widehat{Y}^{(\infty,k)}(z)$ denote another fundamental solution of equation (\ref{eq:chg1}) which is analytic on the sector $\widetilde{\Sigma}_{k}$ and has the same asymptotic behaviour, namely,
\begin{align} \widehat{Y}^{(\infty,k)}(z) \sim \widetilde{R}_{\infty}\left(\sum_{n=0}^{\infty}h_{n,\infty}z^{-n}\right) \left(\begin{array}{cc} e^{z} z^{\beta-\gamma} & 0 \\ 0 & z^{1-\beta} \end{array} \right), \quad \quad \text{as } z \rightarrow \infty, \ z \in \widetilde{\Sigma}_{k}. \label{eq:chgasy2} \end{align}
Since $\widetilde{Y}^{(\infty,k)}(z)$ and $\widehat{Y}^{(\infty,k)}(z)$ are fundamental solutions defined on the same sector, there exists a constant matrix $C \in \text{GL}_{2}(\mathbb{C})$ such that,
\[\widetilde{Y}^{(\infty,k)}(z) = \widehat{Y}^{(\infty,k)}(z) C, \quad \quad z \in \widetilde{\Sigma}_{k}.\]
Using the asymptotic relations (\ref{eq:chgasy}) and (\ref{eq:chgasy2}), we deduce the following,
\[\left(\begin{array}{cc} e^{z} z^{\beta-\gamma} & 0 \\ 0 & z^{1-\beta} \end{array} \right) C \left(\begin{array}{cc} e^{-z} z^{\gamma-\beta} & 0 \\ 0 & z^{\beta-1} \end{array} \right) \sim I, \quad \quad \text{as } z \rightarrow \infty, \ z \in \widetilde{\Sigma}_{k}.\]
From this relation, we immediately see that $(C)_{1,1} = (C)_{2,2}=1$. Moreover, since there exist rays belonging to $\widetilde{\Sigma}_{k}$ along which each exponential, $e^{z}$ and $e^{-z}$, diverges as $z \rightarrow \infty$, we conclude that $(C)_{1,2} = (C)_{2,1} = 0$. \end{proof}
\begin{remark} The matrices $\widetilde{R}_{0}$ and $\widetilde{R}_{\infty}$ in the above solutions (\ref{eq:aroundzero}) and (\ref{eq:chgasy}) have been chosen to satisfy $\widetilde{R}_{0}^{-1}\widetilde{A}_{0}\widetilde{R}_{0} = \widetilde{\Theta}_{0}$ and,
\[ \left[\widetilde{R}_{\infty} , \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)\right] = 0.\]
\end{remark}
We denote the asymptotic behaviour of true solutions at infinity as in (\ref{eq:chgasy}) by,
\begin{align}
\widetilde{Y}_{f}^{(\infty)}(z) = \left( \sum_{n=0}^{\infty}h_{n,\infty}z^{-n}\right) \left(\begin{array}{cc} e^{z} z^{\beta-\gamma} & 0 \\ 0 & z^{1-\beta} \end{array} \right), \quad \quad z \in \widetilde{\Sigma}_{k}. \nonumber \end{align}
The series $H_{\infty}(z) = \sum_{n=0}^{\infty}h_{n,\infty}z^{-n}$ defines a formal gauge transformation which maps equation (\ref{eq:chg1}) to,
\begin{align} \frac{\partial}{\partial z}\widehat{Y}(z) = \left( \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) + \frac{1}{z} \left( \begin{array}{cc}\beta-\gamma & 0 \\ 0 & 1-\beta\end{array} \right) \right) \widehat{Y}, \label{eq:neweqn} \end{align}
via the transformation $\widetilde{Y}(z) = \widetilde{R}_{\infty} H_{\infty}(z) \widehat{Y}(z)$. We define the coefficient of $\frac{1}{z}$ in the new equation to be $-\widetilde{\Theta}_{\infty}$, namely,
\[\widetilde{\Theta}_{\infty} := \left(\begin{array}{cc} \gamma-\beta & 0 \\ 0 & \beta-1 \end{array} \right) \equiv - \text{diag}\left(\widetilde{A}_{0}\right).\]
In the generic case $a,b \notin \mathbb{Z}^{\leq 0}$, d'Alembert's ratio test shows that the series $\ _{2}F_{0}( a , b ; z^{-1} )$ diverges for all $z \in \mathbb{C}$. In this sense, the asymptotic behaviour $\widetilde{Y}_{f}^{(\infty)}(z)$ is a formal fundamental solution. \\
\begin{remark} \label{remark:formal} Using expression (\ref{eq:y2}) in Lemma \ref{lemma:l2}, the formal fundamental solution $\widetilde{Y}_{f}^{(\infty)}$ of (\ref{eq:chg1}) corresponds to the following standard formal basis of solutions of (\ref{eq:kummer}),
\begin{align} &\begin{matrix*}[l] \tilde{y}_{1,f}^{(\infty)}(z) = e^{z} z^{\beta-\gamma} \ _{2}F_{0} \left(\gamma-\beta, \ 1-\beta ; z^{-1} \right), \\ \tilde{y}_{2,f}^{(\infty)}(z) = -z^{-\beta}\ _{2}F_{0} \left( \beta , \ \beta+1-\gamma ; -z^{-1} \right). \end{matrix*} \label{eq:yformal} \end{align} \end{remark}
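The dominant formal solution $\tilde{y}_{1,f}^{(\infty)}$ can be sanity-checked numerically: along the positive real axis, the classical asymptotic $\ _{1}F_{1}(\beta;\gamma;z) \sim \frac{\Gamma(\gamma)}{\Gamma(\beta)} e^{z} z^{\beta-\gamma}$ matches its leading term. The following sketch (not part of the original text) uses arbitrary test values and the Python standard library.

```python
import math

def hyp1f1(b, c, z, terms=400):
    """Truncated Kummer series _1F_1(b; c; z) (entire in z)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (b + n) / ((c + n) * (n + 1)) * z
    return total

beta, gamma_, z = 0.7, 1.1, 30.0   # arbitrary test values, z large and positive
exact = hyp1f1(beta, gamma_, z)
# Leading term of the dominant formal solution along the positive real axis:
# _1F_1(beta; gamma; z) ~ Gamma(gamma)/Gamma(beta) * e^z * z^(beta - gamma).
leading = math.gamma(gamma_) / math.gamma(beta) * math.exp(z) * z ** (beta - gamma_)
assert abs(exact / leading - 1) < 0.02   # next-order correction is O(1/z)
```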
\subsection{Monodromy data} \label{sec:kummon}
We now define the monodromy data, including Stokes data, of Kummer equation (\ref{eq:kummer}) and recall how to express them in explicit form \cite{bateman,ww}. In Appendix B, we derive these classical formulae by representing solutions using Mellin-Barnes integrals.
\begin{definition} \label{definition:kumst} Let $\widetilde{Y}^{(\infty,k)}(z)$ be the fundamental solutions given in Theorem \ref{theorem:kummerst} and define sectors,
\[\widetilde{\Pi}_{k} := \widetilde{\Sigma}_{k} \cap \widetilde{\Sigma}_{k+1} \equiv \left\{ z : |z|>0 , \ \frac{\pi}{2} < \text{arg}(z) - k \pi < \frac{3\pi}{2}\right\},\]
as illustrated in Figure \ref{fig:psec} below. We define Stokes matrices $\widetilde{S}_{k} \in \text{SL}_{2}(\mathbb{C})$ as follows,
\begin{align} \widetilde{Y}^{(\infty,k+1)}(z) = \widetilde{Y}^{(\infty,k)}(z) \widetilde{S}_{k} , \quad z \in \widetilde{\Pi}_{k}. \label{eq:kummerstokes} \end{align} \end{definition}
\begin{figure}[H] \begin{center}
\includegraphics[scale=.6]{sectors2}
\caption{\label{fig:psec}Sectors $\widetilde{\Pi}_{0}$, $\widetilde{\Pi}_{-1}$, $\widetilde{\Sigma}_{0}$ and $\widetilde{\Sigma}_{-1}$ projected onto the plane $\overline{\mathbb{C}}\backslash\{0\}$. The positive and negative imaginary axes are Stokes rays.}
\end{center} \end{figure}
From the asymptotic relation (\ref{eq:chgasy}), it is clear that
\begin{align}
\widetilde{Y}^{(\infty,k+2)}(z) = \widetilde{Y}^{(\infty,k)}\left(ze^{-2 \pi i}\right) e^{-2 \pi i \widetilde{\Theta}_{\infty}}, \quad \quad z \in \widetilde{\Sigma}_{k+2}. \label{eq:sameasy}
\end{align}
This follows from the fact that these two solutions have the same asymptotic behaviour as $z \rightarrow \infty$ in the sector $\widetilde{\Sigma}_{k+2}$. Therefore, all solutions $\widetilde{Y}^{(\infty,k)}(z)$ are categorised into two fundamentally distinct cases, namely, when $k$ is even and when $k$ is odd.
Combining Definition \ref{definition:kumst} with the relation (\ref{eq:sameasy}), one can show that
\begin{align}
e^{-2 \pi i \widetilde{\Theta}_{\infty}} \widetilde{S}_{k+1} = \widetilde{S}_{k-1} e^{-2\pi i \widetilde{\Theta}_{\infty}}, \nonumber \end{align}
which shows that Kummer equation has only two types of Stokes matrices $\widetilde{S}_{k}$ which are fundamentally different: one with $k$ odd and the other with $k$ even.
Here we choose to work with the fundamental solutions $\widetilde{Y}^{(\infty,-1)}(z)$ in the sector $\widetilde\Sigma_{-1}$ and $\widetilde{Y}^{(\infty,0)}(z)$ in the sector $\widetilde\Sigma_{0}$, and with the Stokes matrices $\widetilde S_{0}$ and $\widetilde S_{-1}$. The explicit forms of the Stokes matrices are derived in Appendix B, where the following lemma is proved:
\begin{lemma} \label{lemma:stokes} We have the following classical formulae:
\begin{align}\widetilde{S}_{0} &= \left(\begin{array}{cc} 1 & \frac{2 \pi i }{\Gamma(\beta)\Gamma(\beta+1-\gamma)} e^{i \pi (\gamma-2\beta)} \\ 0 & 1 \end{array} \right) \quad \text{and} \quad \widetilde{S}_{-1} &= \left(\begin{array}{cc} 1 & 0 \\ \frac{2 \pi i}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} & 1 \end{array} \right). \label{eq:kummers-1} \end{align} \end{lemma}
We choose to normalise our monodromy data with respect to the fundamental solution $\widetilde{Y}^{(\infty,0)}(z)$. Denote by $\gamma_{\infty,0}\left[\widetilde{Y}^{(\infty,0)}\right](z)$ the analytic continuation of $\widetilde{Y}^{(\infty,0)}(z)$ along an orientable curve $\gamma_{\infty,0} : [0,1] \rightarrow \mathbb{C}$ with $\gamma_{\infty,0}(0) \in \widetilde{\Sigma}_{0}$ and $\gamma_{\infty,0}(1) \in \widetilde{\Omega}_{0}$. We have,
\[\gamma_{\infty,0} \left[\widetilde{Y}^{(\infty,0)}\right](z) = \widetilde{Y}^{(0)}(z) \widetilde{C}^{0 \infty},\]
where,
\begin{align} \widetilde{C}^{0 \infty} &= \left(\begin{array}{cc} e^{i \pi (\beta-1)} \frac{\Gamma(\gamma-1)}{\Gamma(\gamma-\beta)} & -\frac{\Gamma(\gamma-1)}{\Gamma(\beta)} \\ e^{i \pi (\beta-\gamma)} \frac{\Gamma(1-\gamma)}{\Gamma(1-\beta)} & -\frac{\Gamma(1-\gamma)}{\Gamma(\beta+1-\gamma)} \end{array} \right). \label{eq:kummerc0} \end{align}
Denote by $\gamma_{0}\left[\widetilde{Y}^{(\infty,0)}\right](z)$ the analytic continuation of $\widetilde{Y}^{(\infty,0)}(z)$ along an orientable, closed curve $\gamma_{0} : [0,1] \rightarrow \mathbb{C}$ with $\gamma_{0}(0) = \gamma_{0}(1) \in \widetilde{\Sigma}_{0}$ which encircles the singularity $z = 0$ in the positive (anti-clockwise) direction. The curve $\gamma_{0}$ is illustrated in Figure \ref{fig:chgloops} below; note that $\gamma_{\infty}:=\gamma_{0}^{-1}$.
\begin{figure}[H] \begin{center}
\includegraphics[scale=.8]{chgloops3}
\caption{\label{fig:chgloops}Curves defining the monodromy matrices $\widetilde{M}_{k}$ of Kummer hypergeometric differential equation.}
\end{center} \end{figure}
We have,
\begin{align} \gamma_{k}\left[ \widetilde{Y}^{(\infty,0)}\right](z) = \widetilde{Y}^{(\infty,0)}(z)\widetilde{M}_{k}, \quad \quad k=0,\infty, \nonumber \end{align}
where,
\begin{align} \widetilde{M}_{0} = \left(\widetilde{C}^{0\infty}\right)^{-1} e^{2 \pi i \widetilde{\Theta}_{0}} \widetilde{C}^{0 \infty} \quad \text{and} \quad \widetilde{M}_{\infty} = \widetilde{S}_{0}e^{2 \pi i \widetilde{\Theta}_{\infty}}\widetilde{S}_{-1}. \label{eq:chgm} \end{align}
These matrices satisfy the cyclic relation,
\begin{align} \widetilde{M}_{\infty}\widetilde{M}_{0} = I. \label{eq:cycrel} \end{align}
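The cyclic relation can be partially sanity-checked numerically from the explicit Stokes matrices of Lemma \ref{lemma:stokes} alone: since $\widetilde{M}_{\infty} = \widetilde{M}_{0}^{-1}$ and the eigenvalues of $\widetilde{M}_{0}$ are $e^{2\pi i(1-\gamma)}$ and $1$, the matrix $\widetilde{S}_{0}e^{2\pi i \widetilde{\Theta}_{\infty}}\widetilde{S}_{-1}$ must have trace $1+e^{2\pi i \gamma}$ and determinant $e^{2\pi i(\gamma-1)}$. The following check (not part of the text) uses arbitrary non-resonant test values and the Python standard library.

```python
import cmath, math

beta, gamma_ = 0.7, 1.1   # arbitrary non-resonant test values
g, pi, I = math.gamma, math.pi, 1j

# Stokes multipliers from the explicit formulae for S_0 and S_{-1}.
s = 2 * pi * I / (g(beta) * g(beta + 1 - gamma_)) * cmath.exp(I * pi * (gamma_ - 2 * beta))
r = 2 * pi * I / (g(1 - beta) * g(gamma_ - beta))
e1 = cmath.exp(2 * pi * I * (gamma_ - beta))   # (1,1) entry of exp(2*pi*i*Theta_inf)
e2 = cmath.exp(2 * pi * I * (beta - 1))        # (2,2) entry

# M_inf = S_0 * exp(2*pi*i*Theta_inf) * S_{-1}, written out entrywise.
m11 = e1 + s * r * e2
m12 = s * e2
m21 = r * e2
m22 = e2

tr = m11 + m22
det = m11 * m22 - m12 * m21

# Consistency with the cyclic relation: M_inf = M_0^{-1} has eigenvalues
# exp(2*pi*i*(gamma-1)) and 1.
assert abs(tr - (1 + cmath.exp(2 * pi * I * gamma_))) < 1e-10
assert abs(det - cmath.exp(2 * pi * I * (gamma_ - 1))) < 1e-10
```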
\begin{definition} We define the monodromy data of Kummer hypergeometric differential equation (\ref{eq:kummer}) as the set,
\begin{align}
\widetilde{\mathcal{M}} := \left\{ \begin{matrix*}[r] \left(\widetilde{M}_{0} , \widetilde{S}_{0} , \widetilde{S}_{-1}\right) \\ \in \left(\text{GL}_{2}(\mathbb{C})\right)^{3} \end{matrix*} \left| \begin{array}{c} \widetilde{S}_{0} \text{ unipotent, upper triangular,} \\ \widetilde{S}_{-1} \text{ unipotent, lower triangular,} \\ \widetilde{S}_{0}e^{2 \pi i \widetilde{\Theta}_{\infty}} \widetilde{S}_{-1}\widetilde{M}_{0} = I, \\ \text{eigenv}\left(\widetilde{M}_{0}\right) = e^{2 \pi i \widetilde{\Theta}_{0}} \end{array} \right. \right\}_{\text{\Huge/\normalsize GL$_{2}(\mathbb{C})$}} \label{eq:chgmono} \end{align}
where eigenv$(\widetilde{M}_{0}) = e^{2\pi i \widetilde{\Theta}_{0}}$ means that the eigenvalues of $\widetilde{M}_{0}$ are given as the elements of the diagonal matrix $e^{2 \pi i \widetilde{\Theta}_{0}}$ and the quotient is by global conjugation by a diagonal matrix. \end{definition}
\section{Confluence from Gauss to Kummer equation} \label{sec:hgconf}
In this Section we analyse the confluence procedure from Gauss equation (\ref{eq:gauss}) to Kummer equation (\ref{eq:kummer}). We are primarily concerned with understanding how to produce the monodromy data of the Kummer equation, as defined in Section \ref{sec:kummon}, from the connection matrices of the Gauss equation (see Section \ref{sec:gg}), under the confluence procedure.
We first explain intuitively how the confluence procedure works. Applying the substitution $x=\frac{z}{\alpha}$
to the Gauss equation (\ref{eq:gauss}),
\begin{align} x(1-x) \ y''(x) + (\gamma - (\alpha+\beta+1)x) \ y'(x) - \alpha \beta \ y(x) &= 0, \nonumber \\
\Leftrightarrow \frac{z}{\alpha} \left( \frac{\alpha-z}{\alpha} \right) \alpha^{2} \ y_{zz} + \left(\gamma - (\alpha+\beta+1)\frac{z}{\alpha}\right) \alpha \ y_{z} - \alpha \beta \ y &= 0, \nonumber \\
\Leftrightarrow z \ y_{zz} + (\gamma-z) \ y_{z} - \beta \ y - \frac{1}{\alpha} \left(z^{2} y_{zz} + (\beta+1) z \ y_{z}\right) &=0. \nonumber \end{align}
we produce a differential equation with three Fuchsian singularities at $z=0, \alpha$ and $\infty$ respectively.
As a heuristic argument, one can see that the final equation becomes Kummer equation (\ref{eq:kummer}) as $\alpha \rightarrow \infty$: a double pole is created at $z=\infty$ as the two simple poles $z=\alpha$ and $z=\infty$ merge. This derivation does not explain how to obtain solutions of the Kummer equation by taking limits as $\alpha \rightarrow \infty$ of certain solutions of Gauss equation under the substitution $x=\frac{z}{\alpha}$. To understand this, we need to use a result by Glutsyuk \cite{glutsyuk}, which deals with limits of solutions at merging simple poles under a generic confluence procedure. This is explained in the next sub-section.
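The algebra of this substitution can be verified symbolically. The following sketch (using sympy; the symbol names are our own choice) checks that substituting $x=\frac{z}{\alpha}$ into the Gauss operator and dividing by $\alpha$ reproduces the displayed form:

```python
import sympy as sp

# Symbolic check that substituting x = z/alpha into the Gauss operator and
# dividing by alpha yields z y'' + (gamma - z) y' - beta y
#                          - (1/alpha)(z^2 y'' + (beta+1) z y').
z, alpha, beta, gamma = sp.symbols('z alpha beta gamma')
y = sp.Function('y')
yz, yzz = y(z).diff(z), y(z).diff(z, 2)

x = z / alpha  # the confluence substitution; d/dx = alpha * d/dz
gauss = (x * (1 - x) * alpha**2 * yzz
         + (gamma - (alpha + beta + 1) * x) * alpha * yz
         - alpha * beta * y(z))

target = (z * yzz + (gamma - z) * yz - beta * y(z)
          - (z**2 * yzz + (beta + 1) * z * yz) / alpha)

assert sp.simplify(sp.expand(gauss / alpha - target)) == 0
print("Gauss operator under x = z/alpha matches the displayed form")
```

The perturbation term carries the factor $\frac{1}{\alpha}$, which is why it disappears in the heuristic limit $\alpha \rightarrow \infty$.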
\subsection{A result by Glutsyuk} \label{sec:gl}
Consider the following differential equation,
\begin{align} \frac{\partial Y}{\partial \lambda} = \frac{A(\lambda, \varepsilon)}{(\lambda-\varepsilon)(\lambda+\varepsilon)} Y, \quad A(\lambda,\varepsilon) \in \text{GL}_{2}(\mathbb{C}), \label{eq:gl} \end{align}
with $A(\lambda,\varepsilon)$ a holomorphic matrix about $\lambda = \pm \varepsilon$ such that $A(\pm \varepsilon,\varepsilon) \neq 0$ for sufficiently small $\varepsilon \geq 0$, and satisfying the following limit,
\[\lim_{\varepsilon \rightarrow 0} A(\lambda,\varepsilon) = A(\lambda,0).\]
Hence, the non-perturbed, or \textit{confluent}, equation,
\begin{align} \frac{\partial Y}{\partial \lambda} = \frac{A(\lambda, 0)}{\lambda^{2}} Y, \label{eq:gl2} \end{align}
has an irregular singularity at $\lambda=0$ of Poincar\'{e} rank one. Moreover, it is assumed that the eigenvalues of the residue matrices $A(\pm \varepsilon,\varepsilon)$ at $\lambda = \pm \varepsilon$ are non-resonant and that the eigenvalues of the leading matrix of $A(\lambda,0)$ at $\lambda = 0$ are distinct.
We first deal with the perturbed equation (\ref{eq:gl}). We define neighbourhoods $\Omega_{\pm \varepsilon}$ of the points $\lambda = \pm \varepsilon$ respectively whose radii are less than $2 |\varepsilon|$ and with branch cuts made along the straight line passing through the points $\lambda = -\varepsilon , 0 , \varepsilon$, as illustrated in Figure \ref{fig:gl3} below. Equation (\ref{eq:gl}) has fundamental solutions $Y^{(\pm \varepsilon)}(\lambda)$ which are analytic in the cut disks $\Omega_{\pm \varepsilon}$ of the following form,
\begin{align} Y^{(\pm \varepsilon)}(\lambda) &= \left( \sum_{n=0}^{\infty} G_{n,\pm \varepsilon} (\lambda \mp \varepsilon)^{n} \right) (\lambda \mp \varepsilon)^{\Lambda_{\pm \varepsilon}}, &&\lambda \in \Omega_{\pm \varepsilon}, \nonumber \end{align}
where $G_{0,\pm \varepsilon}$ are fixed matrices which diagonalise the residue matrices $A(\pm \varepsilon,\varepsilon) $ and all other terms of the series are determined by certain recursion formulae.
\begin{figure}[H] \begin{center}
\includegraphics[scale=.5]{gl3}
\caption{\label{fig:gl3}An illustration of the neighbourhoods $\Omega_{\pm \varepsilon}$ with branch cuts in which we define the fundamental solutions $Y^{(\pm \varepsilon)}(\lambda)$.}
\end{center} \end{figure}
We now turn our attention to the confluent equation (\ref{eq:gl2}). Denote by $\mu_{1}$ and $\mu_{2}$ the eigenvalues of the leading matrix of $A(\lambda,0)$ at $\lambda=0$ (by assumption, $\mu_{1}\neq\mu_{2}$) and let,
\[r_{i,j}=\left\{ \lambda : \ \text{Re}\left( \frac{\mu_{i}-\mu_{j}}{\lambda} \right) = 0, \ \text{Im}\left(\frac{\mu_{i}-\mu_{j}}{\lambda} \right) >0 \right\}, \quad i,j\in \{1,2\},\]
be the Stokes rays. We denote by $\mathscr{S}_{0}$ and $\mathscr{S}_{1}$ open sectors whose union is a punctured neighbourhood of $\lambda=0$, each of which: has an opening greater than $\pi$; contains only one Stokes ray and does not contain the other Stokes ray at its boundary. An illustration of such Stokes rays and sectors is given below.
\begin{figure}[H] \begin{center}
\includegraphics[scale=.8]{gl}
\caption{\label{fig:gl}An illustration of the Stokes rays $r_{i,j}$ and sectors $\mathscr{S}_{0}$ and $\mathscr{S}_{1}$.}
\end{center} \end{figure}
We can cover all of the sheets of the Riemann surface of the logarithm at $\lambda = 0$ by extending the notation as follows,
\[\lambda \in \mathscr{S}_{k+2} \Leftrightarrow \lambda e^{-2 \pi i} \in \mathscr{S}_{k}.\]
From the standard theory of linear systems of ordinary differential equations, there exists a number $R$ sufficiently large such that, for all $k \in \mathbb{Z}$, there exist fundamental solutions $Y^{(0,k)}(\lambda)$ of the non-perturbed equation (\ref{eq:gl2}) analytic in the sectors $\mathscr{S}_{k}$
such that,
\[Y^{(0,k)}(\lambda) \sim \left( \sum_{n=0}^{\infty} H_{n} \lambda^{n} \right) \lambda^{\Theta_{0}} \text{exp}\left(\lambda^{-1} \left(\begin{array}{cc} \mu_{1} & 0 \\ 0 & \mu_{2} \end{array} \right) \right), \text{ as } \lambda \rightarrow 0, \ \lambda \in \mathscr{S}_{k},\]
where $H_{0}$ is a fixed matrix which diagonalises the leading term of $A(\lambda,0)$ at $\lambda=0$; all other terms of the series and the diagonal matrix $\Theta_{0}$ are uniquely determined by certain recursion relations. Each solution $Y^{(0,k)}(\lambda)$ is uniquely specified by the above asymptotic relation.
We define open sectors $\sigma_{\pm \varepsilon}(\varepsilon) \subset \Omega_{\pm \varepsilon}$ with base points at $\lambda = \pm \varepsilon$ respectively whose openings do not contain the branch cut between $-\varepsilon$ and $\varepsilon$ as illustrated in Figure \ref{fig:gl2} below.
\begin{figure}[H] \begin{center}
\includegraphics[scale=.8]{gl2}
\caption{\label{fig:gl2}An illustration of the sectors $\sigma_{\pm \varepsilon}(\varepsilon)$.}
\end{center} \end{figure}
We impose the condition that, as $\varepsilon \rightarrow 0$ along a ray, the sector $\sigma_{\varepsilon}(\varepsilon)$ (resp. $\sigma_{-\varepsilon}(\varepsilon)$) is translated along a ray to zero and becomes in agreement with the sector $\mathscr{S}_{k+1}$ (resp. $\mathscr{S}_{k}$), for some $k \in \mathbb{Z}$. We write this condition as follows,
\begin{align} \lim_{\varepsilon \rightarrow 0} \sigma_{\varepsilon}(\varepsilon) = \mathscr{S}_{k+1} \quad \text{and} \quad \lim_{\varepsilon \rightarrow 0} \sigma_{-\varepsilon}(\varepsilon) = \mathscr{S}_{k}. \label{eq:limits} \end{align}
\begin{theorem} \label{theorem:gl} Let the fundamental solutions $Y^{(\varepsilon)}(\lambda)$, $Y^{(-\varepsilon)}(\lambda)$ and $Y^{(0,k)}(\lambda)$ and the sectors $\sigma_{\varepsilon}(\varepsilon)$, $\sigma_{-\varepsilon}(\varepsilon)$ and $\mathscr{S}_{k}$ be defined as above. There exist diagonal matrices $K_{\varepsilon}$ and $K_{-\varepsilon}$ such that we have the following limits,
\begin{align} &\lim_{\varepsilon \rightarrow 0} \left. Y^{(\varepsilon)}(\lambda)\right|_{\lambda \in \sigma_{\varepsilon}(\varepsilon)} K_{\varepsilon} = Y^{(0,k+1)}(\lambda), \nonumber \\
&\lim_{\varepsilon \rightarrow 0} \left. Y^{(-\varepsilon)}(\lambda)\right|_{\lambda \in \sigma_{-\varepsilon}(\varepsilon)} K_{-\varepsilon} = Y^{(0,k)}(\lambda), \nonumber \end{align}
uniformly for $\lambda \in \mathscr{S}_{k+1}$, $\mathscr{S}_{k}$ respectively, as $\varepsilon \rightarrow 0$ along a fixed ray. \end{theorem}
\begin{remark} It is well-known that, when solving a linear ordinary differential equation around a Fuchsian singular point, the maximal radius we may take for the neighbourhood on which we can define an analytic solution is the distance to the nearest singularity. For the perturbed equation (\ref{eq:gl}), as $\varepsilon$ becomes arbitrarily small it is clear from the hypotheses on $A(\lambda,\varepsilon)$ that the closest singularity to $\lambda = \pm \varepsilon$ will be $\lambda = \mp \varepsilon$ respectively. We have illustrated the domains $\Omega_{\pm \varepsilon}$ in Figure \ref{fig:gl3} with the maximal radii for which it is possible to define analytic solutions. Observe that the neighbourhoods of analyticity of the fundamental solutions diminish as $\varepsilon \rightarrow 0$. The key advantage of restricting the fundamental solutions $Y^{(\pm \varepsilon)}(\lambda)$ to the sectors $\sigma_{\pm \varepsilon}(\varepsilon)$ as drawn in Figure \ref{fig:gl2}, rather than the neighbourhoods $\Omega_{\pm \varepsilon}$, is that the radii of these sectors need not be restricted to the distance to the nearest singularity. Indeed, by construction, the singularity $\lambda = \pm \varepsilon$ will not be inside the sector $\sigma_{\mp \varepsilon}(\varepsilon)$ respectively. In particular, this means that the radii of these sectors need not vanish. \end{remark}
By the same reasoning as in the previous remark, it is without loss of generality that we may assume $\sigma_{\varepsilon}(\varepsilon) \cap \sigma_{-\varepsilon}(\varepsilon) \neq \varnothing$ for $\varepsilon$ sufficiently close to zero. Accordingly, since we have two fundamental solutions defined on this intersection, they must be related by multiplication by a constant invertible matrix on the right, namely,
\begin{align} Y^{(\varepsilon)}(\lambda) = Y^{(-\varepsilon)}(\lambda) C, \quad \quad \lambda \in \sigma_{\varepsilon}(\varepsilon) \cap \sigma_{-\varepsilon}(\varepsilon), \label{eq:connection} \end{align}
for some connection matrix $C \in \text{GL}_{2}(\mathbb{C})$. Similarly, the two fundamental solutions $Y^{(0,0)}(\lambda)$ and $Y^{(0,1)}(\lambda)$ of the confluent equation must be related to each other by multiplication by a constant invertible matrix on the right on the intersection $\mathscr{S}_{0} \cap \mathscr{S}_{1}$, namely,
\begin{align} Y^{(0,1)}(\lambda) = Y^{(0,0)}(\lambda) S, \quad \quad \lambda \in \mathscr{S}_{0} \cap \mathscr{S}_{1}, \label{eq:stokesmatrix} \end{align}
for some Stokes matrix $S \in \text{GL}_{2}(\mathbb{C})$.
\begin{corollary} \label{corollary:glcorollary} Let the fundamental solutions $Y^{(\varepsilon)}(\lambda)$, $Y^{(-\varepsilon)}(\lambda)$ and $Y^{(0,k)}(\lambda)$ and the sectors $\sigma_{\varepsilon}(\varepsilon)$, $\sigma_{-\varepsilon}(\varepsilon)$ and $\mathscr{S}_{k}$ be defined as above; let $K_{\pm \varepsilon}$ be matrices satisfying Theorem \ref{theorem:gl} and let $C$ and $S$ be the matrices defined by (\ref{eq:connection}) and (\ref{eq:stokesmatrix}) respectively. We have the following limit,
\begin{align} \lim_{\varepsilon \rightarrow 0} K_{-\varepsilon}^{-1} C K_{\varepsilon} = S, \label{eq:sat} \end{align}
as $\varepsilon \rightarrow 0$ along a fixed ray. \end{corollary}
In (\ref{eq:sat}) it is clear how to obtain one of the Stokes matrices at the point $\lambda=0$ of the confluent equation. In order to obtain the second Stokes matrix we take $\varepsilon \rightarrow 0$ along the opposite ray to the one already considered. Rather than having the limits in (\ref{eq:limits}), we would now have, for example, that $\sigma_{\varepsilon}(\varepsilon)$ tends to $\mathscr{S}_{k}$ and $\sigma_{-\varepsilon}(\varepsilon)$ tends to $\mathscr{S}_{k-1}$. In this way, we use the limit in (\ref{eq:sat}) to produce the other Stokes matrix. We will explain all of these details and calculate everything explicitly for each of the cases we consider.
\subsubsection{Limits of solutions} \label{sec:331}
As outlined above, our confluence procedure is to introduce the new variable $z$ by the substitution $x = \frac{z}{\alpha}$ and take the limit $\alpha \rightarrow \infty$. For the remainder of this chapter we must be careful about the way in which $\alpha$ tends to infinity; for example, it would be inconvenient for us if $\alpha$ spiralled towards infinity. We will consider two limits along fixed rays: one with $\arg(\alpha) = \frac{\pi}{2}$ and the other with $\arg(\alpha) = -\frac{\pi}{2}$.
\subsubsection{Obtaining the solutions $\widetilde{Y}^{(\infty,k)}(z)$}
We now turn our attention to the main problem of how to obtain fundamental solutions at the double pole of the confluent equation from solutions at the merging simple poles of the original equation. We first examine the behaviour of the fundamental solutions at $x=\infty$, as given in (\ref{eq:yinf}). Observe that these solutions are expressed using the Gauss $\ _{2}F_{1}$ series in the variable $x^{-1} \equiv \frac{\alpha}{z}$, which diverge for $|x^{-1}|>1 \Leftrightarrow |z| < |\alpha|$. In this case, we clearly do not have uniform convergence with respect to $\alpha$ and we need to use Glutsyuk's Theorem \ref{theorem:gl}.
The fundamental set of solutions (\ref{eq:yinf}) is written in canonical form. However, we will rewrite the solution $y_{1}^{(\infty)}(x)$ using one of the Kummer relations as follows,
\begin{align} y_{1}^{(\infty)}(x) &= (-x)^{-\alpha} \ _{2}F_{1} \left(\begin{array}{c} \alpha , \ \alpha+1-\gamma \\ \alpha+1-\beta \end{array} ; x^{-1} \right), &&x \in \Omega_{\infty}, \nonumber \\
&= (-x)^{\beta - \gamma}(1-x)^{\gamma-\alpha-\beta} \ _{2}F_{1} \left(\begin{array}{c} 1-\beta , \ \gamma-\beta \\ \alpha+1-\beta \end{array} ; x^{-1} \right), &&x \in \widehat{\Omega}_{\infty}, \label{eq:kummerrelation} \end{align}
where the new domain $\widehat{\Omega}_{\infty}$ is defined as,
\[\widehat{\Omega}_{\infty} = \left\{x : |x|>1, \ -\pi \leq \text{arg}(-x) < \pi, \ -\pi \leq \text{arg}(1-x) < \pi\right\}.\]
There is no need to rewrite the solution $y_{2}^{(\infty)}(x)$ as given in (\ref{eq:yinf}) as it is already in a suitable form; this is explained in Lemma \ref{lemma:obtaining} below. We note that the above two forms of the solution $y_{1}^{(\infty)}(x)$ are equivalent on the domain $\Omega_{\infty} \cap \widehat{\Omega}_{\infty}$. The condition imposed on arg$(1-x)$ in $\widehat{\Omega}_{\infty}$ is only necessary to deal with the term $(1-x)^{\gamma-\alpha-\beta}$. After making the substitution $x=\frac{z}{\alpha}$ and taking the limit $\alpha \rightarrow \infty$, we have
\begin{align}
\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} &= \text{exp}\left((\gamma-\alpha-\beta)\log\left(1-\frac{z}{\alpha}\right) \right), \nonumber \\
&= \text{exp}\left((\gamma-\alpha-\beta) \left( -\frac{z}{\alpha} + \mathcal{O}\left(\alpha^{-2}\right)\right)\right), \nonumber \\
&= e^{z} \left(1+\mathcal{O}\left(\alpha^{-1}\right)\right). \label{eq:k3}
\end{align}
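The limit (\ref{eq:k3}) can be checked numerically. A minimal sketch, with illustrative parameter values of our own choosing and $\alpha \rightarrow \infty$ along the ray $\arg(\alpha)=\frac{\pi}{2}$:

```python
import cmath

# Numerical sketch of (1 - z/alpha)^(gamma - alpha - beta) -> e^z as
# alpha -> infinity along arg(alpha) = pi/2 (parameter values are illustrative).
z, beta, gamma = 0.7 + 0.3j, 1.3, 0.4

def power_term(alpha):
    # principal branch: w^s = exp(s log w); here 1 - z/alpha stays near 1
    return cmath.exp((gamma - alpha - beta) * cmath.log(1 - z / alpha))

errs = [abs(power_term(R * 1j) - cmath.exp(z)) for R in (1e3, 1e6)]
print(errs)  # the error shrinks like O(1/|alpha|)
```

The first-order error term visible here is exactly the $\mathcal{O}\left(\alpha^{-1}\right)$ correction in (\ref{eq:k3}).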
This computation shows how to asymptotically pass from power-like behaviour to exponential behaviour as $\alpha \rightarrow \infty$. Moreover, with this new form of $y_{1}^{(\infty)}(x)$ we are ready to state the following lemma.
\begin{lemma} \label{lemma:obtaining} Let $y_{2}^{(\infty)}(x)$ be given by (\ref{eq:yinf}) and $y_{1}^{(\infty)}(x)$ be given in its new form by (\ref{eq:kummerrelation}). After the substitution $x=\frac{z}{\alpha}$, the terms of these series tend to the terms in the formal series solutions $\tilde{y}_{1,f}^{(\infty)}(z)$ and $\tilde{y}_{2,f}^{(\infty)}(z)$ as given by (\ref{eq:yformal}), namely we have the following limits:
\begin{align}&\lim_{\alpha \rightarrow \infty} \frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!z^{n}} = \frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!z^{n}}, \nonumber \\
&\lim_{\alpha \rightarrow \infty} \frac{(\beta)_{n} (\beta+1-\gamma)_{n}\alpha^{n}}{(\beta+1-\alpha)_{n}n!z^{n}} = (-1)^{n}\frac{(\beta)_{n}(\beta+1-\gamma)_{n}}{n!z^{n}}. \nonumber \end{align} \end{lemma}
\begin{proof} By direct computation, using
\begin{align} \frac{\alpha^{n}}{(\alpha+1-\beta)_{n}} = 1 + \mathcal{O}\left(\alpha^{-1}\right) \quad \text{and} \quad \frac{\alpha^{n}}{(\beta+1-\alpha)_{n}} = (-1)^{n} + \mathcal{O}\left(\alpha^{-1}\right). \nonumber \end{align} \end{proof}
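The two asymptotic ratios in this proof can be confirmed numerically. A sketch with illustrative parameter values (not taken from the text), implementing the Pochhammer symbol directly:

```python
# Numerical sketch of alpha^n / (alpha+1-beta)_n -> 1 and
# alpha^n / (beta+1-alpha)_n -> (-1)^n (illustrative parameter values).
def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    out = 1.0 + 0.0j
    for k in range(n):
        out *= a + k
    return out

beta, n = 0.6 + 0.2j, 5
errs = []
for R in (1e4, 1e7):
    alpha = -1j * R                       # arg(alpha) = -pi/2
    r1 = alpha**n / poch(alpha + 1 - beta, n)
    r2 = alpha**n / poch(beta + 1 - alpha, n)
    errs.append(max(abs(r1 - 1), abs(r2 - (-1)**n)))
print(errs)  # both ratios approach their limits like O(1/|alpha|)
```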
\begin{remark} \label{remark:r1} Lemma \ref{lemma:obtaining} is stated in terms of the solutions of the \textit{scalar} hypergeometric equations (\ref{eq:gauss}) and (\ref{eq:kummer}). From the viewpoint of working with the $(2 \times 2)$ equations (\ref{eq:hg1}) and (\ref{eq:chg1}), we rewrite the solution $Y^{(\infty)}(x)$, as given in (\ref{eq:infinity}), as follows,
\begin{align} Y^{(\infty)}(x) &= R_{\infty} \sum_{n=0}^{\infty} g_{n,\infty}x^{-n} (-x)^{-\Theta_{\infty}}, &&x \in \Omega_{\infty}, \nonumber \\
&= R_{\infty} \sum_{n=0}^{\infty} \widehat{g}_{n,\infty}x^{-n} (-x)^{-\Theta_{\infty}-\Theta_{1}}(1-x)^{\Theta_{1}}, &&x \in \widehat{\Omega}_{\infty}, \label{eq:kummerrelation2} \end{align}
where $\widehat{g}_{0,\infty}=I$ and we find all other coefficients $\widehat{g}_{n,\infty}$, $n \geq 1$, from the recursive relation,
\[n \widehat{g}_{n,\infty} + \left[\widehat{g}_{n, \infty}, \Theta_{\infty} \right] = -R_{\infty}^{-1}A_{1}R_{\infty}\sum_{l=0}^{n-1}\widehat{g}_{l,\infty} + \sum_{l=0}^{n-1}\widehat{g}_{l,\infty}\Theta_{1}.\]
This recursion equation only differs from that for $g_{n,\infty}$, given in the proof of Lemma \ref{lemma:cup}, by the final summation term. We find the solution to this equation is,
\begin{align} \widehat{g}_{n,\infty} = \left(\begin{array}{cc} \frac{(1-\beta)_{n}(\gamma-\beta)_{n}}{(\alpha+1-\beta)_{n}n!} & - \frac{(\beta)_{n-1}(\beta+1-\gamma)_{n-1}}{(\beta+1-\alpha)_{n-1}(n-1)!} \\ \frac{\alpha(1-\beta)(\beta-\gamma)(\alpha+1-\gamma)}{(\alpha-\beta)(\alpha+1-\beta)^{2}(\alpha+2-\beta)} \frac{(2-\beta)_{n-1}(\gamma+1-\beta)_{n-1}}{(\alpha+3-\beta)_{n-1}(n-1)!} & \frac{(\beta-1)_{n}(\beta-\gamma)_{n}}{(\beta-\alpha-1)_{n}n!} \end{array} \right). \label{eq:gninf} \end{align}
The transformation (\ref{eq:kummerrelation2}) is analogous to the Kummer relation (\ref{eq:kummerrelation}). We note that,
\begin{align} Y^{(\infty)}\left(\frac{z}{\alpha}\right) &= R_{\infty} \sum_{n=0}^{\infty} \widehat{g}_{n,\infty}\alpha^{n}z^{-n} \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta}z^{\beta-\gamma}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} & 0 \\ 0 & (-\alpha)^{\beta-1}z^{1-\beta} \end{array} \right), \nonumber \\
&\equiv R_{\infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha^{-1} \end{array} \right) \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha \end{array} \right) \sum_{n=0}^{\infty} \widehat{g}_{n,\infty}\alpha^{n}z^{-n} \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha^{-1} \end{array} \right) \nonumber \\
&\quad \quad \quad \quad \quad \quad \quad \quad \left(\begin{array}{cc} z^{\beta-\gamma}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} & 0 \\ 0 & z^{1-\beta} \end{array} \right) \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right). \nonumber \end{align}
The limits analogous to those in Lemma \ref{lemma:obtaining} are stated as follows: we have the following limit of the leading matrix,
\begin{align} \lim_{\alpha \rightarrow \infty} R_{\infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha^{-1} \end{array} \right) &= \lim_{\alpha \rightarrow \infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & \frac{(\beta-\alpha)(\alpha+1-\beta)}{\alpha(\beta-1)(\beta-\gamma)}\end{array}\right)\left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha^{-1} \end{array} \right), \nonumber \\
&= \left(\begin{array}{cc} 1 & 0\\ 0 & \frac{-1}{(\beta-1)(\beta-\gamma)}\end{array} \right) = \widetilde{R}_{\infty}, \nonumber \end{align}
and for the terms of the new series,
\[\lim_{\alpha \rightarrow \infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha \end{array} \right) \alpha^{n} \widehat{g}_{n,\infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & \alpha^{-1} \end{array} \right) = h_{n,\infty},\]
where $\widehat{g}_{n,\infty}$ and $h_{n,\infty}$ are given by (\ref{eq:gninf}) and (\ref{eq:hninf}) respectively. Hence, we understand that a \textit{term-by-term} limit of the solution,
\[Y^{(\infty)}\left(\frac{z}{\alpha}\right) \ \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array} \right),\]
produces the formal solution $\widetilde{Y}^{(\infty)}_{f}(z)$, which is analogous to (\ref{eq:chgasy2}). \end{remark}
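The limit of the leading matrix in the preceding remark reduces to a limit of its $(2,2)$ entry, which is easy to confirm numerically; a sketch with illustrative parameter values of our own choosing:

```python
# Numerical sketch: the (2,2) entry of R_infinity diag(1, 1/alpha),
# namely (beta-alpha)(alpha+1-beta) / (alpha^2 (beta-1)(beta-gamma)),
# tends to -1/((beta-1)(beta-gamma)) as alpha -> infinity.
beta, gamma = 0.6 + 0.2j, 1.1

def entry22(alpha):
    return (beta - alpha) * (alpha + 1 - beta) / (
        alpha**2 * (beta - 1) * (beta - gamma))

target = -1 / ((beta - 1) * (beta - gamma))
errs = [abs(entry22(1j * R) - target) for R in (1e4, 1e7)]
print(errs)  # the error decays like O(1/|alpha|)
```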
We now turn our attention to the fundamental solutions at $x=1$, as given in canonical form in (\ref{eq:y1}). Observe that these solutions are expressed using Gauss hypergeometric $\ _{2}F_{1}$ series in the variable $(1-x) \equiv (1-\frac{z}{\alpha})$, which diverge for $|1-x|>1 \Leftrightarrow |z-\alpha|>|\alpha|$. As with the fundamental solutions at $x=\infty$, we do not have uniform convergence with respect to $\alpha$ here. Rather than keeping these solutions in canonical form, we use two more of the Kummer relations to rewrite them as follows,
\begin{align} y_{1}^{(1)}(x) &= (1-x)^{\gamma-\alpha-\beta} \ _{2}F_{1}\left(\begin{array}{c} \gamma-\alpha , \ \gamma - \beta \\ \gamma+1-\alpha-\beta \end{array} ; 1-x \right) &&x \in \Omega_{1}, \nonumber \\
&= x^{\beta-\gamma}(1-x)^{\gamma-\alpha-\beta} \ _{2}F_{1}\left(\begin{array}{c} \gamma-\beta , \ 1-\beta \\ \gamma+1-\alpha-\beta \end{array} ; 1-x^{-1} \right), &&x \in \widehat{\Omega}_{1}, \label{eq:kummerrelation3} \\
y_{2}^{(1)}(x) &= \ _{2}F_{1} \left(\begin{array}{c} \alpha , \ \beta \\ \alpha+\beta+1-\gamma \end{array} ; 1-x \right) &&x \in \Omega_{1}, \nonumber \\
&= x^{-\beta} \ _{2}F_{1} \left(\begin{array}{c} \beta+1-\gamma , \ \beta \\ \alpha+\beta+1-\gamma \end{array} ; 1-x^{-1} \right), &&x \in \widehat{\Omega}_{1}, \label{eq:kummerrelation4} \end{align}
where the new domain $\widehat{\Omega}_{1}$ is defined as,
\[\widehat{\Omega}_{1} = \left\{ x : \left|1-x^{-1}\right| < 1, \ -\pi \leq \text{arg}(x) < \pi, \ -\pi \leq \text{arg}(1-x) < \pi\right\}.\]
We note that the two forms of these solutions are equivalent on the domain $\Omega_{1}\cap\widehat{\Omega}_{1}$. There is a simple conceptual reason why we rewrite the series in these solutions with $(1-x^{-1})^{n}$, rather than $(1-x)^{n}$: after the change of variable $x=\frac{z}{\alpha}$, we want to produce a formal series in $z^{-n}$. Similarly as before, the computations ending in (\ref{eq:k3}) show how the solution $y_{1}^{(1)}(x)$ asymptotically passes from power-like behaviour to exponential behaviour as $\alpha \rightarrow \infty$. Moreover, the terms of the series in these new forms of $y_{1}^{(1)}(x)$ and $y_{2}^{(1)}(x)$ satisfy the lemma below.
\begin{lemma} \label{lemma:obtaining2} Let $y_{1}^{(1)}(x)$ and $y_{2}^{(1)}(x)$ be given in their new forms by (\ref{eq:kummerrelation3}) and (\ref{eq:kummerrelation4}) respectively. After the substitution $x = \frac{z}{\alpha}$, the terms of these series tend to the terms in the formal series solutions $\tilde{y}_{1,f}^{(\infty)}(z)$ and $\tilde{y}_{2,f}^{(\infty)}(z)$ as given by (\ref{eq:yformal}), namely we have the following limits:
\begin{align} &\lim_{\alpha \rightarrow \infty} \frac{(\gamma-\beta)_{n}(1-\beta)_{n}(z-\alpha)^{n}}{(\gamma+1-\alpha-\beta)_{n}n!z^{n}} = \frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!z^{n}}, \nonumber \\
&\lim_{\alpha \rightarrow \infty} \frac{(\beta+1-\gamma)_{n}(\beta)_{n}(z-\alpha)^{n}}{(\alpha+\beta+1-\gamma)_{n}n!z^{n}} = (-1)^{n}\frac{(\beta)_{n}(\beta+1-\gamma)_{n}}{n!z^{n}}. \nonumber \end{align} \end{lemma}
\begin{proof} By direct computation, after expanding the powers of $(z-\alpha)$ and the Pochhammer symbols to find,
\begin{align} \frac{(z-\alpha)^{n}}{(\gamma+1-\alpha-\beta)_{n}} = 1 + \mathcal{O}\left(\alpha^{-1}\right) \quad \text{and} \quad \frac{(z-\alpha)^{n}}{(\alpha+\beta+1-\gamma)_{n}} = (-1)^{n} + \mathcal{O}\left(\alpha^{-1}\right). \nonumber \end{align} \end{proof}
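As with Lemma \ref{lemma:obtaining}, the asymptotic ratios in this proof can be confirmed numerically; a sketch with illustrative parameter values and $\arg(\alpha)=\frac{\pi}{2}$:

```python
# Numerical sketch of (z-alpha)^n / (gamma+1-alpha-beta)_n -> 1 and
# (z-alpha)^n / (alpha+beta+1-gamma)_n -> (-1)^n (illustrative values).
def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    out = 1.0 + 0.0j
    for k in range(n):
        out *= a + k
    return out

z, beta, gamma, n = 2.0 - 0.5j, 0.6, 1.1, 4
errs = []
for R in (1e4, 1e7):
    alpha = 1j * R                        # arg(alpha) = pi/2
    r1 = (z - alpha)**n / poch(gamma + 1 - alpha - beta, n)
    r2 = (z - alpha)**n / poch(alpha + beta + 1 - gamma, n)
    errs.append(max(abs(r1 - 1), abs(r2 - (-1)**n)))
print(errs)  # both ratios approach their limits like O(1/|alpha|)
```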
This lemma shows that \textit{term-by-term} limits of the solutions,
\begin{align} y^{(1)}_{1}(z \alpha^{-1}) \ \alpha^{\beta-\gamma} \quad \text{and} \quad -y_{2}^{(1)}(z \alpha^{-1}) \ \alpha^{-\beta}, \label{eq:analogous2} \end{align}
produce the formal solutions,
\[\tilde{y}_{1,f}^{(\infty)}(z) \quad \text{and} \quad \tilde{y}_{2,f}^{(\infty)}(z),\]
respectively. The factors $\alpha^{\beta-\gamma}$ and $\alpha^{-\beta}$ in (\ref{eq:analogous2}) are necessary because of the terms,
\[x^{\beta-\gamma} \equiv z^{\beta-\gamma} \alpha^{\gamma-\beta} \quad \text{and} \quad x^{-\beta} \equiv z^{-\beta} \alpha^{\beta},\]
in the solutions $y_{1}^{(1)}(x)$ and $y_{2}^{(1)}(x)$ respectively. We note that the direction in which $\alpha \rightarrow \infty$ is not yet important for this lemma. The importance of this lemma is shown in the proof of our Main Theorem \ref{main:importance}.
\begin{remark} \label{remark:r2} Similarly as in Remark \ref{remark:r1}, we may consider the viewpoint of working with the $(2\times2)$ equations (\ref{eq:hg1}) and (\ref{eq:chg1}) and rewrite the solution $Y^{(1)}(x)$, as given in (\ref{eq:ak2}), as follows,
\begin{align} Y^{(1)}(x) &= R_{1} \sum_{n=0}^{\infty} g_{n,1} (1-x)^{n} (1-x)^{\Theta_{1}}, &&x \in \Omega_{1}, \nonumber \\
&= R_{1} \sum_{n=0}^{\infty} \widehat{g}_{n,1} \left(1-x^{-1}\right)^{n} x^{-\Theta_{\infty}-\Theta_{1}}(1-x)^{\Theta_{1}}, &&x \in \widehat{\Omega}_{1}, \label{eq:kummerrelation5} \end{align}
where $\widehat{g}_{0,1} = I$ and we find all other coefficients $\widehat{g}_{n,1}$, $n \geq 1$, from the recursive equation,
\[\left[ \widehat{g}_{n,1} , \Theta_{1} \right] + n \widehat{g}_{n,1} = (n-1) \widehat{g}_{n-1,1} + \widehat{g}_{n-1,1}(\Theta_{1}+\Theta_{\infty}) + R_{1}^{-1}A_{0}R_{1}\widehat{g}_{n-1,1}.\]
This recursion equation differs quite significantly from that for $g_{n,1}$, given in the proof of Lemma \ref{lemma:cup}. We find the solution to this equation is,
\begin{align} \widehat{g}_{n,1} &= \left(\begin{array}{cc} \frac{(1-\beta)_{n}(\gamma-\beta)_{n}}{(\gamma+1-\alpha-\beta)_{n}n!} & \frac{(\beta)_{n}(\beta+1-\gamma)_{n}}{(\alpha+\beta+1-\gamma)_{n}n!}-\frac{(\beta)_{n-1}(\beta+1-\gamma)_{n-1}}{(\alpha+\beta+1-\gamma)_{n-1}(n-1)!} \\ \frac{1}{\alpha} \left(\frac{(2-\beta)_{n}(\gamma+1-\beta)_{n}}{(\gamma+1-\alpha-\beta)_{n}n!}-\frac{(2-\beta)_{n-1}(\gamma+1-\beta)_{n-1}}{(\gamma+1-\alpha-\beta)_{n-1}(n-1)!}\right) & \frac{\alpha+1-\gamma}{(\beta-1)(\beta-\gamma)}\frac{(\beta-1)_{n}(\beta-\gamma)_{n}}{(\alpha+\beta+1-\gamma)_{n}n!} \end{array} \right). \label{eq:gn1} \end{align}
The transformation (\ref{eq:kummerrelation5}) is analogous to the Kummer relations (\ref{eq:kummerrelation3}) and (\ref{eq:kummerrelation4}). We note that,
\begin{align} Y^{(1)}\left(\frac{z}{\alpha}\right) &= R_{1} \sum_{n=0}^{\infty} \widehat{g}_{n,1}\left(1-\frac{\alpha}{z}\right)^{n} \left(\begin{array}{cc} \alpha^{\gamma-\beta}z^{\beta-\gamma}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} & 0 \\ 0 & \alpha^{\beta-1}z^{1-\beta} \end{array} \right), \nonumber \\
&\equiv R_{1} \left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha^{-1} \end{array} \right)\left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha \end{array} \right) \sum_{n=0}^{\infty} \widehat{g}_{n,1} \left(1-\frac{\alpha}{z}\right)^{n} \left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha^{-1} \end{array} \right) \nonumber \\
&\quad \quad \quad \quad \quad \quad \quad \quad \left(\begin{array}{cc} z^{\beta-\gamma}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} & 0 \\ 0 & z^{1-\beta} \end{array} \right) \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array} \right). \nonumber \end{align}
The limits analogous to those in Lemma \ref{lemma:obtaining2} are stated as follows: we have the following limit of the leading matrix,
\begin{align} \lim_{\alpha \rightarrow \infty} R_{1} \left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha^{-1}\end{array} \right) &= \lim_{\alpha \rightarrow \infty} \left(\begin{array}{cc} 1 & 1 \\ \frac{1}{\alpha} & \frac{\alpha+1-\gamma}{(\beta-1)(\beta-\gamma)}\end{array} \right) \left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha^{-1} \end{array} \right), \nonumber \\
&= \left(\begin{array}{cc} 1 & 0 \\ 0 & \frac{-1}{(\beta-1)(\beta-\gamma)} \end{array} \right) = \widetilde{R}_{\infty},\nonumber \end{align}
and for the terms of the new series,
\begin{align} \lim_{\alpha \rightarrow \infty} \left(\begin{array}{cc} 1 & 0 \\ 0 & - \alpha \end{array} \right) (-\alpha)^{n} \widehat{g}_{n,1} \left(\begin{array}{cc} 1 & 0 \\ 0 & -\alpha^{-1} \end{array} \right) = h_{n,\infty},\nonumber \end{align}
where $\widehat{g}_{n,1}$ and $h_{n,\infty}$ are given by (\ref{eq:gn1}) and (\ref{eq:hninf}) respectively. Hence, we understand that a \textit{term-by-term} limit of the solution,
\[Y^{(1)}\left(\frac{z}{\alpha}\right) \ \left(\begin{array}{cc} \alpha^{\beta-\gamma} & 0 \\ 0 & -\alpha^{-\beta} \end{array} \right),\]
produces the formal solution $\widetilde{Y}^{(\infty)}_{f}(z)$, which is analogous to (\ref{eq:analogous2}). \end{remark}
Having understood how to take term-by-term limits of the series solutions of Gauss equation around $x=1$ and $\infty$ to produce the formal solutions of Kummer equation around $z=\infty$, we now show how to apply Glutsyuk's Theorem \ref{theorem:gl} to Gauss hypergeometric equation. Let $\eta \in \left(0,\frac{\pi}{2}\right)$ be some fixed value. We define the following sectors,
\begin{align} \widetilde{\mathscr{S}}_{k} &:= \left\{ z : \text{arg}(z) -k \pi \in \left(\eta-\frac{\pi}{2} , \frac{3\pi}{2} - \eta \right)\right\}, \label{eq:sectors3} \end{align}
we note that if $z \in \widetilde{\mathscr{S}}_{k}$ then $z \in \widetilde{\Sigma}_{k}$. The presence of $\eta$ is to ensure that the boundaries of the sectors $\widetilde{\mathscr{S}}_{k}$ do not contain a Stokes ray, as is necessary in the hypothesis of Glutsyuk's Theorem \ref{theorem:gl}. We note that this condition is not satisfied by the sectors $\widetilde{\Sigma}_{k}$ defined in Theorem \ref{theorem:kummerst}, which are the maximal sectors on which we can define single-valued analytic fundamental solutions. \\
We also define the following sectors,
\begin{align} \sigma_{\alpha}(\alpha) &:= \left\{ z : \begin{array}{c} \left|1-\frac{\alpha}{z}\right|< |\alpha|^{2} , \ \text{arg}\left(\frac{z}{\alpha}\right) \in (\eta-\pi,\pi-\eta), \\ \text{arg}\left(1-\frac{z}{\alpha}\right) \in (\eta-\pi , \pi - \eta) \end{array} \right\}, \label{eq:sectors1} \\
\sigma_{\infty}(\alpha) &:= \left\{ z : \begin{array}{c} \text{arg}\left(-z\alpha^{-1}\right) \in (\eta -\pi , \pi-\eta), \\ \text{arg}\left(1-\frac{z}{\alpha}\right)\in(\eta-\pi,\pi-\eta) \end{array} \right\}. \label{eq:sectors2} \end{align}
We note that if $z$ is sufficiently close to $\alpha$ with $z \in \sigma_{\alpha}(\alpha)$ then $x = \frac{z}{\alpha} \in \widehat{\Omega}_{1}$ and if $z$ is sufficiently large with $z \in \sigma_{\infty}(\alpha)$ then $x = \frac{z}{\alpha} \in \widehat{\Omega}_{\infty}$. These sectors will be the new domains of our solutions $y_{1}^{(1)}(z \alpha^{-1})$, $y_{2}^{(1)}(z\alpha^{-1})$ and $y_{1}^{(\infty)}(z \alpha^{-1})$, $y_{2}^{(\infty)}(z\alpha^{-1})$ respectively; they are illustrated below.
\begin{figure}[H] \begin{center}
\includegraphics[scale=1]{hgsectors}
\caption{\label{fig:sss}Sectors $\sigma_{\alpha}(\alpha)$ and $\sigma_{\infty}(\alpha)$.}
\end{center} \end{figure}
Compared with the domains $\widehat{\Omega}_{1}$ and $\widehat{\Omega}_{\infty}$, which are disks with branch cuts, the sectors $\sigma_{\alpha}(\alpha)$ and $\sigma_{\infty}(\alpha)$ have larger radii and do not contain any part of the branch cut between $\alpha$ and $\infty$. We can analytically extend our solutions $y_{k}^{(1)}(z \alpha^{-1})$ and $y_{k}^{(\infty)}(z \alpha^{-1})$, $k=1,2$, to these larger domains because the singularity $z=\infty$ (resp. $z=\alpha$) can never lie inside the sector $\sigma_{\alpha}(\alpha)$ (resp. $\sigma_{\infty}(\alpha)$) or on its boundary. That is the key reason to restrict our solutions to sectors rather than disks. \\
We examine the sector $\sigma_{\alpha}(\alpha)$ more closely. From the first condition,
\[\left|1-\frac{\alpha}{z} \right| < |\alpha|^{2} \quad \Leftrightarrow \quad \left|\frac{1}{\alpha}-\frac{1}{z}\right| < |\alpha|,\]
observe that as $\alpha \rightarrow \infty$ the radius of this sector becomes infinite; indeed, the above inequality becomes simply $|z|>0$. Furthermore, as $\alpha \rightarrow \infty$ along a ray, the base point of the sector $\sigma_{\alpha}(\alpha)$ is translated along that ray, tending to infinity. We illustrate this phenomenon in Figure \ref{fig:new} below.
\begin{figure}[H] \begin{center}
\includegraphics[scale=.7]{HypergeometricGlutsyukSectors}
\caption{\label{fig:new}As $\alpha \rightarrow \infty$ along a ray, the sector $\sigma_{\alpha}(\alpha)$ is translated along the branch cut and becomes in agreement with the sector $\widetilde{\Phi} := \left\{ z : \left| \text{arg}\left(\frac{z}{\alpha}\right)\right|<\pi-\eta \right\}$.}
\end{center} \end{figure}
In the two limit directions we are concerned with, for $\arg(\alpha)=\pm\frac{\pi}{2}$, we have,
\begin{align} &\text{arg}\left(\frac{z}{\alpha}\right) \in \left(\eta-\pi, \pi-\eta \right) \quad \Leftrightarrow \quad \text{arg}(z) \in \left(\eta - \pi \pm \frac{\pi}{2} , \pi \pm \frac{\pi}{2} - \eta \right). \nonumber \end{align}
For the sector $\sigma_{\infty}(\alpha)$, whose base point is already fixed at infinity, we have,
\begin{align} &\text{arg}\left(-\frac{z}{\alpha} \right) \in \left(\eta - \pi , \pi - \eta \right) \quad \Leftrightarrow \quad \text{arg}(z) \in \left(\eta \pm \frac{\pi}{2} , 2 \pi \pm \frac{\pi}{2} - \eta \right), \nonumber \end{align}
recall from (\ref{eq:k3}) that the condition on arg$\left(1-\frac{z}{\alpha}\right)$ in $\sigma_{\infty}(\alpha)$ does not play a role after taking the limit. With these considerations in mind, we write,
\begin{align} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \sigma_{\alpha}(\alpha) &= \widetilde{\mathscr{S}}_{-1}, &&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \sigma_{\infty}(\alpha) = \widetilde{\mathscr{S}}_{0}, \nonumber \\
\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \sigma_{\alpha}(\alpha) &= \widetilde{\mathscr{S}}_{0}, &&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \sigma_{\infty}(\alpha) = \widetilde{\mathscr{S}}_{1}. \nonumber \end{align}
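These identifications follow from elementary computations with arguments; for instance, under the branch convention $\text{arg}\left(-z\alpha^{-1}\right) = \text{arg}(z) - \text{arg}(\alpha) - \pi$ (an assumption on our choice of branches), the first condition defining $\sigma_{\infty}(\alpha)$ reads,
\begin{align} \text{arg}\left(-\frac{z}{\alpha}\right) \in \left(\eta - \pi , \pi - \eta\right) \quad \Leftrightarrow \quad \text{arg}(z) \in \left(\eta + \text{arg}(\alpha) , 2\pi - \eta + \text{arg}(\alpha)\right), \nonumber \end{align}
which, for $\text{arg}(\alpha) = \pm \frac{\pi}{2}$, recovers the interval $\left(\eta \pm \frac{\pi}{2} , 2 \pi \pm \frac{\pi}{2} - \eta \right)$ above.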
We now apply Glutsyuk's Theorem \ref{theorem:gl} with the $(2\times2)$ hypergeometric equation (\ref{eq:hg1}) in place of the perturbed equation and the confluent hypergeometric equation (\ref{eq:chg1}) in place of the non-perturbed equation. Glutsyuk's theorem asserts the existence of invertible diagonal matrices $K^{\pm}_{\infty}(\alpha)$ and $K^{\pm}_{1}(\alpha)$ such that:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left. Y^{(1)}\left(z \alpha^{-1}\right) \right|_{z \in \sigma_{\alpha}(\alpha)} K^{-}_{1}(\alpha) = \widetilde{Y}^{(\infty,-1)}(z), \label{eq:sat5} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left. Y^{(\infty)}\left(z \alpha^{-1}\right) \right|_{z \in \sigma_{\infty}(\alpha)} K^{-}_{\infty}(\alpha) = \widetilde{Y}^{(\infty,0)}(z), \label{eq:sat6} \end{align}
uniformly for $z \in \widetilde{\mathscr{S}}_{-1}$ and $z \in \widetilde{\mathscr{S}}_{0}$ respectively, and:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \left. Y^{(1)}\left(z \alpha^{-1}\right) \right|_{z \in \sigma_{\alpha}(\alpha)} K^{+}_{1}(\alpha) = \widetilde{Y}^{(\infty,0)}(z), \label{eq:sat7} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) =\frac{\pi}{2}}} \left. Y^{(\infty)}\left(z \alpha^{-1}\right) \right|_{z \in \sigma_{\infty}(\alpha)} K^{+}_{\infty}(\alpha) = \widetilde{Y}^{(\infty,1)}(z), \label{eq:sat8} \end{align}
uniformly for $z \in \widetilde{\mathscr{S}}_{0}$ and $z \in \widetilde{\mathscr{S}}_{1}$ respectively. We note that since we are considering two limits, namely one with $\arg(\alpha) = \frac{\pi}{2}$ and another with $\arg(\alpha) = -\frac{\pi}{2}$, we have distinguished the diagonal matrices in each case with a superscript $+$ or $-$ respectively. Due to the asymptotics of the fundamental solutions of Kummer equation as given in Theorem \ref{theorem:kummerst}, each of these four limits is asymptotic to the formal fundamental solution $\widetilde{Y}_{f}^{(\infty)}(z)$ as $z \rightarrow \infty$ with $z$ belonging to the corresponding sector. \\
Equivalently, from the viewpoint of studying the classical scalar hypergeometric equations (\ref{eq:gauss}) and (\ref{eq:kummer}), Glutsyuk's Theorem \ref{theorem:gl} asserts the existence of scalars $k_{1,\infty}^{\pm}(\alpha)$, $k_{2,\infty}^{\pm}(\alpha)$, $k_{1,1}^{\pm}(\alpha)$ and $k_{2,1}^{\pm}(\alpha)$ such that, for $j \in \{1,2\}$:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left. y_{j}^{(1)}(z \alpha^{-1})\right|_{z \in \sigma_{\alpha}(\alpha)} \ k_{j,1}^{-}(\alpha) = \tilde{y}_{j}^{(\infty,-1)}(z), \label{eq:sat1} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left. y_{j}^{(\infty)}(z \alpha^{-1})\right|_{z \in \sigma_{\infty}(\alpha)} \ k_{j,\infty}^{-}(\alpha) = \tilde{y}_{j}^{(\infty,0)}(z), \label{eq:sat2} \end{align}
uniformly for $z \in \widetilde{\mathscr{S}}_{-1}$ and $\widetilde{\mathscr{S}}_{0}$ respectively, and:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \left. y_{j}^{(1)}(z \alpha^{-1})\right|_{z \in \sigma_{\alpha}(\alpha)} \ k_{j,1}^{+}(\alpha) = \tilde{y}_{j}^{(\infty,0)}(z), \label{eq:sat3} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \left. y_{j}^{(\infty)}(z \alpha^{-1})\right|_{z \in \sigma_{\infty}(\alpha)} \ k_{j,\infty}^{+}(\alpha) = \tilde{y}_{j}^{(\infty,1)}(z), \label{eq:sat4} \end{align}
uniformly for $z \in \widetilde{\mathscr{S}}_{0}$ and $\widetilde{\mathscr{S}}_{1}$ respectively. \\
Having applied Glutsyuk's theorem to our confluence of the hypergeometric equation, we now focus on understanding what we can deduce about these scalars $k_{j,\infty}^{\pm}(\alpha)$ and $k_{j,1}^{\pm}(\alpha)$, $j=1,2$. We are ready to state our first main theorem.
\begin{theorem} \label{main:importance} If $k_{j,\infty}^{\pm}(\alpha)$ and $k_{j,1}^{\pm}(\alpha)$ are scalars satisfying (\ref{eq:sat1})-(\ref{eq:sat4}), then these numbers satisfy the following limits,
\begin{align}
\lim_{ \substack{ \alpha\rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} k_{1,\infty}^{\pm}(\alpha) \ (-\alpha)^{\gamma-\beta}= 1, \label{eq:n1} \\
\lim_{ \substack{ \alpha\rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} -k_{2,\infty}^{\pm}(\alpha) \ (-\alpha)^{\beta}= 1, \label{eq:n2} \\
\lim_{ \substack{ \alpha\rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} k_{1,1}^{\pm}(\alpha) \ \alpha^{\gamma-\beta}= 1, \label{eq:n3} \\
\lim_{ \substack{ \alpha\rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} -k_{2,1}^{\pm}(\alpha) \ \alpha^{\beta}= 1. \label{eq:n4} \end{align}\end{theorem}
\begin{proof} In either case $\arg(\alpha) = \frac{\pi}{2}$ or $-\frac{\pi}{2}$, let $\mathscr{S}^{*}$ be a closed, proper subsector of $\widetilde{\mathscr{S}}_{1}$ or $\widetilde{\mathscr{S}}_{0}$ respectively. Combining the statements (\ref{eq:sat2}) and (\ref{eq:sat4}), together with the asymptotic behaviour (\ref{eq:chgasy}), we have,
\begin{align} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \left. y_{1}^{(\infty)}(z \alpha^{-1})\right|_{z \in \sigma_{\infty}(\alpha)} \ k_{1,\infty}^{\pm}(\alpha) \sim \tilde{y}_{1,f}^{(\infty)}(z), \quad \text{as } z \rightarrow \infty, \ z \in \mathscr{S}^{*}. \label{eq:combining} \end{align}
We now rewrite $y_{1}^{(\infty)}(z \alpha^{-1})$ using the Kummer transformation as in (\ref{eq:kummerrelation}),
\begin{align} &\left. y_{1}^{(\infty)}\left(z \alpha^{-1}\right) \right|_{z \in \sigma_{\infty}(\alpha)} = z^{\beta-\gamma}(-\alpha)^{\gamma-\beta}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} \left. \sum_{n=0}^{\infty}\frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!z^{n}} \right|_{z \in \sigma_{\infty}(\alpha)}. \nonumber \end{align}
We therefore deduce,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} z^{\beta-\gamma}(-\alpha)^{\gamma-\beta}\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta} \left. \sum_{n=0}^{\infty}\frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!z^{n}} \right|_{z \in \sigma_{\infty}(\alpha)}k_{1,\infty}^{\pm}(\alpha) \nonumber \\
&\quad = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} z^{\beta-\gamma}(-\alpha)^{\gamma-\beta}e^{z} \left. \sum_{n=0}^{\infty}\frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!z^{n}} \right|_{z \in \sigma_{\infty}(\alpha)}k_{1,\infty}^{\pm}(\alpha). \nonumber \end{align}
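Here the factor $\left(1-\frac{z}{\alpha}\right)^{\gamma-\alpha-\beta}$ has been replaced by $e^{z}$ in the limit, using the classical limits, valid for each fixed $z$,
\begin{align} \lim_{\alpha \rightarrow \infty} \left(1-\frac{z}{\alpha}\right)^{-\alpha} = e^{z}, \quad \quad \lim_{\alpha \rightarrow \infty} \left(1-\frac{z}{\alpha}\right)^{\gamma-\beta} = 1. \nonumber \end{align}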
Combining this with (\ref{eq:combining}) and writing $\tilde{y}_{1,f}^{(\infty)}(z)$ as in (\ref{eq:yformal}), we have,
\begin{align} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \left. \sum_{n=0}^{\infty}\frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!z^{n}} \right|_{z \in \sigma_{\infty}(\alpha)}(-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha) \sim \sum_{n=0}^{\infty}\frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!z^{n}},\nonumber \end{align}
as $z \rightarrow \infty$ for $z \in \mathscr{S}^{*}$. \\
We now define $w = z^{-1}$ so that $w \rightarrow 0 \Leftrightarrow z \rightarrow \infty$ and we can apply
the following classical result \cite{wasow}:
\begin{lemma}
\label{lemma:wasow} Let $f(w)$ be holomorphic in an open sector $\sigma$ at $w=0$ and let $\sigma^{*}$ be a closed, proper sub-sector of $\sigma$. If,
\[f(w) \sim \sum_{n=0}^{\infty} a_{n}w^{n}, \quad \quad \text{as } w \rightarrow 0, \ w \in \sigma,\]
then:
\[a_{n} = \frac{1}{n!} \lim_{\substack{w \rightarrow 0 \\ w \in \sigma^{*}}} f^{(n)}(w),\]
where $f^{(n)}(w)$ denotes the $n^{\text{th}}$ derivative of $f(w)$,
\end{lemma}
\noindent to find,
\begin{align} &\frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!} = \nonumber \\
&\quad \frac{1}{n!} \lim_{\substack{w \rightarrow 0 \\ w^{-1} \in \mathscr{S}^{*}}} \frac{d^{n}}{dw^{n}} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \left. \sum_{l=0}^{\infty}\frac{(1-\beta)_{l}(\gamma-\beta)_{l}\alpha^{l}w^{l}}{(\alpha+1-\beta)_{l}l!} \right|_{w^{-1} \in \sigma_{\infty}(\alpha)}(-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha). \nonumber \end{align}
We proceed to treat the limits on the right hand side with special care. We first note that, due to the uniformity of the limits (\ref{eq:sat2}) and (\ref{eq:sat4}), we may interchange the limit in $\alpha$ with the derivative and the limit in $w$ as follows,
\begin{align} &\frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!} = \nonumber \\
&\quad \frac{1}{n!} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \lim_{\substack{w \rightarrow 0 \\ w^{-1} \in \mathscr{S}^{*}}} \frac{d^{n}}{dw^{n}} \left. \sum_{l=0}^{\infty}\frac{(1-\beta)_{l}(\gamma-\beta)_{l}\alpha^{l}w^{l}}{(\alpha+1-\beta)_{l}l!} \right|_{w^{-1} \in \sigma_{\infty}(\alpha)}(-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha). \nonumber \end{align}
The next step is to notice that the series inside the limits on the right hand side represents an analytic function (or at least its analytic extension to the sector $\sigma_{\infty}(\alpha)$ does). We may therefore interchange the derivative and the series as follows,
\begin{align} &\frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!} = \nonumber \\
&\ \frac{1}{n!} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \lim_{\substack{w \rightarrow 0 \\ w^{-1} \in \mathscr{S}^{*}}} \left. \sum_{l=0}^{\infty}\frac{d^{n}}{dw^{n}}\frac{(1-\beta)_{l}(\gamma-\beta)_{l}\alpha^{l}w^{l}}{(\alpha+1-\beta)_{l}l!} \right|_{w^{-1} \in \sigma_{\infty}(\alpha)}(-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha) = \nonumber \\
&\ \frac{1}{n!} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \lim_{\substack{w \rightarrow 0 \\ w^{-1} \in \mathscr{S}^{*}}} \left. \sum_{l=0}^{\infty}\frac{(l+n)!}{l!} \frac{(1-\beta)_{l+n}(\gamma-\beta)_{l+n}\alpha^{l+n}w^{l}}{(\alpha+1-\beta)_{l+n}(l+n)!} \right|_{w^{-1} \in \sigma_{\infty}(\alpha)}(-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha). \nonumber \end{align}
Furthermore, due to the analyticity of the series on the right hand side, its limit as $w \rightarrow 0$ certainly exists and is simply equal to the first term of the series. We finally deduce,
\begin{align} \frac{(\gamma-\beta)_{n}(1-\beta)_{n}}{n!} = \frac{1}{n!} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} n! \frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!} (-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha). \label{eq:fff}
\end{align}
Therefore,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \frac{(1-\beta)_{n}(\gamma-\beta)_{n}\alpha^{n}}{(\alpha+1-\beta)_{n}n!} (-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha) \nonumber \\
&\quad = \frac{(1-\beta)_{n}(\gamma-\beta)_{n}}{n!} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} (-\alpha)^{\gamma-\beta} k_{1,\infty}^{\pm}(\alpha). \nonumber \end{align}
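In the last equality we used the elementary limit, valid for each fixed $n$,
\begin{align} \lim_{\alpha \rightarrow \infty} \frac{\alpha^{n}}{(\alpha+1-\beta)_{n}} = \lim_{\alpha \rightarrow \infty} \prod_{j=0}^{n-1} \frac{\alpha}{\alpha+1-\beta+j} = 1. \nonumber \end{align}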
Comparing with the left hand side of (\ref{eq:fff}) we deduce the desired result (\ref{eq:n1}). The limit (\ref{eq:n2}) can be proved by using $y_{2}^{(\infty)}(z \alpha^{-1})$ as given by (\ref{eq:yinf}). The limits (\ref{eq:n3}) and (\ref{eq:n4}) can be proved using $y_{1}^{(1)}(z \alpha^{-1})$ and $y_{2}^{(1)}(z \alpha^{-1})$ as given by (\ref{eq:kummerrelation3}) and (\ref{eq:kummerrelation4}) and using Lemma \ref{lemma:obtaining2} in place of Lemma \ref{lemma:obtaining}.\end{proof}
\begin{remark} Returning to the point of view of studying the hypergeometric equations as the $(2 \times 2)$ equations (\ref{eq:hg1}) and (\ref{eq:chg1}), our Main Theorem \ref{main:importance} may be equivalently stated as follows. If $K_{1}^{\pm}(\alpha)$ and $K_{\infty}^{\pm}(\alpha)$ are diagonal matrices satisfying (\ref{eq:sat5})-(\ref{eq:sat8}), then they satisfy the following:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} K_{\infty}^{\pm}(\alpha) \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta}\end{array}\right) = I, \label{eq:this1} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} K_{1}^{\pm}(\alpha) \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array}\right) = I. \label{eq:this2} \end{align}
These limits can be proved in an analogous way to the limits in our Main Theorem \ref{main:importance} by using Remarks \ref{remark:r1} and \ref{remark:r2} in place of Lemmas \ref{lemma:obtaining} and \ref{lemma:obtaining2} respectively. \end{remark}
\subsubsection{Obtaining $\widetilde{Y}^{(0)}(z)$ from $Y^{(0)}(z)$}\ \\
Since the substitution $x=\frac{z}{\alpha}$ and the limit $\alpha \rightarrow \infty$ do not interfere with the nature of the Fuchsian singularity $x=0$, corresponding to $z=0$, this limit is much easier. We will only consider the limit along $\arg(\alpha) = -\frac{\pi}{2}$; the other case is completely analogous, although it requires changing the branch cut in $\widetilde{\Omega}_{0}$.
\begin{lemma} \label{lemma:first} We have the following limit,
\begin{align} \lim_{\alpha \rightarrow \infty} \ _{2}F_{1}\left( \begin{array}{c} \alpha , \ \beta \\ \gamma \end{array} ; \frac{z}{\alpha} \right) = \ _{1}F_{1} \left(\begin{array}{c} \beta \\ \gamma \end{array} ; z \right). \nonumber \end{align} \end{lemma}
\begin{proof} For each fixed $n$, the $n^{\text{th}}$ term of the series for ${}_{2}F_{1}$ satisfies,
\begin{align} \frac{(\alpha)_{n}(\beta)_{n}}{(\gamma)_{n} n!} \frac{z^{n}}{\alpha^{n}} = \frac{(\beta)_{n}}{(\gamma)_{n} n!} z^{n} \prod_{j=0}^{n-1}\frac{\alpha+j}{\alpha} \ \longrightarrow \ \frac{(\beta)_{n}}{(\gamma)_{n} n!} z^{n}, \quad \quad \text{as } \alpha \rightarrow \infty, \nonumber \end{align}
which is the $n^{\text{th}}$ term of the series for ${}_{1}F_{1}$. Taking the term-by-term limit in the series for ${}_{2}F_{1}$ in this way, we obtain a uniformly convergent series that coincides with ${}_{1}F_{1}$. We conclude by the uniqueness of the Taylor series expansion of an analytic function. \end{proof}
\begin{theorem} \label{theorem:let}
Let $y_{k}^{(0)}(x)$ and $\tilde{y}^{(0)}_{k}(z)$, $k=1,2$, be defined as in (\ref{eq:y0}) and (\ref{eq:yt0}) respectively. For $\arg(\alpha) = - \frac{\pi}{2}$, we have the following limits,
\begin{align}
\begin{split}
&\lim_{\substack{\alpha \rightarrow \infty \\ z \in \omega_{0}(\alpha)}} y_{1}^{(0)}\left(z \alpha^{-1} \right) \alpha^{1-\gamma} =
\tilde{y}^{(0)}_{1}(z), \\ &\lim_{\substack{\alpha \rightarrow \infty \\ z \in \omega_{0}(\alpha)}} y_{2}^{(0)}\left(z\alpha^{-1}\right) = \tilde{y}^{(0)}_{2}(z), \end{split}
\quad \quad z \in \widetilde{\Omega}_{0}. \label{eq:020}
\end{align}
where
\[\omega_{0}(\alpha) = \left\{ z : |z| < |\alpha| , \ -\frac{3}{2}\pi \leq \text{arg}(z) < \frac{\pi}{2} \right\}.\]
\end{theorem}
\begin{proof} Notice that for $\arg(\alpha) = -\frac{\pi}{2}$, $x \in \Omega_{0} \Leftrightarrow z \in \omega_{0}(\alpha)$. Since the radius of this domain clearly becomes infinite as $\alpha \rightarrow \infty$, every fixed $z \in \widetilde{\Omega}_{0}$ lies in $\omega_{0}(\alpha)$ for all $|\alpha|$ sufficiently large; in this sense, the domain $\omega_{0}(\alpha)$ tends to the domain $\widetilde{\Omega}_{0}$ (i.e. the domain in our definition of the fundamental solutions of Kummer equation around $z=0$, as given in Section \ref{sec:kumsol}).
Using Lemma \ref{lemma:first}, we compute the limits as follows,
\begin{align} \lim_{\alpha \rightarrow \infty} y_{1}^{(0)}\left(z \alpha^{-1} \right) \alpha^{1-\gamma} &= \lim_{\alpha \rightarrow \infty} z^{1-\gamma} \ _{2}F_{1} \left(\begin{array}{c} \alpha+1-\gamma , \ \beta+1-\gamma \\ 2-\gamma \end{array} ; \frac{z}{\alpha}\right) \nonumber \\
&= z^{1-\gamma} \ _{1}F_{1} \left(\begin{array}{c} \beta+1-\gamma \\ 2-\gamma \end{array} ; z \right) = \tilde{y}_{1}^{(0)}(z), \quad \quad z \in \widetilde{\Omega}_{0},\nonumber \\
\text{and } \lim_{\alpha \rightarrow \infty} y_{2}^{(0)}\left(z \alpha^{-1} \right)&= \lim_{\alpha \rightarrow \infty} \ _{2}F_{1}\left(\begin{array}{c} \alpha , \ \beta \\ \gamma \end{array} ; \frac{z}{\alpha} \right) \nonumber \\
&= \ _{1}F_{1} \left(\begin{array}{c} \beta \\ \gamma \end{array} ; z \right) = \tilde{y}_{2}^{(0)}(z), \quad \quad z \in \widetilde{\Omega}_{0}, \nonumber \end{align}
as required. \end{proof}
\begin{remark} The factor $\alpha^{1-\gamma}$ in the first limit of Theorem \ref{theorem:let} is necessary because of the term,
\[x^{1-\gamma} \equiv z^{1-\gamma} \alpha^{\gamma-1},\]
in the solution $y_{1}^{(0)}(x)$, as given in (\ref{eq:y0}). \end{remark}
\begin{remark} We have stated Theorem \ref{theorem:let} in terms of the solutions of the \textit{scalar} hypergeometric equations (\ref{eq:gauss}) and (\ref{eq:kummer}). The limits (\ref{eq:020}) can be equivalently stated in terms of the solutions of the $(2\times2)$ equations (\ref{eq:hg1}) and (\ref{eq:chg1}): for $\arg(\alpha) = \pm \frac{\pi}{2}$,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ z \in \omega_{0}(\alpha)}} Y^{(0)}\left(\frac{z}{\alpha}\right) \alpha^{\Theta_{0}} = \widetilde{Y}^{(0)}(z), &&z\in \widetilde{\Omega}_{0}. \label{eq:y020} \end{align}
To see how this is equivalent to (\ref{eq:020}), we observe that for the diagonalising matrices we have
\begin{align}
\lim_{\alpha \rightarrow \infty} R_{0} = \lim_{\alpha \rightarrow \infty} \left(\begin{array}{cc} 1 & 1 \\ \frac{\alpha+1-\gamma}{\alpha(\beta-\gamma)} & \frac{1}{\beta-1} \end{array} \right) = \left(\begin{array}{cc} 1 & 1 \\ \frac{1}{\beta-\gamma} & \frac{1}{\beta-1} \end{array} \right) = \widetilde{R}_{0},\nonumber
\end{align}
and for the series, using Lemma \ref{lemma:first},
\begin{align}
&\lim_{\alpha \rightarrow \infty} G_{0}\left(z \alpha^{-1} \right) = \lim_{\alpha \rightarrow \infty} \left( \begin{matrix*}[l] \ _{2}F_{1} \left( \begin{array}{c} \alpha+1-\gamma,\ \beta-\gamma \\ 1-\gamma \end{array} ; \frac{z}{\alpha} \right) \text{\LARGE ,} \\ \frac{z(\alpha+1-\gamma)(1-\beta)}{\alpha(1-\gamma)(2-\gamma)} \ _{2}F_{1} \left(\begin{array}{c} \alpha+2-\gamma, \ \beta+1-\gamma \\ 3-\gamma \end{array} ; \frac{z}{\alpha} \right) \text{\LARGE ,} \end{matrix*} \right. \nonumber \\
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \left. \begin{matrix*}[r] \frac{z(\gamma-\beta)}{\gamma(\gamma-1)} \ \ _{2}F_{1} \left(\begin{array}{c} \alpha+1, \ \beta \\ \gamma+1 \end{array} ; \frac{z}{\alpha} \right) \\ \ _{2}F_{1} \left(\begin{array}{c}\alpha , \ \beta-1 \\ \gamma - 1 \end{array} ; \frac{z}{\alpha} \right) \end{matrix*} \right), \nonumber \\
&\quad = \left(\begin{array}{cc} _{1}F_{1} \left( \begin{array}{c} \beta-\gamma \\ 1-\gamma \end{array} ;z\right) \text{\LARGE ,} & \frac{z (\gamma-\beta)}{\gamma(\gamma-1)} \ \ _{1}F_{1} \left(\begin{array}{c} \beta \\ \gamma+1 \end{array} ; z\right) \\ & \\ \frac{z(1-\beta)}{(1-\gamma)(2-\gamma)} \ _{1}F_{1} \left(\begin{array}{c} \beta+1-\gamma \\ 3-\gamma \end{array} ; z \right) \text{\LARGE ,} & _{1}F_{1} \left(\begin{array}{c}\beta-1 \\ \gamma - 1 \end{array} ; z \right) \end{array} \right) = H_{0}(z).
\nonumber \end{align}
\end{remark}
\subsubsection{Limits of monodromy data} \label{sec:332}
Summarising the results so far, in Section \ref{sec:331} we showed how \textit{term-by-term} limits of the solutions of Gauss equation around $x=\infty$ and $x=1$ produce the formal solutions of Kummer equation around $z=\infty$. We then explained how Glutsyuk's Theorem \ref{theorem:gl} asserts the existence of certain scalars which multiply Gauss solutions so that their true limits exist and are equal to the solutions of Kummer equation analytic in sectors at $z=\infty$. We also proved our Main Theorem \ref{main:importance}, which establishes some important limits which these scalars must satisfy. We now bring these results together to prove our second main theorem, concerned with explicitly producing the set of monodromy data $\widetilde{\mathcal{M}}$ from the set $\mathcal{M}$.
\begin{theorem} \label{main:top}
Define the monodromy data of Gauss equation as given in (\ref{eq:ci0})-(\ref{eq:hgmono}) and of Kummer equation as in (\ref{eq:kummers-1})-(\ref{eq:chgmono}). We have the following limits,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta}\end{array}\right) C^{1 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) = \widetilde{S}_{0}, \label{eq:firstlimit} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta}\end{array}\right) C^{1 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) = \widetilde{S}_{-1}, \label{eq:secondlimit} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc} \alpha^{\gamma-1} & 0 \\ 0 & 1\end{array}\right) C^{0 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) = \widetilde{C}^{0 \infty}. \label{eq:thirdlimit}
\end{align}
Furthermore, as immediate consequences of the above limits of connection matrices, we have the following limits of monodromy matrices,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array}\right) M_{0} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) = \widetilde{M}_{0}, \label{eq:fifthlimit} \\
&\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array}\right) M_{\infty}M_{1} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) = \widetilde{M}_{\infty}. \label{eq:seventhlimit}
\end{align} \end{theorem}
\begin{proof} As part of the proof of this theorem, we will use the following elementary lemma.
\begin{lemma} \label{lemma:elementary} Let $f(\alpha)$ and $g(\alpha)$ be matrices such that $\lim_{\alpha \rightarrow \infty} f(\alpha)g(\alpha)$ exists. \\
\textbf{i)} If $f(\alpha)$ is invertible for all $\alpha$ sufficiently large and the limit $\lim_{\alpha \rightarrow \infty} f(\alpha)$ exists and is invertible, then the limit $\lim_{\alpha \rightarrow \infty} g(\alpha)$ exists. \\
\textbf{ii)} If $g(\alpha)$ is invertible for all $\alpha$ sufficiently large and the limit $\lim_{\alpha \rightarrow \infty} g(\alpha)$ exists and is invertible, then the limit $\lim_{\alpha \rightarrow \infty} f(\alpha)$ exists. \end{lemma}
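For completeness, part \textbf{i)} follows from the continuity of matrix inversion at invertible matrices,
\begin{align} \lim_{\alpha \rightarrow \infty} g(\alpha) = \lim_{\alpha \rightarrow \infty} f(\alpha)^{-1} \left(f(\alpha)g(\alpha)\right) = \left(\lim_{\alpha \rightarrow \infty} f(\alpha)\right)^{-1} \lim_{\alpha \rightarrow \infty} f(\alpha)g(\alpha), \nonumber \end{align}
and part \textbf{ii)} is proved analogously, writing $f(\alpha) = \left(f(\alpha)g(\alpha)\right)g(\alpha)^{-1}$.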
Let $\sigma_{\alpha}(\alpha)$ and $\sigma_{\infty}(\alpha)$ be the sectors defined in (\ref{eq:sectors1}) and (\ref{eq:sectors2}) respectively. As mentioned previously, if $z \in \sigma_{\alpha}(\alpha)$ then $x \in \Omega_{1}$ and if $z \in \sigma_{\infty}(\alpha)$ then $x \in \Omega_{\infty}$, so that the connection matrix $C^{1 \infty}$ remains valid for the solutions $Y^{(1)}(z \alpha^{-1})$ and $Y^{(\infty)}(z \alpha^{-1})$ restricted to the sectors $\sigma_{\alpha}(\alpha)$ and $\sigma_{\infty}(\alpha)$ respectively. Since the radii of these sectors do not diminish as $\alpha \rightarrow \infty$, for $|\alpha|$ sufficiently large we must have,
\[\sigma_{\alpha}(\alpha) \cap \sigma_{\infty}(\alpha) \neq \varnothing,\]
recall Figure \ref{fig:sss}. Therefore, for $|\alpha|$ sufficiently large, we have,
\begin{align} Y^{(\infty)}\left(z \alpha^{-1} \right) = Y^{(1)}\left(z \alpha^{-1} \right) C^{1 \infty}, \quad \quad z \in \sigma_{\alpha}(\alpha) \cap \sigma_{\infty}(\alpha). \label{eq:bec} \end{align}
Let $\widetilde{\mathscr{S}}_{k}$ be the sectors defined in (\ref{eq:sectors3}). To prove the first limit (\ref{eq:firstlimit}), we first give a proof of Glutsyuk's Corollary \ref{corollary:glcorollary} in our case. We multiply by the matrices $K_{\infty}^{+}(\alpha)$ and $K_{1}^{+}(\alpha)$ and take the limit $\alpha \rightarrow \infty$, with $\arg(\alpha)=\frac{\pi}{2}$, so that (\ref{eq:bec}) becomes,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left. Y^{(\infty)}(z \alpha^{-1})\right|_{z \in \sigma_{\infty}(\alpha)}K_{\infty}^{+}(\alpha) \nonumber \\
&\quad = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left. Y^{(1)}(z \alpha^{-1}) \right|_{z \in \sigma_{\alpha}(\alpha)}K_{1}^{+}(\alpha) \left(K_{1}^{+}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{+}(\alpha), \label{eq:bec2} \end{align}
for $z \in \widetilde{\mathscr{S}}_{0}\cap \widetilde{\mathscr{S}}_{1}$. We apply Lemma \ref{lemma:elementary} \textbf{i)} with,
\[f(\alpha) = \left. Y^{(1)}(z \alpha^{-1})\right|_{z \in \sigma_{\alpha}(\alpha)}K_{1}^{+}(\alpha) \quad \text{and} \quad g(\alpha)=\left(K_{1}^{+}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{+}(\alpha).\]
Observe that the hypotheses of Lemma \ref{lemma:elementary} hold: the limit,
\[\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} f(\alpha)g(\alpha),\]
exists and equals $\widetilde{Y}^{(\infty,1)}(z)$, by (\ref{eq:sat8}), and the limit,
\[\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} f(\alpha),\]
exists and equals $\widetilde{Y}^{(\infty,0)}(z)$, by (\ref{eq:sat7}), which is invertible because it is a fundamental solution; moreover, $f(\alpha)$ itself is invertible for all $\alpha$, being a fundamental solution multiplied by the invertible diagonal matrix $K_{1}^{+}(\alpha)$. The limit,
\[\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} g(\alpha) = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left(K_{1}^{+}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{+}(\alpha),\]
therefore exists and, from (\ref{eq:bec2}),
\begin{align} \widetilde{Y}^{(\infty,1)}(z) = \widetilde{Y}^{(\infty,0)}(z) \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left(K_{1}^{+}(\alpha)\right)^{-1}C^{1 \infty}K_{\infty}^{+}(\alpha), \quad \quad z \in \widetilde{\mathscr{S}}_{0} \cap \widetilde{\mathscr{S}}_{1}.\nonumber \end{align}
Recall that if $z \in \widetilde{\mathscr{S}}_{k}$ then $z \in \widetilde{\Sigma}_{k}$ and recall Definition \ref{definition:kumst} of Stokes matrices, namely we have,
\[\widetilde{Y}^{(\infty,1)}(z) = \widetilde{Y}^{(\infty,0)}(z) \widetilde{S}_{0}, \quad \quad z \in \widetilde{\Sigma}_{0}\cap\widetilde{\Sigma}_{1}.\]
We conclude that,
\[\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left(K_{1}^{+}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{+}(\alpha) = \widetilde{S}_{0},\]
which is precisely Glutsyuk's Corollary \ref{corollary:glcorollary} in our case. Combining this with (\ref{eq:this1}) and (\ref{eq:this2}), we compute,
\begin{align} \widetilde{S}_{0} &= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left(K_{1}^{+}(\alpha)\right)^{-1} C^{1 \infty} K_{\infty}^{+}(\alpha), \nonumber \\
&= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}} \left(K_{1}^{+}(\alpha) \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array} \right)\left(\begin{array}{cc} \alpha^{\beta-\gamma} & 0 \\ 0 & -\alpha^{-\beta} \end{array} \right) \right)^{-1} \nonumber \\
&\quad \quad \quad \quad \quad \quad \quad \quad C^{1 \infty} K_{\infty}^{+}(\alpha) \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right)\left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right), \nonumber \\
&= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=\frac{\pi}{2}}}\left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array} \right) C^{1 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right), \nonumber \end{align}
where we have implicitly used Lemma \ref{lemma:elementary} again. This proves the first limit (\ref{eq:firstlimit}) of the theorem. To prove the second limit (\ref{eq:secondlimit}), we multiply by the matrices $K_{\infty}^{-}(\alpha)$ and $K_{1}^{-}(\alpha)$ and take the limit $\alpha \rightarrow \infty$, with $\arg(\alpha)=-\frac{\pi}{2}$, so that (\ref{eq:bec}) becomes,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left. Y^{(\infty)}(z \alpha^{-1})\right|_{z \in \sigma_{\infty}(\alpha)}K_{\infty}^{-}(\alpha) \nonumber \\
&\quad = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left. Y^{(1)}(z \alpha^{-1}) \right|_{z \in \sigma_{\alpha}(\alpha)}K_{1}^{-}(\alpha) \left(K_{1}^{-}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{-}(\alpha), \label{eq:bec3} \end{align}
for $z \in \widetilde{\mathscr{S}}_{-1}\cap\widetilde{\mathscr{S}}_{0}$. By following a similar procedure as above, using Lemma \ref{lemma:elementary} and the relations (\ref{eq:sat5}) and (\ref{eq:sat6}), we deduce,
\[\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left(K_{1}^{-}(\alpha)\right)^{-1}C^{1\infty}K_{\infty}^{-}(\alpha) = \widetilde{S}_{-1}.\]
Combining this with (\ref{eq:this1}) and (\ref{eq:this2}), we compute,
\begin{align} \widetilde{S}_{-1} &= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left(K_{1}^{-}(\alpha)\right)^{-1} C^{1 \infty} K_{\infty}^{-}(\alpha), \nonumber \\
&= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left(K_{1}^{-}(\alpha) \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array} \right)\left(\begin{array}{cc} \alpha^{\beta-\gamma} & 0 \\ 0 & -\alpha^{-\beta} \end{array} \right) \right)^{-1} \nonumber \\
&\quad \quad \quad \quad \quad \quad \quad \quad C^{1 \infty} K_{\infty}^{-}(\alpha) \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right)\left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right), \nonumber \\
&= \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}}\left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta} \end{array} \right) C^{1 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right), \nonumber \end{align}
where we have implicitly used Lemma \ref{lemma:elementary}; this proves the second limit (\ref{eq:secondlimit}) of the theorem. \\
To prove the third limit (\ref{eq:thirdlimit}) we first note that the curve $\gamma_{\infty 0}$ which defines the connection matrix $C^{0 \infty}$ survives the confluence limit. In other words, after the substitution $x=\frac{z}{\alpha}$, the curve does not diminish or become broken under the limit $\alpha \rightarrow \infty$. This fact is expressed as follows,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \gamma_{\infty 0} \left[Y^{(\infty)}K_{\infty}^{-}(\alpha)\right]\left(z \alpha^{-1}\right) = \gamma_{\infty 0} \left[\widetilde{Y}^{(\infty,0)}\right](z), \nonumber \end{align}
or equivalently, using the domains $\omega_{0}^{-}(\alpha)$ and $\widetilde{\Omega}_{0}^{-}$ defined in Sections \ref{sec:331} and \ref{sec:kumsol} respectively,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left. Y^{(0)}\left(z \alpha^{-1}\right) \right|_{z \in \omega_{0}^{-}(\alpha)} C^{0 \infty} \left(C^{1\infty}\right)^{-1} K_{\infty}^{-}(\alpha) = \widetilde{Y}^{(0)}(z) \widetilde{C}^{0 \infty}, \quad \quad z \in \widetilde{\Omega}_{0}^{-}.\nonumber \end{align}
Combining this with the limits (\ref{eq:y020}) and (\ref{eq:this1}), we deduce the required result (\ref{eq:thirdlimit}) as follows, for $z \in \widetilde{\Omega}_{0}^{-}$:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left. Y^{(0)}\left(z \alpha^{-1}\right) \right|_{z \in \omega_{0}^{-}(\alpha)} \left(\begin{array}{cc} \alpha^{1-\gamma} & 0 \\ 0 & 1 \end{array} \right) \left(\begin{array}{cc} \alpha^{\gamma-1} & 0 \\ 0 & 1 \end{array}\right) C^{0 \infty} \nonumber \\
&\quad \hspace{85pt} K_{\infty}^{-}(\alpha) \left(\begin{array}{cc} (-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right)\left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right) \nonumber \\
&\quad \quad = \widetilde{Y}^{(0)}(z) \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=
-\frac{\pi}{2}}} \left(\begin{array}{cc}\alpha^{\gamma-1} & 0 \\ 0 & 1 \end{array} \right) C^{0 \infty}\left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right) = \widetilde{Y}^{(0)}(z) \widetilde{C}^{0 \infty}, \nonumber \\
&\hspace{40pt} \Leftrightarrow \quad \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha)=-\frac{\pi}{2}}} \left(\begin{array}{cc}\alpha^{\gamma-1} & 0 \\ 0 & 1 \end{array} \right) C^{0 \infty}\left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta} \end{array} \right) = \widetilde{C}^{0\infty}, \nonumber \end{align}
where we have implicitly used Lemma \ref{lemma:elementary}. \\
Having deduced the limit (\ref{eq:thirdlimit}) of the connection matrix, the limit (\ref{eq:fifthlimit}) follows directly since $M_{0} = \left(C^{0\infty}\right)^{-1} e^{2 \pi i \Theta_{0}}C^{0\infty}$ and $\Theta_{0} \equiv \widetilde{\Theta}_{0}$. For (\ref{eq:fifthlimit}), we have,
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc}(-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right) M_{0} \left(\begin{array}{cc}(-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) \nonumber \\
&\quad \quad = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc}(-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right) \left(C^{0\infty}\right)^{-1} e^{2 \pi i \Theta_{0}}C^{0\infty} \left(\begin{array}{cc}(-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right), \nonumber \\
&\quad \quad = \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = -\frac{\pi}{2}}} \left(\begin{array}{cc}(-\alpha)^{\gamma-\beta} & 0 \\ 0 & -(-\alpha)^{\beta} \end{array} \right) \left(C^{0 \infty}\right)^{-1} \left(\begin{array}{cc}\alpha^{\gamma-1} & 0 \\ 0 & 1 \end{array}\right) e^{2 \pi i \Theta_{0}} \nonumber \\
&\quad \hspace{60pt} \left(\begin{array}{cc} \alpha^{1-\gamma} & 0 \\ 0 & 1 \end{array} \right) C^{0\infty} \left(\begin{array}{cc}(-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right), \nonumber \\
&\quad \quad = \left(\widetilde{C}^{0\infty}\right)^{-1} e^{2 \pi i \widetilde{\Theta}_{0}} \widetilde{C}^{0 \infty} = \widetilde{M}_{0}, \nonumber \end{align}
as required. \end{proof}
\subsubsection{Explicit computations of limits of monodromy data}
Here we apply Theorem \ref{main:top} to calculate the Stokes matrices explicitly. We will use the following classical facts:
\begin{align} &a^{c-b} \frac{\Gamma(a+b)}{\Gamma(a+c)} \rightarrow 1, \text{ as $a \rightarrow \infty$, $|\text{arg}(a)|<\pi$},\label{eq:f1} \\
&\Gamma(a) \equiv \frac{\pi}{\sin(\pi a)\Gamma(1-a)}, \label{eq:f2} \\
&e^{i \pi a}\csc(\pi a) \rightarrow 2i, \text{ as Im}(a) \rightarrow -\infty. \label{eq:f3} \end{align}
The proof of (\ref{eq:f3}) is elementary; the proofs of (\ref{eq:f1}) and (\ref{eq:f2}) can be found in \cite{ww} and \cite{bateman}.\\
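Indeed, (\ref{eq:f3}) follows from the elementary rewriting
\[
e^{i \pi a}\csc(\pi a) = \frac{2i\, e^{i \pi a}}{e^{i \pi a}-e^{-i \pi a}} = \frac{2i}{1-e^{-2 i \pi a}},
\]
together with the observation that $\left|e^{-2 i \pi a}\right| = e^{2 \pi \text{Im}(a)} \rightarrow 0$ as $\text{Im}(a) \rightarrow -\infty$.\\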
Let $C^{1 \infty}$ be given by (\ref{eq:ci1}). Using $(-\alpha) \equiv \alpha e^{i \pi}$, we calculate,
\begin{align} &\left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta}\end{array}\right) C^{1 \infty} \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right) \nonumber \\
&\quad = \left(\begin{array}{cc} \alpha^{\gamma-\beta} & 0 \\ 0 & -\alpha^{\beta}\end{array}\right) \nonumber \\
&\quad \quad \left(\begin{array}{cc} e^{i \pi (\gamma-\beta)} \frac{\Gamma(\alpha+1-\beta)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)\Gamma(\alpha+1-\gamma)} & e^{i \pi (\gamma-\alpha)} \frac{\Gamma(\beta+1-\alpha) \Gamma(\alpha+\beta-\gamma)}{\Gamma(\beta)\Gamma(\beta+1-\gamma)} \\ e^{i \pi \alpha} \frac{\Gamma(\alpha+1-\beta)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} & e^{i \pi \beta} \frac{\Gamma(\beta+1-\alpha)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\alpha)\Gamma(\gamma-\alpha)} \end{array} \right) \left(\begin{array}{cc} (-\alpha)^{\beta-\gamma} & 0 \\ 0 & -(-\alpha)^{-\beta}\end{array}\right), \nonumber \\
&\quad = \left(\begin{array}{cc} \frac{\Gamma(\alpha+1-\beta)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)\Gamma(\alpha+1-\gamma)} & -e^{\pi i (\gamma-\alpha-\beta)}\alpha^{\gamma-2\beta} \frac{\Gamma(\beta+1-\alpha)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\beta)\Gamma(\beta+1-\gamma)} \\ -e^{\pi i(\alpha+\beta-\gamma)}\alpha^{2\beta-\gamma}\frac{\Gamma(\alpha+1-\beta)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} & \frac{\Gamma(\beta+1-\alpha)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\alpha)\Gamma(\gamma-\alpha)} \end{array} \right). \nonumber \end{align}
Using (\ref{eq:f1}), we find for the (1,1) and (2,2) elements:
\begin{align} &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \frac{\Gamma(\alpha+1-\beta)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)\Gamma(\alpha+1-\gamma)} = 1, \nonumber \\
\text{and } &\lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \pm \frac{\pi}{2}}} \frac{\Gamma(\beta+1-\alpha)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\alpha)\Gamma(\gamma-\alpha)} = 1, \nonumber \end{align}
respectively, as required. We rewrite the (1,2) and (2,1) elements using (\ref{eq:f2}) as follows:
\begin{align} -e^{\pi i (\gamma-\alpha-\beta)}\alpha^{\gamma-2\beta} &\frac{\Gamma(\beta+1-\alpha)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\beta)\Gamma(\beta+1-\gamma)} \nonumber \\
&= \frac{-e^{i \pi (\gamma-\alpha-\beta)}}{\sin(\pi(\alpha+\beta-\gamma))} \alpha^{\gamma-2\beta} \frac{\Gamma(\beta+1-\alpha)}{\Gamma(\gamma+1-\alpha-\beta)} \frac{\pi}{\Gamma(\beta)\Gamma(\beta+1-\gamma)}, \nonumber \end{align}
and,
\begin{align} -e^{i \pi (\alpha+\beta-\gamma)}\alpha^{2\beta-\gamma} &\frac{\Gamma(\alpha+1-\beta)\Gamma(\gamma-\alpha-\beta)}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} \nonumber \\
&= \frac{-e^{i \pi (\alpha+\beta-\gamma)}}{\sin(\pi(\gamma-\alpha-\beta))} \alpha^{2\beta-\gamma} \frac{\Gamma(\alpha+1-\beta)}{\Gamma(\alpha+\beta+1-\gamma)} \frac{\pi}{\Gamma(1-\beta)\Gamma(\gamma-\beta)},\nonumber \end{align}
respectively. As $\alpha \rightarrow \infty$, the dominant terms in these expressions are $e^{\mp i \pi \alpha}$, respectively; observe that if arg$(\alpha) = \pm \frac{\pi}{2}$, then $e^{\pm i \pi \alpha} \rightarrow 0$ as $\alpha \rightarrow \infty$, as required. Finally, for the most important computations, we have:
\begin{align} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = - \frac{\pi}{2}}} &\underbrace{\frac{-e^{i \pi (\alpha+\beta-\gamma)}}{\sin(\pi(\gamma-\alpha-\beta))}}_{\text{\normalsize $\rightarrow 2i \text{ by (\ref{eq:f3})}$}} \underbrace{\alpha^{2\beta-\gamma} \frac{\Gamma(\alpha+1-\beta)}{\Gamma(\alpha+\beta+1-\gamma)}}_{\text{\normalsize $\rightarrow 1 \text{ by (\ref{eq:f1})}$}} \frac{\pi}{\Gamma(1-\beta)\Gamma(\gamma-\beta)}, \nonumber \\
&= \frac{2 \pi i}{\Gamma(1-\beta)\Gamma(\gamma-\beta)} \equiv \left(S_{-1}\right)_{2,1}, \nonumber \end{align}
and,
\begin{align} \lim_{\substack{\alpha \rightarrow \infty \\ \text{arg}(\alpha) = \frac{\pi}{2}}} &\underbrace{\frac{-e^{i \pi (\gamma-\alpha-\beta)}}{\sin(\pi(\alpha+\beta-\gamma))}}_{\text{\normalsize $\rightarrow 2i \text{ by (\ref{eq:f3})}$}} \underbrace{\alpha^{\gamma-2\beta} \frac{\Gamma(\beta+1-\alpha)}{\Gamma(\gamma+1-\alpha-\beta)}}_{\text{\normalsize $\rightarrow e^{i \pi (\gamma-2\beta)} \text{ by (\ref{eq:f1})}$}} \frac{\pi}{\Gamma(\beta)\Gamma(\beta+1-\gamma)}, \nonumber \\
&= \frac{2 \pi i e^{i \pi (\gamma-2\beta)}}{\Gamma(\beta)\Gamma(\beta+1-\gamma)} \equiv \left(S_{0}\right)_{1,2}, \nonumber \end{align}
as required by formulae \eqref{eq:kummers-1}.
\section{Introduction}
In the present paper we continue a study of the onset of the perturbative
QCD regime in processes initiated by a virtual photon $\gamma^* (Q^2)$
along the lines of \cite{AMN,AMN_a}.
One may address two types of hard processes which allow one to
study the transition regime from Strong-QCD physics to
perturbative QCD at accessible energies:
these are (i)
hard hadronic form factors \cite{ER,BL}, and (ii)
pomeron-dominated reactions \cite{BFKL,Lip}.
An important problem which arises in the quantitative description of
hard processes is a numerical determination of the scale at which the
perturbative regime starts to dominate the amplitudes of interest.
Much attention to this subject has been paid in connection with the
elastic form factors \cite{ZhO,LSI}. In particular, the authors of \cite{ZhO}
advocated the viewpoint of an early applicability of pQCD at rather low
$Q^2$, whereas in \cite{LSI}
the arguments in favour of the late onset of the
perturbative regime, at very large $Q^2\simeq 50$ GeV$^2$, were
given. A quantitative study of \cite{AMN} confirmed the
arguments of ref. \cite{LSI} and showed that even at $Q^2$ as large as $20$
GeV$^2$ the form factor is still dominated by the soft non-perturbative
region of the $q\bar q$ kinematical configurations. In \cite{AMN,AMN_a}
the elastic pion and transition form factors were studied at
$0\leq Q^2\leq 100$ GeV$^2$ within an approach where both the truly
non-perturbative and perturbative contributions have been taken into
account. Namely, the form factor has been represented as a sum of the
triangle diagram (direct convolution of soft wave functions) and
diagrams with the one-gluon exchanges
(the convolution of hard one-gluon exchange kernel with soft wave functions).
The analysis of form factors in the region of small $Q^2\sim 0 - 1$
GeV$^2$ allowed us to reconstruct phenomenological soft wave functions of
pseudoscalar mesons and soft photon. It was found that the
diagrams with the one-gluon exchange (the terms of the order of $\alpha_s$)
started to dominate the form factor only at $Q^2>50$ GeV$^2$, whereas at
smaller $Q^2$ the contribution of non-perturbative triangle diagram was
substantial.
In this work we address diffractive
production by a hard photon, namely the reactions
$\gamma^*(Q^2)p\to\rho^0 p$ and $\gamma^*(Q^2)p\to\gamma^*(Q^2) p$.
One would have in mind the following physical picture of such processes
at asymptotically large $Q^2$: the virtual photon
produces $q\bar q$ pair at small distances (see Fig. 1), then
the quarks convert into vector meson (or outgoing photon)
with the $t$-channel emission of gluons:
the gluons at the top of the ladder are at relatively
small distances and thus a hard component of the pomeron (i.e. the BFKL
pomeron \cite{BFKL}) is selected.
The lower part of the gluon ladder is attached to the nucleon at small
momentum transfer, and thus the region of small transverse momenta is
selected at the bottom of the ladder.
Hence the most realistic scenario of the whole reaction is the following:
typical virtualities in the gluon ladder change from hard values selected
by the virtual photon in the hard quark-loop at the top of the ladder
to the soft values at the
bottom, and thus the whole $t$-channel behavior is determined by a
complicated object which is a superposition of the hard and the soft pomerons.
The details of the partition of the whole $t$-channel amplitude into the hard
and soft components depend on the values of $Q^2$ and specific details of the
selection of the hard component of the Pomeron by the quark loop. That is why a detailed
consideration of the hard upper quark-antiquark subprocess
based on realistic wave functions of the initial photon and the outgoing
vector meson is crucial for the understanding of the mechanism of the reaction
at large but finite $Q^2$.
Notice that the interest in a better understanding of the diffractive
production mechanism is also motivated by the experimental results.
Experimentally a rather strong change of the $W$-dependence of the
cross section of the reactions initiated by the photon in different
regions of the photon virtuality has been observed.
Namely, the $W$ dependence of the cross section of
photoproduction ($Q^2=0$) in the region $W\sim 10-200$ GeV is rather flat,
similar to that in hadronic processes like $\pi p\to \pi p$.
However, the reaction $\gamma^*(Q^2)p \to Vp $ at $Q^2 \sim 10-20$ GeV$^2$
demonstrates an increase of the cross section with $W$ as $W^{2\Delta}$
where $\Delta \approx 0.3$.
An attractive possibility one might think of is to refer this growth to the
change of the pomeron regime from the Strong-QCD one at small $Q^2$ to
the BFKL one at large $Q^2$. However, for better understanding of this
phenomenon one needs a more detailed quantitative analysis of the
different ingredients of a complicated amplitude
of the diffractive production.
In this paper we concentrate on the quark block with a
coupled photon, $\gamma^*(Q^2)$, which is responsible for a selection
of the region of small separations and thus of the hard pomeron component.
The quark block is determined by the convolution of photon wave functions
(the reaction $\gamma^*(Q^2)p\to\gamma^*(Q^2) p$) or
convolution of the photon and
$\rho$-meson wave functions (the reaction $\gamma^*(Q^2)p\to\rho^0 p$).
The light-cone wave function of photon
was found in ref. \cite{AMN_a} on the basis of experimental data
for the transition form factor $\gamma \gamma^*(Q^2) \to \pi^0$ at
$0 \leq Q^2 \leq 25$ GeV$^2$ \cite{g-pi}; this wave function
was successfully applied for the description of data on
$\gamma \gamma^*(Q^2) \to \eta$ and $\gamma \gamma^*(Q^2) \to \eta'$
\cite{g-pi,g-eta}. The analysis of the transition form factors has
been performed in terms of the double spectral representation over $q\bar q$
invariant mass squared, $M^2_{q\bar q} = \left ({m^2+k^2_{\perp} }
\right )/ \left (x(1-x) \right ) $
where $k^2_{\perp}$ and $x$
are the components of the light-cone quark momenta, and
$m$ is the constituent quark mass.
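This expression for the invariant mass follows directly from the light-cone kinematics of the pair: for on-shell quarks carrying the longitudinal momentum fractions $x$ and $1-x$ and transverse momenta $\pm\vec{k}_{\perp}$, one has
\[
M^2_{q\bar q}=\frac{m^2+k^2_{\perp}}{x}+\frac{m^2+k^2_{\perp}}{1-x}
=\frac{m^2+k^2_{\perp}}{x(1-x)}.
\]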
The photon wave function is represented as a
sum of two components which correspond to a direct production of the
$q\bar q$ pair with a point-like vertex $\gamma^* \to q\bar q$ at large
$M_{q\bar q}$, and to the production in the low-$M_{q\bar q} $ region
where the vertex has a nontrivial structure due to the soft $q\bar q$
interaction. The soft $q\bar q$ interaction yields an enhancement of the
contribution of low-masses and respectively large distances in the quark loop.
With the light-cone wave function of the photon at hand we can
calculate longitudinal and transverse polarization cross sections
$\gamma_L^*(Q^2)p\to\gamma_L^*(Q^2) p$ and
$\gamma_T^*(Q^2)p\to\gamma_T^*(Q^2) p$.
The onset of pQCD regime depends on the selection of
small interquark separations in a quark loop with $Q^2$.
So, it is important to test which fraction of the cross-section
as a function of $Q^2$ is actually gained at small transverse separations.
Therefore, in parallel with calculating the
$\gamma_{L,T}^*(Q^2)p\to\gamma_{L,T}^*(Q^2) p$
cross section which includes contribution of all transverse separations
in the quark-loop coupled to the BFKL pomeron,
we also calculate the part of this cross section which is actually gained at
small $q\bar q$ distances $\rho_{q\bar q}<0.2$ fm. Strictly speaking,
only this part of the cross section probes the BFKL pomeron, whereas
the cross section where the quarks in the loop are at large distances should
rather be coupled to the soft strong-QCD pomeron.
The results of our analysis show that in the region $Q^2 \sim 5 - 50$ GeV$^2$
the $\gamma^*(Q^2)p\to\gamma^*(Q^2) p$ cross section is dominated by the
domain of large transverse quark separations. The small quark
separations in the loop start to dominate the cross section only at
$Q^2 \geq 50$ GeV$^2$ and $W^2/Q^2\geq 10^6$.
This means that $\gamma^*(Q^2)$ selects small distances in the
$\gamma^*\gamma^*$-pomeron vertex approximately at the same values of
$Q^2$ as it was in the pion elastic and transition $\gamma \to
\pi, \; \eta,\; \eta'$ form factors.
The $\rho$-meson wave function necessary for the description of the reaction
$\gamma^*(Q^2)p\to\rho^0 p$ is not known directly, so we have to use some
assumptions concerning the details of its behavior.
The low-$M_{q\bar q} $ component of the $\rho$-meson wave function is supposed
to be approximately the same as for a pion. The latter was
found in \cite{AMN} from fitting the data for pion form factor at
$0\leq Q^2 \leq 10$ GeV$^2$ \cite{pi}.
Moreover, the $\rho$-meson low-$M_{q\bar q}$ wave function should be close to
that of the photon due to the vector-meson dominance.
The high-mass component of the $\rho $-meson wave function needs a special
discussion: for its calculation, we cannot apply
exactly the same procedure as for the pion form factor.
In the analysis of pion form factor
\cite{AMN}, all the corrections of the order of $\alpha_s$ were taken
into account. To this end, the wave function of the pion was split into
soft and hard components, $\Psi^S$ and $\Psi^H$, in the following way
\begin{eqnarray}
\Psi^S \; {\rm is\; dominant\; at\;} M_{q\bar q} < M_0, \qquad
\Psi^H\; {\rm is\; dominant\; at\;} M_{q\bar q} > M_0.
\end{eqnarray}
The parameter $M_0$, which separates the soft
and hard regions, is expected to be of the order of several GeV;
in \cite{AMN} the value $M_0=3$ GeV was chosen: this corresponds to
$|\vec{k}_0|^{-1}=\left (M_0^2/4-m^2 \right )^{-\frac{1}{2}}
\simeq 0.15 \; {\rm fm}$ at $m\simeq 350$ MeV.
The separation of soft and hard wave functions was done in a simplest
way using the step-function:
\begin{eqnarray}
\Psi_{\pi}= \Psi^S \theta(M_0-M_{q\bar q})+
\Psi^H \theta(M_{q\bar q}-M_0).
\end{eqnarray}
The hard component $\Psi^H$ is a convolution of the soft wave function
$\Psi^S$ and hard gluon exchange kernel $V^{\alpha_s}$:
\begin{eqnarray}
\Psi^H = V^{\alpha_s} \otimes \Psi^S .
\end{eqnarray}
The splitting of the wave function into the soft and hard components
yields the following expansion of the form factor over $\alpha_s$
\begin{eqnarray}
F=F^{SS}+2F^{SH}+O(\alpha_s^2)
\end{eqnarray}
where $F^{SS}$ is the soft form factor, and $F^{SH}$ is the soft-hard
contribution of the order $O(\alpha_s)$ which is determined by the
diagram with the hard gluon exchange, Eq. (3). In this way, including
the Sudakov form factor and renormalizing the quark mass,
we have taken into account all the terms of the order $O(\alpha_s)$
in Eq. (4).
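Schematically, this counting works as follows: since $\Psi^H = V^{\alpha_s} \otimes \Psi^S$ is of the order $\alpha_s$, substituting $\Psi_{\pi}=\Psi^S+\Psi^H$ into the form factor, which is bilinear in the wave functions, yields
\[
F \sim \Psi^S \otimes \Psi^S + 2\, \Psi^S \otimes \Psi^H + \Psi^H \otimes \Psi^H
= F^{SS} + 2F^{SH} + O(\alpha_s^2),
\]
where the last term, being of the order $\alpha_s^2$, is neglected.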
A proper consideration of all the $\alpha_s$ terms in the reaction
$\gamma^*(Q^2)p\to\rho^0 p$ is a much more
complicated task since such terms
appear not only as corrections to the $\rho$-meson vertex but also
as the next-to-leading order corrections to the BFKL-pomeron \cite{Fadin}
and pomeron-$q\bar q$ vertex. Such kind of analysis lies beyond the scope
of our present work. At the same time, the calculation of the
$\gamma^*(Q^2)p\to\rho^0 p$ amplitude with taking into account
only a part of the $\alpha_s$ corrections is not consistent.
In this situation, it seems reasonable to consider two extreme cases
of the behavior of the $\rho \to q\bar q$ vertex at large
$M^2_{q\bar q}$: first, $G_{\rho}(M^2_{q\bar q})\simeq M^{-2}_{q\bar q}$
and second $G_{\rho}(M^2_{q\bar q})= const\sim O(\alpha_s)$.
In both cases considered, the dominance of the
region of small interquark separations
$\rho_{q\bar q} <0.2$ fm was not observed, at least at
$Q^2 \le 50$ GeV$^2$ and $W^2/Q^2\geq 10^6$.
The paper is organized as follows:
In Section II we discuss the kinematics of the reaction,
the Lorentz structure of the amplitudes
and some principal points of the spectral representations. Section III presents
a detailed calculation of the amplitudes in the framework of
the spectral representations over the invariant $q\bar q$-mass. The
numerical results are discussed in Section IV. In Conclusion we
summarize the main results and present our viewpoint on the Strong-QCD pomeron
and the electroproduction of vector mesons at $Q^2 < 100$
GeV$^2$.
\section{Kinematics of the reactions and the Lorentz
structure of the amplitudes}
In this Section we describe the kinematics of the reaction and outline the
principal points in the dispersive representation of the amplitude.
Further technical details and final analytical expressions are
given in the next Section.
We use the following notation for the
four-momenta of the particles involved:
the initial virtual photon four-momentum $q$, the final virtual
photon (or vector meson) four-momentum $q'$,
the target nucleon momentum $p_N$,
and the outgoing nucleon momentum $p'_N$. The following momentum
conservation relations hold:
\begin{equation}
q'=q-\kappa, \quad p'_N=p_N+\kappa,\quad q+p_N=q'+p'_N.
\end{equation}
The reaction is characterized by three independent kinematical variables
$Q^2=-q^2$, $W=(p_Nq)/m_N$, and $\kappa^2$, since
$p_N^2=p'^2_N=m_N^2$ and $q'^2=\mu_V^2$ for
$\gamma^*(Q^2)p\to V p$ or $q'^2=-Q^2$ for
$\gamma^*(Q^2)p\to\gamma^*(Q^2) p$ .
We are interested in the kinematics when
\begin{equation}
\label{kin}
Q^2 {\rm \;\; is\;\; large}, \; \;
W^2/Q^2\gg 1,\qquad {\rm and\;} \; \kappa^2\to 0.
\end{equation}
It is convenient to describe the process in the reference frame where
the target nucleon is at rest, $p_N=(m_N, 0,0)$,
and the photon moves fast along the longitudinal axis $z$.
The kinematical conditions (\ref{kin}) then imply
that $q_z\to\infty$ and omitting the $1/q_z^2$ terms
we come to the following expressions for the
momentum and polarization vectors of the incoming photon
\begin{eqnarray}
q=(q_z-\frac{Q^2}{2q_z}, 0,q_z),
\qquad
\epsilon _{L}^{(\gamma)}(Q^2)=\frac{1}{iQ} (q_z,0,q_z-\frac{Q^2}{2q_z}),
\qquad
\epsilon _{T}^{(\gamma)}(Q^2)=(0,\vec e^{\;(\gamma)}_\perp,0).
\end{eqnarray}
For the outgoing vector meson in the reaction $\gamma^*(Q^2)p\to V p$,
one has
\begin{eqnarray}
q'=(q'_z+\frac{\mu^2_{\perp}}{2q'_z},-\vec\kappa_{\perp},q'_z),
\qquad
\epsilon _{L}^{(V)}(\mu^2)=\frac{1}{\mu_{\perp}}
(q'_z,0,q'_z+\frac{\mu_{\perp}^2}{2q'_z}),
\qquad
\epsilon _{T}^{(V)}=(0,\vec e^{\; (V)}_\perp,0).
\end{eqnarray}
where $\mu^2_\perp=\mu^2+\kappa_\perp^2$ and
$\vec e^{\; (V)}_\perp\vec\kappa_\perp=0$.
In the limit
$\kappa^2_{\perp}/m_N \to 0$, the
momenta $q'_z$ and $\kappa$ are equal to
\begin{equation}
q'_z=q_z -\frac{\mu_{\perp} ^2+Q^2}{2q_z}, \qquad
\kappa =q-q' =(0,\vec \kappa_{\perp},
\frac{\mu_{\perp} ^2+Q^2}{2q_z}).
\end{equation}
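These relations are fixed, up to $O(1/q_z^2)$ terms, by energy conservation in the quark block (note that $\kappa_0=0$ to this accuracy): equating the photon energy $q_0=q_z-Q^2/(2q_z)$ with the vector-meson energy $q'_0=q'_z+\mu_{\perp}^2/(2q'_z)$, one finds
\[
q'_z \simeq q_z-\frac{\mu_{\perp}^2+Q^2}{2q_z}.
\]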
Similar expressions determine the momentum and polarization vectors
of the outgoing photon in the reaction
$\gamma^*(Q^2)p\to\gamma^*(Q^2) p$: one needs to substitute
$\mu_V^2 \to -Q^2 $. Because of that we present below the formulas
for the vector meson production only.
Instead of the nucleon momentum $p_N$,
it is convenient to characterize the
reaction under the kinematical conditions (\ref{kin}) by the
vector
\begin{equation}
\label{n}
n=\frac{1}{q_0+q_z}(1,0,-1).
\end{equation}
The vectors $q$ and $n$ determine the plane for
longitudinal polarization, whereas the transverse plane is determined by
transverse components of the vector $\kappa$.
Notice that $n^2=0$ and $qn=-1$.
The amplitude of the reaction can be written in the form
\begin{eqnarray}
A=\epsilon^{(\gamma)}_\mu A_{\mu\nu}(q,q',n)\epsilon^{(V)}_\nu,
\end{eqnarray}
where, due to the gauge invariance, the tensor $A_{\mu\nu}$
satisfies the relations
$q^\mu A_{\mu\nu}(q,q',n)=q'^\nu A_{\mu\nu}(q,q',n)=0$,
thus having the following Lorentz structure:
\begin{eqnarray}
\label{ampl}
\nonumber
A_{\mu\nu}(q,q',n)&=&-(g_{\mu\nu}-\frac{n_\mu q_\nu}{nq}
-\frac{q'_\mu n_\nu}{nq'}
+\frac{qq'}{(nq)\;(nq')}n_\mu n_\nu)\;A_T(q^2,q'^2,nq,\kappa^2)\\
&&+(q^2 n_\mu-nq\cdot q_\mu)(q'^2 n_\nu-nq'\cdot q'_\nu)
\;A_L(q^2,q'^2,nq,\kappa^2)+O(\kappa_\mu, \kappa_\nu),
\end{eqnarray}
and $O(\kappa_\mu, \kappa_\nu)$ stands for other possible tensor
structures transversal with respect to $q_\mu$ and $q'_\nu$ and
proportional to $\kappa$.
In the limit $\kappa\to 0$ they do not contribute
to the cross section and are omitted.
The cross sections are connected with the introduced amplitudes as
follows:
\begin{eqnarray}
\frac{d\sigma(\gamma^*_T(Q^2)p\to V_T p)}{d(-\kappa^2)}
=|A_T|^2; \qquad
\frac{d\sigma(\gamma^*_{L}(Q^2)p\to V_{L} p)}{d(-\kappa^2)}
=Q^2\mu^2_V|A_L|^2,
\end{eqnarray}
where we have omitted the terms proportional to $\kappa^2$.
The invariant amplitudes $A_{L,T}$ are connected with
$A_{\mu\nu}$ through the following relations:
\begin{eqnarray}
A_T=-\epsilon^{(\gamma)}_{T\mu}(Q^2)A_{\mu\nu}(q,q',n)
\epsilon^{(V)}_{T\nu}(\mu^2_V), \qquad
A_L=\frac{1}{i\mu_VQ}
\epsilon^{(\gamma)}_{L\mu}(Q^2)A_{\mu\nu}(q,q',n)
\epsilon^{(V)}_{L\nu}(\mu^2_V).
\end{eqnarray}
One can write the spectral representations for the
invariant amplitudes $A_{L,T}$ as follows:
\begin{eqnarray}
A_{L,T}(q^2,q'^2)=\int \frac{dM^2_{q\bar q} \;
dM'^2_{q\bar q}}{\pi^2}
\frac{{\rm disc}_{M^2_{q\bar q}} \;{\rm disc}_{ M'^2_{q\bar q}} \;
A_{L,T}(M^2_{q\bar q}, M'^2_{q\bar q})}{(M^2_{q\bar q}-q^2)
( M'^2_{q\bar q}-q'^2)}.
\end{eqnarray}
In order to obtain
the double spectral densities ${\rm disc}_{M^2_{q\bar q} }
\;{\rm disc}_{ M'^2_{q\bar q}}\;
A_{L,T}(M^2_{q\bar q},M'^2_{q\bar q})$ we must consider the process
with the off-shell incoming
and outgoing particles. Namely, we have the following off-shell
momenta
\begin{equation} q'\to P', \qquad q\to P,
\end{equation}
where
\begin{equation}
P^2=M^2_{q\bar q}, \qquad P\;'^2=
M'^2_{q\bar q}, \qquad ( P\;'-P)^2=\kappa ^2,
\end{equation}
but
\begin{equation}
P-P\; ' \ne \kappa,
\end{equation}
if we take into account the terms of the order of $1/q_z$
and omit $O(1/q^2_z)$-terms.
The components of the off-shell incoming $P$ and outgoing $P'$
momenta can be chosen as follows:
\begin{equation}
P=(q_z+\frac{M^2_{q\bar q}}{2q_z}, 0,q_z), \qquad
P\; '=(q_z+
\frac{M'^2_{q\bar q}+\kappa^2_{\perp}}{2q_z},-\vec\kappa_{\perp} ,q_z).
\end{equation}
The off-shell amplitude $A_{\mu\nu}(P,P',n)$ has a
similar decomposition in
terms of $P,P',n$ as has the amplitude $A_{\mu\nu}(q,q',n)$
of Eq. (\ref{ampl}) in terms of
$q,q',n$.
Let us introduce the polarization vectors of the off-shell
vector particles. The corresponding
longitudinal polarization vectors are
\begin{equation}
\epsilon _{L}^{(P)} (M^2_{q\bar q})=\frac{1}{\sqrt { M^2_{q\bar q} }}
(q_z,0,q_z+\frac{M^2_{q\bar q}}{2q_z}),
\qquad
\epsilon _{L}^{(P\; ')}
(M'^2_{q\bar q})=\frac{1}{\sqrt {M'^2_{q\bar q}+\kappa^2_{\perp}}}
(q_z,0,q_z+\frac {M'^2_{q\bar q}+\kappa^2_{\perp}}{2q_z}).
\end{equation}
The transverse polarization vectors of the off-shell particles
are equal to the transverse polarization vectors of the incoming
and outgoing particles.
The amplitudes $A_{L,T}$ are connected with $A_{\mu\nu}$ as follows:
\begin{eqnarray}
A_T=-\epsilon^{(P)}_{T\mu}A_{\mu\nu}(P,P',n)\epsilon^{(P')}_{T\nu}, \qquad
A_L=\frac{1}{\sqrt{M^2_{q\bar q}
M'^2_{q\bar q}}}\epsilon^{(P)}_{L\mu}A_{\mu\nu}(P,P',n)
\epsilon^{(P')}_{L\nu},
\end{eqnarray}
where the factor $\sqrt{M^2_{q\bar q}M'^2_{q\bar q}}$
is just the following expression:
\begin{equation}
\epsilon^{(P)}_{L\mu}
(P^2 n_\mu-nP\cdot P_\mu)(P'^2 n_\nu-nP'\cdot P'_\nu)
\epsilon^{(P')}_{L\nu}=\frac{\sqrt{M^2_{q\bar q}}M'^2_{q\bar q}}
{\sqrt{M'^2_{q\bar q}+\kappa_\perp^2}} \simeq\sqrt{M^2_{q\bar q}
M'^2_{q\bar q}}.
\end{equation}
Hence,
\begin{eqnarray}
\nonumber
{\rm disc}_{M^2_{q\bar q}} {\rm disc}_
{M'^2_{q\bar q}} A_T&=&{\rm disc}_{M^2_{q\bar q}}
{\rm disc}_{M'^2_{q\bar q}}
\epsilon^{(P)}_{T\mu}A_{\mu\nu}(P,P',n)\epsilon^{(P')}_{T\nu},
\\
{\rm disc}_{M^2_{q\bar q}} {\rm disc}_{M'^2_{q\bar q}}A_L&=&{\rm
disc}_{M^2_{q\bar q}} {\rm disc}_{M'^2_{q\bar q}}
\epsilon^{(P)}_{L\mu}A_{\mu\nu}(P,P',n)\epsilon^{(P')}_{L\nu}
\frac{1}{\sqrt{M^2_{q\bar q}M'^2_{q\bar q}}}.
\end{eqnarray}
Finally, recall that the double spectral density of the amplitude
$A_{\mu\nu}(P,P',n)$ is calculated by placing all particles in the
intermediate mass-on-shell states.
Notice that the amplitude $A_T$ can be
directly isolated from $A_{\mu\nu}$.
Namely, setting $\mu,\nu=a,b=1,2$ and isolating the term
$-g_{\mu\nu} \to \delta_{ab}$, one obtains $A_T$
\begin{equation}
A_{ab}(P,P',n)=\delta_{ab}A_T+O(\kappa_a\kappa_b),
\end{equation}
since $\vec \kappa$ is the only vector with the components in the
transverse plane. In the next Section we perform a detailed
consideration of $A_{L,T}$.
\section{Spectral representation of the amplitudes}
The amplitudes of the reactions are given by the diagrams of Fig. 1.
First, we shall consider separately the upper quark-loop block
and then take into account its interaction with a nucleon through
the Pomeron exchange. We shall discuss
in parallel the cases of the transverse and longitudinal
polarizations of the
initial photon and outgoing vector meson (photon). As before,
we concentrate on the reaction
$\gamma^*(Q^2)p\to V p$ keeping in mind that for
$\gamma^*(Q^2)p\to\gamma^*(Q^2) p$ the formulas are written
analogously.
\subsection{The block of the photon and vector meson interaction
with the BFKL Pomeron }
The Pomeron is attached to the photon and the vector meson through the quark
loop diagrams of Fig. 1. We consider separately the cases
when both Reggeized gluons
are attached to the same and to different quarks in the loop.
\subsubsection{The gluon ladder attached to a single constituent}
The diagram for this subprocess is shown in Fig. 2a. The analytical spectral
representation for this quark-loop diagram has the following structure:
\begin{equation}
A^{I}_{L,T}=\int\frac{dM_{q\bar q}^2}\pi G_\gamma(M_{q\bar q}^2)
\frac{d\Phi_2(P;k_1,k_2)}{M_{q\bar q}^2-q^2-i0}
\frac{dM''^2_{q\bar q}}\pi\frac{d\Phi_1(P'';k_1'',k_2)}
{M''^2_{q\bar q}-(q-k)^2-i0}
\frac{dM'^2_{q\bar q}}\pi\frac{d\Phi_1(P';k_1',k_2)}
{M'^2_{q\bar q}-(q-\kappa)^2-i0}
G_V(M'^2_{q\bar q})g^2(-1)S^I_{L,T},
\label{start1}
\end{equation}
where $P^2=M_{q\bar q}^2$, $P'^2=M'^2_{q\bar q}$, $P''^2=M''^2_{q\bar
q}$ are the invariant masses squared of the $q\bar q$ pairs in the
intermediate states and the corresponding phase space factors read
\begin{eqnarray}
d\Phi_2(P;k_1,k_2)&=&\frac 12 \frac{d^3k_1}{(2\pi)^3 2 k_{10}}
\frac{d^3k_2}{(2\pi)^3 2 k_{20}} (2\pi)^4\delta^4(P-k_1-k_2)
\nonumber \\
&=&\frac 1{(4\pi)^2}\frac{dx_1dx_2}{x_1x_2}\delta(1-x_1-x_2)
d^2k_{1\perp} d^2k_{2\perp}\delta(\vec{k}_{1\perp}+\vec{k}_{2\perp})
\delta\left(M_{q\bar q}^2-\frac{m^2_{1\perp}}{x_1}
-\frac{m^2_{2\perp}}{x_2}\right),
\nonumber \\
d\Phi_1(P'';k_1'',k_2)&=&\frac 12 \frac{d^3k_1''}{(2\pi)^3 2
k_{10}''} (2\pi)^4\delta^4(P''-k_1''-k_2) \nonumber \\
&=&\pi\frac{dx_1''}{x_1''}\delta(1-x_1''-x_2)d^2k_{1\perp}''
\delta(\vec{k}_{1\perp}''+\vec{k}_\perp+\vec{k}_{2\perp})
\delta\left(M''^2_{q\bar q}+k^2_\perp
-\frac{m''^2_{1\perp}}{x_1''}-\frac{m^2_{2\perp}}{x_2}\right),
\nonumber \\
d\Phi_1(P';k_1',k_2)&=&\frac 12 \frac{d^3k_1'}{(2\pi)^3 2 k_{10}'}
(2\pi)^4\delta^4(P'-k_1'-k_2)
\nonumber \\
&=&\pi\frac{dx_1'}{x_1'}\delta(1-x_1'-x_2)d^2k_{1\perp}'
\delta(\vec{k}_{1\perp}'+\vec{\kappa}_\perp+\vec{k}_{2\perp})
\delta\left(M'^2_{q\bar q}+\kappa^2_\perp
-\frac{m'^2_{1\perp}}{x_1'}-\frac{m^2_{2\perp}}{x_2}\right).
\label{phas-vol}
\end{eqnarray}
Here we have taken into account that $k_z$ is small:
the integration over $k_z$ is performed by closing the
integration contour around the pole
$(M''^2_{q\bar q}+Q^2-2q_zk_z-i0)^{-1}$, which is equivalent to
$(M''^2_{q\bar q}+Q^2-2q_zk_z-i0)^{-1}
\to \; -i\pi \delta (M''^2_{q\bar q}+Q^2-2q_zk_z)$.
As a result, we find
\begin{eqnarray}
A^{I}_{L,T}&=&\frac 1{4\pi} \int^{1}_{0}\frac{dx}{x(1-x)^3}
\int \frac{d^2k_{2\perp}}{(2\pi)^2}
dk_z \frac{G_\gamma(M_{q\bar q}^2)}{M_{q\bar q}^2+Q^2}\;
\frac 1{M''^2_{q\bar q}+Q^2-2q_zk_z-i0}\;
\frac{G_V(M'^2_{q\bar q})}
{M'^2_{q\bar q}-\mu^2_V} g^2(-1)S^I_{L,T}
\nonumber \\
&=&\int^{1}_{0} \frac{dx}{x(1-x)^3}
\int\frac{d^2k_{2\perp}}{(4\pi)^2}
\frac{G_\gamma(M_{q\bar q}^2)}{M_{q\bar q}^2+Q^2}\;
\frac{G_V(M'^2_{q\bar q})}{M'^2_{q\bar q}-\mu^2_V}\;
\frac {-ig^2}{2q_z} S^I_{L,T},
\label{A^I}
\end{eqnarray}
where
\begin{eqnarray}
M_{q\bar q}^2&=&\frac{m^2+k^2_{2\perp}}{x(1-x)},\quad
M''^2_{q\bar q}=
\frac{m^2+(\vec{k}_{2\perp}+x\vec{k}_\perp)^2}{x(1-x)},\quad
M'^2_{q\bar q}=
\frac{m^2+(\vec{k}_{2\perp}+x\vec{\kappa}_\perp)^2}{x(1-x)},\quad
x\equiv x_2.
\label{smas1}
\end{eqnarray}
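The light-cone kinematics above can be cross-checked numerically. The following is a minimal Python sketch of the invariant masses in (\ref{smas1}); all numerical values (quark mass, transverse momenta, momentum fraction) are assumed for illustration only and are not taken from this paper's fits.

```python
# Invariant masses of the intermediate q-qbar states in light-cone
# kinematics: M^2 = (m^2 + k_T^2) / (x (1 - x)).  All values in GeV;
# the numbers below are assumed for illustration only.

def inv_mass_sq(m, kperp, x):
    """M^2 = (m^2 + k_T^2) / (x (1 - x)) for a two-particle LC state."""
    kx, ky = kperp
    return (m * m + kx * kx + ky * ky) / (x * (1.0 - x))

def shifted(kperp, frac, qperp):
    """Transverse momentum shifted by a fraction of the exchanged one."""
    return (kperp[0] + frac * qperp[0], kperp[1] + frac * qperp[1])

m, x = 0.35, 0.4            # constituent quark mass, momentum fraction
k2 = (0.3, 0.0)             # quark transverse momentum k_{2T}
k = (0.2, 0.0)              # gluon transverse momentum k_T
kappa = (0.1, 0.0)          # overall momentum transfer kappa_T

M2    = inv_mass_sq(m, k2, x)                     # initial state
M2_dd = inv_mass_sq(m, shifted(k2, x, k), x)      # after gluon exchange
M2_d  = inv_mass_sq(m, shifted(k2, x, kappa), x)  # final state
```

With these (assumed) momenta the shifts increase the transverse momentum, so the intermediate-state masses grow monotonically along the chain.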
The spin factor $S^I_{L,T}$ appears due to
a decomposition of the trace $S^{I}_{\mu\nu}$
corresponding to the quark loop of Fig. 2a
\begin{eqnarray}
\label{sp1}
S^{I}_{\mu\nu}=Tr \left( \gamma_\nu(\hat{k}_1'+m) \hat{n}
(\hat{k}_1''+m) \hat{n}(\hat{k}_1+m)\gamma_\mu
(-\hat{k}_2+m)\right),
\end{eqnarray}
where $\hat{n}$, being determined by (\ref{n}),
stands for the vertex of the reggeized gluon-quark
coupling \cite{FGL}. Let us
stress once more that all quarks in this expression
are mass-on-shell, whereas the photon and the vector meson momenta are
mass-off-shell, such that $P^2=M^2_{q\bar q}$ and $P'^2=M'^2_{q\bar q}$. As
we have discussed in the previous Section, to obtain the transverse
amplitude, i.e. $S^I_T$, we must set $\mu,\nu=a,b=1,2$ and isolate the
structure proportional to $\delta_{ab}$. This procedure yields $S^I_T$ in
the form
\begin{equation} S^I_T= -8 \frac{(1-x)}x \left [m^2+\left (
1-2x(1-x) \right ) \vec k_{2\perp}\vec k'_{2\perp} \right ],
\end{equation}
where $\vec k'_{2\perp}=\vec k_{2\perp}+x\vec\kappa_\perp$. Note that the
last term in the square brackets is not small, playing a significant role
in the determination of $A_T$.
The spin factor for the longitudinal polarization can be
calculated by performing the convolution with the longitudinal
polarization vectors of the off-shell photon and vector meson as follows:
\begin{equation}
S_L^I= \frac {
\epsilon^{(V)}_{L\ \beta}(P'^2)
Tr \left( \gamma_\beta(\hat{k}_1'+m) \hat{n}
(\hat{k}_1''+m) \hat{n}(\hat{k}_1+m)\gamma_\alpha
(-\hat{k}_2+m)\right)\epsilon^{(\gamma)}_{L\ \alpha}(P^2)}
{P^2P'^2
\left(\epsilon^{(V)}_{L}(P'^2)n\right)
\left(\epsilon^{(\gamma)}_{L}(P^2)n\right)}\ .
\label{sp1l}
\end{equation}
The calculation of this trace yields the following result
\begin{eqnarray}
S_L^I&=&
-32\; \frac{(1-x)}x\;
\frac{x^2(1-x)^2}{m^2+{k'}_{2\perp}^2+x(1-x)\kappa_\perp^2}
\left[{m^2+k'^2_{2\perp}-(x-1/2){\vec k'}_{2\perp}\vec\kappa_\perp}
\right] \to -32x(1-x)^3.
\label{ssspI}
\end{eqnarray}
Here we take into account that the last term in the square brackets
gives a negligible contribution to the BFKL amplitude.
\subsubsection{The gluon ladder attached to both constituents}
This subprocess is displayed in Fig. 2b. Likewise, the triple spectral
representation for the corresponding amplitude takes the form:
\begin{equation}
A^{II}_{L,T}=\int\frac{dM_{q\bar q}^2}\pi G_\gamma(M_{q\bar q}^2)
\frac{d\Phi_2(P;k_1,k_2)}{M_{q\bar q}^2-q^2-i0}
\frac{dM''^2_{q\bar q}}\pi\frac{d\Phi_1(P'';k_1',k_2)}
{M''^2_{q\bar q}-(q-k)^2-i0}
\frac{dM'^2_{q\bar q}}\pi\frac{d\Phi_1(P';k_1',k_2')}
{M'^2_{q\bar q}-(q-\kappa)^2-i0}
G_V(M'^2_{q\bar q})g^2(-1)S^{II}_{L,T}.
\end{equation}
Now the phase space factors read
\begin{eqnarray}
d\Phi_2(P;k_1,k_2)&=&
\frac 1{(4\pi)^2}\frac{dx_1dx_2}{x_1x_2}\delta(1-x_1-x_2)
d^2k_{1\perp} d^2k_{2\perp}\delta(\vec{k}_{1\perp}+\vec{k}_{2\perp})
\delta\left(M_{q\bar q}^2-\frac{m^2_{1\perp}}{x_1}
-\frac{m^2_{2\perp}}{x_2}\right),
\nonumber \\
d\Phi_1(P'';k_1',k_2)&=&
\pi\frac{dx_1'}{x_1'}\delta(1-x_1'-x_2)d^2k_{1\perp}'
\delta(\vec{k}_{1\perp}'+\vec{k}_\perp+\vec{k}_{2\perp})
\delta\left(M''^2_{q\bar q}+k^2_\perp
-\frac{m'^2_{1\perp}}{x_1'}-\frac{m^2_{2\perp}}{x_2}\right),
\nonumber \\
d\Phi_1(P';k_1',k_2')&=&
\pi\frac{dx_2'}{x_2'}\delta(1-x_1'-x_2')d^2k_{2\perp}'
\delta(\vec{k}_{1\perp}'+\vec{\kappa}_\perp+\vec{k}_{2\perp}')
\delta\left(M'^2_{q\bar q}+\kappa^2_\perp
-\frac{m'^2_{1\perp}}{x_1'}-\frac{m'^2_{2\perp}}{x_2'}\right).
\end{eqnarray}
This time the expressions for the invariants
$M_{q\bar q}^2,M''^2_{q\bar q},M'^2_{q\bar q}$ take the form:
\begin{equation}
M_{q\bar q}^2=
\frac{m^2+k^2_{2\perp}}{x(1-x)},\quad
M''^2_{q\bar q}=
\frac{m^2+(\vec{k}_{2\perp}+(1-x)\vec{k}_\perp)^2}{x(1-x)},\quad
M'^2_{q\bar q}=
\frac{m^2+(\vec{k}_{2\perp}+\vec{k}_\perp-x\vec{\kappa}_\perp)^2}{x(1-x)},
\quad x\equiv x_1.
\label{smas2}
\end{equation}
Performing the integration over $k_z$, we arrive at the expression:
\begin{equation}
A^{II}_{L,T}=\int^{1}_{0} \frac{dx}{x^2(1-x)^2}
\int\frac{d^2k_{2\perp}}{(4\pi)^2}
\frac{G_\gamma(M_{q\bar q}^2)}{M_{q\bar q}^2+Q^2}\;
\frac{G_V(M'^2_{q\bar q})}{M'^2_{q\bar q}-\mu^2_V}\;
\frac{-ig^2}{2q_z}S^{II}_{L,T}.
\end{equation}
The quark loop trace for the diagram of Fig. 2b reads
\begin{equation}
\label{sp2}
S^{II}_{\mu\nu}= Tr \left( \gamma_\nu(\hat{k}_1'+m) \hat{n}
(\hat{k}_1+m)\gamma_\mu (-\hat{k}_2+m)\hat{n}
(-\hat{k}_2'+m)\right).
\end{equation}
Again, setting $\mu,\nu=a,b=1,2$ and isolating the factor
proportional to
$\delta_{ab}$ yields a simple expression:
\begin{equation}
S_T^{II}=
8\left[ m^2+
\left( 1-2x(1-x) \right )
{\vec k_{2\perp}}{\vec k''_{2\perp}}\right ],
\label{ssp2}
\end{equation}
where we have introduced the vector
\begin{equation}
\label{k''}
\vec k''_{2\perp}=\vec{k}_{2\perp}+\vec{k}_\perp-x\vec{\kappa}_\perp.
\end{equation}
For longitudinal polarization the spin factor reads
\begin{eqnarray}
\label{sp2l}
S_L^{II}&=&\frac {
\epsilon^{(V)}_{L\ \beta}(P'^2)
Tr \left( \gamma_\beta(\hat{k}_1'+m) \hat{n}
(\hat{k}_1+m)\gamma_\alpha (-\hat{k}_2+m)\hat{n}
(-\hat{k}_2'+m)\right)\epsilon^{(\gamma)}_{L\ \alpha}(P^2)}
{P^2P'^2
\left(\epsilon^{(V)}_{L}(P'^2)n\right)
\left(\epsilon^{(\gamma)}_{L}(P^2)n\right)}\ ,
\end{eqnarray}
which, after calculating the trace, takes the following form:
\begin{equation}
S_L^{II}=32\; \frac{x^2(1-x)^2}
{m^2+k''^2_{2\perp}}\;
\left[ m^2+k''^2_{2\perp}
+(x-1/2)\vec\kappa_\perp \vec k''_{2\perp}\right ] \to 32x^2(1-x)^2.
\label{sssp2}
\end{equation}
Here, as in (\ref{ssspI}), we take into account that the last term in
the square brackets gives a small contribution to the BFKL amplitude.
The vertices $G_\gamma(M_{q\bar q}^2)$ and $G_V(M'^2_{q\bar q})$ which
are used for the calculation are shown in Fig. 3.
\subsection{BFKL-pomeron couplings to the quark loop and nucleon}
Now we must attach the quark loop and the target nucleon to
the BFKL-pomeron amplitude; there are several possible
scenarios for this attachment.
\subsubsection{Pomeron-nucleon coupling}
We consider two possible scenarios for
attaching the Pomeron to the
nucleon target:\\
(i) the BFKL Pomeron is first transformed into the
soft Pomeron, and this soft Pomeron is then
attached to the nucleon, \\
(ii) the BFKL Pomeron is attached directly to the nucleon.
The momentum transfer to the Pomeron $\kappa$ is not large and there is
no selection of small separations along the gluon
ladder: while moving from the quark block to the nucleon, the distances
between the $t$-channel gluons in the impact parameter space increase.
(i) {\bf Soft-Pomeron-nucleon coupling}.
If the ladder is rather long (i.e. at large $W$),
the distances between the
gluons become normal hadronic distances, and in
this case the Pomeron is no
longer the perturbative BFKL one but is rather in a soft regime. This
soft Pomeron is then attached to the nucleon, and we use the standard
soft-Pomeron-nucleon coupling which we denote as
$\tilde g F_{PNN}(\kappa^2_\perp)$.
Within this scenario, and following the prescription of refs.
\cite{BFKL,Lip,ryskin,forshaw}, we write the amplitude
of Fig. 1a for the case
when both Reggeized gluons are attached to the same
quark (or antiquark):
\begin{eqnarray}
A_{L,T}^{BFKL-q}(\gamma^* p\to Vp)&=&-\frac i2\int^{1}_{0}
\frac{dx}{x^2(1-x)^2}
\int\frac{d^2k_{2\perp}}{(2\pi)^2}
\frac{d^2k_\perp}{(2\pi)^2}\;
\Psi_\gamma(x,k^2_{2\perp})\;
\Psi_V\left(x, \left( \vec{k}_{2\perp}+x\vec{\kappa}_\perp \right)^2
\right)
\nonumber \\
&\times&\frac{x}{1-x} S^I_{L,T}g^2 \int^{\infty}_{-\infty}
\frac{d\nu\;\;\nu^2}{(\nu^2+\frac 14)^2}
\left(\frac{W^2}{Q^2+\mu^2_V}\right)^{\omega(\nu)}
\int d^2\rho_1 d^2\rho_2
e^{i\vec{k}_\perp(\vec{\rho}_1-\vec{\rho}_2)}
e^{i\vec{\kappa}_\perp\vec{\rho}_2}
\nonumber \\
&\times&\left(
\left[\frac{(\vec{\rho}_1-\vec{\rho}_2)^2}{\rho^2_1\rho^2_2}\right]
^{\frac 12 +i\nu}
-\left[\frac 1{\rho^2_1}\right]^{\frac 12 +i\nu}
-\left[\frac 1{\rho^2_2}\right]^{\frac 12 +i\nu}
\right)
\tilde g F_{PNN}(\kappa^2_\perp),
\label{tot1}
\end{eqnarray}
where
\begin{equation}
\Psi_\gamma(x,k^2_{2\perp})=
\frac{G_\gamma(M_{q\bar q}^2)}{M_{q\bar q}^2+Q^2},\quad
\Psi_V\left(x, \left( \vec{k}_{2\perp}+x\vec{\kappa}_\perp\right)^2\right)
=\frac{G_V(M'^2_{q\bar q})}{M'^2_{q\bar q}-\mu^2_V}\ ,
\end{equation}
and the invariant masses
$M^2_{q\bar q}$, $M'^2_{q\bar q}$ are given by
(\ref{smas1}).
The variables $\vec{\rho}_1$ and $\vec{\rho}_2$ are the gluon coordinates
in the impact parameter space.
The energy dependence of the amplitude is given by the function $\omega(\nu)$:
\begin{equation}
\omega(\nu)=\frac{2\alpha_s C_A}\pi
Re\left(
\frac{\Gamma'(1)}{\Gamma(1)}
-\frac{\Gamma'\left( \frac 12+i\nu\right)}
{\Gamma\left( \frac 12+i\nu\right)}
\right),
\end{equation}
with $C_A=N_c=3$ and $\Gamma(z)$ the Euler $\Gamma$-function.
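The function $\omega(\nu)$ is straightforward to evaluate numerically. The sketch below is a minimal Python implementation (the complex digamma is hand-rolled via the recurrence and asymptotic series, and the value $\alpha_s=0.2$ is an illustrative assumption); it reproduces the familiar BFKL intercept $\omega(0)=4\ln 2\,\alpha_s N_c/\pi$, since $\psi(1)-\psi(1/2)=2\ln 2$.

```python
import cmath
import math

def digamma(z):
    """Complex digamma: shift with psi(z) = psi(z+1) - 1/z, then apply
    the asymptotic series, accurate for Re z >~ 12."""
    s = 0j
    while z.real < 12.0:
        s -= 1.0 / z
        z += 1.0
    zi = 1.0 / z
    zi2 = zi * zi
    return s + cmath.log(z) - 0.5 * zi - zi2 * (1.0 / 12.0
           - zi2 * (1.0 / 120.0 - zi2 / 252.0))

def omega(nu, alpha_s=0.2, C_A=3):
    """omega(nu) = (2 alpha_s C_A / pi) Re[psi(1) - psi(1/2 + i nu)]."""
    chi = (digamma(complex(1.0)) - digamma(complex(0.5, nu))).real
    return 2.0 * alpha_s * C_A / math.pi * chi
```

The function is even in $\nu$ and maximal at $\nu=0$, which is why the small-$\nu$ region dominates the $\nu$-integrals at large $W$.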
Likewise, the amplitude of Fig. 1b, when the gluons are attached to
quark and antiquark, takes the form
\begin{eqnarray}
A_{L,T}^{BFKL-q\bar q}(\gamma^*p\to Vp)&=&-\frac i2\int^{1}_{0}
\frac{dx}{x^2(1-x)^2}
\int\frac{d^2k_{2\perp}}{(2\pi)^2}
\frac{d^2k_\perp}{(2\pi)^2}\;
\Psi_\gamma(x,k^2_{2\perp})\;
\Psi_V\left(x,\left( \vec{k}_{2\perp}+\vec{k}_\perp-x\vec{\kappa}_\perp
\right)^2\right)
\nonumber \\
&\times&g^2S^{II}_{L,T}\int^{\infty}_{-\infty}
\frac{d\nu\;\;\nu^2}{(\nu^2+\frac 14)^2}
\left(\frac{W^2}{Q^2+\mu^2_V}\right)^{\omega(\nu)}
\int d^2\rho_1 d^2\rho_2
e^{i\vec{k}_\perp(\vec{\rho}_1-\vec{\rho}_2)}
e^{i\vec{\kappa}_\perp\vec{\rho}_2}
\nonumber \\
&\times&\left(
\left[\frac{(\vec{\rho}_1-\vec{\rho}_2)^2}{\rho^2_1\rho^2_2}\right]
^{\frac 12 +i\nu}
-\left[\frac 1{\rho^2_1}\right]^{\frac 12 +i\nu}
-\left[\frac 1{\rho^2_2}\right]^{\frac 12 +i\nu}
\right)
\tilde g F_{PNN}(\kappa^2_\perp),
\label{tot2}
\end{eqnarray}
where
\begin{equation}
\Psi_\gamma(x,k^2_{2\perp})=
\frac{G_\gamma(M_{q\bar q}^2)}{M_{q\bar q}^2+Q^2},\quad
\Psi_V\left(x, \left( \vec{k}_{2\perp}+\vec k_\perp - x\vec{\kappa}_\perp\right)^2\right)
=\frac{G_V(M'^2_{q\bar q})}{M'^2_{q\bar q}-\mu^2_V}\ ,
\end{equation}
and the invariant masses $M^2_{q\bar q}$ and
$M'^2_{q\bar q}$ are given by (\ref{smas2}).
The nucleon-Pomeron coupling $\tilde g F_{PNN}(\vec{\kappa}^2_\perp)$
can be well approximated assuming
$F_{PNN}(\kappa^2_\perp)= e^{-B\kappa^2_\perp}$ with
$B\simeq 2.5$ GeV$^{-2}$.
The total amplitude is obtained by summing the amplitudes of all
subprocesses; it takes the form
\begin{equation}
A^{BFKL}_{L,T}(\gamma^* p\to Vp)=
A_{L,T}^{BFKL-q}(\gamma^* p\to Vp)
+A_{L,T}^{ BFKL-\bar q}(\gamma^* p\to Vp)
+2\;A_{L,T}^{BFKL-q\bar q}(\gamma^* p\to Vp)\ .
\end{equation}
Some important cancellations occur in the total amplitude: namely,
the terms in the BFKL amplitude that depend
on $\rho_1$ or $\rho_2$ separately cancel each other.
In addition, the term proportional to $(\rho_1-\rho_2)^2$
in $A^{BFKL-q}_{L,T}$
vanishes, since the $k_\perp$ integration yields
$\delta(\rho_1-\rho_2)$.
Finally, we come to the following representation of the
total amplitude of the diffractive vector meson production
\begin{eqnarray}
A^{BFKL}_{L,T}(\gamma^*p\to V p)&=&-i
\int^{1}_{0} dx
\int^{\infty}_{0}
\frac{d\nu\;\;\nu^2}{(\nu^2+\frac 14)^2}
\left(\frac{W^2}{Q^2+\mu^2_V}\right)^{\omega(\nu)}
\nonumber \\
&\times&
\int d^2\rho_1 d^2\rho_2
\exp\left[{i\vec\kappa_\perp(\vec\rho_1x+\vec\rho_2(1-x))}\right]
\left[\frac{(\vec{\rho}_1-\vec{\rho}_2)^2}{\rho^2_1\rho^2_2}\right]
^{\frac 12 +i\nu}
\nonumber \\
&\times&
\int
\frac{d^2k''_{2\perp}}{(2\pi)^2}
\frac{d^2k_{2\perp}}{(2\pi)^2}
\Psi_\gamma(x,k^2_{2\perp})\Psi_V(x,{k''^2_{2\perp}})
\exp\left[{i(\vec k''_{2\perp}-\vec k_{2\perp})(\vec\rho_1-\vec\rho_2)}\right]
S_{L,T}^{II}
C e^{-B\kappa^2_\perp}\ ,
\label{totfin1}
\end{eqnarray}
where $k''_{2\perp}$ is given by (\ref{k''}), and $C=g^2\tilde g$ plays
the role of a normalization constant.
It is convenient to transform the
final expression for the photon and vector meson wave functions to
the coordinate representation as follows:
\begin{eqnarray}
A^{BFKL}_{L,T}(\gamma^* p\to Vp)&=&
-\frac{i}{(2\pi)^2}\; C e^{-B\kappa^2_\perp}
\int^{1}_{0} \frac{dx}{x^2(1-x)^2}
\int^{\infty}_{0}
\frac{d\nu\;\;\nu^2}{(\nu^2+\frac 14)^2}
\left(\frac{W^2}{Q^2+\mu^2_V}\right)^{\omega(\nu)}
\nonumber \\
&\times&
\int d^2\rho \; d^2R
e^{i\vec{\kappa}_\perp\left( \vec{R}-\left( \frac 12-x\right)
\vec{\rho}\right)}
\left[\frac{\rho^2}{\left(\vec{R}+\frac{\vec{\rho}}2\right)^2
\left(\vec{R}-\frac{\vec{\rho}}2\right)^2} \right]^{\frac 12 +i\nu}
F_{L,T}(x,\rho^2),
\end{eqnarray}
where $\vec{R}=(\vec{\rho}_1+\vec{\rho}_2)/2$ and
$\vec{\rho}=\vec{\rho}_1-\vec{\rho}_2$.
The functions $F_{L,T}(x,\rho^2)$ are defined as follows:
\begin{eqnarray}
F_{L}(x,\rho^2)&=&32\;x^2(1-x)^2\; \Phi^{(0)}_V(x,\rho^2)
\Phi^{(0)}_\gamma(x,\rho^2),
\\
F_{T}(x,\rho^2)&=&8\;\left( m^2 \Phi^{(0)}_V(x,\rho^2)
\Phi^{(0)}_\gamma(x,\rho^2)+
2\left( 1-2x(1-x) \right)
\Phi^{(1)}_V (x,\rho^2) \Phi^{(1)}_\gamma(x,\rho^2)\right),
\label{final}
\end{eqnarray}
where
\begin{eqnarray}
\Phi^{(n)}(\rho^2)&=&\int^{\infty}_{0}dk\;k^{n+1}\;\Psi(k^2) J_n(k\rho).
\label{deffi}
\end{eqnarray}
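The transform (\ref{deffi}) can be checked by direct quadrature. The sketch below evaluates $\Phi^{(n)}$ using the integral representation of $J_n$ and a toy Gaussian profile $\Psi(k^2)=e^{-k^2/\lambda^2}$ (an assumed stand-in, not the realistic vertex of Fig. 3), for which the $n=0,1$ transforms are known in closed form.

```python
import math

def bessel_j(n, x, steps=400):
    """J_n(x) from (1/pi) * integral_0^pi cos(n*t - x*sin t) dt
    (trapezoid; endpoint derivatives vanish, so it is very accurate)."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def phi_n(n, rho, psi, kmax=8.0, steps=1600):
    """Phi^(n)(rho^2) = integral_0^kmax dk k^{n+1} Psi(k^2) J_n(k rho)."""
    h = kmax / steps
    total = 0.0
    for i in range(steps + 1):
        k = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * k ** (n + 1) * psi(k * k) * bessel_j(n, k * rho)
    return total * h

lam = 1.0                                    # toy scale, GeV
psi_gauss = lambda k2: math.exp(-k2 / lam ** 2)
# Closed forms for lam = 1:
#   Phi^(0)(rho^2) = (1/2) exp(-rho^2/4)
#   Phi^(1)(rho^2) = (rho/4) exp(-rho^2/4)
```

For a rapidly decaying profile the upper cut-off `kmax` is harmless; for the realistic vertices one would have to check the large-$M_{q\bar q}$ tail explicitly, as discussed below for $G_\rho$.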
(ii) {\bf BFKL-pomeron-nucleon coupling}.
One could also consider another scenario for
the Pomeron coupling to the nucleon
target. Indeed, if the gluon ladder is rather short and the BFKL Pomeron
is not transformed into a soft one, the amplitude of the vector meson
production takes the following form:
\begin{eqnarray}
A^{BFKL}_{L,T}(\gamma^* p\to Vp)&=&-i
\int^{1}_{0} \frac{dx}{x^2(1-x)^2}
\int^{\infty}_{0}
\frac{d\nu\;\;\nu^2}{(\nu^2+\frac 14)^2}
\left(\frac{W^2}{Q^2+\mu^2_V}\right)^{\omega(\nu)}
\nonumber \\
&\times&
\int d^2\rho \; d^2R
e^{i\vec{\kappa}_\perp\left( \vec{R}-\left( \frac 12-x\right)
\vec{\rho}\right)}
\left[\frac{\rho^2}{\left(\vec{R}+\frac{\vec{\rho}}2\right)^2
\left(\vec{R}-\frac{\vec{\rho}}2\right)^2} \right]^{\frac 12 +i\nu}
\;\;\frac 1{(2\pi)^2}\; C A_N(\nu,\vec{\kappa}_\perp)
S^{II}_{L,T}.
\label{finalnew}
\end{eqnarray}
Here we have introduced the quantity
\begin{eqnarray}
A_N(\nu,\vec{\kappa}_\perp)&=&
\int d^2\rho'_1 d^2\rho'_2
\left(
\left[\frac{(\vec{\rho'}_1-\vec{\rho'}_2)^2}{\rho'^2_1\rho'^2_2}\right]
^{\frac 12 -i\nu}
-\left[\frac 1{\rho'^2_1}\right]^{\frac 12 -i\nu}
-\left[\frac 1{\rho'^2_2}\right]^{\frac 12 -i\nu}
\right) \exp\left(i\vec{\kappa}_\perp \frac{\vec{\rho'}_1+\vec{\rho'}_2}
2\right)
\nonumber \\
&\times&
\left( \exp\left(-\frac{2(\rho'^2_1+\rho'^2_2)}{3<r^2>}\right)
-\delta(\vec{\rho'}_1-\vec{\rho'}_2)\frac 32\pi <r^2>
\exp\left(-\frac{(\vec{\rho'}_1+\vec{\rho'}_2)^2}{6<r^2>}\right)
\right)\ ,
\label{an}
\end{eqnarray}
with $<r^2>=0.8$ fm$^2$. This form of the nucleon wave function
corresponds to
an exponential dependence of the nucleon-pomeron vertex,
$\exp(-B\kappa^2_\perp)$, with $B \simeq 2$ GeV$^{-2}$.
However, numerically the amplitude turns out to be only weakly
sensitive
to the details of the mechanism of the pomeron attachment to the nucleon
target: namely, all the results obtained within
the assumption of a direct
attachment of the BFKL pomeron to the nucleon coincide, within a
few percent accuracy, with the results obtained under the assumption
that the
transition from the BFKL regime to the soft one occurs before the
attachment to the nucleon.
\subsubsection{Pomeron coupling to the quark loop}
Above, following refs. \cite{BFKL,Lip,ryskin,forshaw},
we have determined the $W$-dependence using the factor
$[W^2/(Q^2+\mu^2_V)]^{\omega (\nu )}$: here the variables $Q^2$ and
$\mu^2_V$ are external with respect to the quark loop. In writing the
spectral
representation for the quark loop, the more self-consistent procedure
is to use the internal variables, $M^2_{q\bar q}$ and
$M'^2_{q\bar q}$, instead of the external ones.
In this scheme, one should make
the following replacement in the above formulas:
\begin{eqnarray}
\left (\frac{W^2}{Q^2+\mu^2_V}\right )^{\omega (\nu )} \to
\left (\frac{W^2}{M^2_{q\bar q}+M'^2_{q\bar q}}\right )^{\omega (\nu )}
\label{formfac}
\end{eqnarray}
The BFKL-equation with internal variables for the
dimensionless $W$-dependent factor was considered in \cite{Fadin}.
We perform calculations for both variants, using both external and
internal variables.
\subsubsection{Numerical calculations}
Numerical calculations have been performed using the
Monte-Carlo simulation program VEGAS \cite{veg}.
The integration limits related to $M^2_{q\bar q}$ and $M'^2_{q\bar q}$
depend
on $Q^2$; namely, $M^2_{q\bar q} \le 100\, Q^2$ (an increase of the upper
limit does not change the result). The integration over $\rho$ is
performed up to $\rho \le 5$ fm.
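For orientation, the core of such a Monte Carlo evaluation can be sketched with a plain (non-adaptive) estimator; this is a simplified stand-in without the importance sampling of VEGAS \cite{veg}, checked here on a toy integrand with a known answer.

```python
import math
import random

def mc_integrate(f, dim, n, rng):
    """Plain Monte Carlo over the unit hypercube with a standard-error
    estimate; a non-adaptive stand-in for VEGAS importance sampling."""
    total = total_sq = 0.0
    for _ in range(n):
        value = f([rng.random() for _ in range(dim)])
        total += value
        total_sq += value * value
    mean = total / n
    variance = max(total_sq / n - mean * mean, 0.0)
    return mean, math.sqrt(variance / n)

# Toy check on a known integral: int_0^1 int_0^1 (x + y) dx dy = 1.
rng = random.Random(1)
est, err = mc_integrate(lambda p: p[0] + p[1], 2, 200_000, rng)
```

The reported error falls like $1/\sqrt{n}$; VEGAS improves the constant by adapting the sampling density to the peaks of the integrand, which matters for the strongly peaked wave functions used here.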
\section{Results}
To study the onset of the hard region dominance in diffractive processes
initiated by the photon with the increase of $Q^2$,
we consider in parallel the following reactions
\begin{eqnarray}
\gamma_L^*(Q^2)p\to\gamma_L^*(Q^2) p ,
\label{gamma1}
\end{eqnarray}
\begin{eqnarray}
\gamma_T^*(Q^2)p\to\gamma_T^*(Q^2) p
\label{gamma2}
\end{eqnarray}
and
\begin{eqnarray}
\gamma_L^*(Q^2)p\to\rho_L^0 p ,
\label{rho1}
\end{eqnarray}
\begin{eqnarray}
\gamma_T^*(Q^2)p\to\rho_T^0 p .
\label{rho2}
\end{eqnarray}
The main ingredients in the calculation of the amplitudes of these
reactions
are the hard quark block, the pomeron amplitude, and the attachment
of the pomeron to the nucleon target.
The processes (\ref{gamma1}), (\ref{gamma2})
can be reliably described
at large $Q^2$, because the photon wave function is known
from the analysis of ref. \cite{AMN_a}. The corresponding vertex
$G_{\gamma}(M^2_{q\bar q})$ is shown in Fig. 3a. Notice that the
reliability of
the calculation of the quark loop diagram
$\gamma^*(Q^2) \to q\bar q \to \gamma^*(Q^2)$
is the main motivation for studying
the processes (\ref{gamma1}) and
(\ref{gamma2}). The calculated cross section
of the reaction (\ref{gamma1}) within the
amplitude determined by Eq. (49)
is shown in Fig. 4 by solid lines.
The amplitude $A_L$ determines $d\sigma
/d\kappa_{\perp}^2
(\gamma_L^*(Q^2)p\to\gamma_L^*(Q^2) p) $ (Figs. 4b,c,d): integrating the
$\kappa_{\perp}^2$-distribution
over the region
$\kappa_{\perp \; min}^2 \leq \kappa_{\perp}^2 \leq 1$ GeV$^2$
yields $\sigma (\gamma_L^*(Q^2)p\to\gamma_L^*(Q^2) p)$
(the cut-off parameter $\kappa_{\perp \; min}^2=0.05$ GeV$^2$
is introduced to avoid the divergence of the BFKL amplitude at
$\kappa_{\perp }^2=0$).
To illustrate the role of the low-$M_{q\bar q}$ region in the
realistic photon vertex $G_{\gamma}(M^2_{q\bar q})$, we also perform
calculations of the cross section of $\gamma_L^*(Q^2)p\to\gamma_L^*(Q^2) p$
with $G_{\gamma}(M^2_{q\bar q})=1$, see Fig. 5.
Results for the reaction (\ref{gamma2}) are shown in Fig. 6.
An essential ingredient of the amplitude of the reaction is the
description of the pomeron-exchange block.
A realistic picture of the pomeron block is the following:
small average transverse separations in the gluon ladder selected by the
quark loop increase as a result of the $t$-channel evolution, and the hard
pomeron transforms into a soft one along the gluon ladder.
This feature is taken into account by using
the pomeron-proton coupling as given in Eq. (49).
The corresponding results are shown in Figs. 4-8.
Nevertheless, for the sake of illustration we also performed
calculations for another variant, with a
direct attachment of the BFKL pomeron to the target nucleon,
see Eq. (\ref{an}). The results for the $W$-dependence,
after renormalizing the coupling constant $C=g^2\tilde g$, are
practically the same for both variants, so we do not
present separately the results obtained using Eq. (\ref{an}).
We also study different possibilities of choosing the scale of the
$W^2$-dependence of the BFKL amplitude and consider the two
possibilities:
(1) $W^2/(Q^2+\mu^2_V)$ and
(2) $W^2/(M^2_{q\bar q}+M'^2_{q\bar q})$, see
Eq. (\ref{formfac}).
Calculations with the variant (2) are shown in
Fig. 7. One observes a sizeable sensitivity of the
calculated cross sections to the choice of the $W^2$-scale.
The reactions of the $\rho^0$-meson diffractive production
are extensively studied both experimentally and theoretically
(see, e.g., \cite{forshaw,martin,data} and references therein).
However, the calculation of the quark-loop diagram
$\gamma^*(Q^2) \to q\bar q \to \rho^0$
turns out to be rather ambiguous due to the uncertainty in the
large-$M_{q\bar q}$ behaviour of the $\rho$-meson vertex
$G_{\rho}(M^2_{q\bar q})$.
Because of that, we analyze several possibilities for the
large-$M_{q\bar q}$ behaviour of the vertex $G_{\rho}(M^2_{q\bar q})$.
The results for the two
variants, namely (1) $ G_{\rho}(M^2_{q\bar q}) \sim M^{-2}_{q\bar q}$ and
(2) $G_{\rho}(M^2_{q\bar q}) \sim \; const $ (see Fig. 3b), are represented
by solid lines in Fig. 8.
In order to check the consistency of our initial assumption that, at
the considered $Q^2$ and $W$, the amplitude is dominated by small
separations of the gluons in the ladder, and thus the BFKL form of the
kernel
should be used, we have also performed calculations
introducing explicitly a $\theta$-function cut
into the integrand of the BFKL amplitude (49): this $\theta$-function
measures the actual contribution of the hard region of separations smaller
than 0.2 fm:
\begin{eqnarray}
d^2\rho \to d^2\rho\; \theta (0.2\;fm- \rho).
\label{0.2fm}
\end{eqnarray}
The cross sections constrained by (\ref{0.2fm}) are shown
in Figs. 4--8 by dashed lines:
one can see that for all reactions
the selection of small distances, if any, occurs very slowly with the
increase of $Q^2$. Only at $ Q^2>100 $ GeV$^2 $ is about 80\% of the
cross section
actually gained in the hard region. Thus we conclude that
the dominance of the hard region cannot be expected earlier than at
$ Q^2>50 - 100 $ GeV$^2 $.
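The slowness of this onset can be mimicked with a one-line toy model: for an assumed transverse profile $F(\rho)\propto\rho\,e^{-\rho^2/\rho_0^2}$ (a hypothetical Gaussian measure, not the computed amplitude), the fraction of the $\rho$-integral gained below the 0.2 fm cut of (\ref{0.2fm}) has a closed form and approaches unity only when the typical separation $\rho_0$ shrinks well below the cut.

```python
import math

def hard_fraction(rho0, cut=0.2):
    """Fraction of int_0^inf d(rho) rho exp(-rho^2/rho0^2) accumulated
    below rho < cut (fm); closed form 1 - exp(-cut^2/rho0^2)."""
    return 1.0 - math.exp(-(cut / rho0) ** 2)

# Only when the typical separation rho0 drops well below the 0.2 fm cut
# does the "hard" fraction approach unity:
fractions = {rho0: round(hard_fraction(rho0), 3)
             for rho0 in (1.0, 0.5, 0.2, 0.1)}
```

Since $\rho_0$ shrinks only like $1/Q$, reaching an 80\% hard fraction requires $\rho_0$ roughly half the cut, in qualitative accord with the very late onset found above.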
Let us notice that the situation with diffractive production
by the hard photon in the region
of $Q^2\le$ 100 GeV$^2$ turns out
to be quite similar to the situation with the elastic meson form factor
at $Q^2 \le$ 10--20 GeV$^2$:
in the latter case one could expect the dominance of the hard-scattering
mechanism (which is proven to be the dominant mechanism at
asymptotically large $Q^2$) already at $Q^2$ of several GeV$^2$.
Assuming such dominance at $Q^2$ of several GeV$^2$,
one determines the soft wave function of the pion at a low normalization
point
by describing the data on the form factor, and finds for this wave
function a
double-humped form \cite{ZhO}. However, analyzing the content of the
calculated form factor, one finds that the bulk of the contribution actually
comes from the end-point region, where the hard-scattering mechanism is not
applicable and rather the Feynman mechanism works \cite{LSI}. This analysis
shows that the perturbative treatment of the form factor at several GeV$^2$
is not consistent (or, at least, the perturbative mechanism cannot give the
bulk
of the form factor) and that soft physics actually dominates in the
kinematical region $Q^2\le 20$ GeV$^2$.
Likewise, we have found that the assumption of the BFKL form of the pomeron
kernel actually yields, in the region $Q^2\le 100$ GeV$^2$, a cross
section which is gained at the 80\% level in the region
of {\it large} transverse separations in the quark loop.
The latter are equal to the gluon separations at the top of the gluon
ladder,
which thus turns out to be in the soft regime right from the top.
Therefore, similarly to the elastic form factor case, we have to conclude
that the perturbative treatment in this range of $Q^2$ is not consistent and
that the soft pomeron should rather be used.
Considering the region of $Q^2\ge$ 100 GeV$^2$, we have observed that the
dominance of the region of small separations in the quark loop depends on the
subtle details of the calculation procedure: namely, some of the variants of
the calculation discussed here yield the cross sections $\sigma
(W,Q^2)$ and $(\sigma (W,Q^2))_{\rho < 0.2\; fm}$ which differ even at
very large $Q^2$: this means that the region $\rho > 0.2$ fm still
gives a non-vanishing contribution even at asymptotically large $Q^2$.
A more detailed and technical explanation is as follows.
It might happen that the spectral representation of the quark loop is
superconvergent, i.e. the factor $M^2_{q\bar q}$ in the denominator
is not essential for the convergence of the integral. Then the
denominator $(M^2_{q\bar q} +Q^2)^{-1}$ could be safely expanded in
powers of $1/Q^2$, namely, $(M^2_{q\bar q} +Q^2)^{-1} \to 1/Q^{2}$.
In that case one would not observe any selection of the hard region even at
asymptotically large $Q^2$: the $Q^2$ dependence would simply factorize
and the integrals would still be dominated by typical hadronic scales.
In the reactions (\ref{gamma1}) and
(\ref{gamma2}), choosing the scale $W^2/(Q^2+\mu^2_V)$ for the
$W^2$-dependence, one observes the cross sections $\sigma(W,Q^2)$ and
$(\sigma (W,Q^2))_{\rho < 0.2\; fm}$
to be very close to each other at $Q^2 \ge 100$ GeV$^2$ (see Figs.
4 and 6). This means that the spectral representations are not
superconvergent and thus the hard photon actually selects small distances in
the quark loop and, as a result, the top of the pomeron ladder is in the
perturbative regime.
The introduction of the scale factor
$W^2/(M^2_{q\bar q}+M'^2_{q\bar q})$ provides superconvergence of the
spectral
integrals, thus making the cross sections
$\sigma (W,Q^2)$ and $(\sigma (W,Q^2))_{\rho < 0.2\; fm}$
different even at $Q^2 \to \infty$. In other words, small separations are not
selected in this variant of calculation even at asymptotically large $Q^2$.
A similar situation is observed in the diffractive vector meson
production reactions (\ref{rho1}) and (\ref{rho2}).
Namely, for the variant $G_{\rho}(M^2_{q\bar q}) \sim const$ at
$M^2_{q\bar q} \to \infty$ (see Fig. 8) the integrals are not
superconvergent and
thus small separations dominate the amplitude at large $Q^2$. The only
quantitative difference with the photon production case is that the
proximity of cross sections $\sigma (W,Q^2)$ and $(\sigma (W,Q^2))_{\rho <
0.2\; fm}$, with the scale $W^2/(Q^2+\mu^2_V)$, comes later with the
increase of $Q^2$.
For the variant $G_\rho(M^2_{q\bar q}) \sim 1/M^2_{q\bar q}$
the spectral representations are superconvergent and there is no dominance of
the small separations in the quark loop even at asymptotically large $Q^2$;
correspondingly, the contribution of distances $\rho > 0.2$ fm is always
important in this variant.
The low-$M_{q\bar q}$ structure of
$G_\rho (M^2_{q\bar q})$ and $G_\gamma (M^2_{q\bar q})$ is very
important in a realistic treatment of the reactions
$\gamma^*(Q^2)p\to V p$ and $\gamma^*(Q^2)p\to \gamma^*(Q^2) p$,
as is seen by comparing Figs. 4 and 5.
The low-$M_{q\bar q}$ structure is also essential for the different
behaviour
of $d\sigma /d\kappa ^2_{\perp}(W,Q^2,\kappa ^2_{\perp})$ and
$(d\sigma /d\kappa ^2_{\perp}(W,Q^2,
\kappa ^2_{\perp}))_{\rho < 0.2\; fm}$ in the region of small
$\kappa ^2_{\perp}$ at $Q^2 \sim 20 - 50$ GeV$^2$.
Let us now discuss in more detail the role of large and small
$M^2_{q\bar q}$ in the formation of cross sections initiated by
$\gamma^*_L(Q^2)$ and $\gamma^*_T(Q^2)$. The distinction between the
corresponding cross sections is due to the different structure of the
spin-dependent factors, $S^{II}_T$ and $S^{II}_L$, related to the loop
diagrams (see (\ref{ssp2}) and (\ref{sssp2})): for the case
of longitudinal polarization the spin dependent factor is
$S^{II}_L \sim x^2(1-x)^2$, whereas $S^{II}_T \ne 0$ at $x \to 0$ or
$(1-x) \to 0$. Therefore, since $M^2_{q\bar q}=m^2_\perp/(x(1-x))$, the
large masses, i.e. the regions $x \sim 0$ and $(1-x) \sim 0$, become
dominant for the reactions $\gamma^*_T(Q^2) \to \gamma^*_T (Q^2)$ and
$\gamma^*_T(Q^2) \to \rho^0_T $, but there is no such dominance
for the reactions with $\gamma^*_L(Q^2)$. This means that in the
transitions $\gamma^*_L(Q^2) \to \gamma^*_L (Q^2)$ and
$\gamma^*_L(Q^2) \to \rho^0_L $, small $M^2_{q\bar q}$ or large
interquark separations contribute considerably. The realistic photon
wave function also enhances the role of the large interquark distances,
thus resulting in a different behaviour of the cross sections
$d\sigma/d\kappa_\perp^2\left(\gamma^*_L(Q^2)p \to \gamma^*_L(Q^2)p
\right )$ and
$d\sigma/d\kappa_\perp^2\left(\gamma^*_L(Q^2)p \to \gamma^*_L(Q^2)p
\right )_{\rho < 0.2\,{\rm fm}} $ at $\kappa^2_\perp \lesssim 0.1$ GeV$^2$ in
the region $Q^2 \sim 20-50$ GeV$^2$ (compare Fig. 4c and Fig. 5d).
\section{Conclusion}
The performed calculations demonstrate that the selection of small
distances by the virtual photon $\gamma^*(Q^2)$ with the increase of
$Q^2$ has the following features:
First, it strongly depends on the choice of the $W^2$-scale.
Second, it proceeds very slowly: in the quark loops for the transitions
$\gamma^*(Q^2) \to \gamma^*(Q^2) $ and
$\gamma^*(Q^2)\to\rho^0 $ the soft interquark distances $ \rho >0.2$ fm
clearly dominate the amplitude at $Q^2<100$ GeV$^2$.
The situation here is quite similar to that of meson elastic form factors,
where the pQCD regime starts to dominate at $Q^2>50$ GeV$^2$ only
(see the results for the pion and transition form factors in \cite{AMN,AMN_a}
and a general discussion in \cite{LSI}). From this point of view, the results
of our analysis \cite{AMN,AMN_a} and the present paper seem to be quite
consistent with each other.
The late onset of pQCD in the reaction
$\gamma^*(Q^2)p \to Vp $ means that it is the Strong-QCD pomeron that
actually
works at $Q^2 \leq 100$ GeV$^2$, thus exposing an intriguing
situation in vector meson electroproduction processes.
The point is that the $W$-dependence of the photoproduction
reactions ($Q^2=0$)
in the region $W\sim 10-200$ GeV is rather flat, similar to that
in hadronic processes like $\pi p\to \pi p$. However, the reaction
$\gamma^*(Q^2)p \to Vp $ at $Q^2 \sim 10-20$ GeV$^2$ demonstrates
a different $W$-dependence: namely, the cross sections increase
as $W^{2\Delta}$ with $\Delta \approx 0.3$.
There might be an attractive possibility to attribute this growth to the change of
the pomeron regime from the Strong-QCD one in the first case to the BFKL
pomeron in the second one.
However, the results of our analysis show that the BFKL pomeron cannot be
seen in the diffractive production in the region $Q^2\le$ 100 GeV$^2$.
Therefore, a theoretical explanation of the change of the $W$ dependence
lies in a better understanding of the properties of the Strong-QCD pomeron.
One of the possibilities is to explain
this change of the $W$ dependence by a complicated
``heterogeneous'' structure of the Strong-QCD pomeron, when its different
components reveal themselves at $Q^2\sim 0$ and $Q^2\sim 10$
GeV$^2$.
For example, one may suggest the following scenarios for the Strong-QCD
pomeron:
\begin{itemize}
\item
In the reactions with $Q^2 \sim 0$, as well as in hadronic
processes $\pi p \to \pi p$ or $p p \to p p$, the amplitude is
governed by the pomeron with intercept close to unity: $j=1+\Delta$,
with $\Delta \simeq 0.1$ (KTDL-pomeron \cite{KTDL}). In this case,
the KTDL-pomeron contribution must vanish at $Q^2 \sim 10$
GeV$^2$, leaving the leading role to a new pole with a larger intercept,
$\Delta_{new\; pole} \simeq 0.3$ \cite{L}.
\item
In the reactions with $Q^2 \sim 0$, as well as in hadronic
processes $\pi p \to \pi p$ or $p p \to p p$, the amplitude is
determined by multiple primary-pomeron-induced rescatterings in the
direct channel \cite{eikonal,mila}. A slow growth of the
amplitudes at $W\sim 10-200$ GeV, $\Delta_{effective} \simeq 0.1$,
occurs at rather large value of the primary pomeron intercept,
$\Delta_{primary\; pomeron}\simeq 0.3$ \cite{mila}. In this
scenario, in order to obtain the growth of the electroproduction
amplitudes observed in the experiment, $\sim W^{0.6}$,
the multiple rescatterings
should die out with the increase of $Q^2$ in such a way that at moderately
large $Q^2$ only the one-pomeron exchange survives, leading to the amplitude
growth $\sim W^{2\Delta_{primary\,pomeron}}$.
\end{itemize}
However, a detailed consideration of the Strong-QCD pomeron
is beyond the scope of this article and we leave this
intriguing subject to other publications.
\section{Acknowledgments}
We are grateful to G. Korchemsky for helpful comments.
This work was supported in part by RFBR under grant 98-02-17236.
\label{sec:intro}
\vspace{-1.5mm}
Voice interactions between humans and digital assistants integrated in smartphones and home-owned voice
command devices are becoming ubiquitous in our daily lives. This necessitates a built-in wake word
detection system which constantly listens to its environment, expecting a predefined word to
be spotted before turning into a more power-consumptive state to understand users' intention (e.g. \cite{wang2019end}).
Similar to automatic speech recognition (ASR), modern wake word detection systems can be constructed with either HMM/DNN hybrid \cite{panchapagesan2016multi,sun2016compressed,wu2018monophone,wang2020wake}
or pure neural networks \cite{chen2014small,sainath2015convolutional,he2017streaming,shan2018attention,hou2020mining}.
In either of these two categories some types of neural networks are used for acoustic modeling to encode the input features of an audio recording into a high level representation for the decoder to determine whether the wake word
has been detected within some range of frames.
A wake word detection system usually runs on devices, and it needs to be triggered as soon as the wake
word actually appears in a stream of audio. Hence the neural networks are limited to: 1) small memory footprint;
2) small computational cost; and 3) low latency in terms of the number of future frames needed to compute
the score for the current frame. Under these criteria, the family of recurrent neural networks
\cite{hochreiter1997long,cho2014learning} is not suitable because its sequential nature in computation
prevents it from being parallelized across time in the chunk-streaming case even with GPUs. So most of the current systems adopt
convolutional networks. A convolution kernel spans over a small and fixed range of frames, and is
repeatedly applied by sliding across time or frequencies. Although each kernel only captures a local
pattern, the receptive field can be enlarged by stacking several convolution layers together: higher
layers can ``see'' a longer range of frames than lower layers, capturing more global patterns.
Recently the self-attention structure, as a building block of the Transformer networks \cite{vaswani2017attention}, has gained popularity in both the NLP and speech communities for its capability of modeling context dependency for sequence data without recurrent connections \cite{vaswani2017attention,karita2019comparative}. Self-attention replaces recurrence with direct interactions between all the pairs of frames in a layer, making each frame aware of its contexts. The computations are more parallelized, in the sense that the processing of a frame does not depend on the completion of processing other frames in the same layer. \cite{bai2019time} also explored the self-attention in the keyword search (KWS) task. However, the original self-attention requires the entire input sequence to be available before any frames can be processed, and the computational complexity and memory usage are both $O(T^2)$. Time-restricted self-attention \cite{povey2018time} restricts the self-attention to a small context window around each
frame with attention masks. But it still does not have a mechanism of saving the current computed
state for future computations, and thus is not applicable to streaming data. Transformer-XL \cite{dai2019transformer} performs chunk-wise training where the previous chunk is cached as hidden state
for the current chunk to attend to for long-range temporal modeling. So it can be used for streaming tasks. The time and space complexity are both reduced to $O(T)$, and the within-chunk computation across time can be parallelized with GPUs. While there has been recent work \cite{tsunoo2019transformer,tian2020synchronous,zhang2020transformer,lu2020exploring,wu2020streaming} with similar ideas showing that such streaming Transformers achieve competitive performance compared with latency-controlled BiLSTMs \cite{zhang2016highway} or non-streaming Transformers for ASR, it remains unclear how streaming Transformers work for a shorter sequence modeling task like wake word detection.
In this paper we investigate various aspects of the streaming Transformers with its application to wake word detection for the recently proposed alignment-free LF-MMI system \cite{wang2020wake}. This paper has the following contributions: 1) we explore how the gradient stopping point during back-propagation affects the performance; 2) we show how different positional embedding methods affect the performance; and 3) we compare the performance of either obtaining the hidden state coming from the current or previous layer. In addition, we propose an efficient way to compute the relative positional embedding in streaming Transformers. To the best of our knowledge, this is the first time that streaming Transformers are applied to this task.
\vspace{-1.5mm}
\section{The Alignment-Free LF-MMI System}
\vspace{-1mm}
We build our system on top of the state-of-the-art system described in \cite{wang2020wake}. We briefly explain that system below to provide some background information. Interested readers can refer to \cite{wang2020wake} for details.
This is a hybrid HMM/DNN system with alignment-free LF-MMI loss \cite{hadian2018end,povey2016purely}, where the positive wake word (denoted as \emph{wake word}) and the negative
non-silence speech (denoted as \emph{freetext}) are modeled with a single left-to-right 4-state HMM respectively,
regardless of how many actual phonemes there are. In addition, a 1-state HMM is dedicated
to model \emph{optional} silence \cite{chen2015pronunciation} (denoted as \emph{SIL}). The motivation behind this
is that we believe the proposed design choice has sufficient modeling power for this task.
In LF-MMI loss, the numerator represents the likelihood of the input feature given the correct output state
sequence, while the denominator represents the likelihood given incorrect state sequences. So the model is trained
to maximize the posterior of the correct sequence among other competing sequences. ``Alignment-free'' here refers to the fact that the unfolded
FSTs used as the numerator graphs are directly derived from the truth labels (``positive'' or ``negative'' in our task). The
denominator graph is specified manually, containing one path corresponding to the positive recordings and two paths
corresponding to the negatives. Since the alignment-free LF-MMI system outperforms the cross-entropy HMM-based and other pure neural systems \cite{wang2020wake}, we base our work in this paper on this specific system.
The work in \cite{wang2020wake} adopts dilated and strided 1D convolutional networks (or
``TDNN''
\cite{peddinti2015time,povey2018semi}) for acoustic modeling, which is straightforward as the
computation of
convolution is both streamable by its nature and highly parallelizable. In the next section, we will
detail our approach to streaming Transformers for modeling the acoustics in our task.
\vspace{-1.5mm}
\section{Streaming Transformers}
\vspace{-1mm}
We recapitulate the computation of a vanilla single-headed Transformer here.\footnote{~A multi-headed extension is straightforward and irrelevant to our discussion here.} Assume the input to a self-attention layer is
${\mathbf X} = [{\mathbf x}_1,\ldots, {\mathbf x}_T] \in \mathbb{R}^{d_x \times T}$ where $ {\mathbf x}_j \in \mathbb{R}^{d_x}$.
The tensors of query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$ are obtained via
\vspace{-1mm}
\begin{equation}
\mathbf{Q}={\mathbf W}_Q {\mathbf X}, ~~ \mathbf{K}={\mathbf W}_K {\mathbf X}, ~~ \mathbf{V}={\mathbf W}_V {\mathbf X} \quad \in \mathbb{R}^{d_h \times T}
\end{equation}
where the weight matrices ${\mathbf W}_Q,{\mathbf W}_K,{\mathbf W}_V \in \mathbb{R}^{d_h \times d_x}$. The output at the $i$-th time step is computed as
\vspace{-1mm}
\begin{equation}
\label{eq:compute_val}
{\mathbf h}_i=\mathbf{V} {\mathbf a}_i \in \mathbb{R}^{d_h},\quad {\mathbf a}_i=\mathrm{softmax}\left(\frac{[\mathbf{Q}^\top \mathbf{K}]_i}{\sqrt{d_h}}\right) \in \mathbb{R}^T
\end{equation}
where $[\cdot]_i$ means taking the $i$-th row of a matrix. All the operations mentioned above are
homogeneous across time, thus can be parallelized on GPUs. Note that here $\mathbf{Q},\mathbf{K},\mathbf{V}$
are computed from the same input ${\mathbf X}$, which represents the entire input sequence.
Such dependency of each output frame on the entire input sequence makes the model unable to process
streaming data where in each step only a limited number of input frames can be processed. Also,
the self-attention is conducted between every pair of frames within the whole sequence, making the
memory usage and computation cost both $O(T^2)$. Transformer-XL-like architectures address these concerns by performing chunk-wise processing of the sequence. The whole input sequence is segmented into several equal-length chunks (except the last one, whose length can be shorter). As shown in Fig. \ref{fig:transformer_dependency_a}, the hidden state from the previous chunk is cached to extend the keys and values of the current chunk, providing
extra history to be attended to. In this case, $\tilde{\mathbf{K}}$ (or $\tilde{\mathbf{V}}$) is longer in length than $\mathbf{Q}$ due to the prepended history. To alleviate the gradient explosion/vanishing issue introduced in this kind of recurrent structure, gradient is set to not go beyond the cached state, i.e.,
\vspace{-1mm}
\begin{equation}
\label{eq:state_extended}
\tilde{\mathbf{K}}_c=[\mathrm{SG}(\mathbf{K}_{c-1});\mathbf{K}_c], \quad \tilde{\mathbf{V}}_c=[\mathrm{SG}(\mathbf{V}_{c-1});\mathbf{V}_c]
\end{equation}
where $c$ is the chunk index, $[\cdot;\cdot]$ represents concatenation along the time dimension, and
$\mathrm{SG}(\cdot)$ is the stop gradient operation.\footnote{~For example, this would be \texttt{Tensor.detach()} in PyTorch.} The memory usage and computation cost are both reduced to $O(T)$ given the chunk size is constant.
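To make the chunk-wise computation concrete, the following NumPy sketch (our own toy code, not the implementation used in the experiments) implements Eqs. (1)--(3) for a single head; the stop-gradient in Eq. (3) is a training-time detail that has no effect in this forward-only version:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def streaming_attention(chunks, W_Q, W_K, W_V):
    """Single-headed self-attention over a stream of chunks.

    The keys/values of the previous chunk are cached and prepended
    (Eq. (3)); each output frame attends to the current chunk plus the
    cached history, so the cost per chunk is constant in the total
    sequence length."""
    d_h = W_Q.shape[0]
    K_prev = V_prev = None
    outputs = []
    for X in chunks:                                  # X: (d_x, T_c)
        Q, K, V = W_Q @ X, W_K @ X, W_V @ X           # Eq. (1)
        K_ext = K if K_prev is None else np.hstack([K_prev, K])
        V_ext = V if V_prev is None else np.hstack([V_prev, V])
        A = softmax(Q.T @ K_ext / np.sqrt(d_h), axis=-1)
        outputs.append(V_ext @ A.T)                   # Eq. (2), (d_h, T_c)
        K_prev, V_prev = K, V                         # cache for next chunk
    return np.hstack(outputs)
```

Because only the previous chunk's keys and values are kept, the per-chunk cost does not grow with the sequence, which is exactly the $O(T)$ behaviour described above.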
\begin{figure}
\centering
\subfloat[\centering Dependency on the previous layer of the previous chunk.]{{\includegraphics[width=0.2\textwidth]{figs/transformer-xl.pdf} }\label{fig:transformer_dependency_a}}
\qquad\qquad
\subfloat[\centering Dependency on the same layer of the previous chunk.]{{\includegraphics[width=0.2\textwidth]{figs/transformer-same-layer.pdf} }\label{fig:transformer_dependency_b}}
\caption{Two different types of node dependency when computing self-attention in streaming Transformers. The figures use 3-layer networks with 2 chunks (delimited by the thick vertical line in each sub-figure) of size 2 as examples. The grey arcs illustrate the node dependency within the current chunk, while the green arcs show the dependency from the current chunk to the previous one.}
\label{fig:transformer_dependency}
\vspace{-4mm}
\end{figure}
\vspace{-2mm}
\subsection{Look-ahead to the Future and Gradient Stop in History}
\vspace{-1mm}
\label{sec:look-ahead-and-caching}
Our preliminary experiments show that only having history to the left is not sufficient for good
performance in our task. So we also allow a chunk to ``look-ahead'' to the next chunk to get future context
when making predictions from the current chunk. For the right context, the gradient in back-propagation does
not just stop at $\mathbf{K}_{c+1}$ and $\mathbf{V}_{c+1}$, but rather goes all the way down to the input within the chunk
$c+1$. On the other hand, we can optionally treat the left context (i.e. the history state) the same way as
well. Intuitively, allowing the weights to receive more information while being updated should always be beneficial, as long as their gradient flow is constrained within a small range of time steps. This can be achieved by splicing the left chunk together with the current chunk and then only selecting the output of the current chunk for loss evaluation, at the cost of having one more forward computation for each chunk by not caching the state. We will show a performance comparison with and without such state-caching in the experiments.
\vspace{-2mm}
\subsection{Dependency on the Previous Chunk from the Same Layer}
\vspace{-1mm}
\label{sec:same-layer}
Note that when there are multiple stacked self-attention layers, the output of the $c$-th chunk of the $l$-th layer actually depends on the output of the $(c-1)$-th chunk of the $(l-1)$-th layer. So the receptive field of each chunk grows linearly with the number of self-attention layers, and the current chunk does not have access to previous chunks in the same layer (Fig. \ref{fig:transformer_dependency_a}). This may limit the model's temporal modeling capability as not all parts of the network within the receptive field are utilized. Hence, we take the output from the previous chunk in the same layer, and prepend it to the key and value. Formally, let
${\mathbf H}=[{\mathbf h}_1,\ldots,{\mathbf h}_T] \in \mathbb{R}^{d_h \times T}$ where ${\mathbf h}_i$ is defined in Eq. (\ref{eq:compute_val}). Then Eq. (\ref{eq:state_extended}) becomes:
\vspace{-1mm}
\begin{equation}
\label{eq:state_extended_samelayer}
\tilde{\mathbf{K}}_c^l=[\mathrm{SG}({\mathbf H}_{c-1}^l);\mathbf{K}_c^l],\quad \tilde{\mathbf{V}}_c^l=[\mathrm{SG}({\mathbf H}_{c-1}^l);\mathbf{V}_c^l]
\end{equation}
where we use superscript $l$ to emphasize tensors from the same layer. Fig. \ref{fig:transformer_dependency_b} illustrates nodes dependency in such computation flows.
\vspace{-3mm}
\subsection{Positional Embedding}
\vspace{-1mm}
The self-attention in Transformers is invariant to sequence reordering, i.e., any permutation of the input sequence will yield exactly the same output for each frame. So it is crucial to encode the positions.
The original Transformer \cite{vaswani2017attention} employs deterministic sinusoidal functions to encode absolute positions. In our chunk-streaming setting, we can also enable this by adding an offset to the frame positions within each chunk. However, since our goal for wake word detection is just to spot the word rather than to recognize the whole utterance, a relative positional encoding may be more appropriate. One type of relative positional embedding, as shown in \cite{shaw2018self}, encodes the relative positions from a query frame to key/value frames in the self-attention, and pairs of frames having the same position difference share the same trainable embedding vector. The embedding vectors $\mathbf{E} \in \mathbb{R}^{d_h\times T}$ are then added to the key (optionally to the value as well) of each self-attention layer. So Eq. (\ref{eq:compute_val}) is modified as:
\vspace{-1mm}
\begin{equation}
\label{eq:compute_val_pos_embed}
{\mathbf h}_i=\left(\mathbf{V}+\mathbf{E}\right) {\mathbf a}_i \in \mathbb{R}^{d_h}, ~~ {\mathbf a}_i=\mathrm{softmax}\left(\frac{[\mathbf{Q}^\top \left(\mathbf{K}+\mathbf{E}\right)]_i}{\sqrt{d_h}}\right) \in \mathbb{R}^T
\end{equation}
As suggested in \cite{shaw2018self}, the relative positional embedding is fed into every self-attention layer and jointly trained with other model parameters.
Different from the case in \cite{shaw2018self} where the query and key (or value) have the same sequence length, there is extra hidden state prepended to the left of the key and the value in the current chunk, making the resulting key and value longer than the query. By leveraging the special structure of the layout of relative positions between the query and key, we design a series of simple but efficient tensor operations to compute self-attentions with positional embedding. Next we show how the dot product between the query $\mathbf{Q}$ and the positional embedding $\mathbf{E}$ for the key $\mathbf{K}$ can be obtained\footnote{~We drop the batch and heads dimensions for clarity. So all tensors become 2D matrices in our description.}. The procedure when adding the embedding to the value $\mathbf{V}$ is similar.
Let's denote the length of the query and the extended key as $l_q$ and $l_k$, respectively, where $l_q < l_k$. There are $(2l_k-1)$ possible relative positions from the query to the key ranging in $[-l_k+1,l_k-1]$. Given an embedding matrix $\mathbf{E} \in \mathbb{R}^{d_h \times (2l_k-1)}$, we first compute its dot product with the query $\mathbf{Q}$, resulting in a matrix ${\mathbf M} = \mathbf{Q}^\top \mathbf{E} \in \mathbb{R}^{l_q \times (2l_k-1)}$. Then for the $i$-th row in ${\mathbf M}$, we select $l_k$ consecutive elements corresponding to $l_k$ different relative positions from the $i$-th frame in the query to each frame in the key, and rearrange them into ${\mathbf M}' \in \mathbb{R}^{l_q \times l_k}$. This process is illustrated in Fig. \ref{fig:relative_to_absolute}. In the $0$-th row, we keep those corresponding to the relative positions in the range $[-l_k+l_q,l_q-1]$; in the $i$-th row, the range is left shifted by 1 from the one in the $(i-1)$-th row; finally in the $(l_q-1)$-th row, the range would be $[-l_k+1,0]$. This process can be conveniently implemented by reusing most of the memory configuration from ${\mathbf M}$ for ${\mathbf M}'$ without copying the underlying storage of ${\mathbf M}$, and then doing the following steps: 1) point ${\mathbf M}'$ to the position of the first yellow cell in ${\mathbf M}$; 2) modify the row stride of ${\mathbf M}'$ from $(2l_k-1)$ to $(2l_k-2)$; and 3) modify the number of columns of ${\mathbf M}'$ from $(2l_k-1)$ to $l_k$.
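For concreteness, the selection-and-rearrangement above can be reproduced with NumPy's stride tricks (a toy sketch under our own naming; a production implementation would operate on batched GPU tensors):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rel_shift(M):
    """Rearrange M of shape (l_q, 2*l_k - 1) into M' of shape (l_q, l_k)
    without copying M's storage: row i keeps the l_k consecutive entries
    starting at column (l_q - 1 - i), i.e. each row's window is shifted
    one column to the left of the window in the row above."""
    l_q, cols = M.shape
    l_k = (cols + 1) // 2
    s = M.strides[1]                      # element stride in bytes
    flat = M.reshape(-1)[l_q - 1:]        # 1) point at the first kept cell
    return as_strided(flat,
                      shape=(l_q, l_k),                  # 3) l_k columns
                      strides=((2 * l_k - 2) * s, s))    # 2) new row stride

def rel_shift_naive(M):
    """Reference implementation by explicit slicing, for verification."""
    l_q, cols = M.shape
    l_k = (cols + 1) // 2
    return np.stack([M[i, l_q - 1 - i: l_q - 1 - i + l_k]
                     for i in range(l_q)])
```

The strided view only touches valid cells because each row's window start $l_q-1-i$ stays non-negative whenever $l_q \le l_k$.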
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/relative_to_absolute.pdf}
\caption{The process of selecting relevant cells from the matrix ${\mathbf M} \in \mathbb{R}^{l_q \times (2l_k-1)}$ (left) and rearranging them into ${\mathbf M}' \in \mathbb{R}^{l_q \times l_k}$ (right). The relevant cells are in yellow, and others are unselected. Note that the position of yellow block in one row of ${\mathbf M}$ is left shifted by 1 cell from the yellow block in the row above.}
\label{fig:relative_to_absolute}
\vspace{-4mm}
\end{figure}
\vspace{-5.5mm}
\section{Experiments}
\vspace{-2.5mm}
\subsection{The Dataset}
\vspace{-1.5mm}
We use the Mobvoi (SLR87) dataset\footnote{~\url{https://www.openslr.org/87}} \cite{hou2019region} including two wake words: ``Hi Xiaowen'' and ``Nihao Wenwen''. It contains 144 hrs training data with 43,625 out of 174,592 positive examples, and 74 hrs test data with 21,282 out of 73,459 positive examples. We do not report results on the other datasets mentioned in \cite{wang2020wake}, because both the numbers reported there and in our own experiments are too good (FRR $<0.1$\%) to demonstrate any significant difference.
\vspace{-3mm}
\subsection{Experimental Settings}
\vspace{-1.5mm}
All the experiments in this paper are conducted in \textsc{Espresso}\xspace, a PyTorch-based end-to-end ASR toolkit \cite{wang2019espresso}, using \textsc{PyChain}\xspace, a fully parallelized PyTorch implementation of LF-MMI \cite{shao2020pychain}.
We follow exactly the same data preparation and preprocessing pipeline as those in \cite{wang2020wake}, including HMM and decoding graph topologies, feature extraction, negative recording sub-segmentation, and data augmentations. During evaluation, when one of the two wake words is considered, the other one is treated as negative. The operating points are obtained by varying the positive path cost while fixing the negative path cost at 0 in the decoding graph. It is worth mentioning that all the results reported here are from an offline decoding procedure, as currently Kaldi \cite{povey2011kaldi} does not support online decoding with PyTorch-trained neural acoustic models. However, we believe that the offline decoding results would not deviate significantly from the online ones.
The baseline system is a 5-layer dilated 1D convolution network with 48 filters and the kernel size of 5 for each layer, leading to 30 frames for both left and right context (25\% less than that in \cite{wang2020wake}) and only 58k parameters (60\% less than that in \cite{wang2020wake}). For the streaming Transformer models, the first two layers are 1D convolution. They are then followed by 3 self-attention layers with the embedding dimension 32 and the number of heads 4, resulting in 48k parameters without any relative embedding\footnote{~See Table \ref{tab:transformer_cache} for model sizes with different relative embedding settings.}. To make sure that the outputs can ``see'' approximately the same amount of context as those in the baseline, the chunk size is set to 27, so that in the no state-caching setting the right-most frame in a chunk depends on 27 input frames (still smaller than 30) as its right context\footnote{~Our experiments (not shown here) also suggest 27 is optimal in this setting: a smaller chunk hurts the performance, and a larger one does not bring significant improvement but incurs more latency.}; in the state-caching case, the receptive field covers one more chunk (or 27 more frames) on the left, as it increases linearly with the number of self-attention layers.
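As a quick sanity check on the context figures quoted above, the one-sided context of a stack of symmetric dilated 1D convolutions can be computed as follows. The baseline's per-layer dilation schedule is not stated here, so the $1,2,3,4,5$ schedule used in the check is only an assumption that happens to reproduce the 30-frame context:

```python
def one_sided_context(kernel_sizes, dilations):
    """Frames of one-sided context of stacked dilated 1D convolutions:
    a symmetric kernel of size k with dilation d sees (k - 1) // 2 * d
    extra frames on each side, and the contributions add up layer by
    layer."""
    return sum((k - 1) // 2 * d for k, d in zip(kernel_sizes, dilations))
```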
All the models are optimized using Adam with an initial learning rate $10^{-3}$, and then
halved if the validation loss at the end of an epoch does not improve over the previous epoch. The training process stops if the number of epochs exceeds 15, or the learning rate is less than $10^{-5}$. We found that learning rate warm-up is not necessary to train our Transformer-based systems, probably due to the relatively simple supervisions in our task.
\vspace{-2.5mm}
\subsection{Streaming Transformers with State-caching}
\vspace{-1mm}
We first evaluate our streaming Transformer models with state-caching. The results are reported in Table \ref{tab:transformer_cache}, as false rejection rate (FRR) at 0.5 false alarms per hour (FAH). If we only rely on the current chunk and the cached state from the previous chunk without taking any look-ahead to the future chunk, the detection results (see row 2 in Table \ref{tab:transformer_cache}) are much worse than the baseline. This is expected, as the symmetric property of convolution kernels allows the network to take future frames into consideration, and it validates that look-ahead to future frames is important in the chunk-wise training of Transformers. Then adding the absolute positional embedding does not seem to improve the performance significantly. One possible explanation is that the goal of wake word detection is not to transcribe the whole recording but just to spot the word of interest, so the absolute encoding of positions does not have much effective impact. On the contrary, when we add the relative positional embedding to the key of the self-attention layers instead, there is a slight improvement over adding the absolute embedding, which supports our previous hypothesis that the relative embedding makes more sense in such a task. When the embedding is also added to the value, FRR reaches 0.7\% and 0.5\% at FAH=0.5 for the two wake words respectively (i.e., 25\% relative improvement over the baseline on average), showing that the embedding is not only useful when calculating the attention weights, but also beneficial when encoding the positions into the layer's hidden values.
\begin{table}
\caption{Results of streaming Transformers with state-caching.}
\vspace{-4mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{ l c c c}
\toprule
& \multirow{2}{*}{\#Params} & \multicolumn{2}{c}{FRR(\%) at FAH=0.5} \\
\cmidrule(lr){3-4}
& & Hi Xiaowen & Nihao Wenwen\\
1D Conv. (baseline)\tablefootnote{~We do not compare with other systems, because to our best knowledge this baseline system is the state-of-the-art reported on the same dataset at the time of submission.} & 58k & 0.8 & 0.8 \\
Transformer (w/o look-ahead) & 48k & 3.5 & 4.7 \\
\quad+look-ahead to next chunk & 48k & 1.3 & 1.2 \\
\quad\quad+abs. emb. & 48k & 1.2 & 1.2 \\
\quad\quad+rel. emb. to key & 52k & 1.0 & 1.1 \\
\quad\quad\quad+rel. emb. to value & 57k & \textbf{0.7} & \textbf{0.5} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:transformer_cache}
\vspace{-8mm}
\end{table}
\vspace{-2.5mm}
\subsection{Streaming Transformers without State-caching}
\vspace{-1mm}
Next we would like to explore whether allowing the gradient to be back-propagated into the history state helps train a better model. As we mentioned in Sec. \ref{sec:look-ahead-and-caching}, this can be done by concatenating the current chunk with the previous chunk of input, instead of caching the internal state of the previous chunk. Table \ref{tab:transformer_nocache} shows several results. By looking at Table \ref{tab:transformer_nocache} itself, we observe a similar trend as that in the state-caching model from the previous section: relative positional embedding is advantageous over the absolute sinusoidal positional embedding, and adding the embedding to both key and value is again the best. Furthermore, by comparing the rows in Table \ref{tab:transformer_nocache} with their corresponding entries in Table \ref{tab:transformer_cache}, we observe that, except for the case in the last row, regardless of the choice of positional embedding and how it is applied, the models without state-caching outperform the models with state-caching. It indicates the benefit of updating the model parameters with more gradient information back-propagated from the current chunk into the previous chunk. However, in the case where relative positional embedding is also added to the value, the gap seems to diminish, suggesting that if the positional embedding is utilized in a better way, there is no need to recompute part of the cached state in order to reach the best performance.
\begin{table}
\caption{Results of streaming Transformers without state-caching.}
\vspace{-4mm}
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{ l c c c}
\toprule
& \multirow{2}{*}{\#Params} & \multicolumn{2}{c}{FRR(\%) at FAH=0.5} \\
\cmidrule(lr){3-4}
& & Hi Xiaowen & Nihao Wenwen \\
1D Conv. (baseline) & 58k & 0.8 & 0.8 \\
Transformer (w/ look-ahead) & 48k & 1.0 & 1.1 \\
\quad+abs. emb. & 48k & 0.8 & 0.8 \\
\quad+rel. emb. to key & 52k & 0.6 & 0.7 \\
\quad\quad+rel. emb. to value & 57k & \textbf{0.6} & \textbf{0.6} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\label{tab:transformer_nocache}
\vspace{-8mm}
\end{table}
We provide DET curves of the baseline convolution network and the two proposed streaming Transformers in Fig. \ref{fig:det}, for a more comprehensive demonstration of their performance difference.
\vspace{-2mm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.37\textwidth]{figs/det_trans.pdf}
\caption{DET curves for the baseline 1D convolution network and our two proposed streaming Transformers.}
\label{fig:det}
\end{figure}
\vspace{-6.5mm}
\subsection{Streaming Transformers with Same-layer Dependency}
\vspace{-1mm}
We now explore the architectural variant introduced in Sec. \ref{sec:same-layer}. Note that if the relative positional embedding is added to the value $\mathbf{V}_{c-1}^l$ as shown in Eq. (\ref{eq:compute_val_pos_embed}), ${\mathbf H}_{c-1}^l$ will no longer be in the same semantic space as $\mathbf{V}_c^l$. So it is problematic to concatenate ${\mathbf H}_{c-1}^l$ and $\mathbf{V}_c^l$ together in Eq. (\ref{eq:state_extended_samelayer}). A similar issue arises if the parameters ${\mathbf W}_K$ and ${\mathbf W}_V$ from the same layer are not tied, because ${\mathbf H}_{c-1}^l$ is going to be concatenated to both $\mathbf{K}_c^l$ and $\mathbf{V}_c^l$. Our solution is to only add the positional embedding to $\mathbf{K}_c^l$, and also tie ${\mathbf W}_K$ and ${\mathbf W}_V$ together. However, it only achieves FRR=1.3\% at FAH=0.5. When absolute embedding is used, FRR=1.1\% at the same FAH. This contradicts the observations in \cite{lu2020exploring,wu2020streaming} where same-layer dependency was found to be more advantageous for ASR, which was attributed to the fact that the receptive field is maximized at every layer\footnote{~They did not mention the type of positional embedding being used.}. A better way of incorporating relative positional information for this case is left as future work.
\vspace{-3mm}
\section{Conclusions}
\vspace{-2mm}
We propose using streaming Transformers for wake word detection with the latest alignment-free LF-MMI system. We explore how look-ahead of the future chunk, and different gradient stopping, layer dependency, and positional embedding strategies could affect the system performance. Along the way we also propose a series of simple tensor operations to efficiently compute the self-attention in the streaming setting when relative positional embedding is involved. Experiments on Mobvoi (SLR87) show the advantage of the proposed streaming Transformers over the 1D convolution baseline.
\bibliographystyle{IEEEbib}
\fontsize{8.8}{10.3}\selectfont
Let $\mathbb{F}_{q^m}$ be the finite field with $q^m$ elements and $\mathbb{F}_{q^m}^{*}=\mathbb{F}_{q^m}\backslash\{0\}$, where $q$ is a power of a prime $p$ and $m$ is a positive integer. An $[n, k, d]$ linear code $\mathcal{C}$ over $\fq$ is a $k$-dimensional subspace of $\fq^{n}$ with minimum Hamming distance $d$. An $[n,k,d]$ linear code $\mathcal{C}$ over $\fq$ is said to be distance-optimal if no $[n,k,d+1]$ code exists (i.e., this code has the largest minimum distance for given length $n$ and dimension $k$), and it is called almost distance-optimal if there exists an $[n,k,d+1]$ distance-optimal code. An $[n,k,d]$ linear code $\mathcal{C}$ is called optimal if its parameters $n$, $k$ and $d$ meet a bound on linear codes with equality, and almost optimal if its parameters $n$, $k$ and $d+1$ meet a bound on linear codes with equality \cite{HWPV}. Optimal linear codes are important in both theory and practice; the reader is referred to \cite{HDWZ,JYHLL,WZQY} for recent results. The Griesmer bound \cite{JHG,GSJS} for an $[n,k,d]$ linear code $\C$ over $\fq$ is given by
\[ n\geq g(k,d):=\sum_{i=0}^{k-1} \lceil \frac{d}{q^i}\rceil,\]
where $\lceil \cdot \rceil$ denotes the ceiling function. An $[n,k,d]$ linear code $\mathcal{C}$ is called a Griesmer code if its parameters $n$, $k$ and $d$ achieve the Griesmer bound and called a near Griesmer code if $n-1$, $k$ and $d$ achieve the Griesmer bound. Griesmer codes have been an interesting topic of study for many years due to not only their optimality but also their geometric applications \cite{DING1,DING2}.
The dual code of an $[n,k,d]$ linear code $\C$ over $\fq$ is defined by
$ \C^{\bot}=\{x\in \fq^{n}\,\,|\,\, x \cdot y = 0 \,\, {\rm for \,\, all} \,\, y\in \C \},$
where $x \cdot y$ denotes the Euclidean inner product of $x$ and $y$.
The code $\C^{\bot}$ is an $[n,n-k]$ linear code over $\fq$. A linear code $\C$ is called projective if its dual code has minimum distance at least $3$, and it is called self-orthogonal if $\C\subseteq \C^{\bot}$.
Let $A_{i}$ denote the number of codewords with Hamming weight $i$ in a code $\mathcal{C}$ of length $n$. The weight enumerator of $\mathcal{C}$ is defined by
$1+A_{1}z+A_{2}z^{2}+\cdots +A_{n}z^{n}$. The sequence $(1, A_{1}, A_{2}, \cdots ,A_{n})$ is called the weight distribution of $\mathcal{C}$.
A code is said to be a $t$-weight code if the number of nonzero $A_{i}$ in the sequence $(A_{1}, A_{2}, \cdots ,A_{n})$ is equal to $t$. Linear codes with few weights have applications in secret sharing schemes \cite{ARJD,CCDY}, authentication codes \cite{DCHT,DW}, association schemes \cite{CAGJ}, strongly regular graphs and some other fields.
In 2007, Ding and Niederreiter \cite{DN} introduced a nice and generic way to construct linear codes via trace functions. Let $D \subset \fqm$ and define
\begin{equation} \label{CD}
\C_D=\{c_{a}=(\Tr_{q}^{q^{m}}(ax))_{x\in D}: a\in \fqm\},
\end{equation}
where $\Tr_{q}^{q^{m}}(\cdot)$ is the trace function from $\mathbb{F}_{q^m}$ to $\fq$.
Then $\C_{D}$ is a linear code of length $n:=|D|$ over $\mathbb{F}_{q}$. The set $D$ is called the defining set of $\mathcal{C}_D$. Later, Ding and Niederreiter's construction was extended to the bivariate form, namely, linear codes of the form
\begin{equation} \label{CDbi}
\mathcal{C}_{D}=\{c_{a,b}=(\Tr_{q}^{q^{m}}(ax)+\Tr_{q}^{q^{k}}(by))_{(x,y) \in D}: a\in \fqm,b \in \fqk\},
\end{equation}
where $D \subset \fqm \times \fqk$.
The objective of this paper is to construct (distance-) optimal linear codes and (near) Griesmer codes over the finite field $\fq$ of the forms \eqref{CD} and \eqref{CDbi}. Our main contributions are summarized as follows:
\begin{enumerate}
\item [1)] we construct distance-optimal linear codes $\C_{D}$ of the form \eqref{CD} with the defining set
$D=\fqm\backslash \Omega_1$, where $\Omega_1=\cup_{i=1}^{h}\mathbb{F}_{q^{r_{i}}}$ and $1\leq r_{1} < r_{2} < \cdots < r_{h}<m$. A criterion $\sum_{i=1}^{h}q^{r_{i}}-|\Omega_1|<r_{1}+h-1$ for $\C_{D}$ to be distance-optimal is given by using the Griesmer bound, which enables us to obtain many distance-optimal linear codes. In particular, when $h=1$, our construction reduces to the Solomon and Stiffler codes in the nonprojective case (see \cite{HELL,GSJS}), and when $h=2$, we show that the code $\C_{D}$ is a near Griesmer code if $(q,t)=(2,1)$ and distance-optimal if $r_{1}+1>q^{t}$, where $\gcd(r_{1},r_{2})=t$. Further, when $h=1$ and $h=2$, the weight distributions of $\C_{D}$ are completely determined, which are $2$-weight and $5$-weight respectively.
\item [2)] we construct Griesmer codes and distance-optimal linear codes $\C_{D}$ of the form \eqref{CD} with the defining set $D=\fqm\backslash \Omega_2$, where $\Omega_2=\cup_{i=0}^{h}(\theta_{i}+\fqr)$, $r|m$, $\theta_{0}=0$ and $\theta_{i}\in\fqm^{*}$ for any $1\leq i \leq h$. This construction produces Griesmer codes if $h+1\leq q$, which have different parameters from the Solomon and Stiffler codes in the nonprojective case if $h\ne 0$. When $h+1> q$, we give an explicit computable criterion on $\C_{D}$ such that it is distance-optimal and consequently obtain many distance-optimal linear codes. It is proved that in this construction $\C_{D}$ is at most $(h+2)$-weight and the weight distributions of $\C_{D}$ for $h=1$ and $h=2$ are completely determined which are $3$-weight and $4$-weight respectively.
\item [3)] we characterize the optimality of the linear codes $\C_{D}$ of the form \eqref{CD} with the defining set $D=\fqm\backslash \Omega_3$, where $\Omega_3=\cup_{i=1}^{h}(\theta_{i}*\fqr)$, $r|m$, $\theta_{i}\in \fqm^{*}$ for any $1\leq i \leq h$, and give an explicit computable criterion on $\C_{D}$ such that it is distance-optimal. This allows us to produce many distance-optimal linear codes from this construction.
It is shown that $\C_{D}$ is a Griesmer code with the same parameters as the Solomon and Stiffler code in the nonprojective case if $h=1$ and it is a near Griesmer code if $h=2$ or $(q,h)=(2,3)$. In addition, we prove that $\C_{D}$ is at most $(h+1)$-weight and completely determine its weight distributions for $h=2$ and $h=3$, which are $3$-weight and $3$ or $4$-weight respectively.
\item [4)] we characterize the optimality of the linear codes $\C_{D}$ of the form \eqref{CDbi} for the defining set $D= \{(x,y):x\in \fqm\backslash \fqr, y \in \fqk\backslash \fqs \}$, where $r|m$ and $s|k$, and give an explicit computable criterion on $\C_{D}$ such that it is distance-optimal. As a consequence, we obtain many distance-optimal linear codes from this construction. In addition, the weight distribution of $\C_{D}$ is completely determined which is shown to be $4$-weight. A similar discussion for $r=s=0$ and ${\mathbb F}_{q^0}=\{0\}$ shows that the linear code $\C_{D}$ is a near Griesmer code (distance-optimal if $m+\lfloor \frac{2}{q}\rfloor>1$) when $m=k$ and it is a Griesmer code with the same parameters as the Solomon and Stiffler code in the nonprojective case when $m\ne k$.
\end{enumerate}
We also investigate the self-orthogonality and minimality of the linear codes constructed in this paper. Self-orthogonal codes can be used to construct quantum error-correcting codes \cite{KKKS} which can protect quantum information in quantum computations and quantum communications \cite{CRSS, DGOT} and minimal linear codes can be used to construct secret sharing schemes \cite{CCDY,DY,JLM,JYCD}. It is shown that most of our linear codes are either self-orthogonal or minimal.
\section{Preliminaries}
Let $q$ be a power of a prime $p$ and denote the canonical additive character of $\fq$ by
\[\chi(x)=\zeta_{p}^{\Tr_{p}^{q}(x)},\]
where $\zeta_{p}$ is a primitive complex $p$-th root of unity and $\Tr_{p}^{q}(\cdot)$ is the trace function from $\fq$ to $\fp$.
\begin{lem}(\cite{Lidl})\label{lem-trace}
Let $\alpha\in \fqm$. Then $\Tr_{q}^{q^{m}}(\alpha)=0$ if and only if $\alpha=\beta^q-\beta$ for some $\beta\in \fqm$.
\end{lem}
The optimality of near Griesmer codes can be determined as below.
\begin{lem}\label{near-Griesmer}
Let $\C$ be an $[n,k,d]$ near Griesmer code over $\fq$ with $k>1$. Then $\C$ is distance-optimal if $q\mid d$ and almost distance-optimal if $q\nmid d$.
\end{lem}
\begin{proof}
Since $\C$ is a near Griesmer code, one obtains $n=g(k,d)+1=\sum_{i=0}^{k-1} \lceil \frac{d}{q^i}\rceil+1$.
To complete the proof, it suffices to prove that
$g(k,d+1)>g(k,d)+1$ if $q\mid d$ and $g(k,d+1)=g(k,d)+1$ if $q\nmid d$. Note that
\begin{align*}
g(k,d+1)-g(k,d)-1&=\sum_{i=0}^{k-1} \lceil \frac{d+1}{q^i}\rceil-\sum_{i=0}^{k-1} \lceil \frac{d}{q^i}\rceil-1
=\sum_{i=1}^{k-1} \lceil \frac{d+1}{q^i}\rceil-\sum_{i=1}^{k-1} \lceil \frac{d}{q^i}\rceil.
\end{align*}
Then the result follows from the fact that $\lceil \frac{d+1}{q^i}\rceil=\lceil \frac{d}{q^i}\rceil+1$ if $q^i\mid d$ and otherwise $\lceil \frac{d+1}{q^i}\rceil=\lceil \frac{d}{q^i}\rceil$ for any integer $i>0$.
This completes the proof.
\end{proof}
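Lemma \ref{near-Griesmer} can be spot-checked numerically. The sketch below (plain Python; helper name is ours) verifies both cases for $q=2$, $k=6$, using the $[54,6,26]$ near Griesmer code that appears later in Section \ref{section-3}.

```python
def griesmer(k, d, q):
    # g(k, d) = sum_{i=0}^{k-1} ceil(d / q^i)
    return sum(-(-d // q**i) for i in range(k))

# Case q | d: a [54, 6, 26] binary near Griesmer code has n = g(6, 26) + 1.
# Since g(6, 27) = 55 > 54, no [54, 6, 27] code exists: distance-optimal.
assert griesmer(6, 26, 2) + 1 == 54 and griesmer(6, 27, 2) == 55

# Case q does not divide d: g(k, d + 1) = g(k, d) + 1, so a near Griesmer
# code with such d is only almost distance-optimal.
assert griesmer(6, 28, 2) == griesmer(6, 27, 2) + 1
```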
Some results on the self-orthogonality of linear codes of the form \eqref{CD} are given as follows.
\begin{lem}\label{self-orthogonalF}
Let $q$ be a power of a prime $p$ and $m$, $r$ be positive integers with $r\mid m$ and $(q,r)\notin \{(2,1),(2,2),(3,1)\}$. Define \\
\indent $1).$ $D= \fqr;$ or\\
\indent $2).$ $D=\{x+\theta: x\in \fqr\};$ or\\
\indent $3).$ $D=\{x*\theta: x\in \fqr\}$\\
where $\theta\in \fqm\backslash \fqr$. Then the linear code $\C_{D}$ defined in \eqref{CD} is self-orthogonal.
\end{lem}
\begin{proof}
$1)$ Observe that $\Tr_{q}^{q^{m}}(ax)=\Tr_{q}^{q^{r}}(\Tr_{q^{r}}^{q^{m}}(a)x)$ if $x\in D=\fqr$ which implies that in this case the linear code $\C_{D}$ can be reduced to $\{c_{a}=(\Tr_{q}^{q^{r}}(ax))_{x\in \fqr}: a\in \fqr\}$. Thus, to complete the proof, it is sufficient to show that $c_{a}\cdot c_{b}=0$ for any $a,b\in \fqr^*$. If $a$ and $b$ are linearly dependent over $\fq$, namely, there exists some $u\in \fq^*$ such that $b=ua$, then by the balanced property of trace functions one has that
$$c_{a}\cdot c_{b}=\sum\nolimits_{x\in\fqr}\Tr_{q}^{q^{r}}(ax)\Tr_{q}^{q^{r}}(uax)=u\sum\nolimits_{x\in\fqr}\Tr_{q}^{q^{r}}(ax)^2=uq^{r-1}\sum\nolimits_{y\in\fq}y^2.$$
Let $\alpha$ be a primitive element of $\fq$. It can be readily verified that $\sum_{y\in\fq}y^2=0$ if $\alpha\ne 1$ and $\alpha\ne -1$, i.e., if $q\ne 2$ and $q\ne 3$. Then $c_{a}\cdot c_{b}=0$ if $(q,r)\notin \{(2,1),(3,1)\}$. Now assume that $a$ and $b$ are linearly independent over $\fq$, which implies $r\geq 2$. Then, for any $(s,t)\in \fq^2$, we have
\begin{align*}
N_{s,t}:=&|\{ x\in \fqr: \Tr_{q}^{q^{r}}(ax)=s\,\,{\rm and}\,\, \Tr_{q}^{q^{r}}(bx)=t\}|\\
=&\frac{1}{q^2}\sum_{x\in \fqr}\sum_{u\in \fq}\chi(u(\Tr_{q}^{q^{r}}(ax)-s))
\sum_{v\in \fq}\chi(v(\Tr_{q}^{q^{r}}(bx)-t))\\
=&\frac{1}{q^2}\sum_{u\in \fq}\sum_{v\in \fq}\chi(-(us+vt))
\sum_{x\in \fqr}\chi(\Tr_{q}^{q^{r}}((au+bv)x))\\
=&q^{r-2}
\end{align*}
due to $au+bv=0$ if and only if $u=v=0$. This gives
$$c_{a}\cdot c_{b}=\sum_{x\in\fqr}\Tr_{q}^{q^{r}}(ax)\Tr_{q}^{q^{r}}(bx)=q^{r-2}\sum_{s,t\in \fq}st=q^{r-2}\sum_{s\in \fq}s\sum_{t\in \fq}t=0$$
if $(q,r)\not=(2,2)$.
$2)$ If $D=\{x+\theta: x\in \fqr\}$ for some $\theta\in \fqm\backslash \fqr$, then any codeword $c_{a}\in\C_{D}$ can be expressed as $c_{a}=\bar{c}_{a}+u_{a}$ where $\bar{c}_{a}=(\Tr_{q}^{q^{m}}(ax))_{x\in \fqr}\in \C_{\fqr}$ and $u_{a}=(\Tr_{q}^{q^{m}}(a\theta))_{x\in \fqr}$. For any two codewords $c_{a}=\bar{c}_{a}+u_{a}$ and $c_{b}=\bar{c}_{b}+u_{b}$ in $\C_{D}$, we have
$$c_{a}\cdot c_{b}=(\bar{c}_{a}+u_{a})\cdot(\bar{c}_{b}+u_{b})=\bar{c}_{a}\cdot \bar{c}_{b}+u_{a}\cdot\bar{c}_{b}+u_{b}\cdot\bar{c}_{a}+u_{a}\cdot u_{b}.$$
Note that $u_{a}\cdot u_{b}=q^{r}\Tr_{q}^{q^{m}}(a\theta)\Tr_{q}^{q^{m}}(b\theta)=0$ and
$u_{a}\cdot\bar{c}_{b}=\Tr_{q}^{q^{m}}(a\theta)\sum_{x\in\fqr}\Tr_{q}^{q^{r}}(\Tr_{q^{r}}^{q^{m}}(b)x)$ since $x\in \fqr$.
If $\Tr_{q^{r}}^{q^{m}}(b)=0$, then $u_{a}\cdot\bar{c}_{b}=0$. Otherwise, by the balanced property of trace functions, we have $\sum_{x\in \fqr}\Tr_{q}^{q^{r}}(\Tr_{q^{r}}^{q^{m}}(b)x)=q^{r-1}\sum_{y\in \fq}y=0$ if $(q,r)\ne (2,1)$. Similarly, we have $u_{b}\cdot\bar{c}_{a}=0$ if $(q,r)\ne (2,1)$. Then, by 1) of Lemma \ref{self-orthogonalF}, we have $c_{a}\cdot c_{b}=\bar{c}_{a}\cdot \bar{c}_{b}=0$ if $(q,r)\notin \{(2,1),(2,2),(3,1)\}$.
$3)$ The linear code $\C_{D}$ in this case can be expressed as $\{c_{a}=(\Tr_{q}^{q^{m}}(a\theta x))_{ x\in \fqr}: a\in \fqm\}$ which is exactly the code $\{c_{b}=(\Tr_{q}^{q^{r}}(bx))_{x\in \fqr}: b\in \fqr\}$ since $\Tr_{q}^{q^{m}}(ax)=\Tr_{q}^{q^{r}}(\Tr_{q^{r}}^{q^{m}}(a)x)$ if $x\in \fqr$. Then the result follows from 1) of Lemma \ref{self-orthogonalF}.
\end{proof}
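Case 1) of Lemma \ref{self-orthogonalF} can be checked exhaustively for small fields. The sketch below (plain Python; the $\mathbb{F}_{2^3}$ model with modulus $x^3+x+1$ is our own choice of representation) verifies that for $(q,r)=(2,3)$, which avoids the excluded pairs, every pair of codewords of $\C_{\mathbb{F}_{2^3}}$ has even inner product.

```python
M, MOD = 3, 0b1011  # F_{2^3} modeled as F_2[x] / (x^3 + x + 1)

def mul(a, b):
    """Carry-less product of two field elements, reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def trace(x):
    """Absolute trace Tr(x) = x + x^2 + ... + x^(2^(M-1)), always 0 or 1."""
    t, s = x, x
    for _ in range(M - 1):
        s = mul(s, s)
        t ^= s
    return t

# Codewords c_a = (Tr(a*x))_{x in F_8}; check c_a . c_b = 0 over F_2.
codewords = [[trace(mul(a, x)) for x in range(1 << M)] for a in range(1 << M)]
assert all(sum(u * v for u, v in zip(ca, cb)) % 2 == 0
           for ca in codewords for cb in codewords)
```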
The following lemma can be readily verified and will be frequently used in the sequel.
\begin{lem}\label{self-orthogonalD-}
Let $D,D_{1},D_{2}\subseteq \fqm$ (resp. $\fqm\times \fqk$). Then \\
\indent $1).$ Denote $D^{*}=D\backslash \{0\}$ if $0\in D$ (resp. $D^{*}=D\backslash \{(0,0)\}$ if $(0,0)\in D$). $\C_{D}$ defined in \eqref{CD} (resp. \eqref{CDbi}) is self-orthogonal if and only if $\C_{D^{*}}$ defined in \eqref{CD} (resp. \eqref{CDbi}) is self-orthogonal$;$\\
\indent $2).$ Let $D=D_{1}\cup D_{2}$ and $D_{1}\cap D_{2}=\{0\} \,\,{\rm or}\,\, \emptyset$ (resp. $\{(0,0)\} \,\,{\rm or}\,\, \emptyset$). $\C_{D}$ defined in \eqref{CD} (resp. \eqref{CDbi}) is self-orthogonal if $\C_{D_{1}}$ and $\C_{D_{2}}$ defined in \eqref{CD} (resp. \eqref{CDbi}) are both self-orthogonal$;$\\
\indent $3).$ Let $D_{1}\subseteq D_{2}$ and $D=D_{2}\backslash D_{1}$. $\C_{D}$ defined in \eqref{CD} (resp. \eqref{CDbi}) is self-orthogonal if $\C_{D_{1}}$ and $\C_{D_{2}}$ defined in \eqref{CD} (resp. \eqref{CDbi}) are both self-orthogonal.
\end{lem}
A vector $u\in \fq^{n}$ covers a vector $v\in \fq^{n}$ if ${\rm Suppt}(v)\subseteq {\rm Suppt}(u)$, where ${\rm Suppt}(u)= \{1 \leq i \leq n : u_{i}\ne 0\}$ is the support of $u=(u_{1},u_{2},\cdots,u_{n})\in \fq^{n}$. A codeword $u$ in $\C$ is said to be minimal if $u$ covers only the codeword $au$ for all $a \in \fq$, but no other codewords in $\C$. A linear code $\C$ is said to be minimal if every
codeword in $\C$ is minimal.
Ashikhmin and Barg's result is often used to determine whether a linear code is minimal.
\begin{lem} (\cite{AAAB}) \label{minimal}
A linear code $\C$ over $\fq$ is minimal if $w_{min}/w_{max}>(q-1)/q$, where $w_{min}$ and $w_{max}$ denote the minimum and maximum
nonzero Hamming weights in $\C$, respectively.
\end{lem}
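As a quick illustration of Lemma \ref{minimal}, the two nonzero weights of the ternary $[720,6,480]$ code of Section \ref{section-3} already certify minimality (exact rational arithmetic in Python; the weights $480$ and $486$ are taken from the example following Theorem \ref{optimalcode-con-1-1}).

```python
from fractions import Fraction

# Ternary [720, 6, 480] code with nonzero weights {480, 486}.
q, w_min, w_max = 3, 480, 486
# Ashikhmin-Barg sufficient condition: w_min / w_max > (q - 1) / q.
assert Fraction(w_min, w_max) > Fraction(q - 1, q)
```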
\section{The first family of optimal linear codes} \label{section-3}
In this section, we investigate the linear codes $\C_{D}$ of the form \eqref{CD} for
\begin{eqnarray}\label{CD1-D1}
D= \fqm \backslash \Omega_1,\;\;\Omega_1=\cup_{i=1}^{h} {\mathbb F}_{q^{r_{i}}},
\end{eqnarray}
where $m>1$, $1\leq r_{1} < r_{2} < \cdots < r_{h}<m$ are positive integers satisfying $r_{i}|m$ for any $1\leq i \leq h$, $r_{i}\nmid r_{j}$ for any $1\leq i<j \leq h$ and $\gcd(r_{1}, r_{2}, \cdots, r_{h})=t$. In particular, define $t=r_1$ if $h=1$.
For simplicity, define
\begin{eqnarray*
\Theta_1=\{a \in \fqm: \Tr_{q^{r_{i}}}^{q^{m}}(a)\ne 0 \,\,{\rm for\,\,any\,\,} i \,\,{\rm and\,\,} \Tr_{q^{\gcd(r_{i},r_{j})}}^{q^{m}}(a)= 0 \,\,{\rm for\,\,any\,\,}
i<j\}.
\end{eqnarray*}
\begin{thm} \label{optimalcode-con-1}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D1}. If $\Theta_1$ is nonempty, then \\
$1).$ $\C_{D}$ is a $[q^m-|\Omega_1|,m,(q-1)(q^{m-1}-\sum_{i=1}^{h}q^{r_{i}-1})]$ linear code;\\
$2).$ $\C_{D}$ is distance-optimal if $\sum_{i=1}^{h}q^{r_{i}}-|\Omega_1|<r_{1}+h-1$; \\
$3).$ $\C_{D}$ is self-orthogonal if $(q,t)\notin \{(2,1),(2,2),(3,1)\}$;\\
$4).$ $\C_{D}$ is minimal if $q^{m-1}> \sum_{i=1}^{h}q^{r_{i}}$.
\end{thm}
\begin{proof}
Denote $\Upsilon:=\{r_{1},r_{2},\cdots, r_{h}\}$ and $r_{S}:=\gcd(s_{1},s_{2},\cdots ,s_{|S|})$ for any set $S=\{s_{1},s_{2},\cdots ,s_{|S|}\}$, where $s_{i}$'s are positive integers. By the definition of $\Omega_1$ and the principle of inclusion-exclusion, we have
\[|\Omega_1|=\sum_{1\leq i\leq h} |{\mathbb F}_{q^{r_{i}}}|
-\sum_{1\leq i<j\leq h}|{\mathbb F}_{q^{r_{i}}}\cap {\mathbb F}_{q^{r_{j}}}|
+\cdots+(-1)^{h-1} |\cap_{i=1}^{h} {\mathbb F}_{q^{r_{i}}}|
=\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}q^{r_{S}}\]
and consequently, the length of $\C_{D}$ is $n=q^m-\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}q^{r_{S}}$. For any $a\in \fqm^*$, the Hamming weight $wt(c_{a})$ of the codeword $c_{a}$ in $\C_{D}$ is $n-N_{a}$, where $N_{a}=|\{x\in D: \Tr_{q}^{q^{m}}(ax) = 0\}|$. Using the orthogonal property of nontrivial additive characters, for $a\not=0$, we have
\begin{align*}
N_{a}=&\frac{1}{q}\sum_{x\in D}
\sum_{u\in \fq} \chi(u\Tr_{q}^{q^{m}}(ax))
=q^{m-1}-\frac{1}{q}\sum_{x\in \Omega_1}
\sum_{u\in \fq} \chi(u\Tr_{q}^{q^{m}}(ax)):=q^{m-1}-\Delta.
\end{align*}
The above discussions lead to
\begin{align} \label{equation-thm1-1}
wt(c_{a})=(q-1)q^{m-1}-(|\Omega_1|-\Delta).
\end{align}
Next, we determine the maximal value of $|\Omega_1|-\Delta$ for any $a\in \fqm^*$ in order to determine the minimal distance of $\C_{D}$.
To calculate $\Delta$, for any positive integer $l\mid m$, define
\[\Phi({\mathbb F}_{q^{l}}):= \frac{1}{q}\sum_{u\in \fq} \sum_{x\in {\mathbb F}_{q^{l}}}\chi(u\Tr_{q}^{q^{m}}(ax)),\]
which satisfies
\begin{align} \label{equation-thm1-4}
\Phi({\mathbb F}_{q^{l}})
=\left\{\begin{array}{ll}
q^{l}, & \mbox{if}\,\,\Tr_{q^{l}}^{q^{m}}(a)=0, \\[0.05in]
q^{l-1}, & \mbox{otherwise}.
\end{array}\right.
\end{align}
Utilizing the principle of inclusion-exclusion gives
\begin{align*}
\Delta &=\sum_{1\leq i\leq h} \Phi({\mathbb F}_{q^{r_{i}}})
-\sum_{1\leq i<j\leq h}\Phi({\mathbb F}_{q^{r_{i}}}\cap {\mathbb F}_{q^{r_{j}}})
+\cdots+(-1)^{h-1} \Phi(\cap_{i=1}^{h}{\mathbb F}_{q^{r_{i}}})
=\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}\Phi({\mathbb F}_{q^{r_{S}}}).
\end{align*}
Then we have
\begin{align} \label{equation-thm1-5}
|\Omega_1|-\Delta=&\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}(q^{r_{S}}-\Phi({\mathbb F}_{q^{r_{S}}})):=\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}f_{a}(S).
\end{align}
Define $\Upsilon_{i}:=\{\gcd(r_{1},r_{i}),\gcd(r_{2},r_{i}),\cdots, \gcd(r_{i-1},r_{i})\}$.
Then by \eqref{equation-thm1-5} one gets
\begin{align} \label{equation-thm1-2}
|\Omega_1|-\Delta=\sum_{\emptyset \not=S\subseteq \Upsilon}(-1)^{|S|-1}f_{a}(S)
=\sum_{i=1}^{h}f_{a}(\{r_{i}\})-\sum_{i=2}^{h}\sum_{\emptyset \not=S\subseteq \Upsilon_{i}}(-1)^{|S|-1}f_{a}(S),
\end{align}
which can be verified by using the mathematical induction as follows:\\
\indent $1).$ For $h=2$, it can be readily verified that
\[\sum_{\emptyset \not=S\subseteq \{r_{1},r_{2}\}}(-1)^{|S|-1}f_{a}(S)=f_{a}(\{r_{1}\})+f_{a}(\{r_{2}\})-\sum_{\emptyset \not=S\subseteq \Upsilon_{2}}(-1)^{|S|-1}f_{a}(S).\]
\indent $2).$ Suppose that \eqref{equation-thm1-2} holds for $h=s$. Then we have
\begin{align*}
&\sum_{\emptyset \not=S\subseteq \{r_{1},r_{2},\cdots,r_{s+1}\}}(-1)^{|S|-1}f_{a}(S)\\
=&\sum_{\emptyset \not=S\subseteq \{r_{1},r_{2},\cdots,r_{s}\}}(-1)^{|S|-1}f_{a}(S)+f_{a}(\{r_{s+1}\})-\sum_{\emptyset \not=S\subseteq \Upsilon_{s+1}}(-1)^{|S|-1}f_{a}(S)\\
=&\sum_{i=1}^{s+1}f_{a}(\{r_{i}\})-\sum_{i=2}^{s+1}\sum_{\emptyset \not=S\subseteq \Upsilon_{i}}(-1)^{|S|-1}f_{a}(S),
\end{align*}
which implies that \eqref{equation-thm1-2} also holds for $h=s+1$.
Now we claim that $\sum_{\emptyset \not=S\subseteq \Upsilon_{i}}(-1)^{|S|-1}f_{a}(S)\geq0$ for any $2\leq i \leq h$, which implies $|\Omega_1|-\Delta \leq \sum_{i=1}^{h}f_{a}(\{r_{i}\})$.
Indeed, for $a\in \fqm^*$, the Hamming weight of the codeword $\tilde{c}_{a}$ in the linear code $\C_{D_{\Upsilon_{i}}}$ of the form \eqref{CD} with $D_{\Upsilon_{i}}=\bigcup_{r\in \Upsilon_{i}} {\mathbb F}_{q^{r}}$ is \[wt(\tilde{c}_{a})=|D_{\Upsilon_{i}}|-|\{x\in D_{\Upsilon_{i}}: \Tr_{q}^{q^{m}}(ax) = 0 \}|=|D_{\Upsilon_{i}}|-\frac{1}{q}\sum_{x\in D_{\Upsilon_{i}}}\sum_{u\in \fq} \chi(u\Tr_{q}^{q^{m}}(ax)).\]
Again by the principle of inclusion-exclusion, similar to the computation of \eqref{equation-thm1-5}, we have
\[
wt(\tilde{c}_{a})=\sum_{\emptyset \not=S\subseteq \Upsilon_{i}}(-1)^{|S|-1}f_{a}(S)\geq 0.
\]
Therefore, by \eqref{equation-thm1-4}, we can obtain
\[|\Omega_1|-\Delta \leq \sum_{i=1}^{h}f_{a}(\{r_{i}\}) \leq (q-1)\sum_{1\leq i\leq h} q^{r_{i}-1}.\]
On the other hand, for any $S\subseteq \Upsilon$ with $|S|\geq 2$, we have $r_{S}|\gcd(r_{i},r_{j})$ for some $1\leq i<j \leq h$.
Since $\Theta_1$ is nonempty, then for $a\in \Theta_1$, by \eqref{equation-thm1-4} one has that $|\Omega_1|-\Delta=(q-1)\sum_{1\leq i\leq h} q^{r_{i}-1}$ and consequently,
\[d=wt(c_{a})=(q-1)(q^{m-1}-\sum\nolimits_{1\leq i\leq h} q^{r_{i}-1}).\]
Note that $\sum_{1\leq i\leq h} q^{r_{i}}\leq \sum_{0\leq i\leq m-1} q^{i}=\frac{q^m-1}{q-1}\leq q^m-1<q^m$ which implies that $d>0$. This shows that the dimension of $\C_{D}$ is equal to $m$.
A detailed computation using the Griesmer bound gives
\begin{align} \label{equation-thm1-3}
g(m,d)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-\sum_{1\leq j\leq h} q^{r_{j}-1})}{q^i}\rceil
=q^{m}-\sum_{1\leq j\leq h} q^{r_{j}}+h-1
\end{align}
and
\begin{align*}
g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-\sum_{1\leq j\leq h} q^{r_{j}-1})+1}{q^i}\rceil
=q^{m}-\sum_{1\leq j\leq h} q^{r_{j}}+r_{1}+h-1.
\end{align*}
Thus, the code $\C_{D}$ is distance-optimal if $g(m,d+1)>n$, i.e., $r_{1}+h-1>\sum_{i=1}^{h}q^{r_{i}}-|\Omega_1|$. By Lemmas \ref{self-orthogonalF} and \ref{self-orthogonalD-}, it can be verified that $\C_{D}$ is self-orthogonal if $(q,t)\notin \{(2,1),(2,2),(3,1)\}$.
Note that $|\Omega_1|-\Delta\geq 0$ and $wt(c_{a})\leq (q-1)q^{m-1}$ due to \eqref{equation-thm1-1}. Then, by Lemma \ref{minimal}, $\C_{D}$ is minimal if $q^{m-1}> \sum_{i=1}^{h}q^{r_{i}}$ since $\frac{w_{min}}{w_{max}}\geq\frac{(q-1)(q^{m-1}-\sum_{i=1}^{h}q^{r_{i}-1})}{(q-1)q^{m-1}}>\frac{q-1}{q}$.
This completes the proof.
\end{proof}
\begin{remark}
Notice that the condition $\Theta_1\ne \emptyset$ can be easily satisfied. For $h=1$, we have
$|\Theta_1|=q^{m}-q^{m-r_{1}}>0$. For $h=2$, we have $|\Theta_1|\geq q^{m-t}-q^{m-r_{1}}-q^{m-r_{2}}>q^{m-t}-q^{m-r_{1}+1}\geq 0$ since $|\{a\in \fqm: \Tr_{q^{r}}^{q^{m}}(a)=0\}|=q^{m-r}$ for any $r|m$ and $t< r_{1}$.
In addition, for a general $h>2$, if $\gcd(r_{i},r_{j})=t$ for any $i<j$, which implies $r_{1}>t$ since $r_{i}\nmid r_{j}$ for any $i<j$, we have
$|\Theta_1| \geq q^{m-t}-\sum_{i=1}^{h}q^{m-r_{i}}\geq q^{m-t}-\sum_{i=t+1}^{m}q^{m-i}=\frac{(q-2)q^{m-t}+1}{q-1}>0$.
\end{remark}
The inequality $r_{1}+h-1>\sum_{i=1}^{h}q^{r_{i}}-|\Omega_1|$ in Theorem \ref{optimalcode-con-1} can be easily satisfied when $r_{1}$ is large enough and $\max_{1\leq i<j \leq h}\{\gcd(r_{i},r_{j})\}$ is small, and then many optimal linear codes can be produced.
In particular, if $h=1$, then $\C_{D}$ in Theorem \ref{optimalcode-con-1} is a Solomon and Stiffler code \cite{GSJS} in the nonprojective case and its weight distribution can be determined as below.
\begin{thm} \label{optimalcode-con-1-1}
Let $m>1$ and $r<m$ be positive integers with $r|m$. If $D= \fqm \backslash \fqr$, then $\C_{D}$ defined in \eqref{CD} is a $2$-weight $[q^m-q^r,m,(q-1)(q^{m-1}-q^{r-1})]$ linear code with weight enumerator
$$1+(q^{m}-q^{m-r})z^{(q-1)(q^{m-1}-q^{r-1})}+(q^{m-r}-1)z^{(q-1)q^{m-1}}$$
and it is a Griesmer code.
\end{thm}
\begin{proof}
It is obvious that $n=|D|=q^m-q^r$ and by \eqref{equation-thm1-1} and \eqref{equation-thm1-5}, for $a\in \fqm^{*}$, we have
\[wt(c_{a})=(q-1)q^{m-1}-q^{r}+\Phi({\mathbb F}_{q^{r}}).\]
This together with \eqref{equation-thm1-4} implies that
\begin{align*}
wt(c_{a})=\left\{\begin{array}{ll}
(q-1)q^{m-1}, & \mbox{if}\,\,\Tr_{q^{r}}^{q^{m}}(a)=0, \\[0.05in]
(q-1)(q^{m-1}-q^{r-1}), & \mbox{otherwise}.
\end{array}\right.
\end{align*}
Then the weight distribution of $\C_{D}$ follows from the balanced property of trace functions and
$\C_{D}$ is a Griesmer code due to \eqref{equation-thm1-3}.
This completes the proof.
\end{proof}
\begin{example}
Let $q=3$, $m=6$, $r=2$. Magma experiments show that $\C_{D}$ is a $[720,6,480]$ linear code with the weight enumerator $1+648z^{480}+80z^{486}$, which is consistent with our result in Theorem \ref{optimalcode-con-1-1}. This code is a Griesmer code.
\end{example}
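For small parameters the weight enumerator of Theorem \ref{optimalcode-con-1-1} can also be reproduced by exhaustive search. The sketch below (plain Python; our $\mathbb{F}_{2^4}$ model uses the modulus $x^4+x+1$) rebuilds $\C_{D}$ for $q=2$, $m=4$, $r=2$, where the theorem predicts a $[12,4,6]$ code with weight enumerator $1+12z^{6}+3z^{8}$.

```python
M, MOD = 4, 0b10011  # F_{2^4} modeled as F_2[x] / (x^4 + x + 1)

def mul(a, b):
    """Carry-less product of two field elements, reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def trace(x):  # absolute trace to F_2
    t, s = x, x
    for _ in range(M - 1):
        s = mul(s, s)
        t ^= s
    return t

def frob(x, k):  # x^(2^k)
    for _ in range(k):
        x = mul(x, x)
    return x

# Defining set D = F_16 \ F_4, where F_4 = {x : x^(2^2) = x}.
D = [x for x in range(1 << M) if frob(x, 2) != x]
assert len(D) == 12

weights = {}
for a in range(1, 1 << M):
    w = sum(trace(mul(a, x)) for x in D)
    weights[w] = weights.get(w, 0) + 1
assert weights == {6: 12, 8: 3}  # matches the theorem's weight enumerator
```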
Note that $D=\ftwom \backslash \{0,1\}$ if $q=2$ and $r=1$ in Theorem \ref{optimalcode-con-1-1}. The following result shows that a special class of Griesmer codes can also be obtained from $D=\fqm \backslash \{0,1\}$ for $q\ne 2$. We omit its proof here since it can be proved in the same manner.
\begin{thm} \label{optimalcode-con-1-3}
Let $q\ne 2 $ and $m>1$ be a positive integer. If $D=\fqm \backslash \{0,1\}$, then $\C_{D}$ defined by \eqref{CD} is a $2$-weight $[q^m-2,m,(q-1)q^{m-1}-1]$ linear code with weight enumerator
$$1+(q^{m-1}-1)z^{(q-1)q^{m-1}}+(q^{m}-q^{m-1})z^{(q-1)q^{m-1}-1}.$$
Moreover, this code is a Griesmer code and it is minimal.
\end{thm}
\begin{example}
Let $q=3$, $m=5$. Magma experiments show that $\C_{D}$ is a $[241,5,161]$ linear code with the weight enumerator $1+162z^{161}+80z^{162}$, which is consistent with our result in Theorem \ref{optimalcode-con-1-3}. This code is a Griesmer code and it is also minimal.
\end{example}
In what follows, we determine the weight distribution of $\C_{D}$ in Theorem \ref{optimalcode-con-1} for $h=2$.
\begin{thm} \label{optimalcode-con-1-2}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D1}. If $h=2$, then $\C_{D}$ is a linear code with parameters $[q^m-q^{r_{2}}-q^{r_{1}}+q^t,m,(q-1)(q^{m-1}-q^{r_{2}-1}-q^{r_{1}-1})]$
and its weight distribution is given by
\begin{center}
\begin{tabular}{lllll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)q^{m-1}$ & $q^{m-r_{2}-r_{1}+t}-1$ \\
$(q-1)(q^{m-1}-q^{r_{1}-1})$ & $ q^{m-r_{2}}-q^{m-r_{2}-r_{1}+t}$ \\
$(q-1)(q^{m-1}-q^{r_{2}-1})$ & $q^{m-r_{1}}-q^{m-r_{2}-r_{1}+t}$ \\
$(q-1)(q^{m-1}-q^{r_{2}-1}-q^{r_{1}-1}+q^{t-1})$ & $q^{m}-q^{m-t}$ \\
$(q-1)(q^{m-1}-q^{r_{2}-1}-q^{r_{1}-1})$ & $q^{m-t}+q^{m-r_{2}-r_{1}+t}-q^{m-r_{2}}-q^{m-r_{1}}$ \\
\hline
\end{tabular}
\end{center}
Moreover, $\C_{D}$ is a near Griesmer code if $(q,t)=(2,1)$ and distance-optimal if $r_{1}+1>q^t$.
\end{thm}
\begin{proof}
Based on the discussions in Theorem \ref{optimalcode-con-1}, one can obtain the parameters of $\C_{D}$ and conclude that the weight $wt(c_{a})$ of $c_{a}\in\C_{D}$ is
\[wt(c_{a})=n-q^{m-1}+\Phi({\mathbb F}_{q^{r_{1}}})+\Phi({\mathbb F}_{q^{r_{2}}})-\Phi({\mathbb F}_{q^{t}})\]
for $a\in \fqm^{*}$. Then, by \eqref{equation-thm1-4}, for $a\in \fqm^{*}$, one gets
\begin{align*}
wt(c_{a})=\left\{\begin{array}{lllll}
w_1:=(q-1)q^{m-1}, &
\mbox{if}\,\,\Tr_{q^{r_{2}}}^{q^{m}}(a)=0,\,\,\Tr_{q^{r_{1}}}^{q^{m}}(a)=0 , \\
w_2:=(q-1)(q^{m-1}-q^{r_{1}-1}), &
\mbox{if}\,\,\Tr_{q^{r_{2}}}^{q^{m}}(a)=0,\,\,\,\Tr_{q^{r_{1}}}^{q^{m}}(a)\not=0, \\
w_3:=(q-1)(q^{m-1}-q^{r_{2}-1}), &
\mbox{if}\,\,\Tr_{q^{r_{2}}}^{q^{m}}(a)\not=0,\,\,\Tr_{q^{r_{1}}}^{q^{m}}(a)=0, \\
w_4:=(q-1)(q^{m-1}-q^{r_{2}-1}-q^{r_{1}-1}+q^{t-1}), &
\mbox{if}\,\,\Tr_{q^{r_{2}}}^{q^{m}}(a)\Tr_{q^{r_{1}}}^{q^{m}}(a)\Tr_{q^{t}}^{q^{m}}(a)\not=0 , \\
w_5:=(q-1)(q^{m-1}-q^{r_{2}-1}-q^{r_{1}-1}), &
\mbox{otherwise}
\end{array}\right.
\end{align*}
which together with Lemma \ref{lem-trace} indicates that
\begin{align*}
A_{w_{1}}+1=&|\{x\in \fqm: \Tr_{q^{r_{2}}}^{q^{m}}(x)=\Tr_{q^{r_{1}}}^{q^{m}}(x)=0\}|
=\frac{1}{q^{r_{1}}}|\{x\in \fqm: \Tr_{q^{r_{2}}}^{q^{m}}(x^{q^{r_{1}}}-x)=0\}|\\
=&\frac{1}{q^{r_{2}+r_{1}}}\sum_{x\in \fqm}\sum_{y\in {\mathbb F}_{q^{r_{2}}}}\chi(\Tr_{q}^{q^{r_{2}}}(y\Tr_{q^{r_{2}}}^{q^{m}}(x^{q^{r_{1}}}-x)))\\
=&\frac{1}{q^{r_{2}+r_{1}}}\sum_{y\in {\mathbb F}_{q^{r_{2}}}}\sum_{x\in \fqm}\chi(\Tr_{q}^{q^{m}}(y(x^{q^{r_{1}}}-x)))\\
=&\frac{1}{q^{r_{2}+r_{1}}}\sum_{y\in {\mathbb F}_{q^{r_{2}}}}\sum_{x\in \fqm}\chi(\Tr_{q}^{q^{m}}((y-y^{q^{r_{1}}})x))\\
=&q^{m-r_{2}-r_{1}+t}.
\end{align*}
The last equality holds since the equation $y^{q^{r_{1}}}=y$ has exactly $q^t$ solutions in ${\mathbb F}_{q^{r_{2}}}$.
Further, we have $A_{w_{2}}=q^{m-r_{2}}-q^{m-r_{2}-r_{1}+t}$ and $A_{w_{3}}=q^{m-r_{1}}-q^{m-r_{2}-r_{1}+t}$ since $A_{w_{1}}+A_{w_{2}}= |\{a\in \fqm^{*}: \Tr_{q^{r_{2}}}^{q^{m}}(a)=0\}|=q^{m-r_{2}}-1$ and $A_{w_{1}}+A_{w_{3}}= |\{a\in \fqm^{*}: \Tr_{q^{r_{1}}}^{q^{m}}(a)=0\}|=q^{m-r_{1}}-1$.
Since $0 \notin D$, the minimum distance of $\C_{D}^{\perp}$ is bigger than one, i.e., $A_{1}^{\perp}=0$. From the Pless Power Moments (see \cite{HWPV}, page 259), we have
$$\left\{\begin{array}{lll}
A_{w_{1}}+A_{w_{2}}+A_{w_{3}}+A_{w_{4}}+A_{w_{5}}=q^{m}-1,\\
w_{1}A_{w_{1}}+w_{2}A_{w_{2}}+w_{3}A_{w_{3}}+w_{4}A_{w_{4}}+w_{5}A_{w_{5}}=q^{m-1}(q-1)n.
\end{array}\right.$$
Solving the above equations gives the values of $A_{w_{4}}$ and $A_{w_{5}}$. On the other hand, using \eqref{equation-thm1-3}, we have $g(m,d)=q^m-q^{r_{2}}-q^{r_{1}}+1$ and then $\C_{D}$ is a near Griesmer code if $(q,t)=(2,1)$. The code $\C_{D}$ is distance-optimal if $r_{1}+1>q^t$
according to Theorem \ref{optimalcode-con-1}.
This completes the proof.
\end{proof}
\begin{example}
Let $q=2$, $m=6$, $r_{2}=3$ and $r_{1}=2$. Magma experiments show that $\C_{D}$ is a $[54,6,26]$ linear code with the weight enumerator $1+12z^{26}+32z^{27}+12z^{28}+4z^{30}+3z^{32}$, which is consistent with our result in Theorem \ref{optimalcode-con-1-2}. This code is a near Griesmer code and it is distance-optimal due to \cite{GMB}.
\end{example}
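The Magma computation above can be reproduced independently by a short brute-force search (plain Python; our $\mathbb{F}_{2^6}$ model uses the primitive modulus $x^6+x+1$), confirming the weight distribution of Theorem \ref{optimalcode-con-1-2} for $q=2$, $m=6$, $r_{2}=3$, $r_{1}=2$.

```python
M, MOD = 6, 0b1000011  # F_{2^6} modeled as F_2[x] / (x^6 + x + 1)

def mul(a, b):
    """Carry-less product of two field elements, reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def trace(x):  # absolute trace to F_2
    t, s = x, x
    for _ in range(M - 1):
        s = mul(s, s)
        t ^= s
    return t

def frob(x, k):  # x^(2^k)
    for _ in range(k):
        x = mul(x, x)
    return x

# D = F_64 \ (F_8 union F_4), with F_{2^r} = {x : x^(2^r) = x};
# the two subfields intersect in F_2, so |D| = 64 - 8 - 4 + 2 = 54.
D = [x for x in range(1 << M) if frob(x, 3) != x and frob(x, 2) != x]
assert len(D) == 54

weights = {}
for a in range(1, 1 << M):
    w = sum(trace(mul(a, x)) for x in D)
    weights[w] = weights.get(w, 0) + 1
assert weights == {26: 12, 27: 32, 28: 12, 30: 4, 32: 3}
```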
\section{The second family of optimal linear codes} \label{section-4}
In this section, we investigate the linear codes $\C_{D}$ of the form \eqref{CD} for
\begin{eqnarray}\label{CD1-D3}
D= \fqm \backslash \Omega_2,\;\;\Omega_2=\cup_{i=0}^{h}(\theta_{i}+\fqr),
\end{eqnarray}
where $m>1$, $r<m$ are positive integers satisfying $r|m$ and $\theta_{0}=0$, $\theta_{i}\in \fqm^*$ for $1\leq i\leq h$ satisfying $\theta_{i}-\theta_{j}\notin\fqr$ for any $0\leq i<j \leq h$.
For simplicity, define
\begin{eqnarray*}
\Theta_2=\{a\in \fqm: \Tr_{q^{r}}^{q^{m}}(a)=0\,\,{\rm and}\,\,\Tr_{q}^{q^{m}}(a\theta_{i})\not=0\,\,{\rm for\,\,any\,\,} 1\leq i \leq h\}.
\end{eqnarray*}
\begin{thm} \label{optimalcode-con-3}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D3}, where $h$ is a positive integer satisfying $h+1<q^{m-r}$ if $h+1\leq q$ and $h<(q-1)q^{m-r-1}$ otherwise, and suppose that $\Theta_2$ is nonempty.
Then \\
$1).$ $\C_{D}$ is a $[q^m-(h+1)q^r,m]$ linear code with minimum distance $d=(q-1)(q^{m-1}-(h+1)q^{r-1})$ (resp. $d=(q-1)q^{m-1}-hq^{r}$) if $h+1\leq q$ (resp. $h+1>q$);\\
$2).$ $\C_{D}$ is a Griesmer code if $h+1\leq q$; and when $h+1>q$, it is distance-optimal if
$(h+1)q^r+r>1+hq\frac{q^r-1}{q-1}+\sum_{i=r}^{m-1}\lfloor \frac{hq^r-1}{q^i}\rfloor$;\\
$3).$ $\C_{D}$ is at most $(h+2)$-weight and its weights take values from
\[\{(q-1)(q^{m-1}-(h+1)q^{r-1})\}\cup\{(q-1)q^{m-1}-iq^{r}:i=0,1,2,\cdots, h\};\]
$4).$ $\C_{D}$ is self-orthogonal if $(q,r)\notin \{(2,1),(2,2),(3,1)\}$;\\
$5).$ $\C_{D}$ is minimal if $h+1<q^{m-r-1}$ (resp. $h<(q-1)q^{m-r-2}$) when $h+1\leq q$ (resp. $h+1> q$).
\end{thm}
\begin{proof}
Observe that the length of $\C_{D}$ is $n=|D|=q^m-(h+1)q^r$ since $(\theta_{i}+\fqr)\cap(\theta_{j}+\fqr)=\emptyset$ due to $\theta_{i}-\theta_{j}\notin\fqr$ for any $0\leq i<j \leq h$. For $a\in \fqm^*$, the Hamming weight $wt(c_{a})$ of the codeword $c_{a}$ in $\C_{D}$ is $n-N_{a}$, where $N_{a}=|\{x\in \fqm\backslash \Omega_2: \Tr_{q}^{q^{m}}(ax) = 0\}|$. Using the orthogonal property of nontrivial additive characters gives
\begin{align*}
N_{a}=&\frac{1}{q}\sum_{x\in \fqm\backslash \Omega_2}\sum_{u\in \fq} \chi(u\Tr_{q}^{q^{m}}(ax))
=\frac{1}{q}\sum_{u\in \fq}(\sum_{x\in \fqm} \chi(u\Tr_{q}^{q^{m}}(ax))
-\sum_{i=0}^{h}\sum_{x\in (\theta_{i}+\fqr)} \chi(u\Tr_{q}^{q^{m}}(ax)))\\
=&q^{m-1}-\frac{1}{q}\sum_{u\in \fq}\sum_{i=0}^{h}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a(x+\theta_{i}))).
\end{align*}
Hence, for $a\in \fqm^*$, we have
\[wt(c_{a})=(q-1)q^{m-1}-(h+1)q^r+\frac{1}{q}\sum_{i=0}^{h}\sum_{u\in \fq}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a(x+\theta_{i}))).\]
This together with the following fact
\begin{align*}
\sum_{u\in \fq}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a(x+\theta)))=\left\{\begin{array}{ll}
q^{r+1}, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\Tr_{q}^{q^{m}}(a\theta)=0$}, \\
0, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\Tr_{q}^{q^{m}}(a\theta)\not=0$}, \\
q^{r}, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)\not=0$},
\end{array}\right.
\end{align*}
where $\theta\in \fqm$, one can claim that $wt(c_{a})$ takes values from
\[\{(q-1)(q^{m-1}-(h+1)q^{r-1})\}\cup\{(q-1)q^{m-1}-iq^{r}:i=0,1,2,\cdots, h\}.\]
Moreover, $wt(c_{a})=(q-1)(q^{m-1}-(h+1)q^{r-1})$ if and only if $\Tr_{q^{r}}^{q^{m}}(a)\not=0$ and $wt(c_{a})=(q-1)q^{m-1}-hq^{r}$ if and only if $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\Tr_{q}^{q^{m}}(a\theta_{i})\not=0$ for any $1\leq i \leq h$.
Let $w_1=(q-1)(q^{m-1}-(h+1)q^{r-1})$ and $w_2=(q-1)q^{m-1}-hq^{r}$. Then the above discussion indicates that $A_{w_1}=|\{a\in \fqm: \Tr_{q^{r}}^{q^{m}}(a)\not=0\}|=q^{m}-q^{m-r}>0$ and $A_{w_2}>0$ if $\Theta_2$ is nonempty. If $h+1\leq q$, then we have $w_1\leq w_2$ which means that $d=w_1>0$ due to $q^{m-r}>h+1$. If $h+1> q$, then we have $w_1>w_2$ and consequently, $d=w_2>0$ since $\Theta_2 \ne \emptyset$ and $h<(q-1)q^{m-r-1}$. This also shows that the dimension of $\C_{D}$ is equal to $m$.
According to the Griesmer bound, for $h+1\leq q$, one obtains
\begin{align*}
g(m,d)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-(h+1)q^{r-1})}{q^i}\rceil
=q^{m}-(h+1)q^{r},
\end{align*}
which implies that $\C_{D}$ is a Griesmer code.
For the case $h+1> q$, we have
\begin{align*}
g(m,d)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)q^{m-1}-hq^{r}}{q^i}\rceil
=q^m-1-hq\frac{q^r-1}{q-1}-\sum_{i=0}^{m-r-1}\lfloor \frac{h}{q^i}\rfloor
\end{align*}
and
\begin{align*}
g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)q^{m-1}-hq^{r}+1}{q^i}\rceil
=q^m+r-1-hq\frac{q^r-1}{q-1}-\sum_{i=r}^{m-1}\lfloor \frac{hq^r-1}{q^i}\rfloor.
\end{align*}
Note that $n>g(m,d)$ when $h+1>q$ since $n-g(m,d)=hq\frac{q^r-1}{q-1}+\sum_{i=0}^{m-r-1}\lfloor \frac{h}{q^i}\rfloor-(h+1)q^r+1=h(q+\cdots+q^{r-1})-q^r+\sum_{i=0}^{m-r-1}\lfloor \frac{h}{q^i}\rfloor+1>0$.
Thus, when $h+1> q$, $\C_{D}$ is distance-optimal if
$(h+1)q^r+r>1+hq\frac{q^r-1}{q-1}+\sum_{i=r}^{m-1}\lfloor \frac{hq^r-1}{q^i}\rfloor$.
The self-orthogonality and minimality of $\C_{D}$ can be readily verified by Lemmas \ref{self-orthogonalF}, \ref{self-orthogonalD-} and \ref{minimal}. This completes the proof.
\end{proof}
\begin{remark}
The code in Theorem \ref{optimalcode-con-3} is a Griesmer code if $h+1\leq q$. When $h=0$, it has the same parameters and weight distribution as the code in Theorem \ref{optimalcode-con-1-1}, and when $1\leq h\leq q-1$ its parameters differ from those of the Solomon and Stiffler codes in the nonprojective case.
\end{remark}
\begin{remark} \label{remark-7}
The condition $\Theta_2\ne \emptyset$ for $h+1>q$ can be easily satisfied. For example, let $a\in\fqm^{*}$ with $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\theta_{1},\theta_{2},\cdots,\theta_{h} \in \Lambda$ satisfy $\theta_{i}\notin \theta_{j}+\fqr$ for any $1\leq i<j \leq h$, where $\Lambda=\{\theta\in \fqm:\Tr_{q}^{q^{m}}(a\theta)\not=0\}$. If $(q-1)q^{m-1}-hq^{r}\geq 0 $, namely, $h\leq (q-1)q^{m-r-1}$, then there must exist $\theta_{i}$'s such that $\Theta_2\ne \emptyset$ since $|\Lambda|=(q-1)q^{m-1}$.
In particular, for $h=2$ (the case $h+1>q$ then forces $q=2$), $|\Theta_2|=\tau q^{m-r-2}>0$ always holds due to $\tau\geq 1$ (see Theorem \ref{optimalcode-con-3-2} below).
\end{remark}
\begin{example}
Let $q=4$, $m=6$, $r=2$, $\theta_{1}=\alpha$, $\theta_{2}=\alpha^2$ and $\theta_{3}=\alpha^3$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[4032,6,3024]$ linear code with the weight enumerator
$1+3948z^{3024}+108z^{3040}+36z^{3056}+3z^{3072}$, which is consistent with our result in Theorem \ref{optimalcode-con-3}. This code is a Griesmer code and it is also self-orthogonal and minimal.
\end{example}
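The Griesmer bound computation in the proof is easy to replay numerically. The following sketch (plain Python, not part of the construction; it uses an exact integer ceiling to avoid floating-point error) confirms that $g(6,3024)=4032$ equals the length of the code in the example above for $q=4$, $r=2$, $h=3$.

```python
def griesmer(q, k, d):
    # g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i), with exact integer ceiling
    return sum(-(-d // q**i) for i in range(k))

q, m, r, h = 4, 6, 2, 3
n = q**m - (h + 1) * q**r                          # length of C_D
d = (q - 1) * (q**(m - 1) - (h + 1) * q**(r - 1))  # minimum distance
print(n, d, griesmer(q, m, d))                     # 4032 3024 4032
```

Since $g(m,d)=n$, the code meets the Griesmer bound with equality, as claimed.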
In what follows, we determine the weight distributions of $\C_{D}$ in Theorem \ref{optimalcode-con-3} for $h=1$ and $h=2$ respectively.
\begin{thm} \label{optimalcode-con-3-1}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D3} with $h=1$ and $q^{m-r}>2$.
Then $\C_{D}$ is a $[q^m-2q^r,m,(q-1)(q^{m-1}-2q^{r-1})]$ Griesmer code with the following weight distribution:
\begin{center}
\begin{tabular}{lll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)q^{m-1}$ & $q^{m-r-1}-1$ \\
$(q-1)q^{m-1}-q^{r}$ & $(q-1)q^{m-r-1}$ \\
$(q-1)(q^{m-1}-2q^{r-1})$ & $q^{m-r}(q^{r}-1)$ \\
\hline
\end{tabular}
\end{center}
\end{thm}
\begin{proof}
According to the proof of Theorem \ref{optimalcode-con-3}, one can get the parameters of $\C_{D}$ and conclude that the nonzero weights of $\C_{D}$ take values from $\{w_1:=(q-1)(q^{m-1}-2q^{r-1}), w_2:=(q-1)q^{m-1}-q^{r}, w_3:=(q-1)q^{m-1} \}$ with $A_{w_1}=q^m-q^{m-r}$ if $h=1$. Using the Pless Power Moments (see \cite{HWPV}, page 259) and the fact $A_{1}^{\perp}=0$ since $0 \notin D$ gives
$$\left\{\begin{array}{lll}
A_{w_{1}}+A_{w_{2}}+A_{w_{3}}=q^{m}-1,\\
w_{1}A_{w_{1}}+w_{2}A_{w_{2}}+w_{3}A_{w_{3}}=q^{m-1}(q-1)n,
\end{array}\right.$$
which leads to $A_{w_2}=(q-1)q^{m-r-1}$ and $A_{w_3}=q^{m-r-1}-1$. That $\C_{D}$ is a Griesmer code follows from Theorem \ref{optimalcode-con-3}. This completes the proof.
\end{proof}
\begin{example}
Let $q=2$, $m=6$, $r=2$ and $\theta_{1}=\alpha$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[56,6,28]$ binary linear code with the weight enumerator $1+56z^{28}+7z^{32}$, which is consistent with our result in Theorem \ref{optimalcode-con-3-1}. This code is a Griesmer code and distance-optimal due to \cite{GMB}.
\end{example}
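For small binary cases, the weight enumerator can also be recomputed by brute force without Magma. The sketch below (plain Python; realizing $\mathbb{F}_{2^6}$ via the primitive polynomial $x^6+x+1$ is an implementation choice, not fixed by the example) rebuilds the $[56,6,28]$ code of the preceding example with $\theta_{1}=\alpha$ and recovers its weight enumerator $1+56z^{28}+7z^{32}$.

```python
from collections import Counter

POLY, DEG = 0b1000011, 6          # x^6 + x + 1, primitive over GF(2)

def mul(a, b):
    # carryless multiplication in GF(2^6) modulo the chosen primitive polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> DEG:
            a ^= POLY
    return r

def tr(x):
    # absolute trace Tr: GF(2^6) -> GF(2), Tr(x) = x + x^2 + ... + x^32
    t, y = 0, x
    for _ in range(DEG):
        t ^= y
        y = mul(y, y)
    return t                      # 0 or 1

alpha = 0b10                                    # the class of x, primitive
F4 = {x for x in range(64) if mul(mul(x, x), mul(x, x)) == x}  # x^4 = x
omega = F4 | {alpha ^ t for t in F4}            # (0 + F_4) U (alpha + F_4)
D = [x for x in range(64) if x not in omega]

# weight of c_a is the number of x in D with Tr(ax) = 1
weights = Counter(sum(tr(mul(a, x)) for x in D) for a in range(1, 64))
print(len(D), sorted(weights.items()))          # 56 [(28, 56), (32, 7)]
```

The two theorem weights $(q-1)(q^{m-1}-2q^{r-1})$ and $(q-1)q^{m-1}-q^{r}$ coincide at $28$ here, which is why only two nonzero weights appear.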
\begin{example}
Let $q=3$, $m=4$, $r=2$ and $\theta_{1}=\alpha$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[63,4,42]$ linear code with the weight enumerator $1+72z^{42}+6z^{45}+2z^{54}$, which is consistent with our result in Theorem \ref{optimalcode-con-3-1}. This code is a Griesmer code and distance-optimal due to \cite{GMB}.
\end{example}
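The two-moment argument in the proof of Theorem \ref{optimalcode-con-3-1} can also be checked numerically. The following sketch (plain Python, assuming only the parameters of the preceding example with $q=3$, $m=4$, $r=2$) solves the $2\times 2$ system given by the first two Pless Power Moments and recovers $A_{w_2}=6$ and $A_{w_3}=2$.

```python
from fractions import Fraction

q, m, r = 3, 4, 2
n  = q**m - 2 * q**r                          # 63
w1 = (q - 1) * (q**(m - 1) - 2 * q**(r - 1))  # 42
w2 = (q - 1) * q**(m - 1) - q**r              # 45
w3 = (q - 1) * q**(m - 1)                     # 54
A1 = q**m - q**(m - r)                        # 72, known from the proof

# First two Pless Power Moments (A_1^perp = 0 since 0 is not in D):
#   A1 + A2 + A3          = q^m - 1
#   w1*A1 + w2*A2 + w3*A3 = q^(m-1) * (q-1) * n
s0 = q**m - 1 - A1
s1 = q**(m - 1) * (q - 1) * n - w1 * A1
A2 = Fraction(w3 * s0 - s1, w3 - w2)
A3 = s0 - A2
print(A2, A3)                                 # 6 2
```

This matches the weight enumerator $1+72z^{42}+6z^{45}+2z^{54}$ reported in the example.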
\begin{thm} \label{optimalcode-con-3-2}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D3} with $h=2$. Let $\tau=|\{(u,v)\in \fq^2: u\theta_{1}+v\theta_{2}\in \fqr \}|$ and $m, r$ be positive integers satisfying $m>r+2$ when $q=2$ and $m>r+1$ when $q=3$. Then the weight distribution of $\C_{D}$ is given by
\begin{center}
\begin{tabular}{lll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)(q^{m-1}-3q^{r-1})$ & $q^{m-r}(q^{r}-1)$ \\
$(q-1)q^{m-1}$ & $\tau q^{m-r-2}-1$ \\
$(q-1)q^{m-1}-2q^{r}$ & $(q^{2}-2q+\tau)q^{m-r-2}$ \\
$(q-1)q^{m-1}-q^{r}$ & $2(q-\tau)q^{m-r-2}$ \\
\hline
\end{tabular}
\end{center}
Moreover, $\C_{D}$ is a Griesmer code when $q>2$ and for $q=2$ it is distance-optimal if $r=1$.
\end{thm}
\begin{proof}
For $h=2$, by Theorem \ref{optimalcode-con-3}, the nonzero weight $wt(c_{a})$ of $c_{a}\in\C_{D}$ belongs to $\{w_{1}:=(q-1)(q^{m-1}-3q^{r-1}), w_{2}:=(q-1)q^{m-1}, w_{3}:=(q-1)q^{m-1}-2q^{r}, w_{4}:=(q-1)q^{m-1}-q^{r}\}$. Moreover, similar to the proof of Theorem \ref{optimalcode-con-3}, for $a\in \fqm^*$, one can obtain
\begin{align*}
wt(c_{a})=\left\{\begin{array}{ll}
(q-1)(q^{m-1}-3q^{r-1}), & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)\not=0$}, \\
(q-1)q^{m-1},& \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\Tr_{q}^{q^{m}}(a\theta_{1})=0$ and $\Tr_{q}^{q^{m}}(a\theta_{2})=0$}, \\
(q-1)q^{m-1}-2q^{r},& \mbox{if $\Tr_{q^{r}}^{q^{m}}(a)=0$ and $\Tr_{q}^{q^{m}}(a\theta_{1})\not=0$ and $\Tr_{q}^{q^{m}}(a\theta_{2})\not=0$}, \\
(q-1)q^{m-1}-q^{r}, & \mbox{otherwise}
\end{array}\right.
\end{align*}
and consequently one gets $A_{w_{1}}=q^m-q^{m-r}$ and
\begin{align*}
1+A_{w_{2}}=&|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(x)=\Tr_{q}^{q^{m}}(\theta_{1}x)=\Tr_{q}^{q^{m}}(\theta_{2}x)=0\}|\\
=&\frac{1}{q^r}|\{x\in \fqm: \Tr_{q}^{q^{m}}(\theta_{1}(x^{q^r}-x))=\Tr_{q}^{q^{m}}(\theta_{2}(x^{q^r}-x))=0\}|\\
=&\frac{1}{q^{r+2}}\sum_{x\in \fqm}\sum_{u\in \fq}\chi(u\Tr_{q}^{q^{m}}(\theta_{1}(x^{q^r}-x)))
\sum_{v\in \fq}\chi(v\Tr_{q}^{q^{m}}(\theta_{2}(x^{q^r}-x)))\\
=&\frac{1}{q^{r+2}}\sum_{u\in \fq}\sum_{v\in \fq}\sum_{x\in \fqm}
\chi(\Tr_{q}^{q^{m}}((u\theta_{1}+v\theta_{2})(x^{q^r}-x)))\\
=&\frac{1}{q^{r+2}}\sum_{u\in \fq}\sum_{v\in \fq}\sum_{x\in \fqm}
\chi(\Tr_{q}^{q^{m}}(((u\theta_{1}+v\theta_{2})-(u\theta_{1}+v\theta_{2})^{q^r})x))\\
=&\tau q^{m-r-2}
\end{align*}
by using Lemma \ref{lem-trace}.
Then, the weight distribution of $\C_{D}$ follows from the first two Pless Power Moments as we did before.
Theorem \ref{optimalcode-con-3} implies that $\C_{D}$ is a Griesmer code when $q\geq3$. For $q=2$, we have
\[g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{2^{m-1}-2^{r+1}+1}{2^i}\rceil
=2^m-2^{r+2}+r+2,
\]
which implies that $\C_{D}$ for $q=2$ is distance-optimal if $r=1$ since $g(m,d+1)-n=r+2-2^r>0$ holds if and only if $r=1$.
This completes the proof.
\end{proof}
\begin{remark}
Note that $\tau\geq 1$ since $(0,0)\in \{(u,v)\in \fq^2: u\theta_{1}+v\theta_{2}\in \fqr \}$ and $\C_{D}$ in Theorem \ref{optimalcode-con-3-2} is reduced to $3$-weight if $q=3$. In particular, we have $\tau=1$ if $q=2$ by the definition of $\theta_{i}$'s.
\end{remark}
\begin{example}
Let $q=2$, $m=6$, $r=1$, $\theta_{1}=\alpha$ and $\theta_{2}=\alpha^2$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[58,6,28]$ binary linear code with the weight enumerator $1+8z^{28}+32z^{29}+16z^{30}+7z^{32}$, which is consistent with our result in Theorem \ref{optimalcode-con-3-2}. This code is distance-optimal due to \cite{GMB}.
\end{example}
\begin{example}
Let $q=3$, $m=4$, $r=1$, $\theta_{1}=\alpha$ and $\theta_{2}=\alpha^2$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[72,4,48]$ linear code with the weight enumerator $1+66z^{48}+12z^{51}+2z^{54}$, which is consistent with our result in Theorem \ref{optimalcode-con-3-2}. This code is a Griesmer code and it is distance-optimal due to \cite{GMB}.
\end{example}
\section{The third family of optimal linear codes} \label{section-5}
In this section, we study the linear codes $\C_{D}$ of the form \eqref{CD} for
\begin{eqnarray}\label{CD1-D4}
D= \fqm \backslash \Omega_3,\;\;\Omega_3=\cup_{i=1}^{h}(\theta_{i}*\fqr),
\end{eqnarray}
where $m>1$, $r<m$ are positive integers satisfying $r|m$ and $\theta_{i}\in \fqm^{*}$ for $1\leq i \leq h$ satisfying $\theta_{i}/\theta_{j}\notin\fqr$ for any $1\leq i<j \leq h$. For simplicity, define
\begin{eqnarray*}
\Theta_3=\{a\in \fqm:\Tr_{q^{r}}^{q^{m}}(a\theta_{i})\not=0\,\,{\rm for\,\,any\,\,} 1\leq i \leq h\}.
\end{eqnarray*}
\begin{thm} \label{optimalcode-con-4}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D4}. If $h<q^{m-r}$ and $\Theta_3\ne \emptyset$, then\\
$1).$ $\C_{D}$ is a $[q^m-hq^r+h-1,m,(q-1)(q^{m-1}-hq^{r-1})]$ linear code;\\
$2).$ $\C_{D}$ is at most $(h+1)$-weight and its weights take values from
\[\{(q-1)(q^{m-1}-iq^{r-1}): i=0,1,2,\cdots,h\};\]
$3).$ $\C_{D}$ is a Griesmer code if and only if $h=1$ and it is a near Griesmer code if and only if $h=2$ or $(q,h)=(2,3)$. When $h>1$, $\C_{D}$ is distance-optimal if $r>\sum_{i=r}^{m-1}\lfloor \frac{h(q-1)q^{r-1}-1}{q^i}\rfloor$;\\
$4).$ $\C_{D}$ is self-orthogonal if $(q,r)\notin \{(2,1),(2,2),(3,1)\}$;\\
$5).$ $\C_{D}$ is minimal if $q^{m-r-1}> h$.
\end{thm}
\begin{proof}
The length of $\C_{D}$ is $n=|D|=q^m-hq^r+h-1$ since $(\theta_{i}*\fqr)\cap(\theta_{j}*\fqr)=\{0\}$ due to $\theta_{i}/\theta_{j}\notin\fqr$ for any $1\leq i<j \leq h$. For $a\in \fqm^*$, the Hamming weight $wt(c_{a})$ of the codeword $c_{a}$ in $\C_{D}$ is $n-N_{a}$, where $N_{a}=|\{x\in \fqm\backslash \Omega_3: \Tr_{q}^{q^{m}}(ax) = 0\}|$. Using the orthogonal property of nontrivial additive characters leads to
\begin{align*}
N_{a}=&\frac{1}{q}\sum_{x\in \fqm\backslash \Omega_3}\sum_{u\in \fq} \chi(u\Tr_{q}^{q^{m}}(ax))\\
=&\frac{1}{q}\sum_{u\in \fq}(\sum_{x\in \fqm} \chi(u\Tr_{q}^{q^{m}}(ax))
-\sum_{i=1}^{h}\sum_{x\in \theta_{i}*\fqr} \chi(u\Tr_{q}^{q^{m}}(ax)))+h-1\\
=&\frac{1}{q}(q^m-\sum_{i=1}^{h}\sum_{u\in \fq}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a\theta_{i}x)))+h-1,
\end{align*}
which leads to
\[wt(c_{a})=(q-1)q^{m-1}-hq^r+\frac{1}{q}\sum_{i=1}^{h}\sum_{u\in \fq}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a\theta_{i}x)).\]
Note that
\begin{align*}
\frac{1}{q}\sum_{u\in \fq}\sum_{x\in \fqr} \chi(u\Tr_{q}^{q^{m}}(a\theta x))=\left\{\begin{array}{ll}
q^{r}, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a\theta)=0$}, \\
q^{r-1}, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a\theta)\ne0$}
\end{array}\right.
\end{align*}
holds for any $\theta\in \fqm^{*}$. Thus, for $a\in\fqm^*$, one can conclude that $wt(c_{a})$ takes values from
\[\left\{(q-1)q^{m-1},(q-1)(q^{m-1}-q^{r-1}),\cdots,(q-1)(q^{m-1}-hq^{r-1})\right\}\]
and $wt(c_{a})=(q-1)(q^{m-1}-hq^{r-1})$ if and only if $\Tr_{q^{r}}^{q^{m}}(a\theta_{i})\not=0$ for any $1\leq i \leq h$.
Thus, $\C_{D}$ is at most $(h+1)$-weight.
Since $\Theta_{3}\ne \emptyset$, we have $d=(q-1)(q^{m-1}-hq^{r-1})>0$ due to $q^{m-r}>h$. This implies that the dimension of $\C_{D}$ is $m$. According to the Griesmer bound, we have
\begin{align*}
g(m,d)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-hq^{r-1})}{q^i}\rceil
=q^{m}-hq^{r}+h-1-\sum_{i=1}^{m-r}\lfloor \frac{h(q-1)}{q^i}\rfloor.
\end{align*}
It can be readily verified that $n-g(m,d)=0$ if and only if $h=1$ and $n-g(m,d)=1$ if and only if $h=2$ or $(q,h)=(2,3)$.
When $h>1$, we have
\begin{align*}
g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-hq^{r-1})+1}{q^i}\rceil
=n+r-\sum_{i=r}^{m-1}\lfloor \frac{h(q-1)q^{r-1}-1}{q^i}\rfloor.
\end{align*}
Therefore, $\C_{D}$ is distance-optimal if $r>\sum_{i=r}^{m-1}\lfloor \frac{h(q-1)q^{r-1}-1}{q^i}\rfloor$.
Then the proof is completed due to Lemmas \ref{self-orthogonalF}, \ref{self-orthogonalD-} and \ref{minimal}.
\end{proof}
\begin{remark}
The condition $\Theta_3\ne \emptyset$ always holds for $h< q^{r}$ since $|\Theta_3|\geq q^m-hq^{m-r}>0$ due to the fact that $|\{a\in \fqm: \Tr_{q^{r}}^{q^{m}}(a\theta_{i})=0\}|=q^{m-r}$ for any $\theta_{i}\ne 0$. Moreover, similar to the discussion in Remark \ref{remark-7}, we conclude that there must exist $\theta_{i}$'s such that $|\Theta_3|>0$ for $h\leq q^{m-r}$ since $\theta_{i}/\theta_{j}\notin\fqr$ for any $1\leq i<j \leq h$. In addition, if the $\theta_{i}$'s are linearly independent over $\fqr$ for $1\leq i \leq h$ (which implies $h\leq m/r$), then by the property of trace functions, for any $(v_{1},\cdots,v_{h})\in \fqr^{h}$, we have $|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(x\theta_{i})=-v_{i}\,\,{\rm for\,\,all\,\,} 1\leq i \leq h\}|=q^{m-hr}$, which indicates that $|\Theta_3|=(q^{r}-1)^{h}q^{m-hr}>0$.
\end{remark}
\begin{example}
Let $q=2$, $m=12$, $r=3$, $\theta_{1}=1$, $\theta_{2}=\alpha$, $\theta_{3}=\alpha^2$ and $\theta_{4}=\alpha^3$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[4067,12,2032]$ binary linear code with the weight enumerator $1+2401z^{2032}+1372z^{2036}+294z^{2040}+28z^{2044}$, which is consistent with our result in Theorem \ref{optimalcode-con-4}. This code is distance-optimal due to the Griesmer bound and it is also self-orthogonal and minimal.
\end{example}
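Distance-optimality in the example can be confirmed directly from the Griesmer bound: the bound already rules out a putative $[4067,12,2033]$ binary code. A minimal numeric sketch (plain Python, not part of the proof):

```python
def griesmer(q, k, d):
    # g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i), with exact integer ceiling
    return sum(-(-d // q**i) for i in range(k))

q, m, r, h = 2, 12, 3, 4
n = q**m - h * q**r + h - 1                    # 4067
d = (q - 1) * (q**(m - 1) - h * q**(r - 1))    # 2032
print(n - griesmer(q, m, d))                   # 3
print(griesmer(q, m, d + 1))                   # 4069
# g(12, 2033) = 4069 > 4067, so no [4067, 12, 2033] binary code exists
```

Here $n-g(m,d)=3$, so the code is neither Griesmer nor near Griesmer, yet $g(m,d+1)>n$ shows it is distance-optimal, consistent with criterion $3)$ of Theorem \ref{optimalcode-con-4}.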
The Griesmer code $\C_{D}$ in Theorem \ref{optimalcode-con-4} for $h=1$ has the same parameters and weight distribution as the one in Theorem \ref{optimalcode-con-1-1}. In the following, we determine the weight distribution of $\C_{D}$ in Theorem \ref{optimalcode-con-4} for $h=2$.
\begin{thm} \label{optimalcode-con-4-1}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D4} with $h=2$ and $q^{m-r}>2$. Then $\C_{D}$ is a $3$-weight $[q^m-2q^r+1,m,(q-1)(q^{m-1}-2q^{r-1})]$ linear code with the following weight distribution:
\begin{center}
\begin{tabular}{lll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)q^{m-1}$ & $q^{m-2r}-1$ \\
$(q-1)(q^{m-1}-2q^{r-1})$ & $q^m-2q^{m-r}+q^{m-2r}$ \\
$(q-1)(q^{m-1}-q^{r-1})$ & $2(q^{m-r}-q^{m-2r})$ \\
\hline
\end{tabular}
\end{center}
Moreover, $\C_{D}$ is a near Griesmer code and it is distance-optimal if
$r+\lfloor \frac{2}{q}\rfloor>1$.
\end{thm}
\begin{proof}
According to the proof of Theorem \ref{optimalcode-con-4}, one can get the parameters of $\C_{D}$ and claim that the weight of $c_{a}\in \C_{D}$ for $a\in\fqm^*$ is
\begin{align}\label{thm10-equ-1}
wt(c_{a})=\left\{\begin{array}{ll}
w_{1}:=(q-1)q^{m-1}, & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a\theta_{1})=0$ and $\Tr_{q^{r}}^{q^{m}}(a\theta_{2})=0$}, \\
w_{2}:=(q-1)(q^{m-1}-2q^{r-1}), & \mbox{if $\Tr_{q^{r}}^{q^{m}}(a\theta_{1})\ne0$ and $\Tr_{q^{r}}^{q^{m}}(a\theta_{2})\ne0$}, \\
w_{3}:=(q-1)(q^{m-1}-q^{r-1}), & \mbox{otherwise}.
\end{array}\right.
\end{align}
By Lemma \ref{lem-trace}, $\theta_{2}/\theta_{1}\notin \fqr$ and \eqref{thm10-equ-1}, one obtains
\begin{align*}
1+A_{w_{1}}
=&|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(x)=0\,\,
{\rm and}\,\,\Tr_{q^{r}}^{q^{m}}(x\theta_{2}/\theta_{1})=0\}| \\
=&\frac{1}{q^r}|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(\theta_{2}/\theta_{1}(x^{q^r}-x))=0\}|\\
=&\frac{1}{q^r}|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}((\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r})x)=0\}|\\
=&q^{m-2r}.
\end{align*}
Then, the weight distribution of $\C_{D}$ follows from the first two Pless Power Moments.
$\C_{D}$ is a near Griesmer code due to Theorem \ref{optimalcode-con-4}. By the Griesmer bound, we have
\begin{align*}
g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-2q^{r-1})+1}{q^i}\rceil
=\left\{\begin{array}{ll}
q^{m}-2q^{r}+r+1, & \mbox{if $q=2$}, \\
q^{m}-2q^{r}+r, & \mbox{if $q>2$} \\
\end{array}\right.
\end{align*}
and then $\C_{D}$ is distance-optimal if
$r+\lfloor \frac{2}{q}\rfloor>1$.
This completes the proof.
\end{proof}
\begin{example}
Let $q=2$, $m=6$, $r=2$, $\theta_{1}=1$ and $\theta_{2}=\alpha$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[57,6,28]$ binary linear code with the weight enumerator $1+36z^{28}+24z^{30}+3z^{32}$, which is consistent with our result in Theorem \ref{optimalcode-con-4-1}. This code is distance-optimal due to \cite{GMB}.
\end{example}
\begin{example}
Let $q=3$, $m=4$, $r=2$, $\theta_{1}=1$ and $\theta_{2}=\alpha$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[64,4,42]$ linear code with the weight enumerator $1+64z^{42}+16z^{48}$, which is consistent with our result in Theorem \ref{optimalcode-con-4-1}. This code is distance-optimal due to \cite{GMB}.
\end{example}
The weight distribution of $\C_{D}$ in Theorem \ref{optimalcode-con-4} can be determined for $h=3$ as follows.
\begin{thm} \label{optimalcode-con-4-2}
Let $\C_{D}$ be defined by \eqref{CD} and \eqref{CD1-D4} with $h=3$ and $q^{m-r}>3$. Then $\C_{D}$ is a $[q^m-3q^r+2,m,(q-1)(q^{m-1}-3q^{r-1})]$ linear code with the following weight distribution
\begin{center}
\begin{tabular}{lll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)q^{m-1}$ & $q^{m-2r}-1$ \\
$(q-1)(q^{m-1}-2q^{r-1})$ & $3q^{m-2r}(q^{r}-1)$ \\
$(q-1)(q^{m-1}-3q^{r-1})$ & $q^m-3q^{m-r}+2q^{m-2r}$ \\
\hline
\end{tabular}
\end{center}
if $\frac{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}}{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}} \in \fqr$, and for $\frac{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}}{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}} \not\in \fqr$, its weight distribution is given by
\begin{center}
\begin{tabular}{lll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)q^{m-1}$ & $q^{m-3r}-1$ \\
$(q-1)(q^{m-1}-q^{r-1})$ & $3(q^r-1)q^{m-3r}$ \\
$(q-1)(q^{m-1}-2q^{r-1})$ & $3q^{m-r}-6q^{m-2r}+3q^{m-3r}$ \\
$(q-1)(q^{m-1}-3q^{r-1})$ & $q^m-3q^{m-r}+3q^{m-2r}-q^{m-3r}$ \\
\hline
\end{tabular}
\end{center}
Moreover, $\C_{D}$ is a near Griesmer code when $q=2$, and it is distance-optimal if $r>1$ (resp. $r>2$) when $q=2,\,3$ (resp. $q>3$).
\end{thm}
\begin{proof}
According to the proof of Theorem \ref{optimalcode-con-4}, one can obtain the parameters of $\C_{D}$ and conclude that $\C_{D}$ has the following four possible nonzero weights:
$w_{1}=(q-1)q^{m-1}$, $w_{2}=(q-1)(q^{m-1}-q^{r-1})$, $w_{3}=(q-1)(q^{m-1}-2q^{r-1})$ and $w_{4}=(q-1)(q^{m-1}-3q^{r-1})$.
Further, $wt(c_{a})=w_{1}$ if and only if $\Tr_{q^{r}}^{q^{m}}(a\theta_{1})=\Tr_{q^{r}}^{q^{m}}(a\theta_{2})=\Tr_{q^{r}}^{q^{m}}(a\theta_{3})=0$, and $wt(c_{a})=w_{2}$ if and only if exactly one value in the set $$\{\Tr_{q^{r}}^{q^{m}}(a\theta_{1}),\Tr_{q^{r}}^{q^{m}}(a\theta_{2}),\Tr_{q^{r}}^{q^{m}}(a\theta_{3})\}$$ is nonzero.
Similar to the calculation of $A_{w_{1}}$ in Theorem \ref{optimalcode-con-4-1}, for $h=3$, one can derive
\begin{align*}
A_{w_{1}}=&\left\{\begin{array}{ll}
q^{m-2r}-1, & \mbox{if $\frac{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}}{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}} \in \fqr$}, \\
q^{m-3r}-1, & \mbox{otherwise} \\
\end{array}\right.
\end{align*}
in the same manner.
Next, we calculate $A_{w_{2}}$ for $h=3$. Assume that $\{i,j,k\}=\{1,2,3\}$. Since $\theta_{k}/\theta_{i},\,\, \theta_{j}/\theta_{i} \notin \fqr$, then by employing the same technique, we have
\begin{align*}
&|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(x\theta_{i})=0\,\,
{\rm and}\,\,\Tr_{q^{r}}^{q^{m}}(x\theta_{j})=0\,\,{\rm and}\,\,\Tr_{q^{r}}^{q^{m}}(x\theta_{k})\ne 0\}| \\
=&\frac{1}{q^{2r}}|\{x\in \fqm: \Tr_{q^{r}}^{q^{m}}(\frac{\theta_{k}/\theta_{i}-(\theta_{k}/\theta_{i})^{q^r}}{\theta_{j}/\theta_{i}-(\theta_{j}/\theta_{i})^{q^r}}
(x^{q^r}-x))\ne 0\}|\\
=&\left\{\begin{array}{ll}
0, & \mbox{if $\frac{\theta_{k}/\theta_{i}-(\theta_{k}/\theta_{i})^{q^r}}{\theta_{j}/\theta_{i}-(\theta_{j}/\theta_{i})^{q^r}} \in \fqr$}, \\
(q^r-1)q^{m-3r}, & \mbox{otherwise}
\end{array}\right.
\end{align*}
which implies that
\begin{align*}
A_{w_{2}}=\left\{\begin{array}{ll}
0, & \mbox{if $\frac{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}}{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}} \in \fqr$}, \\
3(q^r-1)q^{m-3r}, & \mbox{otherwise}
\end{array}\right.
\end{align*}
since the three conditions $\delta_{1}:=\frac{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}}{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}} \in \fqr$, $\frac{\theta_{2}/\theta_{1}-(\theta_{2}/\theta_{1})^{q^r}}{\theta_{3}/\theta_{1}-(\theta_{3}/\theta_{1})^{q^r}} \in \fqr$ and $\delta_{2}:=\frac{\theta_{1}/\theta_{2}-(\theta_{1}/\theta_{2})^{q^r}}{\theta_{3}/\theta_{2}-(\theta_{3}/\theta_{2})^{q^r}} \in \fqr$ hold simultaneously, because $\delta_{1}\in \fqr$ if and only if $\delta_{2}\in \fqr$. Note that $\delta_{1}\ne 0$ and $\delta_{2} \ne 0$ since $\theta_{i}/\theta_{j}\notin\fqr$ for any $1\leq i<j \leq 3$, and it can be readily verified that $\delta_{2}(\theta_{3}/\theta_{1}-(\theta_{2}/\theta_{1})\delta_{1})=1$. If $\delta_{1}\in \fqr$, then $(\frac{1}{\delta_{2}})^{q^{r}}-\frac{1}{\delta_{2}}=(\theta_{3}/\theta_{1})^{q^{r}}-\theta_{3}/\theta_{1}-((\theta_{2}/\theta_{1})^{q^{r}}-\theta_{2}/\theta_{1})\delta_{1}=0$, which implies $\delta_{2}\in \fqr$; similarly, $\delta_{2}\in \fqr$ implies $\delta_{1}\in \fqr$.
Then, the values of $A_{w_{3}}$ and $A_{w_{4}}$ follow from the first two Pless Power Moments.
By Theorem \ref{optimalcode-con-4}, $\C_{D}$ is a near Griesmer code when $q=2$. A straightforward calculation gives
\begin{align*}
g(m,d+1)=\sum_{i=0}^{m-1} \lceil \frac{(q-1)(q^{m-1}-3q^{r-1})+1}{q^i}\rceil
=\left\{\begin{array}{ll}
q^{m}-3q^{r}+r+1, & \mbox{if $q=2 \,\,{\rm or\,\,}3$}, \\
q^{m}-3q^{r}+r, & \mbox{if $q>3$}. \\
\end{array}\right.
\end{align*}
Thus, when $q=2$ or $q=3$, $\C_{D}$ is distance-optimal if $r>1$ and when $q>3$ it is distance-optimal if $r>2$.
This completes the proof.
\end{proof}
\begin{example}
Let $q=2$, $m=6$, $r=2$, $\theta_{1}=1$, $\theta_{2}=\alpha$ and $\theta_{3}=1+\alpha$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[54,6,26]$ binary linear code with the weight enumerator $1+24z^{26}+36z^{28}+3z^{32}$, which is consistent with our result in Theorem \ref{optimalcode-con-4-2}. This code is a near Griesmer code and it is distance-optimal due to \cite{GMB}.
\end{example}
\begin{example}
Let $q=3$, $m=8$, $r=2$, $\theta_{1}=1$, $\theta_{2}=\alpha$ and $\theta_{3}=\alpha^2$, where $\alpha$ is a primitive element of $\fqm$. Magma experiments show that $\C_{D}$ is a $[6536,8,4356]$ linear code with the weight enumerator $1+4608z^{4356}+1728z^{4362}+216z^{4368}+8z^{4374}$, which is consistent with our result in Theorem \ref{optimalcode-con-4-2}. This code is distance-optimal due to the Griesmer bound.
\end{example}
\section{The fourth family of optimal linear codes} \label{section-6}
In this section, we study the linear codes $\C_{D}$ of the form \eqref{CDbi} with the defining set
\begin{eqnarray}\label{CD2-D2}
D=\{(x,y):x\in \fqm\backslash \fqr, y \in \fqk\backslash \fqs \},
\end{eqnarray}
where $m$, $k$, $r<m$, $s<k$ are positive integers satisfying $r|m$, $s|k$.
\begin{thm} \label{optimalcode-con-2}
Let $\C_{D}$ be defined by \eqref{CDbi} and \eqref{CD2-D2}. If $m+s\geq k+r$ and $q^{m-r}>q^{m-r+s-k}+1$, then \\
$1).$ $\C_{D}$ is a $[(q^{m}-q^{r})(q^{k}-q^{s}),m+k,(q-1)(q^{m+k-1}-q^{m+s-1}-q^{k+r-1})]$ linear code;\\
$2).$ $\C_{D}$ is $4$-weight with the following weight distribution:
\begin{center}
\begin{tabular}{llll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)(q^{m+k-1}-q^{k+r-1})$ & $q^{k-s}-1$ \\
$(q-1)(q^{m+k-1}-q^{m+s-1})$ & $q^{m-r}-1$ \\
$(q-1)(q^{m+k-1}-q^{m+s-1}-q^{k+r-1})$ & $(q^{k-s}-1)(q^{m-r}-1)$ \\
$(q-1)(q^{m+k-1}-q^{m+s-1}-q^{k+r-1}+q^{r+s-1})$ & $q^{m+k}-q^{m+k-r-s}$ \\
\hline
\end{tabular}
\end{center}
$3).$ $\C_{D}$ is distance-optimal if $k+r>q^{r+s}$ (resp. $1+k+r>q^{r+s}$) when $m+s = k+r$ and $q\not=2$ (resp. $m+s \ne k+r$ or $q=2$); \\
$4).$ $\C_{D}$ is self-orthogonal if $(q,r+s)\notin \{(2,1),(2,2),(3,1)\}$;\\
$5).$ $\C_{D}$ is minimal if $q^{m+k}> q^{m+s+1}+q^{k+r}$.
\end{thm}
\begin{proof}
It is obvious that the length of $\C_{D}$ is $n=|D|=(q^{m}-q^{r})(q^{k}-q^{s})$.
For $(a,b)\not =(0,0)$, the Hamming weight $wt(c_{a,b})$ of the codeword $c_{a,b}$ in $\C_{D}$ is $n-N_{a,b}$, where $N_{a,b}=|\{(x,y) \in (\fqm\backslash \fqr) \times (\fqk\backslash \fqs): \Tr_{q}^{q^{m}}(ax)+\Tr_{q}^{q^{k}}(by) = 0\}|$. Using the orthogonal property of nontrivial additive characters gives
\begin{eqnarray*}
N_{a,b}&=&\frac{1}{q}\sum_{x\in \fqm\backslash \fqr}\sum_{y\in \fqk\backslash \fqs}
\sum_{u\in \fq} \chi(u(\Tr_{q}^{q^{m}}(ax)+\Tr_{q}^{q^{k}}(by)))\\
&=&\frac{1}{q}\sum_{u\in \fq} \sum_{x\in \fqm\backslash \fqr}\chi(u\Tr_{q}^{q^{m}}(ax))\sum_{y\in \fqk\backslash \fqs} \chi(u\Tr_{q}^{q^{k}}(by))
\end{eqnarray*}
which can be further expressed as
\begin{align*}
N_{a,b}=&\frac{1}{q}\sum_{u\in \fq}
(\sum_{x\in \fqm}\chi(u\Tr_{q}^{q^{m}}(ax))
\sum_{y\in \fqk}\chi(u\Tr_{q}^{q^{k}}(by))
-\sum_{x\in \fqm}\chi(u\Tr_{q}^{q^{m}}(ax))
\sum_{y\in \fqs}\chi(u\Tr_{q}^{q^{k}}(by))\\
&-\sum_{x\in \fqr}\chi(u\Tr_{q}^{q^{m}}(ax))
\sum_{y\in \fqk}\chi(u\Tr_{q}^{q^{k}}(by))
+\sum_{x\in \fqr}\chi(u\Tr_{q}^{q^{m}}(ax))
\sum_{y\in \fqs}\chi(u\Tr_{q}^{q^{k}}(by))).
\end{align*}
Note that
\[\frac{1}{q}\sum_{u\in \fq}\sum_{x\in \fqm}\chi(u\Tr_{q}^{q^{m}}(ax))\sum_{y\in \fqk}\chi(u\Tr_{q}^{q^{k}}(by))=q^{m+k-1}\]
holds for $(a,b)\not=(0,0)$. Then, it can be readily verified that
\begin{align*}
N_{a,b}=&\left\{\begin{array}{llll}
q^{m+k-1}-q^{m+s}-q^{k+r-1}+q^{r+s}, & \mbox{if}\,\,a=0,\,\,b \not=0,\,\, \Tr_{q^{s}}^{q^{k}}(b)=0, \\
q^{m+k-1}-q^{m+s-1}-q^{k+r}+q^{r+s}, & \mbox{if}\,\, a\not =0,\,\,b =0,\,\,\Tr_{q^{r}}^{q^{m}}(a)=0, \\
q^{m+k-1}-q^{m+s-1}-q^{k+r-1}+q^{r+s}, & \mbox{if}\,\, a\not =0,\,\,b \not=0,\,\,\Tr_{q^{r}}^{q^{m}}(a)=\Tr_{q^{s}}^{q^{k}}(b)=0, \\
q^{m+k-1}-q^{m+s-1}-q^{k+r-1}+q^{r+s-1}, & \mbox{otherwise}.
\end{array}\right.
\end{align*}
Consequently, for $(a,b)\not =(0,0)$, $wt(c_{a,b})$ is equal to
\begin{align*}
&\left\{\begin{array}{llll}
w_1:=(q-1)(q^{m+k-1}-q^{k+r-1}), & \mbox{if}\,\,a=0,\,\,b \not=0,\,\, \Tr_{q^{s}}^{q^{k}}(b)=0, \\
w_2:=(q-1)(q^{m+k-1}-q^{m+s-1}), & \mbox{if}\,\, a\not =0,\,\,b =0,\,\,\Tr_{q^{r}}^{q^{m}}(a)=0, \\
w_3:=(q-1)(q^{m+k-1}-q^{m+s-1}-q^{k+r-1}), & \mbox{if}\,\, a\not =0,\,\,b \not=0,\,\, \Tr_{q^{r}}^{q^{m}}(a)=\Tr_{q^{s}}^{q^{k}}(b)=0, \\
w_4:=(q-1)(q^{m+k-1}-q^{m+s-1}-q^{k+r-1}+q^{r+s-1}), & \mbox{otherwise}.
\end{array}\right.
\end{align*}
Observe that $d=w_{3}=(q-1)q^{k+r-1}(q^{m-r}-q^{m-r+s-k}-1)>0$ since $q^{m-r}>q^{m-r+s-k}+1$.
This shows that the dimension of $\C_{D}$ is equal to $m+k$. According to the balanced property of trace functions, we have
$A_{w_{1}}=q^{k-s}-1$, $A_{w_{2}}=q^{m-r}-1$ and $A_{w_{3}}=(q^{k-s}-1)(q^{m-r}-1)$, which leads to $A_{w_{4}}=q^{m+k}-q^{m+k-r-s}$ due to $A_{w_{1}}+A_{w_{2}}+A_{w_{3}}+A_{w_{4}}=q^{m+k}-1$.
By using the Griesmer bound, one can obtain
\begin{align*}
g(m+k,d)=
\left\{\begin{array}{ll}
q^{m+k}-q^{m+s}-q^{k+r}+1, & \mbox{if}\,\,m+s\not = k+r, \\
q^{m+k}-q^{m+s}-q^{k+r}, & \mbox{if}\,\,m+s = k+r
\end{array}\right.
\end{align*}
and
\begin{align*}
g(m+k,d+1)=
\left\{\begin{array}{ll}
q^{m+k}-q^{m+s}-q^{k+r}+k+r, & \mbox{if}\,\,m+s = k+r \,\,{\rm and}\,\, q\not=2, \\
q^{m+k}-q^{m+s}-q^{k+r}+k+r+1, & \mbox{otherwise}.
\end{array}\right.
\end{align*}
Thus, when $m+s=k+r$ and $q\not=2$, $\C_{D}$ is distance-optimal if $k+r>q^{r+s}$; and when $m+s \ne k+r$ or $q=2$, $\C_{D}$ is distance-optimal if
$1+k+r>q^{r+s}$.
To prove the self-orthogonality of $\C_D$, define $D_{1}=\fqm \times \fqk$, $D_{2}=\fqm \times \fqs$, $D_{3}=\fqr \times \fqk$ and $D_{4}=\fqr \times \fqs$. Let $\C_{D_{1}}$, $\C_{D_{2}}$, $\C_{D_{3}}$ and $\C_{D_{4}}$ be defined by \eqref{CDbi}, then similar to the proof of Lemma \ref{self-orthogonalF}, the linear codes $\C_{D_{1}}$, $\C_{D_{2}}$, $\C_{D_{3}}$ and $\C_{D_{4}}$ are self-orthogonal if $(q,r+s)\notin \{(2,1),(2,2),(3,1)\}$. Note that $D_{1}=D \cup (D_{2}\backslash D_{4})\cup (D_{3}\backslash D_{4})\cup D_{4}$. Thus, by Lemma \ref{self-orthogonalD-}, one can conclude that $\C_{D}$ is self-orthogonal if $(q,r+s)\notin \{(2,1),(2,2),(3,1)\}$.
The minimality of $\C_{D}$ can be easily verified by Lemma \ref{minimal}. This completes the proof.
\end{proof}
\begin{remark}
Note that $\C_{D}$ in Theorem \ref{optimalcode-con-2} is reduced to a $3$-weight linear code if $m+s=k+r$.
\end{remark}
\begin{example}
Let $q=2$, $m=4$, $k=3$ and $r=s=1$. Magma experiments show that $\C_{D}$ is a $[84,7,40]$ binary linear code with the weight enumerator $1+21z^{40}+96z^{42}+7z^{48}+3z^{56}$, which is consistent with our result in Theorem \ref{optimalcode-con-2}. This code is distance-optimal due to \cite{GMB}.
\end{example}
\begin{example}
Let $q=2$, $m=4$, $k=4$ and $r=s=1$. Magma experiments show that $\C_{D}$ is a $[196,8,96]$ binary linear code with the weight enumerator $1+49z^{96}+192z^{98}+14z^{112}$, which is consistent with our result in Theorem \ref{optimalcode-con-2}. This code is distance-optimal due to \cite{GMB}.
\end{example}
Specially, if one takes $r=s=0$ and defines ${\mathbb F}_{q^{0}}=\{0\}$, then good codes can also be obtained as in Theorem \ref{optimalcode-con-2}. The proof is similar to that of Theorem \ref{optimalcode-con-2} and we omit it here.
\begin{thm} \label{optimalcode-3}
Let $m$, $k$ be positive integers with $k\leq m$ and $q^m>q^{m-k}+1$. Let $\C_{D}$ be defined by \eqref{CDbi} and $D= \{(x,y):x \in \fqm^*, y \in \fqk^* \}$. Then \\
$1).$ $\C_{D}$ is a $[q^{m+k}-q^{m}-q^{k}+1,m+k,(q-1)(q^{m+k-1}-q^{m-1}-q^{k-1})]$ linear code;\\
$2).$ $\C_{D}$ is $3$-weight with the following weight distribution:
\begin{center}
\begin{tabular}{llll}
\hline weight $w$ & Multiplicity $A_{w}$\\ \hline
$0$ & $1$ \\
$(q-1)(q^{m+k-1}-q^{k-1})$ & $q^{k}-1$ \\
$(q-1)(q^{m+k-1}-q^{m-1})$ & $q^{m}-1$ \\
$(q-1)(q^{m+k-1}-q^{m-1}-q^{k-1})$ & $q^{m+k}-q^{m}-q^{k}+1$ \\
\hline
\end{tabular}
\end{center}
$3).$ $\C_{D}$ is a Griesmer code when $m\not=k$ and it is a near Griesmer code ${\rm(}$also distance-optimal if $m+\lfloor \frac{2}{q}\rfloor>1$${\rm)}$ when $m=k$;\\
$4).$ $\C_{D}$ is self-orthogonal if $(q,k)\notin \{(2,1),(2,2),(3,1)\}$;\\
$5).$ $\C_{D}$ is minimal if $q^{m+k}> q^{m+1}+q^{k}$.
\end{thm}
\begin{remark}
The linear code $\C_{D}$ in Theorem \ref{optimalcode-3} is reduced to a 2-weight linear code when $m=k$, and the Griesmer code in Theorem \ref{optimalcode-3} has the same parameters as the Solomon and Stiffler codes in the nonprojective case.
\end{remark}
\begin{example}
Let $q=2$, $m=5$, $k=4$. Magma experiments show that $\C_{D}$ is a $[465,9,232]$ binary linear code with the weight enumerator $1+465z^{232}+31z^{240}+15z^{248}$, which is consistent with our result in Theorem \ref{optimalcode-3}. This code is a Griesmer code.
\end{example}
\begin{example}
Let $q=2$, $m=4$, $k=4$. Magma experiments show that $\C_{D}$ is a $[225,8,112]$ binary linear code with the weight enumerator $1+225z^{112}+30z^{120}$, which is consistent with our result in Theorem \ref{optimalcode-3}. This code is a near Griesmer code and it is distance-optimal due to \cite{GMB}.
\end{example}
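Both examples can be checked against the Griesmer bound in a few lines. The sketch below (plain Python, not part of the proof) verifies that the $[465,9,232]$ code meets the bound exactly, while the $[225,8,112]$ code misses it by one and the bound excludes a $[225,8,113]$ binary code.

```python
def griesmer(q, k, d):
    # g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i), with exact integer ceiling
    return sum(-(-d // q**i) for i in range(k))

# [465, 9, 232]: q = 2, m = 5, k = 4
print(griesmer(2, 9, 232))        # 465, so this is a Griesmer code
# [225, 8, 112]: q = 2, m = k = 4
print(225 - griesmer(2, 8, 112))  # 1, so this is a near Griesmer code
print(griesmer(2, 8, 113))        # 229 > 225, so it is distance-optimal
```

This agrees with criterion $3)$ of Theorem \ref{optimalcode-3}: the first code has $m\ne k$, the second has $m=k$ with $m+\lfloor 2/q\rfloor=5>1$.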
\begin{remark}
It is known that equivalent codes have the same parameters and weight distribution, but the converse is not necessarily true, so it is normally difficult to discuss the equivalence of codes. In 2020, several infinite families of optimal binary linear codes of the form
$\C_{P}=\{c_{a}=(a\cdot x)_{x\in P}: a\in \ftwo^{m}\}$
were presented in \cite{JYHLL}, where $P=\ftwo^{m}\setminus \Delta$ and $\Delta$ is a simplicial complex in $\ftwo^{m}$.
Firstly, our results in this paper hold for a prime power $q$; thus our codes are new when $q>2$. Secondly, for $q=2$ and $m=4,5$, when $\Delta$ runs through all simplicial complexes in $\ftwo^{4}$ and $\ftwo^{5}$, computer experiments show that linear codes with new parameters can be produced in Sections \ref{section-4} and \ref{section-5} by comparison with all the linear codes $\C_{P}$ obtained in \cite{JYHLL}. Our codes in Section \ref{section-3} have the same parameters as those in \cite{JYHLL}, and it should be noted that whether the set $\Omega_1=\cup_{i=1}^{h} {\mathbb F}_{2^{r_{i}}}$ in Section \ref{section-3} is a simplicial complex in $\ftwo^{m}$ depends on the selected basis of $\ftwom$ over $\ftwo$. For $q=2$ and the linear codes in Section \ref{section-6}, the parameters of our codes in Theorem \ref{optimalcode-con-2} are the same as those of the codes in
\cite[Theorem V.2(iii)]{JYHLL}, and the parameters of our codes in Theorem \ref{optimalcode-3} are different from those of \cite[Theorem V.2]{JYHLL}.
\end{remark}
\section{Conclusions}
The construction of optimal linear codes is a hard problem.
In this paper, we constructed four families of linear codes over finite fields via the defining set approach, which can produce infinite families of optimal linear codes, including infinite families of (near) Griesmer codes. Using the Griesmer bound, we characterized the optimality of
these four families of linear codes with an explicit computable criterion and obtained infinite families of distance-optimal linear codes. Moreover, we obtained several classes of distance-optimal linear codes with few weights and completely determined their weight distributions. In addition, we investigated the self-orthogonality and minimality of these linear codes and it is shown that most of them are either self-orthogonal or minimal.
\section*{Acknowledgements}
This work was supported in part by the National Natural Science Foundation of China (Nos. 62072162, 61761166010, 12001176, 61702166), the Application Foundation Frontier Project of Wuhan Science and Technology Bureau (No. 2020010601012189) and the National Key Research and Development Project (No. 2018YFA0704702).
\section{Introduction}
\label{sec:intro}
Networks are mathematical representations of interactions among the components of a system and can be modeled by graphs. A graph $G=(V,E)$ consists of a collection of vertices $V$, corresponding to the individual units of the observed system, and a collection of edges $E$, indicating some relation between pairs of vertices.
Graphs modeling real systems, i.e. social, biological, and technological networks, display non-trivial topological features. Indeed they present the properties that define a complex network: strong inhomogeneities, a broad degree distribution and a locally inhomogeneous distribution of edges. In the study of complex networks, a network is said to have a community structure if the vertices can be divided into $g$ groups, such that nodes belonging to the same group are densely connected and the number of edges between nodes of different groups is minimal.
The problem of community detection (graph partitioning) has been widely studied in the last 15 years by researchers in a variety of fields, including statistics, physics, biology, social and computer science. Finding communities within an arbitrary complex network can be a computationally difficult task. The number of communities, if any, within the network is typically unknown and the communities are often of unequal size and/or density. Despite these difficulties, several methods for community finding have been developed and employed with varying levels of success, see \cite{Coscia2011}, \cite{Fortunato2010}, \cite{Goldenberg2010}, \cite{Harenberg2014},
\cite{Kolacyzk2009} and \cite{Porter2009} for reviews.
Our work focuses on the problem of testing the robustness of the partition recovered by a given community detection method. In the following we provide a brief review of the state of the art of the literature addressing this problem. Despite the huge amount of work devoted to community detection and its applications, the question of the significance of results still remains open. Our proposal represents a first attempt to statistically define the robustness of a clustering and hence cannot be directly compared to any of the methodologies described below.
\paragraph{State of the art}
The modularity $Q$ of Newman and Girvan \cite{Newman2004} was the first attempt to give an answer to this question. It is defined as the fraction of the edges that fall within the given groups minus the expected such fraction if edges were distributed at random, and is based on the idea that a random graph is not expected to have a cluster structure. However, as pointed out in \cite{Fortunato2010} and \cite{Karrer2008}, there is an important limit. Precisely, networks with a strong community structure have high modularity; on the contrary, high modularity does not imply that a network has a community structure. Other authors, see \cite{Guimera2004}, \cite{Reichardt2007}, suggested the use of a z-score to compare the maximum modularity of a graph to the maximum attainable modularity in purely random graphs of the same size and expected degree sequence. The problem is that the distribution of the null model, though peaked, is not Gaussian, causing false positives and false negatives.
A different approach was developed in \cite{Massen2006}, where the authors studied how canonical ensembles of network partitions depend on temperature to assess the significance and nature of the community structure obtained by algorithms that optimize the modularity. In this case $-Q$ plays the role of energy, i.e. at temperature $T$, the statistical weight of a given partition in the ensemble is proportional to $\exp(Q/T)$. Typically, as the temperature increases, there is a transition from low entropy/high $Q$ partitions (significant cluster structure) to high entropy/low $Q$ (random partitions). If there is strong community structure, the transition is sharp. The peak is broader for networks with weaker community structure, as there are more reasonable alternative partitions with intermediate values of $Q$, and so the transition occurs over a broader range of temperature. They also introduced an order parameter to measure the similarity of the sampled partitions at a given temperature, i.e. whether there is just a single partition with high $Q$ or a number of competing partitions. Therefore, it is a useful tool to detect false positives. However the methodology is computationally onerous and cannot be easily generalized to other optimization methods.
In \cite{Bianconi2009} the authors introduced the notion of entropy of graph ensembles to assess the relevance of additional information about the nodes of a network using the information that comes from the topology of the network itself. The indicator of clustering significance $\Theta$ they introduce can also reveal statistical regularities that shed light on possible mechanisms underlying the network stability and formation.
In \cite{Lancichinetti2011} the authors presented the Order Statistics Local Optimisation Method ($OSLOM$), a technique based on the local optimization of a fitness function, the C-score \cite{Lancichinetti2010}, expressing the statistical significance of a cluster with respect to random fluctuations. Given a subgraph $C$ in a graph $G$, the C-score measures the probability that the number of links connecting a node to nodes in $C$, where $C$ is embedded within a random graph, is higher than or equal to the value seen in the original graph $G$. This score permits to rank all the vertices external to $C$ (in increasing order of the C-score), having at least one connection with $C$, and to calculate its order statistic distribution $\Omega$. The minimum of $\Omega$ is the random variable whose cumulative is the score of the community $C$. To asses its significance a threshold parameter $P$ is fixed. The procedure is iterated to analyse the full network. The novelty of this approach is the local estimate of the significance, i.e. of single communities, not of partitions; on the contrary a serious limit is due to the lack of a data driven procedure to estimate $P$, indeed the authors fix its value to 0.1.
Recently, \cite{Wilson2014} proposed a testing based community detection procedure called Extraction of Statistically Significant Communities (ESSC). The ESSC procedure measures the statistical significance of connections between a single vertex and a set of vertices in undirected networks under a null distribution derived from the configuration model \cite{Bender1978}. Given an observed network $G_0$ with $n$ vertices and a vertex set $B$, they introduce the statistic $\hat{d}(u:B)$, measuring the number of edges between a vertex $u$ and $B$ in the random model $\hat{G}$, and show that $\hat{d_n}(u:B)$ is approximately binomial as $n\rightarrow \infty$, in the sense of the total variation distance between two probability mass functions. This permits one to obtain the p-values of the null distribution using the binomial approximation and gives origin to an iterative deterministic procedure that recovers robust communities. The technique has some similarities with OSLOM; indeed both are extraction methods and use the configuration model as reference distribution, but it differs in that the probabilities have a closed form.
Another group of techniques was proposed in \cite{Gfeller2005}, \cite{Karrer2008}, \cite{Rosvall2010}, and their conception was completely different from that of the previously described methodologies. Indeed they introduce a stochastic component in the network by perturbing the graph structure, measure the effect of the perturbation and compare it with the corresponding value for a null model graph. The basic idea is that a significant partition should not be altered by small modifications, as long as the modification is not too extensive. An interesting feature of these methods is their independence from the community detection technique adopted.
In this paper we present a methodology able to clearly detect whether the community structure found by some algorithm is statistically significant or is a result of chance, merely due to edge positions in the network. Given a community detection method and a network of interest, our proposal examines the stability of the recovered partition against random perturbations of the original graph structure. To address this issue, following ideas from \cite{Karrer2008}, we specify a perturbation strategy and a null model, and build procedures based on the Variation of Information as stability measure. Given this measure, we address the question of evaluating its significance. This permits us to build the Variation of Information curve as a function of the perturbation percentage and to compare it with the corresponding null model curve using tools developed for functional data analysis.
The rest of the paper is organised as follows. In Section \ref{VI} we introduce the proposed procedures based on Variation of Information and the functional data analysis techniques, including their detailed description. Section \ref{Results} shows the results achieved applying our methodology on simulated and real datasets. Conclusions and ideas for future research are drawn in Section \ref{Discussion}.
\section{Variation of Information hypothesis testing} \label{VI}
Variation of Information (VI) is an information theoretic criterion for comparing two partitions, or clusterings, of the same data set \cite{Meila2007}. It is a metric and measures the amount of information lost and gained in changing from clustering $\mathcal{C}$ to clustering $\mathcal{C'}$. The criterion makes no assumptions about how the clusterings were generated and applies to both soft and hard clusterings.
Given a dataset $D$ and two clusterings $\mathcal{C}$ and $\mathcal{C'}$ of $D$, with $K$ and $K'$ non empty clusters, respectively, VI is defined as
\begin{equation}
VI\left(\mathcal{C,C'}\right)=H\left(\mathcal{C}\right)+H\left(\mathcal{C'}\right)-2I\left(\mathcal{C,C'}\right)
\end{equation}
where $H\left(\mathcal{C}\right)$ is the entropy associated with clustering $\mathcal{C}$
\begin{equation}
H\left(\mathcal{C}\right)=-\sum_{k=1}^K P(k)\log P(k),
\end{equation}
and $I\left(\mathcal{C,C'}\right)$ is the mutual information between $\mathcal{C}$ and $\mathcal{C'}$, i.e the information that one clustering has about the other
\begin{equation}
I\left(\mathcal{C,C'}\right)=\sum_{k=1}^K \sum_{k'=1}^{K'} P\left(k,k'\right)\log \frac{P\left( k,k'\right)}{P\left(k\right)P\left(k'\right)}.
\end{equation}
$P\left(k\right)$ is the probability of a point being in cluster $C_k$ and $P\left(k,k'\right)$ is the probability that a point belongs to $C_k$ in clustering $\mathcal{C}$ and to $C_{k'}$ in $\mathcal{C'}$.
Another equivalent expression for VI is
\begin{equation}
VI\left(\mathcal{C,C'}\right)=H\left(\mathcal{C}|\mathcal{C'}\right)+H\left(\mathcal{C'}|\mathcal{C}\right).
\end{equation}
The first term measures the amount of information about $\mathcal{C}$ that we loose, while the second measures the amount of information about $\mathcal{C'}$ that we gain, when going from clustering $\mathcal{C}$ to clustering $\mathcal{C'}$.
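As an illustration, VI between two hard clusterings can be computed directly from the joint label counts; the following pure-Python sketch (using natural logarithms) implements the definition above:

```python
from collections import Counter
from math import log

def variation_of_information(c1, c2):
    """VI between two hard clusterings given as label sequences of equal length."""
    n = len(c1)
    p1 = Counter(c1)              # cluster sizes in C
    p2 = Counter(c2)              # cluster sizes in C'
    joint = Counter(zip(c1, c2))  # joint occupancy of the (C, C') cells
    h1 = -sum((m / n) * log(m / n) for m in p1.values())
    h2 = -sum((m / n) * log(m / n) for m in p2.values())
    mi = sum((m / n) * log((m / n) / ((p1[k] / n) * (p2[kp] / n)))
             for (k, kp), m in joint.items())
    return h1 + h2 - 2 * mi

# VI is zero iff the two clusterings coincide (up to label names):
print(variation_of_information([0, 0, 1, 1], [1, 1, 0, 0]))  # ≈ 0.0
```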
VI metric is the basis of the hypothesis testing procedures we
propose to establish the statistical significance of a recovered
community structure in a complex network.
Our original idea is to generate two different curves based on the VI measure and to statistically test their difference. The first curve, $VIc$, is obtained computing VI between the partition of our original network and the partitions of different perturbed versions of the original network. The second curve, $VIc_{random}$, is obtained computing VI between the partition of a null random network and the partitions of different perturbed versions of such a null network. The comparison between the two VI curves turns the question about the significance of the retrieved community structure into the study of the stability/robustness of the recovered partition against perturbations.
We expect that it must be robust to small perturbations, because if ``small changes'' in the network imply a completely different partition of the data, it means that the found communities are not trustworthy, and this cannot be due to the failure of the chosen community detection algorithm. Indeed the proposed testing procedure is independent of the clustering algorithm and it is easy to check whether such a behaviour is due to it.
To understand this point well, we must consider the behaviour of the VI curve for networks having a real community structure and for those having a very poor community structure. In the first case the VI curve starts at 0 when the perturbation level $p$ is $0\%$ (unperturbed graph), rises rapidly (perturbation level between $0\%$ and $40\%$), then levels off when $50\%<p<100\%$; in the second case the VI curve immediately grows up to a certain value and levels off at that value, meaning that whatever partition has been found, at each level of perturbation, it has the same robustness as a random graph. Obviously the set up of a testing procedure is most necessary in all cases where the community structure is moderate or weak and the behaviour of the VI curve could be similar to that of a random graph.
The choice of the null random network is more delicate, because we expect that it has the same structure as our original graph but with completely random edges. This is why our choice falls on the Configuration Model \cite{Bender1978} associated with the degree sequence of the observed graph $\mathbf{d}=\{d(1),\dots,d(n)\}$ with vertex set $V=\{1,\dots,n\}$, i.e. CM($\mathbf{d}$). The CM($\mathbf{d}$) is a probability measure on the family of multi-graphs with vertex set $V$ and degree sequence $\mathbf{d}$ that reflects, within the constraints of the degree sequence, a random assignment of edges between vertices. The generative form is simple: one can simply cut all the edges in the network, so that every node still retains its degree through the number of half-edges or stubs emanating from it. The result will be an even number of half-edges. To create new networks with the same degrees, one simply needs to randomly pair all the half-edges, creating the new edges of the network. The Configuration Model generates every possible graph with the given degree distribution with equal probability.
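The stub-matching construction can be sketched in a few lines of pure Python; this is an illustrative implementation (not the one used in the paper), with rejection sampling to obtain a simple graph:

```python
import random

def configuration_model(degrees, rng, max_tries=10_000):
    """Sample a simple graph with the given degree sequence by stub matching.

    Each vertex i contributes degrees[i] half-edges (stubs); stubs are paired
    uniformly at random, and samples containing self-loops or multiple edges
    are rejected and redrawn.
    """
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    for _ in range(max_tries):
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for a, b in zip(stubs[::2], stubs[1::2]):
            e = (min(a, b), max(a, b))
            if a == b or e in edges:   # self-loop or multi-edge: reject sample
                ok = False
                break
            edges.add(e)
        if ok:
            return edges
    raise RuntimeError("no simple graph found")

rng = random.Random(42)
g = configuration_model([2, 2, 2, 2], rng)  # a 4-cycle, up to vertex labeling
```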
Note that this construction naturally creates networks with multiple edges and self-connections. If such networks are unacceptable, one can reject those samples and run the algorithm again, repeating until one obtains a network without multiple or self-connections. The perturbation strategy adopted is described in Section \ref{permute}. The basic steps of our method are the following:
\begin{framed}\center{\textbf{Overall Procedure}}\label{procedure}
\begin{enumerate}
\item \label{first} find a partition $\mathcal{C}$ of the given network, with vertex set $V=\{1,\dots,n\}$ and degree sequence $\mathbf{d}=\{d(1),\dots,d(n)\}$, by some chosen method $M$;
\item \label{second} given a perturbation level $p\in \left[0,1\right]$, perturb the network by shuffling a percentage $p$ of its edges while preserving the original graph's degree distribution;
\item \label{third} using the same method $M$, find a partition $\mathcal{C'}$ for the perturbed network and compute $VI(\mathcal{C},\mathcal{C'})$ to compare $\mathcal{C}$ and $\mathcal{C'}$;
\item \label{fourth} repeat steps \ref{second}-\ref{third} at different perturbation levels $p \in \left[0,1\right]$ so as to obtain the $VIc$ curve. (Note that $p=0$ corresponds to the original unperturbed graph, while $p=1$ corresponds to the maximal perturbation level, i.e. a random graph);
\item \label{fifth} repeat steps \ref{first}-\ref{fourth} starting in step \ref{first} from the CM($\mathbf{d}$) (null model) so as to obtain the $VIc_{random}$ curve;
\item \label{six} compare the $VIc$ and $VIc_{random}$ curves as functions of $p$, statistically testing {\it ``the difference''} between the two curves.
\end{enumerate}
\end{framed}
Note that the variation of $p$ from 0 to 1 induces an intrinsic order on the data structure, as in temporal data, so that $p$ can be treated as a time point. Moreover, as described in Section \ref{permute}, we generate many perturbed graphs (i.e. $10$) for each level of $p$, and these are considered as replicates per time point in our strategy.\\
Step \ref{six} of the above procedure is achieved by functional data analysis approaches aiming to test whether the two groups of curves represent ``the same process'' or ``different processes''. The testing procedure we rely on is based on a tool set up for time course microarray data, namely Gaussian Process (GP) regression \cite{Kalaitzis2011}.
The aim of GP regression in the context of gene expression data is to identify differentially expressed genes in a one-sample time course microarray experiment, i.e. to detect whether the profile has a significant underlying signal or the observations are just random fluctuations. In this case we reformulate the testing problem working on $\mathrm{\log_2} \frac{VIc}{VIc_{random}}$, as described in Section \ref{sec:GP}.
In order to show that our approach is robust with respect to the testing procedure used in Step \ref{six}, we also display the overall results achieved when using two other approaches in Step \ref{six}, described respectively in Section \ref{sec:FPC} and Section \ref{sec:IFT}. Indeed, we can look at the two measured $VI$ curves as independent realisations of two underlying processes, say $X_1$ and $X_2$, observed with noise on a finite grid of points $p \in [0,1]$, and test the null hypothesis
\begin{equation}\label{H0_fadtest}
H_0: \mathrm{X_1 \buildrel d \over{=} X_2}
\end{equation}
versus the alternative hypothesis
\begin{equation*}
H_1: \mathrm{X_1\buildrel d \over{\neq }X_2}
\end{equation*}
where $\buildrel d \over{=}$ means that the processes on either side have the same distribution.
Then, as described in Section \ref{sec:FPC}, taking advantage of the Karhunen-Lo\`{e}ve expansion, we explore the methodology developed in \cite{Pomann2016} based on Functional Principal Components Analysis (FPCA) to test (\ref{H0_fadtest}).
On the contrary, the approach described in Section \ref{sec:IFT} addresses a domain-selective inferential procedure, providing an interval-wise non parametric functional testing \cite{Pini2016}, able not only to assess (\ref{H0_fadtest}), but also to point out specific differences.
We will briefly describe GP regression, FPCA and interval-wise functional testing in the following sections. We would like to point out that our overall procedure provides a workflow to validate a community structure under different perspectives, which can be investigated depending on the specific real problem dealt with. The description of the three different testing procedures is instrumental to the understanding of our overall procedure and, in particular, of how we exploit the theory underlying each single methodology to compare the curves $VIc$ and $VIc_{random}$. We summarise the three testing procedures to highlight the key connection to our testing problem and to make the reader aware of their differences: even if addressing the same problem, they are not equivalent. We refer to the original papers for the theoretical properties of these testing procedures, including the study of type $I/II$ errors. We stress that the original contribution of our proposal is summarised in the six-step procedure depicted in the above frame.
\subsection{GP regression} \label{sec:GP}
In this section we briefly summarise the methodology proposed in \cite{Kalaitzis2011}, where the authors present an approach to estimate the continuous trajectory of gene expression time-series from microarray through GP regression.
Briefly we recall that a Gaussian process is the natural generalisation of a multivariate Gaussian distribution to a Gaussian distribution over a specific family of functions. More precisely, as defined in \cite{gpbook}, a Gaussian process is a \textit{collection of random variables, any finite number of which have a joint Gaussian distribution}, and is completely specified by its mean function and its covariance function. If we define the mean function $m(x)$ and the covariance function $k(x,x')$ of a real process $f(x)$ as:
\begin{align*}
m(x) &= E[f(x)],\\
k(x,x') &= E[(f(x) - m(x))(f(x') - m(x'))]
\end{align*}
then we can write the GP as
\begin{equation}
f(x) \sim \mathcal{GP}(m(x), k(x,x')).
\end{equation}
The random variables $\mathbf{f}=\left(f\left(X_1\right),\dots,f\left(X_n\right)\right)^T$ represent the values of the function $f(x)$ at time locations $\left(X_i\right)_{i=1,\dots,n}$, where $f(x)$ is the true trajectory/profile of the gene. Assuming $f(x)=\Phi(x)^T\mathbf{w}$, where $\Phi(x)$ are projection basis functions, with prior $\mathbf{w} \sim N(\mathbf{0},\sigma_{\mathbf{w}}^2\mathbf{I})$, we have
\begin{eqnarray}
&&E[f(x)]=\Phi(x)^TE[\mathbf{w}]=0 \label{GPbayes_mean}\\
&&E[f(x)f(x')]=\sigma_{\mathbf{w}}^2\Phi(x)^T\Phi(x') \label{GPbayes_cov}\\
&&f(x) \sim \mathcal{GP}(0,\sigma_{\mathbf{w}}^2\Phi(x)^T\Phi(x')).
\end{eqnarray}
Since observations are noisy, i.e. $\mathbf{Y}=\mathbf{\Phi w}+\pmb{\varepsilon}$, with $\mathbf{\Phi}=(\Phi(X_1)^T,\dots,\Phi(X_n)^T)$, assuming that the noise $\pmb{\varepsilon} \sim N(\mathbf{0},\sigma_n^2\mathbf{I})$ and using
Eq. (\ref{GPbayes_mean})-(\ref{GPbayes_cov}), the marginal likelihood
\begin{eqnarray*}
p(\mathbf{y} | \mathbf{x})=\int p(\mathbf{y} | \mathbf{x},\mathbf{w}) p(\mathbf{w})d\mathbf{w}
\end{eqnarray*}
becomes
\begin{eqnarray}
p(\mathbf{y} | \mathbf{x})=\frac{1}{(2\pi)^{n/2}\left|\mathbf{K_y}\right|^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\mathbf{y}^t\mathbf{K_y}^{-1}\mathbf{y}\right) \label{GPmarginal}
\end{eqnarray}
with $\mathbf{K_y}=\sigma_{\mathbf{w}}^2\mathbf{\Phi}\mathbf{\Phi}^T+\sigma_n^2\mathbf{I}$.
In this framework the hypothesis testing problem can be reformulated, over the perturbation interval $[0,1]$, as:
\begin{eqnarray*}
H_0 : \mathrm{\log_2} \frac{VIc(x)}{VIc_{random}(x)} \sim \mathcal{GP}(0, k(x,x')) \label{H0_GP}
\end{eqnarray*}
against
\begin{eqnarray*}
H_1 : \mathrm{\log_2} \frac{VIc(x)}{VIc_{random}(x)} \sim \mathcal{GP}(m(x), k(x,x')).
\end{eqnarray*}
The marginal likelihood derived from Eq. (\ref{GPmarginal}) then enables one to compare or rank different models by calculating the Bayes Factor (BF).
More specifically, the BF is approximated with a log-ratio of marginal likelihoods of two GPs, one representing the hypothesis of differential expression (the profile has a significant underlying signal) and the other that of non-differential expression (there is no underlying signal in the profile, just random noise).
The significance of the profiles is then assessed based on the BF.
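To make the comparison explicit, note from Eq. (\ref{GPmarginal}) that the log marginal likelihood under a covariance $\mathbf{K_y}$ is $-\frac{1}{2}\mathbf{y}^T\mathbf{K_y}^{-1}\mathbf{y}-\frac{1}{2}\ln\left|\mathbf{K_y}\right|-\frac{n}{2}\ln 2\pi$. As a sketch, writing $\mathbf{K}_{y,1}$ for the covariance fitted under the alternative and $\mathbf{K}_{y,0}$ for a noise-only covariance under the null (e.g. $\mathbf{K}_{y,0}=\sigma_n^2\mathbf{I}$), the log Bayes Factor takes the form
\begin{equation*}
\ln \mathrm{BF}=\ln\frac{p(\mathbf{y}\,|\,H_1)}{p(\mathbf{y}\,|\,H_0)}=\frac{1}{2}\left(\mathbf{y}^T\mathbf{K}_{y,0}^{-1}\mathbf{y}-\mathbf{y}^T\mathbf{K}_{y,1}^{-1}\mathbf{y}\right)+\frac{1}{2}\ln\frac{\left|\mathbf{K}_{y,0}\right|}{\left|\mathbf{K}_{y,1}\right|},
\end{equation*}
where the $-\frac{n}{2}\ln 2\pi$ terms cancel; large positive values favour the presence of an underlying signal.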
\subsection{Functional Principal Component testing}\label{sec:FPC}
In this section we briefly summarise the approach proposed in \cite{Pomann2016} to test the hypothesis (\ref{H0_fadtest}) when the observed data are realisations of the curves at finite grids and possibly corrupted by noise. Their motivating application is a diffusion tensor imaging study, where the objective is to compare white matter tract profiles between healthy individuals and multiple sclerosis patients. They introduce a novel framework based on functional principal component analysis ($FPCA$) of an appropriate mixture process, referred to as marginal $FPCA$. The statistical framework for this problem assumes that we observe data arising from two groups, namely $\{ ( t_{1ij}, Y_{1ij}): j=1\ldots m_{1i}\} _{i=1}^{n_1} $ and $\{ ( t_{2ij}, Y_{2ij}): j=1\ldots m_{2i}\} _{i=1}^{n_2} $, where $t_{1ij},t_{2ij} \in T$, a compact interval that in our case is $T=[0,1]$ (time plays the role of the perturbation level $p$). It is assumed that the $Y_{1ij}$ and the $Y_{2ij}$ are independent realisations of two underlying processes observed with noise on a finite grid of points:
\begin{subequations}
\begin{align*}
Y_{1ij} & =X_{1ij}+\epsilon_{1ij} \\
Y_{2ij} & =X_{2ij}+\epsilon_{2ij}
\end{align*}
\end{subequations}
where $X_{1ij} \sim^{IID}X_1(\cdot)$ and $X_{2ij} \sim^{IID}X_2(\cdot)$ are independent and square integrable random functions over $T$, for some underlying (latent) random processes $X_1$ and $X_2$. It is assumed that $X_1$ and $X_2$ are second-order stochastic processes with mean functions assumed to be continuous and covariance functions assumed to be continuous and positive semidefinite, both being unknown. The measurement errors $\{\epsilon_{1ij}\} $ and $\{\epsilon_{2ij}\} $ are independent and identically distributed ($IID$), with zero mean and variances $\sigma_1^2$ and $\sigma_2^2$, respectively. The authors exploit the truncated Karhunen-Lo\`{e}ve expansion of the mixture process $X(\cdot)$ of $X_1(\cdot)$ and $X_2(\cdot)$
with mixture probabilities $p$ and $1-p$. Let $Z$ be a binary random variable taking values in $\left\{1,2\right\}$ with $P(Z=1)=p$; then $X_1(\cdot)=E\left[X(\cdot)\,|\,Z=1\right]$ and $X_2(\cdot)=E\left[X(\cdot)\,|\,Z=2\right]$. Considering the truncated Karhunen-Lo\`{e}ve expansion of $X(\cdot)$ and defining $X_Z^K (t)=\mu(t)+\sum_{k=1}^K\xi_{Zk}\Phi_k (t)$, $Z=1,2$, testing hypothesis (\ref{H0_fadtest}) reduces to testing whether the $FPC$ scores $ \{\xi _1^k\}_{k=1}^K$ and $\{\xi_2^k\}_{k=1}^K$ have the same distribution:
\begin{equation}\label{H0k}
H_0^K: \{ \xi _1^k \}_{k=1}^K \buildrel d \over{=} \{ \xi _2 ^k\}_{k=1}^K
\end{equation}
In practice the authors consider $K$ null hypotheses given the finite truncation level and propose a multiple two-sample univariate test, based on the $Anderson$-$Darling$ ($AD$) statistic \cite{Petit1976}, combined with a multiple-comparison adjustment. The authors propose a Bonferroni correction, a procedure which controls the probability of erroneously rejecting even one of the true
null hypotheses, i.e. the Family Wise Error Rate (FWER). In this case hypothesis (\ref{H0k}) is rejected if
\begin{equation}
\min_{1\leq k \leq K} p_k \leq \alpha / K
\end{equation}
where $p_k$ is the p-value that is obtained by using the chosen univariate two-sample test for each ${H^k_0}$.
The false discovery rate (FDR), suggested in \cite{BH1995}, offers a different point of view on how the errors in multiple testing can be considered. The FDR is the expected proportion of erroneous rejections among all rejections. If all tested hypotheses are true, controlling the FDR controls the traditional FWER. But when many of the tested hypotheses are rejected, indicating that many hypotheses are not true, the error from a single erroneous rejection is not always as crucial for drawing conclusions from the family tested, and the proportion of errors is controlled instead. Using the individual testing statistics proposed in \cite{Pomann2016}, we will therefore adopt this FDR approach to adjust our tests for multiplicity.
Note that this procedure is designed for a more general framework in which the two curves $VI$ and $VIc$ can be observed at different time points (i.e. $p\in[0,1]$).
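The multiple-testing step can be illustrated with a small self-contained sketch. Here, as an assumption for illustration only, a two-sample Kolmogorov-Smirnov test (with its asymptotic p-value) stands in for the Anderson-Darling test of \cite{Pomann2016}, and a Bonferroni rule is applied to the $K$ component-wise p-values:

```python
from math import exp, sqrt

def ks_2samp_pvalue(x, y):
    """Two-sample KS statistic with asymptotic p-value (stand-in for AD)."""
    xs, ys = sorted(x), sorted(y)
    n1, n2 = len(xs), len(ys)
    d = 0.0
    for t in sorted(set(xs + ys)):
        f1 = sum(v <= t for v in xs) / n1   # empirical CDF of sample 1
        f2 = sum(v <= t for v in ys) / n2   # empirical CDF of sample 2
        d = max(d, abs(f1 - f2))
    en = sqrt(n1 * n2 / (n1 + n2))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 1e-3:
        return 1.0
    p = 2.0 * sum((-1) ** (j - 1) * exp(-2.0 * j * j * lam * lam)
                  for j in range(1, 101))
    return min(max(p, 0.0), 1.0)

def fpc_two_sample_test(scores1, scores2, alpha=0.05):
    """Reject H0 if any of the K per-component p-values falls below alpha/K."""
    K = len(scores1)
    pvals = [ks_2samp_pvalue(scores1[k], scores2[k]) for k in range(K)]
    return pvals, min(pvals) <= alpha / K

# Toy FPC scores: component 1 is strongly shifted, component 2 is not.
s1 = [[i * 0.1 for i in range(20)], [i * 0.1 for i in range(20)]]
s2 = [[3.0 + i * 0.1 for i in range(20)], [0.05 + i * 0.1 for i in range(20)]]
pvals, reject = fpc_two_sample_test(s1, s2)
print(reject)  # True: the score distributions differ in the first component
```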
\subsection{Interval-wise Functional testing}\label{sec:IFT}
In the following we will briefly review the Interval-wise Functional testing procedure ($ITP$) proposed by \cite{Pini2016}, where the authors develop a non-parametric domain-selective inferential methodology for functional data embedded in the $L^2(T)$ space (where $T$ is any limited open interval of $\mathbb{R}$) to test (\ref{H0_fadtest}). Their technique is not only able to assess the equality in distribution between functional populations, but also to point out specific differences. They propose a procedure based on the following three steps:
\begin{enumerate}
\item Basis Expansion: functional data are projected on a functional basis (i.e. Fourier or B-splines expansion);
\item Interval-Wise Testing: statistical tests are performed on each interval of basis coefficients;
\item Multiple Correction: for each component of the basis expansion, an adjusted p-value is computed from the p-values of the tests performed in the previous step.
\end{enumerate}
More in detail, let us assume we observe two independent samples, of sizes $n_1$ and $n_2$, of independent random functions $y_{ij}(t)$, $i=1,\dots,n_j$, $j=1,2$, on a separable Hilbert space.
In the first step, data are projected on a finite-dimensional subspace generated by a reduced basis, $y_{ij}(t)=\sum_{k=1}^p c_{ij}^{(k)}\Phi^{(k)}(t)$, where the integer $p$ represents the dimension.
It follows that each of the $n=n_1+n_2$ units can be represented by means of the corresponding $p$ coefficients $\{c_{ij}^{(k)}\}$, $k=1,...,p$; moreover, for each $k$, $c_{i1}^{(k)}$, $i=1,\dots,n_1$, and $c_{i2}^{(k)}$, $i=1,\dots,n_2$, are independent, with $c_{i1}^{(k)} \sim C_1^{(k)}$ and $c_{i2}^{(k)} \sim C_2^{(k)}$, where $C_1^{(k)}$ and $C_2^{(k)}$ denote the (unknown) distributions of the $k$th basis coefficient in the two populations.
In the second step, the authors build a family of multivariate tests for
$$
H_0^{(\mathbf{k})}=\cap_{k \in \mathbf{k}}H_0^{(k)}, \quad H_0^{(k)}: C_1^{(k)}\overset{d}{=}C_2^{(k)},
$$
$k=1,\dots,p$ and $\mathbf{k}$ is a vector of successive indexes in $\{1,\dots,p\}$. In addition they add the multivariate tests on the complementary sets of each interval, i.e., they do also test each hypothesis $H_0^{(\mathbf{k^c})}=\cap_{k \notin \mathbf{k}}H_0^{(k)}$.
The tests are performed by the Nonparametric Combination (NPC) procedure, see \cite{Pesarin2010}, which constructs multivariate permutation tests by combining univariate synchronized permutation tests.
In the third step they obtain the adjusted p-value for the $k$th component $\lambda_{\mathrm{ITP}}^{(k)}$
by computing the maximum over all p-values of interval-wise tests whose null hypothesis implies $H_0^{(k)}$:
$$
\lambda_{\mathrm{ITP}}^{(k)}=\max\left(\max_{\mathbf{k}\ s.t. \ k \in \mathbf{k}}\lambda^{\mathbf{(k)}},\max_{\mathbf{k^c}\ s.t. \ k \in \mathbf{k^c}}\lambda^{\mathbf{(k^c)}}\right),
$$
and prove that, if we reject the $k$th component whenever the adjusted p-value $\lambda_{\mathrm{ITP}}^{(k)} \leq \alpha$, then, for any interval $\mathbf{k}$ such that $H_0^{(k)}$ is true $\forall k \in \mathbf{k}$, the probability of rejecting any $H_0^{(k)}$ is less than or equal to $\alpha$. This property is referred to as interval-wise control of the FWER.
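The adjustment rule of the third step can be sketched in a few lines. This is a minimal illustration of the max rule only: the function name and the dictionary encoding of the interval p-values are ours, and computing the interval p-values themselves requires the NPC permutation procedure implemented in the fdatest package.

```python
def itp_adjusted_pvalues(interval_pvalues, p):
    """Step 3 of the ITP: the adjusted p-value of each basis component k is
    the maximum p-value over all tested sets (intervals of successive
    indices and complements of intervals) that contain k."""
    adjusted = []
    for k in range(p):
        adjusted.append(max(pv for s, pv in interval_pvalues.items() if k in s))
    return adjusted
```

For instance, with $p=3$ and the family of intervals $\{0\},\{1\},\{2\},\{0,1\},\{1,2\},\{0,1,2\}$, the adjusted p-value of component $0$ is the maximum over the sets containing $0$.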
\subsection{Perturbation strategy}\label{permute}
Mimicking the approach proposed by \cite{Karrer2008} and \cite{Cutillo2012}, we restrict our perturbed networks to having the same numbers of vertices and edges as the original unperturbed network, so that only the positions of the edges change. In other words, we apply a Degree Preserving Randomization. Our perturbation strategy relies on the $rewire$ function of the $R$ package $igraph$, with the option $keeping\_degseq$.
Moreover, we expect that a network perturbed by only a small amount has just a few edges moved to
different communities, while a maximally perturbed network produces completely random clusters. In \cite{Karrer2008} the perturbation is achieved by removing each edge with a certain probability $\alpha$ and replacing it with another edge between a pair of vertices $(i,j)$ chosen at random with probability proportional to the degrees of $i$ and $j$. Our perturbation strategy consists of randomly rewiring a percentage $p$ of edges while preserving the original graph's degree distribution. The rewiring algorithm chooses two arbitrary edges at each step
(e.g. $(a,b)$ and $(c,d)$) and substitutes them with $(a,d)$ and $(c,b)$, provided they do not already exist in the graph. The algorithm does not create multiple edges.
A null perturbation percentage $p=0$ corresponds to the original unperturbed graph,
while $p=1$ corresponds to the maximal perturbation level. Varying the percentage $p$ from $0$ (original graph) to $1$ (maximal perturbation), many perturbed graphs are generated and compared to the partition of the original graph by means of $VI$. Specifically, we generated $10$ perturbed graphs for each level of $p \in [0,1]$. Then, from each of the obtained graphs, we generated another $10$ graphs, rewiring $1\%$ of the edges each time, resulting in $100$ graphs for each level of $p \in [0,1]$. In our setting we chose $20$ levels of $p$.
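The double-edge-swap rewiring just described can be sketched in plain Python. This is our own minimal implementation for illustration, not the actual $igraph$ $rewire$ code; the function name and its arguments are ours.

```python
import random

def rewire_fraction(edges, p, rng=None):
    """Degree-preserving rewiring of a simple undirected graph: repeatedly
    pick two edges (a,b),(c,d) and replace them with (a,d),(c,b), skipping
    swaps that would create self-loops or multiple edges."""
    rng = rng or random.Random(0)
    edges = [tuple(sorted(e)) for e in edges]
    edge_set = set(edges)
    target = int(p * len(edges))        # number of successful swaps
    done = attempts = 0
    while done < target and attempts < 100 * max(target, 1):
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        e1, e2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if a == d or c == b or e1 in edge_set or e2 in edge_set:
            continue                    # swap would break simplicity
        edge_set -= {edges[i], edges[j]}
        edge_set |= {e1, e2}
        edges[i], edges[j] = e1, e2
        done += 1
    return edges
```

Each successful swap moves two edges while leaving every vertex degree unchanged, so the degree sequence of the perturbed graph equals that of the original.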
\section{Results}
\label{Results}
The overall procedure proposed in the present paper was implemented in $R$ and validated both on simulated and real networks, as described in the following Sections \ref{SD} and \ref{RD}. In Figure \ref{map} we provide a flow chart that summarises our procedure. For each of the analysed networks (either simulated or real) we performed the first step of the overall procedure described in Section \ref{VI}, using tools embedded in the $R$ package $igraph$. We chose $igraph$ because it provides an implementation of graph algorithms able to quickly identify community structures in large graphs. In particular, we used two community extraction functions, one based on a greedy optimisation of the modularity ($cluster\_fast\_greedy$) and another based on a multi-level optimisation of the modularity ($cluster\_louvain$). More specifically, $cluster\_fast\_greedy$ implements the hierarchical agglomeration algorithm for detecting community structure described in \cite{Clauset2004}, and $cluster\_louvain$ is based on the hierarchical approach proposed in \cite{Blondel2008}. Both methods enable an automatic definition of the optimal number of communities, are designed for large networks and are based on the optimisation of the modularity. They are briefly summarised in the following.
\newline
As regards the testing methodologies, we used the Bioconductor package gprege, available at\\
\url{https://www.bioconductor.org/packages/release/bioc/html/gprege.html}, for the GP regression, the R code from Professor Staicu's website\\
\url{http://www4.stat.ncsu.edu/~staicu/} for the Functional Principal Component test, and the R package fdatest, available at \\
\url{https://cran.r-project.org/web/packages/fdatest/index.html}, for the Interval-wise Functional test.
\begin{figure}%
\center
\includegraphics[width=0.8\textwidth]{pipelineFig.pdf}
\caption{Overall procedure map}
\label{map}
\end{figure}
\subsection*{Fast greedy method}
$Fast$ $greedy$ is the modularity optimization algorithm introduced by Clauset, Newman and Moore \cite{Clauset2004}. This method is essentially a fast implementation of a previous technique proposed by Newman \cite{Newman2004}. Starting from a set of isolated nodes, the links of the original graph are iteratively added so as to produce the largest possible increase of the modularity. Adding a first edge to the set of disconnected vertices reduces the number of groups, forming a new partition of the graph. The edge is chosen such that this partition gives the maximum increase (minimum decrease) of modularity with respect to the previous configuration. All other edges are added based on the same principle. At each iteration step, the variation of
modularity given by the merger of any two communities of the running partition is computed and the best merger is chosen. The fast version of Clauset, Newman and Moore, which uses more efficient data structures, has a complexity of $O(N \log_2 N)$ on sparse graphs.
\subsection*{Louvain method}
$Louvain$ method is the fast modularity optimization by Blondel et al. \cite{Blondel2008}. This technique consists of two steps, executed alternately. Initially, each node is in its own community. In step 1, nodes are considered one by one, and each one is placed in the neighbouring community (including its own) that maximizes the modularity gain. This is repeated until no node is moved (the obtained decomposition therefore provides a local optimization of Newman-Girvan modularity). After a partition is identified in this way, in step 2 communities are replaced by super-nodes, yielding a smaller weighted network where two super-nodes are connected if there is at least one edge between vertices of the corresponding communities. The two steps of the algorithm are then repeated until modularity (which is always computed with respect to the original graph) does not increase any further.
As pointed out in \cite{Fortunato2010}, this method offers a fair compromise between the accuracy of the estimate of the modularity maximum, which is better than that delivered by greedy techniques like the one by Clauset et al. above, and computational complexity, which is essentially linear in the number of links of the graph.
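The $VI$ stability measure used throughout to compare partitions can be computed directly from two label vectors via the entropy-based identity $VI = H(X)+H(Y)-2I(X;Y)$. The following is a minimal self-contained sketch (function name ours; natural-log units):

```python
from math import log
from collections import Counter

def variation_of_information(part1, part2):
    """Variation of Information between two partitions of the same node set,
    each given as a sequence of community labels."""
    n = len(part1)
    c1, c2 = Counter(part1), Counter(part2)
    joint = Counter(zip(part1, part2))
    h1 = -sum(c / n * log(c / n) for c in c1.values())   # entropy H(X)
    h2 = -sum(c / n * log(c / n) for c in c2.values())   # entropy H(Y)
    mi = sum(c / n * log((c / n) / ((c1[a] / n) * (c2[b] / n)))
             for (a, b), c in joint.items())             # mutual information
    return h1 + h2 - 2 * mi
```

Identical partitions give $VI=0$, while maximally disagreeing ones give larger values, which is the behaviour exploited when comparing the perturbed clusterings with the original one.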
\subsection{Application to simulated data} \label{SD}
In order to show the ability of our method to validate a network clustering, we applied it to modular random network graphs generated using the model implemented in \cite{Sah2014}. The model generates undirected, simple, connected graphs with prescribed degree sequences and a specified level of community structure, while maintaining a graph structure that is otherwise as random (uncorrelated) as possible over a broad range of distributions of network degree and community size. The model in \cite{Sah2014} is specified by the network size, the average network degree, the number of modules, the modularity, the degree distribution and the module size distribution. The generated graphs are also as random as possible, and contain no self-loops (edges connecting a node to itself), multi-edges (multiple edges between a pair of nodes), isolated nodes (nodes with no edges), or disconnected components (see \cite{Sah2014} for details). Specifically, we generated a modular random graph for each level of modularity Q = 0, 0.2, 0.4, 0.6, 0.8, using a power law for the degree distribution and for the module size distribution, with size=2000, number of modules=10 and average degree=10. For each graph, the corresponding null model was generated using the Configuration Model. \\
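The Configuration Model null graphs mentioned above can be sketched via stub matching. This is our own minimal illustration; production implementations (e.g. in igraph) additionally handle the self-loops and multi-edges that the raw pairing can produce.

```python
import random

def configuration_model_edges(degrees, rng=None):
    """Stub-matching Configuration Model: each node v contributes degrees[v]
    half-edges (stubs); a uniform random pairing of the stubs yields an edge
    list with exactly the prescribed degree sequence (self-loops and
    multi-edges can occur and are typically removed or the pairing retried)."""
    rng = rng or random.Random(1)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        raise ValueError("degree sum must be even")
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))
```

By construction the pairing is degree-exact, which is what makes the Configuration Model a natural null model for a graph with a given degree sequence.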
The application of the overall procedure on the simulated datasets is summarised in Tables \ref{BFsimul} and \ref{FADsimu} and in Figures \ref{fig:VIfastSimu} and \ref{fig:VIlouvainSimu}, respectively.
\subsubsection*{Gaussian Process results}
The application of the Gaussian Processes approach described in Section \ref{sec:GP} to the simulated networks is summarised in Table \ref{BFsimul}.
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c |}
\hline
Q value & fast greedy & Louvain \\ \hline
0 & 3.194 & 8.356 \\ \hline
0.2 & 3.959 & 59.589 \\ \hline
0.4 & 257.696 & 331.810 \\ \hline
0.6 & 389.301 & 421.305 \\ \hline
0.8 & 443.650 & 454.558 \\ \hline
\end{tabular}
\caption{GP Bayes Factor on the simulated networks with modularity Q $\in \{0,0.2,0.4,0.6,0.8\}$ after clustering via fast greedy and Louvain} \label{BFsimul}
\end{table}
The resulting BFs show a growing trend from modularity $Q=0$ to $Q=0.8$ after clustering with either $fast$ $greedy$ or $louvain$.
This gives strong statistical evidence that networks with a high modularity have a robust clustering structure (significantly different from random).
Note, however, that the $louvain$ method is able to recover a non-random clustering also at low modularity ($Q\geq 0.2$), whereas $fast$ $greedy$ needs a more defined structure ($Q\geq 0.4$).
\subsubsection*{Functional Principal Component testing results}
Similarly, Table \ref{FADsimu} summarises the application of the Functional $Anderson$-$Darling$ test described in Section \ref{sec:FPC} to the simulated data.
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c |}
\hline
Q value & fast greedy & Louvain \\ \hline
0 & 0.1409 & 0.1184 \\ \hline
0.2 & 0.369 & 0.00386 \\ \hline
0.4 & 0.00016 & 0.00016 \\ \hline
0.6 & 0.00016 & 0.00012 \\ \hline
0.8 & 0.00016 & 8e-05 \\ \hline
\end{tabular}
\caption{FDR adjusted p-values by the FAD test procedure on the simulated networks with modularity Q $\in \{0,0.2,0.4,0.6,0.8\}$ after clustering via fast greedy and Louvain.} \label{FADsimu}
\end{table}
As we can see, the False Discovery Rate adjusted p-values decrease drastically when the simulated network modularity grows from $Q=0$ to $Q=0.8$, after clustering with either the $fast$ $greedy$ or the $louvain$ algorithm, although $louvain$ is able to recover a non-random clustering at lower modularity ($Q\geq 0.2$) than $fast$ $greedy$.
This result agrees with the previous one obtained by GP.
\subsubsection*{Interval-wise Functional Testing results}
The application of the Interval-wise Testing procedure described in Section \ref{sec:IFT} to the simulated datasets after clustering via $fast \ greedy$ or $louvain$ is depicted in Figures \ref{fig:VIfastSimu} and \ref{fig:VIlouvainSimu}, respectively. In each figure, panels $(a),(c),(e),(g),(i)$ show the VI curves for the null model ($VIc_{random}$) and for the actual model ($VIc$). The two curves appear to be very close for low modularity values and depart from each other as the modularity increases. In panels $(b),(d),(f),(h),(j)$ this is quantified locally by a specific adjusted p-value in each sub-interval. Significant p-values fall below the horizontal red line corresponding to the critical value of 0.05. As we can see, using either $louvain$ or $fast$ $greedy$ as clustering method leads to similar $ITP$ conclusions. Of course, when there is no perturbation (i.e. at level $p=0$) the two curves coincide and hence are not significantly different. When $Q\geq 0.4$ the two $VI$ curves are significantly different at any perturbation level, apart from some cases at $p\geq 0.5$ and $Q=0.8$ where $VIc$ is close to $VIc_{random}$; indeed, note that if we strongly perturb a network ($p\geq 0.5$, i.e. we rewire more than $50\%$ of the edges) it approaches a random network. Also in this case $louvain$ is able to recover a non-random clustering at lower modularity ($Q\geq 0.2$) than $fast$ $greedy$, confirming the results obtained by the other two approaches. Moreover, note that at $Q=0$, when applying $louvain$, the method detects some intervals where there is a significant difference between the two curves, but only at strong perturbation levels ($p\geq 0.5$).
\begin{figure}[h]%
\centering
\subfloat[Q=0]{\includegraphics[width=0.4\textwidth]{fda_test_sim00_fastgreedy_VI_functional.pdf}}
\subfloat[Q=0]{\includegraphics[width=0.4\textwidth]{fda_test_sim00_fastgreedy_pvalues.pdf}}\\
\subfloat[Q=0.2]{\includegraphics[width=0.4\textwidth]{fda_test_sim02_fastgreedy_VI_functional.pdf}}
\subfloat[Q=0.2]{\includegraphics[width=0.4\textwidth]{fda_test_sim02_fastgreedy_pvalues.pdf}}\\
\subfloat[Q=0.4]{\includegraphics[width=0.4\textwidth]{fda_test_sim04_fastgreedy_VI_functional.pdf}}%
\subfloat[Q=0.4]{\includegraphics[width=0.4\textwidth]{fda_test_sim04_fastgreedy_pvalues.pdf}}\\
\end{figure}
\setcounter{subfigure}{6}
\begin{figure}[h]
\subfloat[Q=0.6]{\includegraphics[width=0.4\textwidth]{fda_test_sim06_fastgreedy_VI_functional.pdf}}
\subfloat[Q=0.6]{\includegraphics[width=0.4\textwidth]{fda_test_sim06_fastgreedy_pvalues.pdf}}\\
\subfloat[Q=0.8]{\includegraphics[width=0.4\textwidth]{fda_test_sim08_fastgreedy_VI_functional.pdf}}%
\subfloat[Q=0.8]{\includegraphics[width=0.4\textwidth]{fda_test_sim08_fastgreedy_pvalues.pdf}}
\caption{VI plots of the clustering obtained via fast greedy on 5 simulated datasets with different modularity Q $\in \{0,0.2,0.4,0.6,0.8\}$ and corresponding adjusted p-values of the Interval Testing procedure. Horizontal red line corresponds to the critical value 0.05. Light grey areas correspond to p-values below 0.05, dark grey areas correspond to p-values below 0.01.}
\label{fig:VIfastSimu}
\end{figure}
\begin{figure}[h]%
\centering
\subfloat[Q=0]{\includegraphics[width=0.4\textwidth]{fda_test_sim00_louvain_VI_functional.pdf}}%
\subfloat[Q=0]{\includegraphics[width=0.4\textwidth]{fda_test_sim00_louvain_pvalues.pdf}}\\
\subfloat[Q=0.2]{\includegraphics[width=0.4\textwidth]{fda_test_sim02_louvain_VI_functional.pdf}}
\subfloat[Q=0.2]{\includegraphics[width=0.4\textwidth]{fda_test_sim02_louvain_pvalues.pdf}}\\
\subfloat[Q=0.4]{\includegraphics[width=0.4\textwidth]{fda_test_sim04_louvain_VI_functional.pdf}}%
\subfloat[Q=0.4]{\includegraphics[width=0.4\textwidth]{fda_test_sim04_louvain_pvalues.pdf}}\\
\end{figure}
\setcounter{subfigure}{6}
\begin{figure}[h]
\centering
\subfloat[Q=0.6]{\includegraphics[width=0.4\textwidth]{fda_test_sim06_louvain_VI_functional.pdf}}
\subfloat[Q=0.6]{\includegraphics[width=0.4\textwidth]{fda_test_sim06_louvain_pvalues.pdf}}\\
\subfloat[Q=0.8]{\includegraphics[width=0.4\textwidth]{fda_test_sim08_louvain_VI_functional.pdf}}%
\subfloat[Q=0.8]{\includegraphics[width=0.4\textwidth]{fda_test_sim08_louvain_pvalues.pdf}}
\caption{VI plots of the clustering obtained via Louvain on 5 simulated datasets with different modularity Q $\in \{0,0.2,0.4,0.6,0.8\}$ and corresponding adjusted p-values of the Interval Testing procedure. Horizontal red line corresponds to the critical value 0.05. Light grey areas correspond to p-values below 0.05, dark grey areas correspond to p-values below 0.01.} \label{fig:VIlouvainSimu}
\end{figure}
\subsection{Application to real data} \label{RD}
In order to provide an example of our analysis work-flow, we selected four different publicly available datasets, namely two biological ones (the protein-protein interaction networks $Nexus$ 5 and $Barabasi$), one representing the Western States power grid of the United States ($Nexus$ 15) and a social dataset ($Facebook$).
Note that $Nexus$ is an online repository of networks, with an $API$ that allows programmatic queries against it, as well as programmatic data download. These functions can be used to query it and download data from it, directly as an $igraph$ graph. The total numbers of nodes and edges of these four real networks are summarised in Table \ref{NetworkStat}, and the networks are displayed in Figure \ref{fig:realnet}.
\begin{figure}[htp]
\centering
\caption*{Real Networks plots}
\subfloat[Facebook]{\label{fig:facebook}\includegraphics[scale=0.4]{facebook_cuteps.pdf}}
\subfloat[Barabasi]{\label{fig:barabasi}\includegraphics[scale=0.4]{barabasi_cuteps.pdf}}
\\
\subfloat[Nexus 5]{\label{fig:nexus5}\includegraphics[scale=0.4]{nexus5_cuteps.pdf}}
\subfloat[Nexus 15]{\label{fig:nexus15}\includegraphics[scale=0.4]{nexus15_cuteps.pdf}}
\caption{For each real network (Facebook (a), Barabasi (b), Nexus 5 (c) and Nexus 15 (d)) we show the communities extracted by the $fast \ greedy$ method. Only communities with more than $5\%$ of the nodes are displayed.}\label{fig:realnet}
\end{figure}
\begin{table}
\caption*{Real Networks summary}
\centering
\begin{tabular}{crrrrc}
\hline
&Facebook&Nexus 5&Nexus 15&Barabasi& \\
\hline
Nodes & 4039 & 2617 & 4941 & 1870 & \\
Edges & 88234 & 11855 & 6594 & 2240 & \\ \hline
\end{tabular}
\caption{Nodes and Edges number for each of the four real analysed datasets } \label{NetworkStat}
\end{table}
\subsubsection*{Nexus 5}
This dataset consists of an undirected protein-protein interaction network in yeast. This data set was compiled by von Mering et al. (see \cite{vonMerin2002}) combining various sources. Only the interactions that have high and medium confidence are included here.
\subsubsection*{Barabasi}
This dataset consists of the protein-protein interaction network in Saccharomyces cerevisiae described and analysed in \cite{Barabasi2001}. It is derived from combined, non-overlapping data, obtained mostly by systematic two-hybrid analyses.
Data are available at \url{http://www3.nd.edu/~networks/resources.htm}.
\subsubsection*{Nexus 15}
This dataset is an undirected, unweighted network representing the topology of the Western States Power Grid of the United States and was compiled by Duncan Watts and Steven Strogatz.
Data are available at \url{http://cdg.columbia.edu/cdg/datasets}, \cite{Watts1998}.
\subsubsection*{Facebook}
This dataset consists of `circles' (or `friends lists') from Facebook \cite{McAuley2012}.
The authors obtained profile and network data from 10 ego-networks, consisting of 193 circles and 4,039 users. To do so, they developed their own Facebook application and conducted a survey of ten users, who were asked to manually identify all the circles to which their friends belonged. On average, users identified 19 circles in their ego-networks, with an average circle size of 22 friends. Examples of such circles include students of common universities, sports teams, relatives, etc. Data are available at \url{http://snap.stanford.edu/data/egonets-Facebook.html}.\\
The application of the overall procedure on the just described real datasets is summarised in Tables \ref{BFreal} and \ref{FADreal} and in Figures \ref{fig:VIfastReal} and \ref{fig:VIlouvainReal}, respectively.
\subsubsection*{Gaussian Process results}
The application of the Gaussian Processes approach described in Section \ref{sec:GP} to the four real networks is summarised in Table \ref{BFreal}. The resulting BFs are very high for either $fast \ greedy$ or $louvain$ clustering. This gives strong statistical evidence that the four analysed networks have a robust clustering structure; hence the recovered community structures are not likely to be random.
\begin{table}[h]
\centering
\begin{tabular}{ | l | c | c |}
\hline
Datasets & fast greedy & Louvain \\ \hline
Barabasi & 284.411 & 243.816 \\ \hline
Facebook & 297.251 & 361.060 \\ \hline
Nexus 5 & 340.795 & 431.477 \\ \hline
Nexus 15 & 503.183 & 495.810 \\ \hline
\end{tabular}
\caption{GP Bayes Factor on the 4 datasets after clustering via $fast \ greedy$ and $louvain$} \label{BFreal}
\end{table}
\subsubsection*{Functional Principal Component testing results}
Similarly, Table \ref{FADreal} summarises the application of the Functional $Anderson$-$Darling$ test described in Section \ref{sec:FPC} to the real data. As we can see, the False Discovery Rate adjusted p-values are well below the standard significance level $0.05$, after clustering with either $fast \ greedy$ or $louvain$. This result agrees with the previous one, leading to the same conclusion that the analysed real networks have a robust clustering structure.
\begin{table}[h]
\centering
\begin{tabular}{| l | c | c |}
\hline
Datasets & fast greedy & Louvain \\ \hline
Barabasi & 0.00016 & 0.00302 \\ \hline
Facebook & 0.00024 & 8e-05 \\ \hline
Nexus 5 & 0.000765 & 0.00024 \\ \hline
Nexus 15 & 8e-05 & 8e-05 \\ \hline
\end{tabular}
\caption{FDR adjusted p-values by the FAD test procedure on the 4 datasets after clustering via fast greedy and Louvain.}\label{FADreal}
\end{table}
\subsubsection*{Interval-wise Functional Testing results}
The application of the Interval-wise Testing procedure described in Section \ref{sec:IFT} to the real datasets after clustering via $fast \ greedy$ or $louvain$ is depicted in Figures \ref{fig:VIfastReal} and \ref{fig:VIlouvainReal}, respectively.
In each figure, panels $(a),(c),(e),(g)$ show the VI curves for the null model ($VIc_{random}$) and for the actual model ($VIc$). In all cases the two curves appear to be very close for high perturbation values and depart from each other as the perturbation level approaches zero. In panels $(b),(d),(f),(h)$ this is quantified locally by a specific adjusted p-value in each sub-interval. Also in this case, significant p-values fall below the horizontal red line corresponding to the critical value of 0.05. As expected, using either $louvain$ or $fast$ $greedy$ as clustering method leads to similar conclusions. As already observed for the synthetic datasets, if we strongly perturb a network ($p\geq 0.5$, i.e. we rewire more than $50\%$ of the edges) it approaches a random network; indeed, the two $VI$ curves become very close, and the p-values may remain above the significance threshold.
\begin{figure}[h]
\centering
\subfloat[barabasi]{\includegraphics[width=0.4\textwidth]{fda_test_barabasi_fastgreedy_VI_functional.pdf}}%
\subfloat[barabasi]{\includegraphics[width=0.4\textwidth]{fda_test_barabasi_fastgreedy_pvalues.pdf}}\\
\subfloat[facebook]{\includegraphics[width=0.4\textwidth]{fda_test_facebook_fastgreedy_VI_functional.pdf}}
\subfloat[facebook]{\includegraphics[width=0.4\textwidth]{fda_test_facebook_fastgreedy_pvalues.pdf}}\\
\end{figure}
\setcounter{subfigure}{4}
\begin{figure}[h]
\centering
\subfloat[nexus 5]{\includegraphics[width=0.4\textwidth]{fda_test_nexus5_fastgreedy_VI_functional.pdf}}%
\subfloat[nexus 5]{\includegraphics[width=0.4\textwidth]{fda_test_nexus5_fastgreedy_pvalues.pdf}}\\
\subfloat[nexus 15]{\includegraphics[width=0.4\textwidth]{fda_test_nexus15_fastgreedy_VI_functional.pdf}}
\subfloat[nexus 15]{\includegraphics[width=0.4\textwidth]{fda_test_nexus15_fastgreedy_pvalues.pdf}}\\
\caption{VI plots on the clustering obtained via Fast Greedy on real datasets and the corresponding adjusted p-values of the Interval Testing procedure. Horizontal red line corresponds to the critical value 0.05. Light grey areas correspond to p-values below 0.05, dark grey areas correspond to p-values below 0.01.}\label{fig:VIfastReal}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[barabasi]{\includegraphics[width=0.4\textwidth]{fda_test_barabasi_louvain_VI_functional.pdf}}%
\subfloat[barabasi]{\includegraphics[width=0.4\textwidth]{fda_test_barabasi_louvain_pvalues.pdf}}\\
\subfloat[facebook]{\includegraphics[width=0.4\textwidth]{fda_test_facebook_louvain_VI_functional.pdf}}
\subfloat[facebook]{\includegraphics[width=0.4\textwidth]{fda_test_facebook_louvain_pvalues.pdf}}\\
\end{figure}
\setcounter{subfigure}{4}
\begin{figure}[h]
\centering
\subfloat[nexus 5]{\includegraphics[width=0.4\textwidth]{fda_test_nexus5_louvain_VI_functional.pdf}}%
\subfloat[nexus 5]{\includegraphics[width=0.4\textwidth]{fda_test_nexus5_louvain_pvalues.pdf}}\\
\subfloat[nexus 15]{\includegraphics[width=0.4\textwidth]{fda_test_nexus15_louvain_VI_functional.pdf}}
\subfloat[nexus 15]{\includegraphics[width=0.4\textwidth]{fda_test_nexus15_louvain_pvalues.pdf}}
\caption{VI plots on the clustering obtained via Louvain on real datasets and the corresponding adjusted p-values of the Interval Testing procedure. Horizontal red line corresponds to the critical value 0.05. Light grey areas correspond to p-values below 0.05, dark grey areas correspond to p-values below 0.01.}\label{fig:VIlouvainReal}
\end{figure}
\section{Conclusions and Discussions}
\label{Discussion}
In this paper we propose an effective procedure to evaluate the robustness of a clustering. Given a community detection method and a network of interest, our methodology makes it possible to clearly detect
whether the community structure found by an algorithm is statistically significant or a result of chance, permitting examination of the stability of the recovered partition. As suggested in \cite{Karrer2008}, we specify a perturbation strategy and a null model to build a set of procedures based on VI as stability measure. This enables building the VI curve as a function of the perturbation percentage and comparing it with the corresponding null model curve in the functional data analysis framework.
We point out that our methodology could also be used to compare different clustering methodologies: given two clusterings of the same network, we could test the agreement between the two recovered partitions via the direct comparison of the corresponding VI curves as defined by our procedure in Section \ref{VI}. For example, all three procedures we used point out that the $louvain$ method is able to recover a non-random clustering also at low modularity ($Q\geq 0.2$), whereas $fast$ $greedy$ needs a more defined structure ($Q\geq 0.4$).
However, the comparative evaluation of different community extraction methods is beyond the scope of the present paper. The two methodologies $louvain$ and $fast$ $greedy$ were indeed only instrumental to the exemplification of our procedure. Both were selected at this stage as they enable an automatic definition of the optimal number of communities and are based on the optimisation of the modularity, which plays a key role in describing community structures.
An interesting and straightforward extension of the current paper would be using a different clustering stability measure, for example the $Normalized$ $Mutual$ $Information$ measure proposed in \cite{Danon2005}. This would also lead to a comparison of the performance of different measures for community structure comparison.
\section{Introduction}
\label{SEC:Introduction}
Understanding the transverse structure of hadrons is an important step towards the three-dimensional imaging of hadrons. One of the key quantities that characterize such transverse structure is the transverse-momentum-dependent parton distribution functions (TMDPDFs), which are a natural generalization of collinear PDFs incorporating the transverse momentum of partons in the hadron, and provide crucial inputs for describing multi-scale, noninclusive observables at high-energy colliders such as the LHC~\cite{Lin:2020rut}. Currently, our knowledge of TMDPDFs mainly comes from studies of Drell-Yan and semi-inclusive deep-inelastic scattering processes where the transverse momenta of final-state particles are measured. QCD factorization theorems allow the relevant experimental observables to be related to TMDPDFs via perturbatively calculable kernels, and thus provide the theoretical basis for extracting TMDPDFs from such observables. In the past, various TMDPDF fits have appeared in the literature~\cite{Bacchetta:2017gcc,Scimemi:2017etj,Bertone:2019nxa,Scimemi:2019cmh,Bacchetta:2019sam,Bacchetta:2020gko}. However, calculating TMDPDFs from first principles has remained a challenge, because they are nonperturbative quantities defined in terms of light-cone correlations.
Early lattice efforts focused on extracting certain information on TMDPDFs by studying ratios of suitable correlators~\cite{Hagler:2009mb,Musch:2010ka,Musch:2011er,Engelhardt:2015xja,Yoon:2017qzo}, whereas the full distribution has also become accessible thanks to the proposal of large momentum effective theory (LaMET)~\cite{Ji:2013dva,Ji:2014gla,Ji:2020ect}, which provides, in principle, a general recipe to calculate light-front (LF) correlations from lattice QCD. In the past few years, there has been rapid progress~\cite{Ji:2014hxa,Ji:2018hvs,Ji:2019ewn,Ji:2020jeb,Zhang:2020dbb,Ebert:2018gzl,Ebert:2019okf,Ebert:2019tvc,Shanahan:2020zxr,Shanahan:2019zcq,Ebert:2020gxr,Vladimirov:2020ofp} on how to extract the quark TMDPDFs from appropriately defined quasi-LF correlations involving staple-shaped Wilson line operators. A viable matching between the quasi-TMDPDF and the TMDPDF, with a proper Euclidean construction of the soft function for the former, has been established, although either for not fully renormalized quasi-TMDPDFs~\cite{Ji:2019ewn} or in a scheme~\cite{Ebert:2019tvc} that introduces undesired nonperturbative effects at large longitudinal distances. In addition, there have been exploratory lattice studies of the soft function~\cite{Zhang:2020dbb} and of the Collins-Soper kernel~\cite{Shanahan:2020zxr,Schlemmer:2021aij} controlling the rapidity evolution of the TMDPDFs, as well as of the potential operator mixings under lattice regularization~\cite{Shanahan:2019zcq,Green:2020xco}.
Another important quantity that encompasses information on the transverse structure of hadrons is the TMD wave functions (TMDWFs), or LF wave functions, from which one can obtain all parton densities. They are defined by the same staple-shaped Wilson line operators, and thus their lattice computation follows a similar strategy to that for the TMDPDFs~\cite{Ji:2020ect}. The quasi-TMDWF also enters the calculation of the soft function through the TMD factorization of a light-meson form factor at large momentum transfer~\cite{Ji:2020ect,Zhang:2020dbb}.
In this work, we perform a systematic analysis of the mixing pattern of staple-shaped Wilson line operators under lattice regularization using symmetry considerations. A similar analysis has been performed for the straight Wilson line operators defining the quark quasi-PDFs in Ref.~\cite{Chen:2017mie}, where the authors analyzed the transformation properties of straight Wilson line operators with various Dirac structures and found the same mixing observed in one-loop lattice perturbation theory calculations for Wilson fermions~\cite{Constantinou:2017sej}. The lattice perturbation theory studies have also been extended to quark quasi-TMDPDFs in Ref.~\cite{Constantinou:2019vyb}, revealing certain mixings among operators with different Dirac structures (see also Ref.~\cite{Green:2020xco}). However, a systematic analysis of the operator mixing pattern from symmetry considerations is still missing. Here we generalize the discussion of Ref.~\cite{Chen:2017mie} to staple-shaped Wilson line operators. The results show mixings that are not present in one-loop lattice perturbation theory calculations. We also discuss the renormalization and matching of quasi-TMDPDFs and -TMDWFs in a scheme where no extra nonperturbative effects are introduced at large distances in the renormalization stage, in the same spirit as the hybrid renormalization~\cite{Ji:2020brr} proposed recently for the quasi-PDFs.
The rest of the paper is organized as follows: In Sec.~\ref{SEC:qTMD}, we give a brief overview of the quasi-TMDPDFs and -TMDWFs in LaMET, both of which are defined in terms of staple-shaped Wilson line operators along spatial directions. We then discuss in Sec.~\ref{SEC:mixing} the transformation properties of such operators and their mixing pattern under lattice regularization. In Sec.~\ref{SEC:renmat} we discuss the renormalization and matching of quasi-TMDPDFs and -TMDWFs in a scheme following the spirit of hybrid renormalization and give the relevant one-loop matching kernel. Finally, we conclude in Sec.~\ref{SEC:conclusion}.
\section{Quasi-TMDPDFs and -TMDWFs in LaMET}
\label{SEC:qTMD}
Let us begin with the definition of quasi-TMDPDFs in LaMET with Euclidean metric in four dimensions~\cite{Ji:2014hxa,Ji:2018hvs,Ebert:2019okf,Ji:2019ewn}
\begin{align}\label{eq:quasi_TMD}
& \tilde f(z ,b_\perp,\mu,P^z) \\
&=\! \lim_{L \rightarrow \infty} \frac{\langle PS| \bar \psi\big(\frac{\vec z+\vec{b}_\perp}{2}\big)\Gamma{\overline W}(\vec z, \vec{b}_\perp;\vec L)\psi\big(-\!\frac{\vec z+\vec{b}_\perp}{2}\big) |PS\rangle}{\sqrt{Z_E(2L,b_\perp,\mu)}} \ , \nonumber
\end{align}
where we have chosen a symmetric setup to simplify the analysis. $P=(P^0,0,0,P^z)$ is the hadron momentum and $S$ denotes its spin, $\vec L\equiv L n_z$, $\vec z=z n_z$ with $ n_z=(0,0,0,1)$ being a unit four-vector along the spatial $z$ direction, and $\vec{b}_\perp=(0,b_1,b_2,0)$. The staple-shaped Wilson line takes the following form
\begin{align}\label{eq:staplez}
{\overline W}(\vec z, \vec{b}_\perp;L)&=W_z^\dagger\Big(\vec L+\frac{\vec b_\perp}{2}; \frac{\vec z}{2}-\vec L\Big)
W_{\perp}\Big(\vec L -\frac{{\vec b}_\perp}{2};\vec b_\perp\Big)\nonumber\\
&\times W_{z}\Big(-\frac{\vec z+\vec{b}_\perp}{2};\vec L+\frac{\vec z}{2}\Big), \nonumber\\
W_{i}(\eta;L)&= {\cal P}{\rm exp}\Big[-ig\int_{0}^{L} dt\, {n}_i\cdot A(\eta^\mu+t n_i^\mu)\Big],
\end{align}
for an illustration see Fig.~\ref{fig:stapleWL}. $\Gamma$ denotes a Dirac matrix.
$\sqrt{Z_E(2L,b_\perp,\mu)}$ is the square root of the vacuum expectation value of a flat rectangular Euclidean Wilson loop along the $n_z$ direction with length $2L$ and width $b_\perp$:
\begin{align}\label{eq:Z_E}
Z_E(2L,b_\perp,\mu)&=\frac{1}{N_c}{\rm Tr}\langle 0|{W_\perp^\dagger(-\vec\xi_-;-b_\perp)W_z^\dagger(\vec\xi_+;-2L)}\nonumber\\
&{\times W_{\perp}(\vec \xi_-; b_\perp) W_z(-\vec \xi_+;2L)}|0\rangle \, ,
\end{align}
where
\begin{align}
\vec \xi_\pm = L \vec n_z \pm \frac{\vec b_\perp}{2}\, .
\end{align}
In contrast to the usual TMDPDF, which contains lightlike separations between quark fields, the quasi-TMDPDF defined above involves spatial separations only. However, the same lightcone physics is projected out when the hadron momentum becomes infinite, as one can unboost the hadron at large momentum and apply the boost operator to the spatial correlator in Eq.~(\ref{eq:quasi_TMD}), yielding the same LF correlator defining the TMDPDF~\cite{Ji:2019ewn,Ji:2020ect}. This is similar to shifting from the Schr\"odinger picture to the Heisenberg picture in quantum mechanics. Note that the LF correlator in the TMDPDF leads to rapidity divergences which require a proper regulator. Given the finite hadron momentum, the quasi-TMDPDF can, in a sense, be viewed as a definition of the TMDPDF with the hadron momentum as a rapidity regulator~\cite{Ji:2019ewn}.
\begin{figure}[tbp]
\includegraphics[width=0.25\textwidth]{stapleWL}
\caption{Staple-shaped gauge link used to define the quasi-TMDPDF and -TMDWF.}
\label{fig:stapleWL}
\end{figure}
In the above definition, the length of the longitudinal link is also kept finite to regulate the pinch-pole singularity associated with infinitely long Wilson lines~\cite{Ji:2018hvs}. Such link-length dependence drops out in the ratio of Eq.~(\ref{eq:quasi_TMD}), so that the final result has a proper $L\to\infty$ limit. The introduction of $Z_E$ also removes additional contributions arising from the transverse gauge link.
From Eq.~(\ref{eq:quasi_TMD}), the momentum space density is given by the following Fourier transform
\begin{equation} \label{eq:TMD-mom}
\tilde f(x, k_\perp,\mu,\zeta_z) = \int\frac{d\lambda d^2\vec b_\perp}{(2\pi)^3}e^{ix\lambda+i\vec k_\perp\cdot \vec b_\perp}\tilde f(\lambda ,b_\perp,\mu,P^z)\, ,
\end{equation}
with $\lambda=z P^z$ being the quasi-LF distance, and $\zeta_z=(2xP^z)^2$ is the Collins-Soper scale. The thus defined quasi-TMDPDF depends on two scales, $\mu$ and $\zeta_z$. The dependence on $\mu$ is controlled by the renormalization group equation~\cite{Collins:1981uk,Ji:2004wu}
\begin{equation}\label{eq:RG_TMD}
\mu^2\frac{d}{d\mu^2}\ln \tilde f(x, b_\perp,\mu,\zeta_z)=\gamma_F(\alpha_S(\mu)),
\end{equation}
where $\alpha_S=g^2/(4\pi)$, and $\gamma_F$ is most easily obtained from the anomalous dimension of the quark field in the axial gauge $A^z=0$. In the auxiliary field language~\cite{Dorn:1986dt,Ji:2017oey,Green:2017xeu}, a straight segment of Wilson line can be replaced by the two-point function of an auxiliary heavy quark field, $\gamma_F$ then represents the anomalous dimension of the auxiliary heavy-light quark current. The Wilson line cusp anomalous dimension does not enter because it has been canceled between the numerator and denominator in Eq.~(\ref{eq:quasi_TMD}).
The $\zeta_z$ dependence characterizes how the quasi-TMDPDF changes with momentum or rapidity, and the evolution is controlled by the Collins-Soper equation~\cite{Collins:1981uk,Ji:2014hxa}
\begin{equation}\label{eq:CS_TMD}
P^z\frac{d}{dP^z}\ln \tilde f(x, b_\perp, \mu, \zeta_z)=K(b_\perp,\mu)+G(\zeta_z,\mu),
\end{equation}
where $K(b_\perp,\mu)$ is the Collins-Soper kernel, which is independent of the rapidity regularization, while $G(\zeta_z,\mu)$ is a perturbative term existing only in the off-light-cone regularization scheme; its explicit expression at one loop can be found in Ref.~\cite{Ji:2019ewn}.
Analogously, one can define the quasi-TMDWF with the same staple-shaped Wilson line operator, but now between the vacuum and a hadron state~\cite{Ji:2020ect}
\begin{align}\label{eq:qTMDWF}
&\tilde\psi(z,b_\perp,\mu,P^z)\\
&=\! \lim_{L \rightarrow \infty} \frac{\langle 0| \bar \psi\big(\frac{\vec z+\vec{b}_\perp}{2}\big)\Gamma{\overline W}(\vec z, \vec{b}_\perp;\vec L)\psi\big(-\!\frac{\vec z+\vec{b}_\perp}{2}\big) |PS\rangle}{\sqrt{Z_E(2L,b_\perp,\mu)}}. \nonumber
\end{align}
Its scale dependence is controlled by evolution equations similar to Eqs.~(\ref{eq:RG_TMD}) and (\ref{eq:CS_TMD}).
\section{Mixing pattern of staple-shaped Wilson line operators on the lattice}
\label{SEC:mixing}
To calculate the TMDPDFs or TMDWFs, we need to calculate the coordinate space correlation functions defined above on the lattice. A discretized lattice has less symmetry than the continuum, and thus more operator mixings can appear. Moreover, chiral symmetry might be broken after the fermion fields are discretized, leading to additional operator mixings. Nevertheless, the lattice action exhibits important discrete symmetries: parity, time reversal and charge conjugation. Investigating the transformation properties of relevant operators under these symmetries helps to unravel potential mixings that can occur. Such an analysis has been done for straight Wilson line operators defining the quasi-PDFs in Ref.~\cite{Chen:2017mie}. In this section, we extend it to staple-shaped Wilson line operators relevant for the quasi-TMDPDFs and -TMDWFs.
\subsection{${\cal P}$, ${\cal T}$, ${\cal C}$ and axial transformations}
For the convenience of the reader, we briefly summarize in this subsection the transformation properties of fields under parity (${\cal P}$), time-reversal (${\cal T}$), charge conjugation (${\cal C}$), and the axial transformation. We follow the convention of Ref.~\cite{Chen:2017mie} with the Euclidean spacetime coordinates $(x, y, z, \tau) = (1, 2, 3, 4)$. Dirac matrices are chosen to be Hermitian: $\gamma_{\mu}^{\dagger}=\gamma_{\mu}$, and $\gamma_5=\gamma_1\gamma_2\gamma_3\gamma_4$.
Since there is no distinction between time and space in Euclidean space, the parity transformation in the $\mu$-direction, denoted by ${\cal P}_{\mu}$ with $\mu\in\{1,2,3,4\}$, can be defined with respect to any direction
\begin{eqnarray}
\psi(x)&\xrightarrow[]{{\cal P}_{\mu}}&
\psi(x)^{{\cal P}_{\mu}}=\gamma_{\mu}\psi(\mathbb{P}_{\mu}(x)),\\
\overline{\psi}(x)&\xrightarrow[]{{\cal P}_{\mu}}&
\overline{\psi}(x)^{{\cal P}_{\mu}}=\overline{\psi}(\mathbb{P}_{\mu}(x))\gamma_{\mu},\\
U_{\nu\not=\mu}(x)&\xrightarrow[]{{\cal P}_{\mu}}&
U_{\nu}(x)^{{\cal P}_{\mu}}
=U_{-\nu}^{\dagger}(\mathbb{P}_{\mu}(x)),\\
U_{\mu}(x)&\xrightarrow[]{{\cal P}_{\mu}}&
U_{\mu}(x)^{{\cal P}_{\mu}}=U_{\mu}(\mathbb{P}_{\mu}(x)),
\end{eqnarray}
where $\mathbb{P}_{\mu}(x)$ is the vector $x$ with sign flipped except for the component in the $\mu$-direction. In other words, it is the parity transformation in the $x_\mu$ direction. $U_\mu(x)$ denotes a generic Wilson line along the $\mu$ direction with the starting point at $x$.
Analogously, the time reversal transformation ${\cal T}_{\mu}$ can also be generalized in any direction in Euclidean space
\begin{eqnarray}
\psi(x)&\xrightarrow[]{{\cal T}_{\mu}}&
\psi(x)^{{\cal T}_{\mu}}=\gamma_{\mu}\gamma_5\psi(\mathbb{T}_{\mu}(x)),\\
\overline{\psi}(x)&\xrightarrow[]{{\cal T}_{\mu}}&
\overline{\psi}(x)^{{\cal T}_{\mu}}=
\overline{\psi}(\mathbb{T}_{\mu}(x))\gamma_5\gamma_{\mu},\\
U_{\mu}(x)&\xrightarrow[]{{\cal T}_{\mu}}&
U_{\mu}(x)^{{\cal T}_{\mu}}=U_{-\mu}^{\dagger}(\mathbb{T}_{\mu}(x)),\\
U_{\nu\not=\mu}(x)&\xrightarrow[]{{\cal T}_{\mu}}&
U_{{\nu}}(x)^{{\cal T}_{\mu}}
=U_{{\nu}}(\mathbb{T}_{\mu}(x)),
\end{eqnarray}
where $\mathbb{T}_{\mu}(x)$ is the vector $x$ with sign flipped only in the $\mu$-direction.
Charge conjugation ${\cal C}$ transforms particles into their antiparticle counterparts. Under charge conjugation, one has
\begin{eqnarray}
\psi(x)&\xrightarrow[]{\cal C}&
\psi(x)^{\cal C}=C^{-1}\overline{\psi}(x)^{\top},\\
\overline{\psi}(x)&\xrightarrow[]{\cal C}&
\overline{\psi}(x)^{\cal C}=-\psi(x)^{\top}C,\\
U_{\mu}(x)&\xrightarrow[]{\cal C}&
U_{\mu}(x)^{\cal C}=U_{\mu}(x)^{\ast}=(U_{\mu}^{\dagger}(x))^{\top},
\end{eqnarray}
with $\top$ denoting the transpose operation, and
\begin{eqnarray}
C\gamma_{\mu}C^{-1}=-\gamma_{\mu}^{\top}, \qquad
C\gamma_5C^{-1}=\gamma_5^{\top}.
\end{eqnarray}
The continuous axial rotation $\cal A$ of the fermion field reads
\begin{align}
\psi(x)&\xrightarrow[]{\cal{A}}\psi'(x)=e^{i\alpha\gamma_5}\psi(x),\nonumber\\
\overline{\psi}(x)&\xrightarrow[]{\cal{A}}\overline{\psi}'(x)
=\overline{\psi}(x)e^{i\alpha\gamma_5}.
\label{eq:chitransf}
\end{align}
\subsection{Operator mixings}
Based on transformation properties of the fields listed above, we can investigate the transformation under discrete symmetries of the following nonlocal operators involving a staple-shaped Wilson line
\begin{align}\label{eq:O_Gamma}
O_\Gamma(z, \vec b_\perp, L)&=\bar \psi\big(\frac{\vec z+\vec{b}_\perp}{2}\big)\Gamma{\overline W}(\vec z, \vec{b}_\perp;L)\psi\big(-\!\frac{\vec z+\vec{b}_\perp}{2}\big).
\end{align}
\iffalse
\begin{widetext}
\centering
\hspace{4em}\begin{table}[t]
\begin{tabular}{ccccccccc}
\hline\hline
& $\Gamma={\bf 1}_{1234}$ & $\gamma_{i,1234}$ & $\gamma_{3,1234}$ & $\gamma_{5,1234}$ & $i\gamma_i\gamma_{5,1234}$ & $i\gamma_3\gamma_{5,1234}$ & $\sigma_{i3,1234}$ & $\epsilon_{ijk}\sigma_{jk,1234}$\\
\hline
${\cal P}_3$ & EEEE & OOOO & EEEE & OOOO & EEEE & OOOO & OOOO & EEEE\\
${\cal P}_{l\not=3}$ & EEOO & EEOO$_{(l=i)}$ & OOEE & OOEE & OOEE$_{(l=i)}$ & EEOO & OOEE$_{(l=i)}$ & EEOO$_{(l=i)}$\\
& & OOEE$_{(l\not=i)}$ & & & EEOO$_{(l\not=i)}$ & & EEOO$_{(l\not=i)}$ & OOEE$_{(l\not=i)}$\\
${\cal T}_3$ & EEOO & EEOO & OOEE & OOEE & OOEE & EEOO & OOEE & EEOO\\
${\cal T}_{l\not=3}$ & EEEE & OOOO$_{(l=i)}$ & EEEE & OOOO & EEEE$_{(l=i)}$ & OOOO & OOOO$_{(l=i)}$ & EEEE$_{(l=i)}$\\
& & EEEE$_{(l\not=i)}$ & & & OOOO$_{(l\not=i)}$ & & EEEE$_{(l\not=i)}$ & OOOO$_{(l\not=i)}$\\
${\cal C}$ & EOOE & OEEO & OEEO & EOOE & EOOE & EOOE & OEEO & OEEO\\
${\cal A}$ & V & I & I & V & I & I & V & V\\
\hline\hline
\end{tabular}
\caption{Transformation properties of the staple-shaped Wilson line operators $O_{\Gamma}^\alpha(z, \vec b_\perp, L)$, $i,j,k\not=3$. Other notations follow those of Ref.~\cite{Chen:2017mie}.}
\label{tab:CPTAtransf}
\end{table}
\end{widetext}
\fi
Given that the hadron is to be boosted along the $z$-direction, we treat the $z$-direction differently from other directions, as was done in the case of straight Wilson line operators, and categorize the Dirac structure as follows
\begin{eqnarray}
\Gamma\in\{{\bf{1}}, ~\gamma_i, ~\gamma_3, ~\gamma_5, ~i\gamma_i\gamma_5,
~i\gamma_3\gamma_5, ~\sigma_{i3}, ~\epsilon_{ijk}\sigma_{jk}\},
\end{eqnarray}
where $i, j, k\not=3$.
From the field transformation properties in the previous subsection, one can work out the transformation properties of $O_{\Gamma}(z, \vec b_\perp, L)$ under the discrete symmetries:
\begin{align}
O_{\Gamma}(z, \vec b_\perp, L)&\xrightarrow[]{{\cal P}_{i\not=3}}
O_{\gamma_i\Gamma\gamma_i}(-z,{-}\vec b_\perp^{{(-)^i}}, -L), \nonumber\\
O_{\Gamma}(z,\vec b_\perp, L)&\xrightarrow[]{{\cal P}_3}
O_{\gamma_3\Gamma\gamma_3}(z,-\vec b_\perp,L),\nonumber\\
O_{\Gamma}(z,\vec b_\perp, L)&\xrightarrow[]{{\cal T}_{i\not=3}}
O_{\gamma_5\gamma_i\Gamma\gamma_i\gamma_5}(z,\vec b_\perp^{{(-)^i}}, L),\nonumber\\
O_{\Gamma}(z,\vec b_\perp, L)&\xrightarrow[]{{\cal T}_3}
O_{\gamma_5\gamma_3\Gamma\gamma_3\gamma_5}(-z,\vec b_\perp, -L),
\end{align}
{where $\vec b_{\perp,j}^{(-)^i}\equiv (-1)^{\delta_{ij}} \vec b_{\perp,j}$ with $j=1,2$ labeling the component of the transverse vector $\vec b_\perp$. Here no summation is implied over index $j$.}
Under $\cal C$, one has
\begin{equation}
O_{\Gamma}(z,\vec b_\perp, L)
\xrightarrow[]{\cal C} O_{(C\Gamma C^{-1})^T}(-z,-\vec b_\perp, L).
\end{equation}
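As a concrete illustration of these rules, consider the unpolarized Dirac structure $\Gamma=\gamma_4$. Using $\gamma_3\gamma_4\gamma_3=-\gamma_4$, $\gamma_5\gamma_3\gamma_4\gamma_3\gamma_5=\gamma_4$ and $(C\gamma_4 C^{-1})^\top=-\gamma_4$, one finds
\begin{align}
O_{\gamma_4}(z,\vec b_\perp, L)&\xrightarrow[]{{\cal P}_3} -O_{\gamma_4}(z,-\vec b_\perp, L),\nonumber\\
O_{\gamma_4}(z,\vec b_\perp, L)&\xrightarrow[]{{\cal T}_3} O_{\gamma_4}(-z,\vec b_\perp, -L),\nonumber\\
O_{\gamma_4}(z,\vec b_\perp, L)&\xrightarrow[]{\cal C} -O_{\gamma_4}(-z,-\vec b_\perp, L),
\end{align}
i.e., $O_{\gamma_4}$ maps back to itself up to a sign and sign flips of its arguments.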
We can, therefore, define the following combinations that are eigenstates under the discrete symmetries
\begin{align}
{\cal O}^n_\Gamma(z, \vec b_\perp, L) & = \big\{\big[\big(O_\Gamma(z,\vec b_\perp, L)+s_{n0} O_\Gamma(z,b_1,-b_2, L)\big)\notag\\
&\,\,+s_{n1}\left(b_1\mapsto -b_1\right)\big]+s_{n2}\big[z\mapsto -z\big]\big\}\notag\\
&\quad+s_{n3}\big\{ L\mapsto -L \big\}\, ,
\end{align}
where $s_{nj}=(-1)^{\floor{n/2^j}}$. ${\cal O}^n_\Gamma (z,\vec b_\perp,L)$ with $n=1,2,\cdots,16$ is a linear combination of the 16 independent operators constructed by freely choosing the sign in front of each argument $z,b_1,b_2,L$. Here $\floor{x}$ is the floor function.
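As an explicit example of this construction, for $n=16$ all four signs are positive, $s_{16,j}=(-1)^{\floor{16/2^j}}=+1$ for $j=0,\ldots,3$, and one obtains the fully symmetrized combination
\begin{align}
{\cal O}^{16}_\Gamma(z, \vec b_\perp, L)=\sum_{\eta_1,\eta_2,\eta_3,\eta_4=\pm1} O_\Gamma(\eta_1 z,\eta_2 b_1,\eta_3 b_2,\eta_4 L)\, ,
\end{align}
while the other ${\cal O}^n_\Gamma$ differ by a relative minus sign for each argument whose corresponding sign factor equals $-1$.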
The ${\cal O}^n_{\Gamma}$ therefore form an operator basis with definite ${\cal C}$, ${\cal P}$, and ${\cal T}$ properties, and only operators with the same ${\cal C}$, ${\cal P}$, ${\cal T}$ eigenvalues can mix, making the mixing pattern manifest.
Nevertheless, the ${\cal O}^n_\Gamma$ basis given above is much more complicated than the original one without $-z$, $-b_\perp$, and $-L$ dependence.
Thus, in the following we present the results for the operators $O_\Gamma$ defined in Eq.~\eqref{eq:O_Gamma} only, although our analysis is mainly based on the operator basis ${\cal O}^n_{\Gamma}$.
For the operators in Eq.~\eqref{eq:O_Gamma}, Lorentz covariance helps to identify which mixings are forbidden.
For example, the scalar operator $O_{\bf 1}$ does not mix with $O_{\gamma_5}$ and $O_{\gamma_3\gamma_5}$ for the TMDPDF with a staple-shaped Wilson line. The same is true for $O_{\gamma_\mu}$ and $O_{\gamma_5}$, $O_{\gamma_3}$ and $O_{\gamma_3\gamma_5}$,
etc. These patterns are consistent with the observations in lattice computations \cite{Shanahan:2019zcq,Zhang:2020dbb} taking into account the external off-shell quark state.
However, it is worth pointing out that the operators $O_\Gamma$ do not form a complete basis by themselves, as they are not operators of definite twist, and operators of higher twist mix with higher-Fock-state operators containing additional elementary fields due to the QCD equations of motion. In fact, a proper treatment of the mixings between operators observed in the one-loop calculation of Ref.~\cite{Ebert:2019tvc} requires the introduction of higher Fock states.
This is already evident from operators with straight Wilson line at twist-3 level, see Ref.~\cite{Braun:2021aon} and references therein.
Lorentz symmetry allows certain mixings to be identified as mixings with higher Fock states which are statistically suppressed in general. This explains the smallness of mixings between certain Dirac structures observed in~\cite{Shanahan:2019zcq}. A thorough investigation including operators of higher Fock state is beyond the scope of the present paper, and left to future work.
On the other hand, we find that $O_\Gamma$ mixes in general with $O_{\gamma_3\Gamma}$ for arbitrary Dirac structure $\Gamma$, provided that a fermion action that does not preserve chiral symmetry is used.
This is in contrast to operators with a straight Wilson line, where such mixings do not occur for a specific set of $\Gamma$, e.g., $\Gamma=\gamma_4$ in the unpolarized case. The above mixing pattern contains the mixings observed in lattice perturbation theory calculations~\cite{Constantinou:2019vyb}, but also contains mixings that are not present in such calculations. Note that the definition of quasi-TMDPDFs or -TMDWFs also involves a factor of $\sqrt {Z_E}$, but it does not change the mixing pattern discussed above, as it is a common factor for operators with all Dirac structures and depends only on the lengths $b_\perp$ and $L$.
\section{Renormalization and matching of quasi-TMDPDFs and -TMDWFs in a simple scheme}
\label{SEC:renmat}
In this section, we discuss the renormalization and matching of quasi-TMDPDFs and -TMDWFs in a simple scheme that does not introduce extra non-perturbative effects at large distances, following the same spirit as the hybrid renormalization scheme introduced in Ref.~\cite{Ji:2020brr}.
Using the auxiliary field formalism~\cite{Dorn:1986dt}, the straight Wilson line operators have been shown to be multiplicatively renormalized~\cite{Ji:2017oey,Green:2017xeu,Ishikawa:2017faj}. The same can be shown to be true for the staple-shaped operators, with the renormalization factors eliminating the power divergences associated with the Wilson line, the cusp divergences as well as the endpoint divergences~\cite{Ebert:2019tvc}. Thus, the renormalization of the quasi-TMDPDFs and quasi-TMDWFs can be carried out in analogy with that of the quasi-PDFs, and for the latter a commonly used renormalization scheme is the regularization-independent momentum subtraction (RI/MOM) scheme (or its variation $\rm{RI'/MOM}$)~\cite{Constantinou:2017sej,Stewart:2017tvs,Alexandrou:2017huk,Chen:2017mzz} or the ratio scheme~\cite{Radyushkin:2017cyf,Orginos:2017kos,Braun:2018brg,Li:2020xml}. These schemes have the advantage of avoiding certain discretization effects at short distances. The $\rm{RI'/MOM}$ renormalization for the quasi-TMDPDFs has been discussed in the literature~\cite{Constantinou:2019vyb,Ebert:2019tvc,Shanahan:2019zcq}. However, as pointed out in Ref.~\cite{Ji:2020brr}, both the RI/MOM and the ratio schemes introduce undesired non-perturbative effects at large $z$ in the renormalization stage, and thus become unreliable at large distances. This can be clearly seen in a recent analysis of data at multiple lattice spacings in Ref.~\cite{Huo:2021rpe}. In contrast, the Wilson line mass subtraction scheme~\cite{Chen:2016fxx,Ishikawa:2017faj} does not have this issue. Based on this, an alternative renormalization strategy, the hybrid renormalization, has been proposed in Ref.~\cite{Ji:2020brr}, which utilizes the advantages of different schemes at short and long distances. 
Another issue with the RI/MOM scheme is that it involves off-shell external states, which bring in a lot of complications when going to higher orders in perturbation theory~\cite{Chen:2020ody} or dealing with gauge particles such as gluons~\cite{Zhang:2018diq,Wang:2019tgg}. This can be avoided if one chooses physical matrix elements for renormalization.
The discussion above indicates that for the quasi-TMDPDFs or -TMDWFs, one shall also switch to a more reliable renormalization scheme such as the hybrid scheme. Fortunately, the lattice study of quasi-TMDPDFs/-TMDWFs focuses on the non-perturbative or large-$b_\perp$ region (for small $b_\perp$ the TMDPDFs can be studied through a factorization into integrated PDFs; a similar factorization is expected for the TMDWFs). Thus, even at small longitudinal separation $z$ one does not need to worry about the discretization effects plaguing the quasi-PDFs. As a result, we can perform the renormalization in a simple manner by removing the logarithmic and linear divergences separately, in the same spirit as the Wilson line mass subtraction scheme~\cite{Chen:2016fxx,Ishikawa:2017faj}.
Bearing in mind the mixing discussed in the previous section, we can write down the following renormalization
\begin{align}\label{eq:renorm}
\begin{pmatrix}
{\bar O}^B_{\Gamma} \\ {\bar O}^B_{\Gamma'}
\end{pmatrix}
&=
{\cal Z}
\begin{pmatrix}
{\bar O}_{\Gamma}^R \\ {\bar O}_{\Gamma'}^R
\end{pmatrix}
=
\begin{pmatrix}
{\cal Z}_{\Gamma\Gamma} & {\cal Z}_{\Gamma\Gamma'}\\ {\cal Z}_{\Gamma'\Gamma} & {\cal Z}_{\Gamma'\Gamma'}
\end{pmatrix}
\begin{pmatrix}
{\bar O}_{\Gamma}^R \\ {\bar O}_{\Gamma'}^R
\end{pmatrix}
+{\rm{h.f.}}
\nonumber\\
&=
Z\begin{pmatrix}
{Z}_{\Gamma\Gamma} & {Z}_{\Gamma\Gamma'}\\ {Z}_{\Gamma'\Gamma} & {Z}_{\Gamma'\Gamma'}
\end{pmatrix}
\begin{pmatrix}
{\bar O}_{\Gamma}^R \\ {\bar O}_{\Gamma'}^R
\end{pmatrix}+{\rm{h.f.}}\ ,
\end{align}
where ${\rm h.f.}$ stands for higher Fock states, which are an integral part of a complete operator basis.
They are generated by terms proportional to the external quark momenta if an external quark state in momentum space is used in loop calculations.
The investigation of their contributions is beyond the scope of this paper and left for future work. The superscripts $B$ and $R$ denote bare and renormalized operators, respectively, $\Gamma'=\gamma_3\Gamma$, and we assume that the factor $\sqrt {Z_E}$ has been divided out of all $\bar O_\Gamma$'s; we also ignore the arguments for notational simplicity. In the second row, we have separated out an overall renormalization factor $Z$ because the multiplicative renormalization is independent of the Dirac structure involved in the operator~\cite{Ji:2017oey,Musch:2010ka}. Moreover, the linear and cusp divergences associated with the Wilson line are canceled by $\sqrt {Z_E}$, thus one only needs to renormalize the remaining logarithmic divergences at the endpoints of the operator. If operator mixing is absent, there is an easy way to achieve this. One can calculate the straight-line operator matrix elements at two different distances $z_0$ and ${z_0}/2$ (with $z_0$ in the perturbative region) and form a ratio which effectively removes the factorized linear divergence ${\rm e}^{\delta m |z|}$ from the self-energy of the Wilson line while still retaining the required logarithmic divergence,
\begin{align}\label{eq:renormZ}
Z&=\frac{\tilde h_\Gamma^2(z_0/2, P^z)}{\tilde h_\Gamma(z_0, P^z)}\, ,
\end{align}
where $\tilde h_\Gamma$ can be chosen, e.g., as the zero-momentum hadron matrix element used in the ratio scheme or the off-shell quark matrix element used in the RI/MOM scheme, and $P^z$ is not necessarily the same as the momentum used in calculating the quasi-TMDPDF matrix element. For example, in the unpolarized case, one can use
\begin{align}
&\tilde h_{\gamma^t}(z_0,P^z=0)\\
&=\frac{1}{2P^t}\langle P^z=0|\bar\psi(z_0)\gamma^t W_z(z_0,0)\psi(0)|P^z=0\rangle, \nonumber
\end{align}
where $P$ denotes the hadron momentum, or
\begin{align}
&\tilde h_{\gamma^t}(z_0,p^z=0)\nonumber\\
&=\left.\frac{\sum_s\langle p,s|O_{\gamma^t}(z_0)|p,s\rangle}{\sum_s\langle p,s|O_{\gamma^t}(z_0)|p,s\rangle_{\rm tree}}\right|_{\tiny{p^2=-\mu_R^2,\,p^z=0}}\, ,
\end{align}
with $p,s$ being the off-shell quark momentum and spin, respectively.
Note that although the RI/MOM matrix element exhibits a non-universal linear divergence behavior depending on the lattice action used~\cite{Huo:2021rpe}, it is still allowed here because in $Z$ all such linear divergences cancel out by construction. In this way, what is left in $Z$ is just the endpoint renormalization factors associated with some perturbative corrections.
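To see the cancellation explicitly, write the matrix element as $\tilde h_\Gamma(z,P^z)={\rm e}^{\delta m |z|}\,\bar h_\Gamma(z,P^z)$, where $\bar h_\Gamma$ is free of the linear divergence. The ratio in Eq.~(\ref{eq:renormZ}) then becomes
\begin{align}
Z=\frac{{\rm e}^{\delta m |z_0|}\,\bar h^2_\Gamma(z_0/2,P^z)}{{\rm e}^{\delta m |z_0|}\,\bar h_\Gamma(z_0,P^z)}=\frac{\bar h^2_\Gamma(z_0/2,P^z)}{\bar h_\Gamma(z_0,P^z)}\, ,
\end{align}
so the exponentiated linear divergence drops out, while the $z$-independent multiplicative endpoint renormalization factor contained in $\bar h_\Gamma$ survives in $Z$, as required.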
However, in the presence of operator mixing, one needs to determine the mixing matrix, which requires calculating certain quasi-TMDPDF matrix elements. One can choose, for example, the hadron matrix element of the quasi-TMDPDF operator or the RI/MOM renormalization factor, but at perturbative $z$ and $b_\perp$, so that extra non-perturbative effects are avoided at the renormalization stage.
The first option of determining the mixing matrix entries is to follow the calculation of renormalization factors in the presence of mixing in the RI/MOM scheme~\cite{Constantinou:2017sej}.
Since all $z$- and $b_\perp$-dependent UV divergences have been canceled in $\bar O_\Gamma$, we prefer to choose renormalization conditions such that $Z_{ij}$ are independent of $z$ and $b_\perp$. In other words, we can still apply the RI/MOM renormalization conditions similar to those in~\cite{Constantinou:2019vyb,Shanahan:2019zcq}, but only at a given perturbative $z$ and $b_\perp$, and use the results for the renormalization of correlators at all distances.
That is, we may require
\begin{align}\label{eq:RIMOMnew}
&Z_q^{-1}(p) {\cal {\bar Z}}_{\Gamma\Gamma'}{\rm Tr}[\Lambda_{{\bar O}_\Gamma}(p)\Gamma']|_{\tiny{z=z_0,b_\perp=b_{\perp 0},p^\mu=p_0^\mu}} \nonumber\\
&={\rm Tr}[\Lambda_{{\bar O}_\Gamma}^{\rm tree}(p)\Gamma']|_{\tiny{z=z_0,b_\perp=b_{\perp 0},p^\mu=p_0^\mu}},
\end{align}
where ${\cal {\bar Z}}={\cal Z}^{-1}$, $z_0, b_{\perp0}$ are chosen within the perturbative region, $p_0^\mu=(p_0,0,0,0)$, $\Lambda_{{\bar O}_\Gamma}$ is the amputated Green's function of the operator ${\bar O}_\Gamma$ in an off-shell quark state, and the superscript ``tree" denotes its tree-level value. $Z_q$ is the quark wave function renormalization factor determined as
\begin{equation}
Z_q(p)=\frac{1}{12}{\rm Tr}[S^{-1}(p)S^{\rm tree}(p)],
\end{equation}
with $S(p)$ and $S^{\rm tree}(p)$ denoting the quark propagator and its tree-level value, respectively.
From Eq.~(\ref{eq:renorm}) and the renormalization factors calculated in Eq.~(\ref{eq:RIMOMnew}), one obtains the renormalized quasi-TMDPDF in the RI/MOM scheme
\begin{align}
\hspace{-1em}{\tilde f}_R(\Gamma,z, b_\perp)&=
{{\cal {\bar Z}}}_{\Gamma\Gamma}\tilde f_B(\Gamma,z,b_\perp)+{{\cal {\bar Z}}}_{\Gamma\Gamma'}\tilde f_B(\Gamma',z,b_\perp),
\end{align}
which can be converted to the $\overline{\rm MS}$ scheme by applying a conversion factor
\begin{align}
{\tilde f}_R^{\overline{\rm MS}}(\Gamma,z,b_\perp)={\tilde C}{\tilde f}_R(\Gamma,z,b_\perp).
\end{align}
The conversion factor in general can take a diagonal form~\cite{Constantinou:2019vyb} or a non-diagonal form~\cite{Ebert:2019tvc}, depending on the choice of projectors. Here we choose to define a diagonal conversion factor which reads for $\Gamma=\gamma^\mu$
\begin{align}
\tilde C_\Gamma&=1+\Big[\frac{1}{6}{\cal V}_\Gamma^{\mu\mu}(\mu,p_0)-Z_q^{(1)}\Big],
\end{align}
where ${\cal V}^{\mu\mu}_\Gamma$ has been calculated in Ref.~\cite{Ebert:2019tvc}, and no summation is implied over the index $\mu$.
Similar renormalization conditions can be constructed if the quasi-TMDPDF matrix elements of hadrons are used. There are different ways to choose such conditions. We choose to determine the renormalization factors by requiring that the renormalized quasi-TMDPDF matrix element be equal to its tree-level value at given perturbative $z$ and $b_\perp$. To avoid potential collinear divergences in the perturbative result, and thus in the renormalization factor, we can set $z=0$ or $P^z=0$. In the following, we illustrate this procedure by taking $\Gamma=\gamma^\mu$ (with $\Gamma'=\gamma_3\gamma^\mu$) as an example. We require that the renormalization factors satisfy
\begin{align}
&\langle P|{\bar O}_\Gamma^B|P\rangle_{\rm tree}=\big[{\cal {\bar Z}}_{\Gamma\Gamma}\langle P|{\bar O}_\Gamma^B|P\rangle\nonumber\\
&+{\cal {\bar Z}}_{\Gamma\Gamma'}\langle P|{\bar O}_{\Gamma'}^B|P\rangle\big]\big|_{z=z_0,b_\perp=b_{\perp 0},P^\mu=P_0^\mu},
\end{align}
where $P_0^\mu=(p^0,0,0,p^z)$. The corresponding conversion factor can be computed with dimensional regularization in the continuum, and also takes a diagonal form. For $z_0=0$, we have
\begin{align}
{\tilde C}_\Gamma=1-\frac{\alpha_S C_F}{2\pi}\Big(\frac{1}{2}L_b^2-\frac{3}{2}L_b+L_b L_z+\frac{1}{2}L_z^2-L_z+\frac{3}{2}\Big),
\end{align}
with $L_b=\ln(b_\perp^2\mu^2 e^{2\gamma_E}/4)$, $L_z=\ln(4p_z^2/\mu^2)$. While for $p^z=0$, we have
\begin{align}
{\tilde C}_\Gamma=1+\frac{\alpha_S C_F}{4\pi}\Big(L_b+4\ln\frac{b_\perp^2+z^2}{b_\perp^2}-4\frac{z}{b_\perp}\arctan\frac{z}{b_\perp}+1\Big).
\end{align}
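As a simple consistency check of the $p^z=0$ expression, in the limit $z\to 0$ both the logarithmic term $\ln[(b_\perp^2+z^2)/b_\perp^2]$ and the arctangent term vanish, and the conversion factor reduces to
\begin{align}
{\tilde C}_\Gamma\big|_{z\to 0}=1+\frac{\alpha_S C_F}{4\pi}\big(L_b+1\big)\, ,
\end{align}
which depends on the separation only through $L_b$, as expected for a purely transverse quark bilinear.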
With the renormalization and conversion factors above, we can convert the renormalized quasi-TMDPDF to the $\overline{\rm MS}$ scheme, and then match it to the TMDPDF. Following Refs.~\cite{Ji:2019ewn,Ji:2020ect}, the connection between the $\overline{\rm MS}$ scheme quasi-TMDPDF $\tilde f$ and TMDPDF $f^{\rm TMD}$ takes the following form,
\begin{align}\label{eq:quasiTMDmatch}
&f^{\rm TMD}(x,b_\perp,\mu,\zeta)=C\left(\frac{\zeta_z}{\mu^2}\right)\\ &\times e^{-\frac{1}{2}\ln (\frac{\zeta_z}{\zeta})K(b_\perp,\mu)}
{\tilde f}(x,b_\perp,\mu,\zeta_z)S_r^{\frac{1}{2}}(b_\perp,\mu)+...\ ,\nonumber
\end{align}
where $C(\zeta_z/\mu^2)$ is a perturbative matching kernel whose explicit expression to $O(\alpha_S)$ can be found in Ref.~\cite{Ji:2020ect}. The exponential term contains the Collins-Soper evolution kernel, which can be computed from the ratio of $\tilde f$'s at different rapidity scales. $\zeta$ corresponds to the Collins-Soper scale characterized by the full hadron momentum. $S_r$ is the so-called reduced soft factor. The omitted terms are power corrections of order ${\cal O}\left(\Lambda_{\rm QCD}^2/\zeta_z,M^2/(P^z)^2,1/(b^2_\perp\zeta_z) \right)$. Similar matching relations have also been discussed in Refs.~\cite{Constantinou:2019vyb,Ebert:2019tvc}.
For completeness, let us also summarize here how the remaining terms in Eq.~(\ref{eq:quasiTMDmatch}) can be calculated. The Collins-Soper kernel can be extracted by forming the ratio of quasi-TMDPDFs at two different $\zeta_z$'s~\cite{Ebert:2019tvc}
\begin{align}
\frac{\tilde f_R(x, b_\perp, \mu, \zeta_{z})}{\tilde f_R(x, b_\perp, \mu, \zeta'_{z})}=\frac{C(\frac{\zeta'_z}{\mu^2})}{C(\frac{\zeta_z}{\mu^2})}\Big(\frac{\zeta_z}{\zeta'_z}\Big)^{K(b_\perp,\mu)/2}\, .
\end{align}
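Solving this relation for the kernel, one finds
\begin{align}
K(b_\perp,\mu)=\frac{2}{\ln({\zeta_z}/{\zeta'_z})}\ln\Bigg[\frac{C(\frac{\zeta_z}{\mu^2})\,\tilde f_R(x, b_\perp, \mu, \zeta_{z})}{C(\frac{\zeta'_z}{\mu^2})\,\tilde f_R(x, b_\perp, \mu, \zeta'_{z})}\Bigg]\, ,
\end{align}
where the independence of the left-hand side of $x$ and of the momenta provides a nontrivial consistency check on the extraction.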
As for the soft function $S_r(b_\perp,\mu)$, it
is calculable from the form factor of a pseudoscalar light-meson state with quark content $\bar\psi\eta$ defined as~\cite{Ji:2020ect,Zhang:2020dbb}
\begin{align}
F(b_\perp, P, P', \mu)=\langle P'| \bar\eta(\vec b_\perp)\Gamma'\eta(\vec b_\perp)\bar\psi(0)\Gamma\psi(0)|P\rangle,
\end{align}
where $P,P'$ are two large momenta approaching opposite light-like directions. Making use of the quasi-TMDWFs $\tilde\psi_{\bar q q}$ in Eq.~\eqref{eq:qTMDWF} with light quark state $q$, the form factor takes the following factorized form,
\begin{align}
&F(b_\perp, P, P', \mu)=\nonumber\\
&\int dx\, dx'\, H(x, x'){\tilde\psi}^\dagger_{\bar q q}(x', b_\perp){\tilde\psi}_{\bar q q}(x, b_\perp)S_r(b_\perp, \mu),
\end{align}
where the perturbative matching kernel up to one-loop has been given in Ref.~\cite{Ji:2020ect}.
The non-perturbative renormalization and conversion factors also apply to the quasi-TMDWFs. After converting to the $\overline{\rm MS}$ scheme quasi-TMDWFs, one can use the matching derived in Ref.~\cite{Ji:2020ect} to obtain the TMDWFs, where the matching relation takes the same form as Eq.~(\ref{eq:quasiTMDmatch}) with a different matching kernel, whose result at one-loop has also been given in Ref.~\cite{Ji:2020ect}.
\section{Conclusion and outlook}
\label{SEC:conclusion}
In this paper, we have investigated the mixing patterns of staple-shaped Wilson line operators defining the quasi-TMDPDFs and -TMDWFs under lattice regularization using symmetry considerations. Our analysis shows that for non-chiral fermions the mixing with other Dirac structures is generally allowed, except for certain specific cases such as $O_1 \mathrel{\ooalign{$\leftrightarrow$\cr\hidewidth$/$\hidewidth}} O_{\gamma_5}$, etc.
There is no Dirac structure, however, that is \textit{completely} free from mixing with other structures.
{To be more specific, all $\Gamma$s mix with $\gamma^3\Gamma$.}
This is in contrast with the case of straight Wilson line operators, where for certain choices of $\Gamma$ no mixing occurs.
{We emphasize that we have excluded operator mixings with higher Fock states, which in itself is not self-consistent as the evolution of higher-twist two-particle TMDPDFs are not autonomous. A complete treatment is, however, beyond the scope of this paper and calls for further investigations.}
We have also discussed the renormalization of quasi-TMDPDFs and -TMDWFs in a simple scheme that does not introduce extra non-perturbative effects at large distances, and presented the relevant one-loop matching. The results will facilitate the numerical calculation of TMDPDFs and TMDWFs on the lattice.
It is worth pointing out that the operator mixing analysis presented here is based on transformation properties of the relevant operators under discrete symmetries, chiral symmetry and Euclidean symmetry only. For a more thorough analysis of the operator mixing pattern, it might be convenient to start from the auxiliary field formalism and replace the nonlocal operators with local ones in the framework of lattice field theory, and study the mixing of the former from that of the latter. This could be important not only for the staple-shaped Wilson line operators but also for the ones with a straight line, given that a different linear divergence behavior has been observed on lattice in the RI/MOM matrix elements from that of hadron matrix elements, which needs to be understood.
\vspace{2em}
\acknowledgments
We thank Yizhuang Liu, Wei Wang, Yibo Yang and Yong Zhao for valuable discussions. YJ is supported by the DFG grant SFB TRR 257. JHZ is supported in part by National Natural Science Foundation of China under grant No. 11975051, No. 12061131006, and by the Fundamental Research Funds for the Central Universities. SZ is supported by Jefferson Science Associates, LLC under U.S. DOE Contract \#DE-AC05-06OR23177 and by U.S. DOE Grant \#DE-FG02-97ER41028. RZ is supported in part by National Natural Science Foundation of China under grant No. 12075124.
\bibliographystyle{apsrev}
A common problem in mathematics is to classify interesting objects up to some
natural notion of equivalence. More precisely, one considers a class of
objects \( X \) and an equivalence relation \( E\) on \( X \), and tries to
find a set of complete invariants \( I \) for \( (X,E) \). To be of any use,
such an assignment of invariants should be as simple as possible. In most
cases, both \( X \) and \( I \) carry some intrinsic Borel structures, so
that it is natural to ask the assignment to be a Borel measurable map.
A classical example is the problem of classifying separable complete metric
spaces, called \emph{Polish metric spaces}, up to isometry. In
\cite{gromov1999} Gromov showed for instance that one can classify compact
Polish metric spaces using (essentially) elements of \( \RR \) as complete
invariants; in this case, one says that the corresponding
classification problem is smooth. However, as pointed out by Vershik in
\cite{vershik1998} the problem of classifying arbitrary Polish metric spaces
is \guillemotleft an enormous task\guillemotright; in particular, it is far
from being smooth. Thus it is natural to ask ``how complicated'' such a
classification problem is.
A natural tool for studying the complexity of classification problems is the
notion of Borel reducibility introduced in \cite{Friedman1989}
and in \cite{HKL}: we say that a classification problem \( (X,E) \) is
\emph{Borel reducible} to another classification problem \( (Y,F) \) (in
symbols, \( E \leq_B F \)) if there exists a Borel measurable function \( f
\colon X \to Y \) such that \( x \mathrel{E} x' \iff f(x) \mathrel{F} f(x')
\) for all \( x,x' \in X \). Intuitively, this means that the classification
problem \( (X,E ) \) is not more complicated than \( (Y,F) \): in fact, any
assignment of complete invariants for \( (Y,F) \) may be turned into an
assignment for \( (X,E) \) by composing with \( f \). A comprehensive
reference for the theory of Borel reducibility is \cite{gaobook}.
In the seminal \cite{Gao2003} (see also~\cite{Clemens2001,clemensisometry}),
Gao and Kechris were able to determine the exact complexity of the
classification problem for isometry on arbitrary Polish metric spaces with
respect to Borel reducibility: it is Borel bireducible with the most complex
orbit equivalence relation (so every equivalence relation induced by a Borel
action of a Polish group on a Polish space Borel reduces to it). However they
left the open problems of establishing the complexity of isometry on locally
compact ultrametric and zero-dimensional Polish spaces. We have been able to
solve the first of these questions in \cite{ultrametric} using an approach
that goes back to Clemens \cite{ClemensPreprint} and Gao and Shao
\cite{GaoShao2011}: Clemens studied the complexity of isometry on the
collection of Polish metric spaces using only distances in a set \( A
\subseteq \RR^+ \) fixed in advance, while Gao and Shao considered the
restriction of Clemens' problem to ultrametric Polish spaces.
We answered the questions left open by Gao and Shao in \cite{ultrametric},
where we focused on the study of ultrametric Polish spaces with a fixed set
of distances and, as a byproduct, we showed that isometry on locally compact
(and even discrete) ultrametric Polish spaces is Borel bireducible with
countable graph isomorphism. In this paper we instead settle various
problems, or provide new proofs for known results, about \emph{arbitrary}
Polish metric spaces with a fixed set of distances.
Let \( \RR^+ = \{ r \in \RR \mid r \geq 0 \} \). Let \( (X,d_X) \) be a
Polish metric space, i.e.\ a separable space with a complete metric \( d_X \)
(which often is left implicit). We denote by \( D(X) \) the set of distances
that are realized in \( X \), i.e.
\[
D(X) = \{ r \in \RR^+ \mid \exists x,y \in X (d_X(x,y) = r) \}.
\]
All metric spaces $X$ we consider are always assumed to be nonempty, so that
$0 \in D(X)$.
\begin{defin}
We say that \( A \subseteq \RR^+ \) is a \emph{distance set} if $A =D(X)$ for
some Polish metric space $X$. When $A =D(X)$ we say that \emph{$A$ is
realized by $X$}. Let \( \D \) denote the set of all distance sets $A
\subseteq \RR^+$.
\end{defin}
Clemens characterized the members of \( \D \) in his PhD thesis.
\begin{theorem}[{\cite[Theorem 4.3]{ClemensThesis}}] \label{clemensrealized}
Let $A \subseteq \RR^+$. Then $A$ is a distance set if and only if $A$ is
analytic, $0 \in A$, and either $A$ is countable or $0$ is a limit point of
$A$.
\end{theorem}
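For instance, the set $\{0\} \cup \{2^{-n} \mid n \in \omega\} \cup [1,2]$ is a
distance set: it is analytic, contains $0$, and has $0$ as a limit point. On the
other hand, the uncountable set $\{0\} \cup [1,2]$ is not. Indeed, if $0$ were
isolated in $D(X)$, say $D(X) \cap (0,\varepsilon) = \emptyset$ for some
$\varepsilon>0$, then distinct points of $X$ would be at distance at least
$\varepsilon$, so separability would force $X$, and hence $D(X)$, to be
countable.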
Clemens studied also for which $A \in \D$, given a property of Polish spaces
(like being locally compact, or $\sigma$-compact, or discrete, or countable,
and so on) some Polish metric space with this property has distance set $A$
(see Theorem \ref{necessary} below). Here we consider the following dual
question:
\begin{quest}\label{quest1}
For which $A \in \D$ does \emph{every} Polish metric space with distance set $A$
have a given property?
\end{quest}
We answer this question in Section \ref{realizable}, and in particular in
Theorem~\ref{proponlyultrametric}. Our results show in particular that lower
bounds for the complexity of the restriction of isometry to zero-dimensional
Polish metric spaces (one of the problems of Gao and Kechris) can be obtained
by classifying the restriction of isometry to spaces with a fixed distance
set which is dense in some right neighborhood of $0$ but does not contain any
such neighborhood.
Another natural question is the following:
\begin{quest}\label{quest2}
Given $A \in \D$, what is the complexity of the collection $\V^\star_A$ of
Polish metric spaces having distance set $A$? What about the collection
$\V_A$ of Polish metric spaces having distance set included in $A$ (in which
case we can drop the requirement $A \in \D$)?
\end{quest}
This (and Question \ref{quest3} below) requires viewing Polish metric spaces
as members of some hyperspace of all Polish metric spaces: we describe the
set-up in Section \ref{term} and answer quite satisfactorily Question
\ref{quest2} in Section \ref{complexity}. In particular
Theorems~\ref{vaborel}(2) and~\ref{vastarborel}(1) characterize when $\V_A$
and $\V^\star_A$ are standard Borel. Tables \ref{tableM} and \ref{tableM*}
summarize our results for the complexity of $\V_A$ and $\V^\star_A$ when $A
\in \D$.
As a corollary, in Theorem \ref{Urysohn} we also extend the characterization
of the distance sets $A$ which admit an $A$-Urysohn space obtained by Sauer
\cite{Sauer}\footnote{We thank Joseph Zielinski for directing us to Sauer's
paper.}.
The last main questions we deal with are the original motivation for this
research:
\begin{quest}\label{quest3}
Given $A \in \D$, what is the complexity with respect to Borel reducibility
of isometry and isometric embeddability restricted to $\V^\star_A$ (denoted
respectively ${\isom_A^\star}$ and $\sqsubseteq_A^\star$)?
What about the same problem for isometry and isometric embeddability
restricted to $\V_A$ (denoted respectively ${\isom_A}$ and $\sqsubseteq_A$)?
\end{quest}
We study this question in Section \ref{polish}, and our main results include
the following:
\begin{itemize}
\item The fact that ${\sqsubseteq_A}$ and ${\sqsubseteq^\star_A}$ have the
same complexity for all $A \in \D$ (Corollary \ref{corsqsubseteqA}) and
that ${\isom_A}$ and ${\isom_A^\star}$ have the same complexity when
$A$ is countable (Theorem \ref{thm:countable}).
\item The classification with respect to Borel reducibility of \(
\isom^\star_A \), depending on the properties of \( A \)
(Theorem~\ref{isomstarA}). In particular, we characterize when
countable graph isomorphism Borel reduces to \( \isom^\star_A \)
(Theorem~\ref{appendixisom}).
\item The exhaustive description of the complexity with respect to Borel
reducibility of \( \sqsubseteq^\star_A \)
(Theorem~\ref{sqsubseteqstarA}).
\item The fact that whenever \( \sqsubseteq^\star_A \) is complete
analytic, it has also the stronger property of being invariantly
universal (Theorem~\ref{thm:invuniv}).
\end{itemize}
The first item substantially enriches the picture obtained by Clemens in
\cite{ClemensPreprint}, almost completely solving his original problem about
isometry. We also answer some of the other questions contained in
\cite{ClemensPreprint} (Proposition~\ref{questionclemens},
Theorem~\ref{thm:isomA} and Theorem~\ref{thm:countable}), and their analogues
concerning isometric embeddability (Proposition~\ref{questionclemens} and
Corollary~\ref{corsqsubseteqA}).
\section{Preliminaries}\label{term}
If $\mathcal{A}$ is a countably generated $\sigma$-algebra of subsets of $X$
that separates points we refer to the members of $\mathcal{A}$ as Borel sets
(indeed, as shown e.g.\ in \cite[Proposition 12.1]{Kechris1995}, in this case
$\mathcal{A}$ is the collection of Borel sets of some separable metrizable
topology on $X$), and to \( (X, \mathcal{A}) \) as a Borel space. The Borel
space $(X, \mathcal{A})$ is standard if $\mathcal{A}$ is the collection of
Borel sets of some Polish (i.e.\ separable and completely metrizable)
topology on $X$. A map between two Borel spaces is Borel if the preimages of
Borel sets of the target space are Borel sets of the domain.
We denote by $\SI11(X)$ the family of subsets of the standard Borel spaces
$X$ which are Borel images of a standard Borel space. For $n>0$, $\PI1n(X)$
is the class of all complements of sets in $\SI1n(X)$, and $\SI1{n+1}(X)$ is
the family of Borel images of a set in $\PI1n(Y)$ for some standard Borel
space $Y$. We have $\SI1n \cup \PI1n \subseteq \SI1m \cap \PI1m$ whenever
$n<m$, and for uncountable standard Borel spaces the inclusion is strict.
This hierarchy is the \emph{projective hierarchy}. \SI11 and \PI11 sets are
called resp.\ \emph{analytic} and \emph{coanalytic} sets. The class of
differences of two analytic subsets (equivalently: of intersections of an
analytic and a coanalytic subset) of $X$ is denoted $D_2( \SI{1}{1} )(X)$.
We extend these notions also to Borel spaces $X$ which are not necessarily
standard. In particular we say that $A \subseteq X$ is analytic (or \SI11) if
there exists a standard Borel space $Y \supseteq X$ such that the Borel
subsets of $X$ are the intersections of the Borel subsets of $Y$ with $X$ and
$A$ is the intersection of some $B \in \SI11(X)$ with $X$.\smallskip
If $\mathbf{\Gamma}$ is a class of sets in Borel spaces closed under Borel
preimages (like \SI1n and \PI1n), $Y$ is a standard Borel space and $A
\subseteq Y$, we say that $A$ is \emph{Borel $\mathbf{\Gamma}$-hard} if
every $B \in \mathbf{\Gamma} (X)$, where $X$ is a standard Borel space, is
\emph{Borel Wadge reducible} to $A$, i.e.\ there exists a Borel function $f:
X \to Y$ such that $f^{-1}(A) = B$. We say that $A$ is \emph{Borel
$\mathbf{\Gamma}$-complete} if, in addition, $A \in \mathbf{\Gamma} (Y)$. If
$A$ is Borel $\mathbf{\Gamma}$-hard and $A$ is Borel Wadge reducible to $B$,
then $B$ is Borel $\mathbf{\Gamma}$-hard as well: this is the typical way to
prove hardness results.
The classes $\mathbf{\Gamma}$ we are interested in are closed under Borel
preimages and such that either $\mathbf{\Gamma}$ or its dual
$\check{\mathbf{\Gamma}}$ (consisting of the complements of the elements of
$\mathbf{\Gamma}$) is closed under intersection with $\PI11$ sets. For these
classes and any Polish topology compatible with the standard Borel spaces,
Borel $ \mathbf{\Gamma } $-hardness can be witnessed by continuous functions:
see \cite{kechri}, where this fact is stated for the class $ \PI11$, but the
argument actually works under our more general assumptions on
$\mathbf{\Gamma}$. Therefore Borel $\mathbf{\Gamma}$-hardness and Borel
$\mathbf{\Gamma}$-completeness coincide with $\mathbf{\Gamma }$-hardness and
$\mathbf{\Gamma }$-completeness, which are the notions used when dealing with
Wadge reducibility. For this reason we drop Borel from this terminology.
Most results in Section \ref{complexity} state that a collection of Polish
metric spaces is $\mathbf{\Gamma}$-complete for some $\mathbf{\Gamma}$, and
thus pinpoint the complexity of that particular collection by showing that it
belongs to $\mathbf{\Gamma}$ and not to any simpler class. When $
\mathbf{\Gamma } \neq \check{ \mathbf{\Gamma }}$, this implies in particular
that such a collection does not belong to $ \check{\mathbf{\Gamma}
}$.\smallskip
Borel Wadge reducibility can be generalized from sets to binary (and in fact,
$n$-ary for any $n$) relations as follows. Let $R$ and $S$ be binary
relations on Borel spaces $X$ and $Y$, respectively. We say that $R$ is
\emph{Borel reducible} to $S$, and we write $R \leq_B S$, if there is a Borel
function $f: X\to Y$ such that $x \mathrel{R} x'$ if and only if $f(x)
\mathrel{S} f(x')$ for all $x,x'\in X$. If $R\leq_BS$ and $S \leq_B R$ we say
that $R$ and $S$ are \emph{Borel bireducible} and we write $R \sim_B S$. If
on the other hand we have $R\leq_BS$ and $S\nleq_BR$ we write $R <_B S$.
If $\mathbf{\Gamma}$ is a class of binary relations on standard Borel spaces
and $S \in \mathbf{\Gamma}$, we say that $S$ is \emph{complete for
$\mathbf{\Gamma}$} if $R \leq_B S$ for all $R \in \mathbf{\Gamma}$. Some
relevant classes $\mathbf{\Gamma}$ one might consider are the collection of
all analytic equivalence relations and the collection of all analytic
quasi-orders. (Recall that a quasi-order is a reflexive and transitive binary
relation.) An example of a complete quasi-order for the latter class is
embeddability between countable graphs, see \cite{louros}. When
$\mathbf{\Gamma}$ is the class of orbit equivalence relations (that is, those
Borel reducible to a relation induced by a Borel action of a Polish group on
a standard Borel space) a complete element for $\mathbf{\Gamma}$ is isometry
on arbitrary Polish spaces, see \cite{Gao2003}. Another important example is
the class of equivalence relations classifiable by countable structures, that
is those Borel reducible to isomorphism on countable structures. The
canonical example of an equivalence relation complete for this class is
countable graph isomorphism, see \cite{Friedman1989}.
We reserve the term \lq\lq complete for $\mathbf{\Gamma}$\rq\rq\ for
relations defined on some standard Borel space. In Section \ref{polish}, we
often consider analytic relations on Borel spaces which (by the results of
Section \ref{complexity}) are not standard. In these cases we rather state
that a relation is \emph{Borel bireducible with a complete for
$\mathbf{\Gamma}$ relation}.\medskip
We denote by $\isom$ and $\sqsubseteq$ the relations of isometry and
isometric embeddability between metric spaces. Recall that a metric space is
Polish if and only if it is isometric to an element of \( F(\mathbb{U}) \),
the collection of all nonempty closed subsets of the Urysohn space \(
\mathbb{U} \) (here we differ slightly from \cite{Kechris1995}, where \(
F(\mathbb{U}) \) includes the empty set). The space $F(\mathbb{U})$ is
endowed with the Effros Borel structure, which turns it into a standard Borel
space: the hyperspace containing all Polish metric spaces up to isometry.
Notice that $\isom$ and $\sqsubseteq$ are analytic relations on
$F(\mathbb{U})$. We fix also a sequence of Borel functions \( (\psi_n)_{n \in
\omega} \) from \( F(\mathbb{U}) \) into \( \mathbb{U} \) such that \( \{
\psi_n(X) \mid n \in \omega \} \) is dense in \( X \) for every \( X \in
F(\mathbb{U}) \), see \cite[Theorem 12.13]{Kechris1995}.
\begin{remark}\label{rem:coding}
Another possible coding for Polish metric spaces (used e.g.\ in
\cite{ClemensPreprint}) is sometimes convenient. In this approach a Polish
metric space $U$ is coded by an element $M$ of a suitable $\mathcal M$, which
is a closed subset of $\pre{\omega\times\omega }{ \RR } $: $U$ is the
completion of a set of points $\{ x_i \mid i\in\omega \}$ such that the
distance between $x_i$ and $x_j$ equals $M(i,j)$. As explained in
\cite[Section 2]{lmrnew}, this coding is equivalent to the one introduced
above, in the sense that there are Borel functions $\Phi: F(\mathbb{U}) \to
\mathcal M$ and $\Psi: \mathcal M \to F(\mathbb{U})$ such that $\Phi(X)$
codes a space isometric to $X$ and $\Psi(M)$ is isometric to the space coded
by $M$.
Therefore the results can be transferred between the two settings. In
particular, it is often easier to check that certain maps are
Borel-measurable using $\mathcal M$ rather than $F(\mathbb{U})$ (see e.g.\
the proof of Proposition~\ref{1531426}).
\end{remark}
Using the coding in $F( \mathbb U )$ we have the following formalizations of the
collections of Polish metric spaces using a prescribed set of distances.
\begin{defin}
Given $A \subseteq \RR^+$, let
\[
\V_A= \set{X\in F( \mathbb U )}{D(X)\subseteq A} \text{ and }
\V_A^\star = \set{X\in F( \mathbb U )}{D(X)=A} .
\]
Let the equivalence relations $ \isom_A $ and $ \isom_A^\star$ and the
quasi-orders $\sqsubseteq_A$ and $\sqsubseteq_A^\star$ be the restrictions of
isometry and isometric embeddability to $\V_A$ and $\V_A^\star$.
\end{defin}
The relations $ \isom_A $, $ \isom_A^\star$, $\sqsubseteq_A$, and
$\sqsubseteq_A^\star$ are defined on Borel spaces which are not necessarily
standard (not even when $A$ is countable, as we will show). We will discuss
the complexity of $\V_A$ and $\V_A^\star$ at length in Section
\ref{complexity}.
In \cite{ultrametric} we studied the restrictions of isometry and isometric
embeddability to
\[
\U_A = \set{X \in \V_A}{d_X \text{ is an ultrametric}} \text{ and }
\U^\star_A = \set{X \in \V^\star_A}{d_X \text{ is an ultrametric}}.
\]
By Theorem \ref{necessary} below the latter is nonempty exactly when $A \in
\D$ is countable. In contrast with the results of Section \ref{complexity},
$\U_A$ and $\U_A^{\star}$ are both standard Borel spaces (see
\cite[Proposition 4.5]{ultrametric}). We first considered the case of $A$
ill-founded (with respect to the standard ordering of the reals) and,
extending results of \cite{Gao2003}, proved the following.
\begin{theorem}[{\cite[Corollary 5.7, and Theorems 6.3 and 6.4]{ultrametric}}]\label{isomuniversalstar}
Let $A \in \D$ be countable and ill-founded. Then:
\begin{enumerate}
\item Isometry on $\U_A$ and on $\U_A^{\star}$ are both complete for
equivalence relations classifiable by countable structures.
\item Isometric embeddability on $\U_A$ and on $\U_A^{\star}$ are both
complete for analytic quasi-orders.
\end{enumerate}
\end{theorem}
Then we dealt with well-founded $A$'s. Lemma 4.11 of \cite{ultrametric}
implies that the complexities of isometry and isometric embeddability on
$\U_A$ and $\U_A^\star$ for $A \in \D$ well-founded depend only on the order
type of $A$. If $\alpha$ with $1 \leq \alpha<\omega_1$ is the order type of
$A$ we can then write, as in \cite{ultrametric}, ${\isom_{\alpha}}$,
${\isom^\star_{\alpha}}$, ${\sqsubseteq_\alpha}$, and
${\sqsubseteq^\star_\alpha}$ in place of ${{\isom} \restriction \U_A}$,
${{\isom} \restriction \U^\star_A}$, ${{\sqsubseteq} \restriction \U_A}$ and
${{\sqsubseteq} \restriction \U^\star_A}$, respectively. Our results include
the following.
\begin{theorem}[{\cite[Lemma 5.8, and Theorems 5.15 and 6.11]{ultrametric}}]\label{lemmaequivalence}
\emph{ }
\begin{enumerate}[(1)]
\item For every $\alpha$ such that \( 1\leq\alpha < \omega_1 \) we have ${
\isom_{\alpha }} \sim_B {\isom_{\alpha }^\star}$ and these equivalence
relations are classifiable by countable structures.
\item The relations \( \isom_\alpha \), for $1\leq\alpha <\omega_1$, form a
strictly increasing chain under $\leq_B$ of Borel equivalence relations
which is cofinal below countable graph isomorphism (i.e.\ cofinal among
Borel equivalence relations classifiable by countable structures).
\item For every $\alpha$ such that \( 1\leq\alpha < \omega_1 \) we have \(
{\sqsubseteq_\alpha} \sim_B {\sqsubseteq^\star_\alpha} \).
\item Let $1\leq \alpha < \omega_1$. Then
\begin{enumerate}[(i)]
\item if \( \alpha \leq \omega \), then \( \sqsubseteq_\alpha \) is Borel;
\item if $\alpha > \omega$, \( \sqsubseteq_\alpha \) contains both upper
and lower cones that are \SI11-complete, and hence
$\sqsubseteq_\alpha$ is analytic non-Borel;
\item all classes of the equivalence relation induced by \(
\sqsubseteq_{\omega+1} \) are Borel, hence \( \sqsubseteq_{\omega+1}
\) is not complete for analytic quasi-orders;
\item for all \( \alpha < \beta \leq \omega+2 \), \( {\sqsubseteq_\alpha}
<_B {\sqsubseteq_\beta} \).
\end{enumerate}
\end{enumerate}
\end{theorem}
The problem of establishing the exact complexity of $\sqsubseteq_\alpha$ for
$\alpha \geq \omega+2$ is still open.
Basic tools to change from a set of distances to another one are \emph{metric
preserving functions}, i.e.\ functions $f \colon A \to \RR^+$ such that for
every metric $d$ on a space $X$ with range contained in $A$ we have that $f
\circ d$ is still a metric on $X$. There is a vast literature about metric
preserving functions defined on the whole $\RR^+$ (see \cite{Dobos,Cor} for
surveys). Since we are dealing with Polish metric spaces we introduce the
following definition, where we consider functions with a possibly proper
subset of $\RR^+$ as domain.
\begin{defin}
A function $f \colon A \to \RR^+$ is \emph{Polish metric preserving} if for
every complete metric $d$ with range contained in $A$ on a Polish space $X$
we have that $(X, f \circ d)$ is still a Polish metric space.
\end{defin}
\begin{proposition}\label{Polish_metric_pres}
A function $f:A\to \RR^+$ is Polish metric preserving if and only if it is
metric preserving and for every sequence $(x_n)_{n \in \omega}$ in $A$,
\begin{equation} \label{15031241}
\lim_{n\to\infty }x_n=0\quad \text{if and only if} \quad\lim_{n\to\infty
}f(x_n)=0.
\end{equation}
\end{proposition}
\begin{proof}
Let $f$ be Polish metric preserving. First we prove that $f$ is metric
preserving. Let $(X,d)$ be a metric space with distances in $A$: if
$(X,f\circ d)$ were not a metric space this would be witnessed on a subset
$X' \subseteq X$ of size two or three; then $(X',d)$ is Polish but
$(X',f\circ d)$ is not even a metric space.
Fix now a sequence $(x_n)_{n \in \omega}$ in $A$. If $(x_n)_{n \in \omega}$
converges to $0$ but $(f(x_n))_{n \in \omega}$ does not, then it can be
assumed that $(x_n)_{n \in \omega}$ is strictly decreasing and $(f(x_n))_{n
\in \omega}$ is bounded away from $0$. Let $d$ be the metric on ${}^{\omega
}\omega $ defined by letting, for distinct $\alpha ,\beta\in {}^{\omega
}\omega $, $d(\alpha ,\beta )=x_n$ where $n$ is least such that $\alpha
(n)\ne\beta (n)$. Then $({}^{\omega }\omega ,d)$ is a Polish metric space,
while in $({}^{\omega }\omega ,f\circ d)$ every point is isolated, so
$({}^{\omega }\omega ,f\circ d)$ is not separable and hence not Polish.
Conversely, if $(x_n)_{n \in \omega}$ does not converge to $0$ but
$(f(x_n))_{n \in \omega}$ does, it can be assumed that $(x_n)_{n \in \omega}$
is bounded away from $0$. Let $X=\{ x_n\mid n\in\omega\} $, and define the
distance $d$ on $X$ by letting $d(x_n,x_m)=\max\{x_n,x_m\}$ if $x_n \neq
x_m$. Then $(X,d)$ is a discrete Polish ultrametric space. Then the sequence
$(x_n)_{n \in \omega}$ is a Cauchy sequence in $(X,f\circ d)$ but it does not
converge, as $(X,f\circ d)$ is discrete.
Assume now that $f$ is metric preserving and that condition \eqref{15031241}
holds for every sequence $(x_n)_{n \in \omega}$ in $A$. This means that if
$(X,d)$ is a metric space with distances in $A$, then the identity is a
homeomorphism between $(X,d)$ and $(X,f\circ d)$. In particular, if $(X,d)$
is Polish, then $(X,f\circ d)$ is separable; to conclude that $(X,f\circ d)$
is complete too, notice that a sequence in $X$ is $d$-Cauchy if and only if
it is $f\circ d$-Cauchy.
\end{proof}
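To illustrate the characterization, consider two standard examples of metric
preserving functions. The map $f(x)=\frac{x}{1+x}$ is Polish metric preserving
on any domain $A$: it is nondecreasing, subadditive, vanishes exactly at $0$
(hence is metric preserving), and clearly satisfies condition
\eqref{15031241}. In contrast, the function with $f(0)=0$ and $f(x)=1$ for
$x>0$ is metric preserving (it turns every metric into the discrete
$\{0,1\}$-valued metric) but violates \eqref{15031241} whenever $0$ is a limit
point of $A$; accordingly, applying it to an uncountable Polish metric space
whose distance set has $0$ as a limit point produces an uncountable discrete,
hence non-separable, space.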
A Polish metric preserving function $f: A \to A'$ transforms a space $(X,d)
\in \V_A$ into the space $(X, f \circ d) \in \V_{A'}$, which is homeomorphic
to $(X,d)$ via the identity function by the previous characterization. The
following proposition ensures that this transformation is always Borel.
\begin{proposition}\label{prop:automaticBorel}
Let $A \in \D$. Every Polish metric preserving $f:A \to \RR^+$ is Borel-measurable.
\end{proposition}
\begin{proof}
Fix $(X,d) \in \V_A^\star$ and a countable dense $D = \{x_i \mid i \in
\omega\} \subseteq X$. Let $r_{i,j} = d(x_i, x_j)$ and $r'_{i,j} =
f(r_{i,j})$. Define $F \subseteq \RR^+ \times \RR^+$ by setting $(a,b) \in F$
if and only if
\begin{align*}
\exists (i_k)_{k \in \omega}, (j_k)_{k \in \omega} \big[ & (x_{i_k})_{k \in \omega} \text{ and } (x_{j_k})_{k \in \omega}
\text{ are Cauchy sequences in } X\\
& \qquad \land \lim r_{i_k,j_k} = a \land \lim r'_{i_k,j_k} = b \big].
\end{align*}
The set $F$ is clearly analytic. Since $\mathrm{id}_X$ is a homeomorphism
between $(X,d)$ and $(X, f \circ d)$, the set $F$ is the graph of $f$. Using
the fact that Souslin's Theorem holds for analytic spaces (\cite[Exercise
28.3]{Kechris1995}), we have that the proof of \cite[Theorem
14.12]{Kechris1995} shows that functions on analytic spaces with analytic
graphs are Borel. Therefore, since $A$ is analytic by Theorem
\ref{clemensrealized}, we have that $f$ is Borel.
\end{proof}
\section{Distance sets of particular Polish metric spaces}\label{realizable}
Beside proving Theorem \ref{clemensrealized} characterizing distance sets,
Clemens also characterized the \( A \in \D \) which can be realized by Polish
metric spaces in a given class.
\begin{theorem}[\cite{ClemensThesis}]\label{necessary}
Let $A \in \D$. Then:
\begin{enumerate}
\item \( \V^\star_A \) always contains a zero-dimensional Polish metric
space;
\item \( \V^\star_A \) contains a Polish ultrametric space if and only if
it contains a discrete Polish metric space, if and only if \( A \) is
countable;
\item \( \V^\star_A \) contains a connected Polish metric space if and only
if it contains a path-connected Polish metric space if and only if \( A
\) is an interval with left endpoint \( 0 \);
\item \( \V^\star_A \) contains a compact Polish metric space if and only
if \( A \) is compact and either it is finite or it has \( 0 \) as
limit point;
\item \( \V^\star_A \) contains a locally compact Polish metric space if
and only if it contains a \( \sigma \)-compact Polish metric space, if
and only if \( A \) is either countable or it is $\sigma$-compact and
has \( 0 \) as limit point.
\end{enumerate}
\end{theorem}
We now consider the dual problem of determining when \( A \in \D\) is
realized \emph{only} by Polish metric spaces in a given class. We need the
following construction, which will be used repeatedly throughout the paper.
Let \( (X, d_X ) \) and \( (Y, d_Y ) \) be metric spaces. Given two points \(
\bar{x} \in X \) and \( \bar{y} \in Y \) and a real \( r>0 \) we can extend
the metrics \( d_X \) and \( d_Y \) to the disjoint union \( Z = X \cup Y \)
by setting \( d_Z(x,y) = \max\{ d_X(x, \bar{x}), d_Y(y, \bar{y}), r\} \) for
\( x \in X \) and \( y \in Y \).
\begin{lemma}\label{oplus}
The function \(d_Z \) defined above is a metric. Moreover $(Z,d_Z)$ is Polish
whenever \( (X, d_X ) \) and \( (Y, d_Y ) \) are Polish.
\end{lemma}
\begin{proof}
To prove that $d_Z$ is a metric, we just need to check $d_Z(a,b) \leq
d_Z(a,c) + d_Z(c,b)$ for distinct points $a,b,c \in X \cup Y$. If $a,b,c \in
X$ or $a,b,c \in Y$ this is trivial.
Now assume $a,b \in X$ and $c \in Y$: then $d_Z(a,b) = d_X(a,b) \leq
d_X(a,\bar{x}) + d_X(\bar{x},b) \leq d_Z(a,c) + d_Z(c,b)$. The case $a,b \in
Y$ and $c \in X$ is symmetric.
Assume now $a \in X$ and $b \in Y$ (the symmetric case is analogous). We
distinguish three cases.
\begin{itemize}
\item Assume $d_Z(a,b)=r$. If $c \in X$ then $d_Z(c,b) \geq r$, while if
$c \in Y$ then $d_Z(a,c) \geq r$. In both cases $d_Z(a,b)=r \leq
d_Z(a,c) + d_Z(c,b)$.
\item Assume $d_Z(a,b)=d_X(a,\bar{x})$. If $c \in X$ then $d_Z(a,b) =
d_X(a,\bar{x}) \leq d_X(a,c) + d_X(c,\bar{x}) \leq d_Z(a,c) +
d_Z(c,b)$. If instead $c \in Y$ then $d_X(a,\bar{x}) \leq d_Z(a,c)$,
whence $d_Z(a,b) = d_X(a,\bar{x}) \leq d_Z(a,c) + d_Z(c,b)$.
\item The case $d_Z(a,b)=d_Y(b,\bar{y})$ is similar to the previous one.
\end{itemize}
Polishness is preserved because every Cauchy sequence in $Z$ is eventually
contained either in $X$ or in $Y$, so the construction does not add new
limits of Cauchy sequences.
\end{proof}
We denote the metric space $(Z,d_Z)$ by $X \oplus_r Y$, omitting reference to
$\bar{x}$ and $\bar{y}$ because the choice of these two points will be
irrelevant in most of our applications.
The following proposition shows that given \( r > 0 \), the map \(
F(\mathbb{U}) \times F(\mathbb{U}) \to F(\mathbb{U}) \) sending \( X,Y \in
F(\mathbb{U}) \) to \( X \oplus_r Y \) may be construed as a Borel function.
\begin{proposition}\label{1531426}
There is a Borel-measurable function $f:F( \mathbb U )\times F( \mathbb U
)\to F( \mathbb U )$ such that $f(X,Y)$ is isometric to $X\oplus_rY$ for some
choice of the gluing points $ \bar x \in X$ and $\bar y \in Y$.
\end{proposition}
\begin{proof}
This is immediate using Remark \ref{rem:coding}, as it is easy to define a
Borel counterpart of this function from $\mathcal M \times \mathcal M$ to
$\mathcal M$.
\end{proof}
The most important property of this construction is the following:
\begin{fact}\label{fact:oplus}
For every $r$ we have $D(X \oplus_r Y) = D(X) \cup D(Y) \cup \{r\}$.
\end{fact}
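As a purely illustrative sanity check (this sketch is ours and not part of the
formal development; all names in it are ad hoc), the following Python snippet
builds $d_Z$ for two small finite metric spaces and verifies both the triangle
inequality of Lemma \ref{oplus} and the distance-set equality of Fact
\ref{fact:oplus} on that example.

```python
import itertools

def glue(dX, dY, xbar, ybar, r):
    """Metric d_Z on the disjoint union Z of X and Y, as in the lemma:
    d_Z(x, y) = max(d_X(x, xbar), d_Y(y, ybar), r) for x in X, y in Y."""
    pts = [("X", p) for p in dX] + [("Y", p) for p in dY]
    def d(a, b):
        (sa, pa), (sb, pb) = a, b
        if sa == sb:  # both points on the same side: use the original metric
            return (dX if sa == "X" else dY)[pa][pb]
        if sa == "Y":  # normalize so that pa comes from X and pb from Y
            pa, pb = pb, pa
        return max(dX[pa][xbar], dY[pb][ybar], r)
    return pts, d

# X: two points at distance 1; Y: an equilateral triangle of side 3; r = 5.
dX = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
dY = {i: {j: (0 if i == j else 3) for j in range(3)} for i in range(3)}
pts, d = glue(dX, dY, xbar=0, ybar=0, r=5)

# Lemma: d_Z is a metric; in particular the triangle inequality holds.
for a, b, c in itertools.product(pts, repeat=3):
    assert d(a, b) <= d(a, c) + d(c, b)
assert all(d(a, b) == d(b, a) for a, b in itertools.product(pts, repeat=2))
assert all((d(a, b) == 0) == (a == b)
           for a, b in itertools.product(pts, repeat=2))

# Fact: D(X + Y glued at r) = D(X) union D(Y) union {r}.
distances = {d(a, b) for a, b in itertools.product(pts, repeat=2)}
print(sorted(distances))  # → [0, 1, 3, 5]
```

In this example all cross distances between $X$ and $Y$ equal $r=5$, since $r$
dominates the distances to the gluing points $\bar x$ and $\bar y$.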
\begin{defin}
\( A \in \D \) is \emph{well-spaced} if \( r < r' \) implies \( 2r < r' \)
for all \( r,r' \in A \).
\end{defin}
Notice that if \( A \) is well-spaced and infinite then \( A \setminus \{0\}
\) is either a decreasing sequence converging to $0$, an unbounded increasing sequence, or the union of these two.
This follows from the fact that if $A$ is well-spaced then for $n\in \ZZ $ the set $A \cap [2^n,2^{n+1})$ contains at most one point.
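For a concrete example, $A = \{0\} \cup \{4^n \mid n \in \ZZ\}$ is well-spaced,
since $2 \cdot 4^n < 4^{n+1}$, and it is a distance set by Theorem
\ref{clemensrealized}: it is countable, contains $0$, and has $0$ as a limit
point. Here $A \setminus \{0\}$ is the union of a decreasing sequence
converging to $0$ and an unbounded increasing one, and $A \cap [2^n,2^{n+1})$
is a singleton for even $n$ and empty for odd $n$.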
\begin{theorem} \label{proponlyultrametric}
Let \( A \in \D \).
\begin{enumerate}
\item All spaces in $\V^\star_A$ are zero-dimensional if and only if \( A
\) does not contain a right neighborhood of \( 0 \).
\item All spaces in $\V^\star_A$ are ultrametric if and only if \( A \) is
well-spaced.
\item All spaces in $\V^\star_A$ are discrete if and only if they are all
locally compact, if and only if they are all \( \sigma \)-compact, if
and only if $0$ is isolated in \( A \).
\item All spaces in $\V^\star_A$ are connected if and only if they are all
compact, if and only if they are all singletons, if and only if $A= \{0\}$.
\end{enumerate}
All the above characterizations remain true if we replace $\V^\star_A$ with
$\V_A$.
\end{theorem}
\begin{proof}
To prove all forward directions we construct spaces of the form $X \oplus_r
Y$ with $r \in A$, $X \in \V^\star_A$, and $Y \in \V_A$ lacking the relevant
topological properties. We always have $X \oplus_r Y \in \V^\star_A$ by Fact
\ref{fact:oplus}.
(1) Suppose that \( A \) contains an interval \( [0,r] \) for some \( r > 0
\). Fix $X \in \V^\star_A$ and let \( Y = [0,r] \subseteq \RR\) with the
usual metric. Then \( X \oplus_r Y\) belongs to $\V^\star_A$ but is not
zero-dimensional.
Conversely, if \( 0 \) is a limit point of \( \RR^+ \setminus A \), then for
any $X \in \V^\star_A$ the collection of balls with radius in \( \RR^+
\setminus A \) is a clopen basis for $X$.
(2) Recall that a space is ultrametric if and only if every triangle is
isosceles with legs not shorter than the base. Suppose that \( A \) is not
well-spaced and pick \( r,r' \in A \) with \( r < r' \leq 2r \). Fix $X \in
\V^\star_A$ and let \( Y \) be a triangle with two sides of length \( r \)
and one of length \( r' \). Then \( X \oplus_r Y \) belongs to $\V^\star_A$
but is not ultrametric.
Conversely, if \( X \in \V^\star_A\) is not ultrametric, then it must
contain a triangle with sides of length \( r'' \leq r < r' \). Then by the
triangle inequality \( r' \leq r'' + r \leq 2 r \), and hence \( A \) is not
well-spaced.
(3) Suppose that \( A \) contains a decreasing sequence \( (r_n)_{n \in
\omega} \) converging to \( 0 \). Fix again \( X \in \V^\star_A\) and let \(
Y \) be the Baire space \( \pre{\omega}{\omega} \) equipped with the metric
defined by $d_Y(y,y') = r_n$ where $n$ is least such that $y(n) \neq y'(n)$.
Then the space \( X \oplus_{r_0} Y \) belongs to $\V^\star_A$ but is not \(
\sigma \)-compact (here we are also using the fact that $Y$ is closed in \( X
\oplus_{r_0} Y \)), and hence, by separability, neither locally compact nor discrete.
Conversely, if $0$ is isolated in \( A \), then any \( X \in \V^\star_A\) is
discrete, and thus also locally compact and \( \sigma \)-compact.
(4) Suppose that $r>0$ belongs to $A$ and $X \in \V^\star_A$. Let $Y$ be the
countable space with all distinct points at distance $r$: then $X \oplus_r Y$
belongs to $\V^\star_A$ but is neither connected nor compact. The other
implications are obvious.\smallskip
Finally, consider the statements for $\V_A$. The forward directions follow
from $\V^\star_A \subseteq \V_A$. For the backward directions, let $X \in
\V_A$ and set $A' = D(X)$ so that $X \in \V^\star_{A'}$: since $A' \subseteq
A$ and all the stated conditions on \( A \) are inherited by subsets, the
results follow from what we proved above.
\end{proof}
Theorem~\ref{proponlyultrametric}(1) shows that restricting the attention to
Polish metric spaces with a fixed set of distance \( A \in \D \) may provide
useful information on the complexity of the isometry relation on
zero-dimensional Polish metric spaces: indeed, if \( A \) does not contain a
right neighborhood of \( 0 \), then \( \isom^\star_A \) is a lower bound for
such a relation. In contrast, (3) and (4) of Theorem
\ref{proponlyultrametric} imply that the approach of restricting isometry to
Polish metric spaces using a specific distance set cannot provide interesting
lower bounds for the complexity of locally compact or connected Polish metric
spaces.
\section{The complexity of $\V_A$ and $\V^\star_A$}\label{complexity}
We consider the problem of determining the complexity of the subspaces $\V_A$
and $\V^\star_A$ of $F(\mathbb{U})$, in particular characterizing when they
are standard Borel spaces. While it is worth studying $\V_A$ for any $A
\subseteq \RR^+$ with $0 \in A$, we have $\V^\star_A \neq \emptyset$ only
when $A \in \D$.
Notice the following fact, immediate from the definitions:
\begin{fact}\label{upperV}
If $A$ is analytic (in particular if $A \in \D$) then both $\V_A$ and
$\V^\star_A$ are \PI12, while if $A$ is Borel then $\V_A$ is \PI11. If $A$
is countable then $\V_A$ is \PI11 and $\V_A^\star$ belongs to $D_2( \SI11)$.
\end{fact}
The following reductions are easy to prove and very general.
\begin{proposition}\label{basicVA}
Let $0 \in A\subseteq \RR^+$.
\begin{enumerate}
\item $A$ Borel reduces to $\V_A$;
\item if $A \in \D$ then $\V_A$ Borel reduces to $\V^\star_A$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) One can define a Borel function $f: \RR^+ \to F( \mathbb U )$ such that
$f(0)$ is a singleton and otherwise $f(r)$ is a space consisting of two
points at distance $r$: then $A =f^{-1}( \V_A)$.
(2) If $A=\{ 0\} $, then $ \V_A= \V_A^{\star }$.
Otherwise, fix $Y \in \V^\star_A$ and $r \in A \setminus \{0\}$. The Borel map $X
\mapsto X \oplus_r Y$ reduces $\V_A$ to $\V^\star_A$.
\end{proof}
To obtain sharper results we make extensive use of the following Borel
construction of Polish metric spaces.
\begin{defin}\label{def:ts}
A triple $((r_n)_{n\in\omega }, (r'_n)_{n\in\omega }, x)$ is
\emph{tree-suitable} if
\begin{itemize}
\item $x>0$;
\item $(r_n)_{n\in\omega }$ is strictly decreasing and converges to $0$;
\item $(r'_n)_{n\in\omega }$ is strictly monotone and converges to $x$;
\item $r_0 < \min (x,r'_0)$;
\item $\forall n\in\omega\ |r'_n - x| <r_n$, so that $\forall n,m\
|r'_n - r'_m| \leq \max \{ r_n,r_m \}$.
\end{itemize}
\end{defin}
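For a concrete instance, one can take $x=1$, $r_n = 2^{-n-2}$, and $r'_n = 1+2^{-n-3}$: then $(r_n)_{n\in\omega }$ strictly decreases to $0$, $(r'_n)_{n\in\omega }$ strictly decreases to $1$, $r_0 = 2^{-2} < \min (1,r'_0)$, and $|r'_n - 1| = 2^{-n-3} < 2^{-n-2} = r_n$ for every $n$.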
In this case one can define an assignment $\Phi_{r_nr'_n}$ that sends a tree
$T \subseteq \omega^{<\omega}$ to some $\Phi_{r_nr'_n}(T) \in F( \mathbb U )$
which is isometric to the completion of $T \cup \{\ast\}$ under the metric
$d$ defined by setting $d(s,t) =r_n$ if $s,t \in T$ are distinct and $n$ is
largest such that $s \restriction n = t \restriction n$, and $d(s,\ast) =r'_{
\lh (s)}$ for $s \in T$. The last of the conditions in the definition of
tree-suitability ensures that $d$ satisfies the triangle inequality. Using
Remark \ref{rem:coding} and going through $\mathcal M$, we can assume that
$\Phi_{r_nr'_n}$ is Borel.
The main property of
$\Phi_{r_nr'_n}(T)$ is the following:
\begin{fact}\label{fact:ts}
If $((r_n)_{n\in\omega }, (r'_n)_{n\in\omega }, x)$ is tree-suitable then for
any tree $T \subseteq \omega^{<\omega}$:
\begin{itemize}
\item $D(\Phi_{r_nr'_n}(T)) \subseteq \{ 0 \} \cup \set{r_n, r'_n}{n \in \omega} \cup
\{x\}$;
\item $x\in D(\Phi_{r_nr'_n}(T))$ if and only if $T$ is ill-founded.
\end{itemize}
\end{fact}
We first study the complexity of $\V_A$.
\begin{theorem}\label{vaborel}
Let $0 \in A\subseteq \RR^+$.
\begin{enumerate}
\item If $A$ is not closed and $0$ is a limit point of $A$, then $\V_A$
is \PI11-hard;
\item $\V_A$ is Borel if and only if either $A$ is closed, or $A$ is
Borel and $0$ is not a limit point of $A$;
\item if $A$ is Borel but not closed and $0$ is a limit point of $A$ then
$\V_A$ is \PI11-complete;
\item if $A$ is \SI11-complete and $0$ is a limit point of $A$ then \(
\V_A \) is \PI12-complete.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Fix $x \in \bar A \setminus A$, and sequences $(r_n)_{n\in\omega}$ and
$(r'_n)_{n\in\omega }$ in $A$ such that $((r_n)_{n\in\omega },
(r'_n)_{n\in\omega }, x)$ is tree-suitable. Then, by Fact \ref{fact:ts}, $T$
is well-founded if and only if $\Phi_{r_nr'_n}(T)\in \V_A$.
(2) Assume first that $A$ is a closed subset of $ \RR^+$, or that $A$ is
Borel and $0$ is not a limit point of $A$. Then, in either case, for any
$X\in F ( \mathbb U )$ we have $X\in \V_A\Leftrightarrow\forall n,m\in\omega\
d(\psi_n(X),\psi_m(X))\in A$: for the backwards implication when $0$ is not a
limit point of $A$, use the fact that the condition $\forall n,m\in\omega\
d(\psi_n(X),\psi_m(X))\in A$ implies that $X$ is discrete, so $X=
\set{\psi_n(X)}{n\in\omega } $.
Conversely, assume that $ \V_A$ is Borel. Then $A$ is Borel by Proposition \ref{basicVA}(1) and if $A$
is not closed and $0$ is a limit point of $A$ we derive a contradiction from
(1).
(3) is immediate from (1) and Fact \ref{upperV}.
To prove (4) first recall that \( \V_A \) is \PI12 by Fact \ref{upperV}.
To show that $\V_A$ is \PI12-hard we fix a strictly decreasing sequence
$(\varepsilon_i)_{i \in \omega}$ in $A$ with $\varepsilon_i < 2^{-i}$.
Let $P \subseteq \Can$ be an arbitrary \PI12 set. We assume first that $A
\subseteq [0,1]$. Since $A$ is \SI11-complete, there exists a continuous
function $f: \Can \times \Can \to [0,1]$ such that $P = \set{\alpha \in
\Can}{\forall \beta \in \Can\, f(\alpha,\beta) \in A}$. Let us define a Borel
function $g$ from $\Can$ to the space of pruned trees on $\{0,1\}$ as
follows. Given $\alpha \in \Can$ consider the compact set $C_\alpha =
\set{f(\alpha,\beta)}{\beta \in \Can} \cup \set{\varepsilon_i}{i \in \omega}
\cup \{0\}$. Define $g(\alpha)$ to be the pruned tree such that $[g(\alpha)]
= \set{\gamma \in \Can}{\sum \gamma(i) 2^{-(i+1)} \in C_\alpha} $.
The function
$g$ is Borel, using the fact that, given $s\in 2^{<\omega }$,
\[
s\in g(\alpha ) \iff
\left [ \sum_{i=0}^{ \lh (s)-1} \frac{s(i)}{2^{i+1}}, \sum_{i=0}^{ \lh (s)-1} \frac{s(i)}{2^{i+1}} + \frac 1{2^{ \lh (s)}} \right ]
\cap C_{\alpha }\neq \emptyset
\]
together with the continuity of the function $ \Can \to
K([0,1]),\alpha\mapsto C_{\alpha }$ (using \cite[Exercise 4.29, iv) and
vi)]{Kechris1995}) and the fact that the relation of non-disjointness is
closed in $(K([0,1]))^2$ (\cite[Exercise 4.29, iii)]{Kechris1995}).
We now apply to $g(\alpha)$ the construction used by Clemens in his proof of
\cite[Theorem 4.7]{ClemensThesis} using the sequence $(\varepsilon_i)_{i \in
\omega}$ we fixed in advance. We thus obtain a function $h: \Can \to
\V_{[0,1]}$ such that $D(h(\alpha)) =C_{\alpha }$ for every $\alpha \in
\Can$. To see that $h$ is Borel one needs to inspect Clemens' construction,
keeping in mind Remark \ref{rem:coding} since $h(\alpha)$ is introduced by
defining the restriction of the distance function to a countable dense
subset. It is immediate that $\alpha \in P$ if and only if $h(\alpha) \in
\V_A$. Thus \( \V_A \) is \PI12-complete.
When $A \subseteq [0,n]$, by rescaling we obtain a function $h_n: \Can \to
\V_{[0,n]}$ reducing $P$ to $\V_A$.
If $A$ is unbounded, let $\varphi : \Can \to \RR^+$ be a continuous function
reducing a \SI11-complete subset of $ \Can $ to $A$. Since the range of
$\varphi$ is bounded, it follows that $A_n = A \cap [0,n]$ is \SI11-complete
for some $n$ and $h_n$ still does the job.
\end{proof}
If we further assume that $A \in \D$ we can draw the following corollaries.
These, combined with Theorem \ref{vaborel}(3), provide a complete picture of
the complexity of $\V_A$ for $A \in \D$ under analytic determinacy (which
ensures that every analytic set which is not Borel is \SI11-complete).
\begin{corollary}\label{VAD}
Let $A \in \D$. The following are equivalent:
\begin{enumerate}[(i)]
\item $\V_A$ is Borel;
\item $\V_A$ is \SI11;
\item $A$ is closed or $0$ is not a limit point of $A$.
\end{enumerate}
\end{corollary}
\begin{proof}
(i) implies (ii) is obvious. (ii) implies (iii) follows from Theorem
\ref{vaborel}(1) because if $\V_A$ is \SI11 then it is not \PI11-hard. To
check that (iii) implies (i) notice that if $A$ is closed then Theorem
\ref{vaborel}(2) applies, yielding immediately (i). If instead $0$ is not a
limit point of $A$ then, since $A \in \D$, $A$ is countable by Theorem
\ref{clemensrealized}. Thus $A$ is Borel and Theorem \ref{vaborel}(2) applies
again.
\end{proof}
\begin{corollary}\label{VAinD}
Let $A \in \D$.
\begin{enumerate}
\item If $A$ is not Borel then $\V_A$ is neither analytic nor coanalytic;
\item if $A$ is \SI11-complete then \( \V_A \) is \PI12-complete.
\end{enumerate}
\end{corollary}
\begin{proof}
(1) If $\V_A$ were analytic then it would be Borel by Corollary \ref{VAD},
and then $A$ would be Borel by Proposition \ref{basicVA}(1). Since $A$ is
analytic by Theorem \ref{clemensrealized}, Proposition \ref{basicVA}(1)
implies also that $\V_A$ is not coanalytic.
(2) Since any \SI11-complete set is uncountable, the result follows immediately
from Theorems \ref{clemensrealized} and \ref{vaborel}(4).
\end{proof}
\begin{table}
\begin{tabular}{|c|c|c|} \hline
\textbf{Properties of $A$} & \textbf{Complexity of $\V_A$} & \textbf{Reference}\\
\hline\hline
$A$ closed or $0$ isolated in $A$ & Borel & \ref{VAD}\\
\hline
$A$ Borel not closed and $0$ not isolated in $A$ & \PI11-complete & \ref{vaborel}(3)\\
\hline
$A$ analytic but not Borel & neither \SI11 nor \PI11 & \ref{VAinD}(1)\\
\hline
$A$ \SI11-complete & \PI12-complete & \ref{VAinD}(2)\\
\hline
\end{tabular}\medskip
\caption{Summary of the complexity of $\V_A$ for $A \in \D$\label{tableM}}
\end{table}
For the reader's convenience we summarize in Table \ref{tableM} our results
for the complexity of $\V_A$ when $A \in \D$.
We now show that the complexity of $\V_A^\star$ often depends on the limit
points of $A$.
\begin{theorem}\label{vastarborel}
Let $A\in \D $.
\begin{enumerate}
\item $\V_A^\star$ is Borel if and only if either $0$ is not a limit
point of $A$ or $0$ is the unique limit point of $A$.
\item Suppose $0$ is a limit point of $A$ and $A$ has other limit points
(which may belong to $A$ or not).
\begin{enumerate}
\item If $A$ is closed then $ \V_A^\star$ is \SI11-hard;
\item if $A$ is not closed then $ \V_A^\star$ is \PI11-hard;
\item if $A$ is not closed and at least one of its limit points different
from $0$ belongs to $A$, then $ \V_A^\star$ is $D_2(
\SI11)$-hard.
\end{enumerate}
\item If $A$ is \SI11-complete then \( \V_A^\star \) is
      \PI12-complete.
\end{enumerate}
\end{theorem}
\begin{proof}
We start from (2). For (a) pick $y \neq 0$ which is a limit point of $A$:
obviously $y \in A$. Fix now sequences $(s_n)_{n\in\omega
},(s'_n)_{n\in\omega }$ in $A$ such that the triple $((s_n)_{n\in\omega },
(s'_n)_{n\in\omega }, y)$ is tree-suitable.
As $A\setminus\{ y\}\in \mathcal D $ by Theorem \ref{clemensrealized}, fix also a space $Y \in
\V^\star_{A\setminus\{ y\} }$. Then, using Facts \ref{fact:ts} and
\ref{fact:oplus}, the function $T \mapsto \Phi_{s_ns'_n}(T) \oplus_{s_0} Y$
is a Borel reduction from ill-founded trees to $ \V_A^\star$.
(b) follows from Proposition \ref{basicVA}(2) and Theorem \ref{vaborel}(1).
To prove (c) let $x \in \bar A \setminus A$, and let $y \in A \setminus
\{0\}$ which is a limit point of $A$. Fix sequences $(r_n)_{n\in\omega }$,
$(s_n)_{n\in\omega }$, $(t_n)_{n\in\omega }$ in $A\setminus\{ y\}$ such that
both $((r_n)_{n\in\omega }, (s_n)_{n\in\omega }, x)$ and $((r_n)_{n\in\omega
}, (t_n)_{n\in\omega }, y)$ are tree-suitable. By Theorem
\ref{clemensrealized} again, $A\setminus\{ y\}\in \mathcal D $, so fix $X \in
\V^\star_{A \setminus\{ y\}}$. Then, by Facts \ref{fact:ts} and
\ref{fact:oplus}, the Borel function
\[
(U,T) \mapsto \Theta (U,T)=(\Phi_{r_ns_n}(U) \oplus_{r_0} \Phi_{r_nt_n}(T)) \oplus_{r_0} X
\]
is such that $\Theta (U,T)\in \V_A^\star$ if and only if $U$ is well-founded
and $T$ is ill-founded.\smallskip
We now deal with (1). The forward direction follows from (2), since if the
conclusion were false then one of (a) or (b) would apply.
Conversely, assume first that $0$ is not a limit point of $A$. Then, by Theorem \ref{clemensrealized}, $A$ is
countable and all members of $ \V_A^\star$ are discrete. Then for any $X\in F
( \mathbb U )$ we have that $X\in \V_A^\star$ if and only if $X \in \V_A
\land \forall a \in A\; \exists m_1, m_2 \in \omega\;
d(\psi_{m_1}(X),\psi_{m_2}(X)) = a$. Theorem \ref{vaborel}(2) allows us to
conclude in this case.
Finally, suppose $0$ is the unique limit point of $A$. Then $A$ is closed and
countable and all elements of $A$ different from $0$ are isolated in $A$.
Thus for any $X\in F ( \mathbb U )$ we have again that $X \in \V_A^\star$ if
and only if $X \in \V_A \land \forall a \in A\; \exists m_1, m_2 \in \omega\;
d(\psi_{m_1}(X),\psi_{m_2}(X)) = a$, which allows us to conclude by applying
Theorem \ref{vaborel}(2).\smallskip
(3) follows immediately from Proposition \ref{basicVA}(2) and Corollary
\ref{VAinD}(2).
\end{proof}
We now consider the case when $A$ is countable. Then $\V_A$ is either Borel
or \PI11-complete according to Theorem \ref{vaborel}(2) and (3). We can obtain
a complete classification of the complexity of $\V_A^\star$ as well.
\begin{theorem}\label{complcount}
Let $A$ be a countable subset of $ \RR^+$, with $0\in A$.
\begin{enumerate}
\item If $0$ is not a limit point of $A$ or $0$ is the unique limit point
of $A$, then $ \V_A^\star$ is Borel;
\item if $0$ is a limit point of $A$ and $A$ is closed having other limit
points besides $0$, then $ \V_A^\star$ is \SI11-complete;
\item if $0$ is a limit point of $A$, $A$ is not closed, and all limit
points of $A$ different from $0$ do not belong to $A$, then $
\V_A^\star$ is \PI11-complete;
\item if $0$ is a limit point of $A$, $A$ is not closed and contains a
limit point different from $0$, then $ \V_A^\star$ is $D_2(
\SI11)$-complete.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) follows from Theorem \ref{vastarborel}(1).
(2) In this case, by Theorem \ref{vaborel}(2), $\V_A$ is Borel. Since $X \in
\V^\star_A$ if and only if $X \in \V_A \land \forall a \in A\, \exists x,y
\in X\, d(x,y) = a$, we have that $\V^\star_A$ is \SI11. Completeness follows
from Theorem \ref{vastarborel}(2a).
(3) In this case, all points of $A$ different from $0$ are isolated in $A$ so
that, as at the end of the proof of Theorem \ref{vastarborel}(1), we have $X
\in \V_A^\star$ if and only if $X \in \V_A \land \forall a \in A\, \exists
m_1,m_2\, d(\psi_{m_1}(X),\psi_{m_2}(X))=a$. Hence $\V_A^\star$ is \PI11
because $\V_A$ is \PI11 by Fact \ref{upperV}. Completeness follows from
Theorem \ref{vastarborel}(2b).
(4) By Fact \ref{upperV} $\V^\star_A$ is $D_2( \SI11)$ and completeness
follows from Theorem \ref{vastarborel}(2c).
\end{proof}
\begin{remark}
In the literature, there are very few ``natural'' examples of sets
belonging to the class $D_2( \SI11)$ but not to simpler ones. The set \(
\V_A^\star \) with \( A \) as in Theorem \ref{complcount}(4) is one of these. Other notable examples
are: the collection of countable graphs whose automorphism group is isomorphic to ${}^{\omega } \ZZ_p$ for $p$ a prime number \cite{ck2000}; the collection of countable linear orders which are not strongly
surjective~\cite{cammarcar}; and some collection of measurable sets generated
using the density function on the Cantor space \cite{AC}.
\end{remark}
\begin{table}
\begin{tabular}{|c|c|c|} \hline
\textbf{Properties of $A$} & \textbf{Complexity of $\V^\star_A$} & \textbf{Reference}\\
\hline\hline
\makecell{$0$ isolated in $A$\\ or $0$ unique limit point of $A$} & Borel & \ref{vastarborel}(1)\\
\hline
\makecell{$0$ not isolated in $A$,\\ $A$ closed with other limit points} & \SI11-hard & \ref{vastarborel}(2)(a)\\
\hline
\makecell{$0$ not isolated in $A$,\\ $A$ \emph{countable} closed with other limit points} & \SI11-complete & \ref{complcount}(2)\\
\hline
\makecell{$0$ not isolated in $A$,\\ $A$ not closed with other limit points} & \PI11-hard & \ref{vastarborel}(2)(b)\\
\hline
\makecell{$0$ not isolated in $A$, $A$ \emph{countable} not closed\\ with all other limit points not in $A$} & \PI11-complete & \ref{complcount}(3)\\
\hline
\makecell{$0$ not isolated in $A$,\\ $A$ not closed with other limit points in $A$} & $D_2( \SI11)$-hard & \ref{vastarborel}(2)(c)\\
\hline
\makecell{$0$ not isolated in $A$, $A$ \emph{countable} not closed\\ with other limit points in $A$} & $D_2( \SI11)$-complete & \ref{complcount}(4)\\
\hline
$A$ \SI11-complete & \PI12-complete & \ref{vastarborel}(3)\\
\hline
\end{tabular}\medskip
\caption{Summary of the complexity of $\V^\star_A$ for $A \in \D$\label{tableM*}}
\end{table}
Table \ref{tableM*} summarizes our results for the complexity of $\V^\star_A$
when $A \in \D$.\medskip
Several sections of Gao and Shao's paper \cite{GaoShao2011} are devoted to
different ways of constructing, for every countable $A \in \D$ (which, by
Theorem \ref{necessary}(2), means for every $A$ such that $\U^\star_A \neq
\emptyset$), a Polish $A$-ultrametric space $X$ which is $A$-universal
(i.e.\ such that $X \in \U^\star_A$, and $Y \sqsubseteq X$ for every $Y \in
\U_A$) and ultrahomogeneous. They call such a space $A$-ultrametric Urysohn.
The analogous question for Polish spaces was considered by Sauer in
\cite{Sauer} (beware that Sauer calls homogeneous the spaces we call
ultrahomogeneous, and that his definition of universality is equivalent to
ours only for ultrahomogeneous and complete spaces).
\begin{defin}
We say that a metric space $X$ is \emph{ultrahomogeneous} if every isometry between finite subsets of $X$ can be extended to an isometry of the whole $X$.
\end{defin}
\begin{defin}
Let $A \in \D$. We say that $X \in \V_A$ is \emph{Polish $A$-universal} if $Y
\sqsubseteq X$ for every $Y \in \V_A$ (clearly $X \in \V^\star_A$ must then
hold). If additionally $X$ is ultrahomogeneous then we say it is \emph{$A$-Urysohn}.
\end{defin}
Here we use Corollary \ref{VAD} to extend Sauer's characterization
(\cite[Theorem 4.13]{Sauer}, which is the equivalence between (i) and (iv) in
Theorem \ref{Urysohn} below) and give a different proof of the necessity of
the condition for the existence of $A$-Urysohn spaces. The following property
was isolated in \cite{DLPS}.
\begin{defin}
A triple $(a, b, c)$ of elements of $\RR^+$ is \emph{metric} if $a \leq b + c$, $b
\leq a + c$, and $c \leq a + b$. A set $A \subseteq \RR^+$ satisfies the
\emph{$4$-values condition} if for all pairs of metric triples of numbers in
$A$ of the form $(a, b, x)$ and $(c, d, x)$ there exists $y \in A$ such that
both $(b, c, y)$ and $(a, d, y)$ are metric triples.
\end{defin}
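For example, $\RR^+$ itself satisfies the $4$-values condition: given metric triples $(a,b,x)$ and $(c,d,x)$, the choice $y = \min (b+c,a+d)$ works, since the triangle inequalities for the two given triples yield $a \geq |b-x|$ and $d \geq |c-x|$, whence $a+d \geq |b-c|$, and symmetrically $b+c \geq |a-d|$.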
\begin{theorem}\label{Urysohn}
Let $A \in \D$. The following are equivalent:
\begin{enumerate}[(i)]
\item $A$ satisfies the $4$-values condition and either is closed or $0$
is not a limit point of $A$;
\item $A$ satisfies the $4$-values condition and $\V_A$ is Borel;
\item $A$ satisfies the $4$-values condition and $\V_A$ is \SI11;
\item there exists $X \in \V_A$ which is $A$-Urysohn;
\item $A$ satisfies the $4$-values condition and there exists $X \in
\V_A$ which is Polish $A$-universal.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence between (i), (ii), and (iii) follows immediately from
Corollary \ref{VAD}.
(i) implies (iv) is obtained by Sauer by repeating the classical construction of
the Urysohn space (which in our terminology would be $\RR^+$-Urysohn) using
only spaces in $\V_A$: we amalgamate (using the $4$-values condition) the
finite members of $\V_A$ obtaining an ultrahomogeneous $Z$ with $D(Z) = A$
which isometrically embeds all countable metric spaces using distances in
$A$. We then let $X$ be the completion of $Z$, and we need to check that
(i) guarantees that $X$ does not use distances outside $A$. If $0$ is not a
limit point of $A$ then $Z$ is discrete so that $X=Z \in \V_A$. Otherwise
$D(X) \subseteq \overline{D(Z)} = \overline{A}$, thus if $A$ is closed we
have $X \in \V_A$. Moreover $X$ is still ultrahomogeneous by \cite[Theorem
4.4]{Sauer}.
To prove (iv) implies (v) we need Theorem 3.9 of \cite{Sauer}, stating that
the set of distances of an ultrahomogeneous universal metric space satisfies
the $4$-values condition.
We complete the proof by showing that (v) implies (iii). This is immediate
because if a Polish $A$-universal $X$ exists, then $Y \in \V_A$ if and only
if $Y \sqsubseteq X$, and $\sqsubseteq $ is analytic.
\end{proof}
\section{Isometry and isometric embeddability}\label{polish}
As a first step in our analysis of isometry and isometric embeddability
restricted to $\V_A$ and $\V^\star_A$ we prove the following proposition,
which also answers the first part of \cite[Question 3]{ClemensPreprint} by
showing that if \( A,A' \in \D \) and \( A \subseteq A' \), then \(
{\isom^\star_A} \leq_B {\isom^\star_{A'} } \).
\begin{proposition} \label{questionclemens}
Let $A,A' \in \D$ and assume $A \subsetneq A'$. Then ${\isom_A^\star} \leq_B
{\isom_A} \leq_B {\isom_{A'}^\star} \leq_B {\isom_{A'}}$ and
${\sqsubseteq_A^\star} \leq_B {\sqsubseteq_A} \leq_B {\sqsubseteq_{A'}^\star}
\leq_B {\sqsubseteq_{A'}}$.
\end{proposition}
\begin{proof}
For any $A$ we have $\V^\star_A \subseteq \V_A$ and hence ${\isom_A^\star}
\leq_B {\isom_A}$ and ${\sqsubseteq^\star_A} \leq_B {\sqsubseteq_A}$. Thus we
need only to prove ${\isom_A} \leq_B {\isom_{A'}^\star}$ and ${\sqsubseteq_A}
\leq_B {\sqsubseteq_{A'}^\star}$.
Suppose first that $A'=A\cup\{ r_0\} $ and $a<r_0$ for all $a\in A$. Fix a
space $Z \in \V_A^\star$. Given $X\in \V_A$, define $X'= X \oplus_{r_0} Z \in
\V^\star_{A'}$ by Fact \ref{fact:oplus}: this map is Borel by Proposition
\ref{1531426}. We show that it is the required reduction. Notice that by case
assumption $d_{X'}(x,z)=r_0$ whenever $x\in X$ and $z\in Z$. If $\psi\colon
X\to Y$ is an isometric embedding (respectively, an isometry), then $\psi\cup
{\rm id}_Z \colon X'\to Y'$ is an isometric embedding (respectively, an
isometry) as well. Conversely, suppose $\psi \colon X'\to Y'$ is an isometric
embedding. Since the distance \( r_0 \) is never realized inside any of \( X
\), \( Y \), and \( Z \), either $\psi(X)\subseteq Y$ and $\psi(Z)\subseteq
Z$, in which case $\psi \restriction X$ witnesses $X \sqsubseteq Y$, or else
$\psi(X)\subseteq Z$ and $\psi(Z)\subseteq Y$. In the latter case, $X
\sqsubseteq Z$ and $Z \sqsubseteq Y$, so again $X \sqsubseteq Y$. If moreover
$\psi$ is onto, then we have either $\psi(X)=Y$, or else $\psi(X)=Z$ and
$\psi(Z)=Y$, which allows us to conclude that $X \isom Y$.
Otherwise there exist $r_0 \in A' \setminus A$ and $r_1 \in A'$ with
$r_0<r_1$. Notice that there is $Z\in \V_{A'}^\star$ with the property that
$d_Z(z_0,z_1)=r_0$ for exactly one pair of points $\{ z_0,z_1\} \subseteq
Z$. Indeed $A'\setminus\{ r_0\}\in \mathcal D $ by Theorem \ref{clemensrealized}, so let $Z = W \oplus_{r_1} V$ where $W \in \V_{A' \setminus \{r_0\}
}^\star$ and $V = \{z_0,z_1\}$ consists of two points at distance $r_0$, so
that $d_Z(w,z_i) \geq r_1$ for every $w \in W$ and $i \in \{0,1\}$.
We show that the mapping that associates to every $X\in \V_A$ the space
$X\times Z$ with the product metric $d_{X\times Z}((x,z),(x',z'))=\max
\{d_X(x,x'),d_Z(z,z')\}$ is the required Borel reduction. Borelness is proved
using Remark \ref{rem:coding}.
If $\psi \colon X\to Y$ is an isometric embedding (respectively, an
isometry), then $\psi\times {\rm id}_Z \colon X\times Z\to Y\times Z$ is an
isometric embedding (respectively, an isometry). Conversely, suppose $\psi
\colon X\times Z\to Y\times Z$ is an isometric embedding. Then for every
$x\in X$ there exists $y\in Y$ such that $\psi(x,z_0)\in\{ (y,z_0),(y,z_1)\}
$, since in both $X\times Z$ and $Y\times Z$ the points having second
coordinate equal to $z_0$ or $z_1$ are the only points that realize the
distance $r_0$ by our choice of $Z$. This defines a function $\varphi \colon
X\to Y$ with the property that for every $x\in X$ we have
$\psi(x,z_0)=(\varphi(x),z_i)$ for some $i\in\{ 0,1\} $. In order to prove
that $\varphi$ is an isometric embedding, let $x,x'\in X$ and let
$r=d_X(x,x')=d_{X\times Z}((x,z_0),(x',z_0))=d_{Y\times
Z}(\psi(x,z_0),\psi(x',z_0))$. Since we have $\psi(x,z_0)=(\varphi(x),z_i)$
and $\psi(x',z_0)=(\varphi(x'),z_j)$ for some $i,j\in\{ 0,1\} $, we obtain $r
= \max \{d_Y(\varphi(x),\varphi(x')),d_Z(z_i,z_j)\}$; recalling that $r\neq
r_0 = d_Z(z_0,z_1)$, the equality $d_Y(\varphi(x), \varphi(x'))=r$ follows.
So $\varphi$ is an isometric embedding. Assume now that $\psi$ is surjective
and let $y\in Y$: by surjectivity there are $(x,z)$ and $(x',z')$ such that $\psi(x,z)=(y,z_0)$ and $\psi(x',z')=(y,z_1)$.
Recall that $r_0\notin A$ is not realized in $X\in \V_A$ and is realized only
by the pair $\{ z_0,z_1\} $ in $Z$. Since $d_{Y \times
Z}((y,z_0),(y,z_1))=r_0$ we have that $\{ z,z'\} =\{ z_0,z_1\} $. If $z=z_0$
then $\varphi(x)=y$, while if $z'=z_0$ then $\varphi(x')=y$. So $\varphi$ too
is surjective.
\end{proof}
\begin{remark}\label{remark}
One may be interested in analogues of Proposition~\ref{questionclemens}
obtained by restricting the relations of isometry and isometric embeddability
to a given class of Polish metric spaces.
The same proof shows that the conclusion of the proposition holds for
ultrametric, zero-dimensional, countable, locally compact, $\sigma $-compact,
and discrete spaces. This is the case because such classes are closed under
finite products and the operations $\oplus_r$, and whenever they have an
element in \( \V^\star_{A'} \), for every $r_0 \in A'$ they also have an
element in $ \V_{A'\setminus\{ r_0\} }$ and contain a space consisting of two
points at distance $r_0$. For the classes mentioned above the latter property
follows from Theorem \ref{necessary}: this is clear for ultrametric,
zero-dimensional, countable, and discrete spaces; for the classes of locally
compact and $\sigma $-compact spaces notice that removing a point from a
$\sigma $-compact subset of $\RR$ yields a $\sigma $-compact set. For
contrast, this argument does not work for compact metric spaces\footnote{In a
previous version of the paper we claimed that this was the case: we thank the
referee for pointing out our mistake.} and we do not know whether
Proposition~\ref{questionclemens} holds restricted to this class.
One can also restrict attention to spaces of a fixed dimension different from
$0$ and obtain the same results even if these classes are not closed under
finite products: this is because in the last paragraph of the proof of
Proposition~\ref{questionclemens} we can require (by Theorem
\ref{necessary}(1)) $Z$ to be zero-dimensional, so that $X \times Z$ has the
same dimension as $X$.
\end{remark}
We will use the following folklore construction to turn a countable graph
into a discrete metric space. Fix $r,r' \in \RR^+$ with $0<r<r'\leq 2r$. To
each graph $G$ on $\omega$ associate the metric space $X_G=(G,d_G)$ by
letting $d_G(a,b)=r$ if $(a,b)$ is an edge in $G$, and $d_G(a,b)=r'$ if
$a\neq b$ and $(a,b)$ is not an edge in $G$. The following lemma is
straightforward.
\begin{lemma}\label{graphs}
The map $G \mapsto X_G$ Borel reduces countable graph isomorphism and
countable graph embeddability to $\isom_{\{0,r,r'\}}$ and
$\sqsubseteq_{\{0,r,r'\}}$, respectively. Moreover, if we restrict the map to
nontrivial graphs (i.e.\ different from the empty graph and from the
countable clique), we get a reduction to $\isom^\star_{\{0,r,r'\}}$ and
$\sqsubseteq^\star_{\{0,r,r'\}}$.
\end{lemma}
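Notice that the requirement $r'\leq 2r$ is exactly what makes $d_G$ a metric: every side of a triangle in $X_G$ has length at most $r'$, while the sum of the other two sides is at least $2r\geq r'$.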
\subsection{Isometry}
The study of the complexity of $\isom_A^\star$ was started by Clemens in
\cite{ClemensPreprint}, where such relation is called $E_A$. Clemens' main
results about $ \isom_A^\star$ are summarized in the following theorem.
\begin{theorem}[{\cite[Theorem 23]{ClemensPreprint}}] \label{thmclemens}
Let \( A \in \D \).
\begin{enumerate}
\item If $A$ contains a right neighborhood of $0$, then $ \isom_A^\star$ is
Borel bireducible with any complete orbit equivalence relation.
\item If $A$ is dense in some right neighborhood of $0$ but does not
contain any of them, then both the action of the density ideal on $
\pre{\omega }{2} $ and countable graph isomorphism are Borel reducible
to $ \isom_A^\star$. In particular, $ \isom_A^\star$ is strictly above
countable graph isomorphism with respect to \(\leq_B \).
\item If $A$ is not dense in any right neighborhood of $0$ and either $0$
is a limit point of $A$ or \( A \) is not well-spaced, then $
\isom_A^\star$ is Borel bireducible with countable graph isomorphism.
\item If $A = \{ 0\} \cup \{ r_i \mid i \in \omega \}$ with $0<r_i<r_{i+1}$
is well-spaced, then \( \isom^\star_A \) is Borel bireducible with
isomorphism between reverse trees (as defined in
\cite{ClemensPreprint}).
\item If $A=\{ 0,r_0,\ldots ,r_{n-1}\}$ with $0<r_i<r_{i+1}$ is
well-spaced, then $ \isom_A^\star$ is Borel bireducible with
isomorphism between trees of height $n$. Thus these relations form a
$\leq_B$-strictly increasing chain of equivalence relations
classifiable by countable structures and they are strictly below
countable graph isomorphism.
\end{enumerate}
\end{theorem}
Notice that the conditions considered in Theorem \ref{thmclemens} are
exhaustive. In fact, if $0$ is not isolated in $A$ or $A$ is not well-spaced
then we are in either case (1), (2), or (3). If $0$ is isolated in $A$ and \(
A \) is well-spaced then $A$ is well-founded, since strictly decreasing sequences in a
well-spaced set converge to $0$. Since a well-founded and well-spaced set
has order type \( \leq \omega \), we are either in case (4) or in case (5).
Proposition~\ref{questionclemens} may be used to give a simpler proof of part
(1) of Theorem~\ref{thmclemens}. Let $r$ be such that $[0,r]\subseteq A$. To
any Polish metric space $(X,d)$, associate the space $(X,d')$, where
$d'(x,y)=r \cdot \frac{d(x,y)}{1+d(x,y)} $. This reduces isometry on all
Polish metric spaces to $ \isom_{[0,r)}$, which in turn reduces to $
\isom_A^\star$ by Proposition~\ref{questionclemens}. Since isometry on
arbitrary Polish metric spaces is Borel bireducible with the complete orbit
equivalence relation by \cite[Theorem 1]{Gao2003}, we are done.
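For the reader's convenience, we spell out the (standard) verification that $d'$ is a Polish metric with values in $[0,r)$; this is a routine check and not part of Clemens' original argument. The map $g(t) = r \cdot \frac{t}{1+t}$ is strictly increasing, vanishes exactly at $0$, and is subadditive, since for all $s,t \geq 0$
\[
g(s+t) = r\,\frac{s}{1+s+t} + r\,\frac{t}{1+s+t} \leq r\,\frac{s}{1+s} + r\,\frac{t}{1+t} = g(s) + g(t).
\]
Hence $d'(x,y) = g(d(x,y)) \leq g(d(x,z)+d(z,y)) \leq d'(x,z) + d'(z,y)$, so $d'$ is a metric. Moreover, since $g$ is an increasing bijection between $[0,+\infty)$ and $[0,r)$, a sequence is $d$-Cauchy if and only if it is $d'$-Cauchy, so $(X,d')$ is a Polish metric space as well.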
Theorem \ref{thmclemens} yields the following sufficient condition for
countable graph isomorphism to be Borel reducible to \( \isom^\star_A \).
\begin{corollary} \label{corClemens}
Let \( A \in \D \). If \( A \) is ill-founded or not well-spaced, then
countable graph isomorphism Borel reduces to \( \isom^\star_A \).
\end{corollary}
We will now show how the results from \cite{ultrametric} cited in Section
\ref{term} can be used to complete the description of the behaviour of
$\isom_A^\star$. We begin by proving the converse of Corollary
\ref{corClemens}, i.e.\ we characterize when countable graph isomorphism is
Borel reducible to \( \isom^\star_A\).
\begin{theorem} \label{appendixisom}
Let $A \in \D$. Countable graph isomorphism Borel reduces to $ \isom_A^\star$
if and only if $A$ is either ill-founded or not well-spaced.
\end{theorem}
\begin{proof}
One direction is Corollary~\ref{corClemens}, but for the reader's convenience
we give here an alternative and simpler proof. If $A$ is ill-founded, let $(
r_n)_{n\in\omega }$ be a decreasing sequence in $A$. Since countable graph
isomorphism Borel reduces to $ \isom_{( r_n)_{n \in \omega}}^\star$ by
Theorem \ref{isomuniversalstar}(1), it suffices to use
Proposition~\ref{questionclemens} when $\{ 0 \} \cup \{ r_n \mid n\in\omega \} \subsetneq A$.
If instead $A$ is not well-spaced, fix $r,r'\in A$ with $r<r'\leq 2r$. Lemma
\ref{graphs} gives a Borel reduction of isomorphism between nontrivial
countable graphs to $ \isom^\star_{\{ 0,r,r'\} }$. Then apply
Proposition~\ref{questionclemens} if \( \{ 0,r,r' \} \subsetneq A \).
Finally, assume that $A$ is well-founded and well-spaced. Then $ \V_A^\star=
\U_A^\star$ by Theorem \ref{proponlyultrametric}(2), and countable graph
isomorphism does not Borel reduce to $ \isom_A^\star$ because the latter is
Borel by Theorem \ref{lemmaequivalence}(2).
\end{proof}
We now have the following fairly complete picture of the structure of the
relations $ \isom^\star_A$. Notice that conditions (1)--(4) exhaust all
possible cases for $A$.
\begin{theorem}\label{isomstarA}
Let $A \in \D$.
\begin{enumerate}
\item The relations \( \isom^\star_A \) with \( A \) well-founded and
well-spaced form a strictly increasing chain of order type \( \omega +
1 \) under $\leq_B$, consisting of Borel equivalence relations, and
they are Borel reducible to all the other \( \isom^\star_{A'} \) with
$A' \in \D$.
\item If $A$ is either ill-founded or not well-spaced, and moreover $A$ is
not dense in any right neighborhood of $0$, then $ \isom_A^\star$ is
Borel bireducible with countable graph isomorphism.
\item If $A$ is dense in some right neighborhood of $0$ but does not
contain any such neighborhood, then $\isom_A^\star$ is strictly above
countable graph isomorphism and Borel reducible to any complete orbit
equivalence relation.
\item If $A$ contains a right neighborhood of $0$, then $ \isom_A^\star$ is
Borel bireducible with any complete orbit equivalence relation.
\end{enumerate}
\end{theorem}
\begin{proof}
Parts (2)--(4) follow from Theorem~\ref{thmclemens} and the observations
following it, so let us prove (1).
Let $\alpha$ be the order type of $A$. The fact that $A$ is well-spaced
implies \( 1 \leq \alpha \leq \omega \) and $\V^\star_A = \U^\star_A$ (by
Theorem \ref{proponlyultrametric}(2)). Thus ${\isom^\star_A}$ and
${\isom^\star_{\alpha}}$ are the same relation. By Theorem \ref{lemmaequivalence}(2), when \( \alpha
\) varies between \( 1 \) and \( \omega \) these equivalence relations form a
strictly increasing chain of length $\omega+1$ under $\leq_B$ and they are
Borel equivalence relations. Finally, since \( \isom^\star_{\alpha} \) Borel
reduces to countable graph isomorphism by Theorem \ref{lemmaequivalence}(1),
it follows from Theorem~\ref{appendixisom} that \( \isom^\star_A \) reduces
to any other \( \isom^\star_{A'} \) for \( A' \) not satisfying the
conditions of (1).
\end{proof}
We will now also partially answer the second part of \cite[Question
3]{ClemensPreprint}, which asked whether ${\isom_A} \sim_B {\isom^\star_A}$
for every \( A \in \D \).
\begin{theorem}\label{thm:isomA}
Let $A \in \D$ satisfy at least one of the following conditions:
\begin{enumerate}[(i)]
\item $A$ is not dense in any right neighborhood of $0$;
\item $A$ has maximum;
\item there exists $f \colon A \to A$ which is Polish metric preserving,
injective, and non-surjective.
\end{enumerate}
Then ${\isom_A} \sim_B {\isom^\star_A}$.
\end{theorem}
\begin{proof}
For any $A \in \D$ we have ${\isom^\star_A} \leq_B {\isom_A}$ because
$\V^\star_A \subseteq \V_A$.\smallskip
Assume first that $A$ is not dense in any right neighborhood of $0$. In this
case ${\isom_A}$ is classifiable by countable structures. In fact the
argument of \cite[Proposition 18]{ClemensPreprint} applies not only to
${\isom^\star_A}$ but also to ${\isom_A}$. We distinguish two cases. If $A$
is either ill-founded or not well-spaced, then ${\isom^\star_A}$ is Borel
bireducible with countable graph isomorphism by Theorem \ref{isomstarA}(2),
and hence ${\isom_A} \leq_B {\isom^\star_A}$. If instead $A$ is well-founded
and well-spaced then $\V_A = \U_A$ and $\V^\star_A = \U^\star_A$, so that we
can use Theorem \ref{lemmaequivalence}(1).\smallskip
Now assume that $r = \max A$. Since $ \V_{\{ 0\} }= \V_{\{ 0\} }^\star$, we
may assume that $A$ has more than one element. Fix $Z \in \V^\star_{A
\setminus \{r\}}$ and consider the map sending $X \in \V_A$ to $X' = X
\oplus_r Z \in \V^\star_A$, so that by case assumption $d_{X'}(x,z) =r$
whenever $x \in X$ and $z \in Z$. The map is Borel and we claim that it
witnesses ${\isom_A} \leq_B {\isom^\star_A}$. To show this we use an argument
similar to the one employed in the first part of the proof of Theorem
\ref{lemmaequivalence}(1) in \cite{ultrametric}.
Fix \(
X,Y \in \V_A \). First assume that \( \varphi \colon X \to Y \) witnesses \(
X \isom_A Y \): then \( {\varphi} \cup {\operatorname{id}_Z} \) is a witness
of \( X' \isom^\star_A Y' \).
Conversely, let \( \psi \) be an isometry between \( X' \) and \( Y' \), and
let \( X_0 = \psi^{-1}(Z) \), so that \( \psi (X' \setminus X_0 ) = Y \).
Notice that \( d_{X'}(x_0,x_1) < r \) for every \( x_0,x_1 \in X_0 \) since
\( \psi(x_0),\psi(x_1) \in Z \in \V^\star_{A \setminus \{r\}} \). Hence, by
construction of \( X' \), either \( X_0 \subseteq Z \) or \( X_0 \subseteq X
\). In the former case \( X_0 = Z \) because $\psi(Z)$ cannot intersect both
$Y$ and $Z$, since any two points of $Z$ are less than $r$ apart. Thus \(
\psi(X) = \psi(X' \setminus X_0) = Y \) and \( \psi \restriction X \) is an
isometry between \( X \) and \( Y \).
If instead $X_0 \subseteq X$, we claim that \( \varphi = \psi \restriction
(X \setminus X_0) \cup (\psi \circ \psi) \restriction X_0 \) witnesses \( X
\isom_A Y \). Notice that \( \varphi \) is well-defined because, by
definition, \( \psi( X_0) = Z \subseteq X' \). For the same reason, the range
of \( \varphi \) equals the range of \( \psi \restriction (X \setminus X_0)
\cup \psi \restriction Z = \psi \restriction (X' \setminus X_0) \), thus \(
\varphi \) is a surjection from \( X \) onto \( Y \).
Finally, we check that \( \varphi \) preserves distances. It is clearly
enough to show that for \( x \in X \setminus X_0 \) and \( x' \in X_0 \) we
have \( d_X(x,x') = d_Y(\varphi(x),\varphi(x')) \). Since \( \psi(x) \in Y \)
and \( \psi(x') \in Z \), we have \( d_{Y'}(\psi(x),\psi(x')) = r \), whence
\( d_X(x,x') = d_{X'}(x,x') = d_{Y'}(\psi(x),\psi(x')) = r \). Since \(
\psi(x') \in Z \subseteq X'\) and \( x \in X \), \( d_{X'}(x, \psi(x')) = r
\), therefore \( d_{Y'}(\psi(x), \psi(\psi(x'))) = r \). Since \( \psi(x) =
\varphi(x) \) and \( \psi(\psi(x')) = \varphi(x') \) by definition of \(
\varphi \), we have \( d_Y(\varphi(x),\varphi(x')) =
d_{Y'}(\varphi(x),\varphi(x')) = r = d_X(x,x') \), as required.
\smallskip
Finally, assume that $f$ is as in (iii) and let $A' \subsetneq A$ be the
range of $f$. We map any $X \in \V_A$ to $X' \in \V_{A'}$ by composing the
metric \( d_X \) with $f$: the fact that $f$ is Polish metric preserving
ensures that \( d_{X'} = f \circ d_X \) is still a Polish metric.
Consider the map \( X \mapsto X' \). It is Borel because $f$ is Borel by
Proposition \ref{prop:automaticBorel}, and, since $f$ is injective, it
witnesses ${\isom_A} \leq_B {\isom_{A'}}$. By Proposition~\ref{questionclemens} we have
${\isom_{A'}} \leq_B {\isom_A^\star}$, and thus ${\isom_A} \leq_B
{\isom_A^\star}$.
\end{proof}
It is not obvious when Condition (iii) of Theorem \ref{thm:isomA} holds.
Notice that a sufficient condition for a nondecreasing $f \colon A \to \RR$
to be metric preserving is that for all $r,s,t\in A$, if $s \le t < r \le
s+t$ then $f(r)\le f(s)+f(t)$. Using this we see that Condition (iii) holds
for instance when $A = \QQ^+$ or $A = (\RR^+ \setminus \QQ) \cup \{0\}$, as
witnessed by the map $f(r) = r/(1+r)$. The same is true when $A$ contains an
interval $[a,b]$, as witnessed by
\[
f(r) =
\begin{cases}
r & \text{if $r<a$;}\\
\displaystyle{a + (b-a)\frac{r-a}{1+(r-a)}} & \text{if $r \geq a$.}
\end{cases}
\]
(Notice that $b$ does not belong to the range of $f$.) On the other hand,
Condition (iii) can fail even for countable sets: if $A = \{0\} \cup \{2^k
\mid k \in \ZZ\}$ then every injective Polish metric preserving function $f
\colon A \to A$ satisfies $f(2^k) = 2^{k+z}$ for some $z \in \ZZ$, and hence is
surjective. Notice however that this particular $A$ satisfies Condition (i)
of Theorem \ref{thm:isomA}.
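For the interested reader, here is one possible argument for the claim about $A = \{0\} \cup \{2^k \mid k \in \ZZ\}$; it is only a sketch and does not appear in the sources cited above. Write $f(2^k) = 2^{g(k)}$ with $g \colon \ZZ \to \ZZ$ injective, which is possible since $f$ must map nonzero distances to nonzero distances. A three-point space with sides $2^i \leq 2^j \leq 2^m$ in $A$ satisfies the triangle inequality only if $j = m$ or $i = j = m-1$ (when $j < m$ we have $2^i + 2^j \leq 2^{j+1} \leq 2^m$, with equality only in the latter case). Applying $f$ to these finite (hence Polish) spaces, the triangle inequality in the image gives
\[
g(i) \leq g(m)+1 \text{ for all } i \leq m, \qquad \text{and} \qquad g(m) \leq g(m-1)+1.
\]
Suppose $g(k+1) \leq g(k)$ for some $k$. By injectivity and the first constraint (with $i = k$, $m = k+1$) we get $g(k+1) = g(k)-1$; but then $g(k)-1 \leq g(k+2) \leq g(k+1)+1 = g(k)$, and both values are already taken by $g(k+1)$ and $g(k)$, contradicting injectivity. Therefore $g(k+1) = g(k)+1$ for every $k$, i.e.\ $g(k) = k+z$ for a fixed $z \in \ZZ$, and $f$ is surjective.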
In fact, we do not know if there exists $A \in \D$ which does not satisfy any
of the conditions of Theorem \ref{thm:isomA}. However if such an $A$ exists,
it must be uncountable. To show this, we need the following fact, which might
be of independent interest.
\begin{proposition}\label{prop:countable}
Let $A \in \D$ be such that $A$ is dense in $(a,b)$ and $A \cap (a, +\infty)$
is countable for some $a,b$ with $0<a<b$. Then Condition (iii) of Theorem \ref{thm:isomA}
holds, i.e.\ there exists $f \colon A \to A$ which is Polish metric
preserving, injective, and non-surjective.
\end{proposition}
\begin{proof}
We may assume that $a,b \in A$. We will define a strictly increasing $f$ which
is the identity up to $a$ and maps $A \cap (a, +\infty)$ into $(a,b)$, so
that $b \in A$ is not in the range of $f$.
Let $(a_n)_{n\in \omega}$ be an enumeration without repetitions of $A \cap
(a, +\infty)$. Since $A$ is dense in $(a,b)$ we can recursively define
$f(a_n)$ so that:
\begin{itemize}
\item $f(a_n)<a_n$;
\item if $c_0=a < c_1 < \dots < c_{n+1}$ enumerate in increasing order
$\{a\} \cup \{ a_m \mid m \leq n\}$ then the slope of the segment
with endpoints $(c_i, f(c_i))$ and $(c_{i+1}, f(c_{i+1}))$ is larger
than the slope of the segment with endpoints $(c_{i+1}, f(c_{i+1}))$
and $(c_{i+2}, f(c_{i+2}))$ for each $i<n$.
\end{itemize}
It remains to prove that $f$ is Polish metric preserving. Since $f$ is
nondecreasing, by Proposition \ref{Polish_metric_pres} and the observation
after Theorem \ref{thm:isomA}, it suffices to show that for all $r,s,t\in A$,
if $s \le t < r \le s+t$ then $f(r)\le f(s)+f(t)$. By construction we have
that if $x,y,z \in A$ with $x<y<z$ then $S(y,z) \leq S(x,z) \leq S(x,y)$
where $S(v,w)$ is the slope of the segment with endpoints $(v, f(v))$ and
$(w, f(w))$. From this it follows that for $r,s,t$ as above $S(t,r) \leq
S(0,s)$, whence
\[
f(r) =f(t) + (r-t) \cdot S(t,r) \leq f(t) + s \cdot S(t,r) \leq f(t) + s \cdot S(0,s) = f(t) + f(s).
\]
\end{proof}
\begin{theorem} \label{thm:countable}
If $A \in \D$ is countable then ${\isom_A} \sim_B {\isom^\star_A}$.
\end{theorem}
\begin{proof}
If $A$ is not dense in any right neighborhood of $0$ we are in case (i) of
Theorem \ref{thm:isomA}. Otherwise by Proposition \ref{prop:countable} we are
in case (iii) of Theorem \ref{thm:isomA}.
\end{proof}
\subsection{Isometric embeddability}
Recall again that if $A$ is well-founded and well-spaced, then $ \V_A^\star=
\U_A^\star$ and the order type of $A$ is $\leq \omega$. Hence in this case
the structure of the relations $\sqsubseteq_A^\star$ is described by (i) and
(iv) of Theorem \ref{lemmaequivalence}(4). The remaining case is settled by
the following proposition.
\begin{proposition}\label{prop:completeqo}
Let $A\in \D$. If $A$ is either ill-founded or not well-spaced, then
$\sqsubseteq_A^\star$ is Borel bireducible with a complete analytic
quasi-order.
\end{proposition}
\begin{proof}
First notice that, even if $\V_A^\star$ may not be a standard Borel space,
the relation $\sqsubseteq_A^\star$ still Borel reduces to isometric
embeddability on all Polish metric spaces, which is a complete analytic
quasi-order on a standard Borel space (in fact, Louveau and Rosendal
\cite{louros} showed that isometric embeddability restricted to ultrametric
Polish spaces is a complete analytic quasi-order, and we strengthened this in
\cite{cammarmot} by showing that it has the stronger property of being
invariantly universal).
If $A$ is ill-founded, let $( r_n)_{n\in\omega }$ be a decreasing sequence in
$A$, and let \( A' = \{ 0 \} \cup \{ r_n \mid n \in \omega \} \). Then
isometric embeddability on $\U^\star_{A'}$ is a complete analytic quasi-order
by Theorem \ref{isomuniversalstar}(2). Hence so is $\sqsubseteq^\star_{A'}$
because $\U^\star_{A'} \subseteq \V^\star_{A'}$. By applying
Proposition~\ref{questionclemens} in case \( A' \subsetneq A \), we get the
desired result.
Suppose now that there are $r,r'\in A$ with $r<r'\leq 2r$. Then Lemma
\ref{graphs} yields a Borel reduction of embeddability between nontrivial
graphs on $\omega $ to $\sqsubseteq_{\{ 0,r,r'\} }^\star$. Now apply
Proposition~\ref{questionclemens} again in case \( \{0,r,r'\} \subsetneq A
\).
\end{proof}
Summing up, we have the following full description of the relations \(
\sqsubseteq^\star_A \) for \( A \in \D \).
\begin{theorem}\label{sqsubseteqstarA}
Let $A \in \D$.
\begin{enumerate}
\item The relations \( \sqsubseteq^\star_A \) with \( A \) well-founded and
well-spaced form a strictly increasing chain of order type \( \omega +
1 \) consisting of Borel quasi-orders, i.e.\ the quasi-orders \(
\sqsubseteq^\star_\alpha \) for \( \alpha \leq \omega \) from (i) and
(iv) of Theorem \ref{lemmaequivalence}(4). These relations are Borel
reducible to all remaining \( \sqsubseteq^\star_{A'} \) for \( A' \in \D \).
\item The relations \( \sqsubseteq^\star_A \) when \( A \) is either
ill-founded or not well-spaced are Borel bireducible with a complete
analytic quasi-order.
\end{enumerate}
\end{theorem}
The following is the analogue of Theorem \ref{thm:isomA}, but in this case
the result is unconditional.
\begin{corollary}\label{corsqsubseteqA}
If \( A \in \D \) then ${\sqsubseteq_A} \sim_B {\sqsubseteq^\star_A}$.
\end{corollary}
\begin{proof}
For any $A$ we have ${\sqsubseteq^\star_A} \leq_B {\sqsubseteq_A}$ because
$\V^\star_A \subseteq \V_A$.
If we are in case (1) of Theorem \ref{sqsubseteqstarA}, ${\sqsubseteq_A} \leq_B
{\sqsubseteq^\star_A}$ follows from Theorem \ref{lemmaequivalence}(3) because
in this case $\V_A = \U_A$ and $\V^\star_A = \U^\star_A$ by Theorem
\ref{proponlyultrametric}(2). If we are in case (2), notice that
${\sqsubseteq_A}$ is Borel reducible to isometric embeddability on arbitrary
Polish spaces. The latter, being an analytic quasi-order on a standard Borel
space, is Borel reducible to any complete analytic quasi-order and hence to
${\sqsubseteq^\star_A}$.
\end{proof}
We observe that Theorem \ref{sqsubseteqstarA} holds even if we further
restrict the relation $\sqsubseteq^\star_A$ to zero-dimensional spaces. In
fact, in case (1) all elements of $\V^\star_A$ are discrete and hence
zero-dimensional, while in case (2), which is based on
Proposition~\ref{prop:completeqo}, we use Remark \ref{remark}. For any other
topological dimension we have that $\V^\star_A$ contains such spaces if and
only if $A$ includes a right neighborhood of $0$ (by minor modifications of
the proof of Theorem \ref{proponlyultrametric}(1)). Theorem
\ref{sqsubseteqstarA} holds also in this case by Remark \ref{remark}, but
only case (2) can occur. Similar observations apply to Corollary
\ref{corsqsubseteqA}.
The analogous problems about isometry between spaces of fixed dimension
(different from $\infty$) appear to be more delicate and are discussed more
in detail in \cite[Question 7.1 and the ensuing discussion]{ultrametric}.
The only result about isometry that we know still holds after fixing
dimension is Theorem \ref{thm:isomA}, because we can still use Remark
\ref{remark}.
Proposition \ref{prop:completeqo} can be strengthened by replacing
completeness with invariant universality (the notion originates in
\cite{frimot}, and was formally introduced in \cite{cammarmot}).
\begin{defin}\label{def:invariantlyuniversal}
Let the pair \( (S,E) \) consist of an analytic quasi-order \( S \) and an
analytic equivalence relation \( E \subseteq S \), with both relations
defined on the same standard Borel space $X$. Then $(S,E)$ is
\emph{invariantly universal} (for analytic quasi-orders) if for any analytic
quasi-order $R$ there is a Borel $B\subseteq X$ invariant under $E$ such that
$R \sim_B {S \restriction B}$.
When $E$ is isometry and $S$ is isometric embeddability on some class of
metric spaces, we just say that $S$ is invariantly universal.
\end{defin}
Notice that if $(S,E)$ is invariantly universal, then $S$ is complete for
analytic quasi-orders.
A notion closely related to invariant universality is the following (see
\cite{ultrametric}). Given a pair \( (S,E) \) as above, we denote by \( S/E
\) the \( E \)-quotient of \( S \), i.e.\ the quasi-order on $X/E$ induced by
$S$. If $F$ and $E$ are equivalence relations on sets $X$ and $Y$ and $f
\colon X/F\to Y/E$, then a \emph{lifting} of $f$ is a function $ \hat f
\colon X\to Y$ such that $[ \hat f (x)]_E=f([x]_F)$ for every $x\in X$.
\begin{defin}\label{def:cB}
Let $(R,F)$ and $(S,E)$ be pairs consisting of a quasi-order and an
equivalence relation on some standard Borel spaces, with $F \subseteq R$ and
$E \subseteq S$.
We say that $(R,F)$ is \emph{classwise Borel isomorphic} to $(S,E)$, in
symbols $(R,F) \simeq_{cB} (S,E)$, if there is an isomorphism of quasi-orders
\( f \) between \( R/F \) and \( S/E \) such that both \( f \) and \( f^{-1}
\) admit Borel liftings.
When the equivalence relations $F$ and $E$ are clear from the context we just
say that $R$ is classwise Borel isomorphic to $S$, and write \( R \simeq_{cB}
S \).
\end{defin}
It is easy to see that if $(R,F)$ is invariantly universal and for some Borel
$E$-invariant $B$ we have $(R,F) \simeq_{cB} (S \restriction B, E
\restriction B)$ then $(S,E)$ is invariantly universal as well.
\begin{lemma} \label{lemma:invuniv1}
Let \( A = \{ 0,r,r' \} \) with \( r < r' \leq 2r \). Then \( \V^\star_A \)
is Borel in \( F(\mathbb{U}) \) and \( \sqsubseteq^\star_A \) is invariantly
universal.
\end{lemma}
\begin{proof}
The set $\V^\star_A$ is Borel by Theorem \ref{vastarborel}. We now show that
embeddability between nontrivial countably infinite graphs is classwise Borel
isomorphic to the restriction of $\sqsubseteq^\star_A$ to the class of
infinite spaces. This suffices because embeddability between these graphs is
invariantly universal by \cite{frimot}. First notice that the class of
infinite spaces in $\V^\star_A$ is Borel because it is the collection of \( X
\in \V^\star_A \) satisfying
\[
\forall n \in \omega \, \exists m \in \omega \, \forall i \leq n (\psi_m(X) \neq \psi_i(X)).
\]
The classwise Borel isomorphism between the quotient quasi-orders is the
quotient of the map introduced before Lemma \ref{graphs}. The inverse of this
map is induced by the Borel function $X\mapsto G_X$ where $(n,m)$ is an edge
in $G_X$ if and only if $d_X(\psi_n(X),\psi_m(X))=r$.
\end{proof}
\begin{remark} \label{rmk:invuniv1}
The proof of \cite[Theorem 3.9]{frimot} actually shows that the embeddability
relation is already invariantly universal when restricted to the (Borel)
class of connected graphs on \(\omega\): this ensures that the restriction of
\( \sqsubseteq^\star_A \) (for $A$ as in the hypothesis of Lemma
\ref{lemma:invuniv1}) to spaces in which every point realizes the distance \(
r \) is still invariantly universal.
\end{remark}
\begin{lemma} \label{lemma:invuniv2}
Let \( A = \{ 0 \} \cup \{ r_n \mid n \in \omega \} \) be well-spaced with \(
(r_n)_{n \in \omega} \) a strictly decreasing sequence converging to \( 0 \).
Then \( \V^\star_A \) is Borel in \( F(\mathbb{U}) \) and \(
\sqsubseteq^\star_A \) is invariantly universal.
\end{lemma}
\begin{proof}
The fact that under the hypotheses of the lemma the set \( \V^\star_A \) is Borel follows from
Theorem~\ref{proponlyultrametric}(2): since \( A \) is well-spaced, then \(
\V^\star_A = \U^\star_A \), and the latter is a Borel subset of \(
F(\mathbb{U}) \). The fact that \( \sqsubseteq^\star_A \) is invariantly
universal is proved in~\cite[Theorem 5.19]{cammarmot} for the special case \(
r_n = 2^{-n} \), but notice that the same proof works with any other choice
for the \( r_n \)'s as well.
\end{proof}
\begin{remark}\label{rmk:decreasing}
The proof of~\cite[Theorem 5.19]{cammarmot} actually shows that \(
\sqsubseteq^\star_A \) (for $A$ as in the hypothesis of Lemma
\ref{lemma:invuniv2}) is already invariantly universal when restricted to
spaces \( X \in \V^\star_A \) such that for all \( n \in \omega \) there is
\( m \in \omega \) with \( d_X(\psi_n(X),\psi_m(X)) = r_0 \).
\end{remark}
\begin{theorem}\label{thm:invuniv}
Let \( A \in \D \), and assume that \( A \) is either ill-founded or not
well-spaced. Then \( \sqsubseteq^\star_A \) is invariantly universal, meaning
that\footnote{Formally, the definition of invariant universality is given
only for quasi-orders with a standard Borel domain
(Definition~\ref{def:invariantlyuniversal}). Here we are considering the
natural generalization of this concept to quasi-orders whose domain is an
arbitrary Borel space.} for every analytic quasi-order \( R \) on a standard
Borel space there is a Borel subset \( C' \) of \( F(\mathbb{U}) \) invariant
under isometry such that $C' \subseteq \V^\star_A$ and \( R \sim_B
{\sqsubseteq \restriction C'} \).
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:invuniv1} and Lemma~\ref{lemma:invuniv2}, we can assume
that \( A \) is neither of the form \( \{ 0,r,r' \} \) with \( r < r' \leq 2r
\), nor of the form \( \{ 0 \} \cup \{ r_n \mid n \in \omega \} \) with \(
(r_n)_{n \in \omega} \) a strictly decreasing sequence converging to \( 0 \)
such that \( 2r_{n+1} < r_n \).
\begin{claim} \label{claim:invuniv1}
There exist \( A' \subseteq A \), \( B \subseteq \V^\star_{A'} \), \( r_0 \in
A' \), and \(r_1 , \bar{r} \in A \) such that:
\begin{enumerate}
\item \( B \) is a Borel subset of \( F(\mathbb{U}) \) which is invariant
under isometry;
\item \( \sqsubseteq \restriction B \) is invariantly universal;
\item \(r_0, r_1 < \bar{r}\), \(r_0 \neq r_1\), and \( r \leq \bar{r} \)
for all \( r \in A' \);
\item for every \( X \in B \) and for every countable dense \( D \subseteq
X \) it holds
\[
\forall x \in X \, \exists y \in D \, (d_X(x,y) = r_0).
\]
\end{enumerate}
\end{claim}
\begin{proof}
Suppose first that \( A \) is not well-spaced, and let \( r,r' \in A \) be
such that \( r < r' \leq 2r \). Set \( A' = \{ 0, r, r' \} \) and \( r_0 =
r \), and pick \( \bar{r} \in A \cap (r', + \infty) \) if this set is
nonempty, and \( \bar{r} = r' \) otherwise. Finally, let \( r_1 \) be any
element of \( A \) distinct from \( r_0 \) and smaller than \( \bar{r} \);
indeed, if \( \bar{r} \neq r' \) we can just take \( r_1 = r' \), otherwise
the existence of such an \( r_1 \) is guaranteed by the fact that \( \bar{r}
= r' = \max A \) and \( A \neq A' \). These choices ensure that (3) is
satisfied.
Since $\V^\star_{A'}$ is Borel by Lemma \ref{lemma:invuniv1} the set \( B
\subseteq F(\mathbb{U}) \) consisting of the $X \in \V^\star_{A'}$ such that
\[
\forall n\, \exists m\, \forall i \leq n (\psi_m(X) \neq \psi_i(X)) \wedge
\forall n\, \exists m (d_X(\psi_n(X),\psi_m(X)) = r)
\]
is Borel. Notice that every \( X \in B \) is discrete and infinite, so that
the \( \psi_n \)'s enumerate the entire \( X \) and no distances outside $A'$
are possible. It easily follows that \( B \) is invariant under isometry so
that (1) is satisfied. Condition (4) is satisfied as well by the second part
of the definition of \( B \) and the fact that we set \( r_0 = r \). Finally,
condition (2) follows from Lemma~\ref{lemma:invuniv1} and
Remark~\ref{rmk:invuniv1}.
Assume now that \( A \) is well-spaced and ill-founded, and fix a strictly
decreasing sequence \( (r_n)_{n \in \omega} \) in \( A \); since \( A \) is
well-spaced, such a sequence must converge to \( 0 \). We may moreover assume
without loss of generality that there is \( \bar{r} \in A \) with \( r_0 <
\bar{r} \) (otherwise we shift the decreasing sequence by one). Then setting
\( A' = \{ 0 \} \cup \{ r_n \mid n \in \omega \} \) we get that (3)
is satisfied. Moreover, \( A' \) is well-spaced by case assumption, hence \(
\V^\star_{A'} \) is Borel by Lemma~\ref{lemma:invuniv2}.
Let \( B \) be the collection of those \( X \in \V^\star_{A'} \) such that
\[
\forall n \exists m (d_X(\psi_n(X),\psi_m(X)) = r_0).
\]
The set \( B \) is clearly Borel in \( F(\mathbb{U}) \). We will now show
that (4) is satisfied for such \( B \): since condition (4) is preserved by
isometry, it will also follow that \( B \) is invariant under isometry, i.e.\
that (1) is satisfied. So let \( X \in B \), let \( D \) be dense in \( X \),
and let \( x \in X \) be arbitrary. Let \( n \in \omega \) be such that \(
d_X(\psi_n(X),x) < r_0 \), and let \( m \in \omega \) be such that \(
d_X(\psi_n(X),\psi_m(X)) = r_0 \), which exists because \( X \in B \). Then
since \( X \) is ultrametric by Theorem~\ref{proponlyultrametric}(2) and the
fact that \( A' \) is well-spaced, we also get \( d_X(x, \psi_m(X)) = r_0 \).
Using the density of \( D \), pick \( y \in D \) such that \(
d_X(\psi_m(X),y) < r_0 \): using again the fact that \( X \) is ultrametric,
we get \( d_X(x,y) = r_0 \), as required. Finally, part (2) follows from
Lemma~\ref{lemma:invuniv2} and Remark~\ref{rmk:decreasing}.
\end{proof}
\begin{claim} \label{claim:invuniv2}
Let \( A' \), \( B \subseteq \V^\star_{A'} \), \( r_0 \), \(r_1 \), and \(
\bar{r} \) be as in Claim~\ref{claim:invuniv1}. Then there is a Borel map \(
f \colon B \to \V^\star_A \) such that
\begin{enumerate}
\item \( C = [f(B)]_{\cong} = \{ Y \in F(\mathbb{U}) \mid \exists X \in B
(f(X) \cong Y) \} \) is Borel (and obviously invariant under
isometry);
\item \( f \) reduces \( \sqsubseteq \restriction B \) to \(
\sqsubseteq^\star_A \) and \( \cong \restriction B \) to \(
\cong^\star_A \);
\item there is a Borel \( g \colon C \to B \) such that \( (g \circ f)(X)
\cong X \) for all \( X \in B \) and \( (f \circ g)(Y) \cong Y \) for
all \( Y \in C \).
\end{enumerate}
\end{claim}
\begin{proof}
Fix \( W \in \V^\star_{A \setminus \{ r_0, r_1, \bar{r} \}} \), and let \( Z =
W \oplus_{\bar{r}} V \), where \( V \) consists of two points \(z_0\) and \(
z_1 \) at distance \( r_1 \). Notice that by Fact \ref{fact:oplus} \( Z \in
\V^\star_{A \setminus \{ r_0 \}} \) and that \( z_0, z_1 \) are the unique
points of \(Z \) which realize the distance \( r_1 \). Moreover, they are
isolated in \( Z \), so they belong to any dense subset of \( Z \). Finally,
notice that \( d_Z(z_0,z) = d_Z(z_1,z) \) for any \( z \in W \): it follows
that if \( X \in B \subseteq \V^\star_{A'} \), the choice of the gluing point
in \( X \) (because of Claim~\ref{claim:invuniv1}(3)) and of \( z_0 \) or \(
z_1 \) as gluing point in $Z$ does not change the space \( X
\oplus_{\bar{r}} Z \).
Let now \( f \) be the Borel map
\[
f \colon B \to \V^\star_A, \qquad X \mapsto \widetilde{X} = X \oplus_{\bar{r}} Z,
\]
where \( X \) and \( Z \) are glued using one of \( z_0, z_1 \). Notice that
\( \widetilde{X} \in \V^\star_A \) because \( r_0 \in A' \), \( X \in
\V^\star_{A'} \), and \( Z \in \V^\star_{A \setminus \{ r_0 \}} \), so that
\( f \) is well-defined.
It is not hard to see that \( f \) satisfies (2). Indeed, if \( \varphi \) is
an isometry (respectively, an isometric embedding) between \( X, X' \in B \),
then \( \varphi \cup \mathrm{id}_Z \) is an isometry (respectively, an
isometric embedding) between \( \widetilde{X} \) and \( \widetilde{X}' \).
Conversely, if \( \psi \) is an isometry (respectively, an isometric
embedding) between \( \widetilde{X} \) and \( \widetilde{X}' \), then \( \psi
\restriction X \) is an isometry (respectively, an isometric embedding)
between \( X \) and \( X' \) because of Claim~\ref{claim:invuniv1}(4) and the
fact that \( r_0 \notin D(Z) \).
We now check that \( C = [f(B)]_{\cong} \) is Borel in \( F(\mathbb{U}) \).
First observe that since topologically \( \widetilde{X} \) is a direct sum
of \( X \) and \( Z \), for any dense subset $D$ of $\widetilde{X}$ one has
$\forall x\in X\ \exists y\in D\ d_X(x,y)=r_0$. Together with the facts that
\( r_0 \notin D(Z) \) and $r_0 \neq \bar r$, this allows us to recover \( X
\) from the space \( \widetilde{X} \) as the completion (i.e.\ closure in \(
\mathbb{U} \)) of
\[
\{ \psi_n(\widetilde{X}) \mid \exists m (d(\psi_n(\widetilde{X}),\psi_m(\widetilde{X})) = r_0) \},
\]
and \( Z \) as the completion of
\[
\{ \psi_n(\widetilde{X}) \mid \neg \exists m (d(\psi_n(\widetilde{X}),\psi_m(\widetilde{X})) = r_0) \}.
\]
We now generalize this process to an arbitrary \( Y \in F(\mathbb{U}) \). Let
\( \mathrm{Rlz}(n,Y,r) \) be an abbreviation for the Borel condition
\[
\exists m (d(\psi_n(Y),\psi_m(Y)) = r),
\]
and set
\[
X(Y) = \mathrm{cl}(\{ \psi_n(Y) \mid \mathrm{Rlz}(n,Y,r_0) \})
\]
and
\[
Z(Y) = \mathrm{cl}(\{ \psi_n(Y) \mid \neg \mathrm{Rlz}(n,Y,r_0) \}).
\]
Notice that the maps \( Y \mapsto X(Y) \) and \( Y \mapsto Z(Y) \) are Borel
functions from \( F(\mathbb{U}) \) into itself, and for \( X \in B \) we have
\( X(\widetilde{X}) \cong X \) and \( Z(\widetilde{X}) \cong Z \). Then for an
arbitrary \( Y \in F(\mathbb{U}) \) we have that \( Y \in C \) if and only if
\( \exists X \in B \, (Y \cong \widetilde{X}) \), if and only if \( X(Y) \in
B \), \( Z(Y) \cong Z \), and \( Y \cong X(Y) \oplus_{\bar{r}} Z(Y) \) (where
the two spaces are glued via one of the only two points in \( Z(Y) \)
realizing \( r_1 \) in \( Z(Y) \)), if and only if
\begin{multline*}
X(Y) \in B \wedge Z(Y) \cong Z \wedge
\exists n [\psi_n(Y) \in Z(Y) \wedge \mathrm{Rlz}(n,Y,r_1) \wedge \\
\forall m,k (\psi_m(Y) \in X(Y) \wedge \psi_k(Y) \in Z(Y) \Rightarrow d(\psi_m(Y),\psi_k(Y)) = \max \{ \bar{r},d(\psi_k(Y),\psi_n(Y)) \})].
\end{multline*}
This is a Borel condition (here we are also using the fact that, since
isometry on \( F(\mathbb{U}) \) is Borel bireducible with an orbit
equivalence relation, the isometry class of a fixed element \( Z \) is
Borel), whence \( C \) is a Borel subset of \( F(\mathbb{U}) \).
Finally, let
\[
g \colon C \to B, \qquad Y \mapsto X(Y),
\]
where \( X(Y) \) is as above. As already noticed, this map is Borel and \( (g
\circ f)(X) = g(\widetilde{X}) = X(\widetilde{X}) \cong X \) for every \( X
\in B \). Conversely, for every \( Y \in C \) we have \( (f \circ g)(Y) =
\widetilde{X(Y)} \cong X(Y) \oplus_{\bar{r}} Z \): since \( Y \cong X(Y)
\oplus_{\bar{r}} Z(Y) \) where \( Z(Y) \cong Z \) and \( X(Y) \) and \( Z(Y)
\) are glued using one of the only two points of \( Z(Y) \) which realize the
distance \( r_1 \) in \( Z(Y) \), it follows that \( (f \circ g)(Y) \cong
X(Y) \oplus_{\bar{r}} Z \cong X(Y) \oplus_{\bar{r}} Z(Y) \cong Y \).
\end{proof}
Notice that conditions (2) and (3) of Claim~\ref{claim:invuniv2} imply that
\( g \) reduces \( \sqsubseteq \restriction C \) and \( \cong \restriction C
\) to \( \sqsubseteq \restriction B \) and \( \cong \restriction B \),
respectively.
Let now \( R \) be an arbitrary analytic quasi-order on a standard Borel
space. By Claim~\ref{claim:invuniv1} there is \( B' \subseteq B \) Borel (in
both \( B \) and \( F(\mathbb{U}) \)) and invariant under isometry such that
\( R \sim_B {\sqsubseteq \restriction B'} \); let \( h_1 \colon \dom(R) \to
B' \) and \( h_2 \colon B' \to \dom(R) \) be witnesses of this last fact. Let
\( C' = [f(B')]_{\cong} = \{ Y \in F(\mathbb{U}) \mid \exists X \in B' (f(X)
\cong Y) \} \). Then \( C' \) is invariant under isometry by definition, and
it is a Borel subset of \( F(\mathbb{U}) \): indeed, this follows from the
fact that \( C'= g^{-1}(B') \) by Claim~\ref{claim:invuniv2}(3), and that \(
C \) is Borel in \( F(\mathbb{U}) \) by Claim~\ref{claim:invuniv2}(1).
Moreover, \( C' \subseteq C \subseteq \V^\star_A \). Finally, \( f \circ h_1
\) witnesses \( R \leq_B {\sqsubseteq \restriction C'} \), while \( h_2 \circ
(g \restriction C') \) witnesses \( {\sqsubseteq \restriction C'} \leq_B R \)
by parts (2) and (3) of Claim~\ref{claim:invuniv2}.
\end{proof}
\section{Open problems}
Theorem \ref{isomstarA} gives a fairly neat picture of the behaviour of the
relations $ \isom_A^\star$, except for the following case.
\begin{question}\label{qA}
If $A \in \D$ is dense in some right neighborhood of $0$ but does not contain
entirely any of them, can something more precise be said about the complexity
of $\isom_A^\star$?
\end{question}
Since for such $A$ we have that $\V^\star_A$ consists only of
zero-dimensional spaces by Theorem \ref{proponlyultrametric}(1), this
question is linked to one of the main questions of \cite{Gao2003} which is
still open (this question is discussed in more depth in \cite{ultrametric},
where it is labeled Question 7.1).
\begin{question}
What is the complexity of isometry between zero-dimensional Polish metric
spaces?
\end{question}
A step to get some insight into Question \ref{qA} would be to answer the
following:
\begin{question}
Which $A$ and $A'$ as in Question \ref{qA} are such that ${\isom_A^\star}
\leq_B {\isom_{A'}^\star}$?
\end{question}
This question has a positive answer for a given pair \( A,A' \) whenever
there exists $f \colon A \to A'$ which is injective and Polish metric
preserving. In fact, arguing as in the proof of case (iii) of
Theorem~\ref{thm:isomA} one can build a witness for ${\isom_A^\star} \leq_B
{\isom_{f(A)}^\star}$ by changing the distances of the spaces using $f$, and,
if $f(A) \subsetneq A'$, successively applying
Proposition~\ref{questionclemens} to obtain ${\isom_{f(A)}^\star} \leq_B
{\isom_{A'}^\star}$. For example, in this way one sees that ${\isom_{
\QQ^+}^\star} \leq_B {\isom_{(\RR^+ \setminus \QQ) \cup \{0\}}^\star}$ via
the map $f(q) = \sqrt2\, q$.
Another problem related to Question \ref{qA} is the following:
\begin{question}
Is it the case that ${\isom_A} \sim_B {\isom^\star_A}$ for every $A \in \D$?
\end{question}
Theorem \ref{thm:isomA} and the results following it give a positive answer
to this question under a wide range of hypotheses on $A$, but we do not know
if there are distance sets which do not satisfy any of those hypotheses.
\bibliographystyle{alpha}
\section{Introduction}
\label{s:intro}
In a Universe of ever-growing chemical complexity, local metal-poor massive stars
represent a strong link to the past.
They hold the key to interpreting medium- to high-redshift starburst galaxies, supernovae and
$\gamma$-ray bursts,
and are fundamental ingredients for simulating the evolution of galaxies.
They are likely involved in the formation of stellar-size black hole binaries
whose collapse we are now able to detect \textit{via} gravitational waves.
And finally, they set a proxy to the physics of the very massive, metal-free first stars.
The dwarf irregular galaxy Sextans~A \citep[$\rm 10^h 11^m 00.^s8 \, -04^d41^m34^s$, dIrr, \textit{aka} DDO~75][]{McC12}
is interesting in this context
because of its very poor metal content.
The abundances derived from young \ion{H}{II} regions range 12+log(O/H)= 7.49--7.71 \citep{SKH89a,Pi01,Kal05,MLC05}
and the stellar abundances of $\alpha-$ and Fe--group
elements are similarly low ($[Fe/H], [Cr/H], [Mg/H] \sim -1$; \citealt{KVal04}).
Its poor metal content is also supported by the rather flat UV--continuum
of OB-type stars \citep{Gal17}.
Sextans~A is hence remarkable because
its population is within the grasp of 8-m telescopes \citep[1.33~Mpc,][]{TRS11}
and its $\lesssim$1/10$\rm \, Z_{\odot}$~ metallicity is lower than
all other Local Group dwarf galaxies targeted by
studies of massive stars: IC~1613, WLM, NGC~3109, NGC~6822, IC~10 and the Magellanic Clouds.
Sextans~A has an intriguing squared shape with spectacular bubbles and structures
of ionized hydrogen that evince the presence of hot massive stars.
Ongoing star formation has been detected in the regions
A, B and C marked in Fig.~\ref{F:chart} \citep{vDPW98,DPal02}.
Region--A would be the oldest one, with an age of 400 million years (Myr),
and is about to exhaust the local gas content and halt star formation.
Region--B overlaps with a vast mass of \ion{H}{I}
and region--C seems to follow the ridge of another over-density of \ion{H}{I}.
They have been forming stars for the past 200~Myr~ and 20~Myr~ respectively.
Our team confirmed, for the first time, the presence of blue
massive stars in Sextans~A
with long-slit spectroscopic observations \citep{Cal16}.
In parallel, \citet{BBM14,BBM15} conducted a successful search of
red supergiant (RSG) candidates from \textit{Spitzer} photometry
that were subsequently confirmed by spectroscopy.
Massive stars in both OB-type and RSG flavours were found in the three regions of
star formation, and \citet{Cal16} reported
stars as young as 4~Myr~ in regions--B and --C.
We concluded that even though region--C may have been activated later in galactic history, region--B
has been more prolific in forming stars and has sustained star formation until
the present day.
This example illustrates the potential of synergies between spectroscopic surveys
to unveil and characterize massive stars in star-forming galaxies,
with studies of the galaxies themselves.
O-stars and B-supergiants are
H- and very early He-burning massive stars,
younger than $\sim$30~Myr \citep[e.g.][]{M13}.
They pinpoint star formation in both space and time
and can inform current efforts
to understand the nature and fuel of star formation in dwarf galaxies.
On the one hand, studies based on resolved colour--magnitude diagrams and FUV knots indicate
that star formation can proceed despite the low density
of the neutral-gas phase \citep{McQ12b,Hunt16}.
On the other hand, an important part of the puzzle is missing because
the combination of far distance and low metallicity
prevents the detection of molecular gas in many of these systems.
As a consequence, fundamental questions remain open such as
the apparently more prominent role of \ion{H}{I} over \textsc{H}$\rm _2$~ in star formation in this regime
\citep[e.g.][]{HPBB13},
whether the mechanisms to form stars differ from higher
density environments, and whether these
would translate into a different sampling or slope of the initial mass function.
This paper presents first results of our multi-object spectroscopic programme in Sextans~A,
designed to produce
the first large sample of resolved, sub-SMC metallicity massive stars.
We focus on a group of very young O-stars that the observations unveiled at the outskirts,
far from the previously considered sites of star formation.
Data and reduction are described in Sect.~\ref{s:redu}.
The new stars and their spectral types are discussed in Sect.~\ref{s:SpT},
and the colour--magnitude diagram of the new star formation region in Sect.~\ref{s:CMD}.
Sect.~\ref{s:discu} explores the implications of the detected stars
in the context of the initial mass function and star formation studies.
Finally, summary and conclusions are provided in
Sect.~\ref{s:sum}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{./F_SA.ps}
\caption{Sextans~A, $H \, {\alpha}$~ narrow-band image by \citet{PMal07}.
The regions where star formation had been previously registered are
marked A, B and C, and region--D is newly reported in this paper.
Orange squares mark the location of the O-type stars unveiled
by our spectroscopic run s1--s4, star OB221 from \citet{Cal16},
and the RSG BBM15-9 from \citet{BBM15}.
\label{F:chart}
}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{./F_spec_noreb_vfts.ps}
\caption{\textit{GTC--OSIRIS} spectra of the discovered O-type stars in Sextans~A.
Their signal-to-noise ratio is SNR=30--40.
All the spectra exhibit one or more spectral lines of \ion{He}{II}.
The Balmer series is clearly seen in absorption in the spectra of s3 and s4,
but an incomplete nebular subtraction precludes the identification of the stellar
component for stars s1 and s2.
To aid the classification of s2 the top plot includes the \textit{VLT--FLAMES} spectrum of the LMC star VFTS-586,
degraded to match the spectral quality of our \textit{OSIRIS} run (R=1000 and SNR=40).
}
\label{F:sp}
\end{figure*}
\section{Observations, data reduction and preliminary analysis}
\label{s:redu}
Data were taken as part of our guaranteed time
programme GTC3-14AGOS, PI A. Herrero.
The observations consisted of mask multi-object spectroscopy (MOS)
with the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (\textit{OSIRIS})
installed at the 10.4-m Gran Telescopio Canarias (\textit{GTC}).
The combination of 1.2~arcsec wide slits and the R2000B VPH
granted resolution $R \sim 1000$~ in the $\sim$4000--5500\AA~ range,
although the actual spectral coverage depends on the slit location within the mask.
The programme was granted 13 hours of gray sky, $<$1.2~arcsec seeing conditions, broken into 1 hour long observing blocks
to accumulate exposure time.
Targets were selected from their optical $ \left(U-B \right)$~ and $Q$=$ \left(U-B \right) - 0.72 \cdot \left(B-V \right)$~ colours,
and ultraviolet \textit{GALEX} photometry,
following the criteria described in \citet{GH13a,Cal16}.
UV sources with $V \leq 21$~ and $Q \leq $--0.8
in \citet{PMal07}'s catalogue were assigned top priority.
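For concreteness, the colour-based pre-selection can be sketched as follows. This is only an illustration: the function names are ours and the example magnitudes are invented, not values from \citet{PMal07}'s catalogue.

```python
# Sketch of the photometric pre-selection described above.
# The Johnson Q index is reddening-free for a standard extinction law,
# so the cut Q <= -0.8 (together with V <= 21) isolates candidate OB stars.

def q_index(u_b, b_v):
    """Reddening-free Johnson Q parameter: Q = (U-B) - 0.72*(B-V)."""
    return u_b - 0.72 * b_v

def is_priority_target(v, u_b, b_v, v_lim=21.0, q_lim=-0.8):
    """Top-priority selection cut used for the MOS mask."""
    return v <= v_lim and q_index(u_b, b_v) <= q_lim

# Hypothetical blue star with (U-B) = -1.0, (B-V) = -0.2 and V = 20.5:
print(q_index(-1.0, -0.2))                 # Q ~ -0.86, bluer than the cut
print(is_priority_target(20.5, -1.0, -0.2))
```

Note that, being built from a colour difference, Q is insensitive to (standard-law) reddening, which is why it recovers obscured stars that plain colour--magnitude cuts miss (see Sect.~\ref{s:CMD}).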
The data were reduced with the \textsc{gtcmos}
\textsc{iraf}\footnote{
\textsc{iraf} is distributed by the National Optical Astronomy Observatory, operated by the
Association of Universities for Research in Astronomy (AURA) under agreement with the National Science Foundation.
}
pipeline developed by Divakara Mayya \citep{GMR16}.
The script transforms the raw CCD images into mosaics, performs
the bias correction and wavelength calibration.
It delivered 13 sky-subtracted, wavelength
calibrated 2-dimensional spectral images, one per observing block.
The spectra were then extracted with our semi-automatic \textsc{iraf} script
\citep{GH13a} that also performed a second sky subtraction,
and set the spectra to the heliocentric
standard of rest.
The observations did not capture any spectroscopic binary,
as no radial velocity variations were detected among the
13 individual spectra extracted for a given star.
Finally, the individual contributions were coadded
weighting by their signal-to-noise ratio (SNR), and normalized.
We assigned spectral types following classical criteria, namely
the relative strength of the
lines of different ionization stages of helium
(\ion{He}{I} \textit{vs}~ \ion{He}{II}) and silicon (mostly \ion{Si}{III}~ \textit{vs}~ \ion{Si}{IV}).
Luminosity class was constrained both by the width of the Balmer lines
and whether \ion{He}{II}4686 is in absorption/emission,
although we note that the latter may be affected by the
low-metallicity of the targets and their weaker winds.
The spectra of the targets subject to this paper are shown in Fig.~\ref{F:sp}
and a detailed discussion of their classification is provided
in the next Section.
\section{A new region of star formation in the South: region--D}
\label{s:SpT}
The spectral classification of the MOS observations revealed the presence of three young O-type dwarf stars
and one O-type bright giant in the South of the galaxy (see Fig.~\ref{F:chart}).
Two of them have the earliest, youngest, spectral types reported in Sextans~A
and it was striking to find them far from the seemingly more active regions B and C.
None the less, three of the O--stars are
located near
a faint rim of ionized hydrogen,
a B2.5~supergiant detected by our previous long-slit programme, OB221,
and RSG number \#9 from \citet[][BBM15-9]{BBM15}.
The area, which we will name region--D from now on, had gone unnoticed by
previous studies targeting the youngest population \citep[e.g.][]{BEH12}.
This is not surprising since region--D is more sparsely populated than regions B and C,
and lacks the classical signs of active star formation: intense UV emission and complex $H \, {\alpha}$~ structures.
Region--D's inconspicuous appearance could either evince a different star formation regime (Sect.~\ref{ss:SF})
or reflect an incomplete accounting of a heavily reddened population.
In fact, \textit{Herschel} has detected significant amounts of dust in several
locations of the galaxy, some of them close to region--D \citep{SAH14}.
Our own results indicate a non-negligible amount of patchy reddening in the area (Sect.~\ref{s:CMD}).
The membership of the newly discovered
region--D O--stars to the galaxy is supported by the absolute magnitudes
estimated from spectral types and observed photometry.
Their radial velocities are also consistent with \citet{Sal88}'s radial velocity curve
\citep[contrast against e.g. Fig.~3 from ][]{Cal16},
and with the central \ion{H}{I} velocity $v_{cen}$=324.8~km/s \citep{Oal12}.
These data are listed in Table~\ref{T:phot},
together with identification tags and photometry by \citet{PMal07}.
\input{./star_phot.tex}
\subsection{Comments on targets}
\textbf{s1 (O9.5~II):}
The spectrum lacks strong Balmer lines due to nebular contamination,
and the most clearly detected features belong to \ion{He}{I}.
It shows \ion{He}{II}~4686 but no \ion{He}{II}~4542,
and the presence of the \ion{Si}{IV}~4089 and \ion{Si}{IV}~4116 lines cannot be
assessed because of the extremely poor SNR at $\lesssim$ 4100\AA.
The \ion{Si}{III}~4552 triplet is present but no \ion{Mg}{II}~4481 is observed,
suggesting O9.5 type.
The weak \ion{He}{II}~4686 absorption and the \ion{He}{II}~4686/\ion{He}{I}~4713 ratio
indicate luminosity class II.
s1 is the farthest sample star from the centre of the galaxy and it is
located in a poorly populated region.
It is puzzling that it shows strong nebular contamination considering that
no extended structure is seen in $H \, {\alpha}$~ imaging,
hence ionization must be local
and could be circumstellar.
We note that it experiences enhanced reddening compared to the other sample stars
(compare e.g. against s3: both have similar spectral type and V--mag$\sim$20.8,
but different colours and luminosity class).
Both pieces of evidence are consistent with s1 located well within the \ion{H}{I} cloud (see Sect.~\ref{ss:SF})
which, in turn,
is additional proof of its Sextans~A membership.
In fact, the neutral hydrogen column--density that would be inferred from
s1's colour excess $ E \left( B-V \right)$=0.251 using \citet{BSD78}'s relations is $N_{HI} $=$ 1.2 \cdot 10^{21} {\rm cm}^{-2}$,
of the order of the values indicated at its location by the \ion{H}{I} maps \citep{Oal12}.
The presence of s1 is also suggestive that star formation is ongoing in
this region but is undetected because of enhanced extinction.
\textit{Spitzer} does detect additional sources in the surroundings of s1 but
without a thorough analysis no conclusion can be drawn on their ages.
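The column-density estimate quoted above can be reproduced with a one-line computation. This sketch assumes the mean Galactic \ion{H}{I} gas-to-dust ratio of \citet{BSD78}, $N_{HI}/E \left( B-V \right) \sim 4.8 \times 10^{21}\,{\rm cm^{-2}\,mag^{-1}}$; applying a Milky Way calibration to Sextans~A is of course only indicative.

```python
# Order-of-magnitude check of the column density quoted for s1,
# using the mean Galactic HI gas-to-dust ratio of Bohlin, Savage &
# Drake (1978). A Milky Way ratio is only indicative at the low
# metallicity of Sextans A.

GAS_TO_DUST = 4.8e21  # cm^-2 mag^-1 (BSD78, HI only)

def nhi_from_ebv(ebv):
    """Neutral-hydrogen column density implied by a colour excess."""
    return GAS_TO_DUST * ebv

n_hi = nhi_from_ebv(0.251)          # E(B-V) of s1
print(f"N_HI = {n_hi:.2e} cm^-2")   # ~1.2e21, as quoted in the text
```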
\textbf{s2 (O3--O5~Vz)}
shows broad \ion{He}{II}~4542 and \ion{He}{II}~4686 lines,
and high ionization transitions of silicon,
\ion{Si}{IV}~4089 and \ion{Si}{IV}~4116, whereas the \ion{Si}{III}~4552 triplet is absent.
The lack of \ion{He}{I}~4471 suggests very early spectral type O3
but the combination of nebular contamination, clearly present on the Balmer series,
and poor SNR may be preventing the detection of this line.
A conservative O3--O5 spectral type is assigned.
Because \ion{He}{II}~4542 is weaker than \ion{He}{II}~4686
we assigned Vz luminosity class \citep[e.g.][]{SSal14}.
The spectrum of s2 is reminiscent of star VFTS-586 (O4~V((n))((fc))z) from the Large Magellanic Cloud (LMC),
which supports its O3--O5~Vz classification.
VFTS-586's high resolution, high SNR \textit{VLT--FLAMES} spectrum shows strong \ion{He}{II} absorptions,
but only narrow nebular emissions at the \ion{He}{I} transitions
\citep[see e.g.][]{SSSD17}.
In order to match the spectral quality of our observing run, we
degraded the \textit{FLAMES} data to R=1000 and SNR=40.
The overall spectral morphology of VFTS-586 now resembles s2,
both showing \ion{He}{II}~4542 and \ion{He}{II}~4686 as the most
prominent stellar features (Fig.~\ref{F:sp}).
\textbf{s3 (O9~V)}
shows weak spectral lines of \ion{He}{II}.
The \ion{He}{II}~4542/\ion{He}{I}~4471 ratio is compatible with spectral type O9--O9.7
and, because the \ion{Si}{III}~4552 triplet is not detected, O9 spectral type is assigned.
The \ion{He}{II}~4686 absorption is more intense than \ion{He}{II}~4542 and \ion{He}{I}~4713,
which indicates luminosity class III--V. Since Balmer lines are broad,
luminosity class V is adopted.
However, we note that the absolute magnitude \mbox{$M_V$}=--5.02 is closer to the
calibrated values for O9~III stars (\mbox{$M_V$}=--5.1) than O9~V (\mbox{$M_V$}=--4.4).
\textbf{s4 (O6~Vz):}
The \ion{He}{II} lines are strong in absorption, with \ion{He}{II}~4542 slightly
stronger than \ion{He}{I}~4471, indicating O6 type.
The \ion{He}{II}~4200/\ion{He}{I+II}~4026 ratio is consistent with this classification.
Because \ion{He}{II}~4686 is stronger than \ion{He}{II}~4542 and \ion{He}{I}~4471
the assigned luminosity class is Vz.
The absolute magnitude is slightly under-luminous compared
with the calibrated value from Milky Way stars \mbox{$M_V$}=--5.2.
However, we note that Evans et al. (submitted) have reported that
massive stars in very metal-poor environments
may be up to 0.5~mag fainter than Galactic analogues with the same spectral type.
Two out of the three O~dwarfs reported by this paper have the Vz qualifier,
in line with the trend of increased Vz/V ratios expected
in metal-poorer environments.
\citet{SSal14} used an extensive grid of synthetic models
to study the combination of stellar parameters that could produce the
Vz morphological signature (\ion{He}{II}~4686 absorption stronger than both \ion{He}{II}~4542 and \ion{He}{I}~4471).
They concluded that weak winds are needed to reproduce the Vz characteristics at \mbox{$T_{\rm eff}$}$\gtrsim$35000~K,
whereas at lower temperatures no combination of stellar parameters would produce them.
The fact that the earliest O~dwarfs of our sample have the qualifier Vz
is consistent with this result
and also suggests that the O~Vz stars reflect the weak winds expected at the low-metallicity of Sextans~A.
\section{Colour--magnitude diagram}
\label{s:CMD}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./CMD.ps}
\caption{
Colour--magnitude diagram of Sextans~A.
\citet{PMal07}'s catalogue for the full galaxy is plotted in black, with region--B stars highlighted in violet
and region--D stars in blue. Error bars are omitted for clarity but are shown in Fig.~\ref{F:CMDz}.
The figure also includes Z=0.001 isochrones by \citet{LJ01}, shifted to account for the distance modulus DM=25.63
and foreground extinction $ E \left( B-V \right)_{fg}$=0.044 of Sextans~A \citep[][]{TRS11}.
Their ${\rm log} \left( age \right)$ is colour--coded as indicated in the legend;
younger isochrones are not included because they would overlap with the ${\rm log} \left( age \right)$=6.19~dex one.
Red squares mark the observed magnitudes of the programme stars and BBM15-9.
We also applied an additional reddening correction to the OB-type stars
from tabulated intrinsic colours
and $\rm R_V $=3.1 (black squares).
}
\label{F:CMD}
\end{figure}
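The shifts applied to the isochrones in Fig.~\ref{F:CMD} follow from the adopted distance and foreground reddening alone; a minimal numerical check, using only the values quoted in the text ($d$=1.33~Mpc, $ E \left( B-V \right)_{fg}$=0.044, $\rm R_V $=3.1), is:

```python
import math

# How the isochrones are shifted to the observed frame:
# m = M + DM + A_V, with DM = 5*log10(d / 10 pc) and A_V = R_V * E(B-V).

def distance_modulus(d_pc):
    return 5.0 * math.log10(d_pc) - 5.0

def apparent_mag(abs_mag, d_pc, ebv, r_v=3.1):
    return abs_mag + distance_modulus(d_pc) + r_v * ebv

dm = distance_modulus(1.33e6)
print(f"DM = {dm:.2f}")                  # ~25.62, consistent with the adopted 25.63
print(f"A_V(fg) = {3.1 * 0.044:.2f} mag")  # ~0.14 mag of foreground extinction
```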
The colour--magnitude diagram (CMD) of Sextans~A has been
previously analysed in the literature to decipher the galactic
star formation history (see e.g. Introduction section).
We focus now on the new region
of star formation defined by s1--s4,
that also includes the B2.5 supergiant
OB221
and the RSG BBM15-9.
To guide the discussion we will use as reference region--B,
that exhibits more conspicuous signs of star formation,
and has been previously reported to host young stars.
Fig.~\ref{F:CMD} shows Sextans~A's full CMD
built with \citet{PMal07}'s catalogue,
with region--D stars highlighted in blue
and region--B stars in violet.
Most of the stars of both region--B and --D are located in Sextans~A blue plume,
with additional stars with intermediate colours and in the area of red giants/supergiants.
Region--B hosts a comparatively larger number of blue stars
that form a blue envelope to the bulk of the galaxy in the CMD.
Because of the smaller number of stars in region--D, its CMD is sparsely populated
and patchy.
None the less, it also hosts very blue stars and once
their relative numbers are taken into account,
regions--B and --D roughly overlap.
The spectral types of our zone--D sample stars also suggest that
both regions may be similarly young.
The programme stars s1--s4 are found in the bulk of the blue plume of region--D
(Fig.~\ref{F:CMD}).
BBM15-9 is found in the area of low-mass RSG ($\sim$9$\rm \, M_{\odot}$), but its location is not
reproduced by any of the isochrones. The discrepancy between evolutionary models and observations
of RSG is a known problem of the field \citep[e.g.][]{DKP13}.
s3 and s4 are among the bluest stars of the galaxy, but s1 and s2 are located
at the red edge of the blue-plume.
In sight of this diagram only OB221,
and perhaps s3 and s4, would have been selected as candidate blue massive stars.
The interpretation of the CMD radically changes when reddening is calculated towards each
individual line of sight.
We estimated extinction ($ E \left( B-V \right)$~ and $\rm A_V$) using the observed photometry
and intrinsic colours calibrated for the target's spectral type (listed in Table~\ref{T:phot})
and $\rm R_V $=3.1.
The so-called \textit{spectroscopic reddening} inherits the uncertainty of the spectral classification ($\pm$2 spectral sub-types) and of the tabulated photometry, but the degenerate colours of O-stars in the optical
range minimize the intrinsic error associated with $ \left(B-V \right)_0$.
The unknown value of $\rm R_V $, which varies with dust composition, may play a more prominent role.
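The spectroscopic reddening estimate amounts to two subtractions; the sketch below makes the arithmetic explicit. The input colours are placeholders for illustration, not the catalogue values of Table~\ref{T:phot}.

```python
# Sketch of the "spectroscopic reddening" estimate used above:
# the colour excess follows from the observed colour and the intrinsic
# colour tabulated for the spectral type, and A_V = R_V * E(B-V).

def colour_excess(bv_obs, bv_intrinsic):
    return bv_obs - bv_intrinsic

def a_v(ebv, r_v=3.1):
    return r_v * ebv

# Hypothetical O star: observed (B-V) = -0.05, intrinsic (B-V)_0 = -0.30
ebv = colour_excess(-0.05, -0.30)
print(f"E(B-V) = {ebv:.2f}, A_V = {a_v(ebv):.2f}")  # E(B-V) = 0.25, A_V ~ 0.78
```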
A close-up of the CMD is shown in Fig.~\ref{F:CMDz}
that now includes error bars accounting for
photometric errors, uncertainty in the distance modulus,
a conservative error for $\rm R_V $~ ($\Delta \rm R_V $=2),
and $\Delta E \left( B-V \right)_{fg}$=0.1 or $\Delta \left(B-V \right)_0$=0.05
depending on whether foreground or spectroscopic reddening is considered.
Targets s1 and s2 experienced the largest reddening correction and
are now located at the bluest extent of the blue plume.
Even when the error bars are considered,
the new reddening estimate results in a distinct location in the CMD.
In particular, the updated locus of s2
better matches its spectral type O3--5~Vz.
s1 seems to be much hotter than its assigned O9.5~II spectral type,
although a misclassification seems unlikely in sight of the \ion{He}{II} features.
Reddening would have severely impacted the derived ages for the sample stars.
We have included \citet{LJ01}'s isochrones for Z=0.001 stars ($\sim$ 0.05$\rm \, Z_{\odot}$) in Figs.~\ref{F:CMD}~and~\ref{F:CMDz}.
Without any further information on spectral type or reddening, the inferred age of s3 and s4 would have been
${\rm log} \left( age \right)$=6.19--6.80~dex,
and 7.40--7.69~dex for s1 and s2.
After the \textit{spectroscopic} reddening correction
s1 overlaps with the 6.80~dex isochrone ($\sim$ 6.3~Myr),
rendering a much younger age.
s2, s3 and s4 align around the ${\rm log} \left( age \right)$=6.19--6.5~dex
isochrones (1.5--3.2~Myr) and it would be tempting to consider them a coeval OB-association.
However, the population would subtend $\sim$ 25~arcsec (162~pc),
one order of magnitude too loose compared
to, e.g., the 17.5~pc typical sizes of associations in IC~1613 with 3 OB members \citep[Fig.7 from][]{GHC10}.
In sight of Fig.~\ref{F:CMD} all stars of region--D, including OB221, are younger than
the 7.09~dex isochrone (12.3~Myr).
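The age and size figures quoted in this section follow from two standard conversions, sketched below assuming the adopted distance of 1.33~Mpc.

```python
import math

# Unit conversions behind the numbers quoted in this section.

def dex_to_myr(log_age):
    """log(age/yr) in dex -> age in Myr."""
    return 10.0 ** log_age / 1.0e6

def arcsec_to_pc(theta_arcsec, d_pc=1.33e6):
    """Small-angle projected size at the distance of Sextans A."""
    return theta_arcsec / 206265.0 * d_pc

print(f"{dex_to_myr(6.19):.1f}-{dex_to_myr(6.5):.1f} Myr")  # ~1.5-3.2 Myr
print(f"{arcsec_to_pc(25.0):.0f} pc")                       # ~161 pc (~162 in the text)
```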
The large colour excess of both s1 and s2 also evinces that internal reddening in
Sextans~A is significant and non-uniform,
contrary to what is usually assumed for dwarf irregular galaxies.
Whether this is caused by a patchy distribution of extinction working
at the cluster scale, or by circumstellar structures surrounding s1 and s2,
cannot be discerned with current evidence.
\textit{Spitzer} imaging does not reveal clear point-sources at the
location of the stars, although this may be a sensitivity issue,
and \textit{Herschel} data lack the required spatial resolution.
Both s1 and s2 would have been missed by classical CMD cuts
looking for young, massive stars (but not by colour--cuts based
on the reddening-free Q parameter).
The CMD analysis would have yielded masses under 12$\rm \, M_{\odot}$~ for both stars,
while evolutionary tracks assign
25$\rm ^{\texttt{+} 15}_{\texttt{-}10}$$\rm \, M_{\odot}$~ and 40$\pm$15$\rm \, M_{\odot}$~
for the unreddened
locii of s1 and s2, taking into account the error bars (Fig.~\ref{F:CMDz}).
Likewise, the width of region--D's blue plume could be
caused by extinction rather than differences in stellar masses or ages,
rendering the mass of this population heavily underestimated
if internal reddening is neglected.
Integral field spectroscopy covering completely region--D
would at the same time provide stellar masses of all the stars
in the region, and
a measure of extinction towards their line of sight.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./CMD_zoom.ps}
\caption{
Same as Fig.~\ref{F:CMD}, zooming into the main sequence
and including evolutionary tracks and error bars.
s2 is on the track of 40$\rm \, M_{\odot}$, s4 and s3 between 25--40$\rm \, M_{\odot}$, and s1 at 25$\rm \, M_{\odot}$.
}
\label{F:CMDz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./WFPC2_circulos.ps}
\caption{
\textit{HST-WFPC2-F555W} observations of Sextans~A (programme U2X50205T, PI E. D. Skillman), zooming into s2 (bottom), s3 (middle) and s4 (top).
s1 is not covered by these or any \textit{HST} observations.
North is up and East to the left. The circles have 0.77~arcsec and 1.54~arcsec radii,
corresponding to 5~pc and 10~pc. The stars are isolated except for very faint targets around s4.
}
\label{F:HST}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./WFC3_circulos.ps}
\caption{
\textit{HST-WFC3-F127M} IR imaging (programme icyj11010, PI M. Boyer) around s2--s4 (from bottom to top).
North is up and East to the left, and the circles have 0.77~arcsec and 1.54~arcsec radii.
There are two IR-bright stars near s4, but
s3 and the very early s2 are isolated.
}
\label{F:HSTIR}
\end{figure}
\section{Discussion}
\label{s:discu}
With ages $\lesssim$~ 10~Myr, the stars reported in this
paper provide both spatially and temporally resolved
information on Sextans~A's present day star formation.
In this section we explore what can be learnt from the
location and ages of the stars.
\subsection{Are s1--s4 isolated massive stars? Implications on the initial mass function}
How massive stars form is a long-standing debate
with two main scenarios:
competitive accretion, in which massive stars are formed in the same
gravitational well of the whole cluster \citep[e.g.][]{BBC97} and possibly
leading to mergers \citep{Sal12},
and monolithic collapse, in which one single star is
formed from the collapse of one single cloud \citep[e.g.][]{Kral09}.
The occurrence of massive stars in
isolation is a natural consequence of the latter,
and some examples have been reported
in the SMC \citep{LOW10} and
in the nearby starburst 30~Doradus \citep{BBE12}.
Besides the lack of nearby stars, \citet{BBE12}
set a number of conditions for the stars to be considered
isolated, including,
$v_{\rm rad}$~ constraints to minimize the chances that the stars
are runaway or binaries,
and the presence of gaseous filaments that
could host star formation and minimize the incidence
of runaways in the plane of the sky.
The lack of a host cluster or OB-association was enforced
in a length-scale of 5~pc, since the
gravitational perturbation of star
formation at longer distances would be negligible
to the forming star.
Lacking multi-epoch observations and high-resolution spectroscopy
the possibility that s1--s4 are binaries or runaway stars cannot be fully discarded,
but their radial velocities are consistent with the $v_{\rm rad}$-curve of the galaxy \citep{Sal88,Cal16}
reducing that possibility. We note that the stars are too faint
to have reliable proper motions registered in \textit{GAIA}-DR2.
Likewise, at the distance of Sextans~A it is not possible
to identify local filaments of gas although
there is a reservoir of neutral hydrogen at their location (see next Section)
and a faint \ion{H}{II} structure encloses s2, s3 and s4.
Nevertheless, we can check whether the stars are spatially isolated or are part
of a cluster of fainter stars.
We first checked their coordinates against \citet{BBF14}'s list of compact clusters in Sextans~A.
We then examined the archive of the Hubble Space Telescope (\textit{HST}),
looking for observations that would cover them.
Fig.~\ref{F:HST} shows a \textit{WFPC2-F555W} image with s2--s4
enclosed in a r=5~pc (0.77~arcsec) circle each
to look for a host population,
and circumscribed by a r=10~pc (1.54~arcsec) circle
as a control field.
At the depth of the \textit{HST} observations,
none of the stars has a similarly bright nearby source within r=5~pc.
There are very faint nearby stars, but none of them bright enough to be registered even
in \citet{BBF14}'s deep photometric catalogue.
No sources are detected near OB221, and BBM15-9 seems to have a faint, very nearby source to the North.
s1 is at the outskirts of the galaxy and has not been covered by any \textit{HST} observations.
It looks isolated in the ground-based optical and IR images, but
both lack spatial resolution to provide meaningful information for this discussion.
We argue in Sects.~\ref{s:SpT}~and~\ref{ss:SF} that at least 2 of the sample stars are embedded
in neutral hydrogen, and that internal reddening is significant in Sextans~A,
hence it is plausible that the lack of detection of additional stars could be caused by extinction.
\textit{Spitzer} imaging does not reveal a significant dust-enshrouded population in the area
although this could be a matter of sensitivity.
Finally, we examined \textit{HST} near-IR observations
taken with \textit{WFC3-F127M} \citep{BMcQ17}.
In this image there is evidence of a small cluster around
OB221, and no detection of the faint source close to BBM15-9, which otherwise seems isolated.
s2--s4 are shown in Fig.~\ref{F:HSTIR}:
there are two IR sources near s4
but no embedded cluster is detected near the target stars and, most importantly,
s3 and the very early s2 are isolated.
Following \citet{LOW10} and \citet{BBE12}, we used
the relation
between the mass of the cluster and the highest stellar mass
$ M_{max} - M_{cl}$~ from \citet{WKB10,WKPA13}
to estimate the mass that a hypothetical cluster hosting s2 (the most massive star of the sample) would have.
\citet{MSH05}'s calibration assigns 47$\rm \, M_{\odot}$~ to its spectral type O3--5~Vz.
According to \citet{WKPA13}'s analytical $ M_{max} - M_{cl}$~ relation
the host cluster should have a total mass of $M_{cl} \sim $ 3900$\rm \, M_{\odot}$~ and it should have been detected
\citep[e.g. compare against the typical sizes of OB-associations in IC~1613,][]{GHC10}.
If instead we used the lower-limit we derived for s2 mass from the CMD analysis
(Sect.~\ref{s:CMD}), the host cluster would be much smaller (725$\rm \, M_{\odot}$) but still detectable.
We considered the final possibility that s2 was the product of a stellar merger within a cluster that then would become
an outlier from the $ M_{max} - M_{cl}$~ relation \citep[][]{OK18}.
In the most favourable case, this would require 2 stars of 23$\rm \, M_{\odot}$~ each ($ M_{max}$), for which a cluster of
at least 600$\rm \, M_{\odot}$~ is needed.
Such a cluster should have also been detected.
Moreover, the occurrence of the merger is very unlikely if we consider that
only 8 per cent of the clusters simulated by \citet{OK18} are capable
of producing stars with $ 2 \mathbf{\cdot} M_{max}$~ mass via this mechanism.
Current evidence thus indicates that at least
s2, s3 and perhaps BBM15-9
have formed in isolation
and suggests that monolithic collapse is at work.
In this scenario, star formation does not need to meet the $ M_{max} - M_{cl}$~ relation
and the initial mass function (IMF) can be populated randomly \citep[see discussion by ][]{BBE12}.
The stochastic sampling of the IMF has already been proposed to
explain that the linear correlation
between star formation rate (SFR) measured from the UV and H$\alpha$~
breaks down in low-mass, low-density galaxies
similar to Sextans~A \citep{TMcNN16}.
If low-mass environments can stochastically populate the initial mass function,
then we could find very massive stars in Sextans~A and
other metal-poor, non-starbursty dwarf irregular galaxies of the Local Group.
Spectroscopically confirmed massive stars have the advantage
of providing precise masses to build the IMF.
A systematic spectroscopic search of massive stars, targeting galaxies
of decreasing metallicity and varying gas masses could shed important light
on to the gas fragmentation properties and
the IMF sampling of these systems.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./F_HI.ps}
\caption{Sextans~A, \textit{LITTLE THINGS} neutral hydrogen map \citep{HFA12}
overlaid on optical V--band observations \citep{PMal07}.
The OB-type stars of region--D are highlighted with orange squares.
We note the comparatively lower, but non-negligible, gas density
at their location.
BBM15-9 (red square) is on the edge of the \ion{H}{I} distribution.
}
\label{F:chart2}
\end{figure}
\subsection{The mechanisms driving star formation in dwarf galaxies}
\label{ss:SF}
The fact that the earliest O-stars reported to date in Sextans~A
have been found in the outskirts of the galaxy,
far from its giant \ion{H}{II} shells and without a supporting cluster, was unexpected.
This section discusses whether it is extraordinary that Sextans~A sustains star formation in the outer regions
and what mechanisms could be driving it.
The first radio observations of Sextans~A showed that
the main stellar body is enclosed within an \ion{H}{I} cavity,
with the youngest regions--B and --C
located just at the cavity rim or its inner edge.
\citet{vDPW98} proposed that 50~Myr~ ago star formation began at the centre of the galaxy
and the cavity was produced by the ensuing supernova explosions
which in turn would induce new star formation at the shocked
outer boundary.
\citet{DPal02} found older stars in the \ion{H}{I} rim, hence arguing against the outwards propagation of star formation,
but proposed that star formation was confined in the cavity edge.
Higher sensitivity \textit{VLA} data \citep{HFA12,Oal12}
showed that the central part of the galaxy is not totally devoid of gas
($N_{HI} \, \sim 0.5 - 1 \mathbf{\cdot} 10^{21} \, {\rm cm}^{-2}$),
and that \ion{H}{I} rotates as a solid body
with no clear signature of inside--out motion \citep{Oal12,BBF14}.
These observations not only refute the multiple SNe scenario
but also open the possibility of star formation occurring
at additional sites other than the over-densities that delimited
the alleged cavity.
Unfortunately, star formation cannot be traced through
molecular gas in Sextans~A and
most of the sub-SMC metallicity dwarf galaxies of the Local Group.
They are too far to provide a direct measurement
of the weak signature of cold \textsc{H}$\rm _2$, and their low metallicity prevents using the \textsc{CO} molecule as a proxy.
The situation may improve with \textit{ALMA}, but
at the moment there is only a marginal detection of \textsc{CO} in Sextans~A with \textit{IRAM} \citep{SWZ15}.
A tantalizing alternative is that star formation could proceed directly from \ion{H}{I}
as it has been shown possible at extremely low-metallicities from a theoretical perspective \citep{Kr12}.
This mechanism has also been proposed to explain that a fraction of $z < $0.12 GRBs
are hosted by low-metallicity,
\ion{H}{I}-rich but \textsc{H}$\rm _2$-deficient dwarf galaxies: the galaxy would accrete pristine, cold neutral gas
that could directly feed star formation \citep{MGH15}.
Whether acting as fuel or as proxy of molecular gas,
neutral hydrogen seems a good tracer of star formation in low-density galaxies.
We are finding that the location
of massive stars and \ion{H}{I} is always related.
In IC~1613, as in Sextans~A, O-stars and OB-associations are either found
overlapping the highest
concentrations of \ion{H}{I}, or on the ridges of \ion{H}{I} clouds \citep{GHC10}.
A similar link
has also been found by \ion{H}{I} surveys on
a significant fraction of star-forming dwarf galaxies \citep[e.g.][]{TMcNN16,HAE18}
and \citet{HPBB13} proposed that \ion{H}{I} is the dominant phase that regulates star
formation in these systems.
Our sample stars are located in a region that contains \ion{H}{I} (see Fig.~\ref{F:chart2}).
The column density is low $N_{HI} \, \sim 0.5 - 1 \mathbf{\cdot} 10^{21} \, {\rm cm}^{-2}$~ \citep{Oal12}
but close to \citet{S87}'s $1 \mathbf{\cdot} 10^{21} \, {\rm cm}^{-2}$~ threshold for star formation,
and of the order of the specific value for
low-density regions of metal-poor dIrr galaxies proposed by \citet[][1$\rm \, M_{\odot} \, pc^{-2}$]{Hunt16}.
The detection of the O--stars s1--s4 proves that Sextans~A is forming stars in a region with a very
low concentration of gas,
and provides spectroscopic confirmation to similar
findings in the dwarf galaxies targeted by the \textit{LITTLE THINGS} and \textit{SHIELD} \ion{H}{I} surveys \citep{Hunt16,TMcNN16}.
\citet{McQ12b} found that
star formation does not seem spatially concentrated in dwarf galaxies
and that the degree of concentration does not correlate with the peak SFR.
In this context it is plausible that different mechanisms of star formation
work incoherently at a number of galactic locations,
including the low gas density areas.
In particular, we suggest that star formation in Sextans~A is not driven by molecular
cloud collapse or SNe collect and collapse only.
These mechanisms could be at work in regions--B and --C,
where the \ion{H}{II} shells hint at wind or SN expansion (Fig.~\ref{F:chart})
and localized intense Far-IR emission evinces dust and high concentrations
of molecular gas \citep{SAH14}.
At low density \ion{H}{I} reservoirs like region--D, or at the ridges of \ion{H}{I} distributions, we propose that
internal instabilities or turbulence will break down
the neutral gas clouds and proceed directly to
star formation bypassing the molecular gas phase \citep{Kr12}.
This scenario would be favoured by an irregular or clumpy \ion{H}{I} geometry,
which is indeed detected in the automatic morphological study
of the \textit{LITTLE THINGS} and \textit{VLA--ANGST} sample by \citet{HPBB13}.
A similar star formation mechanism could also act at the outskirts
of extended UV--disc galaxies,
a class
defined by
\textit{GALEX} FUV emission beyond $\rm 3-5 \, D_{25}$~ that
signals star formation in the extremely low-density outer disc
\citep{GdP05,TBB05}.
In fact,
the location of s1 at $\sim D_{25}$~ \citep[$R_{25}$=2.9~arcmin,][]{McC12}
and other more remote UV sources make Sextans~A reminiscent of this type of galaxies.
Similarly to what we have detected in Sextans~A, extended UV--disc galaxies are embedded
in large \ion{H}{I} reservoirs, and IR observations do not reveal
an underlying population of low mass or old stars at the sites of
the FUV complexes \citep[see review by][]{B18}.
Finally, an interesting follow-up question is whether star formation in different
gas-density environments populates the IMF distinctly, or imposes a different slope.
We will be able to provide some information on this point
once our spectroscopic survey in Sextans~A is complete.
\section{Summary and concluding remarks}
\label{s:sum}
This paper reports the spectroscopic confirmation of massive stars
at the outskirts of Sextans~A.
s2 and s4 are the earliest, most massive, resolved stars confirmed by spectroscopy
in a galaxy with metal content $\lesssim$1/10$\rm \, Z_{\odot}$.
Massive stars have
been found in the more metal-poor galaxies
SagDIG \citep{G18} and Leo~P (Evans et al., submitted), but the comparatively poor
quality of those data prevented fine spectral typing,
resulting in poorly constrained masses and ages.
Our sample of stars in Sextans~A is of interest to the community of massive stars,
as new subjects to confront observations with the theoretical
predictions of stellar evolution and wind physics
in the metal-poor regime \citep{K02,Szal15}.
s1--s4
are only a few million years old, thus demonstrating that star formation is ongoing
in a region of comparatively decreased
stellar and gas density
\citep[$N_{HI} \, \sim 0.5 - 1 \mathbf{\cdot} 10^{21} \, {\rm cm}^{-2}$~
\textit{vs}
the galactic maximum $N_{HI} $=$ 6.1 \mathbf{\cdot} 10^{21} \, {\rm cm}^{-2}$,][]{Oal12}.
However, no direct or indirect signature of molecular
gas has been detected in the area \citep{SAH14,SWZ15}.
Together with the spatial correlation we are finding between
\ion{H}{I} and OB-stars in dwarf irregular galaxies,
this suggests that the neutral phase may be playing a fundamental
role in the process of star formation in low-density environments.
Considering the evidence at hand, two programme stars are isolated and
at least one of them lacks the supporting cluster required
to fully populate the IMF up to its 47$\rm \, M_{\odot}$.
Similar isolated massive stars have been found
in the Magellanic Clouds \citep{LOW10,BBE12}.
Our results prove that low-mass dwarf galaxies can not only sustain star formation
but also form very massive stars and, pending deeper IR observations,
they may do so through stochastic sampling of the IMF.
Whether this episode of star formation is inherent to the galaxy \citep{McQCD15}
or environment-induced \citep{DPal02,BBF14}
is left for future work.
This work puts forward new synergies between the communities studying
massive stars and dwarf galaxies.
Direct spectroscopic observations of massive stars, now at reach
in nearby (out to $\sim$1.4~Mpc) dwarf galaxies with 8--10-m telescopes,
provide ideal means
to study the mechanisms of star formation in these systems.
The joint study of complete, spectroscopic censuses of resolved massive stars,
together with detailed maps of neutral and molecular gas,
will help to establish the connection between star formation and \ion{H}{I},
unravel the relative roles played by molecular and neutral gas,
identify the mechanisms triggering star formation,
determine whether different mechanisms are at work in different sites of the galaxy,
and whether each of them can populate the IMF distinctly.
At the same time the censuses will enlarge the scarcely populated list of
confirmed massive
stars with metallicity 1/10$\rm \, Z_{\odot}$~ or poorer.
The spectroscopic census should be unbiased and complete for two reasons.
Firstly, this paper has demonstrated that massive stars can occur
far from the smoking-gun diagnostics of star formation: ionized gas shells,
intense UV emission and intense Far-IR dust emission.
\citet{McQ12b} arrived at a similar conclusion
after studying the galaxy-wide star formation histories of 20 starburst
dwarf galaxies.
Secondly, we have also shown that internal extinction is significant and uneven
in Sextans~A, similarly to IC~1613 \citep{GHV09} and SagDIG \citep{G18}.
The ensuing variable reddening severely
hampers the pre-identification of blue massive stars from classical colour--cuts in the CMD.
In this respect, we would like to remark that
an unknown number of massive stars may be missing when galactic mass is calculated
from photometry, and/or their masses may be underestimated
because of reddening.
As a consequence, the total galactic stellar mass
may be substantially underestimated, with implications
for the computation of the baryonic to dark matter ratios.
Our team is already embarked on a vast observational effort
to expose and study the population of massive stars
in Sextans~A using the multi-object spectrographs at the 10-m Gran Telescopio Canarias.
However, the project would greatly benefit from a wide-field
integral-field spectrometer that could comb the whole galaxy, thus avoiding selection biases,
while providing medium resolution spectroscopy at 4000--5000\AA~
to constrain stellar properties.
In this respect, Sextans~A is an ideal target for \textit{BlueMUSE},
an analogue of the powerful \textit{VLT--MUSE} instrument with blue spectral coverage,
currently at concept stage (Bacon et al. 2018, \textit{BlueMUSE} Science Case, Proposal submitted to ESO).
\section{Acknowledgements}
We would like to thank support from MINECO by means of
grants ESP2015-65597-C4-1-R, ESP2017-86582-C4-1-R, AYA2015-68012-C2-1 and SEV2015-0548,
and from the Gobierno de Canarias under project ProID2017010115.
This paper is based on observations made with the Gran Telescopio Canarias (programme ID GTC3-14AGOS)
installed in the Spanish Observatorio del Roque de los Muchachos
of the Instituto de Astrof\'{\i}sica de Canarias, on the island of La Palma.
The work has made use of the \textsc{gtcmos} pipeline for the reduction of the
\textit{GTC--OSIRIS} spectroscopic data for which we thank its author Divakara Mayya.
NASA's Astrophysics Data System,
the SIMBAD database \citep{SIMBAD},
and the Aladin Sky Atlas \citep{aladin1,aladin2} were also extensively used.
P. Massey and his team are warmly thanked for publicly sharing their photometric
observations and data of Local Group galaxies.
Finally, we would like to thank our anonymous referee for very constructive comments
and suggestions.
\input{./biblio}
\bsp
\label{lastpage}
\end{document}
\section{Introduction}
\label{sec:Intro}
\subsection{The classical-statistical approximation}
\label{sec:classical}
The classical-statistical approximation (CS) to real-time quantum field dynamics consists in replacing the evolution of the quantum operators (such as $\hat{\phi}(x,t)$) by classical dynamics of an ensemble of random initial conditions. The ensemble is taken to reproduce the initial correlators of the quantum system, and each random member of the ensemble is evolved by means of the classical equations of motion. The expectation values of observables are then computed as averages over the ensemble.
The CS approximation is reliable only when the occupation numbers (particle numbers) $n_{\bf k}$ of the fields are large, $n_{\bf k}\gg 1$. For a typical massive scalar field, the field equation reads
\begin{eqnarray}
(\partial_t^2-\partial^2_x+m^2)\hat{\phi}(x,t)=-\frac{dV_{\rm nl}(\hat{\phi})}{d\hat{\phi}}(x,t).
\end{eqnarray}
with $V_{\rm nl}$ denoting non-linear self-interactions.
When non-linearities are small, individual momentum modes behave as harmonic oscillators, and we may write for the field and canonical momentum operators
\begin{eqnarray}
\hat{\phi}(x,t)=\int\frac{d^dk}{(2\pi)^d}\hat{\phi}_{\bf k}(t) e^{i{\bf kx}},\quad
\hat{\phi}_{\bf k}(t) = \hat{a}_{\bf k} f_{\bf k}(t) + \hat{a}^\dagger_{\bf k} f_{\bf k}^*(t),\\
\hat{\pi}(x,t)=\int\frac{d^dk}{(2\pi)^d}\hat{\pi}_{\bf k}(t) e^{i{\bf kx}},\quad
\hat{\pi}_{\bf k}(t) = \hat{a}_{\bf k} \dot{f}_{\bf k}(t) + \hat{a}^\dagger_{\bf k} \dot{f}_{\bf k}^*(t),
\end{eqnarray}
Then the occupation number of a field momentum mode is given by
\begin{eqnarray}
\langle\phi_{\bf k}(t)\phi_{\bf k}^\dagger(t)\rangle=\frac{n_{ \bf k}+\frac{1}{2}}{\omega_k}= \frac{\langle a^\dagger_{\bf k}a_{\bf k}+a_{\bf k}a^\dagger_{\bf k}\rangle}{2}|f_{\bf k}(t)|^2=
\left(\langle a^\dagger_{\bf k}a_{\bf k}\rangle+ \frac{1}{2}\right)|f_{\bf k}(t)|^2,
\end{eqnarray}
where by standard convention, we have taken $|f_{\bf k}|^2(0)=1/\omega_{\bf k}$, $\omega_{\bf k}^2=\bold{k}^2+m^2$.
The zero-point fluctuations (corresponding to the zero-point energy of a harmonic oscillator) are the ``1/2'', while the excitations of the system above the vacuum are the $n_{\bf k}$. The classical limit corresponds to $n_{\bf k}\gg \frac{1}{2}$.
This argument relies on the particle numbers $n_{\bf k}$, a quasi-particle concept valid at weak coupling. The argument may be generalised and made more precise in the context of the Keldysh formalism and Kadanoff-Baym equations for the real-time correlation functions (see for instance \cite{Aarts:1997kp}).
It is convenient to introduce the ``statistical'' and ``spectral'' propagators
\begin{eqnarray}
F(x,y) = \frac{1}{2} \langle [\phi(x),\phi(y)]_+ \rangle,\quad \rho(x,y) = i \langle [\phi(x),\phi(y)]_- \rangle,
\end{eqnarray}
so that the complete propagator (on the Keldysh contour $\mathcal{C}$) may be written:
\begin{eqnarray}
G(x,y) = \langle T\{\phi(x),\phi(y)\}\rangle= F(x,y)-\frac{i}{2}\textrm{sign}_{\mathcal{C}}(x^0-y^0)\rho(x,y).
\end{eqnarray}
The real-time evolution of correlators may be expressed through diagram expansions in terms of $F$ and $\rho$, both for quantum and classical field theory \cite{Aarts:1997kp,Aarts:2001yn} (see also \cite{Rajantie:2006gy}). In the classical approximation, certain diagrams turn out to be absent\footnote{In terms of the Keldysh field basis of $\phi_{cl}$ and $\phi_q$, for instance in $\lambda \phi^4$-theory, the 3-$\phi_{q}$ vertex is absent, and any diagram involving this vertex.} so that whenever the quantum theory self-energy contains a combination of the form\footnote{For different theories, diagrams and self-energies, prefactors may vary. For instance, in $\lambda\phi^4$-theory, the sunset diagram produces $3F^2-\rho^2/4$ in the self-energy component for $\rho$, while $F^2-3\rho^2/4$ appears in the self-energy component for $F$ \cite{Arrizabalaga:2005tf}.}
\begin{eqnarray}
\Sigma_{\rm quantum}\simeq F^2(x,y)-\rho^2(x,y)/4,
\end{eqnarray}
in the classical theory the same diagram has only
\begin{eqnarray}
\Sigma_{\rm classical}\simeq F^2(x,y).
\end{eqnarray}
Hence, the classical approximation is good, whenever $\rho^2$ can in fact be neglected, $F^2\gg \rho^2$. For weak coupling and in the quasi-particle picture, $F\simeq (n_{\bf k}+1/2)/\omega_{\bf k}$, $\rho\simeq 1$, in which case the criterion for classicality again amounts to $n_{\bf k} \gg 1$.
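The relative size of the term dropped in the classical approximation can be made concrete. The sketch below evaluates $(\rho^2/4)/F^2$ with the quasi-particle estimates quoted above, $F\simeq (n_{\bf k}+1/2)/\omega_{\bf k}$ and $\rho\simeq 1$, in units where $\omega_{\bf k}\simeq 1$ so that the ratio depends only on the occupancy; the sample occupancies are illustrative choices.

```python
# Illustration of the classicality criterion F^2 >> rho^2/4, using the
# quasi-particle estimates F ~ (n + 1/2)/omega, rho ~ 1, in units where
# omega ~ 1, so the ratio depends only on the occupancy n.

def dropped_term_ratio(n):
    """Size of the neglected rho^2/4 relative to the retained F^2."""
    F = n + 0.5
    rho = 1.0
    return (rho**2 / 4.0) / F**2

# At n = 0 the dropped term is as large as the retained one (classicality
# fails); at n >> 1 it is suppressed as 1/(2n + 1)^2.
for n in (0.0, 1.0, 10.0, 100.0):
    print(n, dropped_term_ratio(n))
```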
A third fully non-perturbative derivation of the classical-statistical approximation follows directly from the Keldysh contour path integral \cite{Mou:2019gyl}\footnote{The primary focus of \cite{Mou:2019gyl} is the subsequent evaluation of the path integral in the Picard-Lefschetz formalism, but the relation to the CS approximation is independent of that further application.}. In short, whereas the quantum result comes about from averaging over the field variables at all times on the Keldysh contour (all ``paths''), the CS approximation amounts to only averaging over the field variables at the initial time, corresponding to the ensemble of initial conditions.
The authors of \cite{Mou:2019gyl} proceed to show that the phenomenon of tunneling in quantum mechanics may be computed from the complete path integral or the Schr\"odinger equation yielding the same result, while the CS approximation fails to correctly reproduce the tunneling rate. Similarly, the CS approximation fails to describe the famous quantum violation of Bell or Leggett-Garg inequalities \cite{Millington:2020vkg}, even in the free-field limit. In fact, the CS approximation may only be used to compute certain ``classical'' observables.\footnote{The ones involving the $\phi_{\rm cl}$-field and not the $\phi_q$-field in the Keldysh basis.} One obvious example is that the fundamental commutator $[\phi(x),\pi(y)]_-=i\delta^d(x-y)$, which is inherently ``quantum'', vanishes in the CS approximation.\footnote{Note that in actual classical field theory, one may define objects with similar properties, such as the Poisson bracket of canonical variables. But although much of the concrete numerical computation is the same, conceptually classical field theory and the CS approximation to quantum fields are distinct. }
\subsection{Classical-Statistical simulations and the ``half''}
\label{sec:classtat}
There is nothing to prevent us from performing CS computations from any initial condition, provided we are able to somehow generate the configurations making up the initial ensemble. One example is the classical thermal equilibrium state, parametrized by some temperature $T$, $n_{\bf k}+1/2=T/\omega_k$, which up to corrections from non-linear interactions is a fixed point of the dynamics. But evolving some generic initial ensemble amounts to classical field theory from a non-equilibrium initial state, not necessarily with any connection to a quantum system.
As we have seen, only for large occupation numbers (or large $F$) can a CS computation be expected to yield a good approximation to the quantum result, and only for appropriate observables.
Most physical phenomena are dominated by some characteristic momentum range, and the spectrum of momentum modes splits up into regions with large ($\gg 1$), moderate ($\simeq 1$) and small ($<1$) occupation numbers. As long as the modes relevant for the phenomenon of interest are highly occupied, the expectation is that classical dynamics will give a reliable result when applied to all modes. Typical examples include (near-to-)equilibrium systems at high temperature (see for instance \cite{Moore:1999fs,Berges:2007re}), large objects such as topological defects or sphalerons (for instance \cite{Rajantie:2010tb,DOnofrio:2014rug}), as well as high-occupancy phenomena such as resonances and instabilities (for instance \cite{Rajantie:2000nj,Greene:1997fu,Kofman:1997yn,Felder:2000hj,Bodeker:2007fw,Rebhan:2004ur,Tranberg:2003gi,Garcia-Bellido:1999xos}).
For a few very specific cases, a very special initial condition, dubbed ``the quantum half'', has been employed \cite{Rajantie:2000nj,Garcia-Bellido:2002fsq,Smit:2002yg}. The prescription is to represent an initial quantum vacuum state with $n_{\bf k}+\frac{1}{2} = \frac{1}{2}$ by an ensemble of classical initial conditions, and evolve the system classically from there. In most cases, this is very problematic, since $n_{\bf k}\gg 1$ is certainly not satisfied, and the energy density of the initial state is cut-off dependent and divergent.
Another issue is that while the true quantum dynamics ensures that the zero-point fluctuations stay put in each mode \cite{Arrizabalaga:2004iw}, allowing only the exchange of the $n_{\bf k}$ between modes, the classical dynamics does not distinguish between the $n_{\bf k}$ and the $1/2$ excitations, and will allow all to be exchanged. Extracting energy from the zero-point fluctuations in this way is an unphysical effect, which is negligible if $n_{\bf k}+1/2$ is anyway large, but may be very important when $n_{\bf k}+1/2\simeq 1/2$.
However, one property can make it reasonable to describe the quantum dynamics of a quantum-like ``half'' initial condition by the CS approximation: for non-interacting fields, the operator field equations are linear, as described above, allowing us to expand the Heisenberg field operators as independent time-independent harmonic oscillators. To compute numerically
\begin{eqnarray}
\langle\phi_{\bf k}(t)\phi_{\bf k}^\dagger(t)\rangle = \left(\langle a_{\bf k}^\dagger a_{\bf k}\rangle+\frac{1}{2}\right) |f_{\bf k}(t)|^2,
\end{eqnarray}
we only need to solve for $f_{\bf k}(t)$, while the $a_{\bf k}$, $a_{\bf k}^\dagger$ are time-independent operators containing the information about the initial state. Since the evolution is linear, it makes no difference whether we evolve from the initial condition $f_{\bf k}(0)=1/\sqrt{\omega_k}$ and multiply by $n_{\bf k}+1/2$ at the end, or whether we classically evolve an ensemble of initial conditions $\phi_{\bf k}(0)$ with the property that $\langle \phi_{\bf k}(0)\phi^\dagger_{\bf k}(0) \rangle = (n_{\bf k}+\frac{1}{2})/\omega_k$. This is the CS approximation, and so for a non-interacting field, the approximation to the evolution is exact, irrespective of $n_{\bf k}$\footnote{See \cite{Millington:2020vkg} for a detailed discussion of what observables this prescription allows us to compute.}.
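This free-field equivalence can be checked explicitly in a one-mode example. The sketch below (illustrative parameter values; a single real mode standing in for each $\phi_{\bf k}$) draws a Gaussian ensemble with $\langle\phi^2(0)\rangle=(n+1/2)/\omega$ and $\langle\pi^2(0)\rangle=(n+1/2)\omega$, evolves each member with the classical harmonic equation of motion, and verifies that the ensemble average reproduces the exact correlator $(n+1/2)/\omega$ at later times, up to sampling noise.

```python
import numpy as np

# CS prescription for a single free mode: sample a Gaussian ensemble of
# initial conditions with the quantum variances, evolve each member
# classically, and compare the ensemble average to the exact correlator.
# Parameter values (omega, n, ensemble size, time) are illustrative.

rng = np.random.default_rng(0)
omega, n = 1.3, 0.0          # n = 0 is the vacuum-like "half" ensemble
N = 200_000

# Initial ensemble with <phi^2> = (n + 1/2)/omega, <pi^2> = (n + 1/2) omega.
phi0 = rng.normal(0.0, np.sqrt((n + 0.5) / omega), N)
pi0 = rng.normal(0.0, np.sqrt((n + 0.5) * omega), N)

# Classical harmonic evolution of each member (exact solution):
t = 2.7
phi_t = phi0 * np.cos(omega * t) + (pi0 / omega) * np.sin(omega * t)

# For a free mode the CS average reproduces the quantum correlator
# (n + 1/2)/omega at all times, up to sampling noise.
cs_corr = float(np.mean(phi_t**2))
exact = (n + 0.5) / omega
print(cs_corr, exact)
```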
This means that for systems, where for some reason the occupation numbers grow large while still in the linear regime (for small coupling, say), we are allowed to initialise the classical system in the quantum-vacuum like state $n_{\bf k}=1/2$, and evolve the system using classical equations of motion throughout; at early times because the system is linear, at late times because the system has large occupation numbers. We only require that occupation numbers grow large before self-interactions become important.
To summarize: the ideal prescription to simulate a phenomenon arising from a quantum vacuum initial condition is to 1) start off with $1/2$ in all modes, 2) evolve them all with the (quantum, but equivalently classical) linear equations until non-linear self-interactions become important, 3) discard all the modes that have not by then acquired large occupation numbers, and only 4) continue the now classical evolution of the highly occupied modes. Various levels of adherence to these rules can be argued for on a case-by-case basis.
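Step 3) of this prescription is a simple occupancy filter applied to the spectrum at the end of the linear stage. A minimal sketch, with a toy resonance-band spectrum and a threshold chosen purely for illustration:

```python
import numpy as np

# Toy occupancy spectrum after the linear stage: a resonance band on top
# of the vacuum-like 1/2. Band shape, amplitude and threshold are
# illustrative choices, not taken from a specific model.

k = np.linspace(0.1, 3.0, 30)
n_k = 0.5 + 50.0 * np.exp(-((k - 0.8) / 0.3) ** 2)

# Keep only modes that have grown highly occupied; the rest are
# discarded before the non-linear classical evolution of step 4).
threshold = 10.0
keep = n_k > threshold
print(int(keep.sum()), float(k[keep].min()), float(k[keep].max()))
```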
Examples where this applies include:
\begin{itemize}
\item The primordial perturbations responsible for cosmological structure formation. These are the zero-point fluctuations of a weakly coupled scalar field, that grow because the accelerated expansion of space introduces non-adiabatic evolution of the modes \cite{Mukhanov:1990me}. This is one instance of the phenomenon known as ``squeezing'' of an initial vacuum state. Observations show that non-Gaussianities are minute, and so the entire early-time evolution from vacuum fluctuations ($1/2$) to non-vacuum ($n_{\bf k}\gg 1$) may be simulated using (almost linear) classical evolution.
\item Resonant preheating after inflation arises when at the end of inflation, the oscillating inflaton mean field is in resonance with certain field modes (whether of another field or the inflaton itself). This resonance amplifies these modes from an initial vacuum state to large occupation numbers \cite{Greene:1997fu,Kofman:1997yn}. Since the self-interaction is usually quite small ($\lambda\simeq 10^{-12}$ or smaller for many inflation models), occupation numbers can grow very large before non-linearities become important. And so the CS approximation is valid all the way from the quantum vacuum initial state.
\item Tachyonic preheating (or spinodal decomposition) occurs in hybrid inflation-type models, where a negative curvature of the potential $V$ triggers an instability of certain modes $k^2+V''<0$ \cite{Felder:2000hj}. These modes grow exponentially, until self-interactions become important. If the self-interaction is small, the classical evolution again holds from the initial quantum vacuum state (when the evolution is linear), and also in the subsequent non-linear regime, because occupation numbers are by then $\gg 1$ (see for instance \cite{Garcia-Bellido:2002fsq,Smit:2002yg}).
\item Certain plasma instabilities in gauge theories can also be described as unstable modes, at weak coupling \cite{Bodeker:2007fw,Rebhan:2004ur}. As these acquire large occupation numbers, the CS approximation can be applied also in the context of the approach to thermal equilibrium in heavy-ion collisions.
\end{itemize}
A final point worth mentioning is that the classical regime with large occupation numbers does not imply that one particular classical realization (one member of the ensemble) is singled out. All observables must be computed as statistical expectation values over the whole classical ensemble of configurations, which is then expected to reproduce well the expectation values over the wave function (or density matrix) of the quantum system.
\subsection{Classical simulations of vacuum decay}
\label{sec:intitcond}
A quantum system at zero temperature in a local potential minimum (a ``false'' vacuum) may decay into a state in the global minimum (the ``true'' vacuum) through quantum mechanical tunneling. In the Euclidean formulation of quantum field theory the transition is described by an instanton \cite{colman1}, and from a path integral point of view, the transition is mediated by non-classical paths, paths that do not satisfy the classical equations of motion. The transition rate is straightforwardly computed in quantum mechanics, but is substantially harder to extract in quantum field theory.
In \cite{braden}, an approximate agreement was reported between the instanton computation of the transition rate in 1+1 space-time dimensions, and the CS evolution of a vacuum (``half'') initial state in the unstable vacuum. The authors were surprised and intrigued by their result, since tunneling is precisely the type of inherently quantum process where one would expect the CS approximation to fail. Indeed in quantum mechanics (field theory in 0+1 dimensions), the CS approximation does fail to reproduce the quantum tunneling rate \cite{Mou:2019gyl}.
Classical simulations of bubble nucleation are natural in the context of a finite-temperature phase transition, where the initial state is described by the finite temperature distribution of occupation numbers above the unstable vacuum. Then the transition is a classical effect whereby there is some (Boltzmann) probability that the ambient thermal fluctuations manage to spontaneously form a true-vacuum bubble, large enough to make it over the potential barrier and expand to eventually fill the whole of space (we will return to this point in more detail below).
It follows that the finite-temperature bubble nucleation rate can in principle be computed by classically evolving all field configurations starting in the local potential minimum, and then averaging them over the initial Boltzmann distribution, schematically
\begin{eqnarray}
\label{eq:clasrateaverage}
\Gamma_{\rm Finite\,T} = \int P_{\rm Boltzmann,T}[\textrm{configuration}]\times \textrm{transition rate of the configuration}.\nonumber\\
\end{eqnarray}
The result of \cite{braden} would suggest that the quantum tunneling rate follows from the same set of classical trajectories, but averaged over the quantum vacuum-like initial distribution
\begin{eqnarray}
\Gamma_{\rm Quantum} = \int P_\textrm{Vacuum, $\frac{1}{2}$}[\textrm{configuration}]\times \textrm{transition rate of the configuration}.\nonumber\\
\end{eqnarray}
This is a surprising result, and warrants further scrutiny. In particular, since classical evolution conserves energy, it would imply that quantum tunneling is simply the classical evolution of the subset of the initial condition ensemble, that have enough energy to nucleate a bubble.
In \cite{hertz}, the numerical computations of \cite{braden} were reproduced, although it was pointed out that to get the reported agreement between numerical and instanton results, a ``fudge'' factor $\epsilon$ had to be introduced. The agreement occurs for $\epsilon\simeq 1/2$ which amounts to rescaling the zero-point fluctuations from $n_{\bf k}=\frac{1}{2}$ to $\frac{1}{8}$.
We will expand further on that analysis, and argue that the reported agreement is a coincidence to do with the choice of the parameters of the model, the lattice cut-off and the fudge factor, and that it is not specific to the ``half'' initial condition. We will also generalise the simulations to 2+1 dimensions, and show that there is no agreement there. We will see that there are some essential differences between nucleation in 1+1 and higher dimensions.
\section{Tunneling and Bubble Nucleation}
\label{sec:tunneling}
Consider a potential $V$ with two non-degenerate minima, with a barrier in-between. If the system is initially in the local minimum with highest energy, a transition may occur whereby the system moves to the global minimum with lowest energy (``false vacuum decay''). Energetically, it is very expensive for the field to move across the barrier in all of space simultaneously. Instead, one local region of space (a bubble) is created with the field in the global vacuum inside, in the local minimum outside, and with the field continuously interpolating between the two on the boundary (the wall).
\subsection{Classical Bubble Nucleation}
\label{sec:clastunneling}
Classical bubble nucleation is the process by which random classical fluctuations (for instance in equilibrium at a temperature $T$) by chance organise themselves into such a bubble. This happens all the time, but most bubbles are so small that they collapse again. The energy criterion controlling the process is the balance between the energy cost of creating the bubble wall, interpolating between vacua, and the energy gain from the inside of the bubble having a lower potential energy than when the bubble is not there. In the simplest approximation one may write
\begin{eqnarray}
E = \textrm{Surface}\times\sigma +\textrm{Volume}\times\Delta V ,
\end{eqnarray}
where $\sigma$ is the surface tension, the energy associated with the interpolating field wall, and $\Delta V$ is the difference in potential at the two minima $V_{\rm global}-V_{\rm local}$ (which is negative).
In 1+1 dimensions, the volume is the distance between walls, $2R$, while the surface is just a factor of 2 (2 walls),
\begin{eqnarray}
E_{1}= 2\sigma + 2R\Delta V.
\end{eqnarray}
In order for a transition to happen, a random fluctuation has to occur that creates a pair of walls. Once these walls are established, there is no further energy cost in increasing the size of the bubble. The total energy is linearly decreasing with increasing $R$. We define the critical energy and the critical radius to be
\begin{eqnarray}
E_{\rm crit,1 }= 2\sigma,\qquad R_{\rm crit} = 0\quad(\textrm{or the width of a wall}).
\end{eqnarray}
In 2+1 dimensions, things are qualitatively different. Now
\begin{eqnarray}
E_{2}= 2\pi R\sigma + \pi R^2\Delta V ,
\end{eqnarray}
which is maximised to give the saddle point solution
\begin{eqnarray}
E_{\rm crit, 2}=-\frac{\pi \sigma^2}{\Delta V} ,\qquad R_{\rm crit, 2}=-\frac{\sigma}{\Delta V}.
\end{eqnarray}
In most cases, a random fluctuation does not reach this critical radius, and the transition does not complete. The bubble shrinks again. But occasionally, a critical-size bubble is generated, which then continues to grow.
In 3+1 dimensions, we have
\begin{eqnarray}
E_{3}= 4\pi R^2\sigma + \frac{4\pi}{3} R^3\Delta V ,
\end{eqnarray}
so that
\begin{eqnarray}
E_{\rm crit, 3}=\frac{16\pi}{3}\frac{\sigma^3}{\Delta V^2} ,\qquad R_{\rm crit, 3}=-\frac{2\sigma}{\Delta V}.
\end{eqnarray}
Throughout, we have assumed that the bubble is spherical, since this maximises the volume-to-surface ratio. There will be subleading contributions from many other near-spherical configurations.
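As a quick numerical cross-check of these formulas (our illustration, not part of the original analysis), one can locate the maximum of $E(R)$ on a dense grid, using placeholder values $\sigma=1$, $\Delta V=-1$:

```python
import numpy as np

def bubble_energy(R, d, sigma, dV):
    """Energy of a spherical bubble of radius R in d spatial dimensions:
    surface term plus volume term, with dV = V_global - V_local < 0."""
    if d == 2:
        return 2.0 * np.pi * R * sigma + np.pi * R**2 * dV
    if d == 3:
        return 4.0 * np.pi * R**2 * sigma + (4.0 * np.pi / 3.0) * R**3 * dV
    raise ValueError("d must be 2 or 3")

def critical_bubble(d, sigma, dV):
    """Locate the maximum of E(R) on a dense grid of radii."""
    R = np.linspace(1e-6, 10.0 * abs(sigma / dV), 200_001)
    E = bubble_energy(R, d, sigma, dV)
    i = np.argmax(E)
    return R[i], E[i]

sigma, dV = 1.0, -1.0
R2, E2 = critical_bubble(2, sigma, dV)   # expect R = -sigma/dV = 1, E = -pi sigma^2/dV = pi
R3, E3 = critical_bubble(3, sigma, dV)   # expect R = -2 sigma/dV = 2, E = (16 pi/3) sigma^3/dV^2
```

With $\Delta V<0$ this reproduces the saddle-point values $R_{\rm crit,2}=-\sigma/\Delta V$, $E_{\rm crit,2}=-\pi\sigma^2/\Delta V$ and the corresponding expressions in one dimension higher.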
In thermal equilibrium, the bubble nucleation rate is then proportional to the Boltzmann probability of a large enough random fluctuation
\begin{eqnarray}
\frac{\Gamma}{V t}\propto e^{-E_{\rm crit}/T}.
\end{eqnarray}
Dividing by the volume $V$ (not to be confused with the potential) and $t$ normalises the rate to unit volume and time, respectively. A more detailed numerical analysis along the lines of (\ref{eq:clasrateaverage}) allows the direct computation of this quantity \cite{Moore:2000jw,Moore:2001vf}.
In a non-thermal environment, for instance a state with some non-thermal occupation numbers $n_{\bf k}$, the probability of creating such bubbles will depend on the state. As for the thermal equilibrium state, it may require that random multi-wavelength fluctuations manage to organise themselves into a large enough bubble configuration. But one could also imagine a state with only long-wavelength fluctuations (say, of size $R_{\rm crit}$), in which critical-size bubbles are ubiquitous.
There is also the possibility that the state (whether thermal or not) simply has an energy density (much) larger than the height of the potential barrier. Then the system hardly notices that the minima are separated, and will not need to minimise energy into a spherical bubble to perform the transition. In this case, transitions are common and fast. If the energy density is larger than $|\Delta V|$, one may also get transitions back again.
Finally, there is the possibility that the entire physical volume has too little energy to even make a single critical bubble. This is only a practical issue in a numerical simulation of a finite volume, and hence finite total energy. Then a transition will never happen, if the dynamics are classical and energy conserving.
\subsection{Quantum Bubble Nucleation}
\label{sec:quantunneling}
Quantum tunneling is most apparent in situations where a barrier separates two local minima of the potential, and the energy of the state is smaller than the height of the barrier. In quantum mechanics (field theory in 0+1 dimensions), starting in one minimum, one may straightforwardly solve for the wavefunction of the system, giving a non-zero probability of finding the particle inside, and on the other side of the barrier. In time, there is an ever increasing probability for the particle to be measured in the other minimum. In the case when the second minimum is in fact the global minimum, we speak of vacuum decay.
In field theory, the analogous process can also be interpreted in terms of Euclidean instanton paths, famously in \cite{colman1}. This instanton is a 4-D spherically symmetric saddle point of the Euclidean action. One may again write down
\begin{eqnarray}
S_{\rm crit, 4}=-\frac{27 \pi^2}{2}\frac{\sigma^4}{\Delta V^3} ,\qquad R_{\rm crit, 4}=-\frac{3\sigma}{\Delta V}.
\end{eqnarray}
To a good approximation, the rate of tunneling may then be written as
\begin{eqnarray}
\frac{\Gamma}{Vt} \propto e^{-S_{\rm crit,4}},
\end{eqnarray}
but keeping in mind that this is the saddle point action rather than an energy, and that no temperature is involved.
\subsection{The wall tension $\sigma$}
\label{sec:wealltension}
Whereas $\Delta V$ is simply the difference between potential minima, computing the wall tension $\sigma$ in the general case requires knowledge of the wall profile. For classical nucleation, an approximation is found by solving the (spherically symmetric) equation of motion for a static field profile interpolating between the two minima:
\begin{eqnarray}
\label{eq:bounce}
\partial_t\phi=0\rightarrow \Big(\partial_r^2+\frac{(d-1)}{r}\partial_r \Big)\phi =\frac{dV}{d\phi},
\end{eqnarray}
with the boundary conditions, $\phi(r=\infty)=\phi_{\rm local}$, $\partial_r\phi(0)=0$, $\phi(0)=\phi_{\rm global}$.
For $d=1$, this looks like time evolution in the potential $-V$, and is usually solved by numerical
means (shooting) \cite{CosmoT}. Then one may compute the wall tension as
\begin{eqnarray}
R^d\sigma = \int_0^{\infty} dr\, r^{d}\Big[\frac{1}{2}(\partial_r\phi)^2+V(\phi)\Big].
\label{eq:anatension}
\end{eqnarray}
In the limit when the wall is much thinner than the size of the bubble, the term
$(d-1)/r$ may be neglected. Then it is not necessary to know
the detailed shape of the wall, as one may rewrite (\ref{eq:anatension}) into
\begin{eqnarray}
\sigma = \int_{\phi_{\rm local}}^{\phi_{\rm global}} \sqrt{2V(\phi)}d\phi
\end{eqnarray}
which is easily computed, at least numerically.
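To illustrate how easily the thin-wall integral is evaluated, consider a stand-in degenerate double well $V=\frac{\lambda}{4}(\phi^2-v^2)^2$ (a placeholder potential, not the one used below), for which the integral is known in closed form, $\sigma=\frac{4v^3}{3}\sqrt{\lambda/2}$:

```python
import numpy as np

# Stand-in degenerate double well: V = (lam/4) (phi^2 - v^2)^2,
# with minima at phi = ±v and V = 0 there.
lam, v = 1.0, 1.0

phi = np.linspace(-v, v, 100_001)
V = 0.25 * lam * (phi**2 - v**2) ** 2

# Thin-wall tension: sigma = integral of sqrt(2 V) dphi between the minima,
# evaluated here with a simple trapezoidal rule.
f = np.sqrt(2.0 * V)
sigma = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi))

sigma_exact = (4.0 * v**3 / 3.0) * np.sqrt(lam / 2.0)  # closed-form result
```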
For the 4-dimensional instanton we must first rotate to Euclidean space; the saddle point equation in $d+1$ dimensions becomes
\begin{eqnarray}
(\partial_\tau^2+\partial_{\bf x}^2)\phi=\frac{\partial V}{\partial \phi},
\end{eqnarray}
which in 4-dimensional spherical coordinates is equivalent to Eq. (\ref{eq:bounce}), in one dimension higher. Hence for a thin wall, the calculation of the wall tension proceeds in exactly the same way. This does not directly imply a relation between the tunneling rates, since $E_{\rm crit}$ and $S_E$ are very different objects.
\subsection{A convenient toy model potential}
\label{sec:potential}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/Potential_phi0dependence.png}
\caption{The potential in (\ref{eq:action}) for different values $\phi_0$.}
\label{fig:Potential}
\end{figure}
Following \cite{braden} we will focus on a specific potential, defined by the action
\begin{eqnarray}
S = \int dx^{d+1} \left[\frac{1}{2} \partial_{\mu} \phi \partial^{\mu} \phi - V_0 \Big( -\cos \bigg( \frac{\phi}{\phi_0} \bigg) + \frac{\lambda^2}{2} \sin^2 \bigg( \frac{\phi}{\phi_0} \bigg) -1 \Big)\right].
\label{eq:action}
\end{eqnarray}
It is parameterized by three quantities, $\lambda$, $\phi_0$ and $V_0$.
For $\lambda>1$ the periodic potential has global and local minima at $\phi=2n \pi\phi_0$ and $\phi=(2n+1)\pi\phi_0$, respectively, with integer $n$. The potential is chosen to have $V(\phi_{\rm local})=0$ and $\Delta V=-2V_0$, and we define the masses
\begin{eqnarray}
m_{f}^2 &=& \frac{d^2V}{d\phi^2} \Big|_{\phi=\phi_{\rm local}} = \frac{V_0}{\phi_0^2} (-1 + \lambda^2),\\
m_{t}^2 &=& \frac{d^2V}{d\phi^2} \Big|_{\phi=\phi_{\rm global}} = \frac{V_0}{\phi_0^2} (1 + \lambda^2).
\end{eqnarray}
The height of the potential barrier separating the two minima is given by
\begin{eqnarray}
V_{\rm max} = m_f^2 \phi_0^2 \Big( \frac{-1 + \lambda^2}{ 2 \lambda^2} \Big).
\end{eqnarray}
We will follow \cite{braden} and set $\lambda=1.2$. The potential is therefore parametrized by $m_f$ and $\phi_0$. In this parametrization $\phi_0$ fixes the location of the local vacuum but also influences the relative height of the potential barrier. We show in Fig.~\ref{fig:Potential} the potential for example sets of parameters. We will compute the bubble nucleation rate primarily as a function of $\phi_0$, and from the potential alone, we expect the rate to decrease with increasing $\phi_0$.
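As a sanity check (ours, not from the references), the expressions for $m_f$ and $V_{\rm max}$ can be verified numerically, e.g.\ for $\lambda=1.2$ and placeholder units $V_0=\phi_0=1$:

```python
import numpy as np

lam, V0, phi0 = 1.2, 1.0, 1.0   # placeholder parameter values

def V(phi):
    """The toy-model potential of Eq. (action)."""
    x = phi / phi0
    return V0 * (-np.cos(x) + 0.5 * lam**2 * np.sin(x) ** 2 - 1.0)

mf2 = V0 / phi0**2 * (lam**2 - 1.0)                       # false-vacuum mass squared
Vmax_formula = mf2 * phi0**2 * (lam**2 - 1.0) / (2.0 * lam**2)

# Scan the barrier between the global (phi=0) and local (phi=pi*phi0) minima;
# V(phi_local) = 0, so the maximum of V is the barrier height.
phi = np.linspace(0.0, np.pi * phi0, 1_000_001)
Vmax_numeric = V(phi).max()
```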
\subsection{Numerical implementation}
\label{sec:numerical_2}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/Potential_phiDistribution_Nx1000000_sdim1_M30.png}
\caption{The potential for $\phi_0=1$ and $\phi_0=1.5$ for $am_f=0.8$.
The superimposed histograms show the initial distribution of $\phi(x)$ which depends on both the fudge factor $\epsilon$ and the mass $am_f$.}
\label{fig:Potdist}
\end{figure}
We discretize the action on a space-time lattice, and solve the classical equation of motion,
\begin{align}
\dot{\phi} &= \pi, \\
\dot{\pi} &= \nabla^2 \phi - V^{\prime}(\phi).
\end{align}
A symplectic integrator scheme is used to ensure energy conservation for long simulation times.
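A single kick-drift-kick (leapfrog) update for these discretized equations may be sketched as follows (our minimal sketch in lattice units $a=1$, for one spatial dimension; the parameter values are placeholders):

```python
import numpy as np

def Vprime(phi, V0=1.0, phi0=1.0, lam=1.2):
    """dV/dphi for the toy potential of Eq. (action)."""
    x = phi / phi0
    return (V0 / phi0) * np.sin(x) * (1.0 + lam**2 * np.cos(x))

def laplacian(phi):
    """Discretized Laplacian on a periodic 1d lattice (a = 1)."""
    return np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi

def leapfrog_step(phi, pi, dt):
    """Symplectic kick-drift-kick update of phidot = pi, pidot = lap(phi) - V'."""
    pi = pi + 0.5 * dt * (laplacian(phi) - Vprime(phi))
    phi = phi + dt * pi
    pi = pi + 0.5 * dt * (laplacian(phi) - Vprime(phi))
    return phi, pi

def energy(phi, pi, V0=1.0, phi0=1.0, lam=1.2):
    """Total lattice energy: kinetic + gradient + potential."""
    x = phi / phi0
    V = V0 * (-np.cos(x) + 0.5 * lam**2 * np.sin(x) ** 2 - 1.0)
    grad = phi - np.roll(phi, 1)
    return np.sum(0.5 * pi**2 + 0.5 * grad**2 + V)
```

Being symplectic, this update keeps the energy error bounded (oscillating at order $(\omega\, dt)^2$) rather than secularly drifting, which is what makes long nucleation simulations feasible.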
The lattice has periodic boundary conditions; the number of lattice sites per dimension and the lattice spacing are denoted by $N_x$ and $a$, respectively, giving the linear lattice size $L=N_xa$. We recast the lattice action in lattice units, whereby all dimensionful quantities appear in dimensionless versions by means of the lattice spacing, as $am_f$, $a^{d+1}V_0$, $a^2k^2$ and so on. Consequently, the dispersion relation on the lattice is determined by the discretized Laplacian and given by
\begin{eqnarray}
a^2\omega_k^2 = k_L^2 + a^2m_f^2, \qquad k_L^2 = \sum_{i=1}^d 2 - 2 \cos(k_i),
\end{eqnarray}
where for each spatial dimension $i$, $k_i=n_i \frac{2 \pi}{N_x}$ for $n_i =-N_x/2+1, ...,N_x/2$.
The quantity $am_f$ then defines the lattice cut-off, since if the maximum momentum is $a\Lambda\simeq \pi $ then the cut-off in physical units is $\Lambda/m_f=\frac{\pi}{am_f}$. As $am_f$ decreases, the cut-off increases. We will in the following only explicitly write out powers of $a$ when needed.
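A small runnable sketch (ours) of the lattice dispersion relation and cut-off:

```python
import numpy as np

def lattice_omega2(Nx, amf, d=1):
    """a^2 * omega_k^2 on an Nx^d periodic lattice, for the discretized Laplacian."""
    n = np.arange(-Nx // 2 + 1, Nx // 2 + 1)
    k = 2.0 * np.pi * n / Nx                       # one momentum axis
    grids = np.meshgrid(*([k] * d), indexing="ij")
    kL2 = sum(2.0 - 2.0 * np.cos(g) for g in grids)
    return kL2 + amf**2

w2 = lattice_omega2(Nx=512, amf=0.3)
# The maximum lattice momentum per axis is a*Lambda ~ pi, so the physical
# cut-off is Lambda/m_f = pi/(a m_f); smaller a*m_f means a higher cut-off.
```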
The quantum-like initial conditions are Gaussian distributed field fluctuations defined by
\begin{eqnarray}
\label{eq:fluct}
\langle \phi_{\bf k} \phi_{{\bf k}^{\prime}} \rangle = \epsilon^2 \frac{1}{2 \omega_k} \delta^d _{{\bf k} - {\bf k}^{\prime}} , \qquad
\langle \pi_{\bf k} \pi_{{\bf k}^{\prime}} \rangle = \epsilon^2 \frac{\omega_k}{2} \delta^d _{{\bf k} - {\bf k}^{\prime}} .
\end{eqnarray}
These vacuum fluctuations are added to a homogeneous field placed initially at the local minimum
$\phi(x)= \pi \phi_0$.
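A minimal sketch (ours) of how such an ensemble can be generated in 1+1 dimensions, in lattice units: real white Gaussian noise is filtered in momentum space so that $\langle|\phi_{\bf k}|^2\rangle = N_x\,\epsilon^2/(2\omega_k)$ (the factor $N_x$ coming from the discrete delta function of Eq. (\ref{eq:fluct})), and similarly for $\pi$.

```python
import numpy as np

def vacuum_like_fields(Nx, amf, eps, rng):
    """One realization of Gaussian 'vacuum-like' initial conditions in d=1.
    Returns real fields (phi, pi) with the mode variances of Eq. (fluct),
    rescaled by the fudge factor eps."""
    k = 2.0 * np.pi * np.arange(Nx) / Nx
    omega = np.sqrt(2.0 - 2.0 * np.cos(k) + amf**2)   # lattice dispersion

    # The FFT of real white noise is automatically Hermitian, with <|xi_k|^2> = Nx.
    xi_phi = np.fft.fft(rng.standard_normal(Nx))
    xi_pi = np.fft.fft(rng.standard_normal(Nx))

    phi = np.fft.ifft(np.sqrt(eps**2 / (2.0 * omega)) * xi_phi).real
    pi = np.fft.ifft(np.sqrt(eps**2 * omega / 2.0) * xi_pi).real
    return phi, pi

rng = np.random.default_rng(1)
phi, pi = vacuum_like_fields(Nx=128, amf=0.8, eps=0.5, rng=rng)
# These fluctuations are then added on top of the homogeneous field phi = pi*phi0.
```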
The ``fudge factor'' $\epsilon$ was introduced by \cite{hertz} to parametrically fit
tunneling rates to instanton results. $\epsilon =1 $ is the physical value that mimics a quantum vacuum state, whereas other values have no obvious physical interpretation. As we will see, and consistent with \cite{hertz}, the apparent agreement between CS results and the instanton rate arises for $\epsilon\simeq 0.5$.
\begin{figure}[!hb]
\centering
\includegraphics[width=8cm]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0100_M6_Mod1_Fud100_vev1_binning29_F1_turateHerz_fit.png}
\caption{Example in 1+1 dimensions of the time dependence $N_{\rm surv}(t)$ and a fit to the form $N_0 e^{-t \Gamma}$. The plot shows the exponential behaviour starting from roughly 60 percent of the total number of configurations, which was $N=100$.}
\label{fig:1+1fitexample}
\end{figure}
In preparation of the later discussion, it is instructive to generate a single initial condition $\phi(x)$, and simply compute the distribution of local field values. Fig.~\ref{fig:Potdist} shows a histogram superimposed on the potential. For a fudge factor of $\epsilon=0.5$, we see that the entire field configuration is inside the false vacuum initially. But for $\epsilon=1$, already at the initial time, the field is on the other side of the potential barrier in some small parts of space.
Following \cite{braden,hertz}, as the simulation proceeds, we monitor the observable $\langle\cos(\phi/\phi_0)\rangle$, where $\langle . \rangle$ here denotes the average over the lattice volume, to determine whether a configuration has transitioned to one of the neighboring global minima.
For homogeneous configurations at the local/global minima this observable takes the value $-1$ or $+1$.
A configuration is then said to have transitioned if
\begin{eqnarray}
\langle\cos(\phi/\phi_0)\rangle > \langle\cos(\phi/\phi_0)\rangle_{t=0} + 10 \Delta_{t=0} ,
\end{eqnarray}
where $\Delta_{t=0}$ is the standard deviation of the same observable $\cos(\phi/\phi_0)$ at the initial time.
Given an ensemble of $N$ configurations, we define $N_{\rm surv}(t)$ to be the number of these configurations that by a given time $t$ have not yet transitioned. We then perform a fit to the form
\begin{eqnarray}
N_{\rm surv}(t) = N_0 e^{-\Gamma t},
\end{eqnarray}
where $N_0$ refers to the starting point of the fit. Then $\Gamma$ is the bubble nucleation rate.
Typically it takes some time before the configurations begin to transition. The fit was therefore performed from a time when $N_0$ was 60 percent of the total configurations in the simulation. An example is shown in Fig. \ref{fig:1+1fitexample}.
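The fit itself is straightforward; e.g.\ (our sketch, with synthetic survival data in place of simulation output):

```python
import numpy as np

# Synthetic survival counts N_surv(t) = N0 exp(-Gamma t), mimicking the measured
# curve from 60 percent onwards; in practice these come from counting
# untransitioned configurations at each time.
gamma_true, N0_true = 0.3, 60.0
t = np.linspace(0.0, 10.0, 50)
N_surv = N0_true * np.exp(-gamma_true * t)

# Log-linear least-squares fit: log N_surv = log N0 - Gamma t.
slope, intercept = np.polyfit(t, np.log(N_surv), 1)
gamma_fit, N0_fit = -slope, np.exp(intercept)
```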
\section{The rate in 1+1 dimensions.}
\label{sec:1D}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F1DRun_turateHerzM6_60.png}
\caption{The nucleation rate from the instanton model and the simulations with $\epsilon = 0.5$ and $\epsilon = 1$.
The lattice simulation has $N_x=512$, $am_f=0.06$ and integration step size $dt=0.05$.}
\label{fig:1+1results}
\end{figure}
We first consider the system in 1+1 dimensions, exactly as in \cite{braden,hertz}. We compute the nucleation rate for different values of $\phi_0$ and for different values of the fudge factor, shown in Fig. \ref{fig:1+1results}. The number of configurations is $N=100$, which was sufficient to get convincing results.
The instanton estimate of the quantum tunneling rate is obtained via the expression \cite{hertz}
\begin{eqnarray}
\frac{\Gamma}{L} = 2 m_f^2 \Big( \frac{S_B}{2 \pi} \Big) e^{-S_B} ,
\end{eqnarray}
where $S_B$ is the bounce action computed with the tool CosmoTransitions \cite{CosmoT}.
Making allowance for possible small differences in fitting procedure and numerical implementation, this reproduces the results of \cite{braden} and \cite{hertz}, which may be summarized as follows: In 1+1 dimensions, CS simulations of a quantum vacuum-like initial ensemble produce a bubble nucleation rate of a similar order of magnitude as the quantum instanton result, at least for $\phi_0\leq 1.25$. However, this agreement is achieved from a quantum-like initial condition not with occupation numbers of 1/2, but instead 1/8, hence a fudge factor of 1/2 \cite{hertz}. In fact, tuning the fudge factor down from 1, one may achieve different levels of agreement with the instanton result at different values of $\phi_0$. The ``half'' initial condition (fudge factor 1) overestimates the quantum nucleation rate by several orders of magnitude, and has only a weak dependence on the shape of the potential, $\phi_0$.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F1DRun_turateHerzM6_60_thermal.png}
\caption{The thermal nucleation rate for different initial temperatures, compared to the instanton rate and the quantum-$\epsilon=0.5$ result. The cut-off is $am_f=0.3$.}
\label{fig:1+1thermal}
\end{figure}
In Fig. \ref{fig:1+1thermal} we consider another choice of initial condition, namely the classical equilibrium mentioned above
\begin{eqnarray}
n_{\bf k}+\frac{1}{2}\rightarrow \frac{T}{\omega_k}.
\end{eqnarray}
While the quantum vacuum has constant occupation number for all modes, in the classical equilibrium they are suppressed in the UV. The energy density is however still divergent as the cut-off $am_f$ goes to zero. We perform the same simulation procedure as previously, but now for different values of the parameter $T$. We see that just as we did for the fudge factor $\epsilon$, we may also tune $T$ to a semi-quantitative agreement with the instanton nucleation rate, in this case $T=0.1 m_f$.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/F1Drun_cutoffstudy_60_tunnelingrate_new.png}
\caption{The dependence of the nucleation rate on the lattice cut-off in 1+1 dimensions, for quantum-$\epsilon=0.5$ initial conditions and comparing to the instanton rate.}
\label{fig:1+1cutoff}
\end{figure}
Since the initial conditions correspond to a divergent energy density in the continuum, it is also prudent to test the robustness of our results to changing the cut-off, in our parametrization the quantity $am_f$. The result of this procedure is shown in Fig. \ref{fig:1+1cutoff} for $\epsilon=0.5$. We see that while giving a weaker effect than varying $\epsilon$, changing the cut-off is an alternative way of tuning the rate to match the instanton rate. As might be expected, smaller $am_f$ corresponding to larger cut-off and more energy in the system leads to a larger nucleation rate.
We may consider introducing a mass counterterm (or more generally, renormalise the potential \cite{braden2}) to counter the effect of the divergent initial condition. But because the zero point fluctuations do not stay put in classical simulations, this is difficult to achieve (see for instance \cite{Arrizabalaga:2004iw}), and does not in itself solve the problem of a divergent energy being available for tunneling. The particular potential considered here is also not readily renormalisable.
\subsection{Looking for bubbles and energy considerations in 1+1 dimensions}
\label{sec:bubblesenergy}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_cos.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F2_turateHerz_Runs1_100_28T4_8M_cos.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerzbubbles_config.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F2_turateHerzbubbles_config.png}
\caption{The time evolution of $\langle\cos(\phi/\phi_0)\rangle$ (top) for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right). Below are examples of configurations of one of the simulations.
Other simulation parameters are $N_x = 512$, $dt=0.05$, $\phi_0=1.2$, $am_f=0.3$. Each ensemble consists of 100 individual configurations.}
\label{fig:bubbleconfig}
\end{figure}
The nucleation rate for $\epsilon=1$ has a weak dependence on $\phi_0$, and similarly for $\epsilon=0.5$ for small $\phi_0$. We can begin to understand this at least qualitatively by considering the energy density of the configurations.
The top left panel of Fig.~\ref{fig:bubbleconfig} shows the time-evolution of the
observable $\langle\cos(\phi/\phi_0)\rangle$ for individual configurations for $\epsilon=1$ at $\phi_0=1.2$.
The bottom left panel is one
field configuration in space at different times, labelled by the value of $\langle\cos(\phi/\phi_0)\rangle$ at that time. We see that all configurations transition through the threshold value $\simeq -0.6$ almost immediately, and that the field configurations have many nuclei and bubbles. This is an example of an initial condition with an energy density $\rho$ larger than the potential barrier $V_{\rm max}$. There is no need for the configuration to randomly organise itself into a critical bubble for the transition to take place. In contrast, the right-hand panels of Fig.~\ref{fig:bubbleconfig} show a simulation at $\epsilon=0.5$ and $\phi_0=1.2$. Here, the transitions happen as an exponential decay. Also, field configurations evolve over time from a single, initially small, bubble (light blue) to a larger and larger bubble.
To make this more explicit, we compute the total energy and average energy density of the configurations.
Fig.~\ref{fig:1+1density} shows the dependence of the average energy density on $\phi_0$ and $\epsilon$.
The grey shaded region is where the average energy density
is smaller than the potential barrier, while above, the energy density is larger than the barrier. Roughly speaking, one would expect the rate of nucleation to be exponentially suppressed only in the grey region, as a critical bubble needs to emerge through a stochastic process. And one would expect the rate to be unsuppressed everywhere else. Of course, individual field configurations are inhomogeneous, multiple nuclei complicate the picture, and some out-of-equilibrium initial states may have special properties enhancing nucleation. And so the boundaries of the grey region should be considered fuzzy.
We see that for $\epsilon=1$, we only enter the grey region far beyond the range of the figure. And that for $\epsilon=0.5$, we enter the region around $\phi_0=1.05$, corresponding roughly to where the exponential dependence on $\phi_0$ kicks in in Fig. \ref{fig:1+1results}.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/Energydensity_Nx128_sdim1_Configs1000_final.png}
\caption{The dependence of energy density on $\phi_0$ and $\epsilon$. We also see a small dependence on the cut-off $am_f$.}
\label{fig:1+1density}
\end{figure}
Since the energy density is approximately $\propto \epsilon^2$ and the potential barrier $\propto \phi_0^2$, the criterion for entering the grey region $\rho=V_{\rm max}$ amounts to $\phi_0\propto \epsilon$. The proportionality constant in the case depicted here happens to be $\simeq 2.05$, and so the rate for the $\epsilon=1$ initial condition of physical relevance only becomes exponentially suppressed around $\phi_0=2.1$, where the instanton rate is $ \frac{\Gamma}{L} \frac{\phi_0^2}{V_0} = 4.7 \times 10^{-9}$.
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/F1DRun_turateHerzM6_60_extended.png}
\caption{Tunneling rate in 1+1 dimension for larger values of $\phi_0$. Simulations parameters are $N_x=512$, $am_f=0.06$.}
\label{fig:1+1rateextended}
\end{figure}
In Fig. \ref{fig:1+1rateextended} we have extended the range of Fig. \ref{fig:1+1results} to include the exponentially suppressed region for $\epsilon=1$. We see again that the CS approximation overestimates the nucleation rate by several orders of magnitude.
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi050_M30_Mod1_Fud17_vev1_binning29_F2_singlebubble_bubbles_BubbleEnergyvsRad.png}
\caption{The energy of two bubble walls and the bubble interior versus $\langle \cos \frac{\phi}{\phi_0} \rangle$. The simulation parameters are $N_x=512$, $\phi_0=0.5$ and $\epsilon=0.17$.}
\label{fig:Ebubb}
\end{figure}
We can attempt to compute the wall tension in 1+1 dimensions by further developing the naive model of Sec.~\ref{sec:clastunneling}. We note that if the field is exactly in the global minimum inside the bubble ($\cos(\phi/\phi_0)=1$) and in the local minimum outside ($\cos(\phi/\phi_0)=-1$), then for a single configuration
\begin{eqnarray}
\langle \cos \frac{\phi}{\phi_0} \rangle = \frac{4R-N_x}{N_x},
\end{eqnarray}
where $2R$ is the wall separation, and $R$ hence the radius of the bubble. We now compute numerically the energy of bubbles, where we by hand force the interior and exterior to be in the minima. Then
\begin{eqnarray}
E_{\rm Bubble} = 2\sigma + 2 R\Delta V = a + b \cos \frac{\phi}{\phi_0}.
\end{eqnarray}
We fit the parameters $a$ and $b$ for each critical bubble and relate to $\sigma $ and $\Delta V$ via
\begin{eqnarray}
\sigma = \frac{a-b}{2} , \quad \Delta V = \frac{2b}{N_x}.
\end{eqnarray}
In this way, an estimate for $\sigma$ can be obtained by extrapolating $E_{\rm Bubble}$ to $\langle \cos \frac{\phi}{\phi_0} \rangle = -1$. Fig. \ref{fig:Ebubb} shows an example fit to a critical bubble obtained with $\phi_0=0.5$, $\epsilon=0.17$.
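The extraction can be sketched as follows (our illustration with synthetic bubble energies; note that inserting $R=N_x(1+\langle\cos\frac{\phi}{\phi_0}\rangle)/4$ into $E_{\rm Bubble}=2\sigma+2R\Delta V$ gives $a=2\sigma+N_x\Delta V/2$ and $b=N_x\Delta V/2$):

```python
import numpy as np

Nx = 512
sigma_true, dV_true = 1.5, -0.1   # placeholder values

# Synthetic bubble data: E = 2 sigma + 2 R dV for a range of radii,
# standing in for the energies measured from forced bubble configurations.
R = np.linspace(20.0, 100.0, 40)
E = 2.0 * sigma_true + 2.0 * R * dV_true
cos_obs = (4.0 * R - Nx) / Nx          # volume-averaged cos(phi/phi0)

# Linear fit E = a + b * cos, then invert for sigma and dV.
b, a = np.polyfit(cos_obs, E, 1)
sigma_fit = 0.5 * (a - b)
dV_fit = 2.0 * b / Nx
```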
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={$\phi_0^2$},
ylabel={$\frac{E_{bubble}}{m_{f}}$},
xmin=0.25, xmax=2.25,
ymin=0, ymax=10,
legend pos=north west,
ymajorgrids=true,
grid style=dashed,
]
\addplot[
color=blue,
mark=square,
]
coordinates {
(0.25,1.3)(0.49,1.85)(1,3.86)(1.44,6.33)(1.96,9.22)(2.25,9.51)
};
\end{axis}
\end{tikzpicture}
\caption{The critical bubble energy in 1+1 dimensions for different values of $\phi_0$.}
\label{fig:Ecrit}
\end{figure}
Fig.~\ref{fig:Ecrit} shows the corresponding energy of the critical bubble, $2\sigma$, for different values of $\phi_0$. This then is the minimal energy required for a configuration to classically transition to the global minimum.
If the volume, cut-off, and $\epsilon$ are such that the total energy is smaller than this value, the evolution of this interacting scalar field, initially in an out-of-equilibrium state, will eventually drive the system to the classical equilibrium state in the local minimum.
\subsection{Energy depletion and thermalization in 1+1 dimensions}
\label{sec:thermalisation}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0110_M6_Mod1_Fud50_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_noeps_Time_n_range.png}
\caption{The evolution of the occupation numbers in 1+1 dimensions, $\epsilon=0.5$, $\phi_0=1.5$, $am_f=0.06$.}
\label{fig:1+1particlenumbers1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0150_M30_Mod1_Fud17_vev1_binning29_F1_thermalquestionmark_Runs1_50_28T4_8M_noeps_Time_n_range.png}
\caption{The evolution of the occupation numbers in 1+1 dimensions,
$\epsilon=0.17$, $\phi_0=1.5$, $am_f=0.06$.}
\label{fig:1+1particlenumbers2}
\end{figure}
Since the occupation numbers of the modes play an important role in the CS approximation, we also compute these through
\begin{eqnarray}
n_{\bf k}+1/2= \sqrt{\langle\pi_{\bf k}^\dagger\pi_{\bf k}\rangle\langle\phi_{\bf k}^\dagger\phi_{\bf k}\rangle}.
\label{eq:nkdef}
\end{eqnarray}
In Fig. \ref{fig:1+1particlenumbers1} we show the occupation numbers for a set of modes in time\footnote{The modes are collected in finite bins with several modes in each, enumerated by their central $k_L$-value.}. We see that initially, the occupation numbers (\ref{eq:nkdef}) are indeed $\epsilon^2/2$, and as the nucleation is triggered (within a time of a few in mass units), they increase as potential energy is converted into excitations. The energy is mostly deposited in IR modes.
As discussed in the preceding section, we can engineer an initial configuration with total energy less than $E_{\rm crit}$, which will never transition. An example of this is shown in Fig. \ref{fig:1+1particlenumbers2}. For very long times, we expect the particle numbers to slowly reorganise themselves into a classical thermal spectrum. Clearly, in 1+1 dimensions, this is an extremely long time, longer than we are able to simulate. This also implies that the nucleation rates that we have computed so far indeed arise from the quantum-like initial state. It is not the case that the initial state first thermalises to equilibrium, after which the nucleation takes place. We will return to this point when considering 2+1 dimensional simulations.
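The measurement of Eq. (\ref{eq:nkdef}) can be sketched as follows (our illustration): for a Gaussian vacuum-like ensemble generated with fudge factor $\epsilon$, the estimator should return $n_{\bf k}+1/2 \simeq \epsilon^2/2$ for every mode at the initial time.

```python
import numpy as np

def occupation(phis, pis):
    """n_k + 1/2 from an ensemble of (phi, pi) configurations in d=1, using
    n_k + 1/2 = sqrt(<|pi_k|^2><|phi_k|^2>) / Nx in lattice conventions."""
    phik2 = np.mean(np.abs(np.fft.fft(phis, axis=1)) ** 2, axis=0)
    pik2 = np.mean(np.abs(np.fft.fft(pis, axis=1)) ** 2, axis=0)
    Nx = phis.shape[1]
    return np.sqrt(pik2 * phik2) / Nx

# Gaussian ensemble with <|phi_k|^2> = Nx eps^2/(2 w_k), <|pi_k|^2> = Nx eps^2 w_k/2,
# built by filtering real white noise in momentum space.
rng = np.random.default_rng(2)
Nx, amf, eps, nens = 128, 0.8, 0.5, 2000
omega = np.sqrt(2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(Nx) / Nx) + amf**2)
phis = np.fft.ifft(np.sqrt(eps**2 / (2.0 * omega))
                   * np.fft.fft(rng.standard_normal((nens, Nx)), axis=1), axis=1).real
pis = np.fft.ifft(np.sqrt(eps**2 * omega / 2.0)
                  * np.fft.fft(rng.standard_normal((nens, Nx)), axis=1), axis=1).real

n_half = occupation(phis, pis)   # should be close to eps^2/2 = 0.125 for each k
```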
\section{Generalising to 2+1 dimensions.}
\label{sec:2+1D}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F2DRun_turateHerz_60.png}
\caption{The nucleation rate for different values of $\phi_0$ and $\epsilon$.
We used $N_x^{2} = 512^2$, except for $\epsilon=0.5$ where $N_x^{2} = 128^2$ was used. We take
$am_f = 0.3$ and $dt=0.05$.}
\label{fig:1+2rate}
\end{figure}
We now perform 2+1 dimensional simulations completely analogously to the 1+1 dimensional case.
We discretize
the 2+1 dimensional action on a square lattice of size $N_x^2$. The scale is still set by the mass $am_f$, and the other dimensionless combinations are now $a^3V_0$ and $a^{1/2}\phi_0$.
The instanton prediction is obtained by generalizing the 1+1 dimensional case, and is given by
\begin{eqnarray}
\frac{\Gamma}{L^2} = 2 m_f^3 \Big( \frac{S_B}{2 \pi} \Big)^{3/2} e^{-S_B},
\end{eqnarray}
where the bounce action $S_B$ is again obtained from CosmoTransitions \cite{CosmoT}.
In Fig.~\ref{fig:1+2rate} we show the transition rates for different values
of $\phi_0$ for lattice simulations with fudge factors $\epsilon=1$,
$\epsilon=0.8$, $\epsilon=0.5$, respectively, and again comparing to the instanton prediction.
It is clear that the CS rate even when applying a fudge factor vastly overestimates the quantum tunneling rate. Quantum bubble nucleation and false vacuum decay cannot be modelled in this way. Simulations for large values of $\phi_0$ and $\epsilon=0.5$ did not transition to a global minimum at all.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F2DRun_cutoffstudy_60.png}
\caption{The cut-off dependence of the nucleation rate, again for different values $\phi_0$.}
\label{fig:1+2ratecutoff}
\end{figure}
Fig.~\ref{fig:1+2ratecutoff} shows the nucleation rate for different values of the cut-off $am_f$ for $\epsilon=0.5$. As in 1+1 dimensions, a higher cut-off (low $am_f$) results in higher rates, but the dependence is much stronger in 2+1 dimensions than it was in 1+1 dimensions. In 2 spatial dimensions, the number of UV modes grows much faster as the cut-off increases, and the initial energy density then also increases faster.
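This cut-off scaling can be made quantitative by summing the zero-point energy $\omega_{\bf k}/2$ over all lattice modes. A sketch, assuming the standard lattice dispersion (the lattice size and mass here are illustrative, not tied to the simulation parameters):

```python
import numpy as np

def halfmode_energy_density(a, m=1.0, d=1, N=64):
    """Energy density of the 'half' initial condition: the sum of omega_k/2
    over all modes of an N^d lattice with spacing a, divided by the volume.
    Uses the lattice dispersion omega_k^2 = m^2 + sum_i (2/a sin(k_i a/2))^2."""
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=a)            # lattice momenta, |k| < pi/a
    grids = np.meshgrid(*([k] * d), indexing="ij")
    k2_lat = sum((2.0 / a * np.sin(ki * a / 2.0))**2 for ki in grids)
    omega = np.sqrt(m**2 + k2_lat)
    return 0.5 * np.sum(omega) / (N * a)**d             # energy per unit volume
```

Halving the lattice spacing roughly doubles the momentum cut-off $\Lambda$, and the energy density grows parametrically as $\Lambda^2$ in one spatial dimension but $\Lambda^3$ in two, which is the stronger cut-off dependence seen in the figure.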
\subsection{Looking for bubbles and energy considerations in 2+1 dimensions}
\label{sec:2+1bubblesenergy}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_modepsilon.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud80_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_longer_modepsilon.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F2_turateHerzbubbles.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud80_vev1_binning29_F2_turateHerznew.png}
\caption{The time evolution of $\langle\cos(\phi/\phi_0)\rangle$ (top) for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right), both at $\phi_0=1.2$. Below are example configurations, with snapshots at different times in the evolution, again for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right).
The simulation parameters are $N_x = 512$, $dt=0.05$, and $am_f=0.3$. Each ensemble consists of 100 individual configurations.}
\label{fig:1+2bubbles}
\end{figure}
We will again take a closer look at the configurations close to the transition. Fig.~\ref{fig:1+2bubbles} shows two sets of simulations with high and low transition rates, respectively.
The simulations shown in the left-hand panels have $\epsilon=1.0$ and $\phi_0=1.2$ while the right-hand panels correspond to $\epsilon=0.5$ and $\phi_0=1.2$.
We observe a clear difference: on the left-hand side, transitions happen almost immediately, and the field configurations display many small bubbles nucleating close to each other. As in 1+1 dimensions, this is a case of the energy density being larger than the potential barrier. On the right-hand side, we have an exponential decay, with just a single bubble nucleating in the entire volume.
In a similar way as in 1+1 dimensions, we compute the initial energy density and compare it to the potential barrier. This is shown in Fig.~\ref{fig:2+1density} where again the grey area corresponds to parameter combinations where the energy density is smaller than the barrier, and a bubble must be created for the transition to take place.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/Energydensity_Nx128_sdim2_Configs100_final.png}
\caption{The initial energy density for 2+1 dimensional configurations for different values of
$\epsilon$, $\phi_0$ and cut-off $am_f$. Configurations with $\epsilon=0.5$ can have a smaller
average energy density than the potential barrier.}
\label{fig:2+1density}
\end{figure}
We can again estimate the wall tension by computing the energy of 2-dimensional bubbles, fixing by hand the field inside and outside the bubble to the global and local minimum values, respectively.
Fig.~\ref{fig:Ebubb2} shows the energy as a function of the radius of a growing bubble. As we argued in Sec.~\ref{sec:clastunneling}, the critical bubble in 2+1 and higher dimensions has a non-zero $R_{\rm crit}$, in contrast to the 1+1 dimensional case.
From the quadratic fit in the figure we can tentatively estimate the critical bubble size to be $R m_f=15$--$20$.
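The extraction of $R_{\rm crit}$ from such a fit can be illustrated with synthetic data: for a thin-wall bubble energy $E(R)=2\pi\sigma R+\pi\Delta V R^2$, the vertex of the fitted parabola sits at $R_{\rm crit}=-\sigma/\Delta V$. A sketch with made-up values of $\sigma$ and $\Delta V$ (not those of the simulations):

```python
import numpy as np

sigma, dV = 1.0, -0.1                # illustrative wall tension and potential difference
R = np.linspace(1.0, 8.0, 50)        # radii below the critical radius
E = 2*np.pi*sigma*R + np.pi*dV*R**2  # thin-wall bubble energy, 2 spatial dimensions

c2, c1, c0 = np.polyfit(R, E, 2)     # quadratic fit E = c2 R^2 + c1 R + c0
R_crit = -c1 / (2*c2)                # vertex of the parabola, where dE/dR = 0
# For these inputs, R_crit = -sigma/dV = 10.
```

In practice the measured bubble energies carry noise, so the fitted vertex only gives a tentative range as quoted above.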
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0150_M30_Mod1_Fud80_vev1_binning29_F2_turateHerznew_bubbles_insidezero_Erplot_new.png}
\caption{The energy of the bubble as a function of its radius. The simulations parameters are $N_x^2=512^2$, $\phi_0=1.5$, $am_f=0.3$ and $\epsilon=0.8$.}
\label{fig:Ebubb2}
\end{figure}
\subsection{Energy depletion and thermalization in 2+1 dimensions}
\label{sec:2+1thermalisation}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx128_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F1_turateHerzBig_Runs1_93_28T4_8M_noeps_Time_n_range.png}
\includegraphics[width=0.7\textwidth]{img/Nx128_sdim2_dt50_init0_ON1_L120_phi0140_M30_Mod1_Fud50_vev1_binning29_F1_turateHerz2clean_Runs1_16_28T4_8M_noeps_Time_n_range.png}
\caption{Time evolution of occupation numbers for simulations with $N_x=128$, $\epsilon=0.5$ and different values of $\phi_0$. Top: $\phi_0=1.2$; bottom: $\phi_0=1.4$.}
\label{fig:2+1nkplots}
\end{figure}
Finally, we will consider the evolution of the occupation numbers of the fields, also in 2+1 dimensions.
Fig.~\ref{fig:2+1nkplots} shows the occupation numbers as defined in (\ref{eq:nkdef}) for simulations with $\epsilon=0.5$. The top panel shows the case of $\phi_0=1.2$, where the configurations transition around $m_f t=4\times 10^6$. The bottom panel has $\phi_0=1.4$, where the configuration does not transition at all (note that the time-axis extends to $2\times 10^7$).
In 1+1 dimensions, we saw that the initial quantum-like distribution is essentially unchanged up until the transition takes place. But in 2+1 dimensions, even before the transition happens, the dynamics have begun redistributing the energy to approach the thermal equilibrium state. For $\phi_0=1.2$ this process does not have time to complete, but for $\phi_0=1.4$, the transition rate is so small that the system thermalises, reaching an asymptotic state. This would not happen in the true quantum system.
It seems that in 1+1 dimensions, the time scales are such that classical nucleation is always much faster than thermalisation, while in 2+1 dimensions, kinetic equilibration is often well underway by the time the transition happens. This ordering of time scales depends on the potential (the strength of self-interactions), the initial condition ($\epsilon$, say) and the cut-off ($am_f$).
\section{Conclusions}
\label{sec:Conclusions}
Motivated by the intriguing possibility proposed in \cite{braden}, that classical-statistical simulations could have something to say about quantum vacuum decay, we have investigated such simulations, both in 1+1 and 2+1 dimensions. The conclusion is disappointing, although perhaps not wholly unexpected.
The reported approximate agreement between the instanton calculation and the CS simulations is there, but as we have seen it arises through arbitrarily adjusting the parameters of the initial conditions (the amplitude $\epsilon$ and the cut-off $am_f$), and is also not specific to the quantum-like state with equal occupation numbers in all modes (thermal initial conditions work just as well, when tuning $T$). In fact, the actual, $\epsilon=1$, ``half'' initial condition intended to mimic the zero-point fluctuations of the false vacuum produces a nucleation rate several orders of magnitude larger than the instanton nucleation rate, even in the range of $\phi_0$ where the energy density is smaller than the barrier. In addition, obtaining even approximate agreement between CS simulations and the instanton result is specific to 1+1 dimensions. In 2+1 dimensions, the CS simulations consistently overestimate the nucleation rate by many orders of magnitude. We also attempted simulations along the same lines for the physically relevant case of 3+1 dimensions, but the nucleation rate is then far below our numerical reach, and advanced Monte-Carlo techniques are likely required to compute even the classical rate \cite{Moore:2000jw,Moore:2001vf}.
As mentioned in the introduction, the CS-approximation may be derived directly as a limit of the full real-time path integral \cite{Mou:2019gyl}. It is only a good approximation for interacting quantum evolution for large occupation numbers, and even then only when computing ``classical'' observables. Quantum vacuum decay is both inherently quantum and by construction has an initial condition with occupation numbers $\ll 1$. Such initial conditions can only reliably be simulated in the CS approximation for very small coupling, when the evolution equations are (approximately) linear. But we have seen that even the proposed ``half'' initial condition probes the non-linear regions of the potential considered here (Fig. \ref{fig:Potdist}).
We conclude that computing quantum tunneling rates in field theory beyond \cite{colman1} remains a difficult task, which cannot be simulated using classical dynamics of an ensemble of configurations. It likely requires non-perturbative numerical methods at the level of the path integral, known to be challenging for real-time systems out of equilibrium (although see \cite{Berges:2000ur,Berges:2004yj,Arrizabalaga:2004iw, Arrizabalaga:2005tf} and \cite{Mou:2019gyl}). Fortunately, in almost all cases, phase transitions involve non-vacuum initial states, for which the quantum rate is insignificant compared to the classical nucleation rate. And classical nucleation rates may be computed using CS simulations or stochastic evolution in effective theories \cite{Moore:2000jw,Moore:2001vf}.
\section*{Acknowledgement}
We thank Paul Saffin, Peter Millington, Zong-Gang Mou and Alexander Rothkopf for collaboration on related topics as well as useful discussions and comments on the present manuscript.
\section{Introduction}
\label{sec:Intro}
\subsection{The classical-statistical approximation}
\label{sec:classical}
The classical-statistical approximation (CS) to real-time quantum field dynamics consists in replacing the evolution of the quantum operators (such as $\hat{\phi}(x,t)$) by classical dynamics of an ensemble of random initial conditions. The ensemble is taken to reproduce the initial correlators of the quantum system, and each random member of the ensemble is evolved by means of the classical equations of motion. The expectation values of observables are then computed as averages over the ensemble.
The CS approximation is reliable only when the occupation numbers (particle numbers) $n_{\bf k}$ of the fields are large, $n_{\bf k}\gg 1$. For a typical massive scalar field, the field equation reads
\begin{eqnarray}
(\partial_t^2-\partial^2_x+m^2)\hat{\phi}(x,t)=-\frac{dV_{\rm nl}(\hat{\phi})}{d\hat{\phi}}(x,t),
\end{eqnarray}
with $V_{\rm nl}$ denoting non-linear self-interactions.
When non-linearities are small, individual momentum modes behave as harmonic oscillators, and we may write for the field and canonical momentum operators
\begin{eqnarray}
\hat{\phi}(x,t)=\int\frac{d^dk}{(2\pi)^d}\hat{\phi}_{\bf k}(t) e^{i{\bf kx}},\quad
\hat{\phi}_{\bf k}(t) = \hat{a}_{\bf k} f_{\bf k}(t) + \hat{a}^\dagger_{\bf k} f_{\bf k}^*(t),\\
\hat{\pi}(x,t)=\int\frac{d^dk}{(2\pi)^d}\hat{\pi}_{\bf k}(t) e^{i{\bf kx}},\quad
\hat{\pi}_{\bf k}(t) = \hat{a}_{\bf k} \dot{f}_{\bf k}(t) + \hat{a}^\dagger_{\bf k} \dot{f}_{\bf k}^*(t).
\end{eqnarray}
Then the occupation number of a field momentum mode is given by
\begin{eqnarray}
\langle\phi_{\bf k}(t)\phi_{\bf k}^\dagger(t)\rangle=\frac{n_{ \bf k}+\frac{1}{2}}{\omega_k}= \frac{\langle a^\dagger_{\bf k}a_{\bf k}+a_{\bf k}a^\dagger_{\bf k}\rangle}{2}|f_{\bf k}(t)|^2=
\left(\langle a^\dagger_{\bf k}a_{\bf k}\rangle+ \frac{1}{2}\right)|f_{\bf k}(t)|^2,
\end{eqnarray}
where by standard convention we have taken $|f_{\bf k}(0)|^2=1/\omega_{\bf k}$, $\omega_{\bf k}^2=\bold{k}^2+m^2$.
The zero-point fluctuations (corresponding to the zero-point energy of a harmonic oscillator) are the ``1/2'', while the excitations of the system above the vacuum are the $n_{\bf k}$. The classical limit corresponds to $n_{\bf k}\gg \frac{1}{2}$.
This argument relies on the particle numbers $n_{\bf k}$, a quasi-particle concept valid at weak coupling. The argument may be generalised and made more precise in the context of the Keldysh formalism and Kadanoff-Baym equations for the real-time correlation functions (see for instance \cite{Aarts:1997kp}).
It is convenient to introduce the ``statistical'' and ``spectral'' propagators
\begin{eqnarray}
F(x,y) = \frac{1}{2} \langle [\phi(x),\phi(y)]_+ \rangle,\quad \rho(x,y) = i \langle [\phi(x),\phi(y)]_- \rangle,
\end{eqnarray}
so that the complete propagator (on the Keldysh contour $\mathcal{C}$) may be written:
\begin{eqnarray}
G(x,y) = \langle T\{\phi(x),\phi(y)\}\rangle= F(x,y)-\frac{i}{2}\textrm{sign}_{\mathcal{C}}(x^0-y^0)\rho(x,y).
\end{eqnarray}
The real-time evolution of correlators may be expressed through diagram expansions in terms of $F$ and $\rho$, both for quantum and classical field theory \cite{Aarts:1997kp,Aarts:2001yn} (see also \cite{Rajantie:2006gy}). In the classical approximation, certain diagrams turn out to be absent\footnote{In terms of the Keldysh field basis of $\phi_{cl}$ and $\phi_q$, for instance in $\lambda \phi^4$-theory, the 3-$\phi_{q}$ vertex is absent, and any diagram involving this vertex.} so that whenever the quantum theory self-energy contains a combination of the form\footnote{For different theories, diagrams and self-energies, prefactors may vary. For instance, in $\lambda\phi^4$-theory, the sunset diagram produces $3F^2-\rho^2/4$ in the self-energy component for $\rho$, while $F^2-3\rho^2/4$ appears in the self-energy component for $F$ \cite{Arrizabalaga:2005tf}.}
\begin{eqnarray}
\Sigma_{\rm quantum}\simeq F^2(x,y)-\rho^2(x,y)/4,
\end{eqnarray}
in the classical theory the same diagram has only
\begin{eqnarray}
\Sigma_{\rm classical}\simeq F^2(x,y).
\end{eqnarray}
Hence, the classical approximation is good whenever $\rho^2$ can in fact be neglected, $F^2\gg \rho^2$. For weak coupling and in the quasi-particle picture, $F\simeq (n_{\bf k}+1/2)/\omega_{\bf k}$, $\rho\simeq 1$, in which case the criterion for classicality again amounts to $n_{\bf k} \gg 1$.
A third fully non-perturbative derivation of the classical-statistical approximation follows directly from the Keldysh contour path integral \cite{Mou:2019gyl}\footnote{The primary focus of \cite{Mou:2019gyl} is the subsequent evaluation of the path integral in the Picard-Lefschetz formalism, but the relation to the CS approximation is independent of that further application.}. In short, whereas the quantum result comes about from averaging over the field variables at all times on the Keldysh contour (all ``paths''), the CS approximation amounts to only averaging over the field variables at the initial time, corresponding to the ensemble of initial conditions.
The authors of \cite{Mou:2019gyl} proceed to show that the phenomenon of tunneling in quantum mechanics may be computed from the complete path integral or the Schr\"odinger equation yielding the same result, while the CS approximation fails to correctly reproduce the tunneling rate. Similarly, the CS approximation fails to describe the famous quantum violation of Bell or Leggett-Garg inequalities \cite{Millington:2020vkg}, even in the free-field limit. In fact, the CS approximation may only be used to compute certain ``classical'' observables.\footnote{The ones involving the $\phi_{\rm cl}$-field and not the $\phi_q$-field in the Keldysh basis.} One obvious example is that the fundamental commutator $[\phi(x),\pi(y)]_-=i\delta^d(x-y)$, which is inherently ``quantum'', vanishes in the CS approximation.\footnote{Note that in actual classical field theory, one may define objects with similar properties, such as the Poisson bracket of canonical variables. But although much of the concrete numerical computation is the same, conceptually classical field theory and the CS approximation to quantum fields are distinct. }
\subsection{Classical-Statistical simulations and the ``half''}
\label{sec:classtat}
There is nothing to prevent us from performing CS computations from any initial condition, provided we are able to somehow generate the configurations making up the initial ensemble. One example is the classical thermal equilibrium state, parametrized by some temperature $T$, $n_{\bf k}+1/2=T/\omega_k$, which up to corrections from non-linear interactions is a fixed point of the dynamics. But evolving some generic initial ensemble amounts to classical field theory from a non-equilibrium initial state, not necessarily with any connection to a quantum system.
As we have seen, only for large occupation numbers (or large $F$) can a CS computation be expected to yield a good approximation to the quantum result, and only for appropriate observables.
Most physical phenomena are dominated by some characteristic momentum range, and the spectrum of momentum modes splits up into regions with large ($\gg 1$), moderate ($\simeq 1$) and small ($<1$) occupation numbers. As long as the modes relevant for the phenomenon of interest are highly occupied, the expectation is that classical dynamics will give a reliable result when applied to all modes. Typical examples include (near-to-)equilibrium systems at high temperature (see for instance \cite{Moore:1999fs,Berges:2007re}), large objects such as topological defects or sphalerons (for instance \cite{Rajantie:2010tb,DOnofrio:2014rug}), as well as high-occupancy phenomena such as resonances and instabilities (for instance \cite{Rajantie:2000nj,Greene:1997fu,Kofman:1997yn,Felder:2000hj,Bodeker:2007fw,Rebhan:2004ur,Tranberg:2003gi,Garcia-Bellido:1999xos}).
For a few very specific cases, a very special initial condition has been employed dubbed ``the quantum half'' \cite{Rajantie:2000nj,Garcia-Bellido:2002fsq,Smit:2002yg}. The prescription is to represent an initial quantum vacuum state with $n_{\bf k}+\frac{1}{2} = \frac{1}{2}$ by an ensemble of classical initial conditions, and evolve the system classically from there. In most cases, this is very problematic, since $n_{\bf k}\gg 1$ is certainly not satisfied, and the energy density of the initial state is cut-off dependent and divergent.
Another issue is that while the true quantum dynamics ensures that the zero-point fluctuations stay put in each mode \cite{Arrizabalaga:2004iw}, allowing only the exchange of the $n_{\bf k}$ between modes, the classical dynamics does not distinguish between the $n_{\bf k}$ and the $1/2$ excitations, and will allow all to be exchanged. Extracting energy from the zero-point fluctuations in this way is an unphysical effect, which is negligible if $n_{\bf k}+1/2$ is anyway large, but may be very important when $n_{\bf k}+1/2\simeq 1/2$.
However, one property can make it reasonable to describe the quantum dynamics of a quantum-like ``half'' initial condition by the CS approximation: for non-interacting fields, the operator field equations are linear, allowing us, as described above, to expand the Heisenberg field operators in independent harmonic oscillator modes with time-independent ladder operators. To compute numerically
\begin{eqnarray}
\langle\phi_{\bf k}(t)\phi_{\bf k}^\dagger(t)\rangle = \left(\langle a_{\bf k}^\dagger a_{\bf k}\rangle+\frac{1}{2}\right) |f_{\bf k}(t)|^2,
\end{eqnarray}
we only need to solve for $f_{\bf k}(t)$, while the $a_{\bf k}$, $a_{\bf k}^\dagger$ are time-independent operators containing the information about the initial state. Since the evolution is linear, it makes no difference whether we evolve from the initial condition $f_{\bf k}(0)=1/\sqrt{\omega_k}$ and multiply by $n_{\bf k}+1/2$ at the end, or whether we classically evolve an ensemble of initial conditions $\phi_{\bf k}(0)$ with the property that $\langle \phi_{\bf k}(0)\phi^\dagger_{\bf k}(0) \rangle = (n_{\bf k}+\frac{1}{2})/\omega_k$. This is the CS approximation, and so for a non-interacting field, the approximation to the evolution is exact, irrespective of $n_{\bf k}$\footnote{See \cite{Millington:2020vkg} for a detailed discussion of what observables this prescription allows us to compute.}.
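The exactness of the free-field CS evolution is easy to check numerically for a single mode: drawing $(\phi,\pi)$ from a Gaussian with $\langle\phi^2\rangle=(n_{\bf k}+1/2)/\omega_{\bf k}$ and $\langle\pi^2\rangle=(n_{\bf k}+1/2)\,\omega_{\bf k}$ and evolving each member classically, the equal-time variance stays at its quantum value. A sketch for one mode, using the exact harmonic solution (the frequency and ensemble size are arbitrary choices):

```python
import numpy as np

def free_mode_variance(t, omega=2.0, n_occ=0.0, n_cfg=200_000, seed=1):
    """Classical ensemble for one free mode: Gaussian initial conditions with
    <phi^2> = (n_occ+1/2)/omega and <pi^2> = (n_occ+1/2)*omega, evolved with
    the exact harmonic solution. Returns the ensemble variance of phi at time
    t, which should remain (n_occ+1/2)/omega for all t."""
    rng = np.random.default_rng(seed)
    s = n_occ + 0.5
    phi0 = rng.normal(scale=np.sqrt(s / omega), size=n_cfg)
    pi0 = rng.normal(scale=np.sqrt(s * omega), size=n_cfg)
    phi_t = phi0 * np.cos(omega * t) + pi0 * np.sin(omega * t) / omega
    return np.mean(phi_t**2)
```

For $n_{\rm occ}=0$ this is the ``half'' initial condition, and the variance stays at $1/(2\omega)$ up to statistical noise; non-linear interactions are what spoil this correspondence.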
This means that for systems, where for some reason the occupation numbers grow large while still in the linear regime (for small coupling, say), we are allowed to initialise the classical system in the quantum-vacuum like state $n_{\bf k}=1/2$, and evolve the system using classical equations of motion throughout; at early times because the system is linear, at late times because the system has large occupation numbers. We only require that occupation numbers grow large before self-interactions become important.
To summarise: the ideal prescription to simulate a phenomenon arising from a quantum vacuum initial condition is to 1) start off with $1/2$ in all modes, 2) evolve them all with the (quantum, but equivalently classical) linear equations until non-linear self-interactions become important, 3) discard all the modes that have not by then acquired large occupation numbers, and only 4) continue the now classical evolution of the highly occupied modes. Various levels of adherence to these rules can be argued for on a case-by-case basis.
Examples, where this applies include:
\begin{itemize}
\item The primordial perturbations responsible for cosmological structure formation. These are the zero-point fluctuations of a weakly coupled scalar field, that grow because the accelerated expansion of space introduces non-adiabatic evolution of the modes \cite{Mukhanov:1990me}. This is one instance of the phenomenon known as ``squeezing'' of an initial vacuum state. Observations show that non-Gaussianities are minute, and so the entire early-time evolution from vacuum fluctuations ($1/2$) to non-vacuum ($n_{\bf k}\gg 1$) may be simulated using (almost linear) classical evolution.
\item Resonant preheating after inflation arises when at the end of inflation, the oscillating inflaton mean field is in resonance with certain field modes (whether of another field or the inflaton itself). This resonance amplifies these modes from an initial vacuum state to large occupation numbers \cite{Greene:1997fu,Kofman:1997yn}. Since the self-interaction is usually quite small ($\lambda\simeq 10^{-12}$ or smaller for many inflation models), occupation numbers can grow very large before non-linearities become important. And so the CS approximation is valid all the way from the quantum vacuum initial state.
\item Tachyonic preheating (or spinodal decomposition) occurs in hybrid inflation-type models, where a negative curvature of the potential $V$ triggers an instability of certain modes $k^2+V''<0$ \cite{Felder:2000hj}. These modes grow exponentially, until self-interactions become important. If the self-interaction is small, the classical evolution again holds from the initial quantum vacuum state (when the evolution is linear), and also in the subsequent non-linear regime, because occupation numbers are by then $\gg 1$ (see for instance \cite{Garcia-Bellido:2002fsq,Smit:2002yg}).
\item Certain plasma instabilites in gauge theories can also be described as unstable modes, at weak coupling \cite{Bodeker:2007fw,Rebhan:2004ur}. As these acquire large occupation numbers, the CS approximation can be applied also in the context of the approach to thermal equilibrium in heavy-ion collisions.
\end{itemize}
A final point worth mentioning is that the classical regime with large occupation numbers does not imply that one particular classical realization (one member of the ensemble) is singled out. All observables must be computed as statistical expectation values over the whole classical ensemble of configurations, which is then expected to reproduce well the expectation values over the wave function (or density matrix) of the quantum system.
\subsection{Classical simulations of vacuum decay}
\label{sec:intitcond}
A quantum system at zero temperature in a local potential minimum (a ``false'' vacuum) may decay into a state in the global minimum (the ``true'' vacuum) through quantum mechanical tunneling. In the Euclidean formulation of quantum field theory the transition is described by an instanton \cite{colman1}, and from a path integral point of view, the transition is mediated by non-classical paths, paths that do not satisfy the classical equations of motion. The transition rate is straightforwardly computed in quantum mechanics, but is substantially harder to extract in quantum field theory.
In \cite{braden}, an approximate agreement was reported between the instanton computation of the transition rate in 1+1 space-time dimensions, and the CS evolution of a vacuum (``half'') initial state in the unstable vacuum. The authors were surprised and intrigued by their result, since tunneling is precisely the type of very quantum processes, where one would expect the CS approximation to fail. Indeed in quantum mechanics (field theory in 0+1 dimensions), the CS approximation does fail to reproduce the quantum tunneling rate \cite{Mou:2019gyl}.
Classical simulations of bubble nucleation are natural in the context of a finite-temperature phase transition, where the initial state is described by the finite temperature distribution of occupation numbers above the unstable vacuum. Then the transition is a classical effect whereby there is some (Boltzmann) probability that the ambient thermal fluctuations manage to spontaneously form a true-vacuum bubble, large enough to make it over the potential barrier and expand to eventually fill the whole of space (we will return to this point in more detail below).
It follows that the finite-temperature bubble nucleation rate can in principle be computed by classically evolving all field configurations starting in the local potential minimum, and then averaging them over the initial Boltzmann distribution, schematically
\begin{eqnarray}
\label{eq:clasrateaverage}
\Gamma_{\rm Finite\,T} = \int P_{\rm Boltzmann,T}[\textrm{configuration}]\times \textrm{transition rate of the configuration}.\nonumber\\
\end{eqnarray}
The result of \cite{braden} would suggest that the quantum tunneling rate follows from the same set of classical trajectories, but averaged over the quantum vacuum-like initial distribution
\begin{eqnarray}
\Gamma_{\rm Quantum} = \int P_\textrm{Vacuum, $\frac{1}{2}$}[\textrm{configuration}]\times \textrm{transition rate of the configuration}.\nonumber\\
\end{eqnarray}
This is a surprising result, and warrants further scrutiny. In particular, since classical evolution conserves energy, it would imply that quantum tunneling is simply the classical evolution of the subset of the initial condition ensemble, that have enough energy to nucleate a bubble.
In \cite{hertz}, the numerical computations of \cite{braden} were reproduced, although it was pointed out that to get the reported agreement between numerical and instanton results, a ``fudge'' factor $\epsilon$ had to be introduced. The agreement occurs for $\epsilon\simeq 1/2$, which amounts to rescaling the zero-point fluctuations from $\frac{1}{2}$ to $\frac{1}{8}$.
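In terms of an ensemble of mode amplitudes, the fudge factor simply rescales the Gaussian widths. A sketch of how such an ensemble is drawn and how the effective occupation is read back off (the mode frequencies here are arbitrary placeholders, not the lattice dispersion of the simulations):

```python
import numpy as np

def sample_modes(n_cfg, omega, eps=1.0, seed=0):
    """Complex Gaussian mode amplitudes with <|phi_k|^2> = eps^2 * (1/2)/omega_k,
    i.e. the vacuum-like 'half' ensemble with an overall amplitude factor eps."""
    rng = np.random.default_rng(seed)
    z = (rng.normal(size=(n_cfg, omega.size))
         + 1j * rng.normal(size=(n_cfg, omega.size))) / np.sqrt(2.0)  # <|z|^2> = 1
    return eps * np.sqrt(0.5 / omega) * z

def effective_n_plus_half(phik, omega):
    """Invert n_k + 1/2 = omega_k <|phi_k|^2> over the ensemble."""
    return omega * np.mean(np.abs(phik)**2, axis=0)
```

For $\epsilon=1/2$ the measured $n_{\bf k}+1/2$ comes out at $1/8$ in every mode, the rescaling quoted above.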
We will expand further on that analysis, and argue that the reported agreement is a coincidence to do with the choice of the parameters of the model, the lattice cut-off and the fudge factor, and that it is not specific to the ``half'' initial condition. We will also generalise the simulations to 2+1 dimensions, and show that there is no agreement there. We will see that there are some essential differences between nucleation in 1+1 and higher dimensions.
\section{Tunneling and Bubble Nucleation}
\label{sec:tunneling}
Consider a potential $V$ with two non-degenerate minima, with a barrier in-between. If the system is initially in the local minimum with highest energy, a transition may occur whereby the system moves to the global minimum with lowest energy (``false vacuum decay''). Energetically, it is very expensive for the field to move across the barrier in all of space simultaneously. Instead, one local region of space (a bubble) is created with the field in the global vacuum inside, in the local minimum outside, and with the field continuously interpolating between the two on the boundary (the wall).
\subsection{Classical Bubble Nucleation}
\label{sec:clastunneling}
Classical bubble nucleation is the process by which random classical fluctuations (for instance in equilibrium at a temperature $T$) by chance organise themselves into such a bubble. This happens all the time, but most bubbles are so small that they collapse again. The energy criterion controlling the process is the balance between the energy cost of creating the bubble wall, interpolating between vacua, and the energy gain from the inside of the bubble having a lower potential energy than when the bubble is not there. In the simplest approximation one may write
\begin{eqnarray}
E = \textrm{Surface}\times\sigma +\textrm{Volume}\times\Delta V ,
\end{eqnarray}
where $\sigma$ is the surface tension, the energy associated with the interpolating field wall, and $\Delta V$ is the difference in potential at the two minima $V_{\rm global}-V_{\rm local}$ (which is negative).
In 1+1 dimensions, the volume is the distance between walls, $2R$, while the surface is just a factor of 2 (2 walls),
\begin{eqnarray}
E_{1}= 2\sigma + 2R\Delta V.
\end{eqnarray}
In order for a transition to happen, a random fluctuation has to occur that creates a pair of walls. Once these walls are established, there is no further energy cost in increasing the size of the bubble. The total energy is linearly decreasing with increasing $R$. We define the critical energy and the critical radius to be
\begin{eqnarray}
E_{\rm crit,1 }= 2\sigma,\qquad R_{\rm crit} = 0\quad(\textrm{or the width of a wall}).
\end{eqnarray}
In 2+1 dimensions, things are qualitatively different. Now
\begin{eqnarray}
E_{2}= 2\pi R\sigma + \pi R^2\Delta V ,
\end{eqnarray}
which is maximised at the critical radius, giving the saddle point solution
\begin{eqnarray}
E_{\rm crit, 2}=-\frac{\pi \sigma^2}{\Delta V} ,\qquad R_{\rm crit, 2}=-\frac{\sigma}{\Delta V}.
\end{eqnarray}
In most cases, a random fluctuation does not acquire this critical radius, and the transition does not complete. The bubble shrinks again. But occasionally, a critical-size bubble is generated, which then continues to grow.
In 3+1 dimensions, we have
\begin{eqnarray}
E_{3}= 4\pi R^2\sigma + \frac{4\pi}{3} R^3\Delta V ,
\end{eqnarray}
so that
\begin{eqnarray}
E_{\rm crit, 3}=\frac{16\pi}{3}\frac{\sigma^3}{\Delta V^2} ,\qquad R_{\rm crit, 3}=-\frac{2\sigma}{\Delta V}.
\end{eqnarray}
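The critical radii and energies above follow from extremising $E(R)$, and can be cross-checked numerically. A quick sketch (thin-wall expressions only, with illustrative values of $\sigma$ and $\Delta V$; note $\Delta V<0$):

```python
import numpy as np

def bubble_energy(R, sigma, dV, d):
    """Thin-wall bubble energy in d spatial dimensions: surface*sigma + volume*dV."""
    if d == 2:
        return 2*np.pi*R*sigma + np.pi*R**2*dV
    if d == 3:
        return 4*np.pi*R**2*sigma + (4*np.pi/3)*R**3*dV
    raise ValueError("d must be 2 or 3")

def critical_bubble(sigma, dV, d):
    """Critical radius and energy from dE/dR = 0 (dV < 0 for a true-vacuum bubble)."""
    if d == 2:
        return -sigma/dV, -np.pi*sigma**2/dV
    if d == 3:
        return -2*sigma/dV, (16*np.pi/3)*sigma**3/dV**2
    raise ValueError("d must be 2 or 3")
```

Scanning $E(R)$ over a fine grid and locating its maximum reproduces the closed-form $R_{\rm crit}$ and $E_{\rm crit}$ in both dimensionalities.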
Throughout, we have assumed that the bubble is spherical, since this maximises the volume-to-area ratio. There will be subleading contributions from many other near-spherical configurations.
In thermal equilibrium, the bubble nucleation rate is then proportional to the Boltzmann probability of a large enough random fluctuation
\begin{eqnarray}
\frac{\Gamma}{V t}\propto e^{-E_{\rm crit}/T}.
\end{eqnarray}
Dividing by the volume $V$ (not to be confused with the potential) and $t$ normalises the rate to unit volume and time, respectively. A more detailed numerical analysis along the lines of (\ref{eq:clasrateaverage}) allows the direct computation of this quantity \cite{Moore:2000jw,Moore:2001vf}.
In a non-thermal environment, for instance a state with some non-thermal occupation numbers $n_{\bf k}$, the probability of creating such bubbles will depend on the state. As for the thermal equilibrium state, it may require that random multi-wavelength fluctuations manage to organise themselves into a large enough bubble configuration. But one could also imagine a state with only long-wavelength fluctuations (say, of size $R_{\rm crit}$), in which critical-size bubbles are ubiquitous.
There is also the possibility that the state (whether thermal or not), simply has an energy density (much) larger than the height of the potential barrier. Then the system hardly notices, that the minima are separated, and will not need to minimise energy into a spherical bubble to perform the transition. In this case, transitions are common and fast. If the energy density is larger than $|\Delta V|$, one may also get transitions back again.
Finally, there is the possibility that the entire physical volume has too little energy to even make a single critical bubble. This is only a practical issue in a numerical simulation of a finite volume, and hence finite total energy. Then a transition will never happen, if the dynamics are classical and energy conserving.
\subsection{Quantum Bubble Nucleation}
\label{sec:quantunneling}
Quantum tunneling is most apparent in situations where a barrier separates two local minima of the potential, and the energy of the state is smaller than the height of the barrier. In quantum mechanics (field theory in 0+1 dimensions), starting in one minimum, one may straightforwardly solve for the wavefunction of the system, giving a non-zero probability of finding the particle inside, and on the other side of the barrier. In time, there is an ever increasing probability for the particle to be measured in the other minimum. In the case when the second minimum is in fact the global minimum, we speak of vacuum decay.
In field theory, the analogous process can also be interpreted in terms of Euclidean instanton paths, famously in \cite{colman1}. This instanton is a 4-D spherically symmetric saddle point of the Euclidean action. One may again write down
\begin{eqnarray}
S_{\rm crit, 4}=\frac{27 \pi^2}{2}\frac{\sigma^4}{|\Delta V|^3} ,\qquad R_{\rm crit, 4}=-\frac{3\sigma}{\Delta V}.
\end{eqnarray}
To a good approximation, the rate of tunneling may then be written as
\begin{eqnarray}
\frac{\Gamma}{Vt} \propto e^{-S_{\rm crit,4}},
\end{eqnarray}
but keeping in mind that this is the saddle point action rather than an energy, and that no temperature is involved.
\subsection{The wall tension $\sigma$}
\label{sec:wealltension}
Whereas $\Delta V$ is simply the difference between potential minima, computing the wall tension $\sigma$ in the general case requires knowledge of the wall profile. For classical nucleation, an approximation is found by solving the (spherically symmetric) equation of motion for a static field profile interpolating between the two minima:
\begin{eqnarray}
\label{eq:bounce}
\partial_t\phi=0\rightarrow \Big(\partial_r^2+\frac{(d-1)}{r}\partial_r \Big)\phi =\frac{dV}{d\phi},
\end{eqnarray}
with the boundary conditions, $\phi(r=\infty)=\phi_{\rm local}$, $\partial_r\phi(0)=0$, $\phi(0)=\phi_{\rm global}$.
For $d=1$, this looks like time evolution in the potential $-V$, and is usually solved by numerical
means (shooting) \cite{CosmoT}. Then one may compute the wall tension as
\begin{eqnarray}
R^{d-1}\sigma = \int_0^{\infty} dr\, r^{d-1}\Big[\frac{1}{2}(\partial_r\phi)^2+V(\phi)\Big].
\label{eq:anatension}
\end{eqnarray}
In the limit when the wall is much thinner than the size of the bubble, the term
$(d-1)/r$ may be neglected. Then it is not necessary to know
the detailed shape of the wall, as one may rewrite (\ref{eq:anatension}) into
\begin{eqnarray}
\sigma = \int_{\phi_{\rm local}}^{\phi_{\rm global}} \sqrt{2V(\phi)}d\phi
\end{eqnarray}
which is easily computed, at least numerically.
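As an illustration of the thin-wall formula, the integral can be evaluated by simple quadrature. The following sketch (not the code used for the simulations) uses a hypothetical quartic double well with exactly degenerate minima, for which the integral is also known in closed form:

```python
import numpy as np

# Thin-wall tension sigma = int sqrt(2 V) dphi, by trapezoidal quadrature,
# for a hypothetical quartic double well V = (lam/4)(phi^2 - v^2)^2 with
# degenerate minima at phi = -v, +v (so V vanishes at both endpoints).
lam, v = 1.0, 1.0

def V(phi):
    return 0.25 * lam * (phi**2 - v**2) ** 2

phi = np.linspace(-v, v, 200001)
f = np.sqrt(2.0 * V(phi))
# composite trapezoidal rule between the two minima
sigma = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi))

# closed form for this potential: sigma = (2 sqrt(2)/3) sqrt(lam) v^3
sigma_exact = 2.0 * np.sqrt(2.0) / 3.0 * np.sqrt(lam) * v**3
```

For the near-degenerate potentials relevant to the thin-wall regime, the same quadrature applies with the minima of the degenerate part of the potential as integration limits.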
For the 4-dimensional instanton we must first rotate to Euclidean space; the saddle point equation in $d+1$ dimensions becomes
\begin{eqnarray}
(\partial_\tau^2+\partial_{\bf x}^2)\phi=\frac{\partial V}{\partial \phi},
\end{eqnarray}
which in 4-dimensional spherical coordinates is equivalent to Eq. (\ref{eq:bounce}), in one dimension higher. Hence for a thin wall, the calculation of the wall tension proceeds in exactly the same way. This does not directly imply a relation between the tunneling rates, since $E_{\rm crit}$ and $S_E$ are very different objects.
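Explicitly, for an $O(d+1)$-symmetric profile $\phi(\rho)$ with $\rho^2=\tau^2+{\bf x}^2$, the saddle point equation takes the radial form
\begin{eqnarray}
\Big(\partial_\rho^2+\frac{d}{\rho}\partial_\rho\Big)\phi=\frac{dV}{d\phi},
\end{eqnarray}
which is (\ref{eq:bounce}) with the replacement $d\rightarrow d+1$.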
\subsection{A convenient toy model potential}
\label{sec:potential}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/Potential_phi0dependence.png}
\caption{The potential in (\ref{eq:action}) for different values of $\phi_0$.}
\label{fig:Potential}
\end{figure}
Following \cite{braden} we will focus on a specific potential, defined by the action
\begin{eqnarray}
S = \int dx^{d+1} \left[\frac{1}{2} \partial_{\mu} \phi \partial^{\mu} \phi - V_0 \Big( -\cos \bigg( \frac{\phi}{\phi_0} \bigg) + \frac{\lambda^2}{2} \sin^2 \bigg( \frac{\phi}{\phi_0} \bigg) -1 \Big)\right].
\label{eq:action}
\end{eqnarray}
It is parameterized by three quantities, $\lambda$, $\phi_0$ and $V_0$.
For $\lambda>1$ the periodic potential has global and local minima at $\phi=2n \pi\phi_0$ and $\phi=(2n+1)\pi\phi_0$, respectively, with integer $n$. The potential is chosen to have $V(\phi_{\rm local})=0$ and $\Delta V=-2V_0$, and we define the masses
\begin{eqnarray}
m_{f}^2 &=& \frac{d^2V}{d\phi^2} \Big|_{\phi=\phi_{\rm local}} = \frac{V_0}{\phi_0^2} (-1 + \lambda^2),\\
m_{t}^2 &=& \frac{d^2V}{d\phi^2} \Big|_{\phi=\phi_{\rm global}} = \frac{V_0}{\phi_0^2} (1 + \lambda^2).
\end{eqnarray}
The height of the potential barrier separating the two minima is given by
\begin{eqnarray}
V_{\rm max} = m_f^2 \phi_0^2 \Big( \frac{-1 + \lambda^2}{ 2 \lambda^2} \Big).
\end{eqnarray}
We will follow \cite{braden} and set $\lambda=1.2$. The potential is therefore parametrized by $m_f$ and $\phi_0$. In this parametrization $\phi_0$ fixes the location of the local vacuum but also influences the relative height of the potential barrier. We show in Fig.~\ref{fig:Potential} the potential for example sets of parameters. We will compute the bubble nucleation rate primarily as a function of $\phi_0$, and from the potential alone, we expect the rate to decrease with increasing $\phi_0$.
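As a cross-check of these expressions, the vacuum energies, curvature masses and barrier height can be verified numerically. The following stand-alone sketch uses illustrative parameter values (not those of any particular figure):

```python
import numpy as np

# Toy-model potential of the action above, with illustrative parameters.
lam, phi0, V0 = 1.2, 1.0, 1.0

def V(phi):
    return V0 * (-np.cos(phi / phi0) + 0.5 * lam**2 * np.sin(phi / phi0) ** 2 - 1.0)

# curvature V''(phi) from a central finite difference
h = 1e-4
def d2V(phi):
    return (V(phi + h) - 2.0 * V(phi) + V(phi - h)) / h**2

# the masses and barrier height quoted in the text
mf2 = V0 / phi0**2 * (lam**2 - 1.0)   # m_f^2 at the local (false) minimum
mt2 = V0 / phi0**2 * (lam**2 + 1.0)   # m_t^2 at the global (true) minimum
Vmax_ana = mf2 * phi0**2 * (lam**2 - 1.0) / (2.0 * lam**2)

# barrier height from a direct scan between the two minima
phis = np.linspace(0.0, np.pi * phi0, 100001)
Vmax_num = V(phis).max()
```

The checks confirm $V(\pi\phi_0)=0$, $V(0)=-2V_0$, and the quoted expressions for $m_f^2$, $m_t^2$ and $V_{\rm max}$.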
\subsection{Numerical implementation}
\label{sec:numerical_2}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/Potential_phiDistribution_Nx1000000_sdim1_M30.png}
\caption{The potential for $\phi_0=1$ and $\phi_0=1.5$ for $am_f=0.8$.
The superimposed histograms show the initial distribution of $\phi(x)$ which depends on both the fudge factor $\epsilon$ and the mass $am_f$.}
\label{fig:Potdist}
\end{figure}
We discretize the action on a space-time lattice, and solve the classical equation of motion,
\begin{align}
\dot{\phi} &= \pi, \\
\dot{\pi} &= \nabla^2 \phi - V^{\prime}(\phi).
\end{align}
A symplectic integrator scheme is used to ensure energy conservation for long simulation times.
The lattice has periodic boundary conditions and the number of lattice sites per dimension and the spacing are denoted as $N_x$ and $a$, giving the linear lattice size $L=N_xa$. We recast the lattice action in lattice units, whereby all dimensionful quantities appear in dimensionless versions by means of the lattice spacing, as $am_f$, $a^{d+1}V_0$, $a^2k^2$ and so on. Consequently, the dispersion relation on the lattice is determined by the discretized Laplacian and given by
\begin{eqnarray}
a^2\omega_k^2 = k_L^2 + a^2m_f^2, \qquad k_L^2 = \sum_{i=1}^d \big(2 - 2 \cos(k_i)\big),
\end{eqnarray}
where for each spatial dimension $i$, $k_i=n_i \frac{2 \pi}{N_x}$ for $n_i =-N_x/2+1, ...,N_x/2$.
The quantity $am_f$ then defines the lattice cut-off, since if the maximum momentum is $a\Lambda\simeq \pi $ then the cut-off in physical units is $\Lambda/m_f=\frac{\pi}{am_f}$. As $am_f$ decreases, the cut-off increases. We will in the following only explicitly write out powers of $a$ when needed.
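In lattice units, one time step of a kick-drift-kick (leapfrog) update can be sketched as follows. This is a simplified stand-in for the production code, with illustrative parameters; the symplectic nature of the scheme keeps the energy error bounded over long runs:

```python
import numpy as np

# Leapfrog evolution of phi-dot = pi, pi-dot = laplacian(phi) - V'(phi)
# on a periodic 1-d lattice, lattice units a = 1. Illustrative parameters.
Nx, dt, steps = 256, 0.05, 2000
lam, phi0, mf = 1.2, 1.0, 0.3
V0 = mf**2 * phi0**2 / (lam**2 - 1.0)

def dV(phi):
    # V'(phi) for the toy-model potential
    return (V0 / phi0) * np.sin(phi / phi0) * (1.0 + lam**2 * np.cos(phi / phi0))

def force(phi):
    lap = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi
    return lap - dV(phi)

def energy(phi, pi):
    grad = phi - np.roll(phi, 1)
    pot = V0 * (-np.cos(phi / phi0) + 0.5 * lam**2 * np.sin(phi / phi0) ** 2 - 1.0)
    return np.sum(0.5 * pi**2 + 0.5 * grad**2 + pot)

rng = np.random.default_rng(1)
phi = np.pi * phi0 + 0.01 * rng.standard_normal(Nx)  # small fluctuations around the false vacuum
pi = np.zeros(Nx)

E0 = energy(phi, pi)
pi += 0.5 * dt * force(phi)            # initial half kick
for _ in range(steps):
    phi += dt * pi                     # drift
    pi += dt * force(phi)              # kick
pi -= 0.5 * dt * force(phi)            # undo half kick to synchronise pi with phi
drift = abs(energy(phi, pi) - E0) / abs(E0)
```

For the linearised dynamics, stability requires $\omega_{\rm max}\,dt < 2$, which is comfortably satisfied for the step sizes used below.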
The quantum-like initial conditions are Gaussian distributed field fluctuations defined by
\begin{eqnarray}
\label{eq:fluct}
\langle \phi_{\bf k} \phi_{{\bf k}^{\prime}} \rangle = \epsilon^2 \frac{1}{2 \omega_k} \delta^d _{{\bf k} - {\bf k}^{\prime}}, \qquad
\langle \pi_{\bf k} \pi_{{\bf k}^{\prime}} \rangle = \epsilon^2 \frac{\omega_k}{2} \delta^d _{{\bf k} - {\bf k}^{\prime}}.
\end{eqnarray}
These vacuum fluctuations are added to a homogeneous field placed initially at the local minimum
$\phi(x)= \pi \phi_0$.
The ``fudge factor'' $\epsilon$ was introduced by \cite{hertz} to parametrically fit
tunneling rates to instanton results. $\epsilon =1 $ is the physical value that mimics a quantum vacuum state, whereas other values have no obvious physical interpretation. As we will see, and consistent with \cite{hertz}, the apparent agreement between CS results and the instanton rate arises for $\epsilon\simeq 0.5$.
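The initialization can be sketched in a simplified stand-alone form (not the production code): every momentum mode receives an independent complex Gaussian amplitude with the variances of (\ref{eq:fluct}), and one can verify that the resulting occupation numbers $n_{\bf k}+1/2$ come out as $\epsilon^2/2$:

```python
import numpy as np

# Gaussian quantum-like initial conditions in 1+1 dimensions, lattice units
# a = 1. Illustrative parameters; small Nx so the ensemble test is cheap.
Nx, mf, eps = 32, 1.0, 0.5

k = 2.0 * np.pi * np.arange(Nx) / Nx
w = np.sqrt(2.0 - 2.0 * np.cos(k) + mf**2)   # lattice dispersion relation

def draw_fields(rng):
    # real fields built from independent complex Gaussian mode amplitudes
    x = np.arange(Nx)
    modes = np.exp(1j * np.outer(x, k)) / np.sqrt(Nx)
    c_phi = rng.standard_normal(Nx) + 1j * rng.standard_normal(Nx)
    c_pi = rng.standard_normal(Nx) + 1j * rng.standard_normal(Nx)
    phi = np.real(modes @ (eps * np.sqrt(1.0 / (2.0 * w)) * c_phi))
    pi = np.real(modes @ (eps * np.sqrt(w / 2.0) * c_pi))
    return phi, pi

# ensemble averages of the mode variances give n_k + 1/2 = eps^2/2
rng = np.random.default_rng(2)
P_phi, P_pi, n_conf = np.zeros(Nx), np.zeros(Nx), 4000
for _ in range(n_conf):
    phi, pi = draw_fields(rng)
    P_phi += np.abs(np.fft.fft(phi)) ** 2 / Nx
    P_pi += np.abs(np.fft.fft(pi)) ** 2 / Nx
P_phi /= n_conf
P_pi /= n_conf
n_plus_half = np.sqrt(P_phi * P_pi)   # -> eps^2/2 for all k
```

In the simulations, these fluctuations are added on top of the homogeneous false-vacuum value $\pi\phi_0$.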
\begin{figure}[!hb]
\centering
\includegraphics[width=8cm]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0100_M6_Mod1_Fud100_vev1_binning29_F1_turateHerz_fit.png}
\caption{Example in 1+1 dimensions of the time dependence $N_{\rm surv}(t)$ and a fit to the form $N_0 e^{-t \Gamma}$. The plot shows the exponential behaviour starting from roughly 60 percent of the total number of configurations, which was $N=100$.}
\label{fig:1+1fitexample}
\end{figure}
In preparation of the later discussion, it is instructive to generate a single initial condition $\phi(x)$, and simply compute the distribution of local field values. Fig.~\ref{fig:Potdist} shows a histogram superimposed on the potential. For a fudge factor of $\epsilon=0.5$, we see that the entire field configuration is inside the false vacuum initially. But for $\epsilon=1$, already at the initial time, the field is on the other side of the potential barrier in some small parts of space.
Following \cite{braden,hertz}, as the simulation proceeds, we monitor the observable $\langle\cos(\phi/\phi_0)\rangle$, where $\langle . \rangle$ refers to the average over the lattice volume of a configuration, to define whether a configuration has transitioned to one of the neighboring global minima.
For homogeneous configurations at the local/global minima this observable takes the value $-1$ or $+1$.
A configuration is then said to have transitioned if
\begin{eqnarray}
\langle\cos(\phi/\phi_0)\rangle > \langle\cos(\phi/\phi_0)\rangle_{t=0} + 10 \Delta_{t=0} ,
\end{eqnarray}
where $\Delta_{t=0}$ is the standard deviation of the same observable $\cos(\phi/\phi_0)$ at the initial time.
Given an ensemble of $N$ configurations, we define $N_{\rm surv}(t)$ to be the number of these configurations that by a given time $t$ have not yet transitioned. We then perform a fit to the form
\begin{eqnarray}
N_{\rm surv}(t) = N_0 e^{-\Gamma t},
\end{eqnarray}
where $N_0$ refers to the value at the starting time of the fit. Then $\Gamma$ is the bubble nucleation rate.
Typically it takes some time before the configurations begin to transition. The fit was therefore performed from the time when $N_{\rm surv}(t)$ had dropped to 60 percent of the total number of configurations in the simulation. An example is shown in Fig. \ref{fig:1+1fitexample}.
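The extraction of $\Gamma$ can be sketched as follows, here exercised on synthetic, exponentially distributed transition times with a known rate (purely hypothetical data, to illustrate the fitting procedure):

```python
import numpy as np

def nucleation_rate(t_transition, t_grid, start_frac=0.6, stop_frac=0.05):
    """Fit N_surv(t) = N0 exp(-Gamma t) by log-linear least squares.
    The window starts once the surviving fraction has dropped to
    start_frac (as in the text); stop_frac cuts off the noisy tail."""
    t_transition = np.asarray(t_transition)
    N = len(t_transition)
    N_surv = np.array([(t_transition > t).sum() for t in t_grid])
    mask = (N_surv <= start_frac * N) & (N_surv >= stop_frac * N)
    slope, _ = np.polyfit(t_grid[mask], np.log(N_surv[mask]), 1)
    return -slope

# synthetic check with a known decay rate
rng = np.random.default_rng(0)
gamma_true = 0.25
times = rng.exponential(1.0 / gamma_true, size=50000)
t_grid = np.linspace(0.0, 20.0, 400)
gamma_fit = nucleation_rate(times, t_grid)
```

In the actual analysis, $t_{\rm transition}$ is the first time each configuration satisfies the threshold criterion above.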
\section{The rate in 1+1 dimensions.}
\label{sec:1D}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F1DRun_turateHerzM6_60.png}
\caption{The nucleation rate from the instanton model and the simulations with $\epsilon = 0.5$ and $\epsilon = 1$.
The lattice simulation has $N_x=512$, $am_f=0.06$ and integration step size $dt=0.05$.}
\label{fig:1+1results}
\end{figure}
We first consider the system in 1+1 dimensions, exactly as in \cite{braden,hertz}. We compute the nucleation rate for different values of $\phi_0$ and for different values of the fudge factor, shown in Fig. \ref{fig:1+1results}. The number of configurations is $N=100$, which was sufficient to get convincing results.
The instanton estimate of the quantum tunneling rate is obtained via the expression \cite{hertz}
\begin{eqnarray}
\frac{\Gamma}{L} = 2 m_f^2 \Big( \frac{S_B}{2 \pi} \Big) e^{-S_B},
\end{eqnarray}
where $S_B$ is the bounce action computed with the tool CosmoTransitions \cite{CosmoT}.
Making allowance for possible small differences in fitting procedure and numerical implementation, this reproduces the results of \cite{braden} and \cite{hertz}, which may be summarized as follows: In 1+1 dimensions, CS simulations of a quantum vacuum-like initial ensemble produce a bubble nucleation rate of a similar order of magnitude as the quantum instanton result, at least for $\phi_0\leq 1.25$. However, this agreement is achieved from a quantum-like initial condition not with occupation numbers of 1/2, but instead 1/8, hence a fudge factor of 1/2 \cite{hertz}. In fact, tuning the fudge factor down from 1, one may achieve different levels of agreement with the instanton result at different values of $\phi_0$. The ``half'' initial condition (fudge factor 1) overestimates the quantum nucleation rate by several orders of magnitude, and has a weak dependence on the shape of the potential, $\phi_0$.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F1DRun_turateHerzM6_60_thermal.png}
\caption{The thermal nucleation rate for different initial temperatures, compared to the instanton rate and the quantum-$\epsilon=0.5$ result. The cut-off is $am_f=0.3$.}
\label{fig:1+1thermal}
\end{figure}
In Fig. \ref{fig:1+1thermal} we consider another choice of initial condition, namely the classical equilibrium mentioned above
\begin{eqnarray}
n_{\bf k}+\frac{1}{2}\rightarrow \frac{T}{\omega_k}.
\end{eqnarray}
While the quantum vacuum has constant occupation number for all modes, in the classical equilibrium they are suppressed in the UV. The energy density is however still divergent as the cut-off $am_f$ goes to zero. We perform the same simulation procedure as previously, but now for different values of the parameter $T$. We see that just as we did for the fudge factor $\epsilon$, we may also tune $T$ to a semi-quantitative agreement with the instanton nucleation rate, in this case $T=0.1 m_f$.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/F1Drun_cutoffstudy_60_tunnelingrate_new.png}
\caption{The dependence of the nucleation rate on the lattice cut-off in 1+1 dimensions, for quantum-$\epsilon=0.5$ initial conditions and comparing to the instanton rate.}
\label{fig:1+1cutoff}
\end{figure}
Since the initial conditions correspond to a divergent energy density in the continuum, it is also prudent to test the robustness of our results to changing the cut-off, in our parametrization the quantity $am_f$. The result of this procedure is shown in Fig. \ref{fig:1+1cutoff} for $\epsilon=0.5$. We see that while giving a weaker effect than varying $\epsilon$, changing the cut-off is an alternative way of tuning the rate to match the instanton rate. As might be expected, smaller $am_f$, corresponding to a larger cut-off and more energy in the system, leads to a larger nucleation rate.
We may consider introducing a mass counterterm (or more generally, renormalise the potential \cite{braden2}) to counter the effect of the divergent initial condition. But because the zero point fluctuations do not stay put in classical simulations, this is difficult to achieve (see for instance \cite{Arrizabalaga:2004iw}), and does not in itself solve the problem of a divergent energy being available for tunneling. The particular potential considered here is also not readily renormalisable.
\subsection{Looking for bubbles and energy considerations in 1+1 dimensions}
\label{sec:bubblesenergy}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_cos.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F2_turateHerz_Runs1_100_28T4_8M_cos.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerzbubbles_config.png}
\includegraphics[width=0.48\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F2_turateHerzbubbles_config.png}
\caption{The time evolution of $\langle\cos(\phi/\phi_0)\rangle$ (top) for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right). Below are examples of configurations of one of the simulations.
Other simulation parameters are $N_x = 512$, $dt=0.05$, $\phi_0=1.2$, $am_f=0.3$. Each ensemble consists of 100 individual configurations.}
\label{fig:bubbleconfig}
\end{figure}
The nucleation rate for $\epsilon=1$ has a weak dependence on $\phi_0$, and similarly for $\epsilon=0.5$ for small $\phi_0$. We can begin to understand this at least qualitatively by considering the energy density of the configurations.
The top left panel of Fig.~\ref{fig:bubbleconfig} shows the time-evolution of the
observable $\langle\cos(\phi/\phi_0)\rangle$ for individual configurations for $\epsilon=1$ at $\phi_0=1.2$.
The bottom left panel is one
field configuration in space at different times, labelled by the value of $\langle\cos(\phi/\phi_0)\rangle$ at that time. We see that all configurations transition through the threshold value $\simeq -0.6$ almost immediately, and that the field configurations have many nuclei and bubbles. This is an example of an initial condition with an energy density $\rho$ larger than the potential barrier $V_{\rm max}$. There is no need for the configuration to randomly organise itself into a critical bubble for the transition to take place. In contrast, the right-hand panels of Fig.~\ref{fig:bubbleconfig} show a simulation at $\epsilon=0.5$ and $\phi_0=1.2$. Here, the transitions happen as an exponential decay. Also, field configurations evolve over time from a single, initially small, bubble (light blue) to a larger and larger bubble.
To make this more explicit, we compute the total energy and average energy density of the configurations.
Fig.~\ref{fig:1+1density} shows the dependence of the average energy density on $\phi_0$ and $\epsilon$.
The grey shaded region is where the average energy density
is smaller than the potential barrier, while above, the energy density is larger than the barrier. Roughly speaking, one would expect the rate of nucleation to be exponentially suppressed only in the grey region, as a critical bubble needs to emerge through a stochastic process. And one would expect the rate to be unsuppressed everywhere else. Of course, individual field configurations are inhomogeneous, multiple nuclei complicate the picture, and some out-of-equilibrium initial states may have special properties enhancing nucleation. And so the boundaries of the grey region should be considered fuzzy.
We see that for $\epsilon=1$, we only enter the grey region far beyond the range of the figure. And that for $\epsilon=0.5$, we enter the region around $\phi_0=1.05$, corresponding roughly to where the exponential dependence on $\phi_0$ kicks in in Fig. \ref{fig:1+1results}.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/Energydensity_Nx128_sdim1_Configs1000_final.png}
\caption{The dependence of energy density on $\phi_0$ and $\epsilon$. We also see a small dependence on the cut-off $am_f$.}
\label{fig:1+1density}
\end{figure}
Since the energy density is approximately $\propto \epsilon^2$ and the potential barrier $\propto \phi_0^2$, the criterion for entering the grey region $\rho=V_{\rm max}$ amounts to $\phi_0\propto \epsilon$. The proportionality constant in the case depicted here happens to be $\simeq 2.05$, and so the rate for the $\epsilon=1$ initial condition of physical relevance only becomes exponentially suppressed around $\phi_0=2.1$, where the instanton rate is $ \frac{\Gamma}{L} \frac{\phi_0^2}{V_0} = 4.7 \times 10^{-9}$.
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/F1DRun_turateHerzM6_60_extended.png}
\caption{Tunneling rate in 1+1 dimension for larger values of $\phi_0$. Simulations parameters are $N_x=512$, $am_f=0.06$.}
\label{fig:1+1rateextended}
\end{figure}
In Fig. \ref{fig:1+1rateextended} we have extended the range of Fig. \ref{fig:1+1results} to include the exponentially suppressed region for $\epsilon=1$. We see again that the CS approximation overestimates the nucleation rate by several orders of magnitude.
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi050_M30_Mod1_Fud17_vev1_binning29_F2_singlebubble_bubbles_BubbleEnergyvsRad.png}
\caption{The energy of two bubble walls and the bubble interior versus $\langle \cos \frac{\phi}{\phi_0} \rangle$. The simulation parameters are $N_x=512$, $\phi_0=0.5$ and $\epsilon=0.17$.}
\label{fig:Ebubb}
\end{figure}
We can attempt to compute the wall tension in 1+1 dimensions by further developing the naive model of Sec.~\ref{sec:clastunneling}. We note that if the field is really in the global minimum inside the bubble ($\cos(\phi/\phi_0)=1$) and in the local minimum outside ($\cos(\phi/\phi_0)=-1$), then for a single configuration
\begin{eqnarray}
\langle \cos \frac{\phi}{\phi_0} \rangle = \frac{4R-N_x}{N_x},
\end{eqnarray}
where $2R$ is the wall separation, and $R$ hence the radius of the bubble. We now compute numerically the energy of bubbles, where we by hand force the interior and exterior to be in the minima. Then
\begin{eqnarray}
E_{\rm Bubble} = 2\sigma + 2 R\Delta V = a + b \Big\langle \cos \frac{\phi}{\phi_0} \Big\rangle.
\end{eqnarray}
We fit the parameters $a$ and $b$ for each critical bubble and relate to $\sigma $ and $\Delta V$ via
\begin{eqnarray}
2\sigma = a-b; \quad \Delta V = \frac{2b}{N_x}.
\end{eqnarray}
In this way, an estimate for $\sigma$ can be obtained by extrapolating $E_{\rm Bubble}$ to $\langle \cos \frac{\phi}{\phi_0} \rangle = -1$, where the bubble has shrunk to zero size and only the energy of the two walls remains. Fig. \ref{fig:Ebubb} shows an example fit to a critical bubble obtained with $\phi_0=0.5$, $\epsilon=0.17$.
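This construction is easy to mimic: the sketch below builds idealized 1-d bubbles with tanh-shaped walls at varying separation (an illustrative stand-in for the measured configurations, with hypothetical parameter choices), computes their lattice energy, and recovers $\Delta V=-2V_0$ from the slope of the linear fit:

```python
import numpy as np

# Idealized 1-d bubbles: interior forced to the global minimum (phi = 0),
# exterior to the local one (phi = pi*phi0), with tanh walls of width wall_w.
# Lattice units a = 1; parameters chosen for illustration only.
Nx, lam, phi0, mf = 256, 1.2, 0.5, 0.3
V0 = mf**2 * phi0**2 / (lam**2 - 1.0)

def V(phi):
    return V0 * (-np.cos(phi / phi0) + 0.5 * lam**2 * np.sin(phi / phi0) ** 2 - 1.0)

def bubble(R, wall_w=5.0):
    x = np.arange(Nx)
    bump = 0.5 * (np.tanh((x - Nx / 2 + R) / wall_w)
                  - np.tanh((x - Nx / 2 - R) / wall_w))
    return np.pi * phi0 * (1.0 - bump)   # phi = 0 inside, pi*phi0 outside

def energy(phi):
    grad = phi - np.roll(phi, 1)
    return np.sum(0.5 * grad**2 + V(phi))

# energies and volume-averaged cos(phi/phi0) for a range of bubble radii
radii = np.arange(25, 65, 5)
cosavg = np.array([np.mean(np.cos(bubble(R) / phi0)) for R in radii])
E = np.array([energy(bubble(R)) for R in radii])

b, a = np.polyfit(cosavg, E, 1)     # E = a + b <cos(phi/phi0)>
DeltaV_fit = 2.0 * b / Nx           # should reproduce Delta V = -2 V0
```

Since the wall contribution is independent of $R$ (as long as the walls do not overlap), the linear relation is essentially exact.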
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={$\phi_0^2$},
ylabel={$\frac{E_{bubble}}{m_{f}}$},
xmin=0.25, xmax=2.25,
ymin=0, ymax=10,
legend pos=north west,
ymajorgrids=true,
grid style=dashed,
]
\addplot[
color=blue,
mark=square,
]
coordinates {
(0.25,1.3)(0.49,1.85)(1,3.86)(1.44,6.33)(1.96,9.22)(2.25,9.51)
};
\end{axis}
\end{tikzpicture}
\caption{The critical bubble energy in 1+1 dimensions for different values of $\phi_0$.}
\label{fig:Ecrit}
\end{figure}
Fig.~\ref{fig:Ecrit} shows the corresponding energy of the critical bubble, $2\sigma$, for different values of $\phi_0$. This then is the minimal energy required for a configuration to classically transition to the global minimum.
If the volume, cut-off, and $\epsilon$ are such that the total energy is smaller than this value, the evolution of this interacting scalar field, initially in an out-of-equilibrium state, will eventually drive the system to the classical equilibrium state in the local minimum.
\subsection{Energy depletion and thermalization in 1+1 dimensions}
\label{sec:thermalisation}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0110_M6_Mod1_Fud50_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_noeps_Time_n_range.png}
\caption{The evolution of the occupation numbers in 1+1 dimensions, $\epsilon=0.5$, $\phi_0=1.5$, $am_f=0.06$.}
\label{fig:1+1particlenumbers1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx512_sdim1_dt50_init0_ON1_L120_phi0150_M30_Mod1_Fud17_vev1_binning29_F1_thermalquestionmark_Runs1_50_28T4_8M_noeps_Time_n_range.png}
\caption{The evolution of the occupation numbers in 1+1 dimensions,
$\epsilon=0.17$, $\phi_0=1.5$, $am_f=0.06$.}
\label{fig:1+1particlenumbers2}
\end{figure}
Since the occupation numbers of the modes play an important role in the CS approximation, we also compute these through
\begin{eqnarray}
n_{\bf k}+1/2= \sqrt{\langle\pi_{\bf k}^\dagger\pi_{\bf k}\rangle\langle\phi_{\bf k}^\dagger\phi_{\bf k}\rangle}.
\label{eq:nkdef}
\end{eqnarray}
In Fig. \ref{fig:1+1particlenumbers1} we show the occupation numbers for a set of modes in time\footnote{The modes are collected in finite bins with several modes in each, enumerated by their central $k_L$-value.}. We see that initially, the occupation numbers (\ref{eq:nkdef}) are indeed $\epsilon^2/2$, and as the nucleation is triggered (within a time of a few in mass units), they increase as potential energy is converted into excitations. The energy is mostly deposited in IR modes.
As discussed in the preceding section, we can engineer an initial configuration with total energy less than $E_{\rm crit}$, which will never transition. An example of this is shown in Fig. \ref{fig:1+1particlenumbers2}. For very long times, we expect the particle numbers to slowly reorganise themselves into a classical thermal spectrum. Clearly, in 1+1 dimensions, this takes an extremely long time, longer than we are able to simulate. This also implies that the nucleation rates that we have computed so far indeed arise from the quantum-like initial state. It is not the case that the initial state first thermalises to the equilibrium, after which the nucleation takes place. We will return to this point when considering 2+1 dimensional simulations.
\section{Generalising to 2+1 dimensions.}
\label{sec:2+1D}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F2DRun_turateHerz_60.png}
\caption{The nucleation rate for different values of $\phi_0$ and $\epsilon$.
We used $N_x^{2} = 512^2$, except for $\epsilon=0.5$ where $N_x^{2} = 128^2$ was used. We take
$am_f = 0.3$ and $dt=0.05$.}
\label{fig:1+2rate}
\end{figure}
We now perform 2+1 dimensional simulations completely analogously to the 1+1 dimensional case.
We discretize
the 2+1 dimensional action on a square lattice of size $N_x^2$. The scale is still set by the mass $am_f$, and the other dimensionless combinations are now $a^3V_0$ and $a^{1/2}\phi_0$.
The instanton prediction is obtained by generalizing the 1+1 dimensional case, and is given by
\begin{eqnarray}
\frac{\Gamma}{L^2} = 2 m_f^3 \Big( \frac{S_B}{2 \pi} \Big)^{3/2} e^{-S_B},
\end{eqnarray}
where the bounce action $S_B$ is again obtained from CosmoTransitions \cite{CosmoT}.
In Fig.~\ref{fig:1+2rate} we show the transition rates for different values
of $\phi_0$ for lattice simulations with fudge factors $\epsilon=1$,
$\epsilon=0.8$, $\epsilon=0.5$, respectively, and again comparing to the instanton prediction.
It is clear that the CS rate, even when applying a fudge factor, vastly overestimates the quantum tunneling rate. Quantum bubble nucleation and false vacuum decay cannot be modelled in this way. Simulations for large values of $\phi_0$ and $\epsilon=0.5$ did not transition to a global minimum at all.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{img/F2DRun_cutoffstudy_60.png}
\caption{The cut-off dependence of the nucleation rate, again for different values of $\phi_0$.}
\label{fig:1+2ratecutoff}
\end{figure}
Fig.~\ref{fig:1+2ratecutoff} shows the nucleation rate for different values of the cut-off $am_f$ for $\epsilon=0.5$. As in 1+1 dimensions, a higher cut-off (low $am_f$) results in higher rates, but the dependence is much stronger in 2+1 dimensions than it was in 1+1 dimensions. In two spatial dimensions, the number of UV modes grows much faster as the cut-off increases, and the initial energy density then also increases faster.
\subsection{Looking for bubbles and energy considerations in 2+1 dimensions}
\label{sec:2+1bubblesenergy}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_modepsilon.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud80_vev1_binning29_F1_turateHerz_Runs1_100_28T4_8M_longer_modepsilon.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud100_vev1_binning29_F2_turateHerzbubbles.png}
\includegraphics[width=0.45\textwidth]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud80_vev1_binning29_F2_turateHerznew.png}
\caption{The time evolution of $\langle\cos(\phi/\phi_0)\rangle$ (top) for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right), both at $\phi_0=1.2$. Below are example configurations, with snapshots at different times in the evolution, again for $\epsilon = 1.0$ (left) and $\epsilon = 0.5$ (right).
The simulation parameters are $N_x = 512$, $dt=0.05$, and $am_f=0.3$. Each ensemble consists of 100 individual configurations.}
\label{fig:1+2bubbles}
\end{figure}
We will again take a closer look at the configurations close to the transition. Fig.~\ref{fig:1+2bubbles} shows two sets of simulations with high and low transition rates, respectively.
The simulations shown in the left-hand panels have $\epsilon=1.0$ and $\phi_0=1.2$ while the right-hand panels correspond to $\epsilon=0.5$ and $\phi_0=1.2$.
We observe a clear difference in that on the left-hand side, transitions happen almost immediately, and the field configurations display many small bubbles nucleating close to each other. As in 1+1 dimensions, this is a case of the energy density being larger than the potential barrier. On the right-hand side, we have an exponential decay, with just a single bubble nucleating in the entire volume.
In a similar way as in 1+1 dimensions, we compute the initial energy density and compare it to the potential barrier. This is shown in Fig.~\ref{fig:2+1density} where again the grey area corresponds to parameter combinations where the energy density is smaller than the barrier, and a bubble must be created for the transition to take place.
\begin{figure}[htp]
\centering
\includegraphics[width=12cm]{img/Energydensity_Nx128_sdim2_Configs100_final.png}
\caption{The initial energy density for 2+1 dimensional configurations for different values of
$\epsilon$, $\phi_0$ and cut-off $am_f$. Configurations with $\epsilon=0.5$ can have a smaller
average energy density than the potential barrier.}
\label{fig:2+1density}
\end{figure}
We can again estimate the wall tension by computing the energy of 2-dimensional bubbles, but by hand fixing the inside and outside to the global and local minimum values, respectively.
Fig.~\ref{fig:Ebubb2} shows the energy as a function of the radius of a growing bubble. As we argued in Sec.~\ref{sec:clastunneling}, the critical bubble in 2+1 and higher dimensions has a non-zero $R_{\rm crit}$, in contrast to the 1+1 dimensional case.
From the quadratic fit in the figure we can tentatively estimate the critical bubble size to be $Rm_f\simeq 15$--$20$.
\begin{figure}[htp]
\centering
\includegraphics[width=10cm]{img/Nx512_sdim2_dt50_init0_ON1_L120_phi0150_M30_Mod1_Fud80_vev1_binning29_F2_turateHerznew_bubbles_insidezero_Erplot_new.png}
\caption{The energy of the bubble as a function of its radius. The simulations parameters are $N_x^2=512^2$, $\phi_0=1.5$, $am_f=0.3$ and $\epsilon=0.8$.}
\label{fig:Ebubb2}
\end{figure}
\subsection{Energy depletion and thermalization in 2+1 dimensions}
\label{sec:2+1thermalisation}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{img/Nx128_sdim2_dt50_init0_ON1_L120_phi0120_M30_Mod1_Fud50_vev1_binning29_F1_turateHerzBig_Runs1_93_28T4_8M_noeps_Time_n_range.png}
\includegraphics[width=0.7\textwidth]{img/Nx128_sdim2_dt50_init0_ON1_L120_phi0140_M30_Mod1_Fud50_vev1_binning29_F1_turateHerz2clean_Runs1_16_28T4_8M_noeps_Time_n_range.png}
\caption{Time evolution of occupation numbers for simulations with $N_x=128$, $\epsilon=0.5$ and different values of $\phi_0$. Top: $\phi_0=1.2$; bottom: $\phi_0=1.4$.}
\label{fig:2+1nkplots}
\end{figure}
Finally, we will consider the evolution of the occupation numbers of the fields, also in 2+1 dimensions.
Fig.~\ref{fig:2+1nkplots} shows the occupation numbers as defined in (\ref{eq:nkdef}) for simulations with $\epsilon=0.5$. The top panel shows the case of $\phi_0=1.2$, where the configurations transition around $m_ft=4\times 10^6$. The bottom panel has $\phi_0=1.4$, where the configuration does not transition at all (note the time-axis extends to $2\times 10^7$).
In 1+1 dimensions, we saw that the initial quantum-like distribution is essentially unchanged up until the transition takes place. But in 2+1 dimensions, even before the transition happens the dynamics have begun redistributing the energy to approach the thermal equilibrium state. For $\phi_0=1.2$ this process does not have time to complete, but for $\phi_0=1.4$, the transition rate is so small, that the system thermalises, reaching an asymptotic state. This would not happen in the true quantum system.
It seems that in 1+1 dimensions the time scales are such that classical nucleation is always much faster than thermalization, while in 2+1 dimensions kinetic equilibration is often well underway by the time the transition happens. This ordering of time scales depends on the potential (the strength of self-interactions), the initial condition ($\epsilon$, say) and the cut-off ($am_f$).
\section{Conclusions}
\label{sec:Conclusions}
Motivated by the intriguing possibility proposed in \cite{braden}, that classical-statistical simulations could have something to say about quantum vacuum decay, we have investigated such simulations, both in 1+1 and 2+1 dimensions. The conclusion is disappointing, although perhaps not wholly unexpected.
The reported approximate agreement between the instanton calculation and the CS simulations is there, but as we have seen it arises through arbitrarily adjusting the parameters of the initial conditions (the amplitude $\epsilon$, cut-off $am_f$), and is also not specific to the quantum-like state with equal occupation numbers in all modes (thermal initial conditions work just as well, when tuning $T$). In fact, the actual, $\epsilon=1$, ``half'' initial condition intended to mimic the zero-point fluctuations of the false vacuum produces a nucleation rate several orders of magnitude larger than the instanton nucleation rate, even in the range of $\phi_0$ where the energy density is smaller than the barrier. In addition, obtaining even approximate agreement between CS simulations and the instanton result is specific to 1+1 dimensions. In 2+1 dimensions, the CS simulations consistently overestimate the nucleation rate by many orders of magnitude. We also attempted simulations along the same lines for the physically relevant case of 3+1 dimensions, but the nucleation rate is then far below our numerical reach, and advanced Monte-Carlo techniques are likely required to compute even the classical rate \cite{Moore:2000jw,Moore:2001vf}.
As mentioned in the introduction, the CS-approximation may be derived directly as a limit of the full real-time path integral \cite{Mou:2019gyl}. It is only a good approximation for interacting quantum evolution for large occupation numbers, and even then only when computing ``classical'' observables. Quantum vacuum decay is both inherently quantum and by construction has an initial condition with occupation numbers $\ll 1$. Such initial conditions can only reliably be simulated in the CS approximation for very small coupling, when the evolution equations are (approximately) linear. But we have seen that even the proposed ``half'' initial condition probes the non-linear regions of the potential considered here (Fig. \ref{fig:Potdist}).
We conclude that computing quantum tunneling rates in field theory beyond \cite{colman1} remains a difficult task, which cannot be simulated using classical dynamics of an ensemble of configurations. It likely requires non-perturbative numerical methods at the level of the path integral, known to be challenging for real-time systems out of equilibrium (although see \cite{Berges:2000ur,Berges:2004yj,Arrizabalaga:2004iw, Arrizabalaga:2005tf} and \cite{Mou:2019gyl}). Fortunately, in almost all cases, phase transitions involve non-vacuum initial states, for which the quantum rate is insignificant compared to the classical nucleation rate. And classical nucleation rates may be computed using CS simulations or stochastic evolution in effective theories \cite{Moore:2000jw,Moore:2001vf}.
\section*{Acknowledgement}
We thank Paul Saffin, Peter Millington, Zong-Gang Mou and Alexander Rothkopf for collaboration on related topics as well as useful discussions and comments on the present manuscript.
\section{Paradox Lost?}
\label{sec:intro}
With the advent of microscopic accountings of the density of states of a black hole in string theory~\cite{Strominger:1996sh}, it seemed that a resolution of the black hole information paradox might finally be at hand. But while these accountings provided exemplars of unitary theories of quantum gravity, the horizon scale physics remained obscure and so the flaw in the arguments leading to the information paradox remained similarly obscure.
One suggestion to resolve the information paradox that arose not long after this breakthrough was that of black hole complementarity~\cite{Susskind:1993if}~-- that the black hole interior was secretly encoded in the Hilbert space of the exterior in a complicated way, such that the interior and exterior observables did not commute. The latter property was supposed to preclude inconsistencies which might arise from thought experiments in which observers measure the Hawking radiation, and then dive into the black hole to measure the state of the interior. The development of the BFSS matrix model~\cite{Banks:1996vh} hinted that something like black hole complementarity was on the right track; in this model (one of the first instances of gauge/gravity duality), the fundamental degrees of freedom specifying the locations of objects in spacetime are matrices, which become non-commuting in black hole states.
But the details of how this complementarity was to be implemented in practice were never spelled out.
Subsequently, a careful examination of the quantum entanglement properties of states in quantum field theory led to a realization that the notion of black hole complementarity could not resolve the information paradox, and to a fuller appreciation of the fundamental incompatibility of unitarity, locality, and causality in the effective field theory description of black hole evaporation~\cite{Mathur:2009hf,Braunstein:2009my,Almheiri:2012rt}. This realization has led to a sharpening of the information paradox.
Recently, though, a variant of the idea of black hole complementarity has arisen, as a circle of ideas regarding the nature of holography has taken hold. This circle of ideas has crystallized in the following chain of logic:
\begin{enumerate}[start=1,
labelindent=\parindent,
leftmargin =1.7\parindent,
label={\it(\roman*)}]
\item
Quantum entanglement builds geometry~\cite{Maldacena:2001kr,Ryu:2006bv,VanRaamsdonk:2010pw}.
\item
Geometry and entropy are intertwined notions bound together by entanglement, as reflected in the generalized entropy formula associated to a surface $X$ on a Cauchy slice $\Sigma$.
\begin{equation}
\label{genent}
S_{\rm gen}(X) = \frac{A(X)}{4G_N} + S_{\rm semi-cl}\big(\Sigma_X\big) ~,
\end{equation}
where $S_{\rm semi-cl}$ is the entropy of effective field theory on the portion $\Sigma_X$ of the hypersurface $\Sigma$ outside of $X$. A {\it quantum extremal surface} (QES) extremizes this expression, varying both the slice $\Sigma$ and the surface $X$ subject to appropriate boundary conditions. In the context of AdS/CFT, an {\it entanglement wedge} is the bulk domain of dependence of a spatial hypersurface bounded by a QES $X$ and a homologous domain of the conformal boundary $\cB$~\cite{Engelhardt:2014gca}.
\item
The proposed resolution of the black hole information paradox involves the careful application of this formula to the dynamics of evaporating black holes~\cite{Penington:2019npb,Almheiri:2019psf}. Early on, the dominant extremizing surface $X$ is trivial (see figure~\ref{fig:island-scenario}a), and the generalized entropy is that of Hawking radiation embodied in the second term of~\eqref{genent}; but after the black hole has evaporated halfway ({\it i.e.}\ past the {\it Page time}), extremization of the generalized entropy leads to the emergence of a more dominant saddle in which $X$ is a quantum extremal surface near the black hole horizon, as depicted in figure~\ref{fig:island-scenario}b.
\item
A key component of this proposed resolution is the transfer of the portion of the black hole inside the QES, known as the {\it island}, to a subsystem of the radiation Hilbert space. The Hawking quanta exterior to $X$ are entangled with and purified by quanta on the island; if the island is part of the radiation, then these entangled pairs are in a pure state in the radiation Hilbert space and will contribute nothing to the generalized entropy, which is now dominated by the area term. The generalized entropy now decreases as the black hole continues to evaporate; the island grows to encompass more and more of the black hole interior, consistent with unitarity.
After the black hole has evaporated, the black hole interior is entirely contained in the radiation state space, as indicated in figure~\ref{fig:island-scenario}c.
\end{enumerate}
This {\it island scenario} for black hole evaporation is sketched in figure~\ref{fig:island-scenario}.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{islands-3.png}
\caption{\it The standard Penrose diagram for black hole formation and evaporation in a holographic CFT coupled to an external radiation bath, showing different Cauchy slices (yellow) in the island scenario. The matter forming the black hole occupies the region in orange; the entanglement wedge of the CFT is shaded green, while that of the external bath is shaded blue.}
\label{fig:island-scenario}
\end{figure}
There remains however a basic tension between the effective field theory description of the black hole interior and the recovery of information during the evaporation process. The analysis of~\cite{Mathur:2009hf,Braunstein:2009my,Almheiri:2012rt} has not been superseded so much as largely ignored, in that the process by which the island region of the black hole interior is transferred to the radiation Hilbert space is left unspecified. One needs more than entanglement~-- one needs a quantum channel ({\it i.e.}\ Hamiltonian dynamics) by which the information is transferred to the radiation state space. Unitary operations restricted to one member of a pair of entangled subsystems will not accomplish this task.
Below we will argue that when the options for such a process are considered more carefully, the information puzzle returns.%
\footnote{Some of our analysis overlaps with that of~\cite{Almheiri:2013hfa}.}
Any fix which retains the Hawking process at the horizon requires non-local processes that can be detected by observations exterior to the black hole. Such observers would conclude that the black hole is not radiating like an ordinary body would.
\section{The Hawking process}
\label{sec:hawking}
The essence of the Hawking process is that a smooth foliation of spacetime near the horizon of a black hole leads to a stretching of the spatial geometry. This time-dependence continually pulls up modes from the UV and separates them spatially. The UV vacuum entanglement structure, re-expressed in terms of the stretched modes of the out-state, leads to the (Unruh) state
\begin{equation}
\ket{ \Psi_{\rm Unruh} } = \prod_\omega\,{\rm exp}\big[ e^{-\beta\omega}\, b^\dagger_\omega c^\dagger_\omega\big] \ket{ \Psi_{\rm Boulware} }
\end{equation}
that is seen by an external observer as a stream of outgoing particles. Here $\ket{\Psi_{\rm Boulware}}$ is the vacuum for $b$ quanta exterior to the horizon and $c$ quanta interior to it, while $\omega$ is their frequency referred to the asymptotic region, and $\beta$ is the inverse temperature of the black hole.
Working in position space rather than frequency/wavenumber space, one can model this state as a sequence of entangled qubits generated in the near-horizon geometry (see {\it e.g.}~\cite{Mathur:2009hf}), one of which is carried off to the far region, and the other of which remains in the black hole interior. In Eddington-Finkelstein coordinates the process appears as in figure~\ref{fig:HawkingPair}.
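The thermal character of the reduced exterior state can be made concrete in a small numerical sketch (ours, not part of the original analysis): writing the Unruh state mode by mode as a Schmidt sum $\sum_n e^{-n\beta\omega}\ket{n}_b\ket{n}_c$ (up to normalization), tracing out the interior $c$ mode leaves a diagonal, geometric (thermal) density matrix for the exterior $b$ mode. A truncated numpy check, with the function name and cutoff chosen for illustration:

```python
import numpy as np

def exterior_occupation(beta, omega, nmax=60):
    """Mean occupation of the exterior b mode after tracing out its c partner
    in the (truncated) two-mode squeezed state  sum_n e^{-n beta omega} |n,n>."""
    n = np.arange(nmax)
    p = np.exp(-2 * n * beta * omega)   # Schmidt probabilities |e^{-n beta omega}|^2
    p /= p.sum()                        # reduced rho_b = diag(p): a thermal state
    return np.sum(n * p)

# geometric distribution => <n> = 1/(e^{2 beta omega} - 1) in this convention
print(exterior_occupation(1.0, 1.0))
```

The factor of 2 in the exponent simply reflects squaring the amplitudes of the state as written above; the reduced state is exactly thermal, which is what the external observer detects as a steady flux of quanta.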
\begin{figure}[ht]
\centering
\includegraphics[width=.35\textwidth]{HawkingPair-alt3.png}
\caption{\it
Eddington-Finkelstein radial and ingoing-null coordinates $(r,v)$ near a black hole horizon, displaying a sequence of ``nice slices'', {\it i.e.}\ smooth spacelike hypersurfaces that avoid the black hole singularity. The stretching of these hypersurfaces under time evolution can be taken to be largely confined to the orange-shaded region near the horizon. The time-dependent stretching of the geometry in this region in this slicing generates Hawking pairs (red and blue) traveling along outgoing null trajectories.
}
\label{fig:HawkingPair}
\end{figure}
For a black hole of inverse temperature $\beta$, one can choose a foliation such that the stretching of spatial slices takes place in the vicinity of the horizon (indicated by the orange region in the figure), with the slices stretching an amount of order $\beta$ in a time of order $\beta$, during which on average a single Hawking pair is created. Thus one can approximate the Hawking process as the creation of a sequence of entangled pairs
\begin{align}
\label{paircreate}
\ket{\Psi(t_{n+1})} &\approx \ket{\Psi(t_{n})}\otimes \ket{\psi_{\rm pair}}
\nonumber\\[.2cm]
\ket{\psi_{\rm pair}} &= \frac 1{\sqrt2}\big( \ket{0}_\sfb\ket{0}_\sfc+\ket{1}_\sfb\ket{1}_\sfc \big)
\end{align}
such that the state after a modest number $n$ of such creation processes is well approximated as
\begin{equation}
\label{Psi-at-tn}
\bigl|{\Psi(t_n\sim t_0+n\beta)}\big\rangle \approx
\ket{\Psi(t_0)}\otimes \big(\ket{\psi_{\rm pair}}\big)^{\otimes n} ~.
\end{equation}
The process~\eqref{paircreate} is an isometric embedding of the state $\ket{\Psi_{\scriptscriptstyle\rm\! BH}(t_{n})}$ at time $t_n$ into a larger Hilbert space containing two additional qubits at time $t_{n+1}$.
We should not take $n$ to be too large; for instance one only needs this picture to be valid over the time it takes to emit a few Hawking quanta, so that the full state of the black hole can then be approximated as the prior state of the black hole $\ket{\Psi_{\scriptscriptstyle\rm\! BH}}$ (which we model as a state $\ket{\Psi_{\mathsf A}}$ of $M$ qubits that we label by ${\mathsf a}$), tensored with the state $\ket{\psi_{\rm pair}}^{\otimes n}$.
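In this qubit model the rising entanglement of the radiation can be checked directly. The following numpy sketch (our illustration; the helper names are not from the paper) builds $\ket{\psi_{\rm pair}}^{\otimes n}$, traces out the interior $\sfc$ qubits, and confirms that the von Neumann entropy of the radiated $\sfb$ qubits grows by one bit per emitted quantum:

```python
import numpy as np

def bell_pair():
    # |psi_pair> = (|00> + |11>)/sqrt(2); first qubit = b (radiated), second = c (interior)
    v = np.zeros(4)
    v[0] = v[3] = 1 / np.sqrt(2)
    return v

def radiation_entropy(n):
    """Entropy (in bits) of the n radiated b qubits in |psi_pair>^{(x)n},
    obtained by tracing out the interior c qubits."""
    psi = np.array([1.0])
    for _ in range(n):
        psi = np.kron(psi, bell_pair())
    # reorder amplitudes so all b qubits come first, then all c qubits
    psi = psi.reshape([2] * (2 * n))
    order = [2 * k for k in range(n)] + [2 * k + 1 for k in range(n)]
    psi = np.transpose(psi, order).reshape(2**n, 2**n)
    rho_b = psi @ psi.conj().T          # reduced density matrix of the radiation
    evals = np.linalg.eigvalsh(rho_b)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

for n in range(1, 5):
    print(n, radiation_entropy(n))      # entropy rises by one bit per pair
```

Since each pair contributes a maximally mixed qubit to $\rho_\sfB$, the entropy is exactly $n$ bits, the linear growth at the root of the paradox.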
We do not allow an order one modification of the state of the radiated $\sfb$ and $\sfc$ qubits over short time scales ({\it i.e.}\ times much less than the scrambling time), under the assumption that the near-horizon effective field theory applies. Over larger time scales, we should allow for the rapid scrambling of degrees of freedom in the black hole interior, which we model as some pseudorandom unitary $U_{AC}\in U(2^{M+n})$ acting on the ${\mathsf a}$ and $\sfc$ qubits in $\ket{\Psi_{\scriptscriptstyle\rm\! BH}}\in \cH_{\mathsf A}\otimes\cH_\sfC$
\begin{align}
\label{scramble}
\bigl|{\Psi_{\rm tot}(t_0+t_{\rm scr})}\bigr\rangle & \approx U_{AC} \Big(\ket{\Psi_{\scriptscriptstyle\rm\! BH}(t_0)}\otimes \ket{\psi_{\rm pair}}^{\otimes n} \Big)
\nonumber\\[.2cm]
& = \sum_{\sigmab_\sfb} \ket{\Psi_{\scriptscriptstyle\rm\! BH}(\sigmab_\sfb)}\otimes\ket{\sigmab_\sfb}
\end{align}
(here $\{ \ket{\sigmab_\sfb} \}$ is a basis for $\cH_\sfB$, and $t_{\rm scr}\sim\frac1{2\pi}\beta\log (S\!-\!S_0)$ is the scrambling time)
such that the $\sfb$ quanta are now maximally entangled with the scrambled degrees of freedom in the remaining black hole.
Whatever the scrambling dynamics is, it should be internal to the black hole and should not itself transfer information to the radiation, for the following reasons.
First, a one-sided unitary such as~\eqref{scramble} applied to two entangled subsystems does not transfer information to the complementary subsystem $\sfB$.
Second, by the point in time where scrambling of~\eqref{Psi-at-tn} becomes important, the $\sfb$ quanta are well away from the black hole, and so any unitary acting to mix them with the ${\mathsf a}$ and/or $\sfc$ quanta in the black hole interior amounts to a violation of local effective field theory.
Because the qubit pairs are created in a unique state, the Hawking pair component of the state~\eqref{Psi-at-tn} carries no information. Scrambling it according to~\eqref{scramble} does not encode any information in $\sfB$. The state of the radiated $\sfb$ quanta does however have an ever-rising entanglement entropy with the black hole interior, which is the source of Hawking's conclusion that black holes violate unitarity.
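The first of these points can be verified in a few lines: writing $\ket{\Psi_{\rm tot}}$ as a matrix $\Psi$ whose rows index the interior $({\mathsf a},\sfc)$ qubits and whose columns index the radiated $\sfb$ qubits, the reduced state of the radiation is $\rho_\sfB=\Psi^\dagger\Psi$, which is manifestly invariant under $\Psi\to U_{AC}\,\Psi$. A numerical sketch (our toy construction, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    """QR-based Haar-random unitary (a stand-in for the scrambling dynamics)."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# toy state: 2 Bell pairs shared between interior c and radiated b qubits,
# tensored with a random pure state of 2 interior a qubits.
psi_a = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_a /= np.linalg.norm(psi_a)
pairs = np.eye(4) / 2.0                 # (1/2) sum_j |j>_c |j>_b in matrix form
Psi = np.kron(psi_a[:, None], pairs)    # rows = interior (a,c), cols = radiation b

rho_b_before = Psi.conj().T @ Psi       # reduced state of the radiation
U = haar_unitary(16)                    # one-sided scrambling unitary U_{AC}
Psi_scr = U @ Psi
rho_b_after = Psi_scr.conj().T @ Psi_scr

print(np.allclose(rho_b_before, rho_b_after))  # True: rho_B is unchanged
```

However the interior is scrambled, no information reaches the radiation subsystem: $\Psi^\dagger U^\dagger U \Psi = \Psi^\dagger \Psi$ identically.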
The small corrections theorem~\cite{Mathur:2009hf} and its elaboration in~\cite{Guo:2021blh} forbid a resolution to this conundrum via small modifications of the evolution due to quantum gravity effects~-- the entropy of the radiation increases by an amount close to $\log(2)$ with each emitted quantum. For instance, the reduction in the energy of $\ket{\Psi_{\scriptscriptstyle\rm\! BH}}$ due to the radiation is a small kinematic effect by which the radiation in the near zone interacts with the black hole interior, which does not modify the radiation process at leading order.
The essence of this theorem is that
\begin{enumerate}[start=1,
labelindent=\parindent,
leftmargin =1.7\parindent,
label=(SC\arabic*)]
\item
\label{scta}
If a black hole has a smooth horizon where effective field theory is valid, Hawking quanta are pair created in a fully entangled state $\ket{\psi_{\rm pair}}$~\eqref{paircreate}, up to small corrections. Once created, the pairs are swept apart by the stretching of the near-horizon geometry and soon become well-separated in space. In the context of local effective field theory, they subsequently do not interact.
\item
\label{sctb}
Due to the monogamy of entanglement and strong sub-additivity of subsystems of the Hawking pairs, the von Neumann entropy of the Hawking radiation state is then monotonically rising.
\end{enumerate}
This result is particularly robust; the subsystems involved in the application of strong sub-additivity are the previously radiated Hawking quanta $\{\sfb\}$, and the next created pair $\sfb_{n\tight+1},\sfc_{n\tight+1}$, and so the analysis does not care about the internal dynamics of the black hole on time scales of order the scrambling time or longer such as~\eqref{scramble}.
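The entanglement bookkeeping behind~\ref{scta} and~\ref{sctb} is easy to illustrate numerically: if each newly created pair is uncorrelated with the previously radiated quanta, the radiation entropy is strictly additive, gaining one bit per quantum whatever the prior state happens to be. A short check (our illustration, using a randomly chosen mixed prior state):

```python
import numpy as np

rng = np.random.default_rng(2)

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

# arbitrary mixed state of the previously radiated quanta (3 qubits)
X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho_B = X @ X.conj().T
rho_B /= np.trace(rho_B).real

# the new pair contributes its b member, maximally mixed after tracing out c
rho_b_new = np.eye(2) / 2
rho_after = np.kron(rho_B, rho_b_new)

print(vn_entropy(rho_after) - vn_entropy(rho_B))   # one bit per emitted quantum
```

Evading this monotonic growth requires correlating the new pair with the earlier radiation, which is exactly what local effective field theory outside the horizon forbids.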
The powerful constraint the (effective) small corrections theorem imposes on any resolution of the information paradox continues to be somewhat underappreciated.
The pair creation process not only fails to radiate away information about the initial state of the black hole, it creates a further ``entanglement deficit'' that must be made up somehow if unitarity is to be maintained. But the radiation carries away energy in addition to entanglement, and so this deficit, as well as the initial information, has to be radiated with fewer resources than one had in the initial state. If one waits too long to begin addressing this issue, little energy remains in the black hole state, and one must emit a large amount of entropy using little energy, which takes a long time;%
\footnote{This correlation between the energy used to emit the purifying quanta and the time it takes for them to be emitted can be seen explicitly in moving mirror models where a mirror accelerates away for some period of time and then stops accelerating~\cite{Wald:2019ygd}. There is a pool of quanta near the mirror that purify the Hawking-Unruh quanta generated during the accelerating phase that are released once the mirror stops accelerating, but they are exponentially redshifted due to the velocity of the mirror relative to its initial rest frame, and such low-energy quanta take exponentially long to be emitted.}
one has a black hole remnant with all its attendant inconsistencies (particularly in the context of AdS/CFT, where we think we understand the density of states, and there are no such remnants).
A standard setup for the analysis of black hole evaporation in AdS/CFT is to couple the CFT to an auxiliary system or bath. The earliest computations of the evaporation process in AdS/CFT~\cite{Callan:1996dv,Das:1996wn} indeed proceeded by adding a term to the CFT Hamiltonian involving the operator $\cO_{\scriptscriptstyle\rm\! CFT}^\phi$ dual to some bulk scalar field $\phi$. The coupling of such a bulk mode to a bath is schematically
\begin{equation}
\label{Hint}
H_{\rm int} = \lambda_\phi \int \! d^dx\, \cO^\phi_{\scriptscriptstyle\rm\! CFT}(x) \, \cO_{\rm bath}(x) ~.
\end{equation}
This coupling reproduces bulk dynamics of a near-extremal black hole in asymptotically flat spacetime as follows.
If the AdS decoupling limit is not taken, and the AdS throat opens out into asymptotically flat spacetime at some radial scale $r_{\rm throat}\gg r_{\rm hor}$, then the solution to the bulk wave equation proceeds by matching the large radius behavior of wave modes in AdS to an outgoing spherical wave in flat spacetime (see for instance~\cite{Maldacena:1996ix}).
The modeling of the asymptotically flat region could for instance consist of field theory in a large volume in which $\cO_{\rm bath}$ is a spherical source of size $r^{~}_{\rm AdS}$.
The holographic map near the AdS boundary
\begin{equation}
\label{opmap}
\lim_{r\to\infty} r^\Delta \phi^{~}_{\rm bulk}(r,x) = \cO_{{\scriptscriptstyle\rm\! CFT}}(x)
\end{equation}
relates the large radius asymptotics of a bulk supergravity field to a corresponding local operator in the CFT (whose conformal dimension is $\Delta$).
At leading order, one can thus mock up the transmission of an excitation of $\phi$ from the AdS region into the asymptotically flat region in the non-decoupled geometry, via the operator~\eqref{Hint} in the decoupled theory whose effect is to absorb a quantum at the top of the AdS throat and transfer the energy to a quantum in the bath~\cite{Chowdhury:2007jx,Chowdhury:2008uj,Almheiri:2013hfa}.
For instance, for a minimally coupled scalar in the full geometry where an $AdS_3\times{\mathbb S}^3\times {\mathbb T}^4$ throat opens out into flat spacetime at a radius $r_{\rm throat}$, the decay rate is proportional to~\cite{Avery:2009tu}
\begin{equation}
\label{decayrate}
\frac{d\Gamma}{dE} \propto \left(\frac{R_{AdS}^2 E}{r_{\rm throat}}\right)^{2(\ell+1)}
\left| \bra{{\it final}} \cO^\phi_{{\scriptscriptstyle\rm\! CFT}} \ket{{\it init}} \right|^2
\end{equation}
This result tells us that, absent peculiar non-local effects, the CFT operator employed in the interaction Hamiltonian~\eqref{Hint} acts on the $\sfb$ qubits of the Hawking radiation when they reach the AdS boundary and doesn't touch the $\sfc$ qubits, if the Hawking process (with small corrections) is occurring at the horizon.
In principle, we would include such a coupling for all the bulk modes of the effective supergravity theory; however, for addressing issues of principle, we always have the option of tailoring the interaction to include only particular modes of particular fields according to the issue at hand.
We will thus simplify matters in several respects. First, we can restrict to S-wave modes since these are the dominant component of Hawking radiation; the coupling is then only dependent on time along the AdS boundary. We take the bath to be a free field in a large volume, with the operator $\cO_{\rm bath}$ a localized source (smeared over a spherical region of size $r_{\rm AdS}$) and the coupling $\lambda_\phi$ controlling the rate at which $\sfb$ quanta reaching the top of the throat are transmitted into the bath as opposed to reflected back down the throat.
We might for instance turn down the coupling by decreasing~$\lambda_\phi$, to the point that Hawking quanta are transferred to the bath one at a time at well-separated intervals, radiating out from the source region never to return (or at least, not on the time scale it takes for the evaporation to complete). We can thus treat the bath as a collection of free field modes, and the state generated by the coupling to the CFT as occupying such modes, spatially and temporally well separated and thus non-interacting.%
\footnote{The setup is designed so as to avoid the possibility that the radiation modes interact with one another via their coupling to the CFT at the source.}
Once again, we further pare down the model by treating the bath modes as a bunch of qubits $\sfr$ in a Hilbert space $\cH_\sfR$, initially all in the ``vacuum'' state $\ket{0}_\sfr$.
\section{Paradox regained}
\label{sec:islands}
In the attempt to rescue unitarity while maintaining effective field theory in the vicinity of the horizon, the island scenario transfers much of the black hole interior to the radiation state space, in order to restore purity to the radiation state after the black hole has evaporated.
But what could it mean to say that the island ``becomes part of the radiation'', and how does it evade the small corrections theorem and the resulting entanglement deficit?
A key ingredient of the small corrections theorem is bulk locality; once created, Hawking pairs separate and the radiated member of the pair no longer interacts with the black hole. The theorem assumes that small corrections can only happen while the nascent Hawking quanta are in the vicinity of the horizon, and that once the exterior member of the pair is well-separated from the black hole the interactions vanish sufficiently rapidly that they can be ignored.
\vskip .8cm
\noindent
\ref{sec:islands}.1~~{\em Interactions between the black hole and Hawking quanta within AdS}
\medskip
The conventional black hole evaporation process in the model of section~\ref{sec:hawking} was reduced to evolution under the interaction Hamiltonian $H_{\rm int}$~\eqref{Hint} which implements a unitary transformation
\begin{equation}
\label{expHint}
U_{B,R} = e^{iH_{\rm int}\,\delta t}
\end{equation}
mixing $\cH_\sfB$ with $\cH_\sfR$, extracting $\sfb$ qubits from $\cH_\sfB$ and depositing them in $\cH_\sfR$.
If this were all there is, and there were no non-local interactions of the sort~\eqref{nonlocal}, one would have simply the Hawking process and a continually rising entropy in the bath subspace $\cH_\sfR$, as dictated by the small corrections theorem.
Non-local interactions between the radiated quanta and the black hole bypass the assumptions of the theorem and could thereby evade its conclusion of monotonically rising entropy. For instance, in addition to the scrambling dynamics of the internal degrees of freedom of the black hole according to~\eqref{scramble}, a small correction to the evolution could also modify slightly the state of the radiated $\sfb$ quanta
\begin{equation}
\label{nonlocal}
\ket{\Psi_{{\rm tot}}(t_{n+1})} = U^\epsilon_{AC,B}\Big( U_{A,C}\ket{\Psi_{\scriptscriptstyle\rm\! BH}(t_n)}\otimes \ket{\psi_{\rm pair}} \Big) ~.
\end{equation}
Here $U_{A,C}$ is a scrambling dynamics internal to the black hole and $U^\epsilon_{AC,B}$ is an infinitesimal unitary rotation in the full CFT Hilbert space that affects the radiated $\sfb$ quanta in a way that depends on the internal state of the black hole. The accumulated effect after many time steps is the sort of large correction needed to evade the small corrections theorem. For instance, this infinitesimal transformation can partially swap the ${\mathsf a}$ and $\sfb$ qubits, so that by the time the $\sfb$ quanta have reached the AdS boundary they have been fully swapped out for the original ${\mathsf a}$ qubits, which can then be extracted into the bath. At this point those $\sfb$ qubits have been embedded in $\cH_{\scriptscriptstyle\rm\! BH}=\cH_{\mathsf A}\otimes\cH_\sfC$ together with whatever their $\sfc$ partners have evolved into, and the Hawking pair that was introduced by the vacuum dynamics near the horizon can be eliminated via the scrambling dynamics of the black hole.
In this scenario, there are nonlocal interactions prior to the Hawking quanta reaching the AdS boundary, such that the interaction~\eqref{expHint} at the boundary is instead radiating the initial ${\mathsf a}$ qubits into the radiation state space.
The supposed dynamics acts on the radiated Hawking quanta to change their state far from the black hole (but while they are still in the AdS region), in such a way that there are no additional radiated quanta, but the interaction gradually restores purity to the radiation state as the system evolves.
To summarize the scenario, after Hawking pairs are created by the standard effective field theory mechanism sketched above, the system executes a swap operation that replaces the $\sfb$ qubits with the ${\mathsf a}$ qubits as they propagate out, and thereby allows the $\sfb$ quanta to recombine with their partners $\sfc$ and disappear along with the black hole in a unitary fashion while the ${\mathsf a}$ quanta comprise the radiation in the end.
This dynamics is inherently non-local, connecting the black hole interior to arbitrarily distant regions. Can such wild non-locality be detected by an observer remaining outside the black hole, or is it somehow hidden in the complexity of the encoding of bulk physics in holographic systems?
A basic difficulty with such a scenario is that we are free to non-destructively measure the Hawking quanta $\sfb$, and ask whether their state is changing as they propagate outward.
Suppose we erect a set of ``quantum non-demolition'' measuring devices, that don't change the bit-parity of the $\sfb,\sfr$ qubits but correlate them with a measuring apparatus ${\mathsf M}$ with Hilbert space $\cH_{\mathsf M}$. We begin with $k$ measuring apparatus qubits ${\mathsf m}$ in a unique state, say $\ket{\Psi_{\mathsf M}}=\ket{0_1 0_2\ldots 0_k}_{\mathsf m}$, and let the measurement consist of the encoding
\begin{align}
\ket{0}_\sfb\otimes \ket{\Psi_{\mathsf M}} &\longmapsto \ket{0}_\sfb\otimes \ket{0_1 0_2\ldots 0_k}_{\mathsf m}
\nonumber\\[.1cm]
\ket{1}_\sfb\otimes \ket{\Psi_{\mathsf M}} &\longmapsto \ket{1}_\sfb\otimes \ket{1_1 1_2\ldots 1_k}_{\mathsf m}
\end{align}
which is a simple example of an error-correcting code.
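The encoding above is the usual $k$-fold repetition code, acting as an isometry from the single qubit into qubit-plus-apparatus. A minimal linear-algebra sketch (ours; the basis ordering, with the $\sfb$ qubit most significant, is an assumption) verifies the isometry property and shows that acting on a superposition produces a GHZ-type entangled record rather than a clone:

```python
import numpy as np

def qnd_isometry(k):
    """Isometry V: H_b -> H_b (x) H_m implementing the repetition-code
    'measurement' |b>|0..0>_m -> |b>|b..b>_m for b in {0,1}."""
    V = np.zeros((2 ** (k + 1), 2))
    V[0, 0] = 1                      # |0>_b |0...0>_m
    V[2 ** (k + 1) - 1, 1] = 1       # |1>_b |1...1>_m
    return V

V = qnd_isometry(3)
print(np.allclose(V.T @ V, np.eye(2)))   # True: V preserves inner products

# acting on a superposition copies the bit VALUE into the apparatus, entangling
# b with m -- this is an error-correcting code, not a violation of no-cloning
psi_b = np.array([1, 1]) / np.sqrt(2)
out = V @ psi_b                          # (|0>|000> + |1>|111>)/sqrt(2): GHZ state
```

Because only the bit-parity is recorded, a later bit flip of the $\sfb$ qubit shows up as a mismatch between the qubit and the majority vote of the apparatus record.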
We can arrange to have a series of such measuring devices placed at various radial positions in order to record the state of a Hawking quantum as it propagates out toward the AdS boundary. Afterward, we can read out the states of the measuring devices and determine whether the qubit state has flipped during the course of its evolution. If the outgoing radiation were interacting with the black hole interior in some non-local way and swapping $\sfb$ qubits for ${\mathsf a}$ qubits, then the outgoing qubit would have a substantial likelihood (of order 50\%) of being flipped along the way. We can similarly examine whether the qubits in $\cH_\sfB$ are faithfully transferred to bath quanta $\sfr$, by measuring the state in $\cH_\sfB$ before and comparing it to the state in $\cH_\sfR$ after.%
\footnote{The conspiracy-minded might want to invoke here the possibility that since the measuring apparatus is also built out of CFT degrees of freedom, the non-local interactions might also reach into it and mess with its internal state as well, thereby erasing our ability to tell whether the Hawking qubit has flipped during propagation. But the non-local interactions would then have to know about the details of the measuring apparatus (which might consist of a large number of qubits), and whether we have sent a signal from the boundary to turn it on or off, {\it etc}.; and such a possibility becomes ever more preposterous.}
In this way, we can detect whether the black hole is radiating quanta as an ordinary thermal body would radiate ({\it i.e.}\ whether the state of the radiated quanta decouples from that of the black hole once it leaves the vicinity of the horizon), or if instead there are non-local effects persisting in faraway regions.
Of course, it could happen that this swap of the ${\mathsf a}$ and $\sfb$ qubits takes place in a small region or ``atmosphere'' of the black hole, so that a few horizon radii out the black hole is radiating like an ordinary body. But the Hawking pair state $\ket{\psi_{\rm pair}}$ is created in a region of size $r_{\rm hor}$ consisting of quanta whose wavelength is of order $r_{\rm hor}$. If the Hawking quanta have been repatriated to the black hole interior in just a few horizon radii, essentially one has admitted that local effective field theory breaks down at the horizon scale, since there are large corrections to the Hawking process whereby quantum information is being released and Hawking quanta re-absorbed within just a few horizon radii.%
\footnote{Note that a resolution of this sort would have little to do with the island scenario; the black hole interior is not becoming part of the bath $\sfR$ so much as communicating directly with the near zone exterior $\sfB$.}
Additional arguments against such a scenario were given in~\cite{Almheiri:2013hfa}. For instance, one can perform operations on the outgoing radiation that interfere with the swap operation between ${\mathsf a}$ and $\sfb$ qubits, thereby decreasing the outflow of information from the black hole and leading to a buildup of entropy that results in a remnant problem.
\vskip .8cm
\noindent
\ref{sec:islands}.2~~{\em Nonlocal interactions that radiate additional quanta}
\medskip
Another possibility one might entertain is the radiation of additional quanta, apart from those generated by the Hawking process (see for instance%
~\cite{Giddings:2009ae,Giddings:2011ks,Giddings:2012gc,Giddings:2013kcj,Giddings:2013noa,Giddings:2014nla,Giddings:2017mym,Almheiri:2013hfa,Wald:2019ygd}),
and that these additional quanta are responsible for emitting the information content in the ${\mathsf a}$ qubits of the initial state $\ket{\Psi_{\scriptscriptstyle\rm\! BH}}=\ket{\Psi_{\mathsf A}}$, as well as the $\sfc$ quanta that purify the state of the $\sfb$ quanta.
The island scenario proposes an effect which is a unitary rotation
\begin{equation}
\label{islandformation}
\ket{\Psi_{\rm tot}(t_{n+1})} = U_{AC,R} \, U_{B,R} \Big( \ket{\Psi_{\scriptscriptstyle\rm\! BH}(t_n)}\otimes\ket{\Psi_\sfB(t_n)}\otimes\ket{\Psi_\sfR(t_n)} \Big)
\otimes \ket{\psi_{\rm pair}}
\end{equation}
mixing the black hole interior state space $\cH_{\scriptscriptstyle\rm\! BH}=\cH_{\mathsf A}\otimes\cH_\sfC$ with $\cH_\sfR$, so that the earlier qubits $\sfc$ and the qubits ${\mathsf a}$ in the black hole interior eventually become ``part of the radiation'', while the Hawking process~\eqref{paircreate} continues at the horizon, and the unitary rotation~\eqref{expHint} transfers $\sfb$ quanta to the bath.%
\footnote{The possibility that $U_{AC,R}$ and $U_{B,R}$ interfere with one another, and modify the transfer of $\sfb$ quanta into the bath, is a variant of~\eqref{nonlocal} where the non-local interaction happens at the very top of the throat where it connects to the bath, and suffers the same problems.}
Note that in the way we have set up the problem, we are only coupling the CFT and the bath through a coupling of the sort~\eqref{Hint}, where $\sfR$ is a free field theory and $\cO_{\rm bath}$ is a localized source. After $\sfr$ quanta are generated in the bath, they completely decouple from the CFT (since we can make the bath as big as we like), and so any non-local interaction generating~\eqref{islandformation} does not act on these faraway qubits in the bath. Thus, if the state of the Hawking $\sfb$ quanta has only undergone small corrections, and is transmitted faithfully to the bath via~\eqref{expHint}, then the only way to restore purity to the final state is through the radiation of additional quanta via the same interaction~\eqref{Hint}, having the effect of $U_{AC,R}$ in~\eqref{islandformation}.
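The decoupling statement is just no-signalling: once the $\sfr$ quanta are far away, no unitary acting on the CFT factor alone can alter their reduced density matrix. A minimal numerical check (a toy model we add, with arbitrary small dimensions standing in for the CFT and one far bath qubit):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """A Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the phase convention

# joint state of a toy 'CFT' factor (dimension 4) and one far bath qubit
dC, dR = 4, 2
psi = rng.standard_normal((dC, dR)) + 1j * rng.standard_normal((dC, dR))
psi /= np.linalg.norm(psi)

def rho_far(psi):
    """Reduced density matrix of the far bath factor (trace over the CFT index)."""
    return psi.T @ psi.conj()

before = rho_far(psi)
after = rho_far(haar_unitary(dC, rng) @ psi)   # arbitrary CFT-side dynamics
```

Whatever unitary acts on the CFT factor, `before` and `after` agree, which is why any purification of the far radiation must involve additional quanta emitted through~\eqref{Hint} itself.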
If we take the evolution of figure~\ref{fig:island-scenario} literally, there is a non-local process by which $\sfc$ qubits are transferred from the entanglement wedge of the CFT into the island, which is supposed to be a subspace of the radiation state space $\cH_\sfR$. If we focus on the late-time evolution well after the Page time, the QES gradually moves out toward the horizon and so gradually encodes more and more of the $\sfc$ quanta; figure~\ref{fig:island-xfer} depicts the transfer of a Hawking partner qubit $\sfc$ to the island.
\begin{figure}[ht]
\centering
\includegraphics[width=.35\textwidth]{island-xfer-alt2.png}
\caption{\it
The partner $\sfc$ (red) of a Hawking quantum $\sfb$ (blue) is transferred to the island. As time evolves, these quanta both travel along outgoing null trajectories while the island (shaded green) moves closer to the horizon. The QES on each time slice is indicated by a green diamond. In the earliest time slice, the $\sfc$ quantum is in the entanglement wedge exterior to the QES. The middle time slice depicts the moment of the transition. In the latest time slice, the $\sfc$ quantum lies in the island.
}
\label{fig:island-xfer}
\end{figure}
Note that in our analysis we always adopt a convention where $\cH_{\mathsf A}\otimes\cH_\sfC$ is a subsystem of the CFT Hilbert space, and ask that the appearance of the island as part of the radiation state space be made explicit as an operation that transfers the data in the black hole interior from the CFT Hilbert space into the radiation Hilbert space $\cH_\sfR$ as some sort of unitary encoding, while the ground state in the bath rotates into a unique state in the black hole state space $\cH_{\scriptscriptstyle\rm\! BH}=\cH_{\mathsf A}\otimes\cH_\sfC$ as it evaporates away.
For instance the process depicted in figure~\ref{fig:island-xfer} transferring a $\sfc$ qubit to the island (perhaps after some scrambling has taken place within $\cH_{\scriptscriptstyle\rm\! BH}$) involves just such a rotation~\eqref{islandformation}. In the model of section~\ref{sec:hawking}, this rotation has to be implemented by the interaction Hamiltonian~\eqref{Hint}, and seems to suggest a substantial modification of the operator map~\eqref{opmap}.
More generally, one might allow for the possibility that evolution involves a unitary operator $U_{AC,BR}$ that rotates $\cH_{\mathsf A}\otimes\cH_\sfC$ into $\cH_\sfB\otimes\cH_\sfR$, so that some of the information is transferred out while the Hawking quanta are propagating from the near-horizon region out to the AdS boundary as discussed above, while the rest happens at the interface between the CFT and the bath through the coupling~\eqref{Hint}.
This modification does not substantially change the analysis.
A generic transformation of this sort will result in the random appearance of new radiation quanta far from the black hole, which again would be detectable to outside observers who see the flux growing larger further away from the black hole. In order for this not to happen, the additional flux should come entirely from the region near the black hole. This is the sort of scenario envisioned in~\cite{Giddings:2012gc,Giddings:2013kcj,Giddings:2013noa,Giddings:2014nla,Giddings:2017mym}, in which the black hole appears to outside observers to be radiating in a dual mode, via the Hawking process {\it and} via a non-local communication of the interior to the near zone. This possibility dramatically affects the thermodynamics, as discussed in~\cite{Giddings:2012dh,Giddings:2013vda}, in ways that disagree with the known thermodynamics of the dual CFT (for instance in $AdS_3$ where the thermodynamics is dictated by conformal invariance).
Regardless, the process depicted in figure~\ref{fig:island-xfer} is quite different~-- a direct radiation of qubits from the interior of the black hole into the bath, which would be seen as quanta emerging into the asymptotically flat region from the top of the AdS throat that were not present lower down in the throat, outside the horizon. This outcome obviously differs from the process of radiation from an ordinary body.
In an evolution of the sort~\eqref{islandformation}, in which there is direct radiation of information from the black hole interior into the bath, one way to have such a process push out more information might be to make use of higher angular momentum channels~\cite{Almheiri:2013hfa}. The Hawking flux is suppressed due to an angular momentum barrier (see {\it e.g.}~\cite{Cvetic:1997uw,Klebanov:1997cx} and equation~\eqref{decayrate}), but if the black hole interior connected directly to the far region, there need not be such a suppression of higher angular momentum modes. One would essentially be arguing that black hole interior radiates information from an angular sphere of size $R_{AdS}$, possibly as far away as the neck between the top of the AdS throat and the asymptotically flat region, with different locations on the sphere radiating independently.
Such radiation might occur in the standard modeling of a CFT/bath interaction via a local coupling at the neck such as that of $H_{\rm int}$ of~\eqref{Hint}.%
\footnote{But again, the disparity between the Hawking flux coming up the throat and the flux emerging into the asymptotically flat region would be detectable to observers outside the black hole.}
However, nothing requires us to couple the two systems this way; indeed, we could simply couple the bath to the s-wave mode of the scalar $\phi$ that comprises the dominant component of Hawking radiation by spatially averaging the CFT operator $\cO^\phi_{\rm CFT}$ and coupling it to some bath operator of fixed spatial dependence (some sort of antenna). Then the Hawking modes would leak out, but not so much the information from the black hole interior, and one is then forced into a remnant scenario of the sort we discuss below. Unless the non-local radiation and the Hawking radiation are emitted via the same channels in more or less the same way, one can engineer a coupling that allows the latter to radiate into the bath but not the former, leading one back into the paradox.
Moreover, the sort of interaction~\eqref{Hint} coupling the CFT to the bath, which models the nexus between the AdS throat and an asymptotically flat region, doesn't seem to generate any additional flux~-- its role seems to be to transfer Hawking $\sfb$ quanta near the AdS boundary to the bath modes which model the asymptotically flat region. One would be saying that this same interaction can reach into the black hole interior and non-locally extract ${\mathsf a}$ and $\sfc$ qubits, and that the correlation functions of the CFT operators involved are decidedly non-thermal, which seems entirely at odds with what we have learned about the AdS/CFT dictionary, where operators such as $\cO^\phi_{\rm CFT}$ in~\eqref{Hint} have thermal correlation functions at a unique temperature~-- that of the Hawking radiation.
\vskip .8cm
\noindent
\ref{sec:islands}.3~~{\em Remnant dangers}
\medskip
One difficulty with the emergence of ${\mathsf a}$ and $\sfc$ qubits from the interior was mentioned above~-- that one is now trying to repair the entanglement deficit, as well as radiate away the information in the initial state, with fewer resources. Furthermore, this additional radiation only adds to the Hawking flux; such a black hole is ``hotter'' than black hole thermodynamics would predict, with a distribution of energies that will be at odds with that of a thermal body.%
\footnote{Of course, the Hawking radiation from a black hole is not a pure blackbody spectrum due to the greybody filter that results from the propagation of the radiation through the ambient geometry to the far region~\cite{Maldacena:1996ix,Klebanov:1997cx,Cvetic:1997uw}. But these are determined by the ambient geometry and so in principle one can measure that geometry, compute the expected Hawking flux, and compare to the actual flux.}
Black hole properties such as the equation of state and the radiated spectrum would thus differ from the predictions of semi-classical gravity by a significant amount~\cite{Giddings:2012dh,Giddings:2013vda}.
Emitting the quanta necessary to maintain the purity of the final state, with the energy available after one devotes a large fraction of the energy budget to the emission of Hawking modes that carry away no information, runs the risk of generating a remnant scenario with all the attendant inconsistencies in an AdS/CFT context (since the CFT density of states is rather well understood, and inconsistent with a large component comprised of remnants).
The system might try to hide this additional flux in extremely soft radiation modes~-- some sort of ``soft hair'' (as for instance in~\cite{Hawking:2016msc}) that can carry away information at very little cost in energy, and is so diffuse that it is hard to measure. However such soft hair takes a very long time to radiate a large entropy.
For instance the most efficient entropy flux for a given energy flux is that of thermal radiation; the entropy flux is bounded by the available energy and so the rate of information emission becomes lower and lower the smaller the fraction of the energy budget of the black hole is devoted to repairing the entanglement deficit and actually radiating the initial information content of the black hole (see~\cite{Wald:2019ygd} for a recent discussion in the context of moving mirrors).
If one really wants to radiate the information in soft hair, one is pushing it out via the same sort of field theory modes that carry the Hawking radiation, just at a lower temperature.
But then the rate of entropy radiation in such modes is much lower, and unitary evolution generates a remnant as one waits for the black hole to recover from the entanglement deficit built up by the Hawking process.
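The scaling behind this slowdown can be made explicit with a dimensional toy estimate (ours, not from the original argument; the $(1{+}1)$d channel scaling $P\sim T^2$ is standard, and the $O(1)$ coefficients are inessential). A thermal channel at temperature $T$ carries power $\propto T^2$ and entropy per unit energy $\propto 1/T$, so its entropy flux scales linearly with $T$, and the time to radiate a fixed entanglement deficit grows as the channel is made softer:

```python
def entropy_rate(T, area=1.0):
    """Entropy flux of a single thermal (1+1)d radiation channel at
    temperature T, up to an inessential O(1) coefficient: power scales as
    P ~ area * T**2, and entropy per unit energy scales as ~ 1/T."""
    power = area * T ** 2
    return 2.0 * power / T     # O(1) coefficient chosen for illustration

def purification_time(S_deficit, T):
    """Time to radiate away an entanglement deficit S_deficit thermally."""
    return S_deficit / entropy_rate(T)

t_hawking = purification_time(1.0e4, 1.0)   # radiating at the Hawking temperature
t_soft = purification_time(1.0e4, 0.1)      # 'soft hair' at 10x lower temperature
```

Lowering the channel temperature by a factor of ten stretches the purification time by the same factor, which is the remnant problem in miniature.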
It would seem that one needs the ${\mathsf a}$ and $\sfc$ qubits to be emitted at roughly the same rate as the $\sfb$ qubits in order to avoid a remnant problem and a contradiction with the known CFT density of states. But then the temperature of their component of the radiation will be approximately the same as that of the ordinary Hawking process, since in the setup above, the bath and the CFT only interact via \eqref{Hint} and so both sets of qubits are being fed out through the same interaction Hamiltonian. But then the difference between the expected Hawking flux and the actual flux should be obvious~\cite{Giddings:2012dh,Giddings:2013vda}, and one would find that the black hole does not actually satisfy the predictions of general relativity.
\vskip .8cm
\noindent
\ref{sec:islands}.4~~{\em Non-local modifications of Hilbert space structure}
\medskip
Another place the necessary non-locality could lurk is in the Hilbert space structure rather than the Hamiltonian. It has been suggested that the interior degrees of freedom of the black hole are a complicated, highly nonlocal rewriting of degrees of freedom in the black hole exterior~\cite{Papadodimas:2013wnh,Papadodimas:2013jku,Susskind:2013tg,Maldacena:2013xja,Verlinde:2013qya}, so that the assumption made in section~\ref{sec:hawking} of an independent Hilbert space $\cH_\sfC$ for interior partners does not hold.
This idea had its own set of issues in the context in which it was originally proposed, see for instance~\cite{Harlow:2014yoa,Harlow:2014yka,Marolf:2015dia}, though see also~\cite{Papadodimas:2015jra,Papadodimas:2017qit,Kim:2020cds}.
Note that these discussions in large part don't concern bulk dynamics or how the bulk Hawking process occurs, but rather propose encodings of operators and/or qubits, as properties of the holographic map. We will have more to say about this in the next section.
The island scenario might be regarded as a newer variant of this idea, in which only the portion of the black hole interior inside the QES is mapped in a complicated, non-local way to a subspace of the radiation Hilbert space $\cH_\sfR$. One might then entertain the notion that the late interior $\sfc$ quanta don't lie in a Hilbert space $\cH_\sfC$ distinct from this subspace of $\cH_\sfR$, and in fact are some complicated rewriting of the early $\sfb$ quanta. Since the late $\sfb$ quanta are entangled with the late $\sfc$ quanta, they would then be secretly entangled with the early $\sfb$ quanta which by then have been deposited in $\cH_\sfR$. One would then conclude that the early and late Hawking quanta are in fact entangled, such that the entanglement entropy of the radiation is decreasing after the Page time. The effective small corrections theorem is thus evaded, and the problem of increasing entanglement is seemingly solved, due to a lack of factorization of the Hilbert space into interior and exterior subspaces.
However, one can't have simultaneously that (i) the late $\sfc$ qubits are made out of the early $\sfb$ qubits; (ii) the early $\sfb$ qubits are fully entangled with the early $\sfc$ qubits (up to small corrections); (iii) the late $\sfb$ qubits are fully entangled with the late $\sfc$ qubits (again up to small corrections); without having (iv) that the late $\sfb$ qubits are made out of the early $\sfc$ qubits. At least, if we are supposing that throughout the evolution the Hawking process is taking place at the horizon, so that we have (ii) and (iii).
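The clash between (i)--(iii) and a factorized Hilbert space is, at bottom, monogamy of entanglement, which can be checked directly in a toy model (our illustration; the qubit labels mirror the discussion). For any pure state of three qubits one has the identity $I(\sfb_{\rm early}{:}\sfc_{\rm early}) + I(\sfb_{\rm early}{:}\sfb_{\rm late}) = 2S(\sfb_{\rm early}) \le 2$ bits, so a maximally entangled early pair leaves the early $\sfb$ qubit no room to correlate with anything else, in particular with a late $\sfb$ qubit whose partner is supposed to be built from it:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def reduced(psi, keep):
    """Reduced density matrix of a 3-qubit pure state on the kept qubits."""
    t = psi.reshape(2, 2, 2)
    traced = tuple(i for i in range(3) if i not in keep)
    rho = np.tensordot(t, t.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def mutual_info(psi, a, b):
    S = lambda keep: vn_entropy(reduced(psi, keep))
    return S((a,)) + S((b,)) - S(tuple(sorted((a, b))))

# qubit 0 = early b, qubit 1 = early c, qubit 2 = late b (the would-be
# partner of 'late c = early b'); early pair in a Bell state
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
psi = np.kron(bell, np.array([0.6, 0.8]))

I_early = mutual_info(psi, 0, 1)   # early b : early c  -> 2 bits (maximal)
I_cross = mutual_info(psi, 0, 2)   # early b : late b   -> forced to vanish
```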
At late times, the early $\sfb$ and $\sfc$ qubits are in the bath according to the island scenario, completely decoupled from the CFT~-- the early $\sfb$ qubit because it was directly radiated into the bath through the boundary coupling, and the early $\sfc$ qubit is also supposed to have arrived in the bath as part of the island. So one would be saying that the late-time Hawking radiation process at the horizon is being carried out in free field theory in the bath, which seems absurd.
Furthermore, the $\sfb$ quanta are always in the entanglement wedge of the CFT, until they are dumped into the bath (for a single-sided black hole, where the QES is inside the horizon), so according to the island formalism, at the time of Hawking pair creation, the late $\sfb$ quanta cannot be made from early $\sfc$ quanta since the latter are supposed to be in the bath already, and quanta cannot simultaneously be in the CFT and in the bath. Ultimately, the proposal runs into the problem that the Hilbert space {\it is} factorized between the CFT and the bath, and so one can't implement the sort of identification of degrees of freedom necessary to make $\cH_\sfC\subset\cH_\sfR$ for the late-time Hawking process.
Finally, this whole discussion makes no provision for the radiation of the ${\mathsf a}$ qubits, the actual information in the black hole. The Hawking process itself isn't carrying away the original information in the state of the black hole, and the whole discussion seems to be an elaborate mechanism for repaying the entanglement deficit without actually radiating any information. So the radiation of these qubits would constitute additional quanta beyond those of the Hawking process, with the attendant problems discussed above.
\vskip .8cm
\noindent
\ref{sec:islands}.5~~{\em Something else}
\medskip
Perhaps there is some other mechanism that is being invoked in the statement that the black hole interior ``becomes part of the radiation''; if so, it would be helpful to know what it is. If the non-locality needed to evade the small corrections theorem lies in the Hamiltonian, the types of unitary evolution discussed above appear to cover the various options, and the conclusion seems to be that if the Hawking process of effective field theory is taking place at the black hole horizon, then one can detect from external observations the non-local processes required to override the random, incoherent nature of the Hawking process in order to restore purity of the final state. Introducing non-locality in the Hilbert space structure has its own set of issues.
\section{But what about \texorpdfstring{$\ldots$}{} ?}
\label{sec:whatabout}
A potential way to evade the constraints of the small corrections theorem is through the appearance of non-local interactions, such as those discussed above, which can communicate information in the black hole interior to the outside world. We have argued that such non-local interactions are problematic. We now come to the question of whether various other non-localities that have been discussed in the literature are up to the task.
Often the complicated nature of the holographic map between the bulk gravity and non-gravitational CFT sides of AdS/CFT duality is invoked as an essential ingredient of the resolution of the information paradox. In the island scenario, this complexity is thought to provide a means of hiding the island in plain sight in the radiation Hilbert space~\cite{Kim:2020cds}. But does this complication somehow bypass the effective small corrections theorem \ref{scta}-\ref{sctb} of~\cite{Guo:2021blh}?
Non-local effects that have been considered include the following:
\begin{enumerate}[start=1,
labelindent=\parindent,
leftmargin =1.7\parindent,
label=(NL\arabic*)]
\item
\label{code1}
The holographic map is a non-local encoding of bulk gravitational physics in a dual non-gravitational description.
A qubit localized in the AdS bulk is encoded non-locally over many qubits in the dual CFT, in much the same way that quantum information is stored non-locally in an error-correcting code. Does this form of non-locality allow us to evade the small corrections theorem?
\item
\label{code2}
This encoding is also believed to be ``state dependent'', {\it i.e.}\ the embedding of bulk effective field theory quanta into the CFT Hilbert space should depend on the microscopic details of the black hole microstate in a complicated way. Can this secretly embed information about the interior state of the black hole in the radiation?
\item
\label{code3}
Topology-changing processes can also be non-local phenomena that might create pathways (such as traversable wormholes~\cite{Gao:2016bin,Maldacena:2018lmt}) for information to leak out of black holes.
\item
\label{code4}
There is the possibility that smallness of matrix elements is compensated by the exponentially large number of states available to the black hole.
\item
\label{code5}
The non-factorizability of the Hilbert space of quantum field theory and of gauge/gravitational systems is also sometimes pointed to as a mechanism for resolving the paradox (see for instance~\cite{Raju:2020smc,Raju:2021lwh} for a recent discussion), as are non-locality/non-factorizability of gravitational dressing.
\item
\label{code6}
Yet another possibility is that the holographic map is only approximate. Semiclassical bulk configurations are coherent states that might be dramatically overcomplete in the state space of the black hole, as suggested for instance by the rapid growth of interior volume on nice slicings such as those of figure~\ref{fig:HawkingPair}. There might then be a non-local breakdown of effective field theory when the information capacity of the naive state space (given by the volume of the semiclassical bulk phase space) exceeds the size of the actual black hole interior state space of the exact theory.
\end{enumerate}
Let us consider each of these points in turn.
First of all, it is important not to conflate the two sides of the duality, nor to conflate the map between the two sides with the properties of either one. Let us assume for the moment that AdS/CFT is a true duality~-- that both sides of the duality admit an exact description, and holography is a statement of the equivalence of these two exact descriptions. The map between these two descriptions can be very complicated, with localized qubits in the AdS bulk mapping to logical qubits non-locally encoded in the CFT. But this is a non-locality of the {\it map}, not a non-locality or acausality of the dynamics {\it within} either side of the duality.
The fact that the holographic map embeds a bulk qubit by distributing it over many qubits in the CFT dual does not alter the fact that in the bulk description there is a $\sfb$ qubit that is entangled with the black hole interior via the Hawking process which, absent nonlocal and acausal effects {\it in the bulk description}, will not evolve in a way that decreases the von Neumann entropy of the black hole exterior subsystem, once the Hawking quantum leaves the vicinity of the black hole. These logical qubits may indeed be spread over many qubits in the exact CFT description, and some of them may be subject to fast scrambling dynamics that mixes them in some complicated way; nevertheless, if the holographic map is exact, and the Hawking process is hypothesized to be taking place at the horizon in the bulk description, then there will be an image under the map of the Hawking process in which there will be a logical $\sfb$ qubit fully entangled with a logical $\sfc$ qubit in the CFT at each step of the evaporation process. There will also be the images of the previously emitted logical $\sfb$ qubits, and these plus the next pair are all one needs to apply the small corrections theorem. Locality of the bulk dynamics dictates that soon after the $\sfb$ qubits leave the vicinity of the black hole, they decouple from the interior dynamics of the black hole, and the degrees of freedom in the CFT that represent it under the duality map. Both descriptions will have to obey the effective small corrections theorem for these logical $\sfb$ qubits, together with the next newly minted $\sfb,\sfc$ pair.
One is then forced into a situation where, if there is some CFT-plus-bath dynamics that purifies these logical $\sfb$ qubits, then running the holographic map the other way, it must be bulk-nonlocal.
Either the logical $\sfb$ qubits are bulk non-locally communicating with the black hole interior in a way that detectably changes their state as discussed in section~\ref{sec:islands}.1, or the boundary interaction~\eqref{Hint} magically acts to both transfer Hawking $\sfb$ qubits to the bath as well as ``island'' qubits $\sfc$ and ${\mathsf a}$ that are interior to the black hole, as discussed in section~\ref{sec:islands}.2. But this latter possibility will also be easily detectable in the bulk as a process by which the flux coming up the AdS throat differs dramatically from the flux entering the bath, and black hole thermodynamics is significantly modified from the Hawking predictions. If the holographic map is exact, then the bulk dynamics has a boundary image, and the small corrections theorem has a boundary image. All that the holographic map has done is to obfuscate the bulk dynamics via the complexity of that map.
The supposed state dependence of the encoding map is also a statement about the holographic map rather than the dynamics within a given side of the duality. It does not change the entanglement structure of the Hawking pairs, nor their von Neumann entropy within the bulk description, up to small corrections. Radiation of the information in the initial state of course requires state dependence, but the small corrections theorem tells us that state-dependence of the holographic map does not help. One needs the state of the radiation to depend on the state of the black hole, and this is what is ruled out by the small corrections theorem absent nonlocal processes {\it in the bulk}, if the Hawking process is occurring at the horizon.
We conclude that \ref{code1} and \ref{code2} are irrelevant to the problem at hand.
Topology-changing processes are an example of bulk non-local effects \ref{code3} that might transfer information out of the black hole interior~-- a particular mechanism for implementing the sorts of non-local dynamics discussed above. If they act so as to radiate additional quanta beyond those of the Hawking process, then this will affect the distribution of quanta in the radiation in detectable ways, as discussed above~-- the spectrum of quanta radiated into the bath differs significantly from the Hawking prediction. The possibility that the topology-changing processes don't affect the spectrum but rather gradually imprint the data of the initial black hole state onto the radiation was also discussed above; we saw that there are simple ways to tell if the states of the $\sfb$ qubits were swapped out for the ${\mathsf a}} \def\sfb{{\mathsf b}} \def\sfc{{\mathsf c}} \def\sfd{{\mathsf d}$ qubits on their way out to the AdS boundary.
Either way, an outside observer measuring the radiation will observe that the black hole is not radiating as an ordinary body.
Next, we come to the question \ref{code4} of whether the smallness of any individual correction to the semiclassical bulk picture is overwhelmed by the enormous number of black hole microstates. The problem is that this enormous number has to do with the internal structure of the black hole microstates, whereas the small corrections theorem involves the application of strong sub-additivity to subspaces of the radiated quanta that are far from the black hole and seemingly decoupled from the black hole interior. Any correction in which the large number of interior states of the black hole affects the radiated quanta amounts to a non-local effect of the sort discussed above~-- an order one mixing between the Hawking radiation and the black hole interior, that could be detected by outside observers making non-destructive measurements on the radiation.
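The inequality underlying the small corrections theorem is strong subadditivity, $S_{AB}+S_{BC}\ge S_{ABC}+S_B$, which holds for arbitrary mixed states regardless of the number of black hole microstates; it can be spot-checked numerically (our illustration, on a random three-qubit density matrix):

```python
import numpy as np

rng = np.random.default_rng(7)

def vn_entropy(rho):
    """Von Neumann entropy (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def partial_trace(rho, traced):
    """Trace a 3-qubit density matrix over the listed qubits."""
    t = rho.reshape(2, 2, 2, 2, 2, 2)   # axes (row0,row1,row2, col0,col1,col2)
    for q in sorted(traced, reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    d = int(round(np.sqrt(t.size)))
    return t.reshape(d, d)

# a random full-rank mixed state on qubits (A, B, C) = (0, 1, 2)
G = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
rho = G @ G.conj().T
rho /= np.trace(rho).real

S_AB = vn_entropy(partial_trace(rho, (2,)))
S_BC = vn_entropy(partial_trace(rho, (0,)))
S_ABC = vn_entropy(partial_trace(rho, ()))
S_B = vn_entropy(partial_trace(rho, (0, 2)))
```

The point of the theorem is that applying this inequality to the radiated quanta alone constrains the entropy's time dependence no matter how large the interior state space is.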
Regarding the non-factorizability of the Hilbert space \ref{code5}, while this is a feature of the ultraviolet structure of field theory it should not be an issue for the effective dynamics as expressed {\it e.g.}\ in bit-models of the evaporation process. The fact that the fields whose quanta are electrons or photons have a highly entangled ultraviolet vacuum structure, as well as a long-range gravitational (and for the electron, electromagnetic) dressing, does not prevent us from isolating the spin states of a few electrons in a magnetic trap or photons in a superconducting cavity and approximating their dynamics to a high degree of accuracy using a factorized Hilbert space description. These are the sort of small effects that are accommodated by the effective small corrections theorem.
Exotic forms of non-factorizability that might affect the analysis of effective bit-models of the evaporation process were dealt with in section~\ref{sec:islands}.4.
Corrections due to gravitational dressing are non-local and spoil factorizability.
However, such effects are purely kinematic in nature, and in 3+1d have been argued to be exponentially small at least in perturbation theory~\cite{Donnelly:2018nbv,Giddings:2021khn}. Basically, these tails of the wavefunctions of physical states carry very little information beyond global charges. Multipole moments of the fields that might distinguish microstates are highly suppressed in generic states (both above the black hole transition where the observables have to approximate the classical no-hair theorems, and at threshold where {\it e.g.}\ the BPS spectrum and its properties admit quantitative analysis~\cite{Balasubramanian:2005qu,Balasubramanian:2018yjq}).
In 1+1d, gravitational dressing is well understood.
In two-dimensional dilaton-gravity models, the gravity sector consists of the scale factor $\rho$ of the metric (so that the dynamical metric $g_{\alpha\beta}=e^{2\rho}\hat g_{\alpha\beta}$ is a Weyl rescaling of a fixed metric $\hat g_{\alpha\beta}$) together with a dilaton $\phi$. These can be traded for a pair of ``light-cone coordinates'' $X^\pm$ in field space via a canonical transformation.
Two-dimensional spacetime dynamics then has an alternate interpretation as the worldsheet of a macroscopic string (see for instance~\cite{Martinec:1996ad} for a review); gravitational dressing amounts to solving the physical state constraints of the string. These constraints amount to the projection of the causal structure in field space onto the worldsheet, and physical observables can be written in terms of so-called DDF operators~\cite{Schoutens:1993hu} (taking the matter sector to consist of free scalar fields~$f^i$)
\begin{align}
{\mathsf A}^i(p_+) = \int \!\frac{du}{2\pi} \, e^{ip_+ X^+} \partial_u f^i
~~,~~~~
\widetilde{\mathsf A}^i(p_-) = \int \!\frac{dv}{2\pi} \, e^{ip_- X^-} \partial_v f^i ~,
\end{align}
where $u,v$ are 2d null coordinates. These modes were shown to have an exchange algebra
\begin{equation}
\widetilde{\mathsf A}^i(p_-)\, {\mathsf A}^j(p_+) = e^{-\frac{i}{2\pi} p_+ p_-} {\mathsf A}^j(p_+) \, \widetilde{\mathsf A}^i(p_-) +R_{ij} (p_+,p_-) ~.
\end{equation}
Initially, it was thought that the non-commutativity of the in- and out-mode operators pointed towards a resolution of the information paradox in terms of complementarity between these sets of observables~\cite{Kiem:1995iy}. In hindsight, however, what was observed is simply the 1+1d shock wave S-matrix related to quantum chaos near the horizon~\cite{Shenker:2013pqa,Maldacena:2015waa,Lam:2018pvp}; information is still lost in these toy models (see for instance~\cite{Mertens:2019bvy,Moitra:2019xoj} for a recent discussion) because they obey the small corrections theorem and have no compensating non-locality in the field space ({\it i.e.}\ the target space in the macroscopic string analogy) where the physical dynamics takes place.
Finally, we come to the possibility \ref{code6} that the duality is only an approximation, and that while the CFT is a non-perturbative description of quantum gravity, the bulk theory is not. This supposition opens up a larger set of possibilities for the encoding of bulk AdS physics in the CFT. In particular the embedding of Hawking pairs in the Hilbert space of the exact description might not be an isometry,%
\footnote{It should be noted, however, that there is no evidence that AdS/CFT is only approximate rather than an exact duality. In fact, the impressive matching of bulk computations of supersymmetric black hole partition functions as well as the matching of complicated supergravity solutions to precise states in the dual CFT (see for example~\cite{Lunin:2001fv,Lin:2004nb,Dabholkar:2012nd,Dabholkar:2014ema,Cabo-Bizet:2018ehj,Benini:2020gjh}) seems to suggest otherwise.}
and it is suggested that the entangled Hawking state is not exact, but rather that there is a state-dependent embedding of it and the black hole interior into the exact description, which leads to the island prescription.
For instance, it was pointed out in~\cite{Akers:2021fut} that one could embed all the product states $\ket{\psi_i}$ of $n$ qubits in a Hilbert space of $m \sim \log n$ qubits in such a way that one captures all their inner products $\langle\psi_i|\psi_j\rangle$ to a high degree of accuracy. Since one can always write operators in terms of such a product basis
\begin{equation}
\cO = \sum_{ij} |\psi_i\rangle \cO_{ij} \langle\psi_j| ~,
\end{equation}
so long as there are not too many off-diagonal terms in the sum ({\it i.e.}\ the operator acts within a ``code subspace'' of the effective theory of approximately product states), one can well approximate such bulk operators in the exact description, because the evaluation of correlation functions reduces to sums over inner products that are well approximated in the much smaller Hilbert space of the exact theory. Of course, for states that are not close to product states, and operators that are not close to diagonal in the product state basis, matrix elements could involve enough terms in the product state basis that the errors in the approximation of the product states accumulate to the point that the bulk description (characterized by these approximate product states) breaks down.
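A classical toy version of this dimension count can be sketched numerically (our own illustration, not the construction of~\cite{Akers:2021fut}): a random linear projection in the spirit of the Johnson-Lindenstrauss lemma preserves the pairwise inner products of many unit vectors in a space of dimension far smaller than the original one. All sizes below are arbitrary illustrative choices.

```python
import numpy as np

# Classical analogue of the state-embedding count: a random projection
# preserves the pairwise inner products of many unit vectors in a space
# of much smaller dimension.  Sizes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n_states, d_big, d_small = 200, 4096, 256
states = rng.standard_normal((n_states, d_big))
states /= np.linalg.norm(states, axis=1, keepdims=True)

P = rng.standard_normal((d_small, d_big)) / np.sqrt(d_small)  # random projection
embedded = states @ P.T

G_exact = states @ states.T          # all pairwise inner products
G_approx = embedded @ embedded.T
err = np.max(np.abs(G_exact - G_approx))
# err stays well below 1 even though d_small << d_big: the inner products
# largely survive the compression, as in the qubit-counting argument above.
```

As in the text, the approximation degrades once an operator mixes too many of the (nearly) orthogonal vectors, since the small per-pair errors accumulate.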
Even so, once again the effective small corrections theorem bypasses these complications, and constrains what one might accomplish by giving up on exact duality between the bulk and boundary descriptions. All that is needed is that the exact description realizes the product state $\ket{\psi_{\rm pair}}^{\otimes \frac n2}$ with high fidelity, as it must if there is bulk effective field theory and the Hawking process is taking place at the horizon, up to small corrections. Then the effective small corrections theorem applies and tells us that there are entangled logical qubits involving the radiation quanta ${\mathsf b}$ which lead to a monotonically increasing von Neumann entropy of that radiation, absent significant, detectable non-localities in the bulk dynamics. As in the discussion of~\ref{code1}-\ref{code2} above, these various complexities, approximations and non-localities of the bulk-boundary map are a distraction from the problem at hand.
\section{Discussion}
\label{sec:discussion}
All the problems reviewed above trace back to the Hawking process and the desire to maintain effective field theory near the horizon.
Every Hawking pair created only adds to the problem we need to solve, because Hawking pairs are created in a unique state and therefore cannot carry information away from the black hole; instead, they generate additional entanglement entropy that must be disentangled by the end of the evaporation process.
The fundamental difference between the Hawking process and radiation from an ordinary body is that in the latter process the state of the radiated quanta depends on the state of the radiating object; in the notation we have been using, the ${\mathsf a}$ qubits are being emitted directly, rather than some random ${\mathsf b}$ qubits whose state is totally uncorrelated to and unentangled with that of the initial object because they are fully entangled with their partner ${\mathsf c}$ qubits. The (effective) small corrections theorem tells us that the game is over as soon as we postulate the effective field theory Hawking process as the means by which black holes radiate, provided that there is no subsequent non-local dynamical effect that reaches out and modifies the radiation state in some way that depends on the interior state of the black hole.
There is also the Occam's razor question of why the system bothers with the Hawking process to begin with, if AdS/CFT is capable of the sorts of non-localities needed to patch up the Hawking process; why not simply radiate the information directly using such non-local processes and forego the Hawking mechanism altogether? Why can't the non-locality support the storage of information at the horizon scale, and thus the ordinary radiation of that stored information from the horizon scale into the bulk?
The problem does not arise if black holes radiate as ordinary bodies do, but this requires some horizon scale microstate structure that can directly radiate ${\mathsf a}$ qubits in the way that ordinary bodies do from their surface. In the context of AdS/CFT, the boundary interaction~\eqref{Hint} then directly transfers ${\mathsf a}$ qubits to the bath. The Page curve then arises as it does for ordinary bodies~-- if the initial state is pure, then the radiation of ${\mathsf a}$ qubits first causes the entanglement entropy between the radiation and the remaining hole to rise, until half the ${\mathsf a}$ qubits are radiated, and then the entanglement entropy starts to fall because the subsequently radiated qubits are mostly entangled with the ones radiated earlier.
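This Page curve for a body radiating its own ${\mathsf a}$ qubits can be reproduced in a standard toy computation (a hedged numerical sketch with arbitrary parameters, not a calculation from the references above): for a Haar-random pure state of $n$ qubits, the entanglement entropy of the first $k$ ``radiated'' qubits rises until roughly half of them have been emitted, and then falls.

```python
import numpy as np

# Toy Page curve: for a Haar-random pure state of n qubits, the entanglement
# entropy of the first k "radiated" qubits rises until k ~ n/2, then falls.
rng = np.random.default_rng(1)
n = 10
dim = 2**n
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

entropies = []
for k in range(1, n):
    M = psi.reshape(2**k, 2**(n - k))        # bipartition: first k qubits vs rest
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    pvals = s**2
    pvals = pvals[pvals > 1e-15]
    entropies.append(float(-np.sum(pvals * np.log2(pvals))))
# entropies rises to a maximum near k = n/2 and then falls, as for the
# radiation from an ordinary body with a pure initial state.
```

The late-time decrease arises precisely because the later ``radiated'' qubits are mostly entangled with the earlier ones, which is the behavior the Hawking pair state by itself cannot produce.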
This logic in part motivates the fuzzball proposal. Rather than invoking non-local processes that act on arbitrarily distant degrees of freedom, in the fuzzball scenario one only requires effects that act at the horizon scale. More precisely, rather than non-locality, the fuzzball scenario proposes that the quantum wavefunction of the black hole constituents is quantum coherent over the horizon scale. One gives up the Hawking process at the horizon, as it is the source of all the subsequent difficulties we have seen above.
One might then ask, absent the Hawking process, how does one manage to recover all the usual thermodynamic properties of black holes? Naively it seems that it is the structure of the vacuum geometry and its smoothness at the horizon that lead to those properties. This includes the calculation of the black hole temperature that results from the Hawking process. Don't we lose all this structure if we give up vacuum physics at the horizon?
The answer to this question lies in the very emergence of gravity as the effective theory {\it outside the horizon}. It is the effective gravity theory that determines the equation of state {\it e.g.}\ via the smoothness of the Euclidean continuation of the exterior geometry~\cite{Gibbons:1976ue}; or alternatively via the boost symmetry of the exterior static geometry~\cite{Wald:1993nt} in Lorentz signature. Ultimately the thermodynamic properties arise from the emergent diffeomorphism symmetry of the effective theory. The entropy is a Noether charge related to the near-horizon boost symmetry; the temperature is given by the surface gravity and thus also determined by the symmetry.
Whatever the corrections to the emergent low-energy effective theory and its diffeomorphism symmetry, it is likely that those effects are confined to a small ``stretched horizon'' that extends slightly outside the horizon of the black hole solution of the effective theory. One can then compare the exact theory which includes those corrections to the solution of the effective theory ({\it e.g.}\ supergravity), and they will match closely outside the stretched horizon. For instance, in the Euclidean solution we would be cutting a small disc out of the $r$-$t$ plane near the Euclidean continuation of the black hole horizon, and the exact and supergravity solutions match well in the region outside that disk. Filling in that disk with a smooth geometry solving the supergravity field equations, and thereby extrapolating the effective supergravity geometry all the way to its horizon, makes a mistake relative to the exact theory that is small when the black hole is sufficiently large and semi-classical.
An illustrative example of this behavior is the black fivebrane, whose Euclidean geometry takes the form of the gauged WZW model $\frac{\ensuremath{SL(2,\bR)}}{U(1)}$ in the $r$-$t$ plane
\begin{equation}
ds^2 = k\big(dr^2 + \tanh^2\!r\,dt^2\big)
\end{equation}
(where $k$ is the level of the underlying $\ensuremath{SL(2,\bR)}$ current algebra of worldsheet string theory).
This geometry is inextricably linked to a stringy condensate concentrated within a string length of $r=0$, whose properties are precisely known due to a property of the coset model known as FZZ duality (see for instance~\cite{Giveon:2016dxe} for an overview), yet this feature does not affect the thermodynamics at leading order~\cite{Kutasov:2000jp,Giveon:2005jv} in the large $k$ limit.
In gauge/gravity duality, the onset of the black hole regime is associated to a Hawking/Page phase transition on the gravity side, dual to a deconfinement transition on the gauge theory side. The obvious candidate for horizon scale ``fuzz'' is the deconfined phase, which would shut off at the horizon scale but characterizes the black hole interior. The black hole would then be a compact object composed of this exotic phase of matter, radiating like an ordinary body. Its entropy, temperature, and other thermodynamic properties are guaranteed to match those predicted by general relativity because the exterior geometry is unchanged (up to small corrections), and the symmetries of that effective geometry determine the thermodynamics regardless of what exotic phase governs the black hole interior.
If one is giving up vacuum physics at the horizon scale, a natural followup question concerns the experience of the infalling observer who crosses the horizon into the fuzzball. This is a question about the response function of ``fuzz'' to probes of high transverse momentum. We don't know enough about the brane dynamics that constitutes ``fuzz'' in the relevant dense and strongly coupled regime to answer this question. The ``firewall'' scenario~\cite{Almheiri:2012rt} argues for a stiff response function; the idea of ``fuzzball complementarity''~\cite{Mathur:2010kx,Mathur:2012jk,Avery:2012tf} suggests the possibility of a soft response function at high transverse momenta. The latter idea accords with what we know of string theory at weak coupling and low density, where large transverse momentum exchanges over short periods of time are avoided~\cite{Gross:1987ar}. This soft response scenario leads to a sort of ``bag model'' picture of the quantum black hole, in which a confined exterior (whose excitations are gauge singlet supergravitons) makes an abrupt transition to a deconfined interior. A probe crossing the phase boundary might not experience strong kicks but instead fragment over some penetration depth over which it scrambles and eventually thermalizes (like a QCD meson entering a quark-gluon plasma and forming a jet). The debate about firewalls versus fuzzball complementarity is essentially the question of what the penetration depth is relative to the horizon scale. However, we should stress that these are simply informed speculations as to the nature of ``fuzz''; one hopes that someday such speculations will be supplanted by reliable calculations.
\section*{Acknowledgements}
I thank
G. Penington,
M. Rangamani,
and especially
S. Mathur
for discussions.
This work is supported in part by DOE grant DE-SC0009924.
\section{Introduction}
It has been known for more than a century that light carries linear and angular momentum \cite{Poynting1884,Loudon2012} in addition to energy. When light is scattered or absorbed by a particle, the transfer of momenta can cause the particle to move and/or to rotate. Thus light can be used to manipulate particles, molecules or mesoscale objects in general \cite{Grzegorczyk2006,Marago2013,Spesyvtseva2016,Gao2017}. Direct manipulation of objects through light-induced forces has led to formidable progress which has been impacting research in many areas ranging from ultra-cold matter physics \cite{Letokhov2007}, biology \cite{Ashkin1987,Svoboda1994}, microfluidics \cite{Rodrigues2017,Paie2018}, optical printing \cite{Gargiulo2017}, to optical engineering \cite{Li2008,Renaut2013,Qiu2014,Ma2012,Merklein2017} among other fields. For example, demonstration of levitation and trapping of micron-sized particles by radiation pressure dates back to 1970 \cite{Ashkin1970}. Since the 90's, even new states of matter have been conceived with manipulation by optical forces \cite{Burns1990}. In particular, one important consequence of electromagnetic two-particle interaction is the optical binding (OB) first noted by Burns et al. \cite{Burns1989,Figliozzi2017}. Two particles excited by a common field can form a bound dimer with stable or pseudo-stable positions \cite{Johnson1993,Figliozzi2017,Nan2018}.
Equally important to obtaining such manipulation forces is the development of proper theoretical/predictive models. In particular, the optical forces exerted on a small, single scatterer are typically computed as the contributions due to its dipole moments \cite{Gordon1973}. In this regime, two force components are usually found, namely the gradient component and the scattering force \cite{Chaumet-Nieto2000}. Moreover, this last component is usually associated with the radiation pressure exerted by light on the system \cite{Novotny2012}. These components can be easily distinguished when the light is, for instance, a single plane wave. However, non-trivial contributions can exist for ``structured'' light carrying spin densities \cite{Mole2009}. This means that the more complex the electromagnetic field around the system, the more complex the exerted force and torque components become to model \cite{AndrewsBookStrLight,Nieto2015,Nieto2015b}.
For coupled systems such as dimers, there are no analytical formulations of the force far beyond the dipole-dipole coupling, even for illumination as simple as plane waves \cite{Kall2010,Albella2013,Bakker2015}. In the case of two bodies, it is commonly assumed that the system can undergo binding forces in addition to the scattering forces, both kinds generated by field gradients in response to the incident field \cite{Dholakia2010}. The former are contributions that attract or repel the particles from each other, and the latter, or radiation pressure, push the system in the forward direction with respect to the direction of the incident wave. Moreover, the binding forces exerted on the system are expected to be compensated in symmetric or reciprocal systems, as occurs for a homodimer system, i.e. two coupled and equal spheres made of an isotropic material, under symmetric illumination. Many works have studied the optical forces exerted on dimers under the influence of light fields \cite{Chaumet2001,Sukhov2015,Kostina2017,Kall2010,Liaw2016}. In particular, Haefner et al. \cite{Haefner2009} discovered the presence of spin torques in homodimers of nanospheres illuminated symmetrically with plane waves having linear or circular polarization. However, many new degrees of freedom are found in this work for the movement of electromagnetically coupled systems, with general validity irrespective of material and shape properties. A complete scheme of the induced torques under linearly-polarized incident waves is reported here. Additional spin contributions to the movement of the system are found beyond the usual exerted forces and torques predicted in \cite{Haefner2009}. These induced, non-trivial torques are a product of the multiple interactions occurring in the system; there is no need to use complex helical fields to obtain angular momentum transfer.
The spin motors or ``nanorotators'' have traditionally been discussed on the basis of optical traps created with circularly polarized light or vortex beams \cite{Dienerowitz2008,Jones2009}.
A well-known method of discrete dipoles, namely the discrete dipole approximation (DDA), is used to perform our calculations \cite{DeSousa2016,Draine1988,Draine1994,Yurkin2007,AbrahamE2017}. For the sake of simplicity, we focus the study on the response of dimers of silica nanospheres under three configurations of varying illumination. With the aim of giving a simple explanation of the phenomena, the results are compared with similar results obtained for simplified systems consisting of a few dipole moments. With this procedure, it is deduced how these new torques appear naturally as a consequence of the inhomogeneous inner fields that are induced in the system in reaction to the incident wave.
Similar results have been reported recently for two-dimensional systems of bound cylinders calculated with an integral formulation \cite{AbrahamE2016,AbrahamE2018_Ag,AbrahamE2018_Si}. As the present work uses a different method of solving the Maxwell equations and a different dimensionality, the results of this paper complete those previous studies with a generalization of the phenomena that is independent of the materials, dimensions, and geometries involved.
This paper also shows that, in general, retardation effects cannot be neglected in a scattering problem. Even when only a few multipolar terms are not taken into account, the dynamics of the system under time-harmonic fields can be seriously affected. In this way, the results play a key role in the correct design of optical nanoscale devices and nanorotators, because they predict new dynamics of the systems as a first approximation of their movements. Although an exact electromagnetic method is used here, neither thermal nor Brownian forces are considered \cite{Albaladejo2011}. Also, no ``dynamic'' forces are calculated, i.e. forces that take into account initial velocities and accelerations of the particles \cite{Grzegorczyk2006dynamics}. Thus, no complete dynamics is obtained for the system. Yet, the new dynamical features presented here are essential for the functionality and efficiency of small optical devices and they should be taken into account for future applications involving photonic forces.
\section{Method}
The DDA is a well-known method for the resolution of electromagnetic scattering problems. It solves the problem by dividing the scatterer's domain into small subvolumes which respond to the electromagnetic field through induced dipole moments. Small non-magnetic particles or subvolumes develop only an electric dipole moment in response to the light's electric field. A complete version of the DDA can be found, for instance, in Refs.~\cite{DeSousa2016,AbrahamE2017}. Let us summarize the method we require here for our calculations, which involve isotropic and non-magnetic materials responding to a time-harmonic field. In the absence of currents inside the object, the electric field is given by the solution of the volume-integral equation \cite{Novotny2012}
\begin{equation}
\label{eq-VIE}
\mathbf E(\mathbf r) = \mathbf E_0(\mathbf r) + k^2_0 \int_{V} \hat G(\mathbf r, \mathbf r^{\prime})
[\epsilon(\mathbf r) -1] \mathbf E(\mathbf r^{\prime}) d\mathbf r^{\prime} .
\end{equation}
Here, $\mathbf E_0(\mathbf r)$ is the electric field of the incident wave, $k_0 = \omega/c$ is the magnitude of the vacuum wave vector, $V$ is the volume of the object, and $\hat G(\mathbf r, \mathbf r^{\prime})$ is the vacuum dyadic Green tensor \cite{Novotny2012}.
In the DDA approach, the previous integral equation is solved by discretizing the volume $V$ as
$V= \sum^N_{n=1} V_n$, where $V_n$ is the volume of a homogeneous region where the electric
field is assumed to be constant. Thus, Eq.~(\ref{eq-VIE}) now reads
\begin{equation}
\label{eq-VIE-DDA}
\mathbf E(\mathbf r) = \mathbf E_0(\mathbf r) + k^2_0 \sum_n \hat G(\mathbf r, \mathbf r_n)
\left(\epsilon(\mathbf r_n) -1\right) \mathbf E(\mathbf r_n) V_n .
\end{equation}
Defining the dipole moments as
\begin{equation}
\mathbf p_n = \epsilon_0 V_n \left(\epsilon(\mathbf r_n) -1\right) \mathbf E(\mathbf r_n) ,
\end{equation}
we can rewrite Eq.~(\ref{eq-VIE-DDA}) as
\begin{equation}
\label{eq-VIE-DDA-p}
\mathbf E(\mathbf r) = \mathbf E_0(\mathbf r) + \frac{k^2_0}{\epsilon_0} \sum_n
\bar{\hat G}(\mathbf r, \mathbf r_n) \mathbf p_n ,
\end{equation}
where
\begin{equation}
\bar{\hat G}(\mathbf r, \mathbf r_n) = \frac{1}{V_n} \int_{V_n} \hat G(\mathbf r, \mathbf r^{\prime})
d\mathbf r^{\prime} .
\end{equation}
It can be shown that \cite{DeSousa2016}
\begin{equation}
k^2_0 \bar{\hat G}(\mathbf r, \mathbf r_n) \approx
k^2_0 \hat G(\mathbf r, \mathbf r_n) \;\; \mbox{if} \; \mathbf r \notin V_n
\end{equation}
and
\begin{eqnarray}
k^2_0 \bar{\hat G}(\mathbf r, \mathbf r_n) & \approx &
-\hat L_n/V_n + i k^2_0 \mbox{Im} \{ \hat G(\mathbf r_n, \mathbf r_n) \} = \\
& & -\hat L_n/V_n + i k^3_0/(6\pi) \hat 1 \;\; \mbox{if} \; \mathbf r \in V_n .
\end{eqnarray}
Here, $\hat L_n$ is the so-called electrostatic depolarization dyadic
that depends on the shape of the volume element $V_n$ \cite{Lakhtakia1992,Yaghjian1980}. For cubic volume
elements, the depolarization tensor is diagonal: $\hat L_n = (1/3) \hat 1$.
Thus, we can now rewrite Eq.~(\ref{eq-VIE-DDA-p}) for the internal field,
$\mathbf E_n \equiv \mathbf E(\mathbf r_n)$, as follows
\begin{eqnarray}
\label{eq-En}
\left[ \hat 1 + \left( \hat L_n - iV_n \frac{k^3_0}{6\pi} \hat 1 \right) [\hat \epsilon_n
- \hat 1 ] \right] \mathbf E_n & = & \mathbf E_{0,n} + \nonumber \\
k^2_0 \sum_{m \neq n} \hat G_{nm} [\hat \epsilon(\mathbf r_m) - \hat 1] V_m
\mathbf E_m , & &
\end{eqnarray}
where $\mathbf E_{0,n} \equiv \mathbf E_0(\mathbf r_n)$, $\hat \epsilon_n \equiv
\hat \epsilon(\mathbf r_n)$ and $\hat G_{nm} \equiv \hat G(\mathbf r_n, \mathbf r_m)$.
The left-hand side of Eq.~(\ref{eq-En}) can be defined as the exciting field
$\mathbf{E}_{\rm exc}(\mathbf r_n)$, i.e., the field that excites the $n$-volume element.
Now, defining the polarizability of the $n$-volume element, as
\begin{equation}
\label{eq-alpha-sphere}
\alpha_n = \frac{\alpha_{0,n}}{1- ik^3_0 \alpha_{0,n}/(6\pi)},\,\,
\alpha_{0,n} = 3 V_n \left( \frac{\epsilon_n -1}{\epsilon_n + 2} \right),
\end{equation}
where $\alpha_{0,n}$ is the quasistatic polarizability, Eq.~(\ref{eq-En}) can be rewritten as
a set of coupled dipole equations for the exciting fields at each element
\begin{equation}
\label{eq-Eexc}
\mathbf E_{{\rm exc},n} = \mathbf E_{0,n} +
k^2_0 \sum^N_{m \neq n} \hat G_{nm} \alpha_m \mathbf E_{{\rm exc},m} .
\end{equation}
It is worth stressing that this DDA formulation includes automatically the so-called
radiative corrections \cite{Sipe1974,Belov2003,Albaladejo2010}, which are related to the
imaginary part of the Green tensor, and it is thus fully consistent with the optical
theorem. On the other hand, from the solution of Eq.~(\ref{eq-Eexc}), which constitutes
a set of 3$N$ coupled linear equations for the exciting fields, one can get the dipole
moments and the total internal fields as follows
\begin{eqnarray}
\label{eq-pn}
\mathbf p_n & = & \epsilon_0 \alpha_n \mathbf E_{{\rm exc},n} \\
\label{eq-En-pn}
\mathbf E_n & = & \frac{1}{\epsilon_0 (\epsilon_n - 1) V_n}\mathbf p_n .
\end{eqnarray}
From the knowledge of the dipole moments and the internal fields, one can easily compute the different cross sections (scattering, absorption, and extinction) of the object or the system in question. In particular, the extinction cross section, which we use below, can be obtained as follows. Assuming a plane-wave illumination, $\mathbf E_0(\mathbf r) = \mathbf E_0 e^{i\mathbf k_0 \cdot \mathbf r}$, the extinction cross section is given by \cite{DeSousa2016}
\begin{equation}
\label{eq-Cext}
C_{\rm ext} = \frac{k_0}{\epsilon_0 |\mathbf E_0|^2} \sum^N_{n=1}
\mbox{Im} \left\{\mathbf{E}^{\ast}_0(\mathbf{r}_n)\cdot\mathbf p_n \right\} .
\end{equation}
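To make the structure of Eqs.~(\ref{eq-Eexc})--(\ref{eq-Cext}) concrete, the following sketch solves the coupled-dipole system for the minimal case of two subwavelength spheres represented by a single point dipole each (a toy with arbitrary sizes, $\epsilon_0=1$ and $|\mathbf E_0|=1$; the actual DDA calculations in this work discretize each sphere into many cubic subvolumes):

```python
import numpy as np

def green(k, r1, r2):
    """Free-space dyadic Green tensor G(r1, r2) (units with eps0 = 1)."""
    R = r1 - r2
    d = np.linalg.norm(R)
    n = R / d
    kR = k * d
    pref = np.exp(1j * kR) / (4 * np.pi * d)
    t1 = 1 + (1j * kR - 1) / kR**2        # part proportional to the identity
    t2 = (3 - 3j * kR - kR**2) / kR**2    # part along the separation direction
    return pref * (t1 * np.eye(3) + t2 * np.outer(n, n))

def solve_cda(k, pos, alpha, E0):
    """Solve E_exc,n = E0,n + k^2 sum_{m != n} G_nm alpha_m E_exc,m."""
    N = len(pos)
    A = np.eye(3 * N, dtype=complex)
    for n in range(N):
        for m in range(N):
            if m != n:
                A[3*n:3*n+3, 3*m:3*m+3] -= k**2 * alpha[m] * green(k, pos[n], pos[m])
    Eexc = np.linalg.solve(A, E0.ravel()).reshape(N, 3)
    p = alpha[:, None] * Eexc             # dipole moments p_n = alpha_n E_exc,n
    return Eexc, p

# Two small silica spheres treated as single point dipoles (toy sizes),
# plane wave along z, linearly polarized along x.
wavelength = 600e-9
k = 2 * np.pi / wavelength
a = 30e-9                                 # sphere radius, subwavelength
eps = 1.59**2
V = 4 / 3 * np.pi * a**3
alpha0 = 3 * V * (eps - 1) / (eps + 2)    # quasistatic polarizability
alpha = alpha0 / (1 - 1j * k**3 * alpha0 / (6 * np.pi))  # radiative correction
pos = np.array([[0.0, -100e-9, 0.0], [0.0, 100e-9, 0.0]])
E0 = np.array([np.array([1.0, 0.0, 0.0]) * np.exp(1j * k * r[2]) for r in pos])
Eexc, p = solve_cda(k, pos, np.array([alpha, alpha]), E0)
Cext = k * np.sum(np.imag(np.sum(np.conj(E0) * p, axis=1)))   # Eq. for C_ext
```

For this symmetric homodimer under symmetric illumination, the two induced dipole moments coincide, and the extinction cross section is positive, as required for a passive system.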
To calculate the net time-averaged optical force exerted on a body or on a system, as with the calculation of the optical cross sections, we must use the proper dipole moments that represent each scatterer region in the frame of the DDA. We consider here that each body $b$ is composed of $N_b$ dipole moments, and then the subindex $n$ runs only over the dipoles of the chosen body, i.e. $n=1,2,\ldots, N_b$. Notice, on the other hand, that in Eq.~(\ref{eq-Eexc}) we use $N$, the number of dipole moments in the whole system. This number accounts for the multiple interactions between all the dipole moments. In general, $N\neq N_b$ for several coupled bodies. In this way, the components of the net force exerted on the body $b$ in question are represented by
\begin{eqnarray}
\label{eq-Fb}
F_{i,b}=\sum^{N_b}_{n=1}F_{n,i}
\end{eqnarray}
The $i$-component of this force, $F_{n,i}$, can be obtained from the time-averaged force on a particle within the Rayleigh approximation \cite{Chaumet-Nieto2000}. This is
\begin{eqnarray}
\label{eq-FcDDA}
F_{n,i}=\frac{1}{2}\,\mbox{Re}\left\{\mathbf{p}^t_n\,\partial_i\mathbf{E}^{*}(\mathbf{r},\omega)\big|_{\mathbf{r}=\mathbf{r}_n}\right\}
\end{eqnarray}
The derivatives of the total field, $\partial_i\mathbf{E}(\mathbf{r},\omega)|_{\mathbf{r}=\mathbf{r}_n}$, at the positions $\mathbf{r}_n$ of the dipoles of body $b$ can be obtained from the DDA equations~(\ref{eq-Eexc}), giving \cite{Chaumet2007}:
\begin{eqnarray}
\label{eq-dE-DDA}
\partial_i\mathbf{E}(\mathbf{r},\omega)\big|_{\mathbf{r}=\mathbf{r}_n}=\partial_i\mathbf{E}_0(\mathbf{r},\omega)\big|_{\mathbf{r}=\mathbf{r}_n}+ \nonumber \\
+ \frac{k^2_0}{\epsilon_0}\sum^N_{m=1, m\neq n}\left(\partial_i\hat G(\mathbf{r},\mathbf{r}_m)\right)\Big|_{\mathbf{r}=\mathbf{r}_n}\,\mathbf{p}_m
\end{eqnarray}
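As a minimal illustration of the force formula above, consider a single Rayleigh particle in a one-dimensional standing wave, with the field derivative evaluated numerically (a self-contained toy with arbitrary parameters and $\epsilon_0=|\mathbf E_0|=1$, not the full DDA evaluation through the derivatives of the Green tensor):

```python
import numpy as np

# Single Rayleigh particle in a 1D standing wave: the force component
# F_i = (1/2) Re{ p . d_i E* }, with the derivative taken numerically.
wavelength = 600e-9
k = 2 * np.pi / wavelength
a = 30e-9
eps = 1.59**2
V = 4 / 3 * np.pi * a**3
alpha0 = 3 * V * (eps - 1) / (eps + 2)
alpha = alpha0 / (1 - 1j * k**3 * alpha0 / (6 * np.pi))

def E_field(z):
    """Standing wave along z, linearly polarized along x (|E0| = 1)."""
    return np.array([np.cos(k * z), 0.0, 0.0], dtype=complex)

def force_z(z, h=1e-11):
    p = alpha * E_field(z)                          # induced dipole moment
    dE = (E_field(z + h) - E_field(z - h)) / (2 * h)
    return 0.5 * np.real(np.dot(p, np.conj(dE)))
```

The resulting gradient force vanishes at the antinode $z=0$ and pulls the particle back toward the intensity maxima on either side, reproducing the trapping component discussed above.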
The optical torques can also be calculated from the DDA method, as given in Ref.~\cite{Chaumet2009}:
\begin{equation}
\label{eq-TqDDA}
\mathbf{N}_b=\sum^{N_{b}}_{n=1} \mathbf{r}_{n} \times \mathbf{F}_{n} + \frac{1}{2}\sum^{N_{b}}_{n=1} Re[\mathbf{p}_{n} \times (\mathbf{p}_n/\alpha_{0,n})^{*}]
\end{equation}
The first term of Eq.~(\ref{eq-TqDDA}) is the so-called extrinsic torque and the second term is the intrinsic torque. Their significance has been discussed in Refs.~\cite{Nieto2015,Nieto2015b,Chaumet2009}, among others, but the main difference between them lies in the \textit{spatial} dependence of the first term. It is important to mention that $\mathbf{N}_b$ can represent either an orbital or a spin torque (do not confuse the torque $\mathbf{N}_b$ with the number of dipole moments of the body, $N_b$). The spin or orbital character of the torque of Eq.~(\ref{eq-TqDDA}) depends on the choice of the reference system used to define the positions $\mathbf{r}_n$ of the dipole moments that compose the examined body. We deal with spin torques when the associated positions are taken with respect to the center of each body. Otherwise, we assume that the reference system is located at the center of mass of the whole system and we then obtain orbital torques. The forces involved in the calculation of the extrinsic torque are the forces exerted on each dipole $n$ that composes the particle $b$.
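The intrinsic term of Eq.~(\ref{eq-TqDDA}) can be isolated in a minimal sketch: a single small absorbing sphere in a circularly polarized plane wave acquires a spin torque along the propagation axis (a toy with arbitrary parameters and $\epsilon_0=|\mathbf E_0|=1$; the dimer torques reported in this work arise instead from multiple scattering under \textit{linear} polarization, which a single-dipole example cannot capture):

```python
import numpy as np

# Intrinsic spin torque on a single small absorbing sphere in a circularly
# polarized plane wave: N = (1/2) Re[ p x (p / alpha0)* ], propagation along z.
wavelength = 600e-9
k = 2 * np.pi / wavelength
a = 30e-9
eps = (1.59 + 1e-7j)**2                  # weakly absorbing silica
V = 4 / 3 * np.pi * a**3
alpha0 = 3 * V * (eps - 1) / (eps + 2)   # quasistatic polarizability (complex)
alpha = alpha0 / (1 - 1j * k**3 * alpha0 / (6 * np.pi))

E = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)   # circular polarization
p = alpha * E                                  # induced dipole moment
N_spin = 0.5 * np.real(np.cross(p, np.conj(p / alpha0)))
# N_spin points along z (the spin axis of the incident field); it vanishes
# for linear polarization or for a perfectly lossless particle.
```

For linear polarization the same expression vanishes identically, so a single dipole alone cannot account for the torques under linearly polarized illumination discussed below.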
\section{Results}
The central configuration of this study can be seen in Fig.~\ref{fig:1_GralConfig}. A homodimer of silica spheres is positioned with its axis parallel to the $y$-axis of a rectangular coordinate system. The gap is $d$ and the radii of the spheres are $R=240$ nm. The incident wave approaches with a wavevector $\mathbf{k}_0$ which is determined by the angles $\theta$, $\phi$ of the spherical coordinates. The incident wave is fully determined by the incident wavelength $\lambda=2\pi/k_0$ and a third angle $\zeta$, the polarization angle (the electric field is highlighted in blue; see also the inset scheme). Silica nanospheres are simulated with a refractive index $n=1.59 + i 10^{-7}$. \\
\textit{-Single Nanosphere} \\
In order to get a rapid insight into the spectral response of silica nanosphere systems, Fig.~\ref{fig:2_SingleSph} shows the optical response of a single silica nanosphere. For this study, the single sphere is located at the center of the coordinate system of Fig.~\ref{fig:1_GralConfig}. In the extinction curve, we can see multiple excitations due to the presence of morphology-dependent resonances (MDRs) \cite{VanBladel1977,Ng2005}. As silica has a low refractive index, the resulting MDRs overlap in the optical region. At high energies, many modes appear in the spectrum and the overlapping is greater than at low energies. The spectrum was calculated with \#1791 dipole moments.
Although some changes are expected in the optical response of coupled silica nanospheres, the spectral structure may be quite similar in shape to the spectra of single spheres. This is somewhat expected because the optical response of silica spheres is dominated by the presence of the MDRs, which are volume resonances \cite{AbrahamE2018_Si}. To illustrate the volume nature of the MDRs, some inner field maps were calculated (see inset graphics) for an arbitrary value of the wavelength, i.e. $\lambda=600$ nm. This particular value has been chosen because it corresponds to a spectral position where several overlapped modes can be excited; see the extinction behavior at this wavelength. The maps were calculated with \#2553 dipole moments.
Notice that the multiple scattering between all the dipoles gives rise to an inhomogeneous inner field, but for a single sphere this field is always symmetric with respect to the incident wave field, see Fig.~\ref{fig:2_SingleSph}(b-d). The resultant dipoles are, of course, oriented along the electric field or polarization vector (not shown), but they generate symmetric forces that together yield the typical scattering component exerted on the sphere, i.e. the radiation pressure. This fact will be contrasted with the results obtained for the dimer system, leading to interesting conclusions with regard to the motion of the system.
Below, the optical response of dimers of silica spheres is explored in terms of the mechanical magnitudes. The results are scaled by appropriate factors, but they represent forces of the order of piconewtons when the intensity of the illumination reaches a few $\mathrm{mW}/\mu \mathrm{m}^2$. In the same way, the obtained torques reach the order of $\mathrm{pN\,nm}$ when the same order of illumination intensity is used.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=7cm,height=7cm,keepaspectratio]{figure1.eps}
\par\end{centering}
\caption{\label{fig:1_GralConfig}Scheme of the reference system and the geometrical configuration of the problem. The angles $\theta,\phi$ of the incident wave correspond to the spherical coordinate system. The inset graph shows the definition of the polarization angle $\zeta$. The electric-field vector is highlighted in blue.}
\end{figure}
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure2a.eps}
\includegraphics[width=3cm,height=3cm,keepaspectratio]{figure2b.eps}
\includegraphics[width=3cm,height=3cm,keepaspectratio]{figure2c.eps}
\includegraphics[width=3cm,height=3cm,keepaspectratio]{figure2d.eps} \\%-d-
\par\end{centering}
\caption{\label{fig:2_SingleSph}Morphology-dependent resonances in a silica nanosphere of $R=240$ nm. (a) Spectral curve of the extinction efficiency, $Q_{ext}$. (b)-(d) Maps of the relative module of the inner electric field, $|\mathbf{E}/\mathbf{E}_0|$, calculated at $\lambda=600$ nm with \#2553 dipole moments; (b) cut $x,y,z=0$, (c) cut $x,y=0,z$ and (d) cut $x=0,y,z$. The arrows indicate the direction of the vectors of the incident wave (not to scale).}
\end{figure}
\textit{-Dimer: Spectral variation.} \\
In order to study the spectral forces exerted on silica dimers, a first example has been taken with a gap of $d=500$ nm (Fig.~\ref{fig:3_config1_DimerFcs}). The incident wave is assumed to have a direction given by $\mathbf{k}_0=k_0\mathbf{\hat{z}}$ and a polarization $\mathbf{E}_0=E_0\mathbf{\hat{y}}$. The whole system experiences binding forces, panel~\ref{fig:3_config1_DimerFcs}(a), due to the electromagnetic coupling between the spheres and the incident field while, at the same time, the system is also pushed by radiation pressure, panel~\ref{fig:3_config1_DimerFcs}(b), along the forward direction. The binding force is defined as $\Delta=F_{1y} - F_{2y}$, the difference between the force components along the axis of the dimer. On the other hand, the scattering force is defined as $F_z=F_{z1} + F_{z2}$, the total force corresponding to the force exerted on the center of mass of the system. The forces are scaled by the magnitude $3V_nk_0u_E$, where $u_E=\frac{1}{2}\epsilon_0|\mathbf{E}_0|^2$ is the electric energy density of the incident field and $V_n$ is the subvolume of the discretization used in the corresponding calculation.
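The definitions above translate directly into code. The sketch below uses hypothetical per-sphere force vectors `F1`, `F2` purely as placeholders; only the formulas ($\Delta=F_{1y}-F_{2y}$, $F_z=F_{z1}+F_{z2}$, and the scale $3V_nk_0u_E$) come from the text:

```python
import numpy as np

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m

def force_scale(V_n, k0, E0):
    """Scaling factor 3 V_n k0 u_E with u_E = (1/2) eps0 |E0|^2."""
    u_E = 0.5 * EPS0 * abs(E0) ** 2
    return 3.0 * V_n * k0 * u_E

def binding_and_scattering(F1, F2):
    """Binding force Delta = F1y - F2y and scattering force Fz = F1z + F2z."""
    F1, F2 = np.asarray(F1), np.asarray(F2)
    delta = F1[1] - F2[1]        # along the dimer axis (y)
    Fz = F1[2] + F2[2]           # net push on the center of mass (z)
    return delta, Fz

# hypothetical per-sphere forces in newtons, (x, y, z) components
delta, Fz = binding_and_scattering([0.0, 1.2e-12, 4.0e-12],
                                   [0.0, 1.5e-12, 4.1e-12])
# here delta < 0, i.e. an attractive configuration, while Fz > 0 pushes forward
```

Dividing `delta` and `Fz` by `force_scale(V_n, k0, E0)` gives the dimensionless quantities plotted in the spectra.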
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure3.eps}
\par\end{centering}
\caption{\label{fig:3_config1_DimerFcs}Spectra of scaled optical forces exerted on a silica dimer of $R=240$ nm and $d=500$ nm. The incident wave has a wavevector $\mathbf{k}_0=k_0\mathbf{\hat{z}}$ and is polarized along the dimer's principal axis, i.e. along $\mathbf{\hat{y}}$. (a) Binding force, $\Delta=F_{1y}-F_{2y}$; the blue line indicates the zero of the scale. (b) Scattering force or radiation pressure, $F_{pr}=|F_{z}(CM)|$.}
\end{figure}
The excitations of the MDRs in the dimer can also be seen in the spectra of Fig.~\ref{fig:3_config1_DimerFcs}. Notice how the binding force (panel~\ref{fig:3_config1_DimerFcs}(a)) alternates its sign as the MDRs are excited. A repulsive ($\Delta >0$) or attractive ($\Delta <0$) character is obtained depending on which resonance is excited: the curve oscillates around the zero value of the force (blue line). The values of the radiation pressure are much larger than those obtained for the binding forces. In addition, some resonance shifts can be found between both kinds of spectra \cite{AbrahamE2018_Si,AbrahamE2018_Ag}. Thus, as seen in previous works, the optical forces of the system are able to ``feel'' the electromagnetic modes and can even serve as near-field observables of the scatterer system.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure4.eps}
\par\end{centering}
\caption{\label{fig:4_config1_DimerTqs}Spectra of scaled spin torques exerted on a silica dimer of $R=240$ nm and $d=500$ nm. The incident wave has a wavevector $\mathbf{k}_0=k_0\mathbf{\hat{z}}$ and is polarized along the dimer's principal axis, i.e. along $\mathbf{\hat{y}}$. The blue line indicates the zero of the scale.}
\end{figure}
For the illumination configuration of Fig.~\ref{fig:3_config1_DimerFcs}, neither spin nor orbital torques have been predicted in the literature \cite{Haefner2009}. Logically, there are no orbital torques because we deal with a \textit{homodimer} \cite{AbrahamE2018_Si}, i.e. a dimer made of identical particles. However, the DDA results show that optical spin torques can be found with respect to the $x$-axis of the system (Fig.~\ref{fig:4_config1_DimerTqs}, see curves in solid line). This kind of spin has recently been found in 2D systems by an integral formulation \cite{AbrahamE2016,AbrahamE2018_Ag,AbrahamE2018_Si}. The new dynamics can be obtained only when \textit{realistic} interactions between the scatterers are simulated, i.e. the complete multiple scattering must be taken into account. Specifically, the inhomogeneities induced in the inner fields are the cause of the spins. Notice that the phenomenon is exclusively obtained for bodies represented by multiple dipole moments; the induced spins for two bodies represented by single dipole moments are zero (Fig.~\ref{fig:4_config1_DimerTqs}, see dashed line).
As noted in previous works \cite{AbrahamE2016,AbrahamE2018_Ag,AbrahamE2018_Si}, the spin torques appear in \textit{coordinated} form when the dimer is composed of equal scatterers. In other words, the spin curves for each particle have equal values but opposite signs; see the black and red solid curves in Fig.~\ref{fig:4_config1_DimerTqs}. This means that the induced torques cancel each other and give zero net torque for the whole system by reasons of symmetry.
We saw in the previous study for a single sphere that the particles must be represented by multiple dipole moments to obtain inhomogeneous fields. The simplest dimer configuration that yields spins is a system represented by three dipole moments, see panel (a) of Fig.~\ref{fig:5_config1_3dipoles}. In this case, one sphere is represented by a single dipole moment and the other sphere is represented by two dipole moments placed at symmetric positions with respect to the direction of polarization. In this way, notice that the spin can only be induced on the sphere composed of two dipole moments, see the red arrows in the scheme of \ref{fig:5_config1_3dipoles}(a). The spin for sphere $1$ in \ref{fig:5_config1_3dipoles}(a) is identically zero. The resultant inhomogeneity of the inner field inside sphere $2$ makes the whole particle rotate because the tangential forces induced on each dipole's subvolume differ from each other (inner field not shown in the figure for this configuration). However, the symmetry of the whole system is preserved from the point of view of the net forces. The induced force $\mathbf{F}_1$ on sphere $1$ is perfectly balanced with the induced force $\mathbf{F}_2=\mathbf{F}_{21} + \mathbf{F}_{22}$ on sphere $2$ along the polarization direction. As a result, a binding force (i.e. relative attraction/repulsion) between the particles can exist, but the system as a whole moves forward due to the radiation pressure. Under this symmetric illumination, there is no unbalanced force along the direction given by $\mathbf{\hat{y}}$.
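The torque bookkeeping behind this three-dipole picture can be illustrated with a toy numpy example: the spin torque on a particle is the sum of $(\mathbf{r}_i-\mathbf{r}_c)\times\mathbf{F}_i$ over its subvolumes, and unequal tangential forces on the two subvolumes yield a nonzero spin. All positions and force values below are made up for illustration only:

```python
import numpy as np

def spin_torque(positions, forces, center):
    """Spin torque about a particle's own center: sum_i (r_i - r_c) x F_i."""
    r = np.asarray(positions, dtype=float) - np.asarray(center, dtype=float)
    return np.sum(np.cross(r, np.asarray(forces, dtype=float)), axis=0)

# sphere 2 modeled by two subvolumes placed symmetrically about its center
center2 = np.array([0.0, 500.0, 0.0])                  # nm, on the dimer axis (y)
pos = [center2 + [0.0, 0.0, 120.0],
       center2 - [0.0, 0.0, 120.0]]
# unequal tangential (y) forces plus equal radiation pressure (z), arbitrary units
F = [np.array([0.0, 1.0, 2.0]),
     np.array([0.0, 0.7, 2.0])]

N_spin = spin_torque(pos, F, center2)   # nonzero x-component -> the sphere spins
F_net = np.sum(F, axis=0)               # the net z-force still pushes it forward
```

If the two tangential components were equal, `N_spin` would vanish, which is exactly the single-dipole (`#1`) case discussed above.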
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure5a.eps} \\
\includegraphics[width=4cm,height=4cm,keepaspectratio]{figure5b.eps}
\includegraphics[width=4cm,height=4cm,keepaspectratio]{figure5c.eps}
\par\end{centering}
\caption{\label{fig:5_config1_3dipoles}(a) Scheme to explain the origin of the spin torques in a homodimer under the illumination configuration of Figs.~\ref{fig:3_config1_DimerFcs} and \ref{fig:4_config1_DimerTqs}. The induced inner field in particle $2$ makes it spin around its center (red arrows) because the elemental forces $\mathbf{F}_{21}$ and $\mathbf{F}_{22}$ are unbalanced. The magenta vector at the center of mass of the system (CM) represents the overall motion of the system but is not to scale with the forces $\mathbf{F}_1$, $\mathbf{F}_2=\mathbf{F}_{21} + \mathbf{F}_{22}$ induced on each respective particle. The boxes represent the subvolumes of the dipole moments in the frame of the DDA method. (b)-(c) Maps of the relative module of the inner electric field, $|\mathbf{E}/\mathbf{E}_0|$, for the configuration of Figs.~\ref{fig:3_config1_DimerFcs} and \ref{fig:4_config1_DimerTqs} at $\lambda=600$ nm (calculated with \#1189 dipole moments per sphere). The $y,z$ planes shown correspond to the cut $x=0$. (b) Left sphere, $1$. (c) Right sphere, $2$.}
\end{figure}
A configuration similar to that of Figs.~\ref{fig:2_SingleSph}-\ref{fig:4_config1_DimerTqs} has been studied for dimers in 2D systems. The induced spin torques have also been interpreted as near-field observables of the interactions of the system. The spin torques have been shown to provide information different from what can be obtained from other mechanical or optical magnitudes. Notice that, in general, the number of resonances ``detected'' in the spectra of spins has increased with respect to the number detected in the spectra of forces. In addition, the resolution of the resonances has generally improved in the spectra of torques. Another feature to be remarked upon is the shifts that exist at many spectral locations between the torque and force resonances. With all these features, the results of Fig.~\ref{fig:4_config1_DimerTqs} constitute a three-dimensional generalization of the results obtained in the previously referenced studies. As in those studies, the new phenomenon can also be obtained in plasmonic systems. In that case, the electromagnetic resonances detected correspond to surface resonances seen in the spectra as excitations of surface plasmons \cite{AbrahamE2016,AbrahamE2018_Ag}.
Although spin torques may seem surprising for this symmetric system, they appear as a natural phenomenon after examination of the simpler case schematized in \ref{fig:5_config1_3dipoles}(a). We can now observe the maps of \ref{fig:5_config1_3dipoles}(b-c), where the distributions of the inner fields have been calculated for the configuration of Figs.~\ref{fig:2_SingleSph}-\ref{fig:4_config1_DimerTqs} at the wavelength $\lambda=600$ nm. Panel \ref{fig:5_config1_3dipoles}(b) (\ref{fig:5_config1_3dipoles}(c)) corresponds to the cut $x=0$ of the $y,z$ field distribution inside the left (right) sphere, $1$ ($2$). We can see that the fields inside the spheres present almost the same distribution as in the $y-z$ map shown for the single sphere, Fig.~\ref{fig:2_SingleSph}. However, in this case the distribution changes due to the interaction between the spheres. Although we have a symmetric configuration with respect to the incidence, the multiple scattering between the spheres results in bent patterns for the inner fields. Consequently, those asymmetric patterns generate the unbalanced forces that produce the spin torques.
With the aim of exploring the new degrees of freedom that can be induced in dimers, other illumination configurations have been studied. In the following, the results correspond to angular variations of the illumination at a fixed wavelength, namely $\lambda=600$ nm. This value has been chosen in connection with the relative maximum obtained in Fig.~\ref{fig:4_config1_DimerTqs} at this spectral position. The results can be approximately reproduced in any experiment with a laser wavelength near this value, for instance, a He-Ne laser at $\lambda=632.8$ nm. \\
\begin{figure}[!h]
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure6.eps}
\caption{\label{fig:6_config2_DimerFcs}Scaled optical forces exerted on a silica dimer of $R=240$ nm and $d=500$ nm when the angle $\phi$ of the illumination is varied. The incident wavelength is $\lambda=600$ nm, and the other angles are $\theta=90$ deg. and $\zeta=0$ deg. (a) Binding force. (b) Module of the radiation pressure for the whole system, $F_{pr}=|\mathbf{F}_{pr}|=\sqrt{F^{2}_{x,CM}+F^{2}_{y,CM}}$. (c) Deviation $\delta=\phi(\mathbf{F}_{pr})-\phi$ of the radiation pressure from the direction of the incident wave. Curves in continuous (dashed) line correspond to a calculation with $\#1189$ ($\#1$) dipole moments per particle. The blue lines in (a) and (c) indicate the zero of the scales.}
\end{figure}
\textit{-Dimer: Azimuthal variation.} \\
The variations of the induced forces with the azimuthal angle can be seen in Fig.~\ref{fig:6_config2_DimerFcs}. The panels show (a) the binding forces, (b) the module $F_{pr}=|\mathbf{F}_{pr}|=\sqrt{F^{2}_{x,CM}+F^{2}_{y,CM}}$ of the scattering force and (c) the deviation of the direction of the scattering force with respect to the direction of the incident wave, $\delta=\phi(\mathbf{F}_{pr})-\phi$. Here the angle of the scattering force is defined as $\phi(\mathbf{F}_{pr})=\arctan(F_{y,CM}/F_{x,CM})$ and the center-of-mass components are defined as $F_{j,CM}=F_{j,1} + F_{j,2}$ with $j=x,y$. The curves in solid line are calculated with $\#1189$ dipole moments per particle and they are compared against the same results obtained by simulating the particles with single dipole moments (dashed line, $\#1$ each). The other angles of the illumination are set at $\theta=90$ deg. and $\zeta=0$ deg.
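The deviation $\delta$ defined above is just the angle between the net in-plane force and the illumination direction. A minimal sketch follows; the force components fed to it are hypothetical, and `atan2` is used instead of a plain arctangent so the correct quadrant is kept:

```python
import numpy as np

def deviation_deg(Fx_cm, Fy_cm, phi_deg):
    """Deviation delta = phi(F_pr) - phi of the scattering force
    from the incident direction, in degrees, wrapped to [-180, 180)."""
    phi_F = np.degrees(np.arctan2(Fy_cm, Fx_cm))
    delta = phi_F - phi_deg
    return (delta + 180.0) % 360.0 - 180.0

# illumination along +x (phi = 0): a purely x-directed net force gives no deviation
assert deviation_deg(1.0, 0.0, 0.0) == 0.0
```

With the center-of-mass force components from the DDA sums, this function reproduces the kind of curve plotted in panel (c).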
All the curves are periodic in $\phi$. Moreover, these curves present a modulation induced by the multiple scattering at $\lambda=600$ nm. We can find angular shifts between common features of each curve. The binding force presents a small value of attraction at $\phi=0,180$ deg (see the curve in solid line) and it changes as $\phi$ is increased, oscillating from attraction to repulsion several times. The absolute minima of the curve are reached at $\phi=90,270$ deg. The minima correspond to the strongest states of repulsion of the configuration because the phase difference of the incident field between the particles is greatest at these angles of illumination. In other words, the end-fire illumination is the most antisymmetric that can be obtained from the point of view of the dimer. On the contrary, the values of the radiation pressure in \ref{fig:6_config2_DimerFcs}(b) have their absolute maxima at these right angles.
Notice that the force values of \ref{fig:6_config2_DimerFcs}(a) and (b) for $\phi=0$ deg. correspond exactly to their respective analogues in panels \ref{fig:3_config1_DimerFcs}(a) and (b) at the wavelength $\lambda=600$ nm. In particular, this is useful to compare the values obtained in \ref{fig:6_config2_DimerFcs}(b) against the constant value for a single sphere, which can be obtained by calculating the radiation pressure exerted on a sphere of $R=240$ nm at any arbitrary azimuthal angle. This value is approximately half the value of $F_{pr}$ for the system at $\phi=0$ deg., i.e. $F_{pr,b=1}\approx 118$ in the units $3V_nk_0u_E$ of the scaled forces. Thus, the analysis of the variations of $F_{pr}$ with the azimuthal angle also serves to estimate the effects of the realistic interaction between the spheres when compared with the single-sphere response.
Logically, the contributions to the force values change with the number of dipole moments that represent the particles, as we can see by comparing the solid and dashed curves. However, this difference also manifests in a new phenomenon that has never been previously discussed in the literature. That is, the angle of the net scattering force for the system may be quite different from the angle of the illumination. Moreover, this angle depends on the inner-field modulation of the MDRs excited. As a result, the whole dimer moves more to the right or more to the left than expected from the forward direction. Thus, the deviation of \ref{fig:6_config2_DimerFcs}(c) depends on the relative phase modulation of the field that is set by the angle of illumination, $\phi$. In the current example of the dimer, we can observe from \ref{fig:6_config2_DimerFcs}(c) that $\delta$ reaches extrema of $\approx21$ deg. at the angles $\phi \approx70,110,250,290$ deg and of $\approx15.5$ deg. at the angles $\phi \approx45,135,225,315$ deg.
The wide variation of the deviation is a phenomenon exclusively induced by the multipolar structure of the scattering of the system; notice that there is also an angular deviation for the interaction between single dipole moments (case $\#1$, see the dashed curve in Fig.~\ref{fig:6_config2_DimerFcs}(c)). In the case $\#1$, the whole system can be seen as composed of two dipole moments, and then the response of the system is in general quadrupolar \cite{Jackson}. If the illumination is not aligned with any principal direction of this quadrupole, the radiation pressure is deviated. However, the effect of the deviation is vanishingly small in this case; the maxima do not exceed $1.6$ \%. In the case of single dipole moments, the inner fields have constant values but they vary smoothly through the change of the relative phases that influences the dipoles' interaction. In this way, the angle of the scattering force can practically be considered to coincide with the angle of illumination. Interestingly, the effect preserves the symmetry of the configuration; for the angles $\phi=0,90,180,270$ deg there is no induced deviation of the scattering angle (see both the solid and dashed curves). In the case of a single sphere, there is no deviation from the forward direction because the inner field responds only to the incident field, no matter how many dipole moments compose the sphere. The induced field is always symmetric with respect to the illumination direction and makes the particle follow the forward direction, no matter how inhomogeneous the field is.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure7.eps}
\par\end{centering}
\caption{\label{fig:7_config2_DimerTqs}Scaled optical torques exerted on a silica dimer of $R=240$ nm and $d=500$ nm when the angle $\phi$ of the illumination is varied. The incident wavelength is $\lambda=600$ nm, and the other angles are $\theta=90$ deg. and $\zeta=0$ deg. (a) Orbital torque; (b) Spin torques. Curves in continuous (dashed) line correspond to a calculation with $\#1189$ ($\#1$) dipole moments per particle. Black (red) line for torque $N_{z1}$ ($N_{z2}$) exerted on the sphere 1 (2), green line for torques $N_{z}(CM)=N_{z1}+N_{z2}$ exerted on the center of mass of the system.}
\end{figure}
Now we add more degrees of freedom to the study of the dynamics of the system. The investigation of the torques exerted under this configuration is presented in Fig.~\ref{fig:7_config2_DimerTqs}. The results for both cases $\#1189$ and $\#1$ have also been included, see the solid and dashed curves. In this case, the symmetry breaking induced by the illumination direction with respect to the dimer axis allows for the presence of orbital torques in the structure (panel \ref{fig:7_config2_DimerTqs}(a)), even in the case $\#1$. In agreement with the broken symmetry, the spin torques exerted on each particle are in general different (panel \ref{fig:7_config2_DimerTqs}(b)). The only non-zero components correspond to torques aligned with the $z$-axis. The torques are influenced by a modulation similar to the one found for the forces in Fig.~\ref{fig:6_config2_DimerFcs}. Similar features are repeated with a period of $180$ deg. and, again, the results under right angles of illumination preserve the symmetry of the configuration. In order to describe the motion of the whole system, the curves in green line have been added to the figures. They represent the resultant torques for the center of mass of the system, given by $N_z(CM)=N_{z1}+N_{z2}$ for both the orbital \ref{fig:7_config2_DimerTqs}(a) and the spin components \ref{fig:7_config2_DimerTqs}(b). Notice that all the torques for the system vanish when the illumination direction is given by the values $\phi=0,90,180,270$ deg. In particular, the values of the spin torques for $\phi=0$ deg and $\lambda=600$ nm of Fig.~\ref{fig:4_config1_DimerTqs} are recovered in panel~\ref{fig:7_config2_DimerTqs}(b) if a transformation of the axis of rotation $x \rightarrow z$ is performed.
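The center-of-mass bookkeeping used here (orbital torque from the particle positions and forces, spin torques summed into $N_z(CM)=N_{z1}+N_{z2}$) can be sketched as follows. All numeric values are illustrative only; the coordinated-spin cancellation is the point being demonstrated:

```python
import numpy as np

def dimer_torques_z(r1, r2, F1, F2, Ns1, Ns2):
    """z-components of the orbital torque about the system CM and of the
    total spin N_spin(CM) = Ns1 + Ns2.  r_i: positions, F_i: net forces,
    Ns_i: spin torques of each particle (all 3-vectors)."""
    r1, r2, F1, F2 = (np.asarray(v, dtype=float) for v in (r1, r2, F1, F2))
    r_cm = 0.5 * (r1 + r2)                                   # equal masses
    N_orb = np.cross(r1 - r_cm, F1) + np.cross(r2 - r_cm, F2)
    N_spin_cm = np.asarray(Ns1, dtype=float) + np.asarray(Ns2, dtype=float)
    return N_orb[2], N_spin_cm[2]

# coordinated spins (equal magnitude, opposite sign) cancel for the whole dimer,
# while antisymmetric transverse forces still produce a net orbital torque
Nz_orb, Nz_spin = dimer_torques_z([0, -250, 0], [0, 250, 0],
                                  [0.1, 0.0, 2.0], [-0.1, 0.0, 2.0],
                                  [0, 0, 0.3], [0, 0, -0.3])
```

This mirrors the green curves of Fig.~\ref{fig:7_config2_DimerTqs}: the spin sum vanishes at the symmetric angles while the orbital term can survive.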
Again, the spin torques vanish for the case of two dipole moments, see the dashed line in \ref{fig:7_config2_DimerTqs}(b). On the other hand, the orbital torques present quasi-sinusoidal curves for each particle in the case \#1, see the black and red curves in \ref{fig:7_config2_DimerTqs}(a); there is a modulation in the curves for $\#1$ but it is much weaker than the one appearing in the curves for $\#1189$. In the case of the spheres with $\#1189$, the orbital components for each scatterer are ruled by this modulation, which is mounted over the sinusoidal function. Thus, the orbital torques can be interpreted as the result of a combination of dipole-dipole interactions with multipolar contributions added to them. In this way, the predicted movement is not trivial; the spin components also yield oscillating curves. Both particles follow similar phases over a wide range of values of $\phi$, so they would spin in the same sense of rotation at those angular values, giving a net spin for the whole dimer that vanishes at $\phi=45\kappa$ deg (curve in green line in \ref{fig:7_config2_DimerTqs}(b)), where $\kappa$ is an integer.
Given the complex movement that the curves of Figs.~\ref{fig:6_config2_DimerFcs} and \ref{fig:7_config2_DimerTqs} imply, we could obtain a rough approximation to the motion of the system by taking only the curves $\#1$, as many works do throughout the literature \cite{Mohanty2004,Tumkur2016,Tumkur2018,Gargiulo2017}. The dipole-dipole interaction may allow a more intuitive interpretation of the electromagnetic coupling of the scatterers. This simple picture can explain some features of the interaction, such as the sign of the binding forces in \ref{fig:6_config2_DimerFcs}(a) (curves in dashed line), because the attraction/repulsion is seen as generated by the forces exerted between equal or opposite charges in the dipole moments. It can also explain the origin of the orbital torques in \ref{fig:7_config2_DimerTqs}(a) for the case \#1 as a result of the alignment of the total dipole of the system with the field. The picture of the dynamics of the system based on the alignment of dipole moments should be handled with care because the induced dipole moments have complex components. Two perpendicular vectors of the dipole moments can be defined with respect to the incident field, namely the components parallel and perpendicular to it. They are both complex quantities and different from zero under illumination at an arbitrary angle $\phi$. Yet, the particular situation under illumination at right angles, i.e. $\phi=0,90,180,270$ deg., is different because the perpendicular components of the dipole moments are identically zero. Moreover, if a small-particle approximation can be applied (of course this is not the general case), we can always get almost zero perpendicular components of the dipole moments regardless of the value of $\phi$, and the imaginary parts of the induced dipole moments are compensated (compensation of the phases). It is in this context that the two-dipole interpretation is useful.
Still, if the system is observed under angles of illumination close to the right angles, we can see that the system tries to reach one of two stable configurations \cite{Dholakia2010}, depending on the relative phases of each dipole; one is the position of canceled total dipole moment for the system. The other one is the complete alignment of parallel dipoles to form a net dipole moment for the system. However, the phases of the dipole moments (imaginary parts of the components) are not compensated if the particles are not small enough relative to the wavelength, and this makes the dipole-dipole interpretation more difficult to handle. Moreover, the spin torques are entirely absent within the frame of this formulation. This is perhaps the reason why the spins were not predicted long ago \cite{Haefner2009}.
In conclusion, the complex dynamics predicted here cannot be approximated by simply taking the interaction between a few dipole moments in the system. The complete multiple scattering must be taken into account to observe all the induced mechanical features. \\
\textit{-Dimer: Variation of polarization.} \\
Due to the symmetry of the homodimer system, the study of the variation of the angle $\theta$ is not relevant, as that configuration is covered by the rest of the analysis carried out so far. In this subsection, the angle of polarization $\zeta$ is varied at the fixed wavelength $\lambda=600$ nm and a gap of $d=2R=480$ nm. In particular, this illumination configuration has been studied previously in Ref.~\cite{Haefner2009} for silica dimers. However, some of the induced torques were not predicted; in this work, we complete the information about the motion of the system under this configuration. In the following, we set the other illumination angles at the values $\theta=180$ deg. and $\phi=0$ deg.
As we did before, we first analyze the behavior of the induced forces, Fig.~\ref{fig:8_config3_DimerFcs}, for the cases of spheres made with \#1189 (curves in solid line) and \#1 (curves in dashed line). The black (red) curves in \ref{fig:8_config3_DimerFcs}(a) correspond to the net tangential force induced on the left (right) sphere, i.e. sphere $1$ ($2$). As pointed out in Ref.~\cite{Haefner2009}, the tangential forces in \ref{fig:8_config3_DimerFcs}(a) may seem unusual and already indicate the presence of non-trivial orbital torques, see \ref{fig:9_config3_DimerTqs}(a). Fig.~\ref{fig:9_config3_DimerTqs} follows the same color code as Fig.~\ref{fig:8_config3_DimerFcs}, with the exception of the red solid curve in panels (a) and (d), which is replaced by a curve with red symbols for clarity. The resultant binding forces are plotted in \ref{fig:8_config3_DimerFcs}(b) and the module of the radiation pressure is plotted in \ref{fig:8_config3_DimerFcs}(c). In this configuration, the radiation pressure is always directed parallel to the unit vector $\mathbf{\hat{z}}$, but its module varies smoothly with the polarization angle. Observe that all the curves present a clear sinusoidal form due to the phase variation of the incident field with respect to the symmetry of the dimer.
The presence of the tangential forces is not a phenomenon that occurs exclusively with many dipole moments (curves in solid line). The tangential forces are also induced in the case \#1 for each sphere (curves in dashed line). This means that there will be orbital torques also for \#1 when the angle $\zeta$ is varied, in a manner similar to the case when $\phi$ was varied. More importantly, however, the effect of adding dipole moments for each sphere leads to very different dynamics in this particular case. Notice that the resultant tangential forces for the case \#1189 are reduced in absolute value and even change sign with respect to the case of two dipole moments at this wavelength, see \ref{fig:8_config3_DimerFcs}(a). Such behavior is another illustration of the errors one can incur when trying to describe the dynamics of the system with only a few dipole moments.
The variations of the binding force and the radiation pressure with the angle $\zeta$ are also reduced in the presence of several dipole moments, \ref{fig:8_config3_DimerFcs}(b) and (c). We could say that the multiple interactions between all the dipole moments moderate the variations of the forces exerted on the system. In particular, it can be seen from \ref{fig:8_config3_DimerFcs}(b) that the repulsion between the spheres is strongly attenuated in the multiple-dipole response; the negative values are not as pronounced for \#1189 as for the case \#1. In addition, the curves for the tangential forces in \ref{fig:8_config3_DimerFcs}(a) differ in phase by $90$ deg. with respect to the curves of the forces in \ref{fig:8_config3_DimerFcs}(b) and (c). In other words, symmetric inductions such as the radiation pressure and the binding force prevail for symmetric electric fields, as with $\zeta=90,270$ deg. On the other hand, asymmetric inductions such as the tangential forces cancel out for these values of $\zeta$.
Although not realistic, it is interesting to describe the dipole-dipole dynamics of the dimer, see the dashed curves of Figs.~\ref{fig:8_config3_DimerFcs} and \ref{fig:9_config3_DimerTqs}(a). Its utility was discussed in the previous subsection. Notice that there are two sets of angular positions where the system gains some stability. One set corresponds to $\zeta=0,180$ deg., where the system finds an equilibrium of zero tangential forces. However, it is unstable because a small angular perturbation around these positions would force the system to move away from its alignment with the incident electric field. Observe the values and signs of the tangential forces in \ref{fig:8_config3_DimerFcs}(a) and the torques in \ref{fig:9_config3_DimerTqs}(a). At $\zeta=0,180$ deg. the two complex dipole moments become parallel to the incident electric field in such a way that they are oriented along their connecting line ($\rightarrow \rightarrow$). Thus we obtain attractive forces between them because the opposite charges of the dipoles attract each other. However, this alignment is no longer preferred as soon as the system leaves the configuration of $\zeta=0,180$ deg.
The other equilibrium set is obtained for $\zeta=90,270$ deg. In this case, the system finds a stable equilibrium; the tangential forces are zero, but small deviations from those angle values result in a ``rapid'' realignment to the original position. The system prefers to align its axis perpendicular to the incident electric field. The two complex dipole moments become parallel to each other and to the incident electric field. Thus we obtain repulsive forces between them because like charges of the two dipoles face each other and repel ($\uparrow \uparrow$). Also note that for this set of angles, the values of the radiation pressure are greater and in opposite phase with respect to the binding force, see the \#1 curves of \ref{fig:8_config3_DimerFcs}(b-c).
The dynamics for the case \#1189 is completely changed with respect to the dynamics for \#1. Although the signs of $\Delta$ do not change near $\zeta=0,90,180,270$ deg., the values of $F_t$ and $F_{pr}$ change considerably. Actually, $F_t$ is inverted in sign for all $\zeta$; the \#1189 and \#1 curves in \ref{fig:8_config3_DimerFcs}(a) are in opposite phase to each other. As a consequence, the orbital torques in \ref{fig:9_config3_DimerTqs}(a) show a similar property. Moreover, the presence of many dipole moments in the interaction between the spheres reduces the absolute values of both the tangential forces and the orbital torques.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure8.eps}
\par\end{centering}
\caption{\label{fig:8_config3_DimerFcs} Scaled optical forces exerted on a silica dimer of $R=240$ nm and $d=480$ nm when the angle of polarization $\zeta$ of the illumination is varied. The incident wavelength is $\lambda=600$ nm, and the other angles are $\theta=180$ deg. and $\phi=0$ deg. (a) Tangential force $F_{t}\equiv F_{x}$, curves in black (red) line for force exerted on the sphere 1 (2); (b) Binding force; (c) Module of the radiation pressure. Curves in continuous (dashed) line correspond to a calculation with $\#1189$ ($\#1$) dipole moments per particle.}
\end{figure}
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=9cm,height=9cm,keepaspectratio]{figure9.eps}
\par\end{centering}
\caption{\label{fig:9_config3_DimerTqs}Scaled optical torques exerted on a silica dimer of $R=240$ nm and $d=480$ nm when the angle of polarization $\zeta$ of the illumination is varied. The incident wavelength is $\lambda=600$ nm, and the other angles are $\theta=180$ deg. and $\phi=0$ deg. (a) Orbital torque; (b)-(d) Spin torques. Curves in a continuous line or line with symbol correspond to a calculation with $\#1189$ dipole moments per particle. Curve in dashed line for $\#1$. Black (red) line for torques $N_{z1}$ ($N_{z2}$) exerted on the sphere 1 (2).}
\end{figure}
Finally, we shall discuss unexpected torques that were found for this illumination configuration. New spin components can be seen from the panels (b)-(d) of the Fig.~\ref{fig:9_config3_DimerTqs}. Panels (b), (c) and (d) correspond to spin torques around the $x$, $y$, and $z$ axes respectively. Again, there are no spin components for the case \#1 corresponding to two interacting dipole moments.
Notice that the spin torques found here also appear in coordinated form, as we observed for the first configuration of illumination, Fig.~\ref{fig:4_config1_DimerTqs}; the curves in black and in red line are equal in value but opposite in sign, see \ref{fig:9_config3_DimerTqs}(b-c). Moreover, the curves in black and in red line are exactly equal in \ref{fig:9_config3_DimerTqs}(d). The coordination of the spins preserves the symmetry of the system. The system ``feels'' an asymmetry with respect to the $x$ and $y$ axes due to this illumination variation. The components of the total spin torque are identically zero, but the individual components, i.e. those for each particle, are not. In addition, we can see from \ref{fig:9_config3_DimerTqs}(b) that the $x$-component never vanishes for this particular wavelength, consistent with the results of Fig.~\ref{fig:4_config1_DimerTqs} when $\zeta=0$ deg. Remarkably, this component of the spin torque reaches the largest values; the $x$- and $y$-components reach larger values than the $z$-component. One could suppose that the $z$-component should ``perceive'' the variation of the polarization more directly than the rest of the components and would thus be expected to show the largest variation, but this is not the case.
The ``gear'' mechanism of light scattering has already appeared in 2D results, see Refs.~\cite{AbrahamE2016,AbrahamE2018_Si,AbrahamE2018_Ag}. In the present three-dimensional case, this mechanism appears for the torques along the $x$ and $y$ coordinates. In Refs.~\cite{AbrahamE2016,AbrahamE2018_Si,AbrahamE2018_Ag} this kind of induction appears in only one dimension, analogous to what we find here along the $x$ direction, i.e. spin rotation with respect to the $x$-axis. The new component, namely the $y$-spin, makes the particles rotate with respect to the $y$-axis in ``counter-rotation'' with each other.
It is worth mentioning that the phases of the curves of the orbital torque and the $z$-spin torques are opposite for \#1189. This makes sense since some spin-orbit coupling is expected. The total torque exerted on the whole system has to be zero on average because the illumination carries no net angular momentum. Consequently, the electromagnetic field around the system must have a net contribution to the rate of angular momentum that exactly compensates the rate of angular momentum taken up by the movement of the system. An indication of such an effect was shown in Ref.~\cite{AbrahamE2016} for 2D systems of plasmonic dimers. A similar phenomenon is expected in 3D systems (not shown here).
Note that the scaling factors of the orbital torque and the spin torques differ by the constant value $k_0d$. Although it appears that the module of the $z$-spin torque is larger than the module of the orbital torque for the scaled values, it is not. To compare all the values of Fig.~\ref{fig:9_config3_DimerTqs} with each other, one must multiply the orbital torque by the adimensional factor $k_0d$ so that it takes the same units as the spin torques, i.e. $3V_nu_E$. In this particular case, the factor is $k_0d\approx5$. Then, the maxima of the orbital torque are $1.4$ times larger than the maxima of the $z$-component of the spin torque. However, the maxima of the $x$-component of the spin torque are $\simeq2.9$ times larger than the maxima of the orbital torque in this case.
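The conversion factor above is simple arithmetic; as a quick check, this minimal sketch evaluates $k_0d$ using only the values quoted in the text ($\lambda=600$ nm, $d=480$ nm):

```python
import math

# The adimensional factor relating the scaled orbital torque to the
# scaled spin torques, using only values quoted in the text.
lam = 600e-9    # incident wavelength [m]
d = 480e-9      # center-to-center separation [m]

k0 = 2.0 * math.pi / lam   # free-space wavenumber
factor = k0 * d            # k0*d ~ 5, as stated above
```
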
Note that the curve for the $x$-component of the spin torque differs in phase by $90$ deg. with respect to the curve for the $y$-component. This phase difference is of geometrical origin because these components are orthogonal to each other. However, the whole movement of the system is complex in general and, of course, the relative phases and modules of the torques differ when other wavelengths are observed. This analysis was made in order to illustrate the new effects regarding the dynamics of optically coupled nanoparticles. In particular, as said, the dynamics for metal or plasmonic nanoparticles will be different, as the scattering properties of such a system are ruled by surface resonances and not by volume resonances as in this case.
\section{Conclusions}
This paper outlines some unexpected consequences of the multiple scattering between two nanospheres when illuminated with time-harmonic fields that do not carry angular momentum. In particular, the study is focused on the optical forces and torques exerted on silica dimers and it is supported by DDA calculations. Three illumination configurations were explored, including a spectral study of the mechanical inductions as it was presented in previous works for 2D systems. Unusual spin torques were found for illumination with plane waves having a linear polarization. The results were explained in terms of asymmetries in the inner fields of the spheres.
The variation of the induced forces with the angles of illumination was also studied. Remarkably, new degrees of freedom were predicted for the movement of the dimer. It was found that the whole dimer can deviate from the direction expected from the radiation pressure. When the polarization of the illumination was varied, new spin torques were found that appear to arise from spin-orbit coupling. Orbital torques are coordinated with the spins of the particles in this case. If the illumination is symmetric with respect to the dimer, the spins appear in a coordinated form such that the net torque on the dimer is zero.
In the present case of dielectric nanosphere dimers, the fields are a consequence of the coupling of morphology-dependent resonances on the spheres. However, the results of this work are supported by previous works; the new movements predicted here are of general validity. They can be obtained independently of the materials simulated and the geometry of the scatterers \cite{AbrahamE2016,AbrahamE2018_Ag,AbrahamE2018_Si,Abraham2015_2}. In the case of plasmonic systems, the new effects are a consequence of the coupling of surface plasmon resonances.
In addition, the approximation of the system by a single dipole-dipole interaction was compared against the complete problem involving many dipole moments for each particle. The results show clearly that the multipolar interactions between the particles are essential for capturing the realistic dynamics of the system. The dimer can be represented by two dipoles, as many works do for a first approximation, but this simple model cannot predict many important features of the mechanics of the optically-coupled scatterers.
\begin{acknowledgments}
The author would like to thank Dr. Antonio Garc\'ia-Mart\'in from IMN-CSIC for sharing interesting discussions on the topic.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
Diffuse gas that is within galactic halos but outside the star-forming disk, referred to as the circumgalactic medium (CGM), is critical to how galaxies evolve \citep*{tumlinson17}. This gas is comprised of metal-poor inflows from the intergalactic medium (IGM), metal-rich outflows from supernova (SN) feedback in the galactic disk, and intermediate metallicity gas that is mixed as gas recycles onto the disk or is stripped from in-falling satellite galaxies. While these processes are all readily seen in simulations, observing them in emission remains difficult because the high temperatures and low densities of the gas shift most of the emission to ultraviolet wavelengths and low surface brightnesses. Using the FOGGIE (Figuring Out Gas \& Galaxies In Enzo) simulations, we show here that better resolving the CGM in simulations has a profound impact on predictions for the surface brightness and kinematics of observable circumgalactic emission.
At low redshift, CGM absorption measurements have been connected to galaxy properties \citep[][]{stocke13,tumlinson13,werk14,bordoloi14,liang14,borthakur16,keeney18,berg18}. However, such samples are inherently limited by the number of UV bright quasars needed to make the absorption measurements.
At high redshift ($z\gtrsim 2$), the absorption lines probing this gas have shifted into visible wavelengths. Studies of damped Lyman-$\alpha$\ absorbers (DLAs; \citealp*{wolfe05,neeleman13,rafelski16}), super Lyman limit systems / sub-DLAs \citep{peroux08,som15,fumagalli16,quiret16}, Lyman limit systems (LLSs; \citealp*{lehner14,fumagalli16,lehner16}), and partial LLSs \citep{lehner16} have long shown large amounts of dense \ion{H}{1}\ and corresponding metals throughout the universe. Yet the redshift that puts these absorption lines within reach also shifts key line diagnostics of the associated galaxies into the infrared and out of the range of detection by current instrumentation. Thus, relating the absorption features to their galactic environment at high-$z$ has remained challenging \citep[though see][]{rudie12a,rudie13,turner14,turner15,turner17}.
In contrast, observing the CGM directly in \emph{emission} promises to help us understand the spatial and physical distribution of the gas around a single galaxy. Yet emission studies have faced similar challenges when trying to resolve their sources. Recently, two powerful new integral field units (IFUs) on 8--10m class telescopes---the Multi Unit Spectroscopic Explorer (MUSE) on the VLT \citep{bacon10} and the Keck Cosmic Web Imager (KCWI) on Keck \citep{morrissey18}---have provided exciting new tools with which to detect spatially extended Lyman-$\alpha$\ emission. Studies looking to quasars as triggers for bright emission from the gas surrounding them find that most quasars at $2 < z < 3$ have measurable Ly$\alpha$\ profiles extending as far as 80 kpc from the galaxy on average \citep{arrigoni18b}, while a handful show emission detected as far as 200--300 kpc \citep{borisova16,arrigoni18a,cai18}. For galaxies, MUSE has revealed Ly$\alpha$\ around nearly every galaxy it has observed in the high-$z$ universe \citep{wisotzki18}, though generally with a smaller median extent of 4--5 kpc \citep{leclercq17}. Though the source of this ionization is still unclear \citep{prescott15}, IFUs probe the dynamics of the gas, showing both inflows \citep{martin14} and outflows \citep{swinbank15} from galaxies.
While Ly$\alpha$\ can tell us much about the CGM, there are advantages to searching for the dimmer emission driven by metal lines. First, because Ly$\alpha$\ is a resonant line, untangling the structure of the emitting gas versus the gas scattering Ly$\alpha$\ photons is challenging and requires modeling of the radiative transfer. In addition, Ly$\alpha$\ necessarily traces the relatively cool, dense gas preferred by \ion{H}{1}. Metal lines, on the other hand, can probe the full range of densities, temperatures, and ionization states expected in the CGM because of the large number of available transitions. Metal lines also trace the gas flows that drive galaxy evolution and set the physical properties of the CGM itself.
Because metal-line emission is very faint in practice, simulations can help guide our search for detectable targets. Though the CGM has increasingly been used to place novel constraints on the sub-grid physics recipes in hydrodynamic simulations \citep{hummels13,suresh15,suresh17,ford16}, few theoretical emission predictions have been made for a large number of lines at high-$z$ since \citet{bertone12}. \citeauthor{bertone12} established which lines emit the most brightly in the CGM and highlighted the strong dependency of the emission on both the gas density and temperature in relation to the cooling curves of the emitting ions. \citet{sravan16} explored the variable nature of CGM emission and discussed how detectable emission will be biased towards galaxies having recently experienced large starburst events. In their work at low-$z$, \citet{bertone10} also demonstrated the relative insensitivity of emission to changes in the simulation's feedback prescriptions because of its strong bias to high densities. \citet{frank12} highlighted the strength of CGM emission relative to IGM emission, indicating that it was a good candidate for direct detection. \citet{corlies16} focused on low-$z$ emission around a single galaxy and found that the brightest emission follows the filament structure of the halo, and determined that simulation resolution indeed limits the ability to draw physical conclusions. However, while these studies mention the relevance of the predictions for upcoming instrumentation, only \citet{frank12} makes specific instrument-focused predictions for FIREBall \citep{tuttle08} from their simulations.
In this paper, we analyze the first generation of the FOGGIE simulations, which take a novel approach: the spatial resolution in the CGM of a Milky Way-like galaxy is forced to be as high as the resolution in the galactic disk, an improvement of 8--$32 \times$ over what is typically found in similar simulations (though see recent work from \citealp{vandevoort19} and \citealp{suresh18}).
With this new approach to resolving the CGM, we investigate how our predictions of emission from this gas change due to the resolution alone. In particular, we investigate how the observable properties of the gas change and how they can be linked to changes in the physical properties of the gas. While our focus is on $z=3$ to maximize the number of lines observable by current ground-based IFUs while minimizing the effects of surface brightness dimming, these lessons are broadly applicable to $2 \lesssim z \lesssim 4$, when the galaxy has passed the first stages of star formation but has not finished merging into the final, massive galaxy.
In Section \ref{sec:sims}, we present the simulations and the refinement method that allows us to achieve such high resolution in the outer halo. In Section \ref{sec:predict}, we make predictions for CGM metal-line emission and examine how the results change with resolution. In Section \ref{sec:phys}, we link the changes in observable properties to changes in the physical, ionization state of the gas. In Section \ref{sect:instr}, we make specific predictions for different observing modes of KCWI and MUSE for easy comparison with future observations. Finally, in Section \ref{sec:disc} we discuss the broader context of our results and summarize our conclusions in Section \ref{sec:conc}.
\section{Simulations and Methods} \label{sec:sims}
The cosmological hydrodynamic simulations we analyze here are the same as presented in \citet[][hereafter \citetalias{peeples18}]{peeples18}; the full details of the simulations and our novel ``forced refinement'' scheme are given there. We briefly review the highlights in Section~\ref{sec:simdetails}; in Section~\ref{sec:calcemis}, we describe how we calculate emissivities from these simulations.
\subsection{Simulation Basics}\label{sec:simdetails}
The FOGGIE simulations were evolved with Enzo, an Eulerian adaptive mesh refinement (AMR) grid-based hydrodynamic code \citep{bryan14}, using a flat \citet{planck13} $\Lambda$CDM cosmology ($1 - \Omega_{\Lambda} = \Omega_{\rm m} = 0.285$, $\Omega_{\rm b} =0.0461$, $h = 0.695$).
We focus here on a single halo (named ``Tempest'') selected to ultimately have a Milky Way-like mass at $z = 0$ and no major mergers for $z < 1$. The selected halo has $R_{200} = 31$\,kpc and $M_{200} = 4 \times 10^{10}$ ${\rm M}_{\odot}$\ at $z = 3$, with dark matter particle mass $m_{\mathrm{DM}} = 1.39\times10^6 \ \mathrm{M}_{\odot}$. This halo resides in a cosmological domain with a size of 100 comoving Mpc$/h$. The AMR is allowed to reach a maximum of 11 levels of refinement, corresponding to a finest spatial resolution of 274 comoving pc or a physical resolution of 68\,pc at $z = 3$.
The simulations include metallicity-dependent cooling and a metagalactic UV background \citep{haardt12} using the Grackle chemistry and cooling library \citep{smith17}. All metals are tracked as a single combined field; thus, particular elemental abundances throughout the paper are calculated assuming solar abundances. We use a \citet{cen06} thermal supernova feedback model, forming stars in gas exceeding a comoving number density of $\simeq 0.1$\,cm$^{-3}$ with a minimum star particle mass of $2 \times 10^4$\,${\rm M}_{\odot}$. The effects of Type Ia SNe are not included.
The general aim of AMR simulations is to place refinement in areas that are the most physically interesting. Typically with these types of cosmological zoom-in simulations, the additional refinement is triggered primarily by increases in density, with the goal of best refining the dense, star-forming disk of the galaxy of interest. For each level of refinement, the cell size decreases by a factor of two such that
\begin{equation}
\mathrm{Cell \ Size} = \frac{\mathrm{Box \ Size}}{\mathrm{Root \ Grid \ Cells}} \times 2^{-N_{\rm ref}},
\end{equation}
where $N_{\rm ref}$ is the level of refinement; our root grid is $256^3$. In our standard AMR simulations, the CGM typically reaches a refinement level of 6--8 while the ISM reaches $N_{\rm ref}=11$. This corresponds to 2.2--0.55 kpc resolution in the CGM at $z=3$. However, as discussed in \citetalias{peeples18}, there are many processes relevant to circumgalactic physics with potentially smaller spatial scales, the cooling length being the most notable.
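As a sanity check on the resolutions quoted in this section, the formula above can be evaluated directly; this short sketch uses only numbers stated in the text (100 comoving Mpc$/h$ box, $256^3$ root grid, $h=0.695$):

```python
# Quick check of the grid-cell sizes quoted in the text:
#   cell = (box size / root-grid cells) * 2**(-N_ref)
box_kpc_h = 100e3      # comoving box size [kpc/h]
root_cells = 256       # root grid is 256^3
h = 0.695
z = 3.0

def cell_size_kpc_h(n_ref):
    return box_kpc_h / root_cells * 2 ** (-n_ref)

com_pc_h_10 = cell_size_kpc_h(10) * 1e3   # ~380 h^-1 comoving pc
com_pc_h_11 = cell_size_kpc_h(11) * 1e3   # ~190 h^-1 comoving pc
phys_pc_10 = com_pc_h_10 / h / (1 + z)    # ~137 pc physical at z = 3
phys_pc_11 = com_pc_h_11 / h / (1 + z)    # ~68 pc physical at z = 3
```
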
This first generation of FOGGIE simulations takes a different approach and targets cells for refinement based on their spatial location alone.
This ``forced refinement'' scheme follows the targeted galaxy with a cubic box that tracks it through the domain.
To implement forced refinement, we first run a ``standard'' AMR simulation as described above, writing out snapshots in 20\,Myr increments. The main halo is identified and the coordinates of a 200\,kpc comoving box centered on the galaxy are recorded for each snapshot.
The simulation is then restarted at $z=4$ with the volume enclosed by this box refined to a minimum refinement level; for our default $N_{\rm ref}=10$ run (the ``high-resolution'' simulation in \citetalias{peeples18}), this corresponds to a fixed resolution of 380 h$^{-1}$ comoving parsec.
We have additionally evolved an $N_{\rm ref}=11$ simulation, with a cell size of 190 h$^{-1}$ comoving pc, to $z=2.5$.
The location of the box is updated every 20 Myr.
At $z=3$, the two highly refined runs have physical spatial resolutions of 137\,pc ($N_{\rm ref}=10$) and 68\,pc ($N_{\rm ref}=11$) respectively. Throughout the rest of this paper, we will reference the normal AMR run as ``standard'' while the two highly refined runs will be referred to by this physical size of the refined CGM cells.
\subsection{Calculating Emissivities}\label{sec:calcemis}
For the densities and temperatures typical of the CGM, the gas cools primarily through collisional excitation followed by radiative decay, leading to a $n^2$ dependence of line emission. For a given line, the brightest emission will therefore come from gas with temperatures that correspond to the peak of that line's cooling curve. \citet{bertone13} shows examples of the cooling curves that dominate cooling of the diffuse universe.
To calculate the emissivity in each cell, the simulation is post-processed using the photoionization code {\sc cloudy} \citep[version 10.0;][]{ferland98}. For each cell, the emissivity is calculated using {\sc cloudy} tables parameterized by hydrogen number density ($n_{\mathrm{H}}$), temperature ($T$), and redshift. The metal line emissivity is then scaled linearly by the metallicity of each cell.
First, we constructed {\sc cloudy} look-up tables of emissivity as a function of temperature ($10^3 < T < 10^8 \mathrm{K}, \ \Delta \log T=0.1$) and hydrogen number density ($10^{-6} < n_{\mathrm{H}} < 10^2$~cm$^{-3}$, $\Delta \log n_{\mathrm{H}}=0.5$). The calculation assumes solar metallicity and abundances. The grid is then linearly interpolated for every cell to the correct temperature and $n_{\mathrm{H}}$.
Finally, {\sc cloudy} also assumes that the gas is in ionization equilibrium, accounting for both photoionization and collisional ionization. For consistency with \citet{corlies16}, we use the 2005 updated version of \citet{haardt01} as our extragalatic ultraviolet background throughout.
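The lookup-and-scale procedure can be sketched as follows. The table values here are random placeholders, not real {\sc cloudy} output, and interpolating the logarithmic emissivity bilinearly is one reasonable choice; only the overall logic (a table parameterized by $T$ and $n_{\mathrm{H}}$, then linear scaling with metallicity) mirrors the text:

```python
import numpy as np

# Sketch of the per-cell emissivity lookup described above.  The table
# values below are random placeholders, NOT real CLOUDY output; only the
# logic (lookup in T and n_H, then linear metallicity scaling) is real.
logT_grid = np.arange(3.0, 8.1, 0.1)     # 10^3 - 10^8 K in 0.1 dex steps
logn_grid = np.arange(-6.0, 2.5, 0.5)    # 10^-6 - 10^2 cm^-3 in 0.5 dex steps
rng = np.random.default_rng(0)
log_emis = rng.uniform(-30.0, -20.0, (logT_grid.size, logn_grid.size))

def cell_emissivity(logT, logn, Z_solar):
    """Bilinearly interpolate the solar-metallicity table (in log space,
    one reasonable choice), then scale linearly by the cell metallicity."""
    i = int(np.clip(np.searchsorted(logT_grid, logT) - 1, 0, logT_grid.size - 2))
    j = int(np.clip(np.searchsorted(logn_grid, logn) - 1, 0, logn_grid.size - 2))
    tT = (logT - logT_grid[i]) / (logT_grid[i + 1] - logT_grid[i])
    tn = (logn - logn_grid[j]) / (logn_grid[j + 1] - logn_grid[j])
    f = ((1 - tT) * (1 - tn) * log_emis[i, j]
         + tT * (1 - tn) * log_emis[i + 1, j]
         + (1 - tT) * tn * log_emis[i, j + 1]
         + tT * tn * log_emis[i + 1, j + 1])
    return 10.0**f * Z_solar   # erg s^-1 cm^-3 for a metal line
```

The linear metallicity scaling means that halving a cell's metallicity halves its metal-line emissivity, exactly as described above.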
\section{Predicted Emission Properties} \label{sec:predict}
In this section we make predictions for the distribution of metal-line emission at $z=3$ and demonstrate the role CGM resolution plays on the probability of its detection. We present surface brightness maps for H$\alpha$, \ion{Si}{4}, \ion{C}{3}, \ion{C}{4}, and \ion{O}{6}\ in Section~\ref{sect:sb_maps}, the role of angular resolution in Section~\ref{sect:angres}, radial profiles and covering fractions in Section~\ref{sect:covering}, and the kinematic properties in Section~\ref{sect:kinematic}.
\subsection{Surface Brightness Maps}\label{sect:sb_maps}
\begin{figure*}
\centering
\includegraphics[width=0.72\textwidth]{Figure1.pdf}
\caption{Surface brightness maps at $z=3$ of five different emission lines (H$\alpha$, \ion{Si}{4}, \ion{C}{3}, \ion{C}{4}, and \ion{O}{6}) for the standard AMR simulation, the 137\,pc simulation, and the 68\,pc simulation. The colors correspond roughly to detection probability with gray being non-detectable and colors related to different levels of likelihood as described in Section \ref{sect:sb_maps}. The pixel size of the standard simulation is 137\,pc and matches the CGM resolution in the two highly refined cases. Denser structures are clearly visible in the more highly refined simulations but most structures will remain beyond the detection limits of current and upcoming instrumentation. \label{emis_map.fig}}
\end{figure*}
Figure \ref{emis_map.fig} shows surface brightness (SB) maps of the entire 200 comoving kpc high refinement region at $z=3$ for our standard AMR simulation (left), the 137\,pc simulation (middle), and the 68\,pc simulation (right) for H$\alpha$ and a number of metal lines. Because the standard run has varying cell sizes due to the AMR, we choose to force the pixel size to match the 137\,pc simulation for easy comparison. The two highly refined simulations have pixel sizes matching their stated CGM resolution. The SB dimming of an object at this redshift is accounted for in all images throughout the paper. This colormap will be used throughout the paper and corresponds roughly to the probability of detection with current and upcoming instrumentation. Green corresponds to pixels that should always be detected (log$_{10}$(SB) $\geq 3$ photons s$^{-1}$ cm$^{-2}$ sr$^{-1}$), blue to pixels that will probably be detected ($2 \leq $ log$_{10}$(SB) $< 3$ photons s$^{-1}$ cm$^{-2}$ sr$^{-1}$), and pink to pixels that are formally possible to detect but push the limits of all instruments ($1 \leq $ log$_{10}$(SB) $< 2$ photons s$^{-1}$ cm$^{-2}$ sr$^{-1}$). Gray are pixels that will not be detected in the near future (log$_{10}$(SB) $< 1$ photons s$^{-1}$ cm$^{-2}$ sr$^{-1}$). Detailed matches to two current instruments, KCWI and MUSE, are discussed in Section \ref{sect:instr}.
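The four-way color coding reduces to fixed thresholds in log surface brightness; a minimal sketch (the function and integer category codes are ours, not from any released pipeline):

```python
import numpy as np

# The detection color coding reduces to fixed thresholds in
# log10(SB / [photons s^-1 cm^-2 sr^-1]); function and category
# codes are illustrative only.
def detection_category(sb_map):
    """0 = gray (undetectable), 1 = pink (limits of all instruments),
    2 = blue (probable detection), 3 = green (always detected)."""
    log_sb = np.log10(np.clip(sb_map, 1e-30, None))
    return np.digitize(log_sb, [1.0, 2.0, 3.0])

cats = detection_category(np.array([0.5, 50.0, 500.0, 5000.0]))
print(cats)   # [0 1 2 3]
```
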
\begin{figure*}
\centering
\includegraphics[width=0.72\textwidth]{Figure2.pdf}
\caption{Same $z=3$ surface brightness maps as Figure \ref{emis_map.fig} but now zoomed in so only an area of $40\times 40$\,kpc (5\arcsec $\times 5$\arcsec) is shown. The bright, observable emission is confined to within roughly 20 kpc of the galaxy. More disjointed areas can have higher surface brightnesses in the higher resolution simulations where regions are allowed to collapse to higher densities. \label{emis_map_zoom.fig}}
\end{figure*}
Table \ref{tab:lumin} gives the total luminosity of each line in the 200 kpc comoving refinement region for each of the simulations. While the distribution of the observable emission is not greatly affected by the resolution, the total luminosity emitted in each line does change substantially with resolution: the total luminosity of each emission line we consider {\em increases by about an order of magnitude} owing to improving the circumgalactic resolution alone.
The luminosities from the 68\,pc simulation are much closer to the 137\,pc simulation than either are to the standard-resolution simulation, suggesting that the 137\,pc simulation is nearly converged with respect to this diagnostic.
Improved spatial resolution allows regions in the CGM to collapse to higher density, leading to more efficient cooling radiation and larger luminosities overall. We discuss this effect in more detail in Section \ref{sec:rad_profs}.
\begin{table}
\centering
\begin{tabular}{ |c|c|c|c|c| }
\hline
\ Line & Wavelength & Standard & 137\,pc & 68\,pc \\
\hline
H$\alpha$ & 6563\,\AA & $8.9\times10^{42}$ & $1.3\times10^{43}$ & $1.2\times10^{43}$ \\
\ion{Si}{4} & 1394\,\AA & $1.6\times10^{40}$ & $4.7\times10^{41}$ & $7.2\times10^{42}$ \\
\ion{C}{3} & 977\,\AA & $1.2\times10^{41}$ & $3.5\times10^{42}$ & $4.5\times10^{43}$ \\
\ion{C}{4} & 1548\,\AA & $8.1\times10^{39}$ & $5.6\times10^{41}$ & $3.2\times10^{41}$ \\
\ion{O}{6} & 1032\,\AA & $5.5\times10^{39}$ & $1.4\times10^{40}$ & $2.1\times10^{41}$ \\
\hline
\end{tabular}
\caption{Total luminosity of a given line within the refinement box for each simulation in units of ergs\,s$^{-1}$. The standard simulation under predicts the luminosity in each line by roughly an order of magnitude compared to the highly refined simulations.} \label{tab:lumin}
\end{table}
In general, lines whose cooling curves peak at slightly lower temperatures, like \ion{C}{3}, tend to be the brightest at this redshift because it is at these temperatures that the bulk of the dense gas throughout the halo is found. \ion{O}{6}, on the other hand, is particularly weak because there is little dense gas at higher temperatures, resulting in little detectable emission. The physical causes of the emission are addressed further in Section \ref{sec:phys}.
Adding resolution to the CGM clearly reveals the filaments feeding the galaxy and the structure within them that is artificially smoothed by the poor resolution in the standard run (left panels). Other small-scale structure is created by SN-driven outflows and by gas stripped from inflowing satellites. If we want to examine the small scale structure in emission, these highly refined simulations are needed.
However, despite these significant morphological differences between the runs, most of this increased small-scale structure around this relatively small galaxy is undetectable, as exhibited by the color map. Almost all of the detectable gas remains within 20 physical kpc, regardless of the CGM resolution.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{Figure3.pdf}
\caption{Emission map of \ion{C}{3}\ at $z=3$ for the 137\,pc resolution simulation shown with different angular resolutions, starting with the resolution of the simulated CGM on the left and with pixels getting bigger moving right. For the coarsest angular resolution, a detection can potentially be made. At higher angular resolution, however, not only can the CGM be detected but its underlying structure and the processes shaping it can be probed. \label{emis_ang_res.fig}}
\end{figure*}
The one large outlier is H$\alpha$. Because it is independent of metallicity, the line is extremely bright even at $z=3$, tracing the cosmic filaments. However, at $z=3$, H$\alpha$ has shifted to an observed wavelength of 2.6$\micron$, well outside the bandpasses of the ground-based IFUs discussed in this paper. This does fall at a wavelength observable by NIRSpec on the {\em James Webb Space Telescope}; more detailed {\em JWST} predictions will be the focus of future FOGGIE simulations.
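The statement about H$\alpha$ follows directly from $\lambda_{\rm obs} = \lambda_{\rm rest}(1+z)$; a quick check using the rest wavelengths listed in Table \ref{tab:lumin}:

```python
# Observed wavelengths at z = 3: lambda_obs = lambda_rest * (1 + z).
# Rest wavelengths are those of Table 1; all values in Angstroms.
z = 3.0
rest_A = {"Halpha": 6563.0, "SiIV": 1394.0, "CIII": 977.0,
          "CIV": 1548.0, "OVI": 1032.0}
obs_A = {line: lam * (1 + z) for line, lam in rest_A.items()}

# H-alpha lands at ~2.6 microns, beyond ground-based optical IFUs;
# the metal lines remain in the optical.
halpha_micron = obs_A["Halpha"] / 1e4
```
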
To better highlight the detectable regions, Figure \ref{emis_map_zoom.fig} shows a zoomed in view of the galaxy that is 40 physical kpc across (or 5\arcsec $\times 5$\arcsec\ at $z=3$). Much of the clearly observable emission is coming from the central part of the galaxy and thus the interstellar medium as opposed to the CGM. Yet, it is also obvious that the higher spatial resolution leads to the formation of small, dense regions that are detectable to larger radii in the 68 pc and 137 pc simulations. Thus, the emission can be clumpy on small scales that would not be predicted without this enhanced simulation resolution. The 68 pc and 137 pc simulations show that we can plausibly expect to detect emission from the CGM at larger radii from the main galaxy; such detections would let us say definitively that the emission comes from the CGM and not from the galaxy itself.
It is worth noting that the standard simulation displayed here is somewhat unrepresentative. At this particular redshift, the main halo is actually in the process of merging with another dense galaxy which appears as two distinct regions of blue/green pixels in the images. Thus, the extent of emission from the center of either galaxy in these frames is smaller than that from the main galaxy in either highly refined simulation. The orientation of this satellite in the refined simulations is different such that it does not produce as much observable emission.
\subsection{Angular Resolution}\label{sect:angres}
In addition to the surface brightness limits, the angular resolution of an observation can have a large effect on the conclusions that can be drawn about the CGM. Figure \ref{emis_ang_res.fig} takes the \ion{C}{3}\ emission from the 137\,pc simulation shown in Figure \ref{emis_map_zoom.fig} as the leftmost panel. The angular resolution of the image is then degraded as the panels move towards the right. The labels show physical size and angular size at $z=3$ for each panel.
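Degrading the maps as in Figure \ref{emis_ang_res.fig} amounts to block-averaging: surface brightness is flux per unit solid angle, so coarser pixels take the mean of the finer pixels they contain, not the sum. A minimal sketch (our own helper, not FOGGIE code):

```python
import numpy as np

# Degrading angular resolution: surface brightness is flux per solid
# angle, so coarser pixels take the MEAN of the finer pixels they
# contain (not the sum).  Helper name is illustrative only.
def rebin_sb(sb_map, n):
    ny, nx = sb_map.shape
    assert ny % n == 0 and nx % n == 0
    return sb_map.reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

coarse = rebin_sb(np.arange(16.0).reshape(4, 4), 2)
# each 2x2 block is replaced by its mean: [[2.5, 4.5], [10.5, 12.5]]
```

Averaging rather than summing preserves the mean surface brightness of the map, so a diffuse source does not artificially brighten when the pixels are enlarged.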
For the coarsest resolution shown, the galaxy can barely be detected. At 5\,kpc resolution, both the galaxy and the CGM are likely to be detected, but it is difficult to glean any information about the true shape and physical distribution of CGM properties. Instead, a resolution of 1\,kpc (0.13\arcsec) is necessary to discern the CGM's spatial distribution and to say anything definitive about the processes that are shaping the CGM. At this resolution, one can see that the gas is not spherically symmetric and that it is clumping on scales at least as small as the pixels. Such fine resolution is challenging but not impossible to achieve with current instrumentation, and it highlights the need to prioritize high angular resolution in future instrumentation \citep[e.g.,][]{luvoir18}.
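The angular scales quoted here follow from the angular-diameter distance for the simulation's cosmology (Section \ref{sec:simdetails}); a rough check with a simple Simpson integration of the comoving distance:

```python
import math

# Rough check of the angular scales quoted above: angular-diameter
# distance at z = 3 for the simulation cosmology (Om = 0.285, h = 0.695),
# with the comoving distance integrated by Simpson's rule.
c = 299792.458                 # km/s
H0 = 69.5                      # km/s/Mpc
Om, OL = 0.285, 0.715
z = 3.0

def E(zz):
    return math.sqrt(Om * (1.0 + zz)**3 + OL)

n = 1000                       # even number of Simpson intervals
dz = z / n
s = 1.0 / E(0.0) + 1.0 / E(z)
for i in range(1, n):
    s += (4.0 if i % 2 else 2.0) / E(i * dz)
D_C = (c / H0) * (dz / 3.0) * s    # comoving distance [Mpc]
D_A = D_C / (1.0 + z)              # angular-diameter distance [Mpc]

arcsec_per_kpc = (1e-3 / D_A) * (180.0 / math.pi) * 3600.0
# ~0.13 arcsec per proper kpc, so 1 kpc ~ 0.13" and 40 kpc ~ 5"
```

This reproduces both the 0.13\arcsec\ per kpc scale quoted above and the 5\arcsec\ extent of the 40 kpc maps in Figure \ref{emis_map_zoom.fig}.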
\subsection{Surface Brightness Profiles and Covering Fractions}\label{sect:covering}
The emission maps of Figures \ref{emis_map.fig} and \ref{emis_map_zoom.fig} show by eye the differences in the extent and scale of emission in the CGM and how it depends on the simulation resolution. In this section, we quantify these differences with a focus on the observational implications by looking at the radial profile and covering fractions of the surface brightness.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{Figure4.pdf}
\caption{Radial profiles of the surface brightness maps shown in Figure \ref{emis_map.fig} for four emission lines and all three simulations. The colors correspond to the color maps of Figures \ref{emis_map.fig}--\ref{emis_ang_res.fig}: green -- detectable; blue/pink -- possible to detect; gray -- beyond current instruments. The detectable emission is found within 20 kpc for all the simulations. For non-detectable emission, structures within the CGM gas are much better traced in the simulations with better spatial resolution. \label{sb_prof.fig}}
\end{figure*}
Figure \ref{sb_prof.fig} takes every pixel shown in the emission maps of Figure \ref{emis_map.fig} and plots the radial profile of the surface brightness for four emission lines for the given projection axis. The colors here are generally matched to the colorbar of Figure \ref{emis_map.fig}. Radial profiles averaged over the three primary simulation axes tend not to show much variation, so this single axis is illustrative \citep{corlies16}. The radial profiles confirm that easily detectable emission is confined to the central parts of the galaxy. However, the potentially detectable (blue and pink) pixels can be found as far as 20\,kpc from the center of the galaxy, allowing detections that can be confidently attributed to the CGM. While most pixels remain undetectable, the radial profiles also highlight how the low resolution in the standard run does not fully sample such low surface brightness structures in the outer CGM.
Figure \ref{sb_prof.fig}'s emission-focused radial profiles can be compared to the absorption-focused radial profiles in Figure~7 of \citetalias{peeples18}. The emission follows the \ion{H}{1}\ column density the most closely, with the brightest emission and the largest \ion{H}{1}\ column densities both found within 20\,kpc of the galaxy. However, the steepness of the SB profiles does not change as strongly with CGM resolution as it did for the column density profiles. This is in part because we have chosen to highlight the detectable emission, so the plot spans almost 12 orders of magnitude on the $y$-axis. Looking instead at the undetectable pixels, more pixels exist at a larger spread of values, so the radial profile is flatter at these larger radii. The similarity to the \ion{H}{1}\ nevertheless suggests that the main driver of these similar SB profiles is the strong dependence of the emissivity on the gas density, whereas the number of pixels traces the volume-filling, diffuse gas.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figure5.pdf}
\caption{Number of pixels above a given surface brightness limit for four emission lines and all three simulations, averaged over the three primary projection axes of the simulation boxes. Fewer than $1\%$ of pixels are observable and the fraction does not vary greatly with the simulation resolution.}
\label{cov_frac.fig}
\end{figure}
We further quantify the observability of the emission by considering covering fractions of varying SB levels. Figure \ref{cov_frac.fig} shows the fraction of pixels above a given surface brightness level for four emission lines for each of the simulations. The covering fraction is then averaged over all three axes of the simulation box to reduce the influence of any preferential viewing angles.
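The covering fraction described here is simply the fraction of map pixels exceeding a surface brightness threshold, averaged over the projections. A minimal sketch of that computation (our own illustration, with made-up toy numbers):

```python
import numpy as np

def covering_fraction(sb_map, sb_limit):
    """Fraction of pixels with surface brightness above sb_limit."""
    return float(np.mean(sb_map > sb_limit))

# Toy maps standing in for projections along the three primary axes:
projections = [np.array([[2.0, 0.1], [0.5, 3.0]]),
               np.array([[0.2, 0.3], [1.5, 0.4]]),
               np.array([[4.0, 0.1], [0.2, 0.3]])]
fractions = [covering_fraction(m, sb_limit=1.0) for m in projections]
# Averaging over the axes reduces the influence of any one viewing angle:
mean_fraction = sum(fractions) / len(fractions)
```

Sweeping `sb_limit` over a grid of surface brightness values yields the curves plotted in the covering fraction figure.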
In general, fewer than $1\%$ of the pixels are detectable for any ion at the highest resolution of each simulation (and binned at 137\,pc for the standard run). Above 1\,photon\,s$^{-1}$\,cm$^{-2}$\,sr$^{-1}$, the 137\,pc simulation does have a higher covering fraction than the standard simulation. Denser peaks are allowed to form because of the higher spatial resolution, which leads to brighter emission. On the other hand, the 68\,pc simulation has the lowest covering fraction because the bright emission is confined to smaller physical regions, which leaves more of the pixels at lower surface brightness. Since each pixel is smaller, the overall number of observable pixels does not decrease (see Figure \ref{sb_prof.fig}); only their percentage of the total does.
\subsection{Tracing Kinematic Properties}\label{sect:kinematic}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{Figure6.pdf}
\caption{Maps of the emissivity-weighted LOS velocity after the bulk velocity of the refinement region has been subtracted. Direct comparisons between simulations are difficult because the orientation of the galaxy changes to match the emission maps shown in Figure \ref{emis_map.fig}. Increasing the resolution increases variations in the kinematics amongst the different emission lines and reveals complex kinematic structures on the smallest scales. \label{velmap.fig}}
\end{figure*}
A unique strength of using IFUs is that for every pixel, a spectrum is generated, providing kinematic information that can inform our understanding of the gas origins. To begin to estimate such properties from the simulation, we calculate the bulk velocity of the entire refinement region and subtract it from the cells within the box to provide a meaningful frame of reference for the velocities. Figure \ref{velmap.fig} shows the emissivity-weighted line of sight (LOS) velocity at $z=3$ for each simulation; the projection axis is the same as for the emission maps shown in Figure \ref{emis_map.fig}.
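The weighting used for these maps can be sketched as follows: for each map pixel, the LOS velocity of every cell along the sightline is averaged with the cell's emissivity as the weight (a schematic of our own; the variable names are illustrative, not from the actual pipeline):

```python
import numpy as np

def emissivity_weighted_velocity(emissivity, v_los, axis=0):
    """Collapse a 3D cube to a 2D map of the emissivity-weighted
    line-of-sight velocity along the projection axis."""
    return (emissivity * v_los).sum(axis=axis) / emissivity.sum(axis=axis)

# Two cells along each sightline of a tiny (2, 1, 2) cube:
emis = np.array([[[1.0, 3.0]],
                 [[1.0, 1.0]]])
vel  = np.array([[[10.0,  0.0]],
                 [[-10.0, 40.0]]])
vmap = emissivity_weighted_velocity(emis, vel, axis=0)
# First sightline: equal weights cancel (+10, -10) -> 0 km/s.
# Second sightline: the bright cell dominates, (3*0 + 1*40)/4 = 10.
```

The weighting means a single bright clump can dominate the velocity of a pixel, which is why the refined runs show more small-scale velocity structure.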
We caution against directly comparing the simulations because the orientation of the galaxy relative to the projection axis is somewhat different in each simulation. Nevertheless, some general trends can still be identified.
In the standard simulation, there is not much variation in the velocity structure amongst the different emission lines. In contrast, in the highly refined simulations, while the bulk velocity flows remain similar, more small-scale velocity fluctuations are seen as the ionization energy of the line increases. H$\alpha$ and the other low ions are tracing dense gas which is dominated by coherent filaments at these high redshifts. The higher ions, like \ion{O}{6}, trace the volume-filling gas which has more peculiar motions from outflows.
These maps demonstrate how the high resolution in the CGM changes the kinematics which in turn will affect the predicted emission line profiles, akin to the ways we showed how simulated velocity discretization affects absorption line profiles \citepalias{peeples18}. Thus, this resolution is crucial for using simulations to inform the interpretations of future observations of circumgalactic gas kinematics in emission.
\section{Connecting Emission to Physical Conditions}\label{sec:phys}
Ultimately, the goal of observing the CGM in emission is to
understand the physical properties---the density, temperature, and metallicity---of the gas. In this section, we link the changes in emission properties to changes in the physical properties of the gas.
\subsection{CGM Physical Properties and Resolution}\label{sec:rad_profs}
Figure \ref{rad_prof.fig} shows the radial profiles of temperature, hydrogen number density, metallicity, and a 1D velocity for the three simulations presented throughout the paper. In general, the {\em average} physical properties of the gas are unchanged, which is not surprising since all that varies between these simulations is the numerical resolution. However, we do see that the \emph{breadth} of all of these quantities has increased. In the highly resolved CGM, gas can exist at both low and high densities, temperatures, metallicities, and velocities at all radii. That is, the gas is more multiphase at all radii in this halo at $z=3$ when the CGM is more highly resolved.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{Figure7.pdf}
\caption{Radial profiles at $z=3$ of the temperature, density, metallicity, and z-velocity for the three simulations discussed. Each cell in the volume is depicted, such that fewer points correspond to fewer cells in that simulation. While changing the resolution of the CGM does not affect the bulk, average properties of the gas, the spread in all of these physical quantities has changed dramatically at all radii. \label{rad_prof.fig}}
\end{figure*}
A broader distribution of gas densities means there is more high density gas, which in turn translates directly to the higher total emission noted in Section \ref{sect:sb_maps} and shown in Table \ref{tab:lumin}. Because the emission is predominantly produced by gas cooling through collisional excitation of these lines, the $n^2$ nature of this process means the strength of emission depends strongly on the density. Even though the bulk of the gas remains undetectable, the brightness of the source generically increases because of this more highly resolved, dense gas.
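The density scaling invoked here can be written out explicitly. For collisionally excited lines, the emissivity goes as the product of the electron and ion densities (a standard relation; the exact form is not quoted in the text):

```latex
% Standard collisional-excitation scaling (not quoted in the text):
\begin{equation*}
\varepsilon_{\rm line} \;\propto\; n_e \, n_{\rm ion} \, q_{\rm exc}(T)
\;\propto\; n_{\rm H}^2 ,
\end{equation*}
% at fixed abundance and ionization state, where q_exc(T) is the
% temperature-dependent collisional excitation rate coefficient.
```

This quadratic dependence is why resolving small, dense clumps raises the total luminosity even when the mass-weighted average density is unchanged.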
Similarly, when the gas is more artificially mixed, it settles at a single temperature ($\sim 10^{5.5}$\,K in the standard run, for example). This results in the gas cooling more strongly through certain lines (\ion{C}{4},\ \ion{O}{6}) to the detriment of others (\ion{Si}{4},\ \ion{C}{3}). In contrast, the increased resolution allows the gas to be more broadly distributed in temperature, meaning that more gas can exist at the peak of the cooling curves of a larger number of metal lines.
Finally, the emissivity of the gas is also regulated by its metallicity. Just as the temperature changes when the gas is artificially mixed, so too does the metallicity. This can help explain why gas is not uniformly brighter in the high resolution simulations with denser gas. If the denser gas also now has lower metallicity, then the emission will not become as bright as gas at the same density but with higher metallicity from the artificial mixing.
In short, the combination of larger spreads in density, temperature, and metallicity results in more overall emission and in a different spatial distribution of that emission. The complicated interplay of these properties is why emission can be such a useful tool for diagnosing the CGM.
\subsection{Examining the Ionization Process Driving Emission}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figure8.pdf}
\caption{Hydrogen number density ($n_{\mathrm{H}}$) and temperature, weighted by the \ion{O}{6}\ emissivity. The left column shows the standard simulation and the right column the 137\,pc simulation. In the top panels, colors correspond to the average surface brightness of pixels in each bin. In the bottom panels, colors show the average \ion{O}{6}\ ion fraction of pixels in each bin. The bottom panels show that high \ion{O}{6}\ ion fractions are generated by both photoionization (low density, low temperature) and collisional ionization (high density, high temperature). However, only the collisional ionization, which occurs near the peak of the \ion{O}{6}\ cooling curve, generates observable emission, as seen in the top panels. \label{phase_OVI.fig}}
\end{figure}
It is a long-standing debate as to whether the \ion{O}{6}\ seen in absorption is predominantly photo- or collisionally-ionized \citep{tripp08,savage14,werk16,oppenheimer16,nelson18}. Figure \ref{phase_OVI.fig} shows the hydrogen number density ($n_{\mathrm{H}}$) and temperature weighted by the \ion{O}{6}\ emissivity along the line of sight for each pixel in the emission maps of Figure \ref{emis_map.fig}. In the top panels, the colors correspond to the average surface brightness of pixels that contribute to that bin, matching the color maps of Figures \ref{emis_map.fig}--\ref{sb_prof.fig}. The normalized histograms show the distribution of $n_{\rm H}$ and temperature for pixels falling within a given detectability bin. The phase diagrams show a clear trend that higher density leads to increasingly brighter emission. However, these dense regions also need to exist at the temperature at the peak of the cooling curve of that line to produce observable emission. Indeed, the observable bins all cluster around $T=10^{5.5}-10^6$ K for the \ion{O}{6}\ line.
Overall, there is not much variation between the two simulations in terms of the \ion{O}{6}-emitting gas. The phase space is clearly more finely sampled by the higher resolution run, and a slightly wider range of densities and temperatures contribute to detectable pixels, most likely because the metallicity has increased for some of the pixels.
The bottom panels show the same phase diagrams but colored to show the average ion fraction of pixels contributing to that bin. In both simulations, there is a large fraction of \ion{O}{6}\ for hot, dense gas (top right of each panel) representing collisionally ionized gas. There is also a peak in the \ion{O}{6}\ fraction at lower densities and at lower temperatures, revealing that there is also photoionized \ion{O}{6}\ gas in the simulation. However, comparing the top and bottom panels, we can see that all of the gas that can be detected in emission comes from the collisionally ionized regime.
\subsection{The Effect of Angular Resolution on Deriving Physical Gas Properties}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figure9.pdf}
\caption{Top panels show the emission maps of \ion{C}{3}\ and \ion{O}{6}\ for the 137\,pc simulation at its fiducial resolution and for an overplotted image where the resolution has been degraded to 10\,kpc. The pink pixel in both coarse images is found and the corresponding region in the high resolution image is identified. The LOS properties of the coarse simulation are then plotted in the lower panels as gray, dotted lines and those of the highly refined simulation in solid colors. The solid colored line corresponds to the median values and the shaded region brackets the minimum and maximum value at each LOS position. The coarse resolution blends the gas physical properties such that the actual range of the gas's physical values is lost, limiting what can be inferred from such a measurement. \label{pixel_smear.fig}}
\end{figure}
Finally, the high resolution simulations can help place constraints on the degree to which the CGM properties are artificially blended by both coarse spatial resolution in the simulations and coarse angular resolution in the observations. The top panels of Figure \ref{pixel_smear.fig} show the emission maps for two lines, \ion{C}{3}\ and \ion{O}{6}, from the 137\,pc simulation, and overplotted is the same image but with the pixel size degraded to 10\,kpc. The color map matches that of Figures \ref{emis_map.fig}--\ref{sb_prof.fig}. Visually, a single given observable pixel in the coarse image corresponds to a complex region with a large range of surface brightnesses and gas structures in the high resolution simulation. A single pixel, whether simulated or observed, is unable to capture such variations in CGM physical properties.
To understand this variation, we de-project the cube used to generate the emission map to recover the LOS information. We first identify the position where the pink pixel is found in the 10\,kpc map and the corresponding region in the 137\,pc image. In the lower panels of Figure \ref{pixel_smear.fig}, we plot the physical properties along the LOS for the single pixel in the coarse map as gray, dotted lines. The line-of-sight variation of the emissivity, hydrogen number density, temperature, metallicity, and LOS velocity in the low resolution cube is evident.
For the set of pixels in the corresponding region of the full resolution cube, the colored lines show the median values of the physical properties along the LOS and the shaded regions correspond to the minimum and maximum values at each distance. The high resolution cube demonstrates that the coarser resolution in either simulations or observations blends the gas properties such that their variation is decreased. Gas is neither as hot nor as cold, as dense nor as diffuse, as metal-rich nor as metal-poor, as out-flowing nor as in-flowing in the coarse image as it is in the highly resolved image.
Furthermore, the emission in a given 10\,kpc region is ultimately being driven by a handful of pixels that represent much smaller spatial scales. The brightest pixels can have emissivities of $10^{-15}$ to $10^{-10}$\,photons\,s$^{-1}$\,cm$^{-3}$\,sr$^{-1}$ as opposed to the median values of $10^{-25}$\,photons\,s$^{-1}$\,cm$^{-3}$\,sr$^{-1}$. How the properties of these bright pixels vary with the LOS and how these properties compare to what would be derived from {\sc cloudy} modeling of the measured emission on these scales will be the focus of future work.
\section{Instrument-specific Emission Maps} \label{sect:instr}
In this section, we re-present the surface brightness maps at $z=3$ of the 137\,pc simulation to reproduce the properties of two ground-breaking optical integral field units: KCWI on Keck and MUSE on the VLT. Direct detection of circumgalactic emission is one of the primary science goals for both of these instruments. Both have multiple observing modes, but we focus here on those which have the most sensitive surface brightness limits combined with the best angular resolution. This is the ``full-slice'' mode on KCWI and the ``narrow field'' mode on MUSE, the details of which we summarize in Table \ref{tab:instr}.
\begin{table}
\centering
\begin{tabular}{ |c|c|c| }
\hline
\ & KCWI & MUSE \\
\hline
Mode Name & Full Slice & Narrow Field \\
FOV & 20\arcsec $\times$ 33\arcsec & 7.5\arcsec $\times$ 7.5\arcsec \\
Angular Resolution & 0.5\arcsec & 0.025\arcsec \\
Bandpass & 3500--5600 \AA & 4650--9300 \AA \\
Exposure Time & 30h & 27h\\
SB Limit & $7\times 10^{-21}$ & $1\times 10^{-19}$ \\
\hline
\end{tabular}
\caption{Summary of details of observing modes modeled in Section \ref{sect:instr} for KCWI and MUSE. Surface brightness limits are given in erg\,s$^{-1}$\,cm$^{-2}$\,arcsec$^{-2}$.} \label{tab:instr}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Figure10.pdf}
\caption{Emission maps to match the properties of two specific observing modes on KCWI and MUSE as outlined in Table \ref{tab:instr}. \emph{Pixels that lie above the surface brightness limit of the instrument are colored to stand out from the colormap}: red for KCWI and blue for MUSE. Only a few pixels are detectable by either instrument. Gray boxes represent lines that have shifted out of the bandpass of the respective instrument and thus cannot be observed at $z=3$. The large FOV of KCWI allows the entire CGM to be observed simultaneously. MUSE has a similarly broad observing mode but here we highlight the ``narrow field'' mode, which has exceptional angular resolution. Such high angular resolution allows for a detailed look at the gas properties that are only resolved in the simulation because of our new refinement scheme. \label{instr.fig}}
\end{figure*}
Figure \ref{instr.fig} shows the emission maps for the ions of interest at $z=3$ for both instruments. The relative sizes of the field of view (FOV) are depicted in the first two columns; the third shows a larger version of the MUSE images for clearer comparison with the KCWI images. All images reflect the stated angular resolution of the instruments' observing modes from their respective websites\footnote{https://www2.keck.hawaii.edu/inst/kcwi/configurations.html}\footnote{https://www.eso.org/sci/facilities/develop/instruments/muse.html}. For MUSE, the surface brightness limit is taken from \citet{wisotzki18}, who observed in the wide field mode. In the narrow field mode we discuss here, the limits should be similar for all but readnoise-limited cases, so we use this value as a good rule of thumb for this exposure time. We focus on the narrow field mode since the small scales of the emission that are the focus of this work may raise the mean SB measured per spaxel as the emission is concentrated by the higher resolution of the instrument.
The left panels of Figure \ref{instr.fig} show how the large FOV of KCWI in this mode (corresponding to $158\times 260$ physical kpc at $z=3$) allows the entire CGM to be observed simultaneously. In this way, a single observation can capture the processes shaping the inner and outer CGM, whether that is cosmic filaments, minor mergers, or starburst-driven or AGN-driven outflows.
MUSE has a mode that enables a FOV twice the size of the KCWI mode presented above, but here we have chosen to highlight the predicted performance of the instrument when operating with full adaptive optics. The superb angular resolution in the narrow field mode allows the details of the small-scale gas structure to be probed. The right panels of Figure \ref{instr.fig} demonstrate how both high spatial resolution in the simulations and high angular resolution in the observations are needed to understand the distribution of physical and spatial properties of the CGM as laid out in the previous sections.
A major consideration that does not change with observing mode is the bandpass of the instruments. KCWI currently observes at much bluer wavelengths than MUSE. Because of the varying wavelengths of the emission lines, neither instrument can observe all of the metal lines presented here simultaneously. At lower redshifts, even more of the lines have shifted blue-ward of the MUSE bandpass. (H$\alpha$, which is detectable at $0 < z < 0.42$, is the notable exception.)
Despite the FOV, bandpass, and angular resolution trade-offs, both instruments are ultimately limited by their surface brightness sensitivities. For the panels in Figure~\ref{instr.fig}, pixels that are brighter than the limits of each instrument's observing mode are colored red (KCWI) or blue (MUSE). For both instruments and for any line, there are at most a handful of pixels that clear the detection limit.
Binning (reducing angular resolution) or stacking (averaging over individual CGM features) may allow for better overall detection of the CGM emission. However, this single galaxy appears largely undetectable at $z=3$ for these instruments. In the next section, we discuss the implications of this for CGM emission studies overall.
\section{Discussion} \label{sec:disc}
The instrument-specific emission maps shown above present a seemingly bleak picture for the future of directly detecting emission from the CGM. However, a more accurate statement is that they indicate that emission from \emph{this} galaxy remains out of reach. While a Milky Way-like progenitor is interesting for understanding the evolution of galaxies like our own, this is not an ideal candidate to target for current emission observations. This galaxy has a total mass of only $4\times 10^{10}$\,${\rm M}_{\odot}$, has a star formation rate of 3--4\,${\rm M}_{\odot}$\,yr$^{-1}$, and has no active AGN. A more massive galaxy will likely have a denser CGM, be fed by stronger cosmic filaments, and have more in-falling satellites to provide dense, stripped material throughout its halo. Higher star formation rates and AGN feedback will eject more mass and metals into the CGM as well as generate more radiation to enhance photoionization and can lead to strong time variability in the emission \citep{sravan16}. This effect is seen in the low-redshift COS-Bursts data \citep{heckman17}. Thus, the prospects for more massive, active galaxies are promising for high-$z$ studies.
In addition to looking at galaxies with more observationally favorable properties, this work also looks towards the development of extremely large telescopes (ELTs) that may search for the CGM emission of progenitors of Milky Way-like galaxies. With larger collecting areas, ELTs can push to even lower SB limits with the same angular resolution as current large telescopes, increasing our chances of detecting galaxies such as the one presented in this paper. However, there will be trade-offs: if the typical solid angle of the sky sampled by these new instruments is significantly smaller (e.g., to take advantage of the extreme adaptive optics corrections on the ELTs), the sensitivity to diffuse gas may remain little changed. Studies such as this one can help evaluate such trade-offs in future instrument designs in light of different science goals.
Besides choosing galaxies with more favorable emission properties or lowering the surface brightness limit of observations, stacking remains a viable option for detecting emission from the CGM. While valuable information is lost pertaining to the exact gas distribution around each galaxy, stacking large numbers of galaxies shows that the extent of ionized gas is dependent on galaxy properties \citep{zhang18a} and can be used to probe the dominant source of ionization of the gas at different galaxy masses \citep{zhang18b}. Large-scale cosmological simulations could also be used to mimic such a stacking procedure and examine any biases due to viewing angles and time variability though that is beyond the scope of this paper.
Furthermore, one of the biggest hindrances to detecting this emission is simply the distance and the resulting surface brightness dimming. Observing galaxies at lower redshift and in the UV, while still challenging, helps mitigate this particular limitation. \citet{corlies16} showed that emission from a Milky Way-like galaxy at $z=0$ can potentially be detected as far as 120\,kpc from the galaxy and that the covering fraction of detectable pixels can be as high as 5--10\%\ depending on the surface brightness limit assumed. Similar fractions are predicted for a larger, cosmological volume by \citet{bertone10}. UV missions such as FIREBall-2 and LUVOIR may provide our most promising prospect for measuring the CGM in metal-line emission \citep{grange16,luvoir18}.
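For reference, the dimming invoked here follows the standard cosmological surface brightness scaling (a textbook relation, not derived in the text). In the photon-count units used throughout this paper, one factor of $(1+z)$ drops out because photon energies are not counted:

```latex
% Standard cosmological surface brightness dimming:
\begin{equation*}
{\rm SB}_{\rm energy} \propto (1+z)^{-4}, \qquad
{\rm SB}_{\rm photon} \propto (1+z)^{-3} .
\end{equation*}
```

Either way, moving a source from $z=3$ to $z\sim0$ recovers roughly two orders of magnitude in surface brightness, which is why the low-redshift UV predictions cited above are so much more optimistic.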
Finally, this paper has focused on metal-line emission because of its usefulness in tracing large-scale galactic gas flows and probing the ionization state of the CGM. Despite the limitations in interpreting its emission, Lyman-$\alpha$\ is expected to be considerably brighter than the next brightest emission line \citep{bertone10}. Future work will focus on combining these new, highly-refined simulations with a full radiative transfer code to make accurate predictions of Ly$\alpha$\ emission maps and kinematics. Similarly, although H$\alpha$ has the highest surface brightness, its long wavelength makes it unobservable by the optical IFUs we present here. However, this makes it a good candidate for observation with the {\em James Webb Space Telescope}; we will explore this potential in future work.
\section{Conclusions and Future Directions} \label{sec:conc}
Observing emission from the CGM would provide us with an unprecedented understanding of the 3D spatial and kinematic properties of how this gas is flowing into and out of galaxies, regulating their evolution. In this paper, we have focused on making metal-line emission predictions for the progenitor of a Milky Way-like galaxy at $z=3$. Our novel approach to resolving the CGM has allowed us to probe structures on scales smaller than ever before and to understand how the physical properties of these scales link back to observable gas. All of the results we present here owe to changes in the simulated circumgalactic resolution alone, with no changes to the resolution of the interstellar medium or sub-grid physics recipes.
Our main conclusions are:
\begin{enumerate}
\item High spatial resolution in the CGM is necessary to better predict its emission properties. Improved spatial resolution allows gas to clump on scales smaller than resolved by typical cosmological simulations. Many of these clumps are potentially detectable and found at larger distances from the galaxy than clumps in standard-resolution simulations.
\item Globally, increasing the CGM resolution alone also increases the total luminosity of the lines considered here by an order of magnitude compared to the standard simulation.
\item The differences in emission can be attributed to the broader range of physical properties the CGM possesses once it is more finely resolved. More multiphase gas exists in the highly refined simulations at all distances from the galaxy as compared to the standard simulation.
\item Two instrument-specific maps for observing modes on KCWI and MUSE show that the emission from a small, weakly star-forming, high-redshift galaxy is generally not detectable. Simulations like these can be used to identify better candidates for direct detection in the future.
\end{enumerate}
Moving forward, understanding the CGM will continue to be a science driver for future instrumentation, as it was for both KCWI and MUSE. Interpreting new IFU observations that probe small angular scales requires more simulations like the ones we present here that can achieve small spatial resolutions in the halo.
Future generations of FOGGIE simulations will include more massive galaxies as well as those with more active merger and star formation histories. These systems will likely have a higher probability of CGM emission detection with current instrumentation and will provide a broader theoretical sample of highly-resolved galactic halos to guide target selection for future observations.
Observing galaxies at lower redshift will also improve the likelihood of detecting this gas by decreasing the amount of SB dimming. Thus, future FOGGIE simulations will also focus on expanding the size of our refinement region to encompass the entire virial radius of galaxies at $z=0$ to make predictions for and inform the development of future UV observatories such as LUVOIR.
\acknowledgments
LC would like to thank Britton Smith for helpful conversations throughout this project. We gratefully acknowledge the National Science Foundation for support of this work via grant AST-1517908, which helped support the contributions of LC, BWO, NL, JOM, and JCH. LC was additionally supported in part by HST AR \#15012. BWO was supported in part by NSF grants PHY-1430152, AST-1514700, OAC-1835213, by NASA grants NNX12AC98G, NNX15AP39G, and by HST AR \#14315. NL was also supported by NASA ADAP grant NNX16AF52G. This work benefited from the successkid and sunglasses emoji on Slack. Computations described in this work were performed using the publicly-available Enzo code, which is the product of a collaborative effort of many independent scientists from numerous institutions around the world. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center and were sponsored by NASA's Science Mission Directorate; we are grateful for the superb user-support provided by NAS. Resources were also provided by the Blue Waters sustained-petascale computing project, which is supported by the NSF (award number ACI 1238993 and ACI-1514580) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its NCSA.
\facilities{NASA Pleiades, NCSA Blue Waters}
\software{astropy \citep{astropy2},
{\sc cloudy} \citep{ferland13},
Enzo \citep{bryan14},
grackle \citep{smith17},
yt \citep{turk11}
}
\bibliographystyle{aasjournal}
\section{Introduction}
Neural networks have for many years been among the most promising approaches
in nonparametric statistics for multivariate
statistical applications, in particular in pattern recognition
and in nonparametric regression
(see, e.g.,
the monographs \cite{AB09, DGL96, GKKW02, H98, HPK91, R95}).
In recent years the
focus in applications has been on what is called deep learning, where
multilayer feedforward neural networks with many hidden
layers are fitted to observed data (see, e.g.,
\cite{Sch15} and the
literature cited therein). Motivated by this practical success,
there is also an increasing interest
in the literature in showing good theoretical properties
of these neural networks, see, e.g., \cite{MP16a, ES15, GoPeElBo19, Y18, YZ19, LS20}
and the literature cited therein for the analysis
of corresponding
approximation properties of neural networks.
\subsection{Nonparametric regression}
In this paper we study this kind of estimate
in connection with nonparametric regression. Here,
$(\bold{X},Y)$ is an $\mathbb{R}^d \times \mathbb{R}$--valued random vector
satisfying
${\bf E} \{Y^2\}<\infty$, and given a sample
of size $n$ of $(\bold{X},Y)$, i.e., given a data set
\begin{equation*}
{\mathcal{D}}_n = \left\{
(\bold{X}_1,Y_1), \ldots, (\bold{X}_n,Y_n)
\right\},
\end{equation*}
where
$(\bold{X},Y)$, $(\bold{X}_1,Y_1)$, \ldots, $(\bold{X}_n,Y_n)$ are i.i.d.,
the aim is to construct an estimator
\[
m_n(\cdot)=m_n(\cdot, {\mathcal{D}}_n):\mathbb{R}^d \rightarrow \mathbb{R}
\]
of the so--called regression function $m:\mathbb{R}^d \rightarrow \mathbb{R}$,
$m(\bold{x})={\bf E}\{Y|\bold{X}=\bold{x}\}$ such that the so--called $L_2$-error
\[
\int |m_n(\bold{x})-m(\bold{x})|^2 {{\bf P}}_{\bold{X}} (d\bold{x})
\]
is ``small'' (cf., e.g., \cite{GKKW02}
for a systematic introduction to nonparametric regression and
a motivation for the $L_2$-error).
\subsection{Neural Networks}
In order to construct such regression estimates with neural
networks, the first step is to define a suitable
space of functions $f:\mathbb{R}^d \rightarrow \mathbb{R}$ by using neural networks.
The starting point here is the choice of an activation function $\sigma: \mathbb{R} \to \mathbb{R}$.
Traditionally, so--called squashing functions are chosen as activation
function $\sigma: \mathbb{R} \to \mathbb{R}$, which are nondecreasing
and satisfy $\lim_{x \rightarrow - \infty} \sigma(x)=0$
and
$\lim_{x \rightarrow \infty} \sigma(x)=1$,
e.g., the so-called sigmoidal or logistic squasher
\begin{equation*}
\sigma(x)=\frac{1}{1+\exp(-x)}, \quad x \in \mathbb{R}.
\end{equation*}
Recently, unbounded activation functions have also been used, e.g., the
ReLU activation function
\begin{align*}
\sigma(x)=\max\{x,0\}.
\end{align*}
The network architecture $(L, \textbf{k})$ consists of a positive integer $L$, called the \textit{number of hidden layers}, and a \textit{width vector} $\textbf{k} = (k_1, \ldots, k_{L}) \in \mathbb{N}^{L}$ that describes the number of neurons in the first, second, $\ldots$, $L$-th hidden layer. A multilayer feedforward neural network with network architecture $(L, \textbf{k})$ and ReLU activation function $\sigma$ is a real-valued function defined on $\mathbb{R}^d$ of the form
\begin{equation}\label{inteq1}
f(\bold{x}) = \sum_{i=1}^{k_L} c_{1,i}^{(L)}f_i^{(L)}(\bold{x}) + c_{1,0}^{(L)}
\end{equation}
for some $c_{1,0}^{(L)}, \ldots, c_{1,k_L}^{(L)} \in \mathbb{R}$ and for $f_i^{(L)}$'s recursively defined by
\begin{equation*}
f_i^{(s)}(\bold{x}) = \sigma\left(\sum_{j=1}^{k_{s-1}} c_{i,j}^{(s-1)} f_j^{(s-1)}(\bold{x}) + c_{i,0}^{(s-1)} \right)
\end{equation*}
for some $c_{i,0}^{(s-1)}, \dots, c_{i, k_{s-1}}^{(s-1)} \in \mathbb{R}$,
$s \in \{2, \dots, L\}$,
and
\begin{equation*}
f_i^{(1)}(\bold{x}) = \sigma \left(\sum_{j=1}^d c_{i,j}^{(0)} x^{(j)} + c_{i,0}^{(0)} \right)
\end{equation*}
for some $c_{i,0}^{(0)}, \dots, c_{i,d}^{(0)} \in \mathbb{R}$.
The space of neural networks with
$L$ hidden layers and $r$ neurons per layer
is defined by
\begin{align}\label{F}
\mathcal{F}(L, r) = \{ &f \, : \, \text{$f$ is of the form } \eqref{inteq1}
\text{ with }
k_1=k_2=\ldots=k_L=r
\}.
\end{align}
As there is no further restriction on the network architecture (e.g., no sparsity restriction as in \cite{Sch17}) and as two neurons are connected if and only if they belong to neighboring layers, we refer to the networks of the class $\mathcal{F}(L, r)$, similarly to \cite{YZ19}, as \textit{fully connected feedforward neural networks}. The representation of this kind of network as a directed acyclic graph is shown in \hyperref[neunet]{Fig.~\ref*{neunet}}, which gives an impression of what such a network looks like and why we call those networks \textit{fully connected}. Note that this network class also contains networks with some weights chosen as zero.
\begin{figure}[h!]
\centering
\pagestyle{empty}
\def1.5cm{2.5cm}
\begin{tikzpicture}[shorten >=1pt,->,draw=black, node distance=1.5cm, scale=1]
\centering
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=10pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=
black];
\tikzstyle{output neuron}=[neuron, fill=
black];
\tikzstyle{hidden neuron}=[neuron, fill=black!50
];
\tikzstyle{annot} = [
text centered
]
\foreach \name / \y in {1,...,4}
\node[input neuron, pin=left:\footnotesize{$x^{(\y)}$},
xshift=1cm
] (I-\name) at (0,-\y) {};
\foreach \name / \y in {1,...,5}
\path[
yshift=0.5cm
]
node[hidden neuron] (H-\name) at (1.5cm,-\y cm) {};
\foreach \name / \y in {1,...,5}
\path[yshift=1cm]
node[hidden neuron, right of = H-\name] (H2-\name) {};
\node[output neuron,pin={[pin edge={->}]right:\footnotesize{$f(\bold{x})$}}, right of=H2-3, xshift=-0.8cm] (O) {};
\foreach \source in {1,...,4}
\foreach \dest in {1,...,5}
\path (I-\source) edge (H-\dest);
\foreach \source in {1,...,5}
\foreach \dest in {1,...,5}
\path (H-\source) edge (H2-\dest);
\foreach \source in {1,...,5}
\path (H2-\source) edge (O);
\node[annot,above of =H-1, xshift=1.3cm, node distance=1cm] (hl) {\footnotesize{Hidden layers}};
\node[annot,above of=I-1, node distance = 1cm] {\footnotesize{Input}};
\node[annot,above of=O, yshift=1.5cm, node distance = 1cm] {\footnotesize{Output}};
\node[draw, below of= H-5, yshift=2.2cm, xshift=-0.4cm, rounded corners, minimum size=1cm] (r) {\footnotesize{$\sigma(\bold{c}^t\bold{x}+c_0)$}};
\end{tikzpicture}
\caption{A fully connected network of the class $\mathcal{F}(2,5)$}
\label{neunet}
\end{figure}
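The recursive definition \eqref{inteq1} can be illustrated by a short forward-pass sketch. The following hypothetical NumPy implementation (not part of the estimation procedure; all names are illustrative) evaluates a network of the class $\mathcal{F}(L,r)$ with given coefficients:

```python
import numpy as np

def relu(z):
    # ReLU activation sigma(x) = max{x, 0}
    return np.maximum(z, 0.0)

def network(x, hidden, output):
    """Evaluate a fully connected ReLU network of the class F(L, r).

    `hidden` is a list of L pairs (W, b): row i of W holds the inner
    weights c_{i,j}^{(s-1)} and b the constants c_{i,0}^{(s-1)}.
    `output` is the pair (c, c0) of output coefficients in (1).
    """
    a = np.asarray(x, dtype=float)
    for W, b in hidden:              # the recursion f_i^{(s)} over the layers
        a = relu(W @ a + b)
    c, c0 = output
    return float(c @ a + c0)         # the linear output layer in (1)

# a network of the class F(2, 5) on R^4 with random illustrative coefficients
rng = np.random.default_rng(0)
d, L, r = 4, 2, 5
hidden = [(rng.normal(size=(r, d)), rng.normal(size=r))]
hidden += [(rng.normal(size=(r, r)), rng.normal(size=r)) for _ in range(L - 1)]
output = (rng.normal(size=r), float(rng.normal()))
y = network(rng.normal(size=d), hidden, output)
```

Setting all inner weights to zero reproduces the constant network $f \equiv c_{1,0}^{(L)}$, which shows that networks with some (or all) weights equal to zero are contained in the class.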
In the sequel the number $L=L_n$ of hidden layers and number $r=r_n$ of neurons per hidden layer
of the above function space are properly chosen. Then we
define
the corresponding
neural network regression estimator as the minimizer of the so--called
empirical $L_2$-risk over the function space ${\cal F}(L_n,r_n)$, i.e.,
we define our estimator by
\begin{align}
\label{least}
m_n(\cdot)
=
\arg \min_{f \in {\cal F}(L_n,r_n)}
\frac{1}{n} \sum_{i=1}^n |f(\bold{X}_i)-Y_i|^2.
\end{align}
For simplicity we assume here and in the sequel that the minimum above indeed exists. When this is not the case, our theoretical results also hold for any estimate which minimizes the above empirical $L_2$-risk up to a small additional term.
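The least squares principle \eqref{least} can be sketched in a few lines. The following hypothetical example fits a one-hidden-layer ReLU network by minimizing the empirical $L_2$-risk with a generic optimizer; the data, the width, and the optimizer are illustrative (in practice one uses gradient-based training as in \autoref{lst:e1}):

```python
import numpy as np
from scipy.optimize import minimize

# illustrative data from a univariate regression problem
rng = np.random.default_rng(1)
n, d, r = 50, 1, 3                    # sample size, input dimension, width
X = rng.uniform(-1.0, 1.0, size=(n, d))
Y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=n)

def unpack(theta):
    W = theta[: r * d].reshape(r, d)
    b = theta[r * d : r * d + r]
    c = theta[r * d + r : r * d + 2 * r]
    return W, b, c, theta[-1]

def f(theta, X):
    # one-hidden-layer ReLU network, i.e. a function of the class F(1, r)
    W, b, c, c0 = unpack(theta)
    return np.maximum(X @ W.T + b, 0.0) @ c + c0

def empirical_l2_risk(theta):
    # (1/n) * sum_i |f(X_i) - Y_i|^2, the criterion minimized in (3)
    return np.mean((f(theta, X) - Y) ** 2)

theta0 = rng.normal(size=r * d + 2 * r + 1)
res = minimize(empirical_l2_risk, theta0, method="Powell")
```

The optimizer returns a (local) minimizer of the empirical $L_2$-risk; in general the global minimum in \eqref{least} is not guaranteed to be found, which is exactly the situation addressed by the remark above.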
\subsection{Curse of dimensionality}
In order to judge the quality of such estimates theoretically, usually
the rate of convergence of the $L_2$-error is considered.
It is well-known, that smoothness assumptions on the
regression function are necessary in order to derive non-trivial
results on the rate of convergence
(see, e.g., Theorem 7.2 and Problem 7.2 in
\cite{DGL96} and
Section 3 in \cite{DW80}).
For that purpose, we introduce the following definition of $(p,C)$-smoothness.
\begin{definition}
\label{intde2}
Let $p=q+s$ for some $q \in \mathbb{N}_0$ and $0< s \leq 1$.
A function $m:\mathbb{R}^d \rightarrow \mathbb{R}$ is called
$(p,C)$-smooth, if for every $\bm{\alpha}=(\alpha_1, \dots, \alpha_d) \in
\mathbb{N}_0^d$
with $\sum_{j=1}^d \alpha_j = q$ the partial derivative
$\partial^q m/(\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
)$
exists and satisfies
\[
\left|
\frac{
\partial^q m
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}
(\bold{x})
-
\frac{
\partial^q m
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}
(\bold{z})
\right|
\leq
C
\cdot
\|\bold{x}-\bold{z}\|^s
\]
for all $\bold{x},\bold{z} \in \mathbb{R}^d$, where $\Vert\cdot\Vert$ denotes the Euclidean norm.
\end{definition}
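A univariate example may help to see the content of this definition. For $d=1$ the function $f(x)=|x|^{3/2}$ is $(p,C)$-smooth with $p=q+s$, $q=1$ and $s=1/2$, since its derivative $f'(x)=1.5\cdot\mathrm{sign}(x)\cdot|x|^{1/2}$ is H\"older continuous with exponent $1/2$ (on $[-1,1]$ one can take $C=1.5\cdot\sqrt{2}$). The following illustrative sketch checks this numerically:

```python
import numpy as np

# f(x) = |x|^{3/2} has derivative f'(x) = 1.5 * sign(x) * |x|^{1/2};
# Definition 1 requires |f'(x) - f'(z)| <= C * |x - z|^{1/2}.
def f_prime(x):
    return 1.5 * np.sign(x) * np.abs(x) ** 0.5

rng = np.random.default_rng(2)
x, z = rng.uniform(-1.0, 1.0, size=(2, 10_000))
mask = np.abs(x - z) > 1e-12          # avoid division by (numerically) zero
ratios = (np.abs(f_prime(x[mask]) - f_prime(z[mask]))
          / np.abs(x[mask] - z[mask]) ** 0.5)
C_hat = ratios.max()                  # empirical Hölder constant on [-1, 1]
```

The observed constant stays below the theoretical bound $1.5\cdot\sqrt{2}$.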
\cite{Sto82} showed that the optimal minimax rate of convergence in nonparametric
regression for $(p,C)$-smooth functions is $n^{-2p/(2p+d)}$. This rate
suffers from a characteristic drawback in case of high-dimensional functions: if $d$ is relatively large compared to $p$, then this rate of convergence can be extremely slow (the so--called curse of dimensionality).
As was shown in \cite{Sto85, Sto94} it is possible to circumvent
this curse of dimensionality by imposing structural assumptions
like additivity on the regression function. This is also used, e.g., in
so-called single index models, in which
\[
m(\bold{x}) = g(\bold{a}^{\top} \bold{x}), \quad \bold{x} \in \mathbb{R}^d
\]
is assumed to hold, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a univariate
function and $\bold{a} \in \mathbb{R}^d$ is a $d$-dimensional vector
(see, e.g., \cite{Ha93, HaSt89,KoXi07,YYR02}).
Related to this is the so-called projection pursuit, where the regression function
is assumed to be a sum of functions of the above form, i.e.,
\[
m(\bold{x}) = \sum_{k=1}^K g_k(\bold{a}_k^{\top} \bold{x}), \quad \bold{x} \in \mathbb{R}^d
\]
for $K \in \mathbb{N}$, $g_k: \mathbb{R} \rightarrow \mathbb{R}$ and $\bold{a}_k \in \mathbb{R}^d$ (see, e.g., \cite{FrSt81}). If we assume that the univariate functions in these postulated structures are
$(p,C)$-smooth, adequately chosen regression estimates can achieve the above univariate rates of convergence up to some logarithmic factor
(cf., e.g., Chapter 22 in \cite{GKKW02}).
\cite{HM07} studied the case of a regression function, which satisfies
\[
m(\bold{x})=g\left(\sum_{l_1=1}^{L_1}g_{l_1} \left(\sum_{l_2=1}^{L_2}g_{l_1, l_2}\left( \ldots \sum_{l_r=1}^{L_r}g_{l_1,\ldots, l_r}(\bold{x}^{l_1,\ldots, l_r}) \right)\right)\right),
\]
where $g, g_{l_1}, \ldots, g_{l_1,\ldots, l_r}: \mathbb{R} \rightarrow \mathbb{R}$
are
$(p,C)$-smooth univariate functions and $\bold{x}^{l_1,\ldots,l_r}$ are single components of $\bold{x}\in\mathbb{R}^d$ (not necessarily different for two different indices $(l_1,\ldots,l_r)$).
With the use of a penalized least squares estimate, they proved
that in this setting the rate $n^{-2p/(2p+1)}$ can be achieved.
The rate of convergence
of neural network regression estimates
has been analyzed by
\cite{Bar91,Bar93, Bar94, BK17, KoKr05,KoKr17, McCaGa94,Sch17, Suz19,OhnKim19,FM18,OoSu19}.
For the $L_2$-error of a
single hidden layer neural network, \cite{Bar94} proves a dimensionless rate of $n^{-1/2}$
(up to some logarithmic factor), provided the Fourier transform has a finite first
moment (which basically
requires that the function becomes smoother with increasing
dimension $d$ of $X$).
\cite{McCaGa94} showed a rate of $n^{(-2p/(2p+d+5))+\varepsilon}$ for the $L_2$-error of suitably defined single hidden layer neural network estimators for $(p,C)$-smooth functions, but their study was restricted to the use of a certain cosine squasher as an activation function.
The rate of convergence of
neural network regression
estimates based on two layer neural networks has been analyzed in
\cite{KoKr05}. Therein, interaction models were studied,
where the regression function satisfies
\[
m(\bold{x})
=
\sum_{I \subseteq \{1, \dots, d\}, |I|=d^*}
m_I(\bold{x}_I), \qquad \bold{x}=(x^{(1)}, \dots, x^{(d)})^{\top} \in \mathbb{R}^d
\]
for some $d^* \in \{1, \dots, d\}$ and $m_I:\mathbb{R}^{d^*} \rightarrow \mathbb{R}$
$(I \subseteq \{1, \dots, d\}, |I| \leq d^*)$, where
\[
\bold{x}_{\{i_1,\ldots,i_{d^*}\}}=
(x^{(i_1)}, \dots, x^{(i_{d^*})})
\quad
\mbox{for }
1 \leq i_1 < \ldots < i_{d^*} \leq d,
\]
and
in case that all $m_I$ are $(p,C)$-smooth for some $p \leq 1$
it was shown that suitable neural network regression estimators achieve a rate of convergence of $n^{-2p/(2p+d^*)}$
(up to some logarithmic factor),
which is again a convergence rate independent of $d$.
In \cite{KoKr17}, this result was extended
to so--called $(p,C)$-smooth generalized hierarchical interaction models of
order $d^*$, which are defined as follows:
\begin{definition}
\label{deold}
Let $d \in \mathbb{N}$, $d^* \in \{1, \dots, d\}$ and $m:\mathbb{R}^d \rightarrow \mathbb{R}$.
\noindent
\textbf{a)}
We say that $m$ satisfies a generalized hierarchical interaction model
of order $d^*$ and level $0$, if there exist $\bold{a}_1, \dots, \bold{a}_{d^*} \in
\mathbb{R}^{d}$
and
$f:\mathbb{R}^{d^*} \rightarrow \mathbb{R}$
such that
\[
m(\bold{x}) = f(\bold{a}_1^{\top} \bold{x}, \dots, \bold{a}_{d^*}^{\top} \bold{x})
\quad \mbox{for all } \bold{x} \in \mathbb{R}^d.
\]
\noindent
\textbf{b)}
We say that $m$ satisfies a generalized hierarchical interaction model
of order $d^*$ and level $l+1$, if there exist $K \in \mathbb{N}$,
$g_k: \mathbb{R}^{d^*} \rightarrow \mathbb{R}$ $(k \in \{1, \dots, K\})$
and
$f_{1,k}, \dots, f_{d^*,k} :\mathbb{R}^{d} \rightarrow \mathbb{R}$ $(k \in \{1, \dots, K\})$
such that $f_{1,k}, \dots, f_{d^*,k}$
$(k \in \{1, \dots, K\})$
satisfy a generalized
hierarchical interaction model
of order $d^*$ and level $l$
and
\[
m(\bold{x}) = \sum_{k=1}^K g_k \left(
f_{1,k}(\bold{x}), \dots, f_{d^*,k}(\bold{x})
\right)
\quad \mbox{for all } \bold{x} \in \mathbb{R}^d.
\]
\noindent
\textbf{c)}
We say that the generalized hierarchical interaction model defined above
is $(p,C)$-smooth, if all functions $f$ and $g_k$ occurring in
its definition are $(p,C)$--smooth according to \autoref{intde2}.
\end{definition}
It was shown that for such models
least squares estimators based on
suitably defined multilayer
neural networks (in which the number of hidden layers depends
on the level of the generalized interaction model) achieve the rate of convergence $n^{-2p/(2p+d^*)}$
(up to some logarithmic factor) in case
$p \leq 1$.
\cite{BK17} showed that this result even holds
for $p>1$ provided the squashing function is suitably
chosen. Similar rate of convergence
results as in \cite{BK17}
have been shown in
\cite{Sch17}
for neural network regression estimates using
the ReLU activation function. Here slightly more general function spaces, which
fulfill some composition assumption, were studied.
Related results have been shown in
\cite{Suz19}
in case of Besov spaces as a model for the smoothness
of the regression function and in
\cite{OhnKim19} in case of non-ReLU activation
functions.
\cite{FM18} derived
results concerning estimation by neural networks of piecewise
polynomial regression
functions with partitions having rather general smooth boundaries.
In \cite{OoSu19} the rate of convergence of ResNet-type convolutional
neural networks has been analyzed. Here the convolutional neural
networks correspond to fully connected deep neural networks
with constant width and depth converging to infinity for sample size
tending to infinity. The class of neural networks uses the ReLU
activation function, very small bounds on the absolute value of the weights in the hidden layers,
and a large bound on the absolute value of the weights in the
output layer. In case of a $(p,C)$--smooth regression function,
the rate of convergence $n^{-2p/(2p+d)}$ (up to a logarithmic factor)
is shown.
The main results in \cite{BK17} and \cite{Sch17} are
new approximation results for neural networks. Here \cite{Sch17}
bounds the supremum norm error of the approximation of smooth
functions on a cube, while the corresponding approximation
bound in \cite{BK17} holds only on a subset of the
cube of measure close to one, which is sufficient in order
to bound the approximation error of the neural network in $L_2$.
In both papers a further restriction of the network
architecture, in the form of a sparsity constraint, is needed to show their theoretical
results. Thus the topology of the
neural network is difficult to realize in an implementation of the
corresponding least squares estimate.
In particular, in \cite{Sch17} the topology of the neural network
was not completely specified: it was described how many weights
are nonzero, but not which of the weights are nonzero.
\subsection{Main results in this article}
The above results lead to the conjecture that network sparsity
is necessary in order to be able to
derive good rates of convergence of neural network
regression estimates.
Our main result in this article is
that this is not the case.
To show this, we derive similar rate of convergence
results as in \cite{BK17} and in \cite{Sch17}
for least squares estimators based on simple fully connected feedforward neural networks.
In these networks
either the number of neurons per hidden layer is fixed and the number of hidden
layers tends to infinity suitably fast for sample size tending to infinity,
or the number of hidden layers is bounded by some logarithmic factor
in the sample size and the number of neurons per hidden layer
tends to infinity suitably fast for sample size tending to infinity.
In the first case the networks will be much deeper than the class
of networks considered for the least squares estimates in
\cite{BK17} and \cite{Sch17}, where the number of hidden layers
is either bounded by a constant or by some logarithmic factor
in the sample size.
From an approximation theoretical point of view we derive two new error bounds
for the approximation of $(p,C)$--smooth functions by
(very wide or very deep) neural networks using the ReLU activation function,
which are essential to show our convergence result. In particular, we generalize the approximation result from \cite{Y18} from H\"older--smooth to $(p,C)$--smooth functions. Compared to previous works based on sparse neural network estimates, our result does not focus on the number of non--zero parameters but on the overall number of parameters in the network. In particular, we show that in case of networks with constant width and $W$ weights we can achieve an approximation error of size $W^{-2p/d}$ instead of $W^{-p/d}$ as stated in \cite{BK17} and \cite{Sch17}. By bounding the number of parameters in this sense, the topology of our neural networks is much simpler with regard to an implementation of the corresponding least squares estimate. For instance, as shown in \autoref{lst:e1}, using Python's packages \texttt{tensorflow} and \texttt{keras} enables an easy and fast implementation. Although sparsely connected networks are often preferred in practical applications, there are some open questions about an efficient implementation of these networks. So-called pruning methods, for instance, start with large strongly connected neural networks and delete redundant parameters during the training process. The main drawback is that, due to the large initial size of the networks, the computational costs of the method are high. That is why the implementation of sparsely connected networks is critically questioned (see, e.g., \cite{U19, Liu18}).
With regard to our convergence result we analyze a slightly more general function space, which includes
all the other types of structures of $m$ mentioned earlier.
\\
Independently of us, \cite{YZ19} published a similar result for the approximation of smooth functions by simple fully connected deep neural networks. For a network with width $2d+10$ and $W$ weights, they also showed an approximation rate of $W^{-2p/d}$. After the original version of our paper appeared, a related arXiv article was uploaded by \cite{LS20}. Therein our approximation result, where either width or depth is varied, was generalized to ReLU networks where both width and depth are varied simultaneously.
\lstinputlisting[language=Python, caption={Python code for fitting of fully connected neural networks to data $x_{learn}$ and $y_{learn}$}, label={lst:e1}]{pytest.py}
\subsection{Notation}
Throughout the paper, the following notation is used:
The sets of natural numbers and real numbers
are denoted by $\mathbb{N}$ and $\mathbb{R}$, respectively. Furthermore, we set $\mathbb{N}_0=\mathbb{N} \cup \{0\}$. For $z \in \mathbb{R}$, we denote
the smallest integer greater than or equal to $z$ by
$\lceil z \rceil$ and the largest integer smaller than or equal to $z$ by
$\lfloor z \rfloor$. We set $z_+=\max\{z,0\}$.
Vectors are denoted by bold letters, e.g. $\bold{x} = (x^{(1)}, \dots, x^{(d)})^T$. We define $\bold{1}=(1, \dots, 1)^T$ and $\bold{0} = (0, \dots, 0)^T$. A $d$-dimensional multi-index is a $d$-dimensional vector $\bold{j} = (j^{(1)}, \dots, j^{(d)})^T \in \mathbb{N}_0^d$. As usual, we define $\|\bold{j}\|_1 = j^{(1)}+\dots+j^{(d)}$, $\bold{j}! = j^{(1)}! \cdots j^{(d)}!$,
\[
\bold{x}^{\bold{j}} = (x^{(1)})^{j^{(1)}}\cdots (x^{(d)})^{j^{(d)}} \ \mbox{and} \ \partial^{\bold{j}} = \frac{\partial^{j^{(1)}}}{\partial (x^{(1)})^{j^{(1)}}} \cdots \frac{\partial^{j^{(d)}}}{\partial (x^{(d)})^{j^{(d)}}}.
\]
Let $D \subseteq \mathbb{R}^d$ and let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be a real-valued
function defined on $\mathbb{R}^d$.
We write $\bold{x} = \arg \min_{\bold{z} \in D} f(\bold{z})$ if
$\min_{\bold{z} \in D} f(\bold{z})$ exists and if
$\bold{x}$ satisfies
$\bold{x} \in D$ and $f(\bold{x}) = \min_{\bold{z} \in D} f(\bold{z})$.
The Euclidean and the supremum norms of $\bold{x} \in \mathbb{R}^d$
are denoted by $\|\bold{x}\|$ and $\|\bold{x}\|_\infty$, respectively.
For $f:\mathbb{R}^d \rightarrow \mathbb{R}$
\[
\|f\|_\infty = \sup_{\bold{x} \in \mathbb{R}^d} |f(\bold{x})|
\]
is its supremum norm, and the supremum norm of $f$
on a set $A \subseteq \mathbb{R}^d$ is denoted by
\[
\|f\|_{\infty,A} = \sup_{\bold{x} \in A} |f(\bold{x})|.
\]
Furthermore we define the norm $\| \cdot \|_{C^q(A)}$ of the smooth function space $C^q(A)$ by
\begin{align*}
\|f\|_{C^q(A)} :=\max\left\{\|\partial^{\mathbf{j}}f\|_{\infty, A}: \|\mathbf{j}\|_1 \leq q, \mathbf{j} \in \mathbb{N}_0^d\right\}
\end{align*}
for any $f \in C^q(A)$.
Let $\bold{z}_1, \dots, \bold{z}_n \in \mathbb{R}^d$, set
$\bold{z}_1^n := (\bold{z}_1, \dots, \bold{z}_n)$,
let $\mathcal{F}$ be a set of functions $f: \mathbb{R}^d \to \mathbb{R}$ and let $\epsilon > 0$.
We denote by $\mathcal{N}_1(\epsilon, \mathcal{F}, \bold{z}_1^n)$ the $\epsilon-\Vert \cdot \Vert_{1}$-covering number on $\bold{z}_1^n$, i.e. the minimal number $N \in \mathbb{N}$ such that there exist functions $f_1, \dots, f_N: \mathbb{R}^d \to \mathbb{R}$ with the property that for every $f \in \mathcal{F}$ there is a $j=j(f) \in \{1, \dots, N\}$ such that
\begin{align*}
\frac{1}{n} \sum_{i=1}^n |f(\bold{z}_i) - f_j(\bold{z}_i)| < \epsilon.
\end{align*}
We define the truncation operator $T_{\beta}$ with level $\beta > 0$ as
\begin{equation*}
T_{\beta}u =
\begin{cases}
u \quad &\text{if} \quad |u| \leq \beta\\
\beta \cdot {\rm sign}(u) \quad &\text{otherwise}.
\end{cases}
\end{equation*}
Furthermore, for $f:\mathbb{R}^d \rightarrow \mathbb{R}$ we define $T_\beta f:\mathbb{R}^d \rightarrow
\mathbb{R}$ by $(T_\beta f)(\bold{x})=T_\beta (f(\bold{x}))$. And if ${\cal F}$ is a set
of functions $f:\mathbb{R}^d \rightarrow \mathbb{R}$ we set
\[
T_\beta {\cal F} = \{ T_\beta f \, : \, f \in {\cal F} \}.
\]
\subsection{Outline}
The main result is presented in Section \ref{se2}. Our new results
concerning the approximation of $(p,C)$--smooth functions by deep neural networks are described in Section \ref{se3}. Section \ref{se4} deals with a result concerning the approximation of hierarchical composition models (see Definition \ref{de2} below) by neural networks. Section \ref{se5} contains the proof of the main result.
\section{Main result}
\label{se2}
As already mentioned above, the only possible way to avoid the so--called curse of dimensionality is to restrict
the underlying function class. We therefore consider functions which fulfill the following definition:
\begin{definition}
\label{de2}
Let $d \in \mathbb{N}$, let $m: \mathbb{R}^d \to \mathbb{R}$ and let
$\P$ be a subset
of $(0,\infty) \times \mathbb{N}$.
\noindent
\textbf{a)}
We say that $m$ satisfies a hierarchical composition model of level $0$
with order and smoothness constraint $\mathcal{P}$, if there exists a $K \in \{1, \dots, d\}$ such that
\[
m(\bold{x}) = x^{(K)} \quad \mbox{for all } \bold{x} = (x^{(1)}, \dots, x^{(d)})^{\top} \in \mathbb{R}^d.
\]
\noindent
\textbf{b)}
We say that $m$ satisfies a hierarchical composition model
of level $l+1$ with order and smoothness constraint $\mathcal{P}$, if there exist $(p,K) \in \P$, $C>0$, \linebreak $g: \mathbb{R}^{K} \to \mathbb{R}$ and $f_{1}, \dots, f_{K}: \mathbb{R}^d \to \mathbb{R}$, such that
$g$ is $(p,C)$--smooth,
$f_{1}, \dots, f_{K}$ satisfy a hierarchical composition model of level $l$
with order and smoothness constraint $\mathcal{P}$
and
\[m(\bold{x})=g(f_{1}(\bold{x}), \dots, f_{K}(\bold{x})) \quad \mbox{for all } \bold{x} \in \mathbb{R}^d.\]
\end{definition}
For $l=1$ and some order and smoothness constraint $\mathcal{P} \subseteq (0,\infty) \times \mathbb{N}$ our space of hierarchical composition models becomes
\begin{align*}
\mathcal{H}(1, \mathcal{P}) = \{&h: \mathbb{R}^{d} \to \mathbb{R}: h(\bold{x}) = g(x^{(\pi(1))}, \dots, x^{(\pi(K))}), \text{where} \notag \\
& g:\mathbb{R}^{K} \to \mathbb{R} \ \text{is} \ (p, C) \ \text{--smooth} \ \text{for some} \ (p, K) \in \mathcal{P} \notag \\
& \text{and} \ \pi: \{1, \dots, K\} \to \{1, \dots, d\}\}.
\end{align*}
For $l > 1$, we recursively define
\begin{align*}
\mathcal{H}(l, \mathcal{P}) := \{&h: \mathbb{R}^{d} \to \mathbb{R}: h(\bold{x}) = g(f_1(\bold{x}), \dots, f_{K}(\bold{x})), \text{where} \notag\\
& g:\mathbb{R}^{K} \to \mathbb{R} \ \text{is} \ (p, C) \text{--smooth} \ \text{for some} \
(p, K) \in \mathcal{P} \notag \\
& \text{and} \ f_i \in \mathcal{H}(l-1, \mathcal{P})\}.
\end{align*}
In practice it is conceivable that there exist input--output--relationships which can be described by a regression function contained in $\mathcal{H}(l,\mathcal{P})$. In particular, our assumption is motivated by applications in connection with complex technical systems, which are constructed in a modular form. Here each modular part can again be a complex system, which also explains the recursive construction in \autoref{de2}.
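To make \autoref{de2} concrete, the following hypothetical example (all function names are illustrative) constructs a function in $\mathcal{H}(2,\mathcal{P})$ on $\mathbb{R}^4$: the inner functions depend on at most two components of $\bold{x}$, and the last component is not used at all.

```python
import numpy as np

# A function satisfying a hierarchical composition model of level 2:
# f1, f2 are level-1 models (smooth functions of selected components of x),
# g is a smooth outer function, and m(x) = g(f1(x), f2(x)).
def f1(x):                      # level 1: g1(u, v) = sin(u * v)
    return np.sin(x[0] * x[1])

def f2(x):                      # level 1: g2(u) = exp(-u^2)
    return np.exp(-x[2] ** 2)

def g(u, v):                    # smooth outer function
    return u * v + v ** 2

def m(x):                       # level 2 composition
    return g(f1(x), f2(x))

x = np.array([0.5, -1.0, 0.2, 3.0])   # the fourth coordinate is ignored
value = m(x)
```

Although $m$ is defined on $\mathbb{R}^4$, its effective complexity is governed by the input dimensions of $g$, $g_1$, $g_2$, which is exactly what the order constraint in $\mathcal{P}$ captures.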
It is shown in \cite{BK17} and in \cite{Sch17}
that the function classes used therein generalize all other models
mentioned in our article.
As the function class of \cite{BK17} (see \autoref{deold}) forms some special case of $\mathcal{H}(l,\mathcal{P})$ in form of an alternation between summation and composition, this is also true for our more general model. Compared to the function class studied in \cite{Sch17}, our definition forms a slight generalization, since we allow different smoothness and order constraints within the same level in the composition. In particular, also the additional examples
mentioned in \cite{Sch17} are contained in our function class.
Our main result is the following theorem.
\begin{theorem}
\label{th1}
Let $(\bold{X}, Y), (\bold{X}_1, Y_1), \dots, (\bold{X}_n, Y_n)$ be independent and identically distributed
random variables such that $\mathrm{supp}(\bold{X})$ is bounded and
\begin{equation*}
\mathbf{E}\left\{ \exp(c_1 \cdot Y^2) \right\} < \infty
\end{equation*}
for some constant $c_1 > 0$. Let the corresponding regression function $m$ be contained in the class $\mathcal{H}(l, \mathcal{P})$ for some $l \in \mathbb{N}$ and $\mathcal{P} \subseteq [1,\infty) \times \mathbb{N}$. Each function $g$ in the definition of $m$ can be of different smoothness $p_g=q_g+s_g$ ($q_g \in \mathbb{N}_0$ and $s_g \in (0,1]$) and of different input dimension $K_g$, where $(p_g,K_g) \in \mathcal{P}$. Denote by $K_{\max}$ the maximal input dimension and by $p_{\max}$ the maximal smoothness of one of the functions $g$. Assume that for each $g$ all partial derivatives of order less than or equal to $q_g$ are bounded, i.e.,
\begin{equation*}
\Vert g\Vert_{C^{q_g}(\mathbb{R}^d)} \leq c_{2}
\end{equation*}
for some constant $c_2 >0$ and that $p_{\max}, K_{\max} < \infty$. Let each function $g$ be Lipschitz continuous with Lipschitz constant $C_{Lip} \geq 1$.
Let $\tilde{m}_n$ be defined as in \eqref{least}
for some $L_n, r_n \in \mathbb{N}$,
and define $m_n = T_{c_3 \cdot \log(n)} \tilde{m}_n$ for some $c_3 >0$ sufficiently large.
\noindent
{\bf a)} Choose $c_{4}, c_{5} >0$ sufficiently large and set
\[
L_n = \left\lceil
c_{4} \cdot \log n
\right\rceil
\quad \mbox{and} \quad
r_n =
\left\lceil
c_{5} \cdot
\max_{(p,K) \in \P} n^{\frac{K}{2(2p+K)}}
\right\rceil.
\]
Then
\begin{equation*}
{\bf E} \int |m_n(\bold{x}) - m(\bold{x})|^2 {{\bf P}}_{\bold{X}}(d\bold{x}) \leq c_6 \cdot (\log(n))^6 \cdot \max_{(p,K) \in \mathcal{P}} n^{-\frac{2p}{2p+K}}
\end{equation*}
holds for sufficiently large $n$.
\noindent
{\bf b)} Choose $c_{7}, c_{8} >0$ sufficiently large and set
\[
L_n = \left\lceil
c_{7} \cdot \max_{(p,K) \in \P} n^{\frac{K}{2(2p+K)}}
\cdot \log n
\right\rceil
\quad \mbox{and} \quad
r_n = r=
\left\lceil
c_{8}
\right\rceil.
\]
Then
\begin{equation*}
{\bf E} \int |m_n(\bold{x}) - m(\bold{x})|^2 {{\bf P}}_{\bold{X}}(d\bold{x}) \leq c_9 \cdot (\log(n))^6 \cdot \max_{(p,K) \in \mathcal{P}} n^{-\frac{2p}{2p+K}}
\end{equation*}
holds for sufficiently large $n$.
\end{theorem}
\begin{remark}
\autoref{th1} shows that in case that the regression function satisfies a hierarchical composition model with smoothness and order constraint $\mathcal{P}$, the $L_2$-errors of least squares neural network regression estimates based on a set of fully connected neural networks achieve the rate of convergence $\max_{(p,K) \in \mathcal{P}} n^{-2p/(2p+K)}$ (up to some logarithmic factor), which does not depend on $d$ and therefore circumvents the so-called \textit{curse of dimensionality}.
\end{remark}
\begin{remark}
Due to the fact that some parameters in the definition of the estimator in \autoref{th1} are usually unknown in practice, they have to be chosen in a data--dependent way. Out of a set of different numbers of hidden layers and neurons per layer, the best estimator is then chosen adaptively. One simple possibility
to do this is to use the so--called {\it splitting of the sample} method,
cf., e.g., Section 2.4 and Chapter 7 in \cite{GKKW02}.
Here the sample is split into a learning sample of size $n_l$ and a
testing sample of size $n_t$, where $n_l+n_t=n$ (e.g.,
$n_l \approx n/2 \approx n_t$),
the estimator is computed for several different selections of width and depth using only the learning sample, the empirical $L_2$-risks
of these estimators are then computed on the testing sample, and finally
the parameter value is chosen for which the empirical $L_2$-risk
on the testing sample is minimal.
\end{remark}
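The splitting-of-the-sample procedure described above can be sketched as follows. The example is hypothetical: the data, the candidate widths, and the simple one-hidden-layer fits (via a generic optimizer rather than the training of \autoref{lst:e1}) are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 80
X = rng.uniform(-1.0, 1.0, size=n)
Y = X ** 2 + 0.05 * rng.normal(size=n)

# splitting of the sample: first half for learning, second half for testing
Xl, Yl = X[: n // 2], Y[: n // 2]
Xt, Yt = X[n // 2 :], Y[n // 2 :]

def fit_and_test(r):
    """Fit a one-hidden-layer ReLU network with r neurons on the learning
    sample and return its empirical L2-risk on the testing sample."""
    def net(theta, x):
        W, b = theta[:r], theta[r : 2 * r]
        c, c0 = theta[2 * r : 3 * r], theta[-1]
        return np.maximum(np.outer(x, W) + b, 0.0) @ c + c0

    risk = lambda th: np.mean((net(th, Xl) - Yl) ** 2)
    res = minimize(risk, rng.normal(size=3 * r + 1), method="Powell")
    return np.mean((net(res.x, Xt) - Yt) ** 2)

candidates = [1, 2, 4]                      # candidate numbers of neurons
test_risks = [fit_and_test(r) for r in candidates]
r_best = candidates[int(np.argmin(test_risks))]
```

The parameter value with minimal empirical $L_2$-risk on the testing sample is selected, exactly as in the remark above.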
\section{Approximation of smooth functions by
fully connected deep neural networks with ReLU activation function}
\label{se3}
The aim of this section is to present a new result concerning the approximation of $(p,C)$-smooth functions by deep neural networks.
\begin{theorem}
\label{th2}
Let $d \in \mathbb{N}$,
let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be $(p,C)$--smooth for some $p=q+s$,
$q \in \mathbb{N}_0$ and $s \in (0,1]$, and $C>0$. Let $a \geq 1$
and $M \in \mathbb{N}$ sufficiently large (independent of the size of $a$ but
\begin{align*}
M \geq 2 \ \mbox{and} \ M^{2p} \geq c_{10} \cdot \left(\max\left\{a, \|f\|_{C^q([-a,a]^d)}
\right\}\right)^{4(q+1)}
\end{align*}
must hold for some sufficiently large constant $c_{10} \geq 1$).
\\
a) Let $L, r \in \mathbb{N}$ such that
\begin{enumerate}
\item $L \geq 5+\lceil \log_4(M^{2p})\rceil \cdot \left(\lceil \log_2(\max\{q, d\} + 1)\rceil+1\right)$
\item $r \geq 2^d \cdot 64 \cdot \binom{d+q}{d} \cdot d^2 \cdot (q+1) \cdot M^d$
\end{enumerate}
hold.
There exists a neural network
\begin{align*}
f_{net, wide} \in \mathcal{F}(L,r)
\end{align*}
with the property that
\begin{align}
\| f-f_{net, wide}\|_{\infty, [-a,a]^d} \leq
c_{11} \cdot \left(\max\left\{a, \|f\|_{C^q([-a,a]^d)}\right\} \right)^{4(q+1)} \cdot M^{-2p}.
\label{th2eq1}
\end{align}
b) Let $L, r \in \mathbb{N}$ such that
\begin{enumerate}
\item $L \geq 5M^d+\left\lceil \log_4\left(M^{2p+4 \cdot d \cdot (q+1)} \cdot e^{4 \cdot (q+1) \cdot (M^d-1)}\right)\right\rceil\\
\cdot \lceil \log_2(\max\{q,d\}+1)\rceil+\lceil \log_4(M^{2p})\rceil$
\item $r \geq 132 \cdot 2^d\cdot \lceil e^d\rceil
\cdot \binom{d+q}{d} \cdot \max\{ q+1, d^2\}$
\end{enumerate}
hold. There exists a neural network
\begin{align*}
f_{net, deep} \in \mathcal{F}(L,r)
\end{align*}
such that (\ref{th2eq1}) holds with
$f_{net,wide}$ replaced by $f_{net,deep}$.
\end{theorem}
\begin{remark}
The above result focuses on the convergence rate and no
attempt has been made to minimize the constants in the
definition of $L$ and $r$.
\end{remark}
The following corollary restates \autoref{th2} b) in terms of the
number of overall parameters needed to achieve a given approximation accuracy.
\begin{corollary}
\label{c1}
Let $d \in \mathbb{N}$,
let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be $(p,C)$--smooth for some $p=q+s$,
$q \in \mathbb{N}_0$ and $s \in (0,1]$, and $C>0$. Let $a \geq 1$ and $\epsilon > 0$.
Then there exists a fully connected neural network $f_{net}$ with $c_{12} \cdot \epsilon^{-d/(2p)}$ parameters, such that
\begin{align*}
\|f-f_{net}\|_{\infty, [-a,a]^d} \leq \epsilon.
\end{align*}
\end{corollary}
\begin{proof}
The number of overall weights $W$ in a neural network with $L$ hidden layers and $r$ neurons per layer can be computed by
\begin{align*}
W=(d+1) \cdot r + (L-1) \cdot (r+1) \cdot r+(r+1).
\end{align*}
Using \autoref{th2} b), where we choose $M=\lceil c_{13} \cdot \epsilon^{-1/(2p)} \rceil$ for some constant $c_{13} >0$, implies the assertion.
\end{proof}
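\begin{remark}
The parameter count in the above proof can be sketched as follows (the unnumbered constants $c, c', c'' > 0$ below are not optimized and may depend on $d$, $p$, $a$ and $\|f\|_{C^q([-a,a]^d)}$). By \autoref{th2} b) the width $r$ can be chosen independently of $M$, while the dominating term in the lower bound on $L$ is $\lceil \log_4 ( e^{4 (q+1) (M^d-1)} ) \rceil \leq c \cdot M^d$, so the depth satisfies $L \leq c' \cdot M^d$. Hence
\begin{align*}
W=(d+1) \cdot r + (L-1) \cdot (r+1) \cdot r+(r+1) \leq c'' \cdot M^d \leq c_{12} \cdot \epsilon^{-d/(2p)}
\end{align*}
for the choice $M=\lceil c_{13} \cdot \epsilon^{-1/(2p)} \rceil$, while the right-hand side of \eqref{th2eq1} is at most $\epsilon$ once $c_{13}$ is chosen sufficiently large.
\end{remark}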
\begin{remark}
Compared with \cite{Sch17} and \cite{BK17}, where the total number of parameters is $c_{14} \cdot \epsilon^{-d/p}$ for some constant $c_{14} > 0$ in case of an approximation error of $\epsilon$, Corollary \ref{c1} gives a quadratic improvement.
\end{remark}
\begin{proof}[Sketch of the proof of \autoref{th2}]
The basic idea is to construct deep neural networks which approximate
a piecewise Taylor polynomial with respect to a partition of $[-a,a]^d$
into $M^{2d}$ equivolume cubes. Our approximation starts on a coarse
grid with $M^d$ equivolume cubes and calculates the position of the cube
$C$ with $\bold{x} \in C$. This cube is then sub-partitioned into $M^d$ smaller
cubes to finally compute the values of our Taylor polynomial on the finer grid
with $M^{2d}$ cubes. Parts a) and b) use different approaches to achieve this.\\
In part a) we exploit the fact that a network with $c_{15} \cdot M^d$ neurons per layer
has $c_{16} \cdot M^{2d}$ connections between two consecutive layers.
Then each of the $c_{16} \cdot M^{2d}$ weights in our network is matched
to one of the $c_{16} \cdot M^{2d}$ possible values of the derivatives of $f$. To detect the right
values of the derivatives for our Taylor polynomial we proceed in two steps: In the first two hidden layers
our network approximates the indicator function for every cube on the coarse grid. The output layer of those
networks is then multiplied by the derivatives of $f$ on the respective cube. Those values serve as the input of the $c_{17} \cdot M^d$ networks in the next two hidden layers, which approximate the indicator function multiplied by the values of the derivatives, respectively, on the $M^d$ smaller cubes of the sub-partition of $C$ with $\bold{x} \in C$. Using this two-step approximation we finally detect the right values of the derivatives on the $M^{2d}$ equivolume cubes. In the
remaining layers we compute the Taylor polynomial. \\
In part b) in the first $c_{18} \cdot M^d$ layers of the network the values of the derivatives
of $f$ necessary for the computation of
a piecewise Taylor polynomial of $f$ with respect to the partition on the
coarse grid are determined. Then additional $c_{19} \cdot M^d$
layers of the network are used to compute a piecewise Taylor polynomial
of $f$ on the sub-partition (into $M^d$ smaller cubes) of the cube $C$ with $\bold{x} \in C$ (where $C$
is again one of the cubes of the coarse grid).
Here
the values of the derivatives are computed successively by computing
them one after another by a Taylor approximation using the previously
computed values and suitably defined correction terms.
\end{proof}
\section{Approximation of hierarchical composition models by neural networks}
\label{se4}
In this section we use \autoref{th2} to prove a result concerning the approximation of hierarchical composition models with smoothness and order constraint $\mathcal{P} \subseteq [1, \infty) \times \mathbb{N}$ by deep neural networks. In order to formulate this result, we observe in a first step
that one has to compute different hierarchical composition models of some level $i$ $(i\in \{1, \dots, l-1\})$ in order to compute a function $h_1^{(l)} \in \mathcal{H}(l, \mathcal{P})$. Let $\tilde{N}_i$ denote the number of hierarchical composition models of level $i$ needed to compute $h_1^{(l)}$.
We denote in the following by
\begin{align}
\label{h}
h_j^{(i)}: \mathbb{R}^{d} \to \mathbb{R}
\end{align}
the $j$--th hierarchical composition model of some level $i$ ($j \in \{1, \ldots, \tilde{N}_i\}, i \in \{1, \ldots, l\}$), that applies a $(p_j^{(i)}, C)$--smooth function $g_j^{(i)}: \mathbb{R}^{K_j^{(i)}} \to \mathbb{R}$ with $p_j^{(i)} = q_j^{(i)} + s_j^{(i)}$, $q_j^{(i)} \in \mathbb{N}_0$ and $s_j^{(i)} \in (0,1]$, where $(p_j^{(i)}, K_j^{(i)}) \in \mathcal{P}$.
The computation of $h_1^{(l)}(\bold{x})$ can then be recursively described as follows:
\begin{equation}\label{hji}
h_j^{(i)}(\bold{x}) = g_{j}^{(i)}\left(h^{(i-1)}_{\sum_{t=1}^{j-1} K_t^{(i)}+1}(\bold{x}), \dots, h^{(i-1)}_{\sum_{t=1}^j K_t^{(i)}}(\bold{x}) \right)
\end{equation}
for $j \in \{1, \dots, \tilde{N}_i\}$ and $i \in \{2, \dots, l\}$
and
\begin{equation}\label{hj1}
h_j^{(1)}(\bold{x}) = g_j^{(1)}\left(x^{\left(\pi(\sum_{t=1}^{j-1} K_t^{(1)}+1)\right)}, \dots, x^{\left(\pi(\sum_{t=1}^{j} K_t^{(1)})\right)}\right)
\end{equation}
for some function $\pi: \{1, \dots, \tilde{N}_1\} \to \{1, \dots, d\}$.
Furthermore for \linebreak
$i \in \{1, \dots, l-1\}$
the recursion
\begin{align}
\label{N}
\tilde{N}_l = 1 \ \text{and} \ \tilde{N}_{i} = \sum_{j=1}^{\tilde{N}_{i+1}} K_j^{(i+1)}
\end{align}
holds.
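As an illustration of recursion \eqref{N}, the following short Python snippet computes the numbers $\tilde{N}_i$ for a hypothetical example matching the structure of \hyperref[h2]{Fig.~\ref*{h2}}: $l=2$, $K_1^{(2)}=3$, and level-one input dimensions $2, 3, 2$ (these concrete values are illustrative only).

```python
# Hypothetical example: l = 2 levels, g_1^{(2)} has K_1^{(2)} = 3 inputs,
# and the level-1 functions g_1^{(1)}, g_2^{(1)}, g_3^{(1)} have
# K = 2, 3, 2 inputs, respectively.
K = {2: [3], 1: [2, 3, 2]}          # K[i][j-1] corresponds to K_j^{(i)}

N_tilde = {2: 1}                    # tilde N_l = 1 by (N)
# tilde N_1 = sum_{j=1}^{tilde N_2} K_j^{(2)} = 3
N_tilde[1] = sum(K[2][: N_tilde[2]])

assert N_tilde[1] == 3
assert sum(K[1]) == 7               # seven inputs x^(pi(1)), ..., x^(pi(7))
```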
\begin{figure}
\centering
\small{
\begin{tikzpicture}[
level/.style={rectangle = 4pt, draw, text centered, anchor=north, text=black},
input/.style={rounded corners=7pt, draw, rounded corners=1mm, text centered, anchor=north, text=black},
level distance=1cm
]
\node (H1l) [level] {$g_1^{(2)}$}
[level distance = 0.5cm]
[sibling distance = 3cm]
[level distance = 1cm]
child{
node (H1l1) [level] {\scriptsize $g_1^{(1)}$}
[level distance = 0.5cm]
[sibling distance = 1.2cm, level distance = 1cm]
child{
node (K1) [level] {\scriptsize $x^{(\pi(1))}$}
}
child{
node (K2) [level] {\scriptsize $x^{(\pi(2))}$}
}
}
child{
node (H2l1) [level] {\scriptsize $g_2^{(1)}$}
[level distance = 0.5cm]
[sibling distance=1.2cm, level distance = 1cm]
child{
node (K3) [level] {\scriptsize $x^{(\pi(3))}$}
}
child{
node (K4) [level] {\scriptsize $x^{(\pi(4))}$}
}
child{
node (K5) [level] {\scriptsize $x^{(\pi(5))}$}
}
}
child{
node (HK) [level] {\scriptsize $g_{3}^{(1)}$}
[level distance = 0.5cm]
[sibling distance = 1.2cm, level distance = 1cm]
child{
node (H1l2) [level] {\scriptsize $x^{(\pi(6))}$}
}
child{
node (State04) [level] {\scriptsize $x^{(\pi(7))}$}
}
}
;
\end{tikzpicture}}
\caption{Illustration of a hierarchical composition model of the class $\mathcal{H}(2, \mathcal{P})$ with the structure $h_1^{(2)}(x) = g_1^{(2)}(h_1^{(1)}(x), h_2^{(1)}(x), h_3^{(1)}(x))$, $h_1^{(1)}(x) = g_1^{(1)}(x^{(\pi(1))}, x^{(\pi(2))})$, $h_2^{(1)}(x)=g_2^{(1)}(x^{(\pi(3))}, x^{(\pi(4))}, x^{(\pi(5))})$ and $h_3^{(1)}(x) = g_3^{(1)}(x^{(\pi(6))}, x^{(\pi(7))})$, defined as in \eqref{hji} and \eqref{hj1}.}
\label{h2}
\end{figure}
\noindent
The exemplary structure of a function $h_1^{(2)} \in \mathcal{H}(2, \mathcal{P})$ is illustrated in \hyperref[h2]{Fig.~\ref*{h2}}.
Here one can get an impression of how the hierarchical composition models of different levels are stacked on top of each other. The approximation result of such a function $h_1^{(l)}$ by a neural network is summarized in the following theorem:
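To make the recursive computation \eqref{hji} and \eqref{hj1} concrete, the following short Python snippet evaluates a function with the structure of \hyperref[h2]{Fig.~\ref*{h2}}; the functions $g_j^{(i)}$ below are hypothetical placeholders (not part of the model class), and $\pi$ is taken to be the identity.

```python
# Hypothetical smooth functions for the structure of Fig. 1 (illustrative only)
def g11(a, b): return a + b          # g_1^{(1)}, K = 2
def g21(a, b, c): return a * b + c   # g_2^{(1)}, K = 3
def g31(a, b): return a - b          # g_3^{(1)}, K = 2
def g12(u, v, w): return u * v * w   # g_1^{(2)}, K = 3

def h(x):  # x has 7 components; pi is the identity here
    h11 = g11(x[0], x[1])            # h_1^{(1)}(x)
    h21 = g21(x[2], x[3], x[4])      # h_2^{(1)}(x)
    h31 = g31(x[5], x[6])            # h_3^{(1)}(x)
    return g12(h11, h21, h31)        # h_1^{(2)}(x)

assert h([1, 2, 3, 4, 5, 6, 7]) == (1 + 2) * (3 * 4 + 5) * (6 - 7)  # = -51
```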
\begin{theorem}
\label{th3}
Let $m: \mathbb{R}^d \to \mathbb{R}$ be contained in the class $\mathcal{H}(l, \mathcal{P})$ for some $l \in \mathbb{N}$ and $\mathcal{P} \subseteq [1,\infty) \times \mathbb{N}$. Let $\tilde{N}_i$ be defined as in \eqref{N}. Each $m$ consists of different functions $h_j^{(i)}$ $(j \in \{1, \ldots, \tilde{N}_i\},$
$ i\in \{1, \dots, l\})$ defined as in \eqref{h}, \eqref{hji} and \eqref{hj1}.
Assume that the corresponding functions $g_j^{(i)}$ are Lipschitz continuous with Lipschitz constant $C_{Lip} \geq 1$ and satisfy
\begin{equation*}
\|g_j^{(i)}\|_{C^{q_j^{(i)}}(\mathbb{R}^d)} \leq c_{20}
\end{equation*}
for some constant $c_{20} >0$. Denote by $K_{max} = \max_{i,j} K_j^{(i)} < \infty$ the maximal input dimension and by $p_{max} = \max_{i,j} p_j^{(i)} < \infty$
the maximal smoothness of the functions $g_j^{(i)}$. Let $a \geq 1$ and $M_{j,i} \in \mathbb{N}$ sufficiently large (each independent of the size of $a$, but $\min_{j,i} M_{j,i}^{2} >c_{21} \cdot a^{4(p_{max}+1)} /(2^{l} K_{\max} C_{Lip})^{l}$ must hold for some constant $c_{21}>0$ sufficiently large).
\\
a) Let $L, r \in \mathbb{N}$ be such that
\begin{itemize}
\item[(i)] $L \geq l \cdot \Bigg(5+\left\lceil \log_{4}\left(\max_{j,i} M_{j,i}^{2p_j^{(i)}}\right)\right\rceil $ \\
\hspace*{3cm} $\cdot \left(\lceil \log_2(\max\{K_{\max},p_{\max}\}+1)\rceil+1\right)\Bigg)$
\item[(ii)] $r \geq \max_{i \in \{1, \dots, l\}} \sum_{j=1}^{\tilde{N}_{i}} 2^{K_j^{(i)}} \cdot 64 \cdot \binom{K_j^{(i)}+q_j^{(i)}}{K_j^{(i)}} \cdot (K_j^{(i)})^2 \cdot (q_j^{(i)}+1) \cdot M_{j,i}^{K_j^{(i)}}$
\end{itemize}
hold. Then there exists a neural network $t_1$ of the network class $\mathcal{F}\left(L, r\right)$
with the property that
\begin{equation}
\label{th3a}
\|t_1-m\|_{\infty, [-a,a]^d} \leq c_{22} \cdot a^{4(p_{\max}+1)} \cdot \max_{j,i} M_{j,i}^{-2p_j^{(i)}}.
\end{equation}
\noindent
b) Let $L, r \in \mathbb{N}$ such that
\begin{itemize}
\item[(i)] $L
\geq
\sum_{i=1}^{l}
\sum_{j=1}^{\tilde{N}_i}
\Big(
5M_{j,i}^{K_j^{(i)}}+
\\
\hspace*{1cm}
\left\lceil
\log_{4}
\left(
M_{j,i}^{2p_j^{(i)}+4\cdot K_j^{(i)} \cdot (q_j^{(i)}+1)}
\cdot e^{4 \cdot (q_j^{(i)} +1) \cdot (M_{j,i}^{K_j^{(i)}}-1)}
\right)
\right\rceil \\
\hspace*{2cm} \cdot
\lceil
\log_2(\max\{K_j^{(i)},q_j^{(i)}\}+1)
\rceil + \Big\lceil \log_4\Big(M_{j,i}^{2p_j^{(i)}}\Big)\Big\rceil\Big)$
\item[(ii)] $r \geq 2 \sum_{t=1}^{l-1} \tilde{N}_t + 2d +
132 \cdot 2^{K_{max}} \cdot \lceil e^{K_{max}}\rceil
\cdot \binom{{K_{max}}+ \lceil p_{max} \rceil}{{K_{max}}} \cdot
\\
\hspace*{7cm} \max\{\lceil p_{max}\rceil +1, K_{max}^2\}
$
\end{itemize}
hold. Then there exists a neural network $t_2$ of the network class $\mathcal{F}\left(L, r\right)$ with the property that
\eqref{th3a} holds with $t_1$ replaced by $t_2$.
\end{theorem}
In the construction of our network we will compose smaller subnetworks to successively build the final network. For two networks $f \in \mathcal{F}(L_f, r_f)$ and $g \in \mathcal{F}(L_g, r_g)$ with $L_f, L_g, r_f, r_g \in \mathbb{N}$ the \textit{composed} neural network $f \circ g$ is contained in the function class $\mathcal{F}(L_f+L_g, \max\{r_f, r_g\})$.
In the literature (see e.g. \cite{Sch17}) the composition of two networks is often defined by $f \circ \sigma(g)$. Thus for every composition
an additional layer is added. We follow a different approach. Instead of using an additional layer, we ``melt'' the weights of both networks $f$ and $g$ to define $f \circ g$. The following example clarifies our idea: Let
\begin{align*}
f(x) = \beta_f \cdot \sigma(\alpha_f \cdot x) \ \mbox{and} \ g(x) = \beta_g \cdot \sigma(\alpha_g \cdot x), \quad \mbox{for} \ \alpha_f, \alpha_g, \beta_f, \beta_g \in \mathbb{R},
\end{align*}
then we have
\begin{align*}
f \circ g = f(g(x)) = \beta_f \cdot \sigma(\alpha_f \cdot \beta_g \cdot \sigma(\alpha_g \cdot x)).
\end{align*}
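As a quick numerical sanity check of this weight-melting construction, the following Python snippet (with arbitrary illustrative weights, not taken from the construction above) verifies that the two-hidden-neuron network with melted weights $\alpha_g$, $\beta_g \cdot \alpha_f$, $\beta_f$ computes exactly $f(g(x))$.

```python
def relu(z):
    return max(z, 0.0)  # sigma(z) = max(z, 0)

# Illustrative weights (arbitrary choices)
alpha_f, beta_f = 2.0, -1.5
alpha_g, beta_g = 0.5, 3.0

def f(x): return beta_f * relu(alpha_f * x)
def g(x): return beta_g * relu(alpha_g * x)

# Melted network: weights alpha_g, beta_g * alpha_f, beta_f
def f_comp_g(x):
    h1 = relu(alpha_g * x)
    h2 = relu(beta_g * alpha_f * h1)
    return beta_f * h2

for x in (-1.0, 0.3, 2.0):
    assert abs(f(g(x)) - f_comp_g(x)) < 1e-12
```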
\begin{figure}[h]
\centering
\def1.5cm{1.5cm}
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=1.5cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=12pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=gray];
\tikzstyle{output neuron}=[neuron, fill= gray];
\tikzstyle{hidden neuron}=[neuron, fill=gray];
\tikzstyle{annot} = [text width=4em, text centered]
\node[] (I) at (0,-1) {$x$};
\node[hidden neuron] (H) at (1.5cm,-1 cm) {};
\node[right of=H] (O) {$f(x)$};
\node [annot, below of=H, node distance=1cm] (text) {\textit{network $f$}};
\path (I) edge node[above,midway] {$\alpha_f$} (H);
\path (H) edge node[above,midway] {$\beta_f$} (O);
\path (H) edge (O);
\node[] (I1) at (5,-1) {$x$};
\node[hidden neuron] (H1) at (6.5,-1 cm) {};
\node[right of=H1] (O1) {$g(x)$};
\node [annot, below of=H1, node distance=1cm] (text) {\textit{network $g$}};
\path (I1) edge (H1);
\path (H1) edge node[above,midway] {$\beta_g$} (O1);
\path (I1) edge node[above,midway] {$\alpha_g$} (H1);
\end{tikzpicture}
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=1.5cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=12pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=blue];
\tikzstyle{output neuron}=[neuron, fill= gray];
\tikzstyle{hidden neuron}=[neuron, fill=gray];
\tikzstyle{annot} = [text width=4em, text centered]
\node[] (I) at (2,-1) {$x$};
\node[hidden neuron] (H) at (3.5,-1 cm) {};
\node[hidden neuron, right of=H] (H2) {};
\node[ right of=H2, xshift=0.4cm] (O) {$(f \circ g)(x)$};
\node [annot, below of=H, xshift= 0.8cm, node distance=1cm] (text) {\textit{network $f \circ g$}};
\path (I) edge node[above,midway] {$\alpha_g$} (H);
\path (H) edge node[above,midway] {$\beta_g \cdot \alpha_f$} (H2);
\path (H2) edge node[above,midway] {$\beta_f$} (O);
\end{tikzpicture}
\caption{Illustration of the composed network $f \circ g$}
\label{f3}
\end{figure}
\hyperref[f3]{Fig.~\ref*{f3}} illustrates our idea by the network representation as an acyclic graph. This clearly shows why we do not need an additional layer in our composed network.
\begin{proof}
a)
The computation of the function $m(\bold{x})=h_1^{(l)}(\bold{x})$ can be recursively described as in \eqref{hji} and \eqref{hj1}.
The basic idea of the proof is to define a composed network, which approximately computes the functions $h_1^{(1)}, \dots, h_{\tilde{N}_1}^{(1)}, h_1^{(2)}, \dots, h_{\tilde{N}_2}^{(2)}, \dots, h_1^{(l)}$.
\\
\\
For the approximation of $g_j^{(i)}$
we will use the networks
\begin{equation*}
f_{net, wide, g_{j}^{(i)}} \in \mathcal{F}(L_0, r_j^{(i)})
\end{equation*}
described in \autoref{th2} a), where
\begin{align*}
L_0 &= 5+\left\lceil \log_{4}\left(\max_{j,i} M_{j,i}^{2p_j^{(i)}}\right)\right\rceil \cdot \left(\lceil \log_2(\max\{K_{\max},p_{\max}\}+1)\rceil+1\right)
\end{align*}
and
\begin{align*}
r_j^{(i)} = & 2^{K_j^{(i)}} \cdot 64 \cdot \binom{K_j^{(i)}+q_j^{(i)}}{K_j^{(i)}} \cdot (K_j^{(i)})^2 \cdot (q_j^{(i)}+1) \cdot M_{j,i}^{K_j^{(i)}}
\end{align*}
for $j \in \{1, \dots, \tilde{N}_i\}$ and $i \in \{1, \dots, l\}$.
\\
\\
To compute the values of $h_1^{(1)}, \dots, h_{\tilde{N}_1}^{(1)}$ we use the networks
\begin{align*}
\hat{h}_1^{(1)}(\bold{x})&= f_{net, wide, g_{1}^{(1)}}\left(x^{(\pi(1))}, \dots, x^{(\pi(K_1^{(1)}))}\right)\\
& \quad \vdots\\
\hat{h}_{\tilde{N}_1}^{(1)}(\bold{x})& = f_{net, wide, g_{\tilde{N}_1}^{(1)}}\left(x^{(\pi(\sum_{t=1}^{\tilde{N}_1-1} K_t^{(1)} +1))}, \dots, x^{(\pi(\sum_{t=1}^{\tilde{N}_1} K_t^{(1)}))}\right).
\end{align*}
To compute the values of $h_1^{(i)}, \dots, h_{\tilde{N}_i}^{(i)}$ $(i \in \{2, \dots, l\})$ we use the networks
\begin{align*}
\hat{h}_j^{(i)}(\bold{x}) = f_{net, wide, g_{j}^{(i)}}\left(\hat{h}_{\sum_{t=1}^{j-1} K_t^{(i)}+1}^{(i-1)}(\bold{x}), \dots, \hat{h}_{\sum_{t=1}^{j} K_t^{(i)}}^{(i-1)}(\bold{x})\right)
\end{align*}
for $j \in \{1, \dots, \tilde{N}_i\}$. Finally we set
\begin{align*}
t_1(\bold{x}) = \hat{h}_1^{(l)}(\bold{x}).
\end{align*}
\begin{figure}
\centering
\tikzstyle{line} = [draw, -latex']
\tikzstyle{annot} = [text width=4em, text centered]
\tikzstyle{mycirc} = [circle,fill=white, minimum size=0.005cm]
\footnotesize{
\begin{tikzpicture}[node distance = 3cm, auto]
\node [] (x1) {\scriptsize $x^{(1)}$};
\node [below of=x1, node distance =1cm] (x2) {\scriptsize $x^{(2)}$};
\node [below of=x2, node distance = 1cm] (dots) {\scriptsize $\vdots$};
\node [below of=dots, node distance = 1cm] (xd) {\scriptsize $x^{(d)}$};
\node [annot, above of=x1, node distance=2cm] (text) {\textit{Input}};
%
\node [right of=x1, above of=x1, node distance=1.5cm] (fnet1) {\scriptsize $f_{net, wide, g_1^{(1)}}$};
\node [below of=fnet1, node distance=3cm] (fnet2) {\scriptsize $\vdots$};
\node [right of = xd, below of =xd, node distance=1.5cm] (fnetg1) {\scriptsize $f_{net, wide, g_{\tilde{N}_1}^{(1)}}$};
\node [annot, above of=fnet1, node distance=0.5cm] (text) {\textit{Level 1}};
\node [right of=fnet1, below of = fnet1, node distance=1.5cm] (fnet4) {\scriptsize $f_{net, wide, g_1^{(2)}}$};
\node [below of=fnet4, node distance=1.5cm] (fnet2) {\scriptsize $\vdots$};
\node [right of=fnetg1, above of= fnetg1, node distance=1.5cm] (fnet5) {\scriptsize $f_{net, wide, g_{\tilde{N}_2}^{(2)}}$};
\node [annot, above of=fnet4, node distance=2cm] (text) {\textit{Level 2}};
\node [right of=fnet4, node distance=1.5cm] (dots1) {\scriptsize $\dots$};
\node [right of=fnet5, node distance=1.5cm] (dots2) {\scriptsize $\dots$};
\node [annot, above of=dots1, node distance=2cm] (text) {$\dots$};
\node [right of=dots1, below of=dots1, node distance=1.5cm] (fnet6) {$f_{net, wide, g_{1}^{(l)}}$};
\node [right of= fnet6, node distance=1.9cm] (t1) {$t_1(\bold{x})$};
\node [annot, above of=fnet6, node distance=3.5cm] (text) {\textit{Level l}};
\path [line] (x1) -- (fnet1);
\path [line] (x2) -- (fnet1);
\path [line] (xd) -- (fnet1);
\path [line] (x1) -- (fnetg1);
\path [line] (x2) -- (fnetg1);
\path [line] (xd) -- (fnetg1);
\path [line] (fnet6) -- (t1);
\path [line] (fnet1) -- (fnet4);
\path [line] (fnetg1) -- (fnet4);
\path [line] (fnet1) -- (fnet5);
\path [line] (fnetg1) -- (fnet5);
\path [line] (fnet4) -- (dots1);
\path [line] (fnet5) -- (dots2);
\path [line] (dots1) -- (fnet6);
\path [line] (dots2) -- (fnet6);
\end{tikzpicture}}
\caption{Illustration of the neural network $t_1$}
\label{h1}
\end{figure}
\hyperref[h1]{Fig.~\ref*{h1}} illustrates the computation of the network $t_1(\bold{x})$. It is easy to see that $t_1(\bold{x})$ forms a composed network, where the networks $\hat{h}_1^{(i)}, \dots, \hat{h}_{\tilde{N}_i}^{(i)}$ are computed in parallel (i.e., in the same layers) for $i \in \{1, \dots, l\}$, respectively. Since each $\hat{h}_j^{(i)}$ $(j \in \{1, \dots, \tilde{N}_i\})$ needs $L_0$ layers and $r_j^{(i)}$ neurons per layer,
this network is contained in the class
\begin{align*}
\mathcal{F}\left(l \cdot L_0, \max_{i \in \{1, \dots, l\}} \sum_{j=1}^{\tilde{N}_i} r_j^{(i)}\right)
\subseteq
\mathcal{F}\left( L,r \right)
.
\end{align*}
Using induction on $i$, it is easy to see that $t_1$ satisfies
\begin{align}
\|t_1 - m \|_{\infty, [-a,a]^d} \leq c_{23} \cdot a^{4 \cdot (p_{\max} +1)} \cdot \max_{j,i} M_{j,i}^{-2p_j^{(i)}}.
\end{align}
A complete proof can be found in Supplement A. By successively applying $f_{id}$ to the output of the network $t_1$, we can easily enlarge the number of hidden layers in the network.
This shows the assertion of the theorem.
\\
\\
b)
Denote $h_1^{(1)}, \dots, h_{\tilde{N}_1}^{(1)}, \dots, h_1^{(l-1)}, \dots, h_{\tilde{N}_{l-1}}^{(l-1)}, h_1^{(l)}$ by $h_1, h_2, \dots, h_{\sum_{t=1}^{l} \tilde{N}_t}$, such that
\begin{equation*}
h_j^{(i)}(\bold{x}) = h_{N_j^{(i)}}(\bold{x}),
\end{equation*}
where
\begin{equation*}
N_j^{(i)}= \sum_{t=1}^{i-1} \tilde{N}_t +j
\end{equation*}
for $i \in \{1, \dots, l\}$ and $j \in \{1, \dots, \tilde{N}_i\}$. Then we have
\begin{equation}
\label{th3beq1}
h_j(\bold{x}) =g_j^{(1)}\left(x^{\left(\pi(\sum_{t=1}^{j-1} K_t^{(1)}+1)\right)}, \dots, x^{\left(\pi(\sum_{t=1}^{j} K_t^{(1)})\right)}\right)
\end{equation}
for $j \in \{1, \dots, \tilde{N}_1\}$
and
\begin{equation}
\label{th3beq2}
h_{N_j^{(i)}}(\bold{x}) = g_{j}^{(i)}\left(h_{N^{(i-1)}_{\sum_{t=1}^{j-1} K_t^{(i)}+1}}(\bold{x}), \dots, h_{N^{(i-1)}_{\sum_{t=1}^{j} K_t^{(i)}}}(\bold{x})\right)
\end{equation}
for $j \in \{1, \dots, \tilde{N}_i\}$ and $i \in \{2, \dots, l\}$.
\newline
In our neural network we will compute $h_1, h_2, \dots, h_{\sum_{t=1}^{l} \tilde{N}_t}$ successively. In the construction of the network each $g_j^{(i)}$ will be approximated by a network
\begin{equation*}
f_{net, deep, g_{j}^{(i)}} \in \mathcal{F}(L_j^{(i)}, r_0)
\end{equation*}
described in \autoref{th2} b), where
\begin{align*}
L_j^{(i)} &= 5M_{j,i}^{K_j^{(i)}}+\left\lceil \log_{4}\left(M_{j,i}^{2p_j^{(i)}+4\cdot K_j^{(i)} \cdot (q_j^{(i)}+1)} \cdot e^{4 \cdot (q_j^{(i)} +1) \cdot (M_{j,i}^{K_j^{(i)}}-1)}\right)\right\rceil\\
& \hspace*{2cm} \cdot \left(\lceil \log_2(\max\{K_j^{(i)},q_j^{(i)}\}+1)\rceil\right)+\lceil \log_4(M_{j,i}^{2p_j^{(i)}})\rceil
\end{align*}
and
\begin{align*}
r_0 = &
132 \cdot 2^{K_{max}} \cdot \lceil e^{K_{max}}\rceil
\cdot \binom{{K_{max}}+ \lceil p_{max} \rceil}{{K_{max}}} \cdot
\max\{\lceil p_{max}\rceil +1, K_{max}^2\}
\end{align*}
with $M_{j,i} \in \mathbb{N}$ sufficiently large. Furthermore we use the identity network
\begin{align*}
f_{id}(z) = \sigma(z)-\sigma(-z) = z
\end{align*}
with
\begin{align*}
f_{id}^{0} (z) &= z, \quad &z \in \mathbb{R}, \notag\\
f_{id}^{t+1} (z) &= f_{id}\left(f_{id}^t(z)\right) = z, \quad &z \in \mathbb{R}, t \in \mathbb{N}_0
\end{align*}
and
\begin{align*}
f_{id}^t (x^{(1)}, \dots, x^{(d)}) = (f_{id}^t(x^{(1)}), \dots, f_{id}^t(x^{(d)}))=(x^{(1)}, \dots, &x^{(d)}), \\
& x^{(1)}, \dots, x^{(d)} \in \mathbb{R}
\end{align*}
to shift some values or vectors in the next hidden layers, respectively. We set
\begin{align*}
\tilde{L}_{N_j^{(i)}} = L_j^{(i)}
\end{align*}
for $i \in \{1, \dots, l\}$ and $j \in \{1, \dots, \tilde{N}_i\}$.
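That $f_{id}$ indeed reproduces its input exactly for the ReLU activation $\sigma(z)=\max\{z,0\}$ can be checked directly, as in the following short Python snippet.

```python
def sigma(z):
    return max(z, 0.0)  # ReLU activation

def f_id(z):
    # identity network: sigma(z) - sigma(-z) = z for all real z
    return sigma(z) - sigma(-z)

for z in (-2.5, 0.0, 1.7):
    assert f_id(z) == z
```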
\begin{figure}[h!]
\centering
\pagestyle{empty}
\tikzstyle{line} = [draw, -latex']
\tikzstyle{annot} = [text width=4em, text centered]
\tikzstyle{mycirc} = [circle,fill=white, minimum size=0.005cm]
\footnotesize{
\begin{tikzpicture}[node distance = 2cm, auto]
\node [] (x1) {\scriptsize $x^{(1)}$};
\node [below of=x1, node distance =1cm] (x2) {\scriptsize $x^{(2)}$};
\node [below of=x2, node distance = 1cm] (dots) {\scriptsize $\vdots$};
\node [below of=dots, node distance = 1cm] (xd) {\scriptsize $x^{(d)}$};
\node [annot, above of=x1, node distance=2cm] (text) {\textit{Input}};
%
\node [right of=dots, below of=dots, node distance=2cm] (fid1) {\scriptsize $f_{id}^{L_1^{(1)}}$};
\node [right of = x2, above of = x2, node distance=2cm] (fnetg1) {\scriptsize $f_{net, deep, g_1^{(1)}}$};
\node [annot, right of=text, node distance=2cm] (text1) {$\hat{h}_1(\bold{x})$};
\path [line] (x1) -- (fid1);
\path [line] (x2) -- (fid1);
\path [line] (xd) -- (fid1);
\path [line] (x1) -- (fnetg1);
\path [line] (x2) -- (fnetg1);
\path [line] (xd) -- (fnetg1);
\node [right of=fid1] (fid2) {\scriptsize $f^{L_2^{(1)}}_{id}$};
\node [ above of=fid2, node distance=3cm] (fnetg2) {\scriptsize $f_{net, deep, g_2^{(1)}}$};
\node [ right of=fnetg1, node distance=2cm] (fid3) {\scriptsize $f^{L_2^{(1)}}_{id}$};
\node [annot, right of=text1, node distance=2cm] (text2) {$\hat{h}_2(\bold{x})$};
\path [line] (fnetg1) -- (fid3);
\path [line] (fid1) -- (fnetg2);
\path [line] (fid1) -- (fid2);
\node [right of=fid3] (fid4) {\scriptsize $f^{L_3^{(1)}}_{id}$};
\node [below of=fid4] (fid5) {\scriptsize $f^{L_3^{(1)}}_{id}$};
\node [right of=fid2] (fid6) {\scriptsize $f^{L_3^{(1)}}_{id}$};
\node [ above of=fid6, node distance=2cm] (fnetg3) {\scriptsize $f_{net, deep, g_3^{(1)}}$};
\node [annot, right of=text2, node distance=2cm] (text3) {$\hat{h}_3(\bold{x})$};
\path [line] (fid3) -- (fid4);
\path [line] (fnetg2) -- (fid5);
\path [line] (fid2) -- (fnetg3);
\path [line] (fid2) -- (fid6);
\node [right of=fid4] (dots) {\scriptsize $\dots$};
\node [below of=dots] (dots1) {\scriptsize $\dots$};
\node [right of=fnetg3] (dots2) {\scriptsize $\dots$};
\node [below of=dots2] (dots3) {\scriptsize $\dots$};
\path [line] (fid4) -- (dots);
\path [line] (fid5) -- (dots1);
\path [line] (fnetg3) -- (dots2);
\path [line] (fid6) -- (dots3);
\node [annot, right of=text3, node distance=2cm] (text4) {$\dots$};
\node [right of=dots] (fid7) {\scriptsize $f^{L_{\tilde{N}_1}^{(1)}}_{id}$};
\node [below of=fid7, node distance=1cm] (vdots) {\scriptsize $\vdots$};
\node [right of=dots1] (fid8) {\scriptsize $f^{L_{\tilde{N}_1}^{(1)}}_{id}$};
\node [right of=dots2] (fid9) {\scriptsize $f^{L_{\tilde{N}_1}^{(1)}}_{id}$};
\node [right of=dots3] (fid10) {\scriptsize $f^{L_{\tilde{N}_1}^{(1)}}_{id}$};
\node [ above of=fid10, node distance=1cm] (fnetgd) {\scriptsize $f_{net, deep, g_{\tilde{N}_1}^{(1)}}$};
\node [annot, right of=text3, right of=text3, node distance=2cm] (text3) {$\hat{h}_{\tilde{N}_1}(\bold{x})$};
\path [line] (dots) -- (fid7);
\path [line] (dots1) -- (fid8);
\path [line] (dots2) -- (fid9);
\path [line] (dots3) -- (fnetgd);
\path [line] (dots3) -- (fid10);
\end{tikzpicture}}
\caption{Illustration of the neural network, which computes $h_1, \dots, h_{\tilde{N}_1}$}
\label{fprod}
\end{figure}
\hyperref[fprod]{Fig.~\ref*{fprod}} illustrates how the functions $h_1, \dots, h_{\tilde{N}_1}$ are computed by our network and gives an idea of how the smaller networks are stacked on top of each other. The main idea is that we successively apply the networks $f_{net, deep, g_j^{(i)}}$ in consecutive layers. Here we make use of the identity network $f_{id}$, which enables us to shift the input value as well as every already computed function into the next hidden layers without an error. As described in \eqref{th3beq1} and \eqref{th3beq2}, our network successively computes
\begin{align*}
&\hat{h}_i(\bold{x})=\hat{g}_i^{(1)}(\bold{x}) := f_{net, deep, g_i^{(1)}}\left(x^{\left(\pi(\sum_{t=1}^{i-1} K_t^{(1)}+1)\right)}, \dots, x^{\left(\pi(\sum_{t=1}^{i} K_t^{(1)})\right)}\right)
\end{align*}
for $i \in \{1, \dots, \tilde{N}_1\}$
and
\begin{align*}
\hat{h}_{N_j^{(i)}}(\bold{x}) = f_{net, deep, g_j^{(i)}}\left(\hat{h}_{N^{(i-1)}_{\sum_{t=1}^{j-1} K_t^{(i)} +1}}(\bold{x}), \dots, \hat{h}_{N_{\sum_{t=1}^{j} K_t^{(i)}}^{(i-1)}}(\bold{x})\right)
\end{align*}
for $i \in \{2, \dots, l\}$ and $j \in \{1, \dots, \tilde{N}_i\}$.
Finally we set
\begin{align*}
t_2(\bold{x}) = \hat{h}_{N_{1}^{(l)}}(\bold{x})= \hat{h}_{\sum_{t=1}^{l} \tilde{N}_t}(\bold{x}).
\end{align*}
Note that for notational simplicity we have replaced every network $f_{id}$ in the input of the functions $\hat{h}_j$ by the value it computes (since $f_{id}$ computes this value without an error).
Since each network $\hat{h}_j$ for $j \in \{1, \dots, \sum_{t=1}^{l} \tilde{N}_t\}$ needs $\tilde{L}_j$ layers and $r_0$ neurons per layer, and since we further need $2d$ neurons per layer to successively apply $f_{id}$ to the input $\bold{x}$ as well as $2$ neurons per layer for each of the at most $\sum_{t=1}^{l-1} \tilde{N}_t$ already computed functions in our network, the final network $t_2$ is contained in the class
\begin{align*}
\mathcal{F}\left(\sum_{j=1}^{\sum_{t=1}^{l} \tilde{N}_t} \tilde{L}_j, 2 \cdot \sum_{t=1}^{l-1} \tilde{N}_t+2d+r_0\right).
\end{align*}
Using induction on $i$, it is easy to see that $t_2$ satisfies
\begin{align}
\|t_2 - m \|_{\infty, [-a,a]^d} \leq c_{24} \cdot a^{4(p_{\max}+1)} \cdot \max_{j,i} M_{j,i}^{-2p_j^{(i)}}.
\end{align}
A complete proof can be found in Supplement A. As described in part a) we can easily enlarge the number of hidden layers by successively applying $f_{id}$ to the output of the network $t_2$.
\end{proof}
\section{Proof of Theorem 1}
\label{se5}
a) Standard bounds of empirical process theory (cf. Lemma 18 in Supplement B) lead to
\begin{align*}
& {\bf E} \int |m_n(\bold{x}) - m(\bold{x})|^2 {{\bf P}}_{\bold{X}} (d\bold{x})\\
&\leq \frac{c_{25} (\log n)^2 \left(\sup_{\bold{x}_1^n \in (\mathbb{R}^d)^n}\log\left(
\mathcal{N}_1 \left(\frac{1}{n \cdot c_3 \cdot \log n}, T_{c_{3} \cdot \log(n)} \mathcal{F}(L_n,r_n), \bold{x}_1^n\right)
\right)+1\right)}{n}\\
&\quad + 2 \inf_{f \in \mathcal{F}(L_n, r_n)} \int |f(\bold{x})-m(\bold{x})|^2 {{\bf P}}_{\bold{X}} (d\bold{x}).
\end{align*}
Set
\begin{align*}
(\bar{p},\bar{K}) \in \mathcal{P} \ \mbox{such that} \ (\bar{p}, \bar{K}) = \arg \min_{(p,K) \in \mathcal{P}} \frac{p}{K}.
\end{align*}
The fact that $1/n^{c_{26}} \leq 1/(n \cdot c_3 \cdot \log(n)) \leq c_{3} \cdot \log(n)/8$, $L_n \leq c_{27} \cdot \log(n)$ and \linebreak
$r_n \leq c_{28} \cdot n^{\frac{1}{2(2\bar{p}/\bar{K} + 1)}}$
holds for $c_{26}, c_{27}, c_{28} >0$,
allows us to apply Lemma 19 in Supplement B to bound the first summand by
\begin{align}
\label{th1eq1}
&\frac{c_{25} \cdot (\log(n))^2 \cdot c_{29} \cdot (\log(n))^3 \cdot \log\left(c_{29} \cdot \log(n) \cdot n^{\frac{2}{(2(2\bar{p}/\bar{K} + 1))}} \right) \cdot c_{29} \cdot n^{\frac{1}{(2\bar{p}/\bar{K} + 1)}}}{n}\notag\\
\leq & \frac{c_{30} \cdot (\log(n))^6 \cdot n^{\frac{1}{2\bar{p}/\bar{K} +1}}}{n}
\leq c_{30} \cdot (\log(n))^6 \cdot n^{-\frac{2\bar{p}}{2\bar{p} +\bar{K}}}
\end{align}
for a sufficiently large $n$.
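Here the last inequality in \eqref{th1eq1} uses the elementary identity
\begin{align*}
n^{\frac{1}{2\bar{p}/\bar{K}+1}-1}
= n^{-\frac{2\bar{p}/\bar{K}}{2\bar{p}/\bar{K}+1}}
= n^{-\frac{2\bar{p}}{2\bar{p}+\bar{K}}},
\end{align*}
which follows by multiplying the numerator and the denominator of the exponent by $\bar{K}$.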
Regarding the second summand we apply \autoref{th3} a), where we choose
\begin{align*}
M_{j,i} = \bigg\lceil n^{\frac{1}{2(2p_j^{(i)}+K_j^{(i)})}}\bigg\rceil.
\end{align*}
Set
\[
a_n = (\log n)^{\frac{1}{4 \cdot (p_{max}+1)}}.
\]
W.l.o.g. we assume $\mathrm{supp}(\bold{X}) \subseteq [-a_n,a_n]^d$. \autoref{th3} a) allows us to bound
\[
\inf_{f \in \mathcal{F}(L_n, r_n)} \int |f(\bold{x})-m(\bold{x})|^2 {{\bf P}}_{\bold{X}} (d\bold{x})
\]
by
\begin{align*}
&c_{31} \cdot \left(a_n^{4(p_{\max}+1)}\right)^2 \cdot
\max_{j,i} M_{j,i}^{-4p_j^{(i)}} = c_{31} \cdot (\log(n))^2 \cdot \max_{j,i} n^{-\frac{2p_j^{(i)}}{2p_j^{(i)} + K_j^{(i)}}}.
\end{align*}
This together with \eqref{th1eq1} and the fact that
\begin{align*}
\max_{(p,K) \in \mathcal{P}} n^{- \frac{2p}{2p+K}}
=
n^{-\frac{2\bar{p}}{2\bar{p} +\bar{K}}} = \max_{j,i} n^{-\frac{2p_j^{(i)}}{2p_j^{(i)} + K_j^{(i)}}}
\end{align*}
implies the assertion.
\\
\\
Part b) follows by a slight modification of the proof of \autoref{th1} a), where we use \autoref{th3} b) instead of a) to bound the approximation error.
\section*{Acknowledgement}
The authors are grateful for the many comments and suggestions brought up by the AE and four referees, which improved an early version of this manuscript.
\bigskip
\begin{center}
{\large\bf SUPPLEMENTARY MATERIAL}
\end{center}
Supplement description:
\begin{description}
\item[Section A: Network Approximation of Smooth Functions:] This section contains the long and rather technical proof of \autoref{th2} and the induction proofs of \autoref{th3}, which show the accuracy of the networks.
\item[Section B: Auxiliary Results and Further Proofs:] This section contains the auxiliary results and further proofs of all lemmata that follow by straightforward modifications of earlier results.
\end{description}
\bibliographystyle{abbrv}
\section{Introduction}
The Kondo model, describing a magnetic moment
embedded in a system of non-interacting fermions, has continued to attract the attention of both theorists and experimentalists for more than half a century.
In spite of the model's seeming simplicity, it is well known that the model is as far from simple as one can get, especially when additional complicating factors, like spin anisotropy or a non-flat behaviour of the electron density of states (DOS), are taken into account.
A very insightful approach to this model is based on scaling equation pioneered by Anderson \cite{anderson,hewson}.
As far as the spin-anisotropic model is concerned, to the best of our knowledge only the case of the $XXZ$ model (see below) with a flat DOS has been analysed
in detail \cite{hewson}.
The pseudogap Kondo model, with a
power-law DOS $\rho(\epsilon)\sim|\epsilon|^r$,
has recently attracted a lot of attention \cite{sengupta,wehling,vojta,uchoa}
(for a review, see Ref. \onlinecite{fritz}). In particular, graphene,
where the Kondo effect was observed recently \cite{chen}, is considered
a typical realization of this model.
More generally, one is interested in Kondo problem for spin coupled to electrons with the pseudogap
or diverging DOS \cite{vojta3,zhuravlev,kanao,cazalilla,mitchell2,mitchell,shirakawa}.
A model of a Kondo-like impurity interacting spin-isotropically with a band of fermions whose DOS is zero or small near the Fermi energy was considered in Ref. \onlinecite{fradkin}.
There, renormalization-group arguments were used to demonstrate that this model has a nontrivial zero-temperature phase
transition at a finite coupling constant, in contrast to the zero-coupling-constant transition of the
Kondo model with constant DOS.
Numerical and perturbative
renormalization studies of the Kondo and Anderson
models have provided a comprehensive understanding of
phase diagrams and thermodynamic properties \cite{fritz, chen2,buxton,bulla,bulla2,glossop,fritz2}.
However, to the best of our knowledge, a spin-anisotropic pseudogap Kondo model
has never been considered.
The rest of the paper is organized as follows.
In Section \ref{poor} we formulate the poor man's scaling equation for the spin-anisotropic model
with a power-law DOS. In
Section \ref{XXZm}, solutions of the scaling equations satisfying the condition $J_x=J_y$, referred to as the $XXZ$ model, are presented
as a preparation for Section \ref{xyzm}, where we integrate the general scaling equations for the spin-anisotropic model.
Some important mathematical details and geometric interpretation of the solution are relegated to the Appendix.
\section{Poor man's scaling for the spin-anisotropic model}
\label{poor}
\subsection{Hamiltonian and scaling equation}
Poor man's scaling is the renormalization idea applied to the model of a single magnetic impurity in the Fermi sea of itinerant electrons.
The Hamiltonian of the model can be written as
\begin{eqnarray}
\label{hamilto}
H=H_0+V=\sum_{{\bf k}\alpha}\epsilon_{\bf k}c_{{\bf k}\alpha}^{\dagger}c_{{\bf k}\alpha}+V,
\end{eqnarray}
where $c^{\dagger}_{{\bf k}\alpha}$ and $c_{{\bf k}\alpha}$ are electron creation and annihilation operators, $\epsilon_{\bf k}$ is the energy of itinerant electron with wave vector ${\bf k}$ and spin $\alpha$, and the operator $V$ describes interaction between the electrons and the impurity.
To formulate the renormalization procedure we need three objects: the Hamiltonian of the system $H$, the Hilbert space ${\cal H}$
(which is the product of the band of itinerant electrons of width $D$ and of the Hilbert space where the impurity lives), and the $T$ matrix,
given by the series
\begin{eqnarray}
\label{pertu}
T=V+VG_0V+VG_0VG_0V+\dots,
\end{eqnarray}
where $G_0=(E+i0-H_0)^{-1}$.
Suppose we are interested only in the matrix elements of the $T$ matrix between the electron states at a distance from the Fermi energy much less than the band width. Can we ignore the band edges? The answer is ``No'', because of virtual transitions of the electrons to the band edges, and virtual transitions are what quantum mechanics is all about. Can we take into account all virtual transitions using the perturbative expansion (\ref{pertu})? The answer is again ``No'', because the series diverges due to transitions to the states close to the Fermi energy.
The brilliant idea of Anderson was that we can take into account virtual transitions to the band edges perturbatively,
that is, by taking into account only the first few terms in Eq. (\ref{pertu}),
thus
reducing the band width $D$ of the itinerant electrons and calculating
the renormalization of the Hamiltonian due to the elimination of the above mentioned virtual transitions.
Thus we reduce the Hilbert space ${\cal H}$ and renormalize the Hamiltonian $H$ accordingly, to keep the $T$
matrix constant.
And we can repeat this procedure again and again.
Now let us consider the Kondo model. We find it appropriate to write down the exchange part of the Hamiltonian
in an explicitly rotation-invariant form
\begin{eqnarray}
\label{hamiltonian}
V=H_{\text{ex}}=\sum_{ij}J_{ij}S^is^j(0),
\end{eqnarray}
where
$\vec{S}$ is the (sitting at ${\bf r}=0$) impurity spin operator (the spin is one half),
$\vec{s}(0)=\frac{1}{2}\sum_{{\bf k}{\bf k}'\alpha\beta}c_{{\bf k}'\alpha}^{\dagger}\vec{\sigma}_{\alpha\beta}c_{{\bf k}\beta}$,
($\sigma^x,\sigma^y,\sigma^z$ are Pauli matrices) is the itinerant electrons spin operator, and $J_{ij}$ are the anisotropic exchange coupling constants.
Let us first consider the traditional case of a flat DOS.
Poor man's scaling consists in reducing the band width $D$ of the itinerant electrons and calculating perturbatively
the renormalized interactions due to the elimination of the virtual excitations to the band edges.
For the isotropic model
\begin{eqnarray}
\label{hamiltonianiso}
H_{\text{ex}}=J\vec{S}\cdot\vec{s}(0)
\end{eqnarray}
the scaling equation in the lowest order is \cite{anderson,hewson}
\begin{eqnarray}
\frac{d J}{d\ln\Lambda}=-2\rho J^2,
\end{eqnarray}
where $\rho$ is DOS.
Consider a generalization of the Hamiltonian (\ref{hamiltonian}):
\begin{eqnarray}
\label{hamiltoniansu}
H_{\text{ex}}=\sum_{ij=1}^{N^2-1}J_{ij}S^iT^j,
\end{eqnarray}
where $S^i$ and $T^j$ are traceless generators of the group $SU(N)$.
The quadratic term in the scaling equation appears due to the elimination of virtual transitions of electrons to the band edges in the lowest order of perturbation theory. Hence,
in the process of renormalization of the anisotropic Hamiltonian there appears the factor
\begin{eqnarray}
\label{tensorsu}
&&J_{ij}S^iT^jJ_{kl}S^kT^l\\
&&=\frac{1}{4}J_{ij}J_{kl}\left(\{S^i, S^k\}\{T^j,T^l\}+[S^i, S^k][T^j,T^l]\right)\nonumber
\end{eqnarray}
(in Eqs. (\ref{tensorsu})--(\ref{scalinga000}) we adopt the summation convention: in all equations summation from 1 to $N^2-1$ with respect to every repeated index is implied).
Taking into account that
\begin{eqnarray}
\label{ksu}
\{S_i,S_k\}&=&\frac{\delta_{ik}}{N}+d_{ikn}S_n \nonumber\\
\left[S_i,S_k\right]&=&if_{ikn}S_n,
\end{eqnarray}
where the $N\times N$ unit matrix is suppressed, the $d$-coefficients are symmetric in all indices,
and the $f$ are the structure constants, we obtain
\begin{eqnarray}
\label{tensor1}
&&4J_{ij}S^iT^jJ_{kl}S^kT^l=\frac{1}{N^2}J_{ij}J_{ij}\nonumber\\
&&+\frac{1}{N}J_{ij}J_{jk}d_{ikm}\left(S^m+T^m\right)\\
&&+J_{ij}J_{kl}\left(d_{ikm}d_{jln}-f_{ikm}f_{jln}\right)S^mT^n.\nonumber
\end{eqnarray}
Thus for the $SU(2)$ Kondo model one obtains \cite{cox,irkhin}
\begin{eqnarray}
\label{scalinga000}
\frac{d J_{mn}}{d\ln\Lambda}=-\rho J_{ij}J_{kl}\epsilon_{ikm}\epsilon_{jln},
\end{eqnarray}
where $\epsilon$ is Levi-Civita symbol.
The microscopic tensor $J_{ik}^{(m)}$ can always be reduced to principal axes by a rotation of the coordinate system, and it keeps its
diagonal form in the process of renormalization, as we see from Eq. (\ref{scalinga000}). So we can write down the exchange Hamiltonian as
\begin{eqnarray}
\label{h}
H_{\text{ex}}=\sum_iJ_{i}S^is^i(0),
\end{eqnarray}
and the scaling equation as
\begin{eqnarray}
\label{ggg}
\frac{d J_x}{d\ln\Lambda}&=&-2\rho J_yJ_z \nonumber\\
\frac{d J_y}{d\ln\Lambda}&=&-2\rho J_xJ_z \\
\frac{d J_z}{d\ln\Lambda}&=&-2\rho J_xJ_y,\nonumber
\end{eqnarray}
and sweep under the carpet the question of the stability of the diagonal solution by adding to the poor man's scaling a possible rotation of the coordinate system at each step. \footnote{Actually, the Hamiltonian (\ref{hamiltonian}) (or (\ref{h})) is more meaningful for the theory of two-level systems, $S^i$ and $s^i$ being pseudospin operators \cite{cox}, than for the orthodox Kondo model \cite{nozieres}.}
Now let us return to the case when the electron dispersion law leads to a power-law dependence of the DOS upon the energy
\begin{eqnarray}
\label{e}
\rho(\epsilon)=C|\epsilon|^r,\;\;\;\text{if}\;\;|\epsilon|<D,
\end{eqnarray}
where $r$ can be either positive or negative ($r>-1$) \cite{mitchell}.
For the DOS we consider, in distinction from the standard renormalization procedure \cite{hewson}, one has additionally to rescale the unit of length \cite{fradkin}.
Thus for the isotropic model (\ref{hamiltonianiso})
the scaling equation is \footnote{It is known that the physics of the problem can change at $r=1/2$ \cite{fradkin,fritz}, and thus the range of validity of the scaling equations (\ref{scalinga00}) is an open problem.
However, we do not discuss this problem here and admit the limitation of our approach: the value of $r$ determines just the scale of the problem (see below).}
\begin{eqnarray}
\label{iso}
\frac{d J}{d\ln\Lambda}=rJ-2GJ^2,
\end{eqnarray}
where $G=CD^r$ and $\Lambda=D'/D$; here $D'$ is the actual width of the itinerant electron band after the exclusion of the virtual excitations to the
edges. This equation was obtained and studied previously \cite{fradkin}.
Combining spin anisotropy and the power-law DOS, we should add linear terms to the RHS of Eq. (\ref{ggg}) (with $\rho$ replaced by $G$) to obtain
\begin{eqnarray}
\label{scalinga00}
\frac{d J_x}{d\ln\Lambda}&=&rJ_x-2GJ_yJ_z \nonumber\\
\frac{d J_y}{d\ln\Lambda}&=&rJ_y-2GJ_xJ_z \\
\frac{d J_z}{d\ln\Lambda}&=&rJ_z-2GJ_xJ_y.\nonumber
\end{eqnarray}
\subsection{Symmetries and fixed points of the scaling equation}
\label{sis}
Equation (\ref{scalinga000}) is symmetric with respect to all space rotations (the group $K$ \cite{landau3}), from which only the symmetry with respect to permutation of the indices $x,y,z$ is left for Eq. (\ref{scalinga00}). Additionally, both equations are symmetric with respect to space inversion accompanied by the inversion of the direction of flow and change of sign of $r$.
So further on we consider explicitly only the case $r>0$ (and present the results for negative $r$ in some cases).
We introduce $\lambda=\Lambda^r$ and, unless $G$ appears explicitly in the equation, measure $J$ in units of $r/2G$.
So Eq. (\ref{scalinga00}) becomes
\begin{eqnarray}
\label{scalinga0b}
\lambda\frac{dJ_x}{d\lambda}&=&J_x-J_yJ_z\nonumber\\
\lambda\frac{dJ_y}{d\lambda}&=&J_y-J_xJ_z\\
\lambda\frac{dJ_z}{d\lambda}&=&J_z-J_xJ_y.\nonumber
\end{eqnarray}
When we look for the flow lines of Eq. (\ref{scalinga0b}), the parameter $\lambda\in(0,+\infty)$ (and decreases along
a flow line). When we consider the physical problem, the parameter $\lambda\in (0,+1]$, and Eq. (\ref{scalinga0b}) becomes the initial (final) value problem with
\begin{eqnarray}
\label{subs}
J_n(1)=2GJ_n^{(m)}/r.
\end{eqnarray}
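The rescaling leading from Eq. (\ref{scalinga00}) to Eq. (\ref{scalinga0b}) is easy to check numerically. The following Python sketch (the values of $r$, $G$ and the microscopic couplings are illustrative assumptions, not taken from any physical system) integrates both forms of the equations with a Runge--Kutta scheme and verifies that they agree after the substitutions $\lambda=\Lambda^r$, $J\to 2GJ/r$:

```python
import math

def rk4(f, y, t0, t1, n):
    """Fixed-step fourth-order Runge-Kutta for y' = f(t, y)."""
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [a + h/2*b for a, b in zip(y, k1)])
        k3 = f(t + h/2, [a + h/2*b for a, b in zip(y, k2)])
        k4 = f(t + h, [a + h*b for a, b in zip(y, k3)])
        y = [a + h/6*(p + 2*q + 2*s + w)
             for a, p, q, s, w in zip(y, k1, k2, k3, k4)]
        t += h
    return y

r, G = 0.4, 1.0                       # illustrative values
Jm = [0.30, 0.20, 0.10]               # illustrative microscopic couplings J^(m)

def rhs_phys(t, J):                   # Eq. (scalinga00), t = ln(Lambda)
    x, y, z = J
    return [r*x - 2*G*y*z, r*y - 2*G*x*z, r*z - 2*G*x*y]

def rhs_scaled(lam, j):               # Eq. (scalinga0b), divided through by lambda
    x, y, z = j
    return [(x - y*z)/lam, (y - x*z)/lam, (z - x*y)/lam]

t1 = -0.5                             # reduce the band width: ln(Lambda): 0 -> -0.5
J_end = rk4(rhs_phys, list(Jm), 0.0, t1, 4000)

lam1 = math.exp(r*t1)                 # lambda = Lambda^r
j_start = [2*G*x/r for x in Jm]       # initial condition, Eq. (subs)
j_end = rk4(rhs_scaled, j_start, 1.0, lam1, 4000)

err = max(abs(2*G*a/r - b) for a, b in zip(J_end, j_end))
print(err)                            # agreement within integration accuracy
```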
Equation (\ref{scalinga0b}) has a trivial fixed point
\begin{eqnarray}
\label{zero}
J_x^*=J_y^*=J_z^*=0,
\end{eqnarray}
corresponding to the impurity spin decoupled from the electron environment, and four non-trivial ones
\begin{eqnarray}
\label{odd}
|J_x^*|=|J_y^*|=|J_z^*|=1;\;\;\; \;\;\;J_x^*J_y^*J_z^*=1,
\end{eqnarray}
corresponding to finite isotropic antiferromagnetic Heisenberg exchange.
Apart from the finite fixed points given by Eqs. (\ref{zero}) and (\ref{odd}), Eq. (\ref{scalinga0b}) has infinite fixed points (more precisely, rays starting at the origin and going to the infinity, which serve as attractors for the flow lines). However, it will be more convenient to discuss these attractors later (in the Subsection \ref{tt}).
It is obvious that the trivial fixed point is stable. (We remind that we consider here only the case of positive $r$.)
To analyze the stability of the non-trivial fixed points, we write $J_n=J_n^*+\delta J_n$ and linearize Eq. (\ref{scalinga0b}) with respect to the deviations from the fixed point $\delta J_n$. Thus we obtain
\begin{eqnarray}
\label{scaling2}
\lambda\frac{d \delta J_n}{d\lambda}=\sum_mT_{nm}\delta J_m,
\end{eqnarray}
where the eigenvalues of the matrix $T$ for any fixed point are $-1$ and the doubly degenerate $2$. Hence all the non-trivial fixed points are semi-stable and are thus critical points.
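For the fixed point $(1,1,1)$ the spectrum of $T$ can be checked by hand: linearizing the RHS of Eq. (\ref{scalinga0b}) gives $T_{nn}=1$ and $T_{nm}=-J_k^*=-1$ for $n\neq m$. A short Python check (plain integer arithmetic, no external libraries) confirms the eigenvalues $-1$ and the doubly degenerate $2$:

```python
# linearization matrix of Eq. (scalinga0b) at the fixed point (1, 1, 1)
T = [[ 1, -1, -1],
     [-1,  1, -1],
     [-1, -1,  1]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

v1 = [1, 1, 1]     # expected eigenvalue -1
v2 = [1, -1, 0]    # expected eigenvalue  2
v3 = [1, 1, -2]    # expected eigenvalue  2 (degenerate with v2)

print(matvec(T, v1))   # [-1, -1, -1] = -1 * v1
print(matvec(T, v2))   # [ 2, -2,  0] =  2 * v2
print(matvec(T, v3))   # [ 2,  2, -4] =  2 * v3
```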
As a second step we introduce
$\widetilde{J}_n=J_n/\lambda$,
and Eq. (\ref{scalinga00}) takes the form
\begin{eqnarray}
\label{scalinga0}
\frac{d \widetilde{J}_x}{d\lambda}&=&-\widetilde{J}_y\widetilde{J}_z\nonumber\\
\frac{d \widetilde{J}_y}{d\lambda}&=&-\widetilde{J}_x\widetilde{J}_z\\
\frac{d \widetilde{J}_z}{d\lambda}&=&-\widetilde{J}_x\widetilde{J}_y.\nonumber
\end{eqnarray}
\section{Integration of scaling equations for the $XXZ$ model}
\label{XXZm}
\subsection{What we can learn from isotropic model}
We start our analysis from the simple case of the
isotropic model ($J_x=J_y=J_z=J$). Though it was analyzed before, we prefer to reproduce the analysis, because this way we understand the pattern which
will repeat itself throughout the paper.
Eq. (\ref{scalinga0}) in this case can be solved immediately
\begin{eqnarray}
\label{elem}
\widetilde{J}=\frac{1}{\lambda+\psi}\Longrightarrow J=\frac{\lambda}{\lambda+\psi}.
\end{eqnarray}
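It is straightforward to confirm numerically that Eq. (\ref{elem}) solves the isotropic reduction $\lambda\,dJ/d\lambda=J-J^2$ of Eq. (\ref{scalinga0b}); in the sketch below (pure Python, with the sample values of $\psi$ and $\lambda$ chosen arbitrarily) the residual of the equation is evaluated with a central finite difference:

```python
def J(lam, psi):
    # Eq. (elem): J(lambda) = lambda / (lambda + psi)
    return lam / (lam + psi)

def residual(lam, psi, h=1e-6):
    # residual of the isotropic equation: lambda dJ/dlambda - (J - J^2)
    dJ = (J(lam + h, psi) - J(lam - h, psi)) / (2*h)
    return lam*dJ - (J(lam, psi) - J(lam, psi)**2)

worst = max(abs(residual(lam, psi))
            for psi in (0.7, -0.3)
            for lam in (0.2, 0.5, 1.0))
print(worst)   # close to machine precision: Eq. (elem) solves the equation
```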
From the point of view of a mathematician, $\psi$ is just the constant of integration.
From the point of view of a physicist,
$\psi$ for a particular problem has a particular value, connected with the microscopic parameters by Eq. (\ref{subs}).
For $\psi>0$, when $\lambda$ decreases to zero,
$J(\lambda)$ converges to the trivial fixed point, which means that the spin is decoupled from the environment (when the energy goes to zero). The value $\psi=0$ corresponds to a critical point, that is, an energy-independent interaction of the spin with the environment. And finally,
if $-1<\psi<0$, then $J(\lambda)$ has a pole at some finite
value of $\lambda$, corresponding to a finite value of $D'$. (This pole is similar to the Landau pole \cite{landau}.)
Formally, this pole just means that perturbation theory (and the scaling equation (\ref{scalinga00}) is a clever, but still a perturbation theory)
breaks down. On the other hand, as has been known for a long time, the value of $D'$ mentioned above
provides an estimate of the Kondo temperature $T_K$
\begin{eqnarray}
\label{Kondo}
T_K\sim D'=D(-\psi)^{1/r}.
\end{eqnarray}
So the parameter $\psi$ has a clear physical meaning.
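The connection between $\psi$ and the Kondo scale can be illustrated numerically: for $-1<\psi<0$ the pole of $J(\lambda)$ from Eq. (\ref{elem}) is located by bisection and reproduces Eq. (\ref{Kondo}). (The values of $\psi$, $r$ and $D$ below are illustrative.)

```python
psi, r, D = -0.4, 0.5, 1.0

def invJ(lam):
    # 1/J from Eq. (elem); its zero is the Landau-like pole of J(lambda)
    return (lam + psi) / lam

lo, hi = 0.01, 1.0                 # invJ changes sign on this interval
for _ in range(80):
    mid = (lo + hi) / 2
    if invJ(lo) * invJ(mid) <= 0:
        hi = mid
    else:
        lo = mid
pole = (lo + hi) / 2

print(pole, -psi)                  # the pole sits at lambda = -psi
print(D * pole**(1/r))             # Kondo scale, Eq. (Kondo): D(-psi)^(1/r)
```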
Substituting Eq. (\ref{elem})
into Eq. (\ref{subs}) one obtains \cite{mitchell2}
\begin{eqnarray}
\label{Kondo2}
T_K\sim D\left(1-\frac{r}{2GJ^{(m)}}\right)^{1/r},
\end{eqnarray}
provided
\begin{eqnarray}
0<\frac{r}{2GJ^{(m)}}<1.
\end{eqnarray}
Note that if we take the limit $r\to 0$ in Eq. (\ref{Kondo2}), we obtain
\begin{eqnarray}
T_K\sim De^{-1/2GJ^{(m)}}.
\end{eqnarray}
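This limit can be checked numerically; in the sketch below (with illustrative values of $G$ and $J^{(m)}$) the ratio $T_K/D$ from Eq. (\ref{Kondo2}) approaches the flat-DOS exponential as $r\to 0$:

```python
import math

G, Jm = 1.0, 0.8                 # illustrative parameters
a = 1.0 / (2 * G * Jm)           # 1/(2 G J^(m))

def tk_ratio(r):
    # T_K / D from Eq. (Kondo2)
    return (1 - r*a)**(1/r)

limit = math.exp(-a)             # the r -> 0 result
for r in (0.1, 0.01, 0.001):
    print(r, tk_ratio(r), limit)   # the ratio converges to the limit
```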
\subsection{The $XXZ$ model}
\label{ing}
Now let us go to the $XXZ$ model ($J_x=J_y$). Eq. (\ref{scalinga0}) in this case takes the form
\begin{eqnarray}
\label{scalinga98}
\frac{d \widetilde{J}_x}{d\lambda}&=& -\widetilde{J}_z \widetilde{J}_x \nonumber \\
\frac{d \widetilde{J}_z}{d\lambda}&=&-\widetilde{J}_x^2.
\end{eqnarray}
We immediately obtain the first integral of Eq. (\ref{scalinga98})
\begin{eqnarray}
\label{scalingac2}
\widetilde{J}_x^2-\widetilde{J}_z^2=\pm A^2.
\end{eqnarray}
Substituting into Eq. (\ref{scalinga98}) and integrating
we get
\begin{eqnarray}
\label{d1b}
J_x&=&\pm A\lambda\cdot\mathrm{csc(h)}(A\lambda+\psi) \nonumber\\
J_z&=&A\lambda\cdot\mathrm{cot(h)}(A\lambda+\psi).
\end{eqnarray}
In Eq. (\ref{d1b}) $\mathrm{csc(h)}$ stands for either the trigonometric or the hyperbolic cosecant, and similarly for $\mathrm{cot(h)}$.
Note that presence of two integration constants ($\psi$ and $A$) in the solution (\ref{d1b}) reflects two symmetries of Eq. (\ref{scalinga0}): with respect to transformation $\lambda\to\lambda+\psi$ and with respect to transformation $\lambda\to A\lambda,\widetilde{J}\to \widetilde{J}/A$.
For trigonometric functions $\psi\in(-\pi/2,\pi/2]$, for hyperbolic functions
$\psi\in(-\infty,+\infty)$.
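Both branches of Eq. (\ref{d1b}) can be verified directly: written for $\widetilde{J}=J/\lambda$, they must satisfy Eq. (\ref{scalinga98}) and the first integral (\ref{scalingac2}). A small Python check (with arbitrarily chosen $A$ and $\psi$) does this with central finite differences:

```python
import math

A, psi = 0.7, 0.25               # arbitrary integration constants

def trig(lam):
    # trigonometric branch of Eq. (d1b) divided by lambda: (A csc, A cot)
    u = A*lam + psi
    return A/math.sin(u), A*math.cos(u)/math.sin(u)

def hyper(lam):
    # hyperbolic branch: (A csch, A coth)
    u = A*lam + psi
    return A/math.sinh(u), A/math.tanh(u)

def residuals(sol, lam, h=1e-6):
    # residuals of Eq. (scalinga98): Jx' + Jz*Jx and Jz' + Jx^2
    xp = (sol(lam+h)[0] - sol(lam-h)[0]) / (2*h)
    zp = (sol(lam+h)[1] - sol(lam-h)[1]) / (2*h)
    x, z = sol(lam)
    return xp + z*x, zp + x*x

res_trig = residuals(trig, 0.8)
res_hyp = residuals(hyper, 0.8)
xt, zt = trig(0.8)
xh, zh = hyper(0.8)
print(res_trig, res_hyp)              # all four residuals are ~ 0
print(xt*xt - zt*zt, xh*xh - zh*zh)   # first integrals: +A^2 and -A^2
```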
Together with the fixed points, separatrices form the skeleton of a flow diagram. In our case the non-trivial separatrices are
obtained by putting $\psi=0$ in Eq. (\ref{d1b}). Thus we get
four separatrices (ending at the critical points), described by the equations
\begin{eqnarray}
\label{s1a}
\frac{J_z}{|J_x|}&=&\mathrm{cos(h)}\left(\sqrt{|J_x^2-J_z^2|}\right)
\end{eqnarray}
(in the case of $\cos$ the solution of Eq. (\ref{s1a}) should also satisfy $\sqrt{J_x^2-J_z^2}<\pi$).
The asymptotics of the solution of Eq. (\ref{s1a}) are
\begin{eqnarray}
J_x&=&\pm\left(J_z-\frac{\pi^2}{2J_z}\right),\;\;\;J_z\ll -1 \nonumber\\
J_x &=& \pm J_ze^{-J_z},\;\;\; J_z\gg 1.
\end{eqnarray}
The trivial separatrices are $J_x=0$ and
$ J_x=\pm J_z$.
A flow diagram, as described by Eq. (\ref{d1b}), is shown in Fig. \ref{XXZ}. Because of the symmetry of Eq. (\ref{scalinga0b}) it is enough to plot only the upper part of the phase plane, $J_x\geq 0$. We observe the non-interacting phase, corresponding to the
trivial fixed point, and the phase of infinite isotropic antiferromagnetic Heisenberg exchange
(both phases correspond to stable fixed points). We also observe
the critical line of finite isotropic antiferromagnetic Heisenberg exchange, ending at the critical point (semi-stable fixed point). This Figure shows an example of asymptotic symmetry \cite{pokrovskii}: after the renormalization the system becomes isotropic (or trivial) even if it was anisotropic microscopically.
The same flow diagram as in Fig. \ref{XXZ} is shown in Fig. \ref{large}, but this time the plot includes larger values of $J_x,J_z$
(or smaller values of $r$). The main purpose of this Figure is to illustrate how our results reduce to those obtained
for the constant DOS \cite{hewson} when $r\to 0$.
For large $J_x,J_z$ the linear terms in Eq. (\ref{scalinga98}) can be neglected, and the flow diagram naturally looks like the one
from Ref. \onlinecite{hewson}, which consists
of hyperbolas.
However, when in the process of evolution at least one of $J_x,J_z$ becomes of order one, substantial deviations from the hyperbolas can be clearly seen.
Fig. \ref{large} also illustrates how the fixed line $J_x=0$ obtained for $r=0$ \cite{hewson} emerges when $r\to 0$.
The flow diagram for negative $r$, presented in Fig. \ref{negative}, is just the diagram from Fig. \ref{XXZ}, replotted taking into account the symmetry
of the problem mentioned in the beginning of Subsection \ref{sis}. Because the change of sign of $r$ should be accompanied by the inversion of the direction of flow (and space inversion), after this change the previously stable fixed points become unstable (and of no physical interest) and vice versa. Thus for negative $r$ we have two phases, corresponding respectively to infinite isotropic antiferromagnetic Heisenberg exchange and to infinite Ising exchange. We want to emphasize again that the physical fixed points for negative $r$ correspond to nonphysical fixed points for positive $r$.
Notice also that for negative $r$ the critical point is ferromagnetic (and the critical line is totally different from that for positive $r$). Thus Fig. \ref{negative} shows (depending upon the initial conditions) either the same asymptotic symmetry we had for positive $r$ or dynamical
generation of anisotropy.
\begin{figure}[h]
\vskip -.5cm
\hskip -.5cm
\includegraphics[width= 1.05\columnwidth]{Fig1.eps}
\vskip -1cm
\caption{(color online) Flow diagram as described by Eq. (\ref{d1b}). Trivial fixed point is shown by black circle, critical point by red circle, isotropic model by orange dot dashed line, Ising model by violet dot dot dashed line, critical line by red solid line. The non-interacting (infinite isotropic antiferromagnetic Heisenberg exchange) phase is shown by green dashed (blue dotted) lines. }
\label{XXZ}
\end{figure}
\begin{figure}[h]
\vskip -1cm
\hskip -.5cm
\includegraphics[width= 1.05\columnwidth]{Fig2.eps}
\vskip -1cm
\caption{ (color online) Same as Fig. \ref{XXZ}, but with a wider plot interval. }
\label{large}
\end{figure}
\begin{figure}[h]
\vskip -1cm
\hskip -.5cm
\includegraphics[width= 1.05\columnwidth]{Fig3.eps}
\vskip -1cm
\caption{(color online) Flow diagram for negative $r$.
The infinite Ising exchange phase is shown by magenta dot short dashed lines.}
\label{negative}
\end{figure}
As far as Kondo effect is concerned, we can repeat verbatim the two paragraphs following Eq. (\ref{elem}), only $\psi$ in the RHS of Eq. (\ref{Kondo}) should be divided by $A$. The values of $\psi$ and $A$ are found this time by substituting Eq. (\ref{d1b}) into Eq. (\ref{subs}).
\footnote{We recently learned that part of the results of Subsection \ref{ing} was independently obtained in Ref. \onlinecite{ingersent}.}
\section{Integration of scaling equations in the general case}
\label{xyzm}
\subsection{Tale of two integrals}
\label{tt}
Let us generalize Eq. (\ref{scalinga0}) to
\begin{eqnarray}
\label{scalinga0g}
\frac{d \widetilde{J}_x}{d\lambda}&=&Q\widetilde{J}_y\widetilde{J}_z\nonumber\\
\frac{d \widetilde{J}_y}{d\lambda}&=&R\widetilde{J}_x\widetilde{J}_z\\
\frac{d \widetilde{J}_z}{d\lambda}&=&S\widetilde{J}_x\widetilde{J}_y,\nonumber
\end{eqnarray}
where $Q,R,S$ are some constants. Equation (\ref{scalinga0g}) includes the $XYZ$ model ($Q=R=S=-1$), and also other, much more important cases, like the Euler top \cite{landau2}.
From Eq. (\ref{scalinga0g}) follows
\begin{eqnarray}
\label{scaling}
\frac{d}{d\lambda}\left(a\widetilde{J}_x^2+b\widetilde{J}_y^2+c\widetilde{J}_z^2 \right)=0,
\end{eqnarray}
where $a,b,c$ are arbitrary constants satisfying
\begin{eqnarray}
\label{pqr}
aQ+bR+cS=0.
\end{eqnarray}
Hence we immediately get two first integrals
\begin{eqnarray}
\label{pro2}
a_1\widetilde{J}_x^2+b_1\widetilde{J}_y^2+c_1\widetilde{J}_z^2&=&A_1\nonumber\\
a_2\widetilde{J}_x^2+b_2\widetilde{J}_y^2+c_2\widetilde{J}_z^2&=&A_2,
\end{eqnarray}
where $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$ should be linearly independent.
Equations (\ref{pro2}) seem to contain a lot of constants, but if we consider them (and Eq. (\ref{pqr})) as defining a straight line in the space with the coordinates $(x,y,z)=(\widetilde{J_x}^2,\widetilde{J_y}^2,\widetilde{J_z}^2)$, we are motivated to present this equation in the canonical form
\begin{eqnarray}
\label{pro5}
\frac{x-x_0}{Q}=\frac{y-y_0}{R}=\frac{z-z_0}{S}.
\end{eqnarray}
Thus in the space introduced above each flow line lies on a straight line with the direction $(Q,R,S)$.
Returning temporarily to the Kondo problem we solve ($Q=R=S=-1$), we immediately understand by inspection of Eq. (\ref{pro5}) that the attractors of Eq. (\ref{scalinga0b}) going to infinity
correspond to the isotropic Hamiltonian.
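The conservation laws (\ref{pro2}) can be monitored along a numerically integrated flow line. The sketch below takes the Kondo case $Q=R=S=-1$ of Eq. (\ref{scalinga0g}) with an arbitrary starting point and checks that $\widetilde{J}_x^2-\widetilde{J}_y^2$ and $\widetilde{J}_x^2-\widetilde{J}_z^2$ (the choices $(a,b,c)=(1,-1,0)$ and $(1,0,-1)$ in Eq. (\ref{pqr})) stay constant:

```python
Q, R, S = -1.0, -1.0, -1.0       # the Kondo case of Eq. (scalinga0g)

def rhs(j):
    x, y, z = j
    return [Q*y*z, R*x*z, S*x*y]

def rk4_step(j, h):
    # one fourth-order Runge-Kutta step for the autonomous system
    k1 = rhs(j)
    k2 = rhs([a + h/2*b for a, b in zip(j, k1)])
    k3 = rhs([a + h/2*b for a, b in zip(j, k2)])
    k4 = rhs([a + h*b for a, b in zip(j, k3)])
    return [a + h/6*(p + 2*q + 2*s + w)
            for a, p, q, s, w in zip(j, k1, k2, k3, k4)]

j = [1.3, 0.4, 0.9]              # arbitrary starting point
I1, I2 = j[0]**2 - j[1]**2, j[0]**2 - j[2]**2
for _ in range(500):
    j = rk4_step(j, 1e-3)
d1 = j[0]**2 - j[1]**2 - I1      # drift of the first integral
d2 = j[0]**2 - j[2]**2 - I2      # drift of the second integral
print(d1, d2)                    # both ~ 0
```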
\subsection{General solution}
\label{gs}
The two integrals being found, we are left with a single equation for a single variable ${\cal P}$,
which it is natural to choose according to the equation
\begin{eqnarray}
\label{pro55}
\frac{x-x_0}{Q}=\frac{y-y_0}{R}=\frac{z-z_0}{S}=\frac{\cal P}{QRS}.
\end{eqnarray}
Also, Eq. (\ref{pro2}) contains just two constants. To emphasize this fact we impose on $x_0,y_0,z_0$ the condition
\begin{eqnarray}
\label{constraint}
\frac{x_0}{Q}+\frac{y_0}{R}+\frac{z_0}{S}=0.
\end{eqnarray}
Substituting Eq. (\ref{pro55}) into Eq. (\ref{scalinga0g}) and taking into account Eq. (\ref{constraint}) we obtain \cite{shiba}
\begin{eqnarray}
\label{we}
\left[\frac{d{\cal P}(\lambda)}{d\lambda}\right]^2=4\left[{\cal P}(\lambda)\right]^3-g_2{\cal P}(\lambda)-g_3,
\end{eqnarray}
where
\begin{eqnarray}
\label{constraint2}
g_2&=&-4(RSx_0+QSy_0+QRz_0)\nonumber\\
g_3&=&-4Q^2R^2S^2x_0y_0z_0.
\end{eqnarray}
Hence ${\cal P}(\lambda)$ is the Weierstrass elliptic function ${\cal P}(\lambda;\omega_1,\omega_2)$ \cite{abram}, and
$\omega_1$, $\omega_2$ are connected with $g_2$, $g_3$ by the equations
\begin{eqnarray}
\label{constraint3}
g_2&=&60\sum_{(m,n)\neq (0,0)}(m\omega_1+n\omega_2)^{-4} \nonumber \\
g_3&=&140\sum_{(m,n)\neq (0,0)}(m\omega_1+n\omega_2)^{-6}.
\end{eqnarray}
Thus we obtain the solution of Eq. (\ref{scalinga0b}) as
\begin{eqnarray}
\label{ammw}
J_x^2 &=&\lambda^2\left[{\cal P}(\lambda+\psi;\omega_1,\omega_2)/RS+x_0\right]\nonumber\\
J_y^2 &=&\lambda^2\left[{\cal P}(\lambda+\psi;\omega_1,\omega_2)/QS+y_0\right]\\
J_z^2 &=&\lambda^2\left[{\cal P}(\lambda+\psi;\omega_1,\omega_2)/QR+z_0\right].\nonumber
\end{eqnarray}
The solution represents a two-parameter ($\omega_1,\omega_2$) family of the flow lines, with $x_0,y_0,z_0$ being connected with these two parameters by Eqs. (\ref{constraint}), (\ref{constraint2}), (\ref{constraint3}).
An alternative (and more convenient) representation of the solution of Eq. (\ref{scalinga0b}) through Jacobi elliptic functions is presented in the Appendix. The result for the problem we solve is
\begin{eqnarray}
\label{amm}
J_x &=&\pm A\lambda\cdot\mathrm{ns}(A\lambda+\psi,k)\nonumber\\
J_y &=&\pm A\lambda\cdot\mathrm{cs}(A\lambda+\psi,k)\\
J_z &=&\pm A\lambda\cdot\mathrm{ds}(A\lambda+\psi,k),\nonumber
\end{eqnarray}
where the factors $\pm 1$ should satisfy the condition that their product is equal to $+1$; additional
solutions
can be obtained from Eq. (\ref{amm}) by interchanging $J_x,J_y,J_z$.
Equation (\ref{amm}) is the main result of the paper. It represents a two-parameter family of the flow lines. The parameter $\psi\in(-K(k),K(k)]$, where $K$ is the complete elliptic integral of the first kind,
and the parameter $k\in [0,1]$ has, as we show in the Appendix, a simple geometric meaning.
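Equation (\ref{amm}) can be verified without any special-function library: the Jacobi functions can be generated by integrating their defining first-order system (see the Appendix), after which the triple $(\mathrm{ns},\mathrm{cs},\mathrm{ds})$ must satisfy Eq. (\ref{scalinga0}). The Python sketch below (with an arbitrarily chosen modulus $k^2=0.36$) does exactly that:

```python
K2 = 0.36                         # k^2, arbitrary modulus

def jacobi(u, n=4000):
    # sn, cn, dn at u from sn'=cn*dn, cn'=-sn*dn, dn'=-k^2*sn*cn (RK4)
    def f(s, c, d):
        return c*d, -s*d, -K2*s*c
    s, c, d = 0.0, 1.0, 1.0       # values at u = 0
    h = u / n
    for _ in range(n):
        k1 = f(s, c, d)
        k2 = f(s + h/2*k1[0], c + h/2*k1[1], d + h/2*k1[2])
        k3 = f(s + h/2*k2[0], c + h/2*k2[1], d + h/2*k2[2])
        k4 = f(s + h*k3[0], c + h*k3[1], d + h*k3[2])
        s += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        c += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        d += h/6*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return s, c, d

def tilde(u):
    # the solution (ns, cs, ds) of Eq. (scalinga0), cf. Eq. (cb)
    s, c, d = jacobi(u)
    return 1/s, c/s, d/s

u, h = 1.1, 1e-4
x0, y0, z0 = tilde(u)
tp, tm = tilde(u + h), tilde(u - h)
rx = (tp[0] - tm[0])/(2*h) + y0*z0
ry = (tp[1] - tm[1])/(2*h) + x0*z0
rz = (tp[2] - tm[2])/(2*h) + x0*y0
print(rx, ry, rz)                     # residuals of Eq. (scalinga0), all ~ 0
print(x0*x0 - y0*y0, x0*x0 - z0*z0)   # first integrals: 1 and k^2
```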
The results of the previous Section are a particular case of those obtained in this Section.
In fact, because
\begin{eqnarray}
\left\{\begin{array}{l}\mathrm{ns}(\phi,0)=\csc(\phi)\\
\mathrm{cs}(\phi,0)=\cot(\phi)\\
\mathrm{ds}(\phi,0)=\csc(\phi)\end{array}\right.\;\;\;
\left\{\begin{array}{l}\mathrm{ns}(\phi,1)=\coth(\phi)\\
\mathrm{cs}(\phi,1)=\mathrm{csch}(\phi)\\
\mathrm{ds}(\phi,1)=\mathrm{csch}(\phi)\end{array}\right.,
\end{eqnarray}
for $k=0$
Eq. (\ref{amm}) contains Eq. (\ref{d1b}) with trigonometric functions and $J_z$ and $J_y$ interchanged,
and for $k=1$ it contains the same equation with hyperbolic functions and $J_z$ and $J_x$ interchanged.
The values of $J_x,J_y,J_z$ obtained from Eq. (\ref{amm}) for $\lambda=0$ and $\psi \neq 0$ correspond to the trivial fixed point, which describes the non-interacting phase. Further on we consider only the case $r>0$.
For $\psi = 0$ the value $\lambda=0$ corresponds to one of the critical points, and $\lambda\in(0,2K(k))$ corresponds to
the critical surface.
In Fig. \ref{surface} we show the critical surface of one critical point.
Due to the symmetry of the equations mentioned in Sec. II,
the three other critical surfaces can be obtained from the one presented in Fig. \ref{surface} by rotations by the angle $\pi$ about the Cartesian axes.
Fig. \ref{all}, with all four critical surfaces, gives
a complete picture of the phase diagram for the $XYZ$ model, with the critical surfaces separating the infinite isotropic antiferromagnetic Heisenberg exchange phase from the phase corresponding to the impurity spin decoupled from the electron environment.
\begin{figure}[h]
\includegraphics[width= 1\columnwidth]{Fig4.eps}
\caption{ (color online) Critical surface of the critical point $(J_x,J_y,J_z)=(1,1,1)$ is shown in red.
Solid (dotted) line is a critical line (asymptotes) on $J_x=J_z$ plane (painted in grey); these lines are the same as in Fig. \ref{XXZ}.}
\label{surface}
\end{figure}
\begin{figure}[h]
\includegraphics[width= .7\columnwidth]{Fig5.eps}
\caption{ (color online)
All four critical surfaces. Critical
points
are denoted by colored circles.}
\label{all}
\end{figure}
As far as Kondo effect is concerned, we can repeat verbatim the last paragraph of Section \ref{XXZm},
only this time Eq. (\ref{amm}) should be used instead of Eq. (\ref{d1b}).
\section{Conclusions}
\label{conclusions}
We considered a single magnetic impurity described by the spin--anisotropic s-d(f) exchange (Kondo) model.
We formulated the explicitly rotation invariant scaling equation for the power law electron DOS
and solved this equation
in terms of elliptic functions.
We found the infinite isotropic antiferromagnetic Heisenberg exchange phase, the phase corresponding to the impurity spin decoupled from the electron environment (for the pseudogap DOS), and the infinite Ising exchange phase (for the DOS diverging at the Fermi level). We studied in detail the critical surface
corresponding to the finite isotropic antiferromagnetic Heisenberg exchange (for the pseudogap DOS).
\begin{acknowledgments}
This work has been supported in part by RIKEN iTHES Project and Molecular Systems.
One of the authors (E.K.) thanks RIKEN for the hospitality extended to him during
his stay.
The other author (K.N.) is supported by Grant-in-Aid for JSPS Fellows (Grant No. 16J07637).
The authors are grateful to N. Andrei, Y. Avishay, K. Ingersent, D. Khveshchenko, T. Kimura, Chi-Cheng Lee, A. Mitchell, A. Nevidomskyy, Y. Ohyama, R. Sakano, Y. Teratani, and M. Vojta for valuable discussions,
and particularly to V. Yu. Irkhin for bringing to their attention Refs. \onlinecite{cox,irkhin} and to A. Oguri for bringing to their attention Refs. \onlinecite{shiba,yosida}.
\end{acknowledgments}
\begin{appendix}
\section{If you have seen one, you have seen them all}
We explained in Section \ref{poor} that if we know the quadratic terms in poor man's scaling equation for the Hamiltonian (\ref{hamiltonianiso}), we know these terms in poor man's scaling equation for the Hamiltonian (\ref{hamiltonian}). We claim that this remains true
for the higher order terms.
For example, from scaling equation for the isotropic Hamiltonian (\ref{hamiltonianiso}) \cite{hewson}
\begin{eqnarray}
\label{cube}
\frac{d J}{d\ln\Lambda}=-2\rho J^2+2\rho^2J^3,
\end{eqnarray}
follows the scaling equation for the reduced anisotropic Hamiltonian
\begin{eqnarray}
\label{hamiltonianr}
H_{\text{ex}}=\sum_iJ_{i}S^is^i(0)
\end{eqnarray}
in the form
\begin{eqnarray}
\label{cu}
\frac{d J_x}{d\ln\Lambda}&=&-2\rho J_yJ_z+\rho^2(J_y^2+J_z^2)J_x \nonumber\\
\frac{d J_y}{d\ln\Lambda}&=&-2\rho J_xJ_z+\rho^2(J_x^2+J_z^2)J_y \nonumber\\
\frac{d J_z}{d\ln\Lambda}&=&-2\rho J_xJ_y+\rho^2(J_x^2+J_y^2)J_z.
\end{eqnarray}
This result can be obtained by considering the expression
\begin{eqnarray}
\sum_{ikl}J_iS^i\sigma^i J_kS^k\sigma^kJ_lS^l\sigma^l
\end{eqnarray}
and applying Eq. (\ref{tensor1}) twice.
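As a consistency check, setting $J_x=J_y=J_z=J$ in Eq. (\ref{cu}) must reproduce the isotropic equation (\ref{cube}); the following snippet (with an arbitrary value of $\rho$) confirms this:

```python
rho = 0.3                        # arbitrary density of states

def rhs_aniso(Jx, Jy, Jz):
    # RHS of the third-order anisotropic scaling equation, Eq. (cu)
    return (-2*rho*Jy*Jz + rho**2*(Jy**2 + Jz**2)*Jx,
            -2*rho*Jx*Jz + rho**2*(Jx**2 + Jz**2)*Jy,
            -2*rho*Jx*Jy + rho**2*(Jx**2 + Jy**2)*Jz)

def rhs_iso(J):
    # RHS of the isotropic equation, Eq. (cube)
    return -2*rho*J**2 + 2*rho**2*J**3

worst = max(abs(comp - rhs_iso(J))
            for J in (0.2, 0.5, 1.0)
            for comp in rhs_aniso(J, J, J))
print(worst)                     # zero up to rounding
```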
\section{Solving Eq. (\ref{scalinga0g}) using Jacobi elliptic functions}
For the convenience of the reader we present here basic facts concerning the Jacobi elliptic functions.
There are three major functions:
$\mathrm{sn}(\lambda,k)$ solves the differential equation
\begin{eqnarray}
\left(\frac{du}{d\lambda}\right)^2=(1-u^2)(1-k^2u^2);
\end{eqnarray}
$\mathrm{cn}(\lambda,k)$ solves the differential equation
\begin{eqnarray}
\left(\frac{du}{d\lambda}\right)^2&=&(1-u^2)(1-k^2+k^2u^2);
\end{eqnarray}
$\mathrm{dn}(\lambda,k)$ solves the differential equation
\begin{eqnarray}
\left(\frac{du}{d\lambda}\right)^2=(1-u^2)(u^2-1+k^2).
\end{eqnarray}
Also, there are nine minor functions:
\begin{eqnarray}
\mathrm{pq}(\lambda,k)=\frac{\mathrm{pn}(\lambda,k)}{\mathrm{qn}(\lambda,k)},
\end{eqnarray}
where $\mathrm{p}$ and $\mathrm{q}$ are any of the letters $\mathrm{n,s,c,d}$ ($\mathrm{nn}(\lambda)\equiv 1$),
named by the first letter of the numerator followed by the first letter of the denominator.
The rules of differentiation of the elliptic functions are:
\begin{eqnarray}
\label{jac}
&&\left\{\begin{array}{l} \frac{d}{d\lambda}\mathrm{sn}=\mathrm{cn}\cdot\mathrm{dn} \\
\frac{d}{d\lambda}\mathrm{cn}=-\mathrm{sn}\cdot\mathrm{dn} \\
\frac{d}{d\lambda}\mathrm{dn}=-k^2\cdot\mathrm{sn}\cdot\mathrm{cn}
\end{array}\right.
\left\{\begin{array}{l} \frac{d}{d\lambda}\mathrm{nc}=\mathrm{sc}\cdot\mathrm{dc}\\
\frac{d}{d\lambda}\mathrm{sc}=\mathrm{nc}\cdot\mathrm{dc}\\
\frac{d}{d\lambda}\mathrm{dc}=(1-k^2)\cdot\mathrm{nc}\cdot\mathrm{sc}
\end{array}\right.\nonumber\\
&&\left\{\begin{array}{l}
\frac{d}{d\lambda}\mathrm{ns}=-\mathrm{cs}\cdot\mathrm{ds} \\
\frac{d}{d\lambda}\mathrm{cs}=-\mathrm{ns}\cdot\mathrm{ds} \\
\frac{d}{d\lambda}\mathrm{ds}=-\mathrm{ns}\cdot\mathrm{cs}
\end{array}\right.\hskip .7cm
\left\{\begin{array}{l}
\frac{d}{d\lambda}\mathrm{nd}=k^2\cdot\mathrm{sd}\cdot\mathrm{cd} \\
\frac{d}{d\lambda}\mathrm{sd}=\mathrm{cd}\cdot\mathrm{nd} \\
\frac{d}{d\lambda}\mathrm{cd} =-(1-k^2)\cdot\mathrm{sd}\cdot\mathrm{nd}
\end{array}\right..\nonumber\\
\end{eqnarray}
(the argument of all functions is $\lambda$, and the modulus is $k$).
These simple rules allow one to integrate Eq. (\ref{scalinga0g}) just by inspection. In fact,
there are four options for the signs of $Q,R,S$ in Eq. (\ref{scalinga0g}): three pluses, two pluses and one minus, one plus and two minuses, and three minuses. Equation (\ref{jac}) allows one
to solve Eq. (\ref{scalinga0g}) for each option; one should just read the two previous paragraphs backward.
Thus Eq. (\ref{scalinga0}) is solved by the set of functions (in the domain $\widetilde{J}_x^2>\widetilde{J}_z^2>\widetilde{J}_y^2$)
\begin{eqnarray}
\label{cb}
\widetilde{J}_x&=&\mathrm{ns}(\lambda,k)\nonumber\\
\widetilde{J}_y&=&\mathrm{cs}(\lambda,k) \\
\widetilde{J}_z&=&\mathrm{ds}(\lambda,k).\nonumber
\end{eqnarray}
Action on this solution by the group of transformations $\lambda\to\lambda+\psi$ and $\lambda\to A\lambda,\widetilde{J}\to \widetilde{J}/A$ gives the 3-parameter family of solutions \cite{shiba,yosida}
\begin{eqnarray}
\label{c6}
\widetilde{J}_x&=&A\cdot\mathrm{ns}(A\lambda+\psi,k)\nonumber\\
\widetilde{J}_y&=&A\cdot\mathrm{cs}(A\lambda+\psi,k)\\
\widetilde{J}_z&=&A\cdot\mathrm{ds}(A\lambda+\psi,k), \nonumber
\end{eqnarray}
wherefrom follows Eq. (\ref{amm}).
Permutation of the indices $x,y,z$ gives solutions in other domains.
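One can also check numerically that the family (\ref{c6}) obeys $\widetilde{J}_x'=-\widetilde{J}_y\widetilde{J}_z$ and its cyclic analogues, as dictated by the $\mathrm{ns},\mathrm{cs},\mathrm{ds}$ bracket of Eq. (\ref{jac}). A sketch with illustrative values of $A$, $\psi$ and $k$ (assuming SciPy):

```python
import numpy as np
from scipy.special import ellipj

k = 0.7                       # illustrative modulus
A, psi = 1.3, 0.2             # illustrative group parameters
h = 1e-6

def J(lam):
    sn, cn, dn, _ = ellipj(A * lam + psi, k**2)
    # ns = 1/sn, cs = cn/sn, ds = dn/sn
    return np.array([A / sn, A * cn / sn, A * dn / sn])

lam = 0.9
Jx, Jy, Jz = J(lam)
dJ = (J(lam + h) - J(lam - h)) / (2 * h)

# ns' = -cs·ds (and cyclic) translates into Jx' = -Jy·Jz, etc.
assert np.allclose(dJ, [-Jy * Jz, -Jx * Jz, -Jx * Jy], atol=1e-5)
```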
\section{Let a hundred flowers blossom}
The two representations of the solution we obtained (through Weierstrass and through Jacobi elliptic functions), in spite of looking different, are equivalent.
The Weierstrass elliptic function can be expressed through the Jacobi one using the equation
\begin{eqnarray}
{\cal P}(\lambda)=e_3+\frac{e_1-e_3}{\mathrm{sn}^2w},
\end{eqnarray}
where $e_{1,2,3}$ are three roots of
the RHS of Eq. (\ref{we}), considered as a polynomial,
and where the modulus $k$ of the Jacobi function equals
\begin{eqnarray}
k\equiv \sqrt {\frac{e_2-e_3}{e_1-e_3}}
\end{eqnarray}
and its argument $w$ equals
\begin{eqnarray}
w\equiv \lambda\sqrt {e_1-e_3}.
\end{eqnarray}
Starting from Eq. (\ref{scalinga0g}) we get
\begin{eqnarray}
e_1=-RSy_0,\;\;e_2=-QSz_0,\;\;e_3=-QRx_0.
\end{eqnarray}
Thus for $Q=R=S=-1$ we recover Eq. (\ref{c6}).
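The Weierstrass--Jacobi relation above is easy to test numerically for the standard normalization $({\cal P}')^2=4({\cal P}-e_1)({\cal P}-e_2)({\cal P}-e_3)$. In the following sketch (assuming SciPy) the roots are illustrative placeholders, not the ones determined by Eq. (\ref{we}):

```python
import numpy as np
from scipy.special import ellipj

# Illustrative real roots e1 > e2 > e3 (placeholders, not those of Eq. (we)).
e1, e2, e3 = 2.0, 0.5, -1.0
k2 = (e2 - e3) / (e1 - e3)        # k^2 = (e2 - e3)/(e1 - e3)
s = np.sqrt(e1 - e3)

def P(z):
    # Weierstrass-type function built from sn via P = e3 + (e1-e3)/sn^2(w),
    # with argument w = z*sqrt(e1-e3)
    sn = ellipj(s * z, k2)[0]
    return e3 + (e1 - e3) / sn**2

z, h = 0.4, 1e-6
dP = (P(z + h) - P(z - h)) / (2 * h)      # finite-difference P'
# the standard Weierstrass ODE: (P')^2 = 4 (P-e1)(P-e2)(P-e3)
assert abs(dP**2 - 4 * (P(z) - e1) * (P(z) - e2) * (P(z) - e3)) < 1e-4
```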
\section{Special elliptic cones}
\label{elli}
To understand the geometric meaning of the solution (\ref{amm}) let us start from elementary geometry.
The Euclidean space $(x,y,z)$ can be foliated in a unique way into the elliptic cones of the special ($\alpha+\beta+\gamma=0$) type
\begin{eqnarray}
\label{P}
\alpha x^2+\beta y^2+\gamma z^2=0.
\end{eqnarray}
(For $\alpha=0$, $\beta=0$ or $\gamma=0$ the special cone is a pair of planes.)
The foliation includes three families of cones (the axis of the cones of each family is one of the Cartesian axes).
These families will be referred to as $x$-cones, $y$-cones and $z$-cones.
An apex angle of a given cone is the angle between the cone's axis and the section of the cone by a plane
which contains the cone axis.
For example, for a $z$-cone, $\theta_{zx}$ is
the angle between the OZ axis and the section of the cone by the plane $x=0$. It is obvious that
\begin{eqnarray}
\label{P2}
\cos\theta_{zx}=-\frac{\beta}{\gamma}.
\end{eqnarray}
Now let us go from geometry to calculus. Recalling the identity satisfied by the elliptic functions,
\begin{eqnarray}
1-k^2+k^2\mathrm{cn}^2(\lambda,k)-\mathrm{dn}^2(\lambda,k)=0,
\end{eqnarray}
one realizes that the solution (\ref{amm}) satisfies
\begin{eqnarray}
(1-k^2)J_x^2+k^2J_y^2-J_z^2=0.
\end{eqnarray}
Hence the special cones in the phase space $J_x,J_y,J_z$ remain invariant under the evolution (this is why they were introduced above), and
the parameter $k^2$ in the solution (\ref{amm}) is the cosine of the $\theta_{zx}$ apex angle of the special cone the solution belongs to.
Note that the stable fixed point $(J_x,J_y,J_z)=(0,0,0)$ of Eq. (\ref{scalinga00}) is
the apex of all special cones \cite{brodsky}.
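The invariance of the special cones can also be checked numerically: along the solution family (\ref{c6}) the quadric $(1-k^2)J_x^2+k^2J_y^2-J_z^2$ vanishes for every $\lambda$. A sketch with illustrative parameters (assuming SciPy):

```python
import numpy as np
from scipy.special import ellipj

k = 0.55                      # illustrative modulus
A, psi = 0.8, 0.4             # illustrative parameters of the solution family

def J(lam):
    sn, cn, dn, _ = ellipj(A * lam + psi, k**2)
    return A / sn, A * cn / sn, A * dn / sn   # Jx, Jy, Jz

# the special cone is invariant: the quadric vanishes at every lambda
for lam in (0.1, 0.7, 1.5):
    Jx, Jy, Jz = J(lam)
    assert abs((1 - k**2) * Jx**2 + k**2 * Jy**2 - Jz**2) < 1e-9
```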
\section{Mathematical conclusions}
In this paper we have integrated specific systems of $m$ coupled scaling equations in terms of known functions. Two specific features of the systems allowed us to do so.
First, it was possible to reduce each system to the form
\begin{eqnarray}
\label{mc}
\frac{d \widetilde{J}_n}{d\lambda}&=&R_n\frac{\Phi(\widetilde{J})}{\widetilde{J}_n},
\end{eqnarray}
where $\Phi$ is some function of the coupling constants, and $R_n$ are constants. This allowed us to obtain $m-1$ first integrals of the system. Geometrically this means that in the space with the coordinates $(x_1,\dots,x_m)=(\widetilde{J}_1^2,\dots,\widetilde{J}_m^2)$ a flow line of Eq. (\ref{mc}) lies on a ray in the direction $(R_1,\dots,R_m)$.
The integrals being found, we are left with a single differential equation for a single variable (whatever function of the coupling constants is chosen as such a variable). This equation is of first order, solved with respect to the derivative of the dependent variable, and its RHS does not contain the independent variable. It means that a system which can be reduced to Eq. (\ref{mc}) (for any $\Phi$, any $m$ and any set of constants
$R_n$) can be integrated in quadratures.
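This geometric statement is easy to illustrate numerically: for a hypothetical choice of $\Phi$ and of the constants $R_n$ (neither taken from the text), the displacement of $x=(\widetilde{J}_1^2,\dots,\widetilde{J}_m^2)$ along a flow line of Eq. (\ref{mc}) is parallel to $(R_1,\dots,R_m)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Phi and constants R_n, chosen only to illustrate the claim.
R = np.array([1.0, -0.5, 0.3])
Phi = lambda J: 1.0 + 0.1 * J[0] * J[1] * J[2]

# dJ_n/dlambda = R_n * Phi(J) / J_n
rhs = lambda lam, J: R * Phi(J) / J
J0 = np.array([1.0, 1.5, 2.0])
sol = solve_ivp(rhs, (0.0, 0.2), J0, rtol=1e-10, atol=1e-12)

# In the coordinates x_n = J_n^2, the displacement is parallel to (R_1,...,R_m):
d = sol.y[:, -1] ** 2 - J0**2
ratios = d / R                 # the ratios d_n / R_n must all coincide
assert np.allclose(ratios, ratios[0], rtol=1e-6)
```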
Second, the function $ \Phi$ being what it was, we were able to write down the solutions in terms of known transcendental functions
(circular trigonometric and hyperbolic for $m=2$ and elliptic for $m=3$).
Jacobi elliptic functions appear when we choose a coupling constant (or an inverse coupling constant, or the ratio of two coupling constants, as we understand by looking at Eq. (\ref{jac})) as the above-mentioned variable. Weierstrass elliptic functions appear when the square of a coupling constant is chosen.
\end{appendix}
\section*{Introduction}
A \textbf{symplectic foliation} on a manifold $M$ is a (regular) foliation $\mathcal{F}$, endowed with a 2-form
$\omega$ on $T\mathcal{F}$ whose restriction to each leaf $S$ of $\mathcal{F}$ is a symplectic form
\[\omega_{S}\in\Omega^2(S).\]
Equivalently, a symplectic foliation is a Poisson structure of constant rank.
In this paper we prove a normal form result for symplectic foliations around a leaf. The result uses the
\textbf{cohomological variation} of $\omega$ at the leaf $S$, which is a linear map (see section \ref{Section} for the
definition)
\begin{equation}\label{EQ_coh_var}
[\delta_S\omega]_x:\nu_x\rmap H^2(\widetilde{S}_{hol}),\ \ x\in S,
\end{equation}
where $\nu$ denotes the normal bundle of $T\mathcal{F}$, and $\widetilde{S}_{hol}$ is the holonomy cover of $S$.
The cohomological variation arises in fact from a linear map:
\begin{equation}\label{EQ_vari}
\delta_S\omega_x:\nu_x\rmap \Omega^2_{\textrm{closed}}(\widetilde{S}_{hol}).
\end{equation}
The local model for the foliation around $S$, which appears in the classical results of Reeb and Thurston, is the flat
bundle $(\widetilde{S}_{hol}\times\nu_x)/\pi_1(S,x)$, where $\pi_1(S,x)$ acts on the second factor via the linear
holonomy
\begin{equation}\label{EQ_lin_hol}
dh:\pi_1(S,x)\rmap Gl(\nu_x).
\end{equation}
For a symplectic foliation the flat bundle can be endowed with leafwise closed 2-forms, which are symplectic in a
neighborhood of $S$; namely, the leaf through $v\in\nu_x$ carries the closed 2-form $j^1_S(\omega)_v$ whose
pull-back to $\widetilde{S}_{hol}\times\{v\}$ is
\begin{equation*}
p^*(j^1_{S}(\omega)_v)=p^*(\omega_S)+\delta_S\omega_x(v).
\end{equation*}
Our main result is the following:
\begin{theorem}\label{Theorem} Let $S$ be an embedded leaf of the symplectic foliation $(M,\mathcal{F},\omega)$. If
the holonomy group of $S$ is finite and the cohomological variation (\ref{EQ_coh_var}) at $S$ is a surjective map, then
some open around $S$ is isomorphic as a symplectic foliation to an open around $S$ in the flat bundle
$(\widetilde{S}_{hol}\times\nu_x)/\pi_1(S,x)$ endowed with the family of closed 2-forms $j^1_S(\omega)$ by a
diffeomorphism which fixes $S$.
\end{theorem}
This result is not a first order normal form theorem, since the holonomy group and the holonomy cover depend on the
germ of the foliation around the leaf. The first order jet of the foliation at $S$ sees only the linear holonomy group
$H_{lin}$ (i.e.\ the image of $dh$) and the corresponding linear holonomy cover denoted $\widetilde{S}_{lin}$. Now,
the map (\ref{EQ_vari}) is in fact the pull-back of a map with values in
$\Omega^2_{\textrm{closed}}(\widetilde{S}_{lin})$. Using this remark, and an extension to noncompact leaves of a
result of Thurston (Lemma \ref{Lemma2}), we obtain the following consequence of Theorem \ref{Theorem}.
\begin{corollary}\label{Corollary1}
Under the assumptions that $S$ is embedded, $\pi_1(S,x)$ is finitely generated, $H_{lin}$ is finite,
$H^1(\widetilde{S}_{lin})=0$ and the cohomological variation
\[[\delta_S\omega]_x:\nu_x\rmap H^2(\widetilde{S}_{lin})\]
is surjective, the conclusion of Theorem \ref{Theorem} holds.
\end{corollary}
Our result is clearly related to the normal form theorem for Poisson manifolds around symplectic leaves from
\cite{CrMa}. Both results have the same conclusion, yet the conditions of Theorem \ref{Theorem} are substantially
weaker. More precisely, for regular Poisson manifolds, the hypotheses of the main result in \emph{loc.cit.}\ are (see
Corollary 4.1.22 and Lemma 4.1.23 in \cite{Teza}):
\begin{itemize}
\item the leaf $S$ is compact,
\item $\pi_1(S,x)$ is finite,
\item the cohomological variation is an isomorphism, when viewed as a map
\[[\delta_S\omega]_x:\nu_x\rmap H^2(\widetilde{S}_{uni}),\]
where $\widetilde{S}_{uni}$ is the universal cover of $S$.
\end{itemize}
There is yet another essential difference between Theorem \ref{Theorem} and the result from \cite{CrMa}, namely, even in the
setting of Corollary \ref{Corollary1}, the result presented here is a first order result only in the world of symplectic
foliations, and not in that of Poisson structures. The information that a Poisson bivector has constant rank is not
detectable from its first jet.
A weaker version of Theorem \ref{Theorem} is part of the PhD thesis \cite{Teza} of the
second author.
\medskip
\noindent \textbf{Acknowledgments.} This research was financially supported by the ERC Starting Grant no. 279729.
\section{The local model and the cohomological variation}\label{Section}
In this section we describe the local model of a symplectic foliation around a leaf, and define the cohomological
variation of the symplectic structure on the leaves. In the case of general Poisson manifolds, the local model was first
constructed by Vorobjev \cite{Vorobjev}. The approach presented here is more direct; for the relation between these
two constructions see \cite{Teza}.
Let $(M,\mathcal{F})$ be a foliated manifold, and denote its normal bundle by
\[\nu:=TM/T\mathcal{F}.\]
Then $\nu$ carries a flat $T\mathcal{F}$ connection, called the \textbf{Bott connection}, given by
\[\nabla:\Gamma(T\mathcal{F})\times\Gamma(\nu)\rmap \Gamma(\nu),\ \ \nabla_X(\overline{Y}):=\overline{[X,Y]},\]
where, for a vector field $Z$, we denote by $\overline{Z}$ its class in $\Gamma(\nu)$. For a path $\gamma$ inside a
leaf $S$, parallel transport with respect to $\nabla$ gives the \textbf{linear holonomy} transformations:
\[dh(\gamma):\nu_{\gamma(0)}\diffto \nu_{\gamma(1)}.\]
This map depends only on $\gamma$ modulo homotopies inside $S$ with fixed endpoints. Applying $dh$ to closed loops
at $x$, we obtain the \textbf{linear holonomy group}
\[H_{lin,x}:=dh(\pi_1(S,x))\subset Gl(\nu_x).\]
The \textbf{linear holonomy cover} of a leaf $S$ at $x$, denoted by $\widetilde{S}_{lin,x}$ is the covering space
corresponding to the kernel of $dh$; thus it is a principal $H_{lin,x}$ bundle over $S$. Also, $\widetilde{S}_{lin,x}$
can be defined as the space of classes of paths in $S$ starting at $x$, where we identify two such paths if they have the
same endpoint and they induce the same holonomy transport.
The Bott connection induces a foliation $\mathcal{F}_{\nu}$ on $\nu$ whose leaves are the orbits of $dh$; i.e.\ the leaf
of $\mathcal{F}_{\nu}$ through $v\in \nu_x$ covers the leaf $S$ through $x$, and is given by
\[\widetilde{S}_{v}:=\{dh(\gamma)v : \gamma \textrm{ is a path in }S\textrm{ starting at }x\}.\]
Therefore, $\widetilde{S}_{lin,x}$ covers the leaves of the foliation $\mathcal{F}_{\nu}$ above $S$ via the maps
\begin{equation}\label{Covering}
p_v:\widetilde{S}_{lin,x}\rmap \widetilde{S}_v, \ \ p_v([\gamma])=dh(\gamma)v,\ \ v\in \nu_{x}.
\end{equation}
The \textbf{local model} of the foliation around the leaf $S$ is the foliated manifold
\[(\nu_{S},\mathcal{F}_{\nu_S}), \textrm{ where }\mathcal{F}_{\nu_S}:=\mathcal{F}_{\nu}|_{\nu_S}.\]
The linear holonomy induces an isomorphism between the local model and the flat bundle from the Introduction
\[(\widetilde{S}_{lin,x}\times\nu_x)/H_{lin,x}\diffto \nu_S,\ \ [\gamma,v]\mapsto p_v([\gamma]).\]
Consider now a symplectic structure $\omega$ on the foliation $\mathcal{F}$, i.e.\ a 2-form on $T\mathcal{F}$
\[\omega\in\Omega^2(T\mathcal{F})\]
whose restriction to each leaf is symplectic. We first construct a closed foliated 2-form $\delta\omega$ on
$(\nu,\mathcal{F}_{\nu})$, which represents the derivative of $\omega$ in the transversal direction. For this, choose
an extension $\widetilde{\omega}\in \Omega^2(M)$ of $\omega$ and let
\[\Omega(X,Y):=d\widetilde{\omega}(X,Y,\cdot),\ \ X,Y\in T\mathcal{F}.\]
Since $\omega$ is closed along the leaves of $\mathcal{F}$, $\Omega(X,Y)\in\nu^*$, thus $\Omega\in
\Omega^2(T\mathcal{F};\nu^*)$.
Now, the dual of the Bott connection on $\nu^*$ induces a differential $d_{\nabla}$ on the space of foliated forms with
values in the conormal bundle $\Omega^{\bullet}(T\mathcal{F};\nu^*)$; this can be given explicitly by the classical
Koszul formula
\[d_{\nabla}:\Omega^{\bullet}(T\mathcal{F};\nu^*)\rmap \Omega^{\bullet+1}(T\mathcal{F};\nu^*),\]
\begin{align*}
d_{\nabla}\eta(X_0, \ldots , X_{p})=&\sum_{i}(-1)^{i} \nabla_{X_i}\eta(X_0, \ldots , \widehat{X}_i, \ldots , X_{p})+\\
& + \sum_{i< j} (-1)^{i+j}\eta([X_i, X_j],X_0, \ldots , \widehat{X}_i, \ldots, \widehat{X}_j, \ldots , X_{p}),
\end{align*}
for $\eta\in \Omega^{p}(T\mathcal{F};\nu^*)$, $X_i\in\Gamma(T\mathcal{F})$. Denote the resulting cohomology by
$H^{\bullet}(\mathcal{F};\nu^*)$.
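For instance, in degree one the Koszul formula reduces to
\[
d_{\nabla}\eta(X_0,X_1)=\nabla_{X_0}\eta(X_1)-\nabla_{X_1}\eta(X_0)-\eta([X_0,X_1]),\qquad \eta\in\Omega^{1}(T\mathcal{F};\nu^*).
\]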
It is easy to see that $\Omega$ is $d_{\nabla}$-closed. In fact, this construction can be performed in all degrees, and it
produces a canonical map (see e.g.\ \cite{CrFe2})
\[d_{\nu}:H^{\bullet}(\mathcal{F})\rmap H^{\bullet}(\mathcal{F};\nu^*),\]
which maps $[\omega]$ to $[\Omega]$. Also, if $\widetilde{\omega}+\alpha$ is a second extension of $\omega$ (where
$\alpha$ vanishes along $\mathcal{F}$), then $\Omega$ changes by $d_{\nabla}\lambda$, where $\lambda\in
\Omega^1(T\mathcal{F};\nu^*)$, is given by \[\lambda(X):=\iota_{X}\alpha\ \ \textrm{ for } \ X\in T\mathcal{F}.\]
Note that there is a natural embedding
\[\mathcal{J}:\Omega^{\bullet}(T\mathcal{F};\nu^*)\rmap \Omega^{\bullet}(T\mathcal{F}_{\nu}),\ \ \mathcal{J}(\eta)_v:=p^*(\langle\eta,v\rangle)|_{T\mathcal{F}_{\nu}},\ \ v\in\nu,\]
where $p:\nu\to M$ is the projection. It is easy to see that under $\mathcal{J}$ the differential $d_{\nabla}$
corresponds to the leafwise de Rham differential $d_{\mathcal{F}_{\nu}}$ on the leaves of $\mathcal{F}_{\nu}$. In
particular, we obtain a closed foliated 2-form
\[\delta\omega:=\mathcal{J}(\Omega)\in \Omega^2(T\mathcal{F}_{\nu}),\]
which we call the \textbf{vertical derivative} of $\omega$. Since $\delta\omega$ vanishes on $M$ (viewed as the zero
section), it follows that $p^*(\omega)+\delta\omega$ is nondegenerate on the leaves in an open around $M$; thus
\[(\nu,\mathcal{F}_{\nu},p^*(\omega)+\delta\omega)\]
is a symplectic foliation around $M$.
Consider now a symplectic leaf $S$. Restricting $p^*(\omega)+\delta\omega$ to the leaves above $S$, we obtain
closed foliated 2-forms along the leaves of $\mathcal{F}_{\nu_S}$, denoted by
\[j^1_S(\omega):=p^*(\omega_S)+\delta_S\omega\in\Omega^2(T\mathcal{F}_{\nu_S}),\]
where $\omega_S:=\omega|_{S}$ and $\delta_S\omega:=\delta\omega |_{\nu_S}$. Any open neighborhood of $S$ in
\[(\nu_S,\mathcal{F}_{\nu_S},j^1_S(\omega))\]
on which $j^1_S(\omega)$ is symplectic will be regarded as the \textbf{local model} of the symplectic foliation around
$S$; i.e.\ we think about the local model as a germ of a symplectic foliation around $S$.
In order to define the cohomological variation of $\omega$, consider first the linear map
\begin{equation}\label{EQ_variation}
\delta_S\omega_x: \nu_x\rmap \Omega^2_{closed}(\widetilde{S}_{lin,x}), \ \ v\mapsto p_v^*(\delta_S\omega),
\end{equation}
where the map $p_v$ is the covering map defined by (\ref{Covering}). By the discussion above, choosing a different
extension of $\omega$ changes $p_v^*(\delta_S\omega)$ by an exact 2-form; hence the cohomology class
$[p^*_v(\delta_S\omega)]$ is independent of the 2-form $\Omega$ used to construct $\delta_{S}\omega$. The induced
linear map to the cohomology of $\widetilde{S}_{lin,x}$, will be called the \textbf{cohomological variation} of
$\omega$ at $S$
\[[\delta_S\omega]_x:\nu_{x}\rmap H^2(\widetilde{S}_{lin,x}), \ \ v\mapsto [p^*_v(\delta_S\omega)].\]
In the Introduction we denoted the lifts of $[\delta_S\omega_x]$ to the holonomy cover $\widetilde{S}_{hol}$,
respectively to the universal cover $\widetilde{S}_{uni}$ of $S$, by the same symbol.
We finish this section by proving that, up to isomorphism, the local model is independent of the choices involved. The
proof uses a version of the Moser Lemma for symplectic foliations (Lemma \ref{Lemma5} from the next section).
\begin{proposition}\label{Proposition_independence_of_extension}
Different choices of $\Omega\in \Omega^2(T\mathcal{F},\nu^*)$ satisfying $d_{\nu}[\omega]=[\Omega]$ produce
local models that are isomorphic around $S$ by a diffeomorphism that fixes $S$.
\end{proposition}
\begin{proof}
A second 2-form is of the form
\[\Omega'=\Omega+d_{\nabla}\lambda,\]
for some $\lambda\in \Omega^1(T\mathcal{F};\nu^*)$. We apply Lemma \ref{Lemma5} to the symplectic foliation
\[(\nu,\mathcal{F}_{\nu},p^*(\omega)+\delta\omega),\]
and the foliated 1-form $\alpha:=\mathcal{J}(\lambda)$ which vanishes along $M$. The resulting diffeomorphism is
foliated. In particular, above any leaf $S$ of $\mathcal{F}$ it sends the local model corresponding to $\Omega$ to the
local model corresponding to $\Omega'$.
\end{proof}
\section{Five lemmas}
In this section we prove some auxiliary results used in the proof of Theorem \ref{Theorem}.
\medskip
\noindent\textbf{Reeb Stability around non-compact leaves}
\medskip
Consider a foliated manifold $(M,\mathcal{F})$ and let $S$ be an embedded leaf. The classical Reeb Stability Theorem
(see e.g.\ \cite{MM}) says that, if the holonomy group $H_{hol}$ is finite and $S$ is compact, then a saturated
neighborhood of $S$ in $M$ is isomorphic as a foliated manifold to the flat bundle
\begin{equation*}
(\widetilde{S}_{hol}\times T)/H_{hol},
\end{equation*}
where $T$ is a small transversal that is invariant under the holonomy action of $H_{hol}$. Since actions of finite
groups can be linearized, it follows that the holonomy of $S$ equals the linear holonomy of $S$. So, some neighborhood
of $S$ in $(M,\mathcal{F})$ is isomorphic as a foliated manifold with the flat bundle from the previous section
\begin{equation}\label{Model_lin}
(\widetilde{S}_{lin}\times \nu_x)/H_{lin}.
\end{equation}
Below we show that the proof of the Reeb Stability Theorem from \cite{MM} can be adapted to the non-compact case,
at the expense of the saturation of the open.
\begin{lemma}\label{Lemma1}
Let $(M,\mathcal{F})$ be a foliation and let $S\subset M$ be an embedded leaf. If $S$ has finite holonomy, then an
open neighborhood of $S$ in $M$ is isomorphic as a foliated space to an open around $S$ in the local model
(\ref{Model_lin}), by a diffeomorphism that fixes $S$.
\end{lemma}
\begin{proof}
Since the holonomy is finite, it equals the linear holonomy, and we denote $H:=H_{hol}=H_{lin}$ and
$\widetilde{S}:=\widetilde{S}_{hol}=\widetilde{S}_{lin}$.
The assumption that $S$ be embedded allows us to restrict to a tubular neighborhood; so we assume that the foliation
is on a vector bundle $p:E\to S$ (with $E\cong \nu_{S}$), for which $S$, identified with the zero section, is a leaf. Then
the holonomy of paths in $S$ is represented by germs of diffeomorphisms between the fibers of $E$.
Each point in $S$ has an open neighborhood $U\subset E$ satisfying
\begin{itemize}
\item $S\cap U$ is 1-connected,
\item for $x\in S\cap U$, $E_x\cap U$ is a connected neighborhood of $x$,
\item for every $x,y\in S\cap U$, the holonomy along any path in $S\cap U$ connecting them is defined as a diffeomorphism between the spaces
\[h_{x}^y:E_x\cap U\diffto E_y\cap U.\]
\end{itemize}
Let $\mathfrak{U}$ be a locally finite cover of $S$ by opens $U\subset E$ of the type just described, such that for all
$U,U'\in\mathfrak{U}$, $U\cap U'\cap S$ is connected (or empty), and such that each $U\in \mathfrak{U}$ is relatively
compact.
We fix $x_0\in S$, $U_0\in\mathfrak{U}$ an open containing $x_0$, and denote by
\[V:=E_{x_0}.\]
Consider a path $\gamma$ in $S$ starting at $x_0$ and with endpoint $x$. Cover the path by a chain of opens in
$\mathfrak{U}$
\[\xi=(U_{0},\ldots, U_{k(\xi)}),\]
such that there is a partition
0=t_0<t_1<\ldots<t_{k-1}<t_k=1,
with $\gamma([t_{j-1},t_{j}])\subset U_{j}$. Since the holonomy transformations inside $U_{j}$ are all trivial, and all
the intersections $U_i\cap U_j\cap S$ are connected, it follows that the holonomy of $\gamma$ only depends on the
chain $\xi$ and is defined as an embedding
\[h(\gamma)=h_{x_0}^x(\xi):O(\xi)\hookrightarrow E_x,\]
where $O(\xi)\subset V$ is an open neighborhood of $x_0$, which is independent of $x\in U_{k(\xi)}$. Denote by
$\mathcal{Z}$ the space of all chains in $\mathfrak{U}$
\[ \xi=(U_{0}, \ldots, U_{k(\xi)}),\ \ \textrm{with} \ U_{l}\cap U_{l+1}\neq \emptyset.\]
Denote by $K$ the kernel of $\pi_1(S,x_0)\to H$. The holonomy cover $\widetilde{S}\to S$ can be described as the
space of all paths $\gamma$ in $S$ starting at $x_0$, and two such paths $\gamma_1$ and $\gamma_2$ are equivalent
if they have the same endpoint, and the homotopy class of $\gamma_2^{-1}\circ \gamma_1$ lies in $K$. The projection
is then given by $[\gamma]\mapsto \gamma(1)$. Denote by $\widetilde{x}_0$ the point in $\widetilde{S}$
corresponding to the constant path at $x_0$. So, we can represent each point in $\widetilde{S}$ (not uniquely!) by a
pair $(\xi,x)$ with $\xi \in \mathcal{Z}$ and endpoint $x\in U_{k(\xi)}\cap S$.
The group $H$ acts freely on $\widetilde{S}$ by pre-composing paths. For every $g\in H$ fix a chain
$\xi_g\in\mathcal{Z}$, such that $(\xi_g,x_0)$ represents $\widetilde{x}_0g$. Consider the open
\[\widetilde{O}_0:=\bigcap_{g\in H}O(\xi_{g})\subset V,\]
on which all holonomies $h_{x_0}^{x_0}(\xi_{g})$ are defined, and a smaller open $\widetilde{O}_1\subset
\widetilde{O}_0$ around $x_0$, such that $h_{x_0}^{x_0}(\xi_{g})$ maps $\widetilde{O}_1$ into $\widetilde{O}_0$.
Hence the composition
\begin{equation*}
h_{x_0}^{x_0}(\xi_g)\circ h_{x_0}^{x_0}(\xi_h): \widetilde{O}_{1}\hookrightarrow V,
\end{equation*}
is well defined. Since the germs of $h_{x_0}^{x_0}(\xi_g)\circ h_{x_0}^{x_0}(\xi_h)$ and $h_{x_0}^{x_0}(\xi_{gh})$
are the same, by shrinking $\widetilde{O}_1$ if necessary, we may assume that
\begin{equation}\label{composition}
h_{x_0}^{x_0}(\xi_g)\circ h_{x_0}^{x_0} (\xi_h)=h_{x_0}^{x_0}(\xi_{gh}): \widetilde{O}_{1}\hookrightarrow V, \ \ \ \ \forall \ g,h\in H.
\end{equation}
Consider the following open
\[O:=\bigcap_{g\in H}h_{x_0}^{x_0}(\xi_g)(\widetilde{O}_{1}).\]
Then $O\subset \widetilde{O}_{1}$, and for $h\in H$, we have that
\begin{align*}
h_{x_0}^{x_0}(\xi_{h})(O)&\subseteq \bigcap_{g\in H}h_{x_0}^{x_0}(\xi_{h})\circ h_{x_0}^{x_0}(\xi_{g})(\widetilde{O}_{1})
=\\
&=\bigcap_{g\in H}h_{x_0}^{x_0}(\xi_{hg})(\widetilde{O}_{1})=\bigcap_{g\in H}h_{x_0}^{x_0}(\xi_{g})(\widetilde{O}_{1})= O.
\end{align*}
So $h_{x_0}^{x_0}(\xi_h)$ maps $O$ to $O$, and by (\ref{composition}) it follows that the holonomy transport along
$\xi_g$ defines an action of $H$ on $O$, which we further denote by
\[h(g):=h_{x_0}^{x_0}(\xi_g):O\diffto O.\]
Since $H$ is a finite group acting on $O$ with a fixed point $x_0$, by Bochner's Linearization Theorem, we can
linearize the action around $x_0$. So, by shrinking $O$ if necessary, the action is isomorphic to the linear holonomy
action of $H$ on $V$. In particular, this implies that $O$ contains arbitrarily small $H$-invariant open neighborhoods
of $x_0$.
Since $\mathfrak{U}$ is a locally finite cover by relatively compact opens, there are only finitely many chains in
$\mathcal{Z}$ of a certain length. Denote by $\mathcal{Z}_n$ the set of chains of length at most $n$. Let $c\geq 1$
be such that $\xi_g\in \mathcal{Z}_c$ for all $g\in H$.
By the above, and by the basic properties of holonomy, there exist open neighborhoods $\{O_n\}_{n\geq 1}$ of $x_0$
in $O$:
\[\ldots \subset O_{n+1}\subset O_n\subset O_{n-1}\subset \ldots \subset O_1\subset O\subset V,\]
satisfying the following:
\begin{enumerate}[1)]
\item for every chain $\xi\in \mathcal{Z}_n$, $O_n\subset O({\xi})$,
\item for every two chains $\xi,\xi'\in \mathcal{Z}_n$ and $x\in U_{k(\xi)}\cap U_{k(\xi')}\cap S$, such that the pairs $(\xi,x)$ and
$(\xi',x)$ represent the same element in $\widetilde{S}$, we have that
\[h_{x_0}^x(\xi)=h_{x_0}^x(\xi'):O_n\hookrightarrow E_x,\]
\item $O_n$ is $H$-invariant,
\item for every $g\in H$, $\xi\in\mathcal{Z}_{n}$ and $x\in U_{k(\xi)}\cap S$, we have that
\[h_{x_0}^x(\xi_g\cup \xi)=h_{x_0}^x(\xi)\circ h(g): O_{n+c}\hookrightarrow E_{x}.\]
\end{enumerate}
Denote by $\widetilde{S}_n$ the set of points $\widetilde{x}\in \widetilde{S}$ for which every element in the orbit
$\widetilde{x}H$ can be represented by a pair $(\xi,x)$ with $\xi\in\mathcal{Z}_n$. Note that for $n\geq c$,
$\widetilde{S}_n$ is nonempty, $H$-invariant, open, and connected. Consider the following $H$-invariant open
neighborhood of $\widetilde{S}\times\{x_0\}$:
\[\mathcal{V}:=\bigcup_{n\geq c}\widetilde{S}_n\times O_{n+c}\subset \widetilde{S}\times V.\]
On $\mathcal{V}$ we define the map
\[\widetilde{\mathcal{H}}:\mathcal{V}\rmap E, \ \ \ \widetilde{\mathcal{H}}(\widetilde{x},v):=h_{x_0}^x(\xi)(v),\]
for $(\widetilde{x},v)\in\widetilde{S}_n\times O_{n+c}$, where $(\xi,x)$ is pair representing $\widetilde{x}$ with
$\xi\in\mathcal{Z}_{n}$ and $x\in U_{k(\xi)}$. By the properties of the opens $O_n$, $\widetilde{\mathcal{H}}$ is
well defined. Since the holonomy transport is by germs of diffeomorphisms and preserves the foliation, it follows that
$\widetilde{\mathcal{H}}$ is a foliated local diffeomorphism, which sends the trivial foliation on $\mathcal{V}$ with
leaves $\mathcal{V}\cap \big(\widetilde{S}\times\{v\}\big)$ to $\mathcal{F}|_{E}$.
We prove now that $\widetilde{\mathcal{H}}$ is $H$-invariant. Let $(\widetilde{x},v)\in\widetilde{S}_n\times
O_{n+c}$ and $g\in H$. Consider chains $\xi$ and $\xi'$ in $\mathcal{Z}_{n}$ representing $\widetilde{x}$ and
$\widetilde{x}g$ respectively, with $x\in U_{k(\xi)}\cap U_{k(\xi')}\cap S$. Then $\xi'$ and $\xi_g\cup \xi$ both belong
to $\mathcal{Z}_{n+c}$ and $(\xi',x)$, $(\xi_g\cup\xi,x)$ both represent $\widetilde{x}g\in\widetilde{S}$. Using
properties 2) and 4) of the opens $O_n$, we obtain $H$-invariance:
\begin{align*}
\widetilde{\mathcal{H}}(\widetilde{x}g,h(g^{-1})v)&=h_{x_0}^x(\xi')(h(g^{-1})v)=h_{x_0}^x(\xi_g\cup \xi)(h(g^{-1})v)=\\
&=h_{x_0}^x(\xi)\circ h(g)\circ h(g^{-1})v=h_{x_0}^x(\xi)(v)=\widetilde{\mathcal{H}}(\widetilde{x},v).
\end{align*}
Since the action of $H$ on $\mathcal{V}$ is free and preserves the foliation on $\mathcal{V}$, we obtain an induced
local diffeomorphism of foliated manifolds:
\[\mathcal{H}: \mathcal{V}/H\subset (\widetilde{S}\times V)/H \rmap E.\]
We prove now that $\mathcal{H}$ is injective. Let $(\widetilde{x},v), (\widetilde{x}',v')\in \mathcal{V}$ be such that
\[\widetilde{\mathcal{H}}(\widetilde{x},v)=\widetilde{\mathcal{H}}(\widetilde{x}',v').\]
Denoting by $x=p(\widetilde{\mathcal{H}}(\widetilde{x},v))=p(\widetilde{\mathcal{H}}(\widetilde{x}',v'))$, we have
that $\widetilde{\mathcal{H}}(\widetilde{x},v)$, $\widetilde{\mathcal{H}}(\widetilde{x}',v')\in E_x$. Hence
$\widetilde{x}$ and $\widetilde{x}'$ both lie in the fiber of $\widetilde{S}\to S$ over $x$, thus there is a unique $g\in
H$ with $\widetilde{x}'=\widetilde{x}g$. Let $n,m\geq c$ be such that $(\widetilde{x},v)\in \widetilde{S}_n\times
O_{n+c}$ and $(\widetilde{x}',v')\in \widetilde{S}_m\times O_{m+c}$, and assume also that $n\leq m$. Consider
$\xi\in\mathcal{Z}_n$ and $\xi'\in \mathcal{Z}_m$ such that $(\xi,x)$ represents $\widetilde{x}$ and $(\xi',x)$
represents $\widetilde{x}'$. Then we have that
\begin{equation}\label{EQ10}
h_{x_0}^x(\xi)(v)=h_{x_0}^x(\xi')(v').
\end{equation}
Since both $(\xi',x)$ and $(\xi_g\cup \xi,x)$ represent $\widetilde{x}'\in \widetilde{S}$, and both have length $\leq
m+c$, again by the properties 2) and 4) we obtain
\[h_{x_0}^x(\xi')(v')=h_{x_0}^x(\xi_g\cup \xi)(v')=h_{x_0}^x(\xi)(h(g)(v')).\]
Since $h_{x_0}^x(\xi)$ is injective, (\ref{EQ10}) implies that $v=h(g)(v')$. So, we obtain
\[(\widetilde{x},v)=(\widetilde{x}'g^{-1},h(g)(v')),\]
which proves injectivity of $\mathcal{H}$.
\end{proof}
\noindent\textbf{Thurston Stability around non-compact leaves}
\medskip
To obtain the first order normal form result (Corollary \ref{Corollary1}), we will use the following extension to
non-compact leaves of a result of Thurston \cite{Thurston}.
\begin{lemma}\label{Lemma2}
Let $S$ be an embedded leaf of a foliation such that $K_{lin}$, the kernel of $dh:\pi_1(S,x)\to H_{lin}$, is finitely
generated and $H^1(\widetilde{S}_{lin})=0$. Then the holonomy group $H_{hol}$ of $S$ coincides with the linear
holonomy group $H_{lin}$ of $S$.
\end{lemma}
\begin{proof}
Denote by $V:=\nu_x$ the normal space at some $x\in S$. The linear holonomy gives an identification of the normal
bundle of $S$ in $M$ with the vector bundle $(\widetilde{S}_{lin}\times V)/H_{lin}$. Passing to a tubular
neighborhood, we may assume that the foliation $\mathcal{F}$ is on $(\widetilde{S}_{lin}\times V)/H_{lin}$, and that
its linear holonomy coincides with the holonomy of the flat bundle, i.e.\ the first order jet along $S$ of $\mathcal{F}$
equals the first order jet along $S$ of the flat bundle foliation. Consider the covering map
\[p:\widetilde{S}_{lin}\times V\rmap (\widetilde{S}_{lin}\times V)/H_{lin}.\]
The leaf $\widetilde{S}_{0}:=\widetilde{S}_{lin}\times\{0\}$ of the pull-back foliation $p^*(\mathcal{F})$ on
$\widetilde{S}_{lin}\times V$ satisfies:
\begin{enumerate}[(1)]
\item $\widetilde{S}_{0}$ has trivial linear holonomy;
\item $H^1(\widetilde{S}_{0})=0$;
\item $\pi_1(\widetilde{S}_{0})\cong K_{lin}$ is finitely generated.
\end{enumerate}
Thurston shows in \cite{Thurston} that, under the assumption that $\widetilde{S}_0$ is compact, the first two
conditions imply that the holonomy group of $\widetilde{S}_0$ vanishes. It is straightforward to check that Thurston's
argument actually doesn't use the compactness assumption, but it only uses condition (3); and we conclude that also in
our case the holonomy at $\widetilde{S}_0$ of $p^*(\mathcal{F})$ vanishes.
Now consider a loop $\gamma$ in $S$ based at $x$ such that $[\gamma]\in K_{lin}$. This is equivalent to saying that
$\gamma$ lifts to a loop in $\widetilde{S}_{lin}$, hence to a loop $\widetilde{\gamma}$ in $\widetilde{S}_0$. The
holonomy transport along $\widetilde{\gamma}$ induced by $p^*(\mathcal{F})$ projects to the holonomy transport of
$\gamma$ induced by $\mathcal{F}$, and since the former is trivial, so is the latter. This proves that $K_{lin}$ is
included in the kernel of $\pi_1(S,x)\to H_{hol}$, and since the other inclusion always holds, we obtain that
$H_{hol}=H_{lin}$.
\end{proof}
\noindent\textbf{Foliated cohomology of products}
\medskip
Let $M$ and $N$ be two manifolds. Consider the product foliation $TM\times N$ on $M\times N$, with leaves
\[M\times\{y\}\subset M\times N,\ \ y\in N.\]
We denote the complex computing the corresponding foliated cohomology by
\[\big(\Omega^{\bullet}(TM\times N),d\big).\]
The elements of $\Omega^{\bullet}(TM\times N)$ can be regarded as smooth families of forms on $M$:
\[\eta=\left\{\eta_y\in \Omega^{\bullet}(M)\right\}_{y\in N}\ \ \textrm{with} \ \ d\eta=\left\{d\eta_y\in \Omega^{\bullet+1}(M)\right\}_{y\in N}.\]
Denote the corresponding cohomology groups by
\[H^{\bullet}(TM\times N).\]
We need two versions of these groups associated to a leaf $M\times\{x\}$, for a fixed $x\in N$. Denote the subcomplex
of foliated forms vanishing on $M\times\{x\}$ by
\[\big(\Omega^{\bullet}_{x}(TM\times N),d\big),\]
and the associated cohomology by
\[H^{\bullet}_{x}(TM\times N).\]
Finally, consider the complex of germs at $M\times\{x\}$ of foliated forms
\[\big(\Omega^\bullet_{\mathrm{g}_{x}}(TM\times N),d\big).\]
This space is the quotient of $\Omega^\bullet(TM\times N)$ by the space of foliated forms that vanish on some open in
$M\times N$ that contains $M\times\{x\}$. The leafwise de Rham differential induces a differential on
$\Omega^\bullet_{\mathrm{g}_{x}}(TM\times N)$. Denote the resulting cohomology by
\[H^{\bullet}_{\mathrm{g}_{x}}(TM\times N).\]
Let also $C^{\infty}_{x}(N)$ denote the space of smooth functions on $N$ vanishing at $x$, and
$C^{\infty}_{\mathrm{g}_{x}}(N)$ denote the space of germs of smooth functions on $N$ around $x$.
These three versions of foliated cohomology come with natural pairings with the homology of $M$, which yield maps:
\begin{align}\label{pairing}
\nonumber &\Psi: H^{\bullet}(TM\times N)\rmap \mathrm{Hom}(H_{\bullet}(M);C^{\infty}(N)),\\
&\Psi_x:H^{\bullet}_{x}(TM\times N)\rmap \mathrm{Hom}(H_{\bullet}(M);C^{\infty}_{x}(N)),\\
\nonumber &\Psi_{\mathrm{g}_{x}}:H^{\bullet}_{\mathrm{g}_{x}}(TM\times N)\rmap \mathrm{Hom}(H_{\bullet}(M);C^{\infty}_{\mathrm{g}_{x}}(N)).
\end{align}
We explain the third map; the first two are constructed similarly. Consider an element $[\eta]\in
H^q_{\mathrm{g}_{x}}(TM\times N)$, which is represented by a foliated $q$-form $\eta$ that is closed on some open
containing $M\times\{x\}$. We define the corresponding linear map:
\[\Psi_{\mathrm{g}_{x}}([\eta]):H_{q}(M)\rmap C_{\mathrm{g}_{x}}^{\infty}(N).\]
Represent an element $[c]\in H_{q}(M)$ as $c=\sum_i a_i \sigma_i$, where $\sigma_i:\Delta_q\to M$ are smooth
$q$-simplices. Define
\[\langle \eta, c\rangle \in C^{\infty}(N),\ \ y\mapsto \sum_i a_i \int_{\Delta_q}(\sigma_i \times \{y\})^*(\eta).\]
The germ at $x$ of the function $\langle \eta, c\rangle$ is independent of the choice of the representatives, yielding a
well-defined element $\Psi_{\mathrm{g}_{x}}([\eta])([c]):=\langle [\eta],[c]\rangle\in C^{\infty}_{\mathrm{g}_x}(N)$.
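For instance (this example is only meant as an illustration and is not used in the sequel), take $M=S^1$ and $q=1$. Then $H_1(S^1)$ is generated by the fundamental class $[S^1]$, and $\Psi$ sends a foliated $1$-form $\eta=\{\eta_y\}_{y\in N}$ to its family of periods:
\[\Psi([\eta])([S^1]):y\mapsto \int_{S^1}\eta_y\in C^{\infty}(N).\]
In this case the statement below reduces to the classical fact that a smooth family of $1$-forms on the circle consists of exact forms, with primitives depending smoothly on $y$, precisely when all its periods vanish.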
\begin{lemma}\label{Lemma3}
The maps from (\ref{pairing}) are linear isomorphisms.
\end{lemma}
\begin{proof}
Denote the constant sheaves on $M$ associated to the groups $C^{\infty}(N)$, $C^{\infty}_x(N)$ and
$C^{\infty}_{\mathrm{g}_x}(N)$ by $\mathcal{S}_1$, $\mathcal{S}_2$ and $\mathcal{S}_3$, respectively.
By standard arguments, the de Rham differential along $M$ induces resolutions $\mathcal{S}_i\to \mathcal{C}_i^{\bullet}$ by fine sheaves on $M$:
\[\mathcal{C}_1^{\bullet}(U):=\Omega^{\bullet}(TU\times N),\ \ \mathcal{C}_2^{\bullet}(U):=\Omega^{\bullet}_x(TU\times N),\ \ \mathcal{C}_3^{\bullet}(U):=\Omega^{\bullet}_{\mathrm{g}_x}(TU\times N).\]
Hence, the foliated cohomologies from (\ref{pairing}) are isomorphic to the sheaf cohomologies with coefficients in $\mathcal{S}_1$, $\mathcal{S}_2$ and $\mathcal{S}_3$ respectively. On the other hand, for any vector space $V$, denoting by $\underline{V}$ the constant sheaf on $M$, one has a natural isomorphism:
\begin{equation}\label{YAE}
\Phi_V:H^{\bullet}(M;\underline{V})\diffto \mathrm{Hom}\big(H_{\bullet}(M);V\big).
\end{equation}
Hence, we obtain isomorphisms:
\begin{align}\label{isose}
\nonumber \Phi: H^{\bullet}(TM\times N)&\diffto \mathrm{Hom}(H_{\bullet}(M);C^{\infty}(N)),\\
\Phi_x:H^{\bullet}_{x}(TM\times N)&\diffto \mathrm{Hom}(H_{\bullet}(M);C^{\infty}_{x}(N)),\\
\nonumber \Phi_{\mathrm{g}_x}:H^{\bullet}_{\mathrm{g}_{x}}(TM\times N)&\diffto \mathrm{Hom}(H_{\bullet}(M);C^{\infty}_{\mathrm{g}_{x}}(N)).
\end{align}
We still have to check that these maps coincide with those from (\ref{pairing}). For this we will exploit the
naturality of the maps in (\ref{YAE}).
In the first case, consider the evaluation map $ev_y:C^{\infty}(N)\to \mathbb{R}$, for $y\in N$. This induces a sheaf map $ev_y^M:\mathcal{S}_1\to \underline{\mathbb{R}}$ into the constant sheaf over $M$, which is covered by a map $ev_y^M:\mathcal{C}_1^{\bullet}\to \Omega_M^{\bullet}$ into the standard de Rham resolution of $\underline{\mathbb{R}}$. Hence the map $H^{\bullet}(M;\mathcal{S}_1)\to H^{\bullet}(M;\underline{\mathbb{R}})$ induced by $ev_y$ becomes
\[H^{\bullet}(TM\times N)\stackrel{ev_y^M}{\rmap} H^{\bullet}(M), \ \ [\omega]\mapsto [\omega|_{M\times\{y\}}].\]
By naturality of (\ref{YAE}), it follows that the following square commutes:
\[\begin{array}[c]{ccc}
H^{\bullet}(TM\times N)&\stackrel{\Phi}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}(N)\big)\\
\downarrow\scriptstyle{ev_y}&&\downarrow\scriptstyle{ev_y}\\
H^{\bullet}(M)&\stackrel{\Phi_{\mathbb{R}}}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);\mathbb{R}\big).
\end{array}\]
Since $\Phi_{\mathbb{R}}$ is the usual isomorphism given by integration, and by the explicit description of the map
$\Psi$, this implies that $\Psi=\Phi$.
For the second map in (\ref{pairing}) and (\ref{isose}) we proceed similarly, but using the inclusion $i:C^{\infty}_x(N)\to C^{\infty}(N)$ instead of $ev_y$. This gives rise to a sheaf map $\mathcal{S}_2\to \mathcal{S}_1$ which lifts to their resolutions, and then we obtain a commutative square
\[\begin{array}[c]{ccc}
H^{\bullet}_x(TM\times N)&\stackrel{\Phi_x}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}_x(N)\big)\\
\downarrow\scriptstyle{i}&&\downarrow\scriptstyle{i}\\
H^{\bullet}(TM\times N)&\stackrel{\Phi}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}(N)\big).
\end{array}\]
Using also that $\Psi=\Phi$, this implies the equality $\Psi_x=\Phi_x$.
Similarly, for the third map in (\ref{pairing}) and (\ref{isose}), but using the projection map $p:C^{\infty}(N)\to C^{\infty}_{\mathrm{g}_x}(N)$ (instead of the inclusion), we obtain a commutative square
\[\begin{array}[c]{ccc}
H^{\bullet}(TM\times N)&\stackrel{\Phi}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}(N)\big)\\
\downarrow\scriptstyle{p}&&\downarrow\scriptstyle{p}\\
H^{\bullet}_{\mathrm{g}_x}(TM\times N)&\stackrel{\Phi_{\mathrm{g}_x}}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}_{\mathrm{g}_x}(N)\big).
\end{array}\]
Again, since $\Psi=\Phi$, we obtain that $\Psi_{\mathrm{g}_x}=\Phi_{\mathrm{g}_x}$. This concludes the proof.
\end{proof}
We will use the following consequences of Lemma \ref{Lemma3} (the first appeared in \cite{GLSW}).
\begin{corollary}\label{Corollary3}
Let $\eta\in \Omega^{q}(TM\times N)$ be a foliated $q$-form such that $\eta_y\in \Omega^{q}(M)$ is exact for all
$y\in N$. Then there exists $\theta\in \Omega^{q-1}(TM\times N)$ such that $d\theta=\eta$. Moreover, if
$\eta_{x}=0$ for some $x\in N$, then one can choose $\theta$ such that $\theta_{x}=0$.
\end{corollary}
\begin{proof}
In the first case, we need that $[\eta]=0$ in $H^{\bullet}(TM\times N)$, and in the second, that $[\eta]=0$ in
$H^{\bullet}_x(TM\times N)$. Since $\langle [\eta_y],[c]\rangle=0$, for all $[c]\in H_{q}(M)$ and all $y\in N$,
the description of the maps $\Psi$ and $\Psi_x$ and Lemma \ref{Lemma3} imply the result.
\end{proof}
\begin{corollary}\label{Corollary4}
Let $\eta$ be a closed foliated $q$-form defined on some open $\mathcal{U}\subset M\times N$ around
$M\times\{x\}$. Then there exists a closed foliated $q$-form $\widetilde{\eta}$ on $M\times N$, such that
$\eta|_{\widetilde{\mathcal{U}}}=\widetilde{\eta}|_{\widetilde{\mathcal{U}}}$, for some open
$\widetilde{\mathcal{U}}\subset \mathcal{U}$ containing $M\times\{x\}$.
\end{corollary}
\begin{proof}
First, we claim that the projection $p:\Omega^{\bullet}(TM\times N)\to \Omega^{\bullet}_{\mathrm{g}_x}(TM\times
N)$ induces a surjective map in cohomology. By the description of the maps $\Psi$ and $\Psi_{\mathrm{g}_x}$, we
have a commutative diagram
\[\begin{array}[c]{ccc}
H^{\bullet}(TM\times N)&\stackrel{\Psi}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}(N)\big)\\
\downarrow\scriptstyle{p}&&\downarrow\scriptstyle{p}\\
H^{\bullet}_{\mathrm{g}_x}(TM\times N)&\stackrel{\Psi_{\mathrm{g}_x}}{\rmap} &\mathrm{Hom}\big(H_{\bullet}(M);C^{\infty}_{\mathrm{g}_x}(N)\big).
\end{array}\]
By Lemma \ref{Lemma3}, the horizontal maps are isomorphisms, and since the vertical map on the right is surjective,
so is the vertical map on the left.
Consider a foliated $q$-form $\eta'\in \Omega^q(TM\times N)$, such that
$\eta'|_{\mathcal{U}'}=\eta|_{\mathcal{U}'}$ for some open $\mathcal{U}'\subset \mathcal{U}$ containing
$M\times\{x\}$. Then $\eta'$ defines a class $[\eta']\in H^q_{\mathrm{g}_{x}}(TM\times N)$. By the above, there is a
closed foliated $q$-form $\eta''\in\Omega^q(TM\times N)$, such that $[\eta'']=[\eta']\in
H^{q}_{\mathrm{g}_x}(TM\times N)$. Thus, there is some foliated $(q-1)$-form $\theta$ and an open
$\widetilde{\mathcal{U}}\subset \mathcal{U}'$ containing $M\times\{x\}$ such that
$\eta'|_{\widetilde{\mathcal{U}}}=(\eta''+d\theta)|_{\widetilde{\mathcal{U}}}$. The closed foliated $q$-form
$\widetilde{\eta}:=\eta''+d\theta$ satisfies the conclusion:
$\widetilde{\eta}|_{\widetilde{\mathcal{U}}}=\eta|_{\widetilde{\mathcal{U}}}$.
\end{proof}
\noindent\textbf{Equivariant submersions}
\medskip
We prove now that submersions can be equivariantly linearized.
\begin{lemma}\label{Lemma4}
Let $G$ be a compact Lie group acting linearly on the vector spaces $V$ and $W$. Consider a smooth
$G$-equivariant map $f:V\to W$, such that $f(0)=0$. If $f$ is a submersion at $0$, then there exists a $G$-equivariant embedding
$\chi:U\hookrightarrow V$, where $U$ is an invariant open around $0$ in $V$, such that $\chi(0)=0$ and
\[f(\chi(v))=df_0 (v), \ \textrm{ for }\ v\in U.\]
\end{lemma}
\begin{proof}
Since $G$ is compact, we can find a $G$-equivariant projection $p_K:V\to K$, where $K:=\ker(df_0)$. The differential
at $0$ of the $G$-equivariant map
\[(f,p_K):V\rmap W\times K, \ v\mapsto(f(v),p_K(v))\]
is $(df_{0},p_K)$. So $(f,p_K)$ is a diffeomorphism when restricted to some open $U_0$ in $V$ around $0$, which we
may assume to be $G$-invariant. Define the embedding as follows
\[\chi:U\hookrightarrow V, \ \ \ \ \chi:=(f,p_K)^{-1}\circ(df_{0},p_K),\]
where $U:=(df_{0},p_K)^{-1}(U_0)$. Clearly $U$ is $G$-invariant, $\chi$ is $G$-equivariant and $\chi(0)=0$. Since
\[\left(f(\chi(v)),p_K(\chi(v))\right)=\left(df_{0}(v),p_K(v)\right),\]
we also have that $f(\chi(v))=df_{0}(v)$.
\end{proof}
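The construction in the proof can be checked numerically in a simple instance. The sketch below is our own illustration (not part of the paper's argument): we take the trivial group $G$, $V=\mathbb{R}^2$, $W=\mathbb{R}$, and the hypothetical choice $f(v_1,v_2)=v_1+v_1v_2$, compute $\chi=(f,p_K)^{-1}\circ(df_0,p_K)$ explicitly, and verify $f(\chi(v))=df_0(v)$ near $0$.

```python
# Numerical illustration of the lemma with G trivial, V = R^2, W = R.
# Example map (our own choice): f(v1, v2) = v1 + v1*v2, so f(0) = 0,
# df_0(v) = v1, K = ker(df_0) = {(0, v2)}, and p_K(v) = (0, v2).

def f(v1, v2):
    return v1 + v1 * v2

def df0(v1, v2):
    return v1

def F_inv(w, k):
    # Inverse of (f, p_K)(v1, v2) = (v1*(1 + v2), v2), valid for |v2| < 1.
    return (w / (1.0 + k), k)

def chi(v1, v2):
    # chi = (f, p_K)^{-1} composed with (df_0, p_K)
    return F_inv(df0(v1, v2), v2)

if __name__ == "__main__":
    for v in [(0.3, 0.2), (-0.1, 0.5), (0.0, -0.4)]:
        c = chi(*v)
        assert abs(f(*c) - df0(*v)) < 1e-12
    print("f(chi(v)) = df_0(v) verified on sample points")
```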
\noindent\textbf{The Moser Lemma for symplectic foliations}
\medskip
The following is a version for symplectic foliations of the Moser Lemma.
\begin{lemma}\label{Lemma5}
Let $(M,\mathcal{F},\omega)$ be a symplectic foliation. Consider a foliated 1-form
\[\alpha\in \Omega^1(T\mathcal{F}),\]
that vanishes on an embedded saturated submanifold $Z$ of $M$. Then $\omega+d_{\mathcal{F}}\alpha$ is
nondegenerate in a neighborhood $U$ of $Z$, and the resulting symplectic foliation
\[(U,\mathcal{F}|_{U},\omega|_{U}+d_{\mathcal{F}}\alpha|_{U})\] is isomorphic around $Z$ to $(M,\mathcal{F},\omega)$ by a
foliated diffeomorphism that fixes $Z$.
\end{lemma}
\begin{proof}
Since $\alpha$ vanishes on $Z$ and $Z$ is saturated, it follows that also $d_{\mathcal{F}}\alpha$ vanishes on $Z$.
Thus, there is an open $V$ around $Z$ such that $\omega+d_{\mathcal{F}}\alpha$ is nondegenerate along the leaves
of $\mathcal{F}|_{V}$. Moreover, by the classical tube lemma from topology, we may choose $V$ such that
\[\omega_t:= \omega+td_{\mathcal{F}}\alpha\in \Omega^2(T\mathcal{F})\]
is nondegenerate along the leaves of $\mathcal{F}|_{V}$, for all $t\in [0,1]$. Consider the time dependent vector field
$X_t$ on $V$, tangent to $\mathcal{F}$, determined by
\[\iota_{X_t}\omega_t=-\alpha, \ \ X_t\in \Gamma(T\mathcal{F}|_{V}).\]
Since $X_t$ vanishes along $Z$, again by the tube lemma, there is an open $O\subset V$ around $Z$, such that the
flow $\Phi^t_{X}$ of $X_t$ is defined up to time 1 on $O$. We claim that $\Phi^1_X$ gives the desired isomorphism.
Clearly $\Phi^1_X$ preserves the foliation and is the identity on $Z$. On each leaf $S$, we have that
\begin{align*}
\frac{d}{dt}\Phi_X^{t*}(\omega_{t}|_S)=\Phi_X^{t*}(L_{X_t}\omega_{t}|_S+d_{\mathcal{F}}\alpha|_{S})=\Phi_X^{t*}(d\iota_{X_t}\omega_{t}|_S+d\alpha|_{S})=0.
\end{align*}
Thus $\Phi_X^{t*}(\omega_{t}|_S)$ is constant, and since $\Phi_X^0=\textrm{Id}$, we have that
\[\Phi_X^{1*}\left((\omega+d_{\mathcal{F}}\alpha)|_S\right)=\omega|_{S}.\]
So, $\Phi^1_X$ is an isomorphism between the symplectic foliations
\[\Phi_X^1:(O,\mathcal{F}|_{O},\omega|_{O})\diffto (U,\mathcal{F}|_{U},\omega|_{U}+d_{\mathcal{F}}\alpha|_{U}),\]
where $U:=\Phi^1_X(O)$.
\end{proof}
\section{Proof of Theorem 1}
Since the holonomy of $S$ is finite, it coincides with the linear holonomy. Consider $x\in S$ and denote by
$V:=\nu_{x}$, by $H:=H_{hol}=H_{lin}$, and by $\widetilde{S}:=\widetilde{S}_{hol}=\widetilde{S}_{lin}$.
Applying Lemma \ref{Lemma1}, we obtain that some neighborhood of $S$ in $M$ is diffeomorphic as a foliated
manifold to an open $\mathcal{U}$ around $S$ in the flat bundle
\[(\widetilde{S}\times V)/H.\]
The symplectic leaves correspond to the connected components of $S_v\cap \mathcal{U}$, where
\[S_v:=(\widetilde{S}\times Hv)/H,\ \ v\in V.\]
We claim that there exists $\omega_1$, a closed foliated 2-form on $(\widetilde{S}\times V)/H$ that extends
$\omega|_{\mathcal{U}_1}$, for some open $\mathcal{U}_1\subset \mathcal{U}$ around $S$. For this, consider the
projection
\[p:\widetilde{S}\times V\rmap (\widetilde{S}\times V)/H,\]
and denote by $\widetilde{\mathcal{U}}:=p^{-1}(\mathcal{U})$ and by $\widetilde{\omega}:=p^*(\omega)$, which
is a closed foliated 2-form on the product foliation restricted to $\widetilde{\mathcal{U}}$. By Corollary
\ref{Corollary4}, there is a closed extension $\widetilde{\omega}_0$ of
$\widetilde{\omega}|_{\widetilde{\mathcal{U}}_0}$, where $\widetilde{\mathcal{U}}_0\subset
\widetilde{\mathcal{U}}$ is an open around $\widetilde{S}\times \{0\}$. Define $\widetilde{\omega}_1$ by averaging
over $H$
\[\widetilde{\omega}_1:=\frac{1}{|H|}\sum_{g\in H}g^*(\widetilde{\omega}_0).\]
Since $\widetilde{\omega}$ is invariant, it follows that $\widetilde{\omega}_1$ coincides with $\widetilde{\omega}$
on $\widetilde{\mathcal{U}}_1:=\bigcap_{g\in H} g\widetilde{\mathcal{U}}_0$. Since $\widetilde{\omega}_1$ is
invariant, it is of the form $\widetilde{\omega}_1=p^*(\omega_1)$, where $\omega_1$ is a closed foliated 2-form on
$(\widetilde{S}\times V)/H$, which extends the restriction to $\mathcal{U}_1:=p(\widetilde{\mathcal{U}}_1)$ of
$\omega$.
We will identify foliated $q$-forms $\eta$ on $(\widetilde{S}\times V)/H$, with smooth $H$-equivariant families
$\{\eta_v\in \Omega^q(\widetilde{S})\}_{v\in V}$, where $\eta_v:=p^*(\eta)|_{\widetilde{S}\times\{v\}}$.
We compute now the variation of $\omega$ at $S$. Since $\omega$ and $\omega_1$ coincide around $S$, they have the
same variation at $S$. Using the extension of $\omega$ (or equivalently of $\omega_1$) that vanishes on vectors
tangent to the fibers of the projection to $S$, we see that the variation $\delta_S\omega$ is given by the
$H$-equivariant family:
\[\delta_{S}\omega_v:=\frac{d}{d\epsilon}\omega_{\epsilon v}|_{\epsilon=0}\in \Omega^2(\widetilde{S}).\]
The local model is represented by the $H$-equivariant family of 2-forms:
\[j^1_{S}(\omega)_v=p^*(\omega_S)+\delta_S\omega_v\in \Omega^2(\widetilde{S}).\]
Consider the $H$-equivariant map
\[f:V\rmap H^2(\widetilde{S}),\ \ f(v)=[\omega_{1,v}]-[p^*(\omega_S)].\]
Smoothness of $f$ follows from Lemma \ref{Lemma3}. Clearly, $f(0)=0$ and its differential at $0$ is the
cohomological variation
\[df_{0}(v)=[\delta_{S}\omega]v, \ \ \forall \ v\in V.\]
By our hypothesis, $f$ is a submersion at $0$. So we can apply Lemma \ref{Lemma4} to find an $H$-equivariant
embedding
\[\chi:U\hookrightarrow V,\]
where $U$ is an $H$-invariant open neighborhood of $0$ in $V$, such that
\[\chi(0)=0 \ \textrm{ and }\ f(\chi(v))=df_0(v).\]
By $H$-equivariance, $\chi$ induces a foliation preserving embedding
\[\widetilde{\chi}: (\widetilde{S}\times U)/H \hookrightarrow (\widetilde{S}\times V)/H, \ \ \widetilde{\chi}([y,v])=[y,\chi(v)]\]
that restricts to a diffeomorphism between the leaf $S_v$ and the leaf $S_{\chi(v)}$. The pullback of $\omega_1$ under
$\widetilde{\chi}$ is the $H$-equivariant family
\[\omega_2=\{\omega_{2, v}:=\omega_{1, \chi(v)}\}_{v\in U}.\]
We have that
\[[\omega_{2,v}]-[p^*(\omega_S)]=[\omega_{1,\chi(v)}]-[p^*(\omega_S)]=f(\chi(v))=df_0(v)=[\delta_S\omega]v.\]
Equivalently, this relation can be rewritten as
\[\omega_{2,v}=j^1_{S}(\omega)_v+\alpha_v, \forall v\in U,\]
where $\{\alpha_v\}_{v\in U}$ is an $H$-equivariant family of exact 2-forms that vanishes for $v=0$. By Corollary
\ref{Corollary3}, $p^*(\alpha)$ is an exact foliated form on $\widetilde{S}\times U$, and moreover, we can choose a
primitive $\widetilde{\beta}\in\Omega^1(T\widetilde{S}\times U)$ such that $\widetilde{\beta}_0=0$. By averaging,
we may also assume that $\widetilde{\beta}$ is $H$-equivariant, thus it is of the form $\widetilde{\beta}=p^*(\beta)$
for a foliated 1-form $\beta$ on $(\widetilde{S}\times U)/H$ that vanishes along $S$. We obtain:
\[\omega_2=j^1_S\omega+d \beta.\]
Applying Lemma \ref{Lemma5}, we conclude that, on some open around $S$, $j^1_S\omega$ and $\omega_2$ are
related by a foliated diffeomorphism. Now, $\omega_2$ and $\omega_1$ are related by $\widetilde{\chi}$, and
$\omega_1$ and $\omega$ have the same germ around $S$. This concludes the proof.\\
\noindent\textbf{Proof of Corollary \ref{Corollary1}}
\medskip
Schreier's Lemma says that a subgroup of finite index of a finitely generated group is also finitely generated (see e.g.\
section 5.6 in \cite{Rotman}). Hence, $K_{lin}$ is finitely generated. By Lemma \ref{Lemma2}, $H_{hol}=H_{lin}$,
in particular $H_{hol}$ is finite, and so we are in the setting of Theorem \ref{Theorem}.
\bibliographystyle{amsplain}
\label{s:int}
\par\noindent
\emph{Fundamental diagrams} representing the dependence of the
pedestrian speed on the local density are one of the basic methods in
studying pedestrians dynamics. They contain macroscopic information useful to identify
the key effects affecting the general behavior of pedestrian flows and to
test the validity of pedestrian models.
In \cite{FD,Weid}, e.g., their main properties are discussed and
experimental tests are performed. In particular it is seen that
many different effects, such as passing manoeuvres, space reduction,
and internal friction, have to be taken into account to
explain the main features of the diagrams.
In this paper, we use \emph{Zero Range Processes} (ZRP),
originally proposed by Spitzer \cite{S},
to recover the same behaviors of the fundamental diagrams,
except perhaps for the existence of an upper density above which the
pedestrian velocity drops to zero. Our attention focuses on pedestrians moving in dark corridors, where the lack of visibility hinders them from finding the exit. This research line follows a path similar to \cite{Bell1,Bell2,Pareschi,Albi}, where the authors used a kinetic formulation to investigate the role of leaders in controlling crowd evacuation when visibility is reduced, and it extends our previous works on this topic; compare e.g. \cite{CM01,CM02,CM03} (group formation and cooperation in the dark).
For the current framework, we assume that
several particles can occupy the same site of a one--dimensional array of discrete positions (modeling a long dark corridor) and that
no interaction among the
individuals takes place. The dynamics of the system is determined
only by the \emph{escape rate}, namely, the frequency at which
a site releases the individuals.
The key idea in our model is to assume that the escape rate is proportional
to the number of individuals on the site up to a \emph{saturation
threshold} above which such a rate stays constant. The second
ingredient we use is that the escape rate is kept low until a certain
\emph{activation threshold} is reached.
The rationale behind our modeling ideas fits the following
{\em Gedanken experiment}.
Imagine a flow of pedestrians
on a lane and consider a partition of this lane in squared (or rectangular) cells; cf. Figure \ref{fig:threshold}.
The rate at which a walker leaves one cell is proportional
to the number of pedestrian occupying the cell up to a limit
which is reached when the ``forward row'' of the cell is
full, cf. the right panel of Figure \ref{fig:threshold}. In this case, indeed, the pedestrians on the back are prevented from exiting the
cell due to the presence of an obstacle.
Thus, the escape rate from a cell increases proportionally to
the number of pedestrians within the cell until this number reaches
the total number of walkers that can be fit into the first row.
On the other hand, the escape rate from a cell increases proportionally to the
number of individuals provided that an efficient communication network (allowing the individuals to exchange
information about the location of the exit) can be established inside the cell. Now, assuming that the interaction range, cf. the left panel of
Figure \ref{fig:threshold}, between any pair of individuals is finite and much smaller than the size of the cell, the onset of an efficient communication network requires the
number of individuals to exceed a minimal value which allows proper interaction inside the cell.
These effects are captured by using ZRP with, respectively, a saturation
and an activation threshold \cite{CCM2016jnet}. In essence, our modelling is
rather simple: no interaction between pedestrians on different
cells is taken into account. This choice is deliberate -- we want to
keep the level of modelling as low as possible to show that,
even in such cases, it is possible to recover the
qualitative behavior of the fundamental diagrams.
In the particular ZRP introduced in this paper,
the two thresholds can be tuned so as to switch from an independent motion
of the particles to a motion that can be mapped to a simple exclusion process.
When the hydrodynamic limits of our model are considered [(i) reversible dynamics, (ii) dynamics with a drift], the resulting macroscopic dynamics exhibit a non--trivial dependence on the thresholds which is, to our knowledge, as yet unexplored.
\begin{figure}
\begin{picture}(400,80)(-70,0)
\setlength{\unitlength}{.026cm}
\thicklines
\put(10,20){\line(1,0){120}}
\put(10,50){\line(1,0){120}}
\thinlines
\put(55,20){\line(0,1){30}}
\put(85,20){\line(0,1){30}}
\put(82.5,32.5){\circle*{5}}
\put(82.5,32.5){\circle{15}}
\put(67.5,40){\circle*{5}}
\put(67.5,40){\circle{15}}
\put(65.5,27.5){\circle*{5}}
\put(65.5,27.5){\circle{15}}
\thicklines
\put(10,70){\line(1,0){120}}
\put(10,100){\line(1,0){120}}
\thinlines
\put(55,70){\line(0,1){30}}
\put(85,70){\line(0,1){30}}
\put(82.5,82.5){\circle*{5}}
\put(82.5,82.5){\circle{15}}
\put(67.5,90){\circle*{5}}
\put(67.5,90){\circle{15}}
\put(65.5,77.5){\circle*{5}}
\put(65.5,77.5){\circle{15}}
\put(72.5,82.5){\circle*{5}}
\put(72.5,82.5){\circle{15}}
\thicklines
\put(210,20){\line(1,0){120}}
\put(210,50){\line(1,0){120}}
\thinlines
\put(255,20){\line(0,1){30}}
\put(285,20){\line(0,1){30}}
\put(258,27.5){\circle*{5}}
\put(258,32.5){\circle*{5}}
\put(258,47.5){\circle*{5}}
\put(282.5,27.5){\circle*{5}}
\put(282.5,37.5){\circle*{5}}
\put(282.5,42.5){\circle*{5}}
\put(282.5,47.5){\circle*{5}}
\thicklines
\put(210,70){\line(1,0){120}}
\put(210,100){\line(1,0){120}}
\thinlines
\put(255,70){\line(0,1){30}}
\put(285,70){\line(0,1){30}}
\put(258,77.5){\circle*{5}}
\put(258,72.5){\circle*{5}}
\put(258,82.5){\circle*{5}}
\put(258,87.5){\circle*{5}}
\put(258,92.5){\circle*{5}}
\put(258,97.5){\circle*{5}}
\put(282.5,77.5){\circle*{5}}
\put(282.5,72.5){\circle*{5}}
\put(282.5,82.5){\circle*{5}}
\put(282.5,87.5){\circle*{5}}
\put(282.5,92.5){\circle*{5}}
\put(282.5,97.5){\circle*{5}}
\put(277,72.5){\circle*{5}}
\put(272.5,87.5){\circle*{5}}
\put(265,97.5){\circle*{5}}
\end{picture}
\caption{Sketch of pedestrians moving through a cell of the obscure tunnel driven by a two-threshold biased dynamics.
\textit{Left panel}: the smaller black-filled circles represent the individuals located inside a cell, the bigger circles comprising the smaller black ones represent the interaction range of each individual. For an efficient communication network to be settled, a certain overlap among the bigger circles is needed, which is hence guaranteed by requiring the number of individuals in the cell to exceed the activation threshold $A$. \textit{Right panel}: as soon as the front row of the cell is full, the number of individuals occupying that front row, corresponding to the saturation threshold $S$, fixes an upper bound to the escape rate from the cell.}
\label{fig:threshold}
\end{figure}
The motivation for this study stems from our interest in the motion of pedestrian flows in dark or in heavily obscured corridors, where the internal dynamics of pedestrians can change depending on the willingness to cooperate (here: to adhere to large groups) or to be selfish (here: to perform independent random walks); see \cite{CM01,CM02,CM03} for more details in this direction.
To be able to understand the behavioral change leading individuals from cooperation to selfishness and eventually backwards, we thus opted for the introduction of two thresholds affecting the microscopic dynamics of the particle system. From the evacuation point of view, the central question is:
{\em Which values of the thresholds yield higher evacuation fluxes (currents), or, in other words, allow for lower (average) residence times?}
It is worth noting that this particular traffic scenario is intimately related to the dynamics of molecular motors seen from the perspective of
processivity (cf., e.g., \cite{LH93}). For transport at molecular scales, one distinguishes between processive and non--processive motors. The processive ones perform
best when working in small groups (porters), while the non--processive motors work best in large groups (rowers). Their joint collective dynamics has been investigated in \cite{Joa06}. If the motors suddenly change their own processivity from porters to rowers (for instance, due to particular environmental conditions, or due to a command control from a hierarchical structure), then our approach based on zero range processes with thresholds approximates conceptually well the changing--in--processivity dynamics.
Threshold effects are not new in microscopic dynamics. They are usually introduced to model dynamics undergoing
sudden changes when some dynamical observable exceeds an {\em a priori}
prescribed value. A natural application of this point of view appears in the context of infection propagation models, where an individual
gets infected if the number of infected neighbors is large enough.
A very well--studied situation is the Bootstrap Percolation
problem \cite{CLR} in which, for instance, on a square lattice, a site
becomes infected as soon as the number of its neighboring
infected sites is larger than a fixed threshold value. In this
context, the most interesting and surprising situation is the
one in which the threshold is precisely half of the total neighboring
sites. In such a case, new scaling laws have been discovered
in the infinite volume limit \cite{AL,CC}.
In the next sections we will focus on the hydrodynamic limit of our ZRP built on thresholds, subjected to periodic boundary conditions and equipped with either symmetric or asymmetric jump probabilities. The asymmetry in the jump probabilities breaks the condition of detailed balance and gives hence rise to a net particle current across the system. We will explicitly highlight the effect of the thresholds (microscopic information) on the macroscopic transport equations
and discuss, in particular, the dependence of the structure of the effective diffusion coefficient and of the effective current on both the thresholds and the local pedestrian density. Our analysis allows one to recover some known results available for the independent particle model and for the simple exclusion process, and sets also the stage for a deeper understanding of the hydrodynamic limit of ZRP with a fixed number of thresholds.
\section{The model}
\label{s:modello}
\par\noindent
We consider a positive integer $L$ and define a ZRP
\cite{EH,pres} on the finite torus (periodic boundary conditions)
$\Lambda:=\{1,\dots,L\}\subset\bb{Z}$.
We fix $N\in\bb{Z}_+$ and consider the finite
\emph{state space}
$\Omega_{L,N}$:
\begin{equation}
\label{mod000}
\Omega_{L,N}
=
\Big\{
\omega\in\{0,\dots,N\}^\Lambda:\,
\sum_{x=1}^L\omega_x=N
\Big\}
\,\,
\end{equation}
where the integer $\omega_x$ denotes the \emph{number of particles}
at the site $x\in\Lambda$ in the \emph{state}
$\omega$.
We pick $A,S\in\{1,\dots,N\}$ with $S\ge A$,
the \emph{activation} and \emph{saturation thresholds}, respectively.
We define, next, the \emph{intensity function}
\begin{equation}
\label{soglia}
g(k)
=
\left\{
\begin{array}{ll}
0 & \textrm{ if } k=0\\
1 & \textrm{ if } 1\leq k\le A\\
k-A+1 & \textrm{ if } A< k\le S\\
S-A+1 & \textrm{ if } k> S\\
\end{array}
\right.
\end{equation}
for each $k\in\bb{Z}_+$. The intensity function, and all the quantities
that we shall define below, do depend on the two thresholds $A$ and $S$,
but we omit them from the notation for simplicity.
The ZRP considered in this paper
is the Markov process $(\omega_t)_{t\ge0}\in\Omega_{L,N}$,
such that each
site $x\in\Lambda$ is updated with intensity $g(\omega_x(t))$
and, once such a site $x$ is chosen, a particle jumps with
probability $p\in[0,1]$ to the
neighboring right site $x+1$ or with probability $1-p$
to the neighboring left site $x-1$
(recall periodic boundary
conditions are imposed). For more details
we refer the reader to \cite{pres,KL}.
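The dynamics just described can be simulated directly with a standard Gillespie scheme. The following sketch is our own illustration (function and parameter names are not from the literature): sites fire with intensity $g(\omega_x)$ as in \eqref{soglia}, and the chosen site sends one particle to the right with probability $p$ and to the left with probability $1-p$, on the torus.

```python
import random

def g(k, A, S):
    # Intensity function with thresholds 1 <= A <= S, as in the text.
    if k == 0:
        return 0
    if k <= A:
        return 1
    if k <= S:
        return k - A + 1
    return S - A + 1

def simulate_zrp(L, N, A, S, p, t_max, seed=0):
    """Gillespie simulation of the ZRP on the torus {0,...,L-1}."""
    rng = random.Random(seed)
    omega = [0] * L
    for _ in range(N):                      # random initial placement
        omega[rng.randrange(L)] += 1
    t = 0.0
    while True:
        rates = [g(k, A, S) for k in omega]
        R = sum(rates)                      # total escape rate, R >= 1
        t += rng.expovariate(R)             # exponential waiting time
        if t > t_max:
            return omega
        # choose the departure site x with probability g(omega_x)/R
        u, acc = rng.random() * R, 0.0
        for x, r in enumerate(rates):
            acc += r
            if u < acc:
                break
        y = (x + 1) % L if rng.random() < p else (x - 1) % L
        omega[x] -= 1
        omega[y] += 1
```

Note that the total number of particles is conserved by construction, so the state stays in $\Omega_{L,N}$ at all times.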
In our model, the intensity function is related to the
(time dependent, in general) \textit{hop rates}
\begin{displaymath}
r^{(x,x-1)}(\omega_x(t))
=
(1-p)g(\omega_x(t))
\;\;\textrm{ and }\;\;
r^{(x,x+1)}(\omega_x(t))
=
pg(\omega_x(t))
\end{displaymath}
and coincides, hence, with the \textit{escape rate}
$
r^{(x,x-1)}(\omega_x(t))
+
r^{(x,x+1)}(\omega_x(t))
=g(\omega_x(t))
$
at which a particle leaves the site $x$.
Thus, the effect of the thresholds is to control the
escape rate from the site. More precisely, the activation threshold $A$
keeps the escape rate low and fixed to unity for all sites for
which $\omega_x(t)\le A$, regardless of the number of particles on $x$.
The saturation threshold $S$, instead,
holds the escape rate fixed to a maximum value for all sites for
which $\omega_x(t)\ge S$, regardless, again, of the number of particles on $x$.
In the intermediate case, $A<\omega_x(t)<S$
the escape rate increases proportionally to the actual number of particles on
$x$, see \eqref{soglia}.
We remark that in the limiting case $A=1$ and $S=N$,
the intensity function becomes
$g(k)=k$, for $k>0$, hence
the well known
\emph{independent particle} model is recovered.
A different limiting situation is the one in which
the intensity function is set equal to $1$ for any $k\ge1$ and
equal to zero for $k=0$.
In this case, the configurations of the ZRP can be mapped to the
simple exclusion model states, see e.g. \cite{EH}, and we shall thus refer to the latter case as the
\emph{simple exclusion}--like model. Such a model is
found, in our set--up, when $A=S$.
We point out that one of the interesting features of our model is the
fact that it is able to tune between two very different dynamics, namely,
the independent particle and simple exclusion--like behaviors \cite{EH}:
this tuning can be realized in two ways, i.e.,
by keeping $S=N$ and varying $A$ or by keeping $A=1$ and
varying $S$.
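To fix ideas, the intensity function \eqref{soglia} and its two limiting regimes can be transcribed in a few lines of code; this is purely an illustration of ours (the function and argument names are our own choices, not part of the model definition):

```python
# Illustrative sketch of the intensity function g in Eq. (soglia);
# A and S are the activation and saturation thresholds.
def g(k: int, A: int, S: int) -> int:
    if k == 0:
        return 0
    if k <= A:
        return 1
    if k <= S:
        return k - A + 1
    return S - A + 1

# Limiting regimes discussed in the text:
# A = 1 with S very large reproduces g(k) = k (independent particles),
# A = S reproduces g(k) = 1 for all k >= 1 (simple exclusion-like model).
```
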
We are interested in studying
the hydrodynamic limit of this model, i.e., as $N\to \infty$ and $L\to \infty$. In particular, we shall exploit
the fact that the intensity function is non--decreasing to
use well--established theories and derive in our set--up
the limiting (effective) diffusion coefficient as well as the limiting (effective) current in presence of the
two thresholds. As we shall discuss later, the behavior of such
macroscopic quantities with the local density will exhibit very
peculiar features inherited from the microscopic properties of the
dynamics. In particular, it will be possible to give a nice
interpretation of the diagrams in terms of pedestrian motion, and
the related fundamental diagrams will be explained in the framework
of our very simple model.
We let the \emph{Gibbs measure} with \emph{fugacity} $z\in\mathbb{R}$
of the ZRP introduced above
be the product measure on $\mathbb{N}^\Lambda$
\begin{equation}
\label{gibbs}
\prod_{x=1}^L \nu_{z}(\eta_x)
\;\;\textrm{ for any }\;\;\eta=(\eta_1,\dots,\eta_L)\in\mathbb{N}^\Lambda
\end{equation}
with
\begin{equation}
\label{nu}
\nu_{z}(0)=C_{z}
\;\;\textrm{ and }\;\;
\nu_{z}(k)=C_{z} \frac{z^k}{g(1)\cdots g(k)}
\;\;\textrm{ for }k\ge1
\;\;,
\end{equation}
where $C_{z}$ is a normalizing factor depending, in general,
on $z$, $A$, and $S$, namely,
\begin{equation}
\label{norm000}
C_z
=
\Big[
1+\sum_{k=1}^\infty\frac{z^k}{g(1)\cdots g(k)}
\Big]^{-1}
\;\;.
\end{equation}
It is of interest to compute the mean value (against the
Gibbs measure) of the intensity function $g$. By using \eqref{nu}, we get
\begin{equation}
\label{mediaI}
\nu_{z}[g(\omega_x)]
=
\sum_{k=0}^\infty \nu_{z}(k)
\,g(k)
=
\sum_{k=1}^\infty \nu_{z}(k)
\,g(k)
=
C_{z} z + C_{z} z \sum_{k=2}^\infty \frac{z^{k-1}}{g(1)\cdots g(k-1)}
=z,
\end{equation}
where we have used that $g(0)=0$ and, in the last step,
we recalled \eqref{norm000}.
We find it relevant to stress that the expression of such an expectation, as
a function of the fugacity, does not depend on the particular
choice of the intensity function.
Note, also, that the intensity is a site--dependent function,
whereas its expected value, with respect to the Gibbs measure given
in \eqref{gibbs}, is not. This is due to the fact that
the Gibbs measure is not site--dependent, which, in turn, stems from the
imposed periodic boundary conditions and from the translationally invariant jump
probabilities.
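The identity \eqref{mediaI} can also be checked numerically by truncating the series defining the Gibbs marginal \eqref{nu}; the following sketch is our own illustration (the truncation level \texttt{kmax} is an arbitrary choice of ours) and confirms that the mean intensity equals the fugacity, irrespective of the thresholds:

```python
def g(k, A, S):
    # intensity function of Eq. (soglia)
    if k == 0:
        return 0
    return 1 if k <= A else (k - A + 1 if k <= S else S - A + 1)

def gibbs_marginal(z, A, S, kmax=400):
    # truncated weights z^k / (g(1)...g(k)), built iteratively, then normalized
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * z / g(k, A, S))
    norm = sum(w)
    return [wk / norm for wk in w]

def mean_intensity(z, A, S):
    # nu_z[g(omega_x)], cf. Eq. (mediaI); should equal z
    nu = gibbs_marginal(z, A, S)
    return sum(nu[k] * g(k, A, S) for k in range(len(nu)))
```

The check is reliable as long as $z$ lies inside the convergence region of the series (any $z>0$ for $S=\infty$, $z<S-A+1$ for finite $S$).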
\begin{figure}[h]
\centering
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-rb01a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-rb01b.pdf}
\vspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-rb02a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-rb02b.pdf}
\caption{\textit{Left panel, top row}:
Graph of the function $\bar\rho(z)$ for
$A=1$ and for different values of the saturation
threshold, i.e. $S=1,2,5,\infty$.
\textit{Right panel, top row}:
Graph of the function $\bar\rho(z)$ for
$S=\infty$ and for different values of the activation threshold, i.e. $A=1,2,5,\infty$. \textit{Left panel, bottom row}: Graph of the
function $\bar\rho(z)$ for
$A=3$ and for $S=3,4,10,\infty$. \textit{Right panel, bottom row}:
Graph of the function $\bar\rho(z)$ for $S=10$ and for $A=1,2,5,10$.
}
\label{f:rhobar}
\end{figure}
In discussing the hydrodynamic limit, a special role is
played by the function
\begin{equation}
\label{dens}
\bar{\rho}(z)
=
\sum_{k=0}^\infty k\,\nu_{z}(k)
\;\;.
\end{equation}
It is possible to derive a nice expression for the function $\bar{\rho}$
independently of the particular choice of the intensity function.
Indeed, recalling \eqref{nu}, equation \eqref{dens} can be rewritten as
\begin{displaymath}
\bar{\rho}(z)
=
C_{z}\sum_{k=1}^\infty k\,\frac{z^k}{g(1)\cdots g(k)}
=
z\,C_{z}\frac{\textrm{d}}{\textrm{d}z}
\sum_{k=1}^\infty \frac{z^k}{g(1)\cdots g(k)}
=
z\,C_{z}\frac{\textrm{d}}{\textrm{d}z}
\frac{1}{C_{z}}\,
\sum_{k=1}^\infty \nu_{z}(k)
\end{displaymath}
which implies
\begin{equation}
\label{dens02}
\bar{\rho}(z)
=
z\,C_{z}\frac{\textrm{d}}{\textrm{d}z}
\frac{1}{C_{z}}
=
-\frac{z}{C_{z}}\,\frac{\textrm{d}}{\textrm{d}z}C_{z}
=
-z\,\frac{\textrm{d}}{\textrm{d}z}\log C_{z}
\;\;.
\end{equation}
At the same level of generality,
it is not difficult to prove that
$\bar\rho(z)$ is an increasing function of the
fugacity. Indeed, after some straightforward
algebra, one can prove that
\begin{equation}
\label{derivata}
\frac{\partial}{\partial z}\bar\rho(z)
=
\frac{\partial}{\partial z}
C_{z}\sum_{k=1}^\infty \frac{kz^k}{g(1)\cdots g(k)}
=
\frac{1}{z}[\nu_{z}(\eta_1^2)-(\nu_{z}(\eta_1))^2]
>0
\;\; .
\end{equation}
We mention that the above result is strictly connected to the fact
that $(-\log C_{z})$ is a convex function.
Since $\bar\rho$ is strictly increasing, it is invertible: we denote by
$\bar{z}(\rho)$ its inverse function, namely, the fugacity associated with
the density $\rho$.
Finally, we observe that
$\bar{\rho}$ is defined for any positive $z$
if $A$ is finite and $S=\infty$. On the other hand, it displays
a singularity, i.e., it is defined only for $z$ small enough,
if $S$ is finite or when $A=S$ (simple exclusion--like model);
see Figure~\ref{f:rhobar}.
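Although no closed form is needed at this stage, $\bar\rho(z)$ can be evaluated by direct truncation of the series \eqref{dens}. The snippet below is an illustrative computation of ours (truncation level arbitrary); it reproduces the limiting laws $\bar\rho(z)=z$ (independent particles) and $\bar\rho(z)=z/(1-z)$ (simple exclusion--like case) derived later in the text:

```python
def g(k, A, S):
    # intensity function of Eq. (soglia)
    if k == 0:
        return 0
    return 1 if k <= A else (k - A + 1 if k <= S else S - A + 1)

def rho_bar(z, A, S, kmax=500):
    # bar-rho(z) = sum_k k nu_z(k), with the series truncated at kmax;
    # when S is finite the series converges only for z < S - A + 1
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * z / g(k, A, S))
    norm = sum(w)
    return sum(k * wk for k, wk in enumerate(w)) / norm
```
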
\begin{figure}
\centering
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-D03a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-D03b.pdf}
\vspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-D04a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-D04b.pdf}
\caption{\textit{Left panel, top row}: Behavior of the diffusion coefficient
$D(\rho)$ vs. $\rho$ for $A=1$ and for different values of the
saturation threshold,
i.e., $S=1,2,5,\infty$.
\textit{Right panel, top row}: Behavior of the diffusion coefficient
$D(\rho)$ vs. $\rho$ for
$S=\infty$ and for different values of the activation threshold,
i.e., $A=1,2,5,\infty$.
\textit{Left panel, bottom row}: Behavior of the diffusion coefficient
$D(\rho)$ vs. $\rho$ for $A=3$ and for
$S=3,4,10,\infty$. \textit{Right panel, bottom row}: Behavior of the diffusion coefficient
$D(\rho)$ vs. $\rho$ for $S=10$ and for $A=1,2,5,10$.}
\label{fig:diff}
\end{figure}
\section{Hydrodynamic limit for reversible dynamics}
\label{s:hydlimeq}
\par\noindent
The dynamics with $p=1/2$ is
{\em reversible} with respect to the invariant measure.
The evolution of the distribution of the particles
on the space $\Lambda$ for the ZRP
with thresholds $A$ and $S$ introduced above can be described in the diffusive
hydrodynamic limit via the time evolution of the \textit{density
function} $\rho(x,t)$, with the space variable $x$
varying in the interval $[0,1]$ and $t\ge0$.
In the framework of one--dimensional ZRP,
hydrodynamic equations are derived rigorously under
the assumption that the intensity function is non--decreasing.
We refer to \cite[Chapter~III]{pres} and
\cite[Chapter~5]{KL}
for a detailed discussion and the rigorous proof.
The first proof of this result can be found in \cite{DMF}
and is based on the results reported in \cite{Andjel}.
It suffices, here, to recall the main findings:
one can prove that for $p=1/2$ the continuous
space density $\rho(x,t)$
is the solution of the partial differential equation
\begin{equation}
\label{diffusion}
\frac{\partial}{\partial t}\rho
=
-\frac{\partial}{\partial x}J(\rho)
\end{equation}
where the \textit{macroscopic flux} $J(\rho)$ is defined as
\begin{equation}
\label{J}
J(\rho)
=
-\frac{1}{2}
D(\rho)
\frac{\partial}{\partial x}\rho
\end{equation}
with the \textit{diffusion coefficient} $D$ given by
\begin{equation}
\label{D}
D(\rho)
=
\frac{\partial }{\partial \rho}
\nu_{\bar{z}(\rho)}\left[g(\omega_1)\right]
\;\;.
\end{equation}
Note that the diffusion coefficient is here computed in terms
of the mean of the intensity function evaluated against the
single site Gibbs measure with fugacity corresponding to the
local value of the density.
Note that, even if it is not coded in the notation,
the diffusion coefficient $D$ depends on the values of the thresholds.
One of the main multi-scale aspects of our analysis is, indeed, precisely the link between the two thresholds $A$ and $S$ and the effective diffusion coefficient $D$.
We shall first recall the well known results which hold in the
limiting cases corresponding to the independent particles and simple exclusion--like
dynamics.
\begin{rem}
\label{rem1}
\textit{Independent particle model:\/}
For $A=1$ and $S=\infty$, one has
$C_z=\exp\{-z\}$. Hence, by \eqref{dens02}, it holds
$\bar{\rho}(z) = z$.
Thus,
recalling \eqref{mediaI} and
the definition of $\bar{z}$ given
below \eqref{derivata},
one finds
$\nu_{\bar{z}(\rho)}\left[g(\omega_1)\right]
=
\nu_{\rho}\left[g(\omega_1)\right]
=
\rho
$.
Thus, by using \eqref{D},
the diffusion coefficient reads
$D(\rho)
=1$.
\end{rem}
\begin{rem}
\label{rem2}
\textit{Simple exclusion--like model:\/}
For $A=S$ (either finite or infinite),
one has $g(k)=1$ for any $k\ge1$ and $g(0)=0$.
Hence, $C_z=1-z$, and it holds
$\bar{\rho}(z) = z/(1-z)$.
Thus, proceeding as above, one finds the law
$D(\rho)=1/(1+\rho)^2$, cf. \cite{Ferrari}.
\end{rem}
Hence, in the two limiting cases, one can easily determine the expression of the diffusion coefficient. In the general case, i.e.
for arbitrary values of the thresholds $A$ and $S$, we
exploit the following strategy.
We use, first, \eqref{norm000} and \eqref{dens02} to compute
$\bar{\rho}(z)$, whose explicit expression in terms of special
functions is reported in Appendix~\ref{sec:app1}.
Then, we compute the diffusion coefficient via the equation \eqref{D},
where we use equation \eqref{mediaI} to express the average
of the intensity function with respect to the Gibbs measure and invert
the function $\bar{\rho}(z)$ to obtain $\bar{z}(\rho)$.
More concisely, we write
\begin{equation}
\label{Dgen}
D(\rho)
=
\frac{\partial}{\partial\rho}
\nu_{\bar{z}(\rho)}\left[g(\omega_1)\right]
=
\frac{\partial}{\partial\rho}
\bar{z}(\rho)
=
\Big(
\frac{\partial}{\partial z}
\bar{\rho}(z)
\Big)^{-1}
\Big|_{z=\bar{z}(\rho)}
\end{equation}
We remark that the explicit expression of the
quantity
$\partial\bar\rho(z)/\partial z$ appearing in
\eqref{Dgen} is quite
lengthy and will be omitted here.
By performing the above computation, we thus obtain the expression
of the diffusion coefficient $D(\rho)$.
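The procedure just described can also be carried out numerically: since $\bar\rho$ is increasing, see \eqref{derivata}, it can be inverted by bisection, and \eqref{Dgen} can then be evaluated by a central finite difference. The sketch below is our own illustration (truncation level, bisection bound, and difference step are arbitrary choices of ours, and it is not the special--function expression of Appendix~\ref{sec:app1}):

```python
def g(k, A, S):
    # intensity function of Eq. (soglia)
    if k == 0:
        return 0
    return 1 if k <= A else (k - A + 1 if k <= S else S - A + 1)

def rho_bar(z, A, S, kmax=500):
    # truncated version of the series defining bar-rho(z)
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * z / g(k, A, S))
    return sum(k * wk for k, wk in enumerate(w)) / sum(w)

def z_bar(rho, A, S, z_hi):
    # invert the increasing function rho_bar by bisection; z_hi must lie
    # inside the convergence region (z < S - A + 1 when S is finite)
    # and satisfy rho_bar(z_hi) > rho
    lo, hi = 0.0, z_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if rho_bar(mid, A, S) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def D(rho, A, S, z_hi, h=1e-6):
    # Eq. (Dgen): D(rho) = (d rho_bar/dz)^(-1) evaluated at z = z_bar(rho)
    z = z_bar(rho, A, S, z_hi)
    return 2.0 * h / (rho_bar(z + h, A, S) - rho_bar(z - h, A, S))
```

For $A=1$ and very large $S$ this returns $D(\rho)\simeq1$, while for $A=S$ it reproduces $D(\rho)\simeq1/(1+\rho)^2$, in agreement with Remark~\ref{rem1} and Remark~\ref{rem2}.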
Figure~\ref{fig:diff} shows the behavior of the diffusion
coefficient as a function of the local density
and parameterized by the values
of the thresholds. In particular, the upper left panel of
Figure~\ref{fig:diff} refers to the case $A=1$
and for different values of $S$: the simple exclusion--like model is recovered for $S=1$, while the independent particle model is attained for
$S=\infty$. Similarly, the upper right panel illustrates the case with
$S=\infty$
and for different values of $A$: here the independent particle model corresponds to $A=1$ and the simple
exclusion--like model is found for $A=\infty$. As shown in both the upper panels of Figure~\ref{fig:diff}, in the independent particle
case the diffusion coefficient is
constant with respect to the local density and is equal to unity.
A noteworthy feature of the diffusion coefficient, clearly visible in the upper right panel as well as in both the lower panels of Figure~\ref{fig:diff},
is the loss of monotonicity of the function $D(\rho)$ occurring at values of $\rho$ exceeding
some critical value (depending, in general, on $A$ and $S$).
This remark can be interpreted
as the effect,
at the hydrodynamic level, of an activation threshold $A>1$ and/or
$S< \infty$ acting at the more microscopic, dynamical, level: both conditions locally pull
the dynamics
away from the independent particle behavior.
Note, for instance, the behavior of $D(\rho)$ displayed in the lower left panel of Figure~\ref{fig:diff},
referring to the case $A=3$.
Considering, in particular, the green curve, corresponding to
$S=10$, one observes
the onset of a double loss of monotonicity of the function $D(\rho)$:
for small values of the density, $D$ stays close to the simple exclusion--like behavior and
decreases with $\rho$,
then, after a first critical value of the density, it starts to rise, until it eventually
drops again when $\rho$ exceeds an upper critical value.
This reflects precisely the existence of a double
threshold for the intensity function,
described by \eqref{soglia}.
More precisely, if the local density is smaller than some
critical value (close to the activation threshold $A$) the behavior is
essentially simple exclusion--like, because the intensity function
is fixed to unity for the typical values of the number of on site particles
corresponding to such a density. On the other hand,
if the local density exceeds this first critical value, the typical
number of on site particles happens to fall above the activation
threshold. Hence, since in this regime the intensity function
is proportional to the number of on site particles, the diffusion
coefficient starts growing as a function of the local density.
Finally, if the local density exceeds a second critical value
(close to the saturation threshold $S$), the intensity function
attains a constant value independently of the number of on site
particles, and the diffusion coefficient, as in the simple exclusion--like regime, becomes again a decreasing
function of the local density.
The effect of the two thresholds on the diffusion coefficient is, therefore, clear:
at fixed saturation threshold, the diffusion coefficient decreases with
increasing activation threshold.
On the other hand, at fixed activation threshold,
the diffusion coefficient increases with
increasing saturation threshold.
Moreover, in the presence of reversible dynamics,
the dependence of the diffusion coefficient on the
density may become non--monotonic.
\begin{figure}
\centering
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde01.pdf}
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde06.pdf}
\\
$\phantom.$
\\
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde07.pdf}
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde08.pdf}
\\
\caption{Plot of the solution of the equation \eqref{diffusion} $\rho(x,t)$
vs.\ $x$, with a given initial condition and with periodic boundary conditions.
Dotted and dashed lines refer to the two limiting cases corresponding,
respectively, to the independent particle (Remark~\ref{rem1})
and to the simple exclusion--like (Remark~\ref{rem2}) process.
Solid lines refer to intermediate cases
$A=5$ and $S=10$ (gray) and $A=2$ and $S=10$ (black).
Different panels, in lexicographic order,
report data referring to time
$t=0$, $t=0.01$, $t=0.02$, $t=0.1$.}
\label{pde}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde02.pdf}
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde03.pdf}
\\
$\phantom.$
\\
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde04.pdf}
\includegraphics[width=6cm]{ccm-fig-zrpnew-pde05.pdf}
\\
\caption{The same as in Figure~\ref{pde}.
Different panels, in lexicographic order,
report data referring to time
$t=0.0001$, $t=0.0002$,
$t=0.0005$, $t=0.0008$.}
\label{pde02}
\end{figure}
The resulting behavior of the diffusion coefficient can be better understood by also
recalling that the number $\mathcal{N}(t)$ of particles departing from the site $x\in\Lambda$ is described by a
non--homogeneous Poisson process with time--dependent rate parameter
$g(\omega_x(t))$ (the escape rate). Thus, given a small $\delta>0$, it holds
$$
P_t[\mathcal{N}(\delta)=1]\simeq g(\omega_x(t)) \delta
$$
where $P_t[\mathcal{N}(\delta)=1]$ is the probability of
exactly one change in $\omega_x(t)$ in the time interval
$(t,t+\delta)$. Then, for values of the threshold $A$ and
$S$ different, respectively, from $1$ and $\infty$ (independent particle model),
$g(\omega_x)$ takes a lower value compared to that
referring to the independent particle model,
with a minimum (corresponding to $g(\omega_x)=1$) attained when $A=S$
(i.e., simple exclusion--like model).
The effect of the threshold on the dynamics, in the hydrodynamic limit,
is also visible in Figure~\ref{pde},
showing the profiles, at different times, of the function
$\rho(x,t)$ solving \eqref{diffusion}--\eqref{J}, for four different choices of the thresholds.
The numerical solutions of the PDE \eqref{diffusion}
exhibit the fastest decay in the independent particle case and
the slowest one in the simple exclusion--like case,
whereas in the two other plotted cases the decay rate
is intermediate.
This is in perfect agreement with the data plotted for the diffusion
coefficient in Figure~\ref{fig:diff}: indeed,
such a coefficient is maximal in the independent particle case and
minimal in the simple exclusion--like situation.
Similar data, at different times, have been plotted in
Figure~\ref{pde02}.
We note that the curves corresponding to the two cases
$A=5$ and $S=10$ (gray) and $A=2$ and $S=10$ (black)
swap as time goes by.
This behavior is in perfect agreement with that
shown by the diffusion coefficient in
the bottom right panel of Figure~\ref{fig:diff}.
\section{Hydrodynamic limit in presence of a drift}
\label{hydlimdrift}
In Section \ref{s:hydlimeq} we discussed the effect of the thresholds
on the diffusion equation describing the macroscopic behavior of the
system in the hydrodynamic limit. In this Section we investigate
how the dynamics depends on the thresholds under the effect of an
external field breaking the condition of detailed balance and
inducing a non--vanishing particle current across the system.
That is, we tackle, here, the analysis of the hydrodynamic limit of the ZRP with
$p\neq1/2$ and in presence of the two thresholds.
The evolution of the distribution of the particles
for a ZRP
subjected to the two aforementioned thresholds
and to a non--vanishing drift can be described, in the hydrodynamic limit, in terms of the \textit{density
function} $\rho(x,t)$ with the space variable $x$
varying in the interval $[0,1]$ and $t\ge0$.
\begin{figure}
\centering
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-v05a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-v05b.pdf}
\vspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-v06a.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-v06b.pdf}
\caption{\textit{Left panel, top row}: Behavior of the velocity
$v/(2p-1)$ vs. $\rho$ for $A=1$ and for different
values of the saturation threshold,
i.e., $S=1,2,5,\infty$.
\textit{Right panel, top row}: Behavior of the velocity
$v/(2p-1)$ vs. $\rho$ for $S=\infty$ and for different
values of the activation threshold,
i.e., $A=1,2,5,\infty$. \textit{Left panel, bottom row}:
Behavior of the velocity
$v/(2p-1)$ vs. $\rho$ for $A=3$ and for $S=3,4,10,\infty$.
\textit{Right panel, bottom row}: Behavior of the velocity
$v/(2p-1)$ vs. $\rho$ for $S=10$ and for $A=1,2,5,10$.}
\label{fig:v}
\end{figure}
It can be proven that the equation governing the evolution of
the macroscopic local density $\rho$ is \eqref{diffusion}
with the
\textit{macroscopic current} $J(\rho)$ defined as
\begin{equation}
\label{Jns}
J(\rho)
=
(2p-1)
\nu_{\bar{z}(\rho)}\left[g(\omega_1)\right]
\end{equation}
where, we recall, the intensity function is defined in \eqref{soglia} and
the Gibbs measure is defined in \eqref{nu}, see \cite[equation (1.3)]{CR}.
In this out--of--equilibrium regime, the relevant quantity we look at
is the \textit{velocity}, defined as
$v(\rho)=J(\rho)/\rho$. In particular, it is worth clarifying, here, how the constitutive relation $v$
vs.\ $\rho$ is affected by the activation and
saturation thresholds.
This point may also lead to a more detailed understanding of the
so--called ``fundamental diagrams'', typically invoked in the context of
pedestrian flows investigations.
We can now use our results of Section~\ref{s:modello} to compute
the current.
First, note that, for any value of the threshold,
by \eqref{mediaI}, it holds
\begin{equation}
\label{flussom}
J(\rho)=(2p-1)\,\bar{z}(\rho).
\end{equation}
It is not possible to write such an expression explicitly, except for the
independent particle and simple exclusion--like cases,
in which it is straightforward to derive the well known results
\begin{equation}
\label{udfipse}
v(\rho)=2p-1
\;\;\;\textrm{ and }\;\;\;
v(\rho)=\frac{2p-1}{1+\rho},
\end{equation}
respectively, where we used the results in Remarks~\ref{rem1} and \ref{rem2}.
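Combining \eqref{flussom} with a numerical inversion of $\bar\rho$, the rescaled velocity $v/(2p-1)=\bar z(\rho)/\rho$ can be tabulated for arbitrary thresholds. The following sketch is again only illustrative (truncation level and bisection bound are choices of ours):

```python
def g(k, A, S):
    # intensity function of Eq. (soglia)
    if k == 0:
        return 0
    return 1 if k <= A else (k - A + 1 if k <= S else S - A + 1)

def rho_bar(z, A, S, kmax=500):
    # truncated version of the series defining bar-rho(z)
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * z / g(k, A, S))
    return sum(k * wk for k, wk in enumerate(w)) / sum(w)

def rescaled_velocity(rho, A, S, z_hi):
    # v(rho)/(2p-1) = z_bar(rho)/rho, with z_bar obtained by bisection;
    # z_hi must lie inside the convergence region with rho_bar(z_hi) > rho
    lo, hi = 0.0, z_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if rho_bar(mid, A, S) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / rho
```

The independent particle case gives $v/(2p-1)\simeq1$ and the simple exclusion--like case gives $v/(2p-1)\simeq1/(1+\rho)$, matching \eqref{udfipse}.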
Figure~\ref{fig:v} shows the behavior of the velocity $v$
as a function of the local density for different values of $A$ and $S$.
An inspection of the upper left panel of Figure~\ref{fig:v}
confirms that the velocity
divided by the bias $(2p-1)$
is equal to unity for the independent particle model, and behaves as $(1+\rho)^{-1}$ in the simple exclusion--like case.
Similarly to the case of the diffusion coefficient, we also notice the presence of a
non--monotonic behavior of $v$ as a function of $\rho$,
occurring if $A>1$ and
$S\gg A$.
Again, this effect can be ascribed to the peculiar properties of
the microscopic dynamics, constrained by the two thresholds.
In particular, the right top panel of Figure~\ref{fig:v}
shows the case $S=\infty$. In the absence of limitations due to the exit capacity,
if no limitation on the communication occurs ($A=1$),
the typical speed is maximal and it does not depend on the
local density.
On the other hand, when $A>1$, the speed decreases until the density exceeds a critical value (depending on the two thresholds), and, after that,
it starts to increase until it attains the ideal maximal value
at large $\rho$.
Indeed, if the density is below such a critical value the intensity
function is equal to one independently of the typical number of on site particles, hence the number of particles that
leave a site per unit of time does not depend on the number of particles
on it. On the other hand, when such a critical value is
exceeded, the intensity function starts to grow proportionally
to the number of on site particles and the typical velocity
starts to increase with the local density.
In the extreme case $A=\infty$, no communication
is possible however large the density is, hence the typical
velocity is a monotonically decreasing function of $\rho$.
In the right bottom panel, the case $S=10$ is portrayed: the graphs
show that as a result of the constraints imposed by the two dynamical thresholds, there exists a local value of the
density optimizing the typical speed. Such a density has to be large
enough so that communication is efficient but, also, small enough
so that the limitation on the escape capacity does not cause an abrupt
drop of the typical velocity.
We also ran a set of Monte Carlo simulations for a ZRP on a finite lattice equipped with periodic boundary conditions, in order to check the consistency of the results for the velocity $v(\rho)$ obtained above in the hydrodynamic limit.
The dynamics on the finite lattice was performed using the following steps:
\begin{itemize}
\item[(i)] a number $\tau$ is chosen at random with
exponential distribution of
parameter $\sum_{x=1}^Lg(\omega_x(t))$, and time is correspondingly updated to
$t+\tau$;
\item[(ii)]
a site is chosen at random with probability
$g(\omega_x(t))/\sum_{x=1}^Lg(\omega_x(t))$;
\item[(iii)] a particle is moved from the selected site to one of its
nearest neighbors on the right or on the left with probability $p$ or,
respectively, $1-p$.
\end{itemize}
Starting from an arbitrary initial configuration $\omega_0$ at time $t=0$, the simulation is then let evolve for $n_{tot}\sim10^7$ steps.
The stationary current is then obtained by computing the difference
between the total number of particles hopping from the site $L$ to the site $1$
and that of particles jumping from $1$ to $L$,
and dividing the resulting value by the total time.
It is also worth remarking that the magnitude of $n_{tot}$ was chosen large enough to guarantee that a stationary value of the current is attained.
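The simulation scheme (i)--(iii) can be transcribed, for instance, as follows. This is a minimal kinetic Monte Carlo sketch of ours; the lattice size, particle number, and step count used below are placeholders, not the values used for the figures:

```python
import random

def g(k, A, S):
    # intensity function of Eq. (soglia)
    if k == 0:
        return 0
    return 1 if k <= A else (k - A + 1 if k <= S else S - A + 1)

def simulate_current(L, N, p, A, S, n_steps, seed=0):
    """Estimate the stationary current of the ZRP on a ring of L sites
    with N particles: net number of hops across the bond (L, 1)
    divided by the total elapsed time."""
    rng = random.Random(seed)
    omega = [N // L] * L
    for i in range(N % L):        # spread the remainder of the particles
        omega[i] += 1
    rates = [g(k, A, S) for k in omega]
    sites = list(range(L))
    t, net = 0.0, 0
    for _ in range(n_steps):
        total = sum(rates)
        # (i) exponential waiting time with parameter sum_x g(omega_x)
        t += rng.expovariate(total)
        # (ii) pick a site with probability g(omega_x)/total
        x = rng.choices(sites, weights=rates, k=1)[0]
        # (iii) move one particle right with prob. p, left with prob. 1-p
        y = (x + 1) % L if rng.random() < p else (x - 1) % L
        if x == L - 1 and y == 0:
            net += 1
        elif x == 0 and y == L - 1:
            net -= 1
        omega[x] -= 1
        omega[y] += 1
        rates[x] = g(omega[x], A, S)
        rates[y] = g(omega[y], A, S)
    return net / t
```

For independent particles ($A=1$, $S$ large) the estimate should approach the hydrodynamic value $J(\rho)=(2p-1)\rho$, while for $p=1/2$ the stationary current vanishes.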
\begin{figure}
\centering
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-sim01.pdf}
\hspace{1mm}
\includegraphics[width=7.7cm]{ccm-fig-zrpnew-sim02.pdf}
\caption{Comparison of analytical results for the velocity in the hydrodynamic limit (cf. Fig.~\ref{fig:v}) with Monte Carlo simulations. The squares and the triangles denote the results of simulations obtained with $p=0.6$ and $p=0.8$, respectively.}
\label{fig:simul}
\end{figure}
In Fig.~\ref{fig:simul} the results of the Monte Carlo simulations obtained with $p=0.6$ and $p=0.8$, $L=100$ and for increasing values of $N$, are displayed together with the corresponding curves shown in Fig.~\ref{fig:v} and referring to the hydrodynamic limit. The left plot of Fig.~\ref{fig:simul} shows two cases, the first corresponding to $A=1$ and $S=5$, the second to $A=3$ and $S=10$. Similarly, the right plot shows two different cases: the first with $A=5$ and $S=10$, the second with $A=5$ and $S=N$. The plots reveal that our numerical simulations quantitatively reproduce the predicted behavior of $v(\rho)$ in the hydrodynamic limit, including, in particular, the non--monotonic behavior of the velocity present for finite values of the two thresholds $A$ and $S$.
\section{Possible interpretations of the two thresholds}
\par\noindent
It is worth mentioning that working with two thresholds leads to rich descriptions in terms of modeling. In particular, a double-threshold dynamics lends itself to multiple interpretations, viz.
\begin{itemize}
\item[(i)] {\em Porous media interpretation}: Essentially, the bulk porosity estimates how many particles can be accommodated in a cell. This connects to the saturation threshold. The saturation threshold is essentially proportional to the surface porosity, since it is a measure of the exit capacity. We refer to \cite{Bear} for a closer look at the porous media interpretation.
\item[(ii)] {\em Mechanical interpretation}: Imagine, for a moment, that the tunnels are equipped with valve-like doors whose opening results from the balance between the pressure inside the cell and an outer pressure exerted by a spring.
A minimal -- structural -- opening of the door, with the spring maintained at rest, corresponds to the presence of an activation threshold. Any further opening of the door is hence achieved by compensating the external pressure of the spring, which is considered to increase proportionally to the displacement of the door, as dictated by Hooke's law. Finally, the maximal opening of the door, in the presence of the minimum elongation of the spring, corresponds to the saturation threshold.
See, e.g., \cite{Aiki-rice} for a scenario describing how pressure/temperature-controlled shape--memory alloys facilitate the functioning of the Japanese rice cooking machine.
\item[(iii)] {\em Psychologico--geometrical interpretation}:
The activation threshold is a measure of the domain of communication between the individuals and of the level at which this communication is processed towards a decision on the motion (either on orientation in the dark, or on the chosen speed). Essentially, we imagine that this activation threshold is inversely proportional to {\em the level of trust} (see our interpretations proposed in \cite{CRAS-CCM}). The saturation threshold is then directly proportional to the capacity of the exit(s).
\item[(iv)] {\em A phase transitions perspective}: The assumption here is that pedestrians evacuating the obscure tunnel undergo a transition of the first kind (like the ice-water transition, cf. Landau's classification): from being trapped in the dark tunnel to being free to go in corridors where they can choose their own desired velocity. The parallel can be made a bit more precise by applying the Clapeyron equation in this context to translate differences in temperature into differences in pressure. The two thresholds can now be seen as the direct counterparts of the accumulated heat content (amount of phonons) needed
to melt the ice (the activation threshold) and the amount of accumulated heat content needed to evaporate water (the saturation threshold). Essentially, we mean here that the dynamics is ``frozen'' for densities below the activation threshold and people ``evaporate'' from the tunnel for densities of the order of magnitude of the saturation threshold. Remotely related connections to phase transitions supposed to happen
in social systems are reported, for instance, in \cite{PT1,PT2}.
\end{itemize}
\section{Discussion}
\par\noindent
\subsection{Multiscale modeling perspectives}
\par\noindent
We considered a one--dimensional ZRP equipped with
periodic boundary conditions and characterized by symmetric or asymmetric jump probabilities.
The novelty of our approach stems from introducing the two thresholds $A$ and $S$ affecting the stochastic dynamics, together with their interpretations in terms of communication efficiency and exit capacity.
From the mathematics viewpoint, the thresholds can be tuned to control the magnitude of the intensity function, thus allowing one to span a broad variety of zero range dynamics, ranging from the independent particle models to the simple exclusion--like processes.
We then investigated the hydrodynamic limit of the considered ZRP for different values of the thresholds, and discussed the effect of such dynamical constraints on some macroscopic quantities, e.g. the effective diffusion coefficient, the particle density and the effective outgoing current. We recovered known results in the limiting scenarios, and also provided explicit formulae, in terms of special functions, for arbitrary values of the thresholds. Our investigation thus provides a noteworthy bridge between the features of the microscopic stochastic dynamics and some macroscopic observables relevant in the hydrodynamic description of the model, which are also experimentally accessible. Further investigations are needed, next, to extend our results to the even more challenging scenario characterized by the use of non--periodic boundary conditions in the zero range dynamics.
From the pedestrians evacuation viewpoint, we explored the effects of
communication on the effective transport properties of the crowd of pedestrians. More precisely, we were able to emphasize the effect of two thresholds
on the structure of the effective nonlinear diffusion coefficient. One threshold models pedestrians' communication efficiency in the dark, while the other one
describes the tunnel capacity. Essentially, we observe that if the evacuees show a maximum trust (leading to a fast communication), they tend to quickly find the exit
and hence the collective action tends to prevent the occurrence of disasters.
In our context, ``a low activation threshold increases the diffusion coefficient'' means that
``higher trust among pedestrians improves
communication in the dark''
and therefore the exits can be found more easily. The exit capacity is accounted for by the magnitude of the saturation threshold.
Consequently, a higher saturation threshold leads to an improved capacity of the exits (e.g. larger doors, or more exits \cite{Ronchi}) and, hence, the evacuation rate is correspondingly higher.
Similarly, in the presence of a drift, the fundamental diagrams become
non--monotonic with respect to the local pedestrian density. We were able to point out that the fundamental diagrams become independent of the local density as soon as the exit capacity is unbounded.
Interestingly, we were able to detect situations (see, for instance,
Figure~\ref{fig:v}) in which there are particular pedestrian densities optimizing the
speed (see e.g. Figure 2b in \cite{Rajat} for real pedestrian traffic cases where this effect has been observed). It appears that such an {\em optimizing density} must be large
enough so that communication is efficient but, also, small enough
so that the limitation on the escape capacity does not cause an abrupt
drop of the typical flow velocity.
\subsection{Qualitative validation}
\par\noindent
If one wants to make predictions, then models must be calibrated with empirical data. Designing a crowd experiment to test our
pedestrians-moving-in-the-dark model is a challenge from many perspectives (including ethical and practical aspects)
that we do not undertake here. As a future plan, we wish to adapt our model to make progress toward a quantitative validation for scenarios involving pedestrians moving in regions filled with dense smoke, with specific reference to the crowd experiments made by the Department of Fire Safety Engineering of
the Lund University, Sweden, see e.g. \cite{Ronchi,Ronchi2} and references cited therein. In that case, the main target would be to set up a parameter identification procedure at the ZRP level for finding suitable combinations of the thresholds $A$ and $S$ to recover
typical smoke concentration-dependent speed-density relations (fundamental diagrams). Our simulations based on the current ZRP model with two thresholds give hope in this direction: for the drift-dominated dynamics endowed with full communication among pedestrians (i.e. for $A=1$), we recover for the saturation threshold $S=5$ the same monotonic shape of real pedestrian traffic fundamental diagrams as reported, e.g., in \cite{Zhang}; to see this trend, compare Figure~\ref{fig:v} (left panel, top row, $S=5$). Furthermore, we note in the same Figure that as the density increases the fundamental diagram tends towards a linear profile regardless of the choice of the threshold $S$. Such a situation is considered standard for pedestrian dynamics; compare for instance Figure 3.4 on page 33 in \cite{Corbetta} or \cite{SS08}.
\section*{Acknowledgements}
The authors wish to thank Errico Presutti (Gran Sasso Science Institute,
L'Aquila, Italy), Anna De Masi (University of L'Aquila, Italy),
and Claudio Landim (IMPA, Rio de Janeiro, Brazil) for useful discussions.
ENMC thanks ICMS (TU/e, Eindhoven, The Netherlands) for the very
kind hospitality and for financial support.
\section{Introduction}
It is widely known that sufficient data volume is necessary for training a successful machine learning algorithm \cite{domingos2012few} for medical image analysis.
Data with high class imbalance or of insufficient variability \cite{shin2016deep} leads to poor classification performance.
This often proves to be problematic in the field of medical imaging where abnormal findings are by definition uncommon.
Moreover, in the case of image segmentation tasks, the time required to manually annotate volumetric data only exacerbates this disparity; manually segmenting an abnormality in three dimensions can require upwards of fifteen minutes per study, making it impractical in a busy radiology practice.
The result is a paucity of annotated data and considerable challenges when attempting to train an accurate algorithm.
While traditional data augmentation techniques (e.g., crops, translation, rotation) can mitigate some of these issues, they fundamentally produce highly correlated image training data.
In this paper we demonstrate one potential solution to this problem by generating synthetic images using a generative adversarial network (GAN) \cite{goodfellow2014generative}, which provides an additional form of data augmentation and also serves as an effective method of data anonymization.
Multi-parametric magnetic resonance images (MRIs) of abnormal brains (with tumor) are generated from segmentation masks of brain anatomy and tumor.
This offers an automatable, low-cost source of diverse data that can be used to supplement the training set.
For example, we can alter the tumor's size, change its location, or place a tumor in an otherwise healthy brain, systematically obtaining both the image and the corresponding annotation.
Furthermore, a GAN trained on hospital data to generate synthetic images can serve as an anonymization tool, allowing the data to be shared outside of the institution.
Medical image simulation and synthesis have been studied for some time and are increasingly gaining traction in the medical imaging community \cite{8305584}.
This is partly due to the exponential growth in data availability, and partly due to the availability of better machine learning models and supporting systems.
Twelve recent studies on medical image synthesis and simulation were presented in the special issue of Simulation and Synthesis in Medical Imaging \cite{8305584}.
This work falls into the synthesis category, and most related works are those of Chartsias \textit{et al} \cite{8071026} and Costa \textit{et al} \cite{8055572}.
We use publicly available data sets (ADNI and BRATS) to demonstrate multi-parametric MRI image synthesis, while Chartsias \textit{et al} \cite{8071026} use the BRATS and ISLES (Ischemic Stroke Lesion Segmentation 2015 challenge) data sets.
However, their evaluation criteria for synthetic images were based on MSE, SSIM, and PSNR, not directly on diagnostic quality.
Costa \textit{et al} \cite{8055572} used a GAN to generate synthetic retinal images with labels, but the ability to represent more diverse pathological patterns was limited compared to this work.
Also, both previous works were demonstrated on 2D images or slices/views of 3D images, whereas in this work we directly process 3D input/output.
The input/output dimension is 4D when it is multi-parametric (T1/T2/T1c/Flair).
We believe processing data in its native 3D/4D form better reflects the reality of the data and their associated problems.
Reflecting the general trend of the machine learning community, the use of GANs in medical imaging has increased dramatically in the last year.
GANs have been used to generate a motion model from a single preoperative MRI \cite{hu2017intraoperative}, upsample a low-resolution fundus image \cite{mahapatra2017image}, create a synthetic head CT from a brain MRI \cite{nie2017medical}, and synthesize T2-weighted MRI from T1-weighted ones (and vice-versa) \cite{dar2018image}.
Segmentation using GANs was demonstrated in \cite{zhang2017deep,yang2017automatic}.
Finally, Frid-Adar \textit{et al}. leveraged a GAN for data augmentation, in the context of liver lesion classification \cite{frid2018synthetic}.
To the best of our knowledge, there is no existing literature on the generation of synthetic medical images as a form of anonymization and data augmentation for tumor segmentation tasks.
\section{Data}
\label{sec:data}
\subsection{Dataset}
We use two publicly available data set of brain MRI:
\\\\
\textbf{Alzheimer's Disease Neuroimaging Initiative (ADNI) data set}\\
The ADNI was launched in 2003 as a public-private partnership, led by principal investigator Michael W. Weiner, MD.
The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD).
For up-to-date information on the ADNI study, see \url{www.adni-info.org}.
We follow the approach of \cite{schwarz2016large} that is shown to be effective for segmenting the brain atlas of ADNI data.
The atlas of white matter, gray matter, and cerebrospinal fluid (CSF) in the ADNI T1-weighted images are generated using the SPM12 \cite{ashburner2005unified} segmentation and the ANTs SyN \cite{tustison2014large} non-linear registration algorithms.
In total, there are 3,416 pairs of T1-weighted MRI and their corresponding segmented tissue class images.
\\
\\
\textbf{Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) data set}\\
BRATS utilizes multi-institutional pre-operative MRIs and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas \cite{menze2015multimodal}.
Each patient's MRI image set includes a variety of series including T1-weighted, T2-weighted, contrast-enhanced T1, and FLAIR, along with a ground-truth voxel-wise annotation of edema, enhancing tumor, and non-enhancing tumor.
For more details about the BRATS data set, see \url{braintumorsegmentation.org}.
While the BRATS challenge is held annually, we used the BRATS 2015 training data set which is publicly available.
\subsection{Dataset Split and Pre-Processing}
As a pre-processing step, we perform skull-stripping \cite{iglesias2011robust} on the ADNI data set as skulls are not present in the BRATS data set.
The BRATS 2015 training set provides 264 studies, of which we used the first 80\% as a training set, and the remaining 20\% as a test set to assess final algorithm performance.
Hyper-parameter optimization was performed within the training set and the test set was evaluated only once for each algorithm and settings assessed.
Our GAN operates in 3D, and due to memory and compute constraints, training images were cropped axially to include the central 108 slices, discarding those above and below this central region, then resampled to $128\times 128\times 54$ for model training and inference.
For a fair evaluation of the segmentation performance to the BRATS challenge we used the original images with a resolution of $256\times 256\times 108$ for evaluation and comparison.
However, it is possible that very small tumors get lost in the downsampling, thus affecting the final segmentation performance.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{GAN-workflows}\\
\caption{Illustration of training GAN for (a) MRI-to-brain segmentation; (b) label-to-MRI synthesis; (c) MRI-to-tumor segmentation.}
\label{fig:GAN-workflows}
\vspace{-5pt}
\end{figure}
\section{Methods}
\label{sec:method}
The image-to-image translation conditional GAN (\texttt{pix2pix}) model introduced in \cite{Isola_2017_CVPR} is adopted to translate label-to-MRI (synthetic image generation) and MRI-to-label (image segmentation).
For brain segmentation, the generator \textit{G} is given a T1-weighted image of ADNI as input and is trained to produce a brain mask with white matter, grey matter and CSF.
The discriminator \textit{D} on the other hand, is trained to distinguish ``real'' labels versus synthetically generated ``fake'' labels.
During the procedure (depicted in Figure~\ref{fig:GAN-workflows} (a)) the generator \textit{G} learns to segment brain labels from a T1-weighted MRI input.
Since we did not have an appropriate off-the-shelf segmentation method available for brain anatomy in the BRATS data set, and the ADNI data set does not contain tumor information, we first train the \texttt{pix2pix} model to segment normal brain anatomy from the T1-weighted images of the ADNI data set.
We then use this model to perform inference on the T1 series of the BRATS data set.
The segmentation of neural anatomy, in combination with tumor segmentations provided by the BRATS data set, provide a complete segmentation of the brain with tumor.
The synthetic image generation is trained by reversing the inputs to the generator and training the discriminator to perform the inverse task (i.e., ``is this imaging data acquired from a scanner or synthetically generated?'' as opposed to ``is this segmentation the ground-truth annotation or synthetically generated?'' -- Figure~\ref{fig:GAN-workflows} (b)).
We generate synthetic abnormal brain MRI from the labels and introduce variability by adjusting those labels (e.g., changing the tumor size, moving the tumor's location, or placing a tumor on an otherwise tumor-free brain label).
Then GAN segmentation module is used once again, to segment tumor from the BRATS data set (input: multi-parametric MRI; output: tumor label).
We compare the segmentation performance \textit{1)} with and without additional synthetic data, \textit{2)} using only the synthetic data and fine-tuning the model on 10\% of the real data; and compare the performance of the GAN-based approach to a top-performing algorithm\footnote{\url{https://github.com/taigw/brats17}} \cite{wang2017automatic} from the BRATS 2017 challenge.
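For reference, the objective optimized by the \texttt{pix2pix} model of \cite{Isola_2017_CVPR} combines a conditional adversarial term with an $L_1$ reconstruction term; we write it here without the generator's noise input, for brevity:

```latex
% Conditional GAN objective of pix2pix (Isola et al.):
% x = input (label map or image), y = target image,
% lambda weights the L1 reconstruction term.
\begin{align*}
\mathcal{L}_{cGAN}(G,D) &= \mathbb{E}_{x,y}\big[\log D(x,y)\big]
  + \mathbb{E}_{x}\big[\log\big(1 - D(x,G(x))\big)\big],\\
\mathcal{L}_{L1}(G) &= \mathbb{E}_{x,y}\big[\lVert y - G(x)\rVert_1\big],\\
G^{*} &= \arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G,D)
  + \lambda\,\mathcal{L}_{L1}(G).
\end{align*}
```

The same objective is used for both translation directions (label-to-MRI and MRI-to-label); only the roles of $x$ and $y$ are exchanged.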
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{tm_manipulate_workflow}\\
\caption{Workflow of getting synthetic images with variation. On BRATS data set, MRI-to-label image translation GAN is applied to T1-weighted images to get brain atlas. It is then merged with the tumor label given in the BRATS data set, possibly with alterations (shift tumor location; enlarge; shrink). The merged labels (with possibly alterations) are then used as an input to label-to-MRI GAN, to generate synthetic multi-parametric MRI with brain tumor.}
\label{fig:tm_manipulate_workflow}
\vspace{-5pt}
\end{figure}
\subsection{Data Augmentation with Synthetic Images}
The GAN trained to generate synthetic images from labels allows for the generation of arbitrary multi-series abnormal brain MRIs.
Since we have the brain anatomy label and tumor label separately, we can alter either the tumor label or the brain label to get synthetic images with the characteristics we desire.
For instance, we can alter tumor characteristics such as the size or location within an existing brain and tumor label set, or place a tumor label on an otherwise tumor-free brain label.
Examples of this are shown in Figure~\ref{fig:tm_manipulate_examples}.
The effect of the brain segmentation algorithm's performance has not been evaluated in this study.
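As a concrete sketch of this label manipulation, the following minimal illustration moves a tumor mask within an integer label volume; the function name, label encoding, and use of \texttt{numpy.roll} are our own assumptions, not the implementation used in this work:

```python
import numpy as np

def shift_tumor(label_volume, tumor_id, offset):
    """Move the voxels labeled `tumor_id` by an integer voxel offset,
    leaving all other (brain-atlas) labels in place."""
    tumor = label_volume == tumor_id
    base = np.where(tumor, 0, label_volume)         # erase tumor at old site
    moved = np.roll(tumor, offset, axis=(0, 1, 2))  # relocate the binary mask
    return np.where(moved, tumor_id, base)
```

The same idea extends to enlarging or shrinking the mask (e.g. via morphological dilation or erosion) before re-synthesizing the multi-parametric MRI from the edited label map.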
Since the GAN was first trained on 3,416 pairs of T1-weighted (T1) images from the ADNI data set, the generated T1 images are of high quality and qualitatively difficult to distinguish from their original counterparts.
BRATS data was used to train the generation of non-T1-weighted image series.
Contrast-enhanced T1-weighted images use the same image acquisition scheme as T1-weighted images.
Consequently, the synthesized contrast-enhanced T1 images appear reasonably realistic, although higher contrast along the tumor boundary is observed in some of the generated images.
T2-weighted (T2) and FLAIR image acquisitions are fundamentally different from the T1-weighted images, resulting in synthetic images that are easier to distinguish from scanner-acquired images.
However, given a sufficiently large training set on all these modalities, this early evidence suggests that the generation of realistic synthetic images on all the modalities may be possible.
Other than increasing the image resolution and getting more data especially for the sequences other than T1-weighted images, there are still a few important avenues to explore to improve the overall image quality.
For instance, more attention likely needs to be paid to the tumor boundaries so that they do not look superimposed and discrete when a synthetic tumor is placed.
Also, performance of brain segmentation algorithm and its ability to generalize across different data sets needs to be examined to obtain higher quality synthetic images combining data sets from different patient population.
The augmentation using synthetic images can be used in addition to the usual data augmentation methods such as random cropping, rotation, translation, or elastic deformation \cite{milletari2016v}.
Moreover, the GAN-based synthetic image generation approach gives us more control over the augmented images, since the label provides an additional input that can be perturbed.
The usual data augmentation methods rely mostly on random processes and operate on the whole-image level rather than being specific to a location, such as the tumor.
Additionally, since we generate the image from the corresponding label, we obtain more images for training without needing to go through the labor-intensive manual annotation process.
Figure~\ref{fig:real_synthetic_training} shows the process of training GAN with real and synthetic image and label pairs.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{tm_manipulate_examples}\\
\caption{Examples of generated images. The first row depicts the original (``real'') images on which the synthetic tumors were based. Generated images without adjustment of the segmentation label are shown in the second row. Examples of generated images with various adjustments to the tumor segmentation label are shown in the third through fifth rows. The last row depicts examples of synthetic images where a tumor label is placed on a tumor-free brain label from the ADNI data set.}
\label{fig:tm_manipulate_examples}
\vspace{-5pt}
\end{figure}
\subsection{Generating Anonymized Synthetic Images with Variation}
Protection of personal health information (PHI) is a critical aspect of working with patient data.
Often times concern over dissemination of patient data restricts the data availability to the research community, hindering development of the field.
While removing all DICOM metadata and skull-stripping will often eliminate nearly all identifiable information, demonstrably proving this to a hospital's data sharing committee is near impossible.
Simply de-identifying the data is insufficient.
Furthermore, models themselves are subject to caution when derived from sensitive patient data.
It has been shown \cite{carlini2018secret} that private data can be extracted from a trained model.
Development of a GAN that generates synthetic, but realistic, data may address these challenges.
The first two rows of Figure~\ref{fig:tm_manipulate_examples} illustrate how, even with the same segmentation mask, notable variations can be observed between the generated and original studies.
This indicates that the GAN produces images that do not reflect the underlying patients as individuals, but rather draws individuals from the population in aggregate.
It generates new data that cannot be attributed to a single patient but rather an instantiation of the training population conditioned upon the provided segmentation.
\begin{figure}[t!]
\centering
\includegraphics[width=.95\linewidth]{real_synthetic_training}\\
\caption{Training GAN for tumor segmentation with (a) real; and (b) synthetic image-label pairs. Synthetic data generation can increase the training data set with desired characteristics (e.g., tumor size, location, etc.) without the need of labor-intensive manual annotation.}
\label{fig:real_synthetic_training}
\vspace{-5pt}
\end{figure}
\begin{table}[t]
\vspace{-5pt}
\renewcommand{\arraystretch}{1.3}
\renewcommand{\multirowsetup}{\centering}
\setlength{\belowrulesep}{0pt}
\setlength{\aboverulesep}{0pt}
\caption{Dice score evaluation (mean / standard deviation) of GAN-based segmentation algorithm and BRATS'17 top-performing algorithm \cite{wang2017automatic}, trained on ``real'' data only; real + synthetic data; and training on synthetic data only and fine-tuning the model on 10\% of the real data. GAN-based models were trained both with (with aug) and without (no aug) including the usual data augmentation techniques (crop, rotation, translation, and elastic deformation) during training. All models were trained for 200 epochs to convergence.}
\label{tab:results_gan_sota}
\begin{center}
\begin{tabular}{|c|||c|c||c|c|}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Real} & \multirow{2}{*}{Real + Synthetic} & \multirow{2}{*}{Synthetic only} & Synthetic only, \\
& & & & fine-tune on 10\% real \\ \hline
\cmidrule{1-5}
GAN-based (no aug) & 0.64/0.14 & 0.80/0.07 & 0.25/0.14 & 0.80/0.18 \\
\hline
GAN-based (with aug) & 0.81/0.13 & 0.82/0.08 & 0.44/0.16 & 0.81/0.09 \\
\hline
Wang \textit{et al}. \cite{wang2017automatic} & 0.85/0.15 & 0.86/0.09 & 0.66/0.13 & 0.84/0.15 \\
\hline
\end{tabular}
\end{center}
\vspace{-5pt}
\end{table}
\section{Experiments and Results}
\subsection{Data Augmentation using Synthetic Data}
Dice score evaluation of the whole tumor segmentation produced by the GAN-based model and the model of Wang \textit{et al}. \cite{wang2017automatic} (trained on real and synthetic data) are shown in Table~\ref{tab:results_gan_sota}.
The segmentation models are trained on 80\% of the BRATS'15 training data only, and the training data supplemented with synthetic data.
Dice scores are evaluated on the 20\% held-out set from the BRATS'15 training data.
All models are trained for 200 epochs on NVIDIA DGX systems.
A much improved performance with the addition of synthetic data is observed without usual data augmentation (crop, rotation, elastic deformation; GAN-based (no-aug)).
However, only a small increase in performance is observed when synthetic data is added on top of the usual data augmentation (GAN-based (with aug)); the same applies to the model of Wang \textit{et al}. \cite{wang2017automatic}, which incorporates the usual data augmentation techniques.
The model of Wang \textit{et al}. operates in full resolution ($256\times 256$), combining three 2D models for the axial/coronal/sagittal views, whereas our model and generator operate at half the resolution ($128\times 128\times 54$) due to the GPU memory limit.
We up-sampled the GAN-generated images to twice the generated resolution for a fair comparison with the BRATS challenge; however, it is possible that very small tumors get lost during the down-/up-sampling.
A better performance may be observed using the GAN-based model with an availability of GPU with more memory.
Also, we believe that the generated synthetic images having half the resolution, coupled with the lack of the image sequences for training other than T1-weighted ones possibly led to the relatively small increase in segmentation performance compared to using the usual data augmentation techniques.
We carefully hypothesize that, with more T2/FLAIR images available, better image quality will be observed for these sequences, and hence better performance for more models and tumor types.
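For completeness, the Dice score reported in Table~\ref{tab:results_gan_sota} is the standard overlap measure between predicted and ground-truth binary masks; the helper below is our own minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient 2|A n B| / (|A| + |B|) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A score of 1 indicates perfect overlap and 0 indicates disjoint masks; the whole-tumor scores in the table are means over the held-out test studies.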
\subsection{Training on Anonymized Synthetic Data}
We also evaluated the performance of the GAN-based segmentation on synthetic data only, in amounts greater than or equal to the amount of real data but without including any of the original data.
The dice score evaluations are shown in Table~\ref{tab:results_gan_sota}.
Sub-optimal performance is achieved for both our GAN-based and the model of Wang \textit{et al}. \cite{wang2017automatic} when training on an amount of synthetic data equal to the original 80\% training set.
However, higher performance, comparable to training on real data, is achieved when training the two models using more than five times as much synthetic data (only), and fine-tuning using a 10\% random selection of the ``real'' training data.
In this case, the synthetic data provides a form of pre-training, allowing for much less ``real'' data to be used to achieve a comparable level of performance.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a generative algorithm to produce synthetic abnormal brain tumor multi-parametric MRI images from their corresponding segmentation masks using an image-to-image translation GAN.
High levels of variation can be introduced when generating such synthetic images by altering the input label map.
This results in improvements in segmentation performance across multiple algorithms.
Furthermore, these same algorithms can be trained on completely anonymized data sets allowing for sharing of training data.
When combined with smaller, institution-specific data sets, modestly sized organizations are provided the opportunity to train successful deep learning models.
\bibliographystyle{plain}
\section{Introduction}
Toric varieties associated to root systems have been considered,
investigated and used in several papers
\cite{CP83}, \cite{VK85}, \cite{Pro90}, \cite{DL94}, \cite{Kly95}, \cite{BJ08}.
The so called crystallographic arrangements are generalizations
of the classical root systems and their Weyl chamber structure.
In this paper we establish a one to one correspondence between crystallographic
arrangements and toric varieties which are smooth and projective,
and which have the property of being strongly symmetric, see Def.\ \ref{def:stsym},
a property which has not been used in the previous papers mentioned above.
Crystallographic arrangements have originally been used in the theory of pointed Hopf algebras:
Classical Lie theory leads to the notion of Weyl groups which are special reflection groups
characterized by a certain integrality and which are therefore also
called \emph{crystallographic} reflection groups.
A certain generalization of the universal enveloping algebras of Lie algebras
yields Hopf algebras to which one can associate \emph{root systems} and \emph{Weyl groupoids}
(see \cite{p-H-06}, \cite{p-HS-08}, \cite{p-AHS-08}). The case of \emph{finite Weyl groupoids}
has recently been treated including a complete classification in a series of papers
\cite{p-CH09a}, \cite{p-CH09b}, \cite{p-CH09d}, \cite{p-CH09c}, \cite{p-CH10}.
The theorems needed for the classification reveal an astonishing connection:
It turns out that finite Weyl groupoids correspond to certain simplicial
arrangements called \emph{crystallographic} \cite{p-C10}:
Let $\Ac $ be a simplicial arrangement of finitely many real hyperplanes in a
Euclidean space $V$ and let $R$ be a
set of nonzero covectors such that $\Ac =\{\alpha ^\perp \,|\,\alpha \in R\}$.
Assume that $\RR\alpha \cap R=\{\pm\alpha\}$ for all $\alpha\in R$.
The pair $(\Ac ,R)$ is called crystallographic, see \cite[Def.\ 2.3]{p-C10} or Def.\ \ref{def:cryarr},
if for any chamber $K$ the elements of $R$
are integer linear combinations of the covectors defining the walls of $K$.
For example, crystallographic Coxeter groups give rise to crystallographic
arrangements in this sense, but there are many others.
Thus the main feature of crystallographic arrangements is the integrality.
But integrality is also the fundamental property of a fan in toric geometry.
Indeed, the set of closed chambers of a rational simplicial arrangement is a fan
which is strongly symmetric.
A closer look reveals that the property \emph{crystallographic} corresponds
to the smoothness of the variety.
We obtain (see Thm.\ \ref{thmcor}):
\begin{theor}
There is a one to one correspondence between crystallographic arrangements
and strongly symmetric\ smooth complete fans.
\end{theor}
Thus the classification of finite Weyl groupoids \cite{p-CH10} gives:
\begin{corol}
Any strongly symmetric\ smooth complete toric variety is isomorphic to a product of
\begin{enumerate}
\item varieties of dimension two corresponding to triangulations of a
convex $n$-gon by non-intersecting diagonals (see Section \ref{ranktwo}),
\item varieties of dimension $r>2$ corresponding to the reflection arrangements of type $A_r$, $B_r$, $C_r$
and $D_r$, or to a series of $r-1$ further varieties,
\item $74$ further ``sporadic'' varieties.
\end{enumerate}
\end{corol}
To each crystallographic arrangement $\Ac$, we construct a polytope $P$
such that the toric variety of $P$ is isomorphic to the toric variety
corresponding to $\Ac$. Thus we obtain that the variety is projective,
see Section \ref{projectivity}.
Further, the strong symmetry of the fan $\Sigma$ associated to
$\Ac$
gives rise to a system $\{Y^E\}_{E\in L(\Ac)}$ of
smooth strongly symmetric\ toric varieties $Y^E\subseteq X_\Sigma$
(here $L(\Ac)$ is the poset of intersections of hyperplanes of $\Ac$).
This system mirrors the arrangement $\Ac$ in $X_\Sigma$ in a remarkable
way, see Section \ref{torarr}, and will be called the associated toric
arrangement.
The intersections $Y^H\cap T$ with the torus $T$ of $X_\Sigma$ for
$H\in \Ac$ are subtori of $T$ and form a toric arrangement.
This note is organized as follows. After recalling the notions
of fans and arrangements of hyperplanes in Section \ref{prel},
we collect some results on strongly symmetric\ fans in Section \ref{stsymfans}.
We then prove the main theorem (the correspondence) in Section
\ref{corresp}. In Section \ref{projectivity} we construct a
polytope for each crystallographic arrangement.
In Section \ref{ranktwo} we compare the well-known classifications
of smooth complete surfaces (specified for the centrally symmetric\ case) and
the corresponding arrangements of rank two.
In the following section we discuss the toric arrangements
associated to the crystallographic arrangements.
The last section consists of further remarks on irreducibility,
blowups, and automorphisms.
\vspace{\baselineskip}
\textbf{Acknowledgement.} We would like to thank M.\ Brion
for helpful remarks and hints to literature.
\section{Preliminaries}\label{prel}
Let us first recall the notions of hyperplane arrangements and of
fans for normal toric varieties.
For subsets $A$ in a real vector space $V$ of dimension $r$ and
a subset $B$ of its dual $V^*$ we set
\begin{eqnarray*}
A^\perp&=&\{b\in V^* \mid b(a) = 0 \; \forall \, a \in A \}, \\
B^\vee&=&\{a\in V \mid b(a) \geq 0 \; \forall \, b \in B \},\\
B^\perp&=&\{a\in V \mid b(a) = 0 \; \forall \, b \in B \}.
\end{eqnarray*}
An \emph{open} or \emph{closed simplicial cone} $\sigma$ is a subset $\sigma\subseteq V$ such
that there exist linearly independent $n_1,\ldots,n_d$, $d\in\NN$ with
\begin{eqnarray*}
&& \sigma = \langle n_1,\ldots,n_d\rangle_{\RR_{>0}}:=\RR_{>0} n_1+\hdots+\RR_{>0} n_d\\
\text{or} && \sigma = \langle n_1,\ldots,n_d\rangle_{\RR_{\ge 0}}:=\RR_{\ge 0} n_1+\hdots+\RR_{\ge 0} n_d
\end{eqnarray*}
respectively.
\subsection{Fans and toric varieties}
Given a lattice $N$ in $V$ of rank $r$, its dual lattice
$M = \Hom(N,\ZZ)$ is viewed as lattice in $V^*$.
A subset $\sigma\subseteq V$ is called a (closed) \emph{strongly convex rational polyhedral cone}
if there exist $n_1,\ldots,n_d \in N$ such that
\[ \sigma=\langle n_1,\ldots,n_d\rangle_{\RR_{\geq 0}} \quad
\text{and} \quad \sigma\cap-\sigma=\{0\}. \]
We say that $n_1,\ldots,n_d$ are \emph{generators} of $\sigma$.
By abuse of notation we will call such a cone simply an ``$N$-cone''.\\
We call $\sigma$ \emph{simplicial} if $\sigma$ is a closed simplicial cone.
If $\sigma$ is generated by a subset of a $\ZZ$-basis of $N$,
then we say that $\sigma$ is \emph{smooth}.\\
Let $\sigma$ be an $N$-cone. We write
$\langle\sigma\rangle_\RR:=\sigma+(-\sigma)$
for the subspace spanned by $\sigma$.
The \emph{dimension} $\dim(\sigma)$ of $\sigma$ is the dimension of $\langle\sigma\rangle_\RR$.
Identifying $N_\RR=N\otimes_\ZZ\RR$ with $V$, we consider fans
$\Sigma$ in $N_\RR$ of strongly convex rational polyhedral cones as defined
in the standard theory of toric varieties, see \cite{Oda}, \cite{CLS}:
A \emph{face} of $\sigma$ is the intersection of $\sigma$ with a supporting hyperplane,
$\sigma \cap m^\perp$, $m \in V^*$, $m(a)\ge 0$ for all $a\in\sigma$.
Faces of codimension $1$ are called \emph{facets}.\\
A \emph{fan} in $N$ is a nonempty collection of $N$-cones $\Sigma$ such that
\begin{enumerate}
\item any face $\tau$ of a cone $\sigma \in \Sigma$ is contained in $\Sigma$,
\item any intersection $\sigma_1 \cap \sigma_2$ of two cones $\sigma_1,\sigma_2\in\Sigma$ is a face of
$\sigma_1$ and $\sigma_2$.
\end{enumerate}
For $k\in\NN$ we write $\Sigma(k)=\{\sigma\in\Sigma\mid \dim(\sigma)=k\}$.
For $S\subseteq\Sigma$ we write $\Supp S=\bigcup_{\sigma\in S}\sigma$
for the \emph{support} of $S$.
The fan $\Sigma$ and its associated toric variety $X_\Sigma$
(over the ground field $\CC$) is called \emph{simplicial} if any cone of $\Sigma$
is simplicial. It is well-known that $X_\Sigma$ for finite $\Sigma$ is nonsingular (smooth)
if and only if each cone $\sigma$ of $\Sigma$ is smooth. Moreover,
$X_\Sigma$ is complete (compact) if and only if $\Sigma$ is finite
and $\Supp \Sigma=N_\RR$.
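As a standard illustration of these notions (cf. \cite{CLS}), consider the fan of the projective plane:

```latex
% Standard example (cf. \cite{CLS}): the fan of the projective plane.
Let $N=\ZZ^2$ with standard basis $e_1,e_2$ and let $\Sigma$ consist of the
three maximal cones
\[
\sigma_0=\langle e_1,e_2\rangle_{\RR_{\ge 0}},\qquad
\sigma_1=\langle e_2,-e_1-e_2\rangle_{\RR_{\ge 0}},\qquad
\sigma_2=\langle -e_1-e_2,e_1\rangle_{\RR_{\ge 0}},
\]
together with all their faces. Each maximal cone is generated by a
$\ZZ$-basis of $N$, so every cone of $\Sigma$ is smooth; moreover $\Sigma$
is finite with $\Supp\Sigma=N_\RR$, so the associated toric variety
$X_\Sigma$ (the projective plane) is smooth and complete.
```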
\subsection{Crystallographic arrangements}\label{A_R}
Let $\Ac$ be a simplicial arrangement in $V=\RR^r$, i.e.\
$\Ac=\{H_1,\ldots,H_n\}$ where $H_1,\ldots,H_n$ are distinct linear hyperplanes in $V$
and every component of $V\smallsetminus\bigcup_{H\in\Ac} H$ is an open simplicial cone.
Let $\Kc(\Ac)$ be the set of connected components of $V\smallsetminus \bigcup_{H\in\Ac} H$;
they are called the {\it chambers} of $\Ac$.
For each $H_i$, $i=1,\ldots,n$ we choose an element $x_i\in V^*$ such that $H_i=x_i^\perp$.
Let
\[ R = \{\pm x_1,\ldots,\pm x_n\}\subseteq V^*. \]
For each chamber $K\in\Kc(\Ac)$ set
\begin{eqnarray*}
W^K &=& \{ H\in\Ac \mid \dim(H\cap \overline K) = r-1 \}, \\
B^K &=& \{ \alpha\in R \mid \alpha^\perp\in W^K,\quad
\{\alpha\}^\vee \cap K = K \} \subseteq R.
\end{eqnarray*}
Here, $\overline{K}$ denotes the closure of $K$.
The elements of $W^K$ are the {\it walls} of $K$ and
$B^K$ ``is'' the set of normal vectors of the walls of $K$ pointing to the inside.
Note that
\[ \overline K = \bigcap_{\alpha\in B^K} \{\alpha\}^\vee, \]
and that $B^K$ is a basis of $V^*$ because $\Ac$ is simplicial.
Moreover, if $\alpha^\vee_1,\ldots,\alpha^\vee_r$ is the dual basis to
$B^K=\{\alpha_1,\ldots,\alpha_r\}$, then
\begin{equation}\label{simp_cone}
K = \Big\{ \sum_{i=1}^r a_i\alpha_i^\vee \mid a_i> 0 \quad\mbox{for all}\quad
i=1,\ldots,r \Big\}.
\end{equation}
\begin{defin}\label{def:cryarr}
Let $\Ac$ be a simplicial arrangement and $R\subseteq V^*$ a finite set
such that $\Ac = \{ \alpha^\perp \mid \alpha \in R\}$ and $\RR\alpha\cap R=\{\pm \alpha\}$
for all $\alpha \in R$.
For $K\in\Kc(\Ac)$ set
\[ R^K_+ = R \cap \sum_{\alpha \in B^K} \RR_{\ge 0} \alpha. \]
We call $(\Ac,R)$ a \emph{crystallographic arrangement} if
for all $K\in\Kc(\Ac)$:
\begin{itemize}
\item[(I)] \quad $R \subseteq \sum_{\alpha \in B^K} \ZZ \alpha$.
\end{itemize}
\end{defin}
\begin{remar}
One can in fact prove that if $(\Ac,R)$ is crystallographic,
then $R \subseteq \pm\sum_{\alpha \in B^K} \NN_0 \alpha$ (see \cite{p-C10}).
\end{remar}
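To illustrate Definition \ref{def:cryarr}, we recall the standard rank two example.
\begin{examp}
Let $V=\RR^2$, let $\alpha_1,\alpha_2$ be a basis of $V^*$ and
\[ R=\{\pm\alpha_1,\pm\alpha_2,\pm(\alpha_1+\alpha_2)\}, \]
the root system of type $A_2$.
Then $\Ac=\{\alpha^\perp\mid\alpha\in R\}$ consists of three lines and is simplicial
with six chambers.
For the chamber $K$ with $B^K=\{\alpha_1,\alpha_2\}$, condition (I) is obvious;
for instance for the chamber with $B^K=\{-\alpha_1,\alpha_1+\alpha_2\}$ one checks
$\alpha_2=(-\alpha_1)+(\alpha_1+\alpha_2)$, and similarly for the remaining chambers,
so $(\Ac,R)$ is a crystallographic arrangement.
\end{examp}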
\section{Strong symmetry of fans}\label{stsymfans}
\begin{defin}\label{def:stsym}
We call a fan $\Sigma$ in $V$ \emph{strongly symmetric} if it is complete and
if there exist hyperplanes $H_1,\ldots,H_n$ in $V$ such that
\[\Supp \Sigma(r-1)=H_1\cup \ldots\cup H_n.\]
We write $\Ac(\Sigma):=\{H_1,\ldots,H_n\}$.
We call a toric variety $X_\Sigma$ \emph{strongly symmetric} if $\Sigma$ is strongly symmetric.
We call a fan $\Sigma$ \emph{centrally symmetric} if $\Sigma = -\Sigma$.
We call a toric variety $X_\Sigma$ \emph{centrally symmetric} if $\Sigma$ is centrally symmetric.
\end{defin}
\begin{remar}
One could also call a strongly symmetric\ fan \emph{strongly complete}
because for any $\tau\in\Sigma$ the collection of $\sigma\cap\langle\tau\rangle_\RR$,
$\sigma\in\Sigma$, is a complete fan in $\langle\tau\rangle_\RR$ as a subfan
of $\Sigma$.
\end{remar}
\begin{lemma}\label{tauHi}
Let $\tau$ be an $(r-1)$-dimensional cone in $\RR^r$ and
$H_1,\ldots,H_n$ be hyperplanes in $\RR^r$.
If $\tau\subseteq H_1\cup\ldots\cup H_n$, then $\tau\subseteq H_i$
for some $1\le i\le n$.
\end{lemma}
\begin{proof}
We construct inductively sets $T_i\subseteq \tau$ with $i+r-1$ elements such that every
subset $B\subseteq T_i$ with $|B|=r-1$ is linearly independent:
Let $T_0:=\{n_1,\ldots,n_{r-1}\}$,
where $n_1,\ldots,n_{r-1}\in\tau$ are linearly independent and span $\langle\tau\rangle_\RR$.
Given $T_i$, let
\[ \Xi_i:=\{\langle v_1,\ldots,v_{r-2}\rangle \mid v_1,\ldots,v_{r-2}\in T_i\} \]
be the set of subspaces generated by $r-2$ elements of $T_i$.
Since $\tau$ has dimension $r-1$, $\bigcup_{U\in\Xi_i} U\ne \langle\tau\rangle_\RR$.
For any $w\in \tau\smallsetminus \bigcup_{U\in\Xi_i} U$, $T_{i+1}:=T_i\cup\{w\}$
has the required property.
Now consider the $(r-1)n$ elements of $T_{(r-1)(n-1)}$.
By the pigeonhole principle there is an $i$, $1\le i\le n$, such that
at least $r-1$ of these elements lie in $H_i$.
These are linearly independent and belong to $\tau$,
so $\tau\subseteq\langle\tau\rangle_\RR\subseteq H_i$.
\end{proof}
\begin{lemma}\label{stsymHi}
Let $\Sigma$ be an $r$-dimensional fan. Then the following are equivalent:
\begin{enumerate}
\item\label{p1} $\Sigma$ is complete, and for all $\tau\in\Sigma(r-1)$, $\sigma\in\Sigma$,
\[ \sigma\cap \langle\tau\rangle_\RR \in \Sigma, \]
\item the fan $\Sigma$ is strongly symmetric.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume $(1)$. Let $\tau\in\Sigma(r-1)$. Since $\Sigma$ is complete,
$\langle\tau\rangle_\RR\subseteq \Supp \Sigma$.
Thus $\langle\tau\rangle_\RR=\bigcup_{\sigma\in\Sigma}\langle\tau\rangle_\RR\cap\sigma$.
By (\ref{p1}), $\Sigma':=\{\langle\tau\rangle_\RR\cap\sigma\mid\sigma\in\Sigma\}$
is a subfan of $\Sigma$. Further, $\Supp \Sigma'(r-1)=\Supp \Sigma'=\langle\tau\rangle_\RR$
because $\Sigma'$ is complete in $N\cap\langle\tau\rangle_\RR$ and
the maximal cones in $\Sigma'$ have dimension $r-1$.
Hence for each $\tau$ of codimension $1$, $\langle\tau\rangle_\RR$ is a union
of elements of $\Sigma(r-1)$.
This implies $\Supp \Sigma(r-1)=\bigcup_{\tau\in\Sigma(r-1)}\langle\tau\rangle_\RR$
(finite union by definition of complete).
Conversely, assume $\Supp \Sigma(r-1)=H_1\cup \ldots\cup H_n$ for some hyperplanes $H_1,\ldots,H_n$.
Let $\tau\in\Sigma(r-1)$ and $\sigma\in\Sigma$.
Then by Lemma \ref{tauHi}, $\langle\tau\rangle_\RR=H_i$ for some $1\le i\le n$,
and there exist $\eta_1,\ldots,\eta_k\in\Sigma(r-1)$ with $H_i=\eta_1\cup\ldots\cup\eta_k$.
But $\sigma\cap H_i= \bigcup_{j=1}^k \sigma\cap\eta_j$ is a union of faces of $\sigma$.
If $\sigma\subseteq H_i$, then $\sigma\cap H_i=\sigma\in\Sigma$.
Otherwise $\stackrel{\circ}{\sigma}\cap H_i=\emptyset$, i.e.\ $H_i$ is a supporting
hyperplane of $\sigma$, so $\sigma\cap H_i$
is a face of $\sigma$ and thus an element of $\Sigma$.
\end{proof}
\begin{lemma}\label{AtoSigma}
Let $\Sigma$ be an $r$-dimensional strongly symmetric\ fan.
Then the set of all intersections of closed chambers of $\Ac(\Sigma)$ is $\Sigma$.
In particular, $\Sigma$ is centrally symmetric.
\end{lemma}
\begin{proof}
Let $\sigma\in\Sigma(r)$. Then the facets of $\sigma$ are contained in
$\Supp \Sigma(r-1)=H_1\cup \ldots\cup H_n$ and
$\stackrel{\circ}{\sigma}\subseteq \RR^r\smallsetminus \Supp \Sigma(r-1)$.
Since $\Sigma$ is complete, $\Sigma(r)$ is the set of closed chambers
of $\Ac(\Sigma)$. Moreover, since the hyperplanes $H_i$ are linear,
$-K$ is a chamber of $\Ac(\Sigma)$ whenever $K$ is, so $\Sigma=-\Sigma$.
\end{proof}
\begin{defin}
Let $\Sigma$ be a fan in $N$, $\delta \in \Sigma$, and write
$\kappa : V \rightarrow V/\langle\delta\rangle_\RR$ for the canonical projection.
Then
\[ \Star(\delta) = \{ \overline\sigma=\kappa(\sigma) \subseteq V/\langle\delta\rangle_\RR \mid \delta \subseteq \sigma \in \Sigma\}\]
is a fan in $N(\delta):=\kappa(N)$ (compare \cite[Ex.\ 3.2.7]{CLS}).
Its toric variety is isomorphic to the orbit closure $V(\delta)$ in $X_\Sigma$.
\end{defin}
\begin{lemma}\label{stsymstar}
Let $\Sigma$ be an $r$-dimensional fan. Then the following are equivalent:
\begin{enumerate}
\item The fan $\Sigma$ is strongly symmetric,
\item the fan $\Star(\sigma)$ is strongly symmetric\ for all $\sigma \in \Sigma$.
\end{enumerate}
\end{lemma}
\begin{proof}
We use Lemma \ref{stsymHi}.
Assume $(1)$. Let $\sigma \in \Sigma$ and consider a cone $\overline\tau \in \Star(\sigma)$ of codimension one.
Then $\langle\overline\tau\rangle_\RR=\overline{\langle\tau\rangle}_\RR\subseteq V/\langle\sigma\rangle_\RR$
and hence for any cone $\overline\pi \in \Star(\sigma)$ we have
$\overline\pi \cap \langle\overline\tau\rangle_\RR = \overline{\pi \cap \langle\tau\rangle_\RR} \in \Star(\sigma)$,
because $\pi \cap \langle\tau\rangle_\RR$ is a cone in $\Sigma$ containing $\sigma$; thus $\Star(\sigma)$
is strongly symmetric.
Since $\Sigma=\Star(\{0\})$, $(1)$ follows from $(2)$.
\end{proof}
\begin{propo}
Let $\Sigma$ be an $r$-dimensional complete fan. Then the following are equivalent:
\begin{enumerate}
\item The fan $\Sigma$ is strongly symmetric,
\item the fan $\Star(\sigma)$ is centrally symmetric\ for all $\sigma \in \Sigma$,
\item the fan $\Star(\delta)$ is centrally symmetric\ for all $\delta \in \Sigma(r-2)$.
\end{enumerate}
\end{propo}
\begin{proof}
The implication $(1) \Rightarrow (2)$ follows from Lemma \ref{AtoSigma} and Lemma \ref{stsymstar};
$(2) \Rightarrow (3)$ is obvious.
Suppose that $\Star(\delta)$ is centrally symmetric\ for any $\delta\in\Sigma(r-2)$.
We have to show that for any $\tau_0\in\Sigma(r-1)$,
$H:=\langle\tau_0\rangle_\RR\subseteq S:=\Supp \Sigma(r-1)$. Suppose $H\nsubseteq S$.
Let $\{\tau_0,\ldots,\tau_k\}=\{\tau\in\Sigma(r-1)\mid \tau\subseteq H\}$. Then
\[ \tau_0\cup\ldots\cup \tau_k \subsetneqq H. \]
Let $p$ be a point of the relative border $\partial(\tau_0\cup\ldots\cup \tau_k)$ in $H$.
Then there is an $i$ with $p\in\partial\tau_i$ and
a $\delta\in\Sigma(r-2)$, $\delta\subseteq\tau_i$ such that
$p\in\delta\subseteq\tau_i\subseteq\langle\tau_i\rangle_\RR=H$.
We have $\overline{\tau_i}\in\Star(\delta)$, $\overline{\tau_i}\subseteq \overline{H}$,
and $\dim \overline{H}=1$.
Because $\Star(\delta)$ is centrally symmetric, $-\overline{\tau_i}\in\Star(\delta)$.
Then $-\overline{\tau_i}=\overline{\tau'}$ for some $\delta\subseteq\tau'\in\Sigma(r-1)$
with $\overline{\tau'}\subseteq\overline{H}$.
Then $\tau'\subseteq H$, $\delta\subseteq\tau_i\cap\tau'$ and $\tau'\ne\tau_i$.
Hence $\delta=\tau_i\cap\tau'$ because $\dim(\delta)=r-2$.
But then $p\notin\partial(\tau_0\cup\ldots\cup\tau_k)$, contradicting the
assumption.
\end{proof}
\begin{examp}
There are of course fans which are centrally symmetric\ but not strongly symmetric.
Here is such an example which is smooth:
Let $R=\{\pm e_1,\pm e_2,\pm e_3\}$, where $e_1,e_2,e_3$ is the standard basis of $(\RR^3)^*$, and let $\Sigma_R$ be the fan as defined
in Lemma \ref{lem1}. Blowing up along two opposite cones $\sigma,-\sigma\in\Sigma_R$
preserves the central symmetry, but the resulting fan is not strongly symmetric.
\end{examp}
In the case of smooth strongly symmetric\ fans, we obtain
\begin{lemma}\label{lemrestr}
Let $\Sigma$ be a smooth strongly symmetric\ fan in $N$, $\sigma\in\Sigma$ and $E:=\langle\sigma\rangle_\RR$.
Then $N\cap E$ is a lattice of rank $\dim(\sigma)$ and
$\Sigma^E:=\{\eta\cap E\mid \eta\in\Sigma\}\subseteq\Sigma$ is a smooth strongly symmetric
fan in $N\cap E$.
\end{lemma}
\begin{proof}
Using a $\ZZ$-basis of $\sigma$ one finds that $N\cap E$ is a sublattice of $N$
of rank $\dim(\sigma)$ and that the inclusion $N\cap E\hookrightarrow N$ is split.
Consider first a $\sigma\in\Sigma(r-1)$ and let $E:=\langle\sigma\rangle_\RR$.
By Lemma \ref{stsymHi}, $\eta\cap E\in\Sigma$ for all $\eta\in\Sigma$.
Thus $\Sigma^E$ is a subfan of $\Sigma$ and it is complete since $\Supp \Sigma=V$.
Write $\Supp \Sigma(r-1)=E\cup H_2\cup\ldots\cup H_n$ for hyperplanes
$H_2,\ldots,H_n$. Then
\[ \Supp \Sigma^E(r-2)=(H_2\cup\ldots\cup H_n)\cap E
=(H_2\cap E)\cup\ldots\cup (H_n\cap E), \]
i.e.\ $\Sigma^E$ is strongly symmetric.
The claim for arbitrary $\sigma\in\Sigma$ follows by induction on the codimension of $\sigma$, since every $\sigma$ of positive codimension is contained in some $\tau\in\Sigma(r-1)$.
\end{proof}
\section{The correspondence}\label{corresp}
\begin{lemma}\label{lem1}
Let $(\Ac,R)$ be a crystallographic arrangement in $V$. Set
\[ M_R:=\sum_{\alpha\in R} \ZZ\alpha \cong \ZZ^r \]
and let $N_R$ be the dual lattice to $M_R$.
Then the set $\Sigma_R$ of all intersections of closed chambers of $\Ac$ is a
strongly symmetric\ smooth fan in $N_R$.
\end{lemma}
\begin{proof}
It is clear that $\Sigma_R$ is a strongly symmetric\ fan.
Let $\sigma\in\Sigma_R$ be of maximal dimension, i.e.\ $\sigma=\overline{K}$
for a chamber $K\in\Kc(\Ac)$.
By Equation~(\ref{simp_cone}), $\sigma$ is generated by the basis of $N_R$ dual to $B^K$,
hence $\sigma$ is smooth.
\end{proof}
\begin{lemma}\label{lem2}
Let $\Sigma$ be a strongly symmetric\ smooth fan in $N\subseteq V=\RR^r$.
Then there exists a set $R\subseteq V^*$ such that
$(\Ac,R)$ is a crystallographic arrangement, where
\[ \Ac = \Ac(\Sigma) = \{ \langle\tau\rangle_\RR \mid \tau\in\Sigma(r-1)\}. \]
\end{lemma}
\begin{proof}
Since $\Sigma$ is strongly symmetric, $\Ac$ is a finite set of hyperplanes,
and by Lemma \ref{AtoSigma}, the set of all intersections of closed
chambers of $\Ac$ is $\Sigma$.
Further,
\[ \bigcup_{\sigma\in\Sigma(r)} \stackrel{\circ}{\sigma}
\:=\: V\setminus \bigcup_{H\in\Ac} H \]
since each facet of a $\sigma\in\Sigma(r)$ is
contained in a hyperplane of $\Ac$ and since $\Sigma$ is complete.
The cones $\stackrel{\circ}{\sigma}$ in the above union are open simplicial cones,
because $\sigma$ is smooth, hence $\Ac$ is a simplicial arrangement.
Let $\sigma\in\Sigma$ be a cone of maximal dimension. Since
$\sigma$ is smooth, there exists a unique $\ZZ$-basis of $N$
generating $\sigma$. Anticipating the notation of Section \ref{A_R}, we denote its dual basis by $B^{K_\sigma}$,
where $K_\sigma$ is the chamber with $\overline{K_\sigma} = \sigma$.
Now set $R$ to be the union of all the $B^{K_\sigma}$ for $\sigma\in\Sigma(r)$.
Clearly,
\[ R \subseteq \sum_{\alpha \in B^{K_\sigma}} \ZZ \alpha, \]
since each $B^{K_\sigma}$ is a $\ZZ$-basis of $M=\Hom(N,\ZZ)$ and $R\subseteq M$.
It remains to show that for each hyperplane $H=\langle\tau\rangle_\RR\in\Ac$,
$\tau\in\Sigma(r-1)$, there is a vector $x\in R$ such that $R \cap H^\perp=\{\pm x\}$.
Let $\sigma\in\Sigma(r)$ containing $\tau$,
and $x$ be the element with $\{x\}=B^{K_\sigma}\cap H^\perp$.
In particular $x$ is primitive.
Assume $\lambda x\in R$ for some $\lambda\in\RR$; then $\lambda\in\ZZ$ because $R\subseteq M$ and $x$ is primitive.
Further, there exists a $\sigma'\in\Sigma$ with $\lambda x\in B^{K_{\sigma'}}$.
Thus $\lambda=\pm 1$ since $B^{K_{\sigma'}}$ is a $\ZZ$-basis of $M$.
\end{proof}
\begin{theor}\label{thmcor}
The map $(\Ac,R) \mapsto \Sigma_R$ from the set of crystallographic arrangements
to the set of strongly symmetric\ smooth fans is a bijection.
\end{theor}
\begin{proof}
This is Lemma \ref{lem1} and Lemma \ref{lem2}.
\end{proof}
\begin{corol}
A complete classification of strongly symmetric\ smooth toric varieties is now known.
\end{corol}
\begin{proof}
By Thm.\ \ref{thmcor}, this is the classification of crystallographic arrangements, \cite[Thm.\ 1.1]{p-CH10}.
\end{proof}
\begin{defin}
We denote the toric variety of the fan $\Sigma_R$ by $X(\Ac,R)$ or $X(\Ac)$ and
call it the toric variety of the arrangement $(\Ac,R)$.
\end{defin}
\begin{remar}
For a fixed crystallographic arrangement $(\Ac,R)$, choosing a lattice other
than $M_R$ may result in a strongly symmetric\ fan which is not smooth.
Further, the correspondence $(\Ac,R)\mapsto\Sigma_R$ extends
by its definition to a correspondence between rational simplicial
arrangements and simplicial strongly symmetric\ fans. However,
there exist rational simplicial non-crystallographic arrangements, i.e.,
there is a basis with respect to which all co\-vectors of the hyperplanes have
rational coordinates, although there is no lattice $M$ for which the
corresponding fan is smooth.
The smallest example in dimension three has $12$ hyperplanes and is
denoted $\Ac(12,1)$ in \cite{p-G-09} (compare the catalogue \cite{p-G-09}
with the list in \cite{p-CH09c}).
\end{remar}
\begin{remar}
Any smooth complete fan in $N$ can be visualized by a triangulation of
the sphere $S=(V\smallsetminus\{0\})/\RR_{>0}$, see \cite[Sect.\ 1.7]{Oda}.
Such a fan is centrally symmetric\ if and only if its triangulation is invariant under
the reflection $p\leftrightarrow -p$ of $S$, and the
strong symmetry of the fan $\Sigma_R$ of a crystallographic arrangement
$(\Ac,R)$ means that its triangulation is induced by the hyperplane sections
$H\cap S$, $H\in\Ac$.
In particular in dimension $3$ Tsuchihashi's characterization by admissible
$N$-weights (see \cite[Cor.\ 1.32]{Oda}) for strongly symmetric fans agrees
with the classification in \cite{p-CH09c}. For higher dimension the correspondence
to Weyl groupoids produces similar conditions if one considers certain products
of reflections.
For a geometric interpretation of the strong symmetry of $X(\Ac)$ see
Rem.\ \ref{georem}.
\end{remar}
\begin{examp}\label{exwg37}
The crystallographic arrangement with the largest number of hyperplanes in dimension three
has $37$ hyperplanes. Fig.\ \ref{wg37} is a projective image of this \emph{sporadic} arrangement:
The triangles correspond to the maximal cones; one hyperplane is the line at infinity.
\begin{figure}
\begin{center}
\includegraphics[width=1.3\textwidth,clip=true,trim=100pt 200pt 0pt 200pt]{wg37proj.pdf}
\end{center}
\caption{The largest crystallographic arrangement in dimension three
(see Example \ref{exwg37})\label{wg37}}
\end{figure}
\end{examp}
\vspace{-9pt}
We further obtain a new proof of \cite[Prop.\ 5.3]{BC10}:
\begin{corol}\label{restrU}
Let $\Ac$ be a crystallographic arrangement and $E$ be an intersection
of hyperplanes of $\Ac$. Then the restriction $\Ac^E$ of $\Ac$ to $E$,
\[ \Ac^E:=\{E\cap H\mid H \in\Ac,\:\: E\nsubseteq H\} \]
is a crystallographic arrangement.
\end{corol}
\begin{proof}
This follows from Thm.\ \ref{thmcor}, the fact that subfans of
smooth fans are smooth, and Lemma \ref{lemrestr}.
\end{proof}
\section{Projectivity}\label{projectivity}
Let $(\Ac,R)$ be a crystallographic arrangement and $N,M,V,V^*$ be as in
Section \ref{corresp}, $\Sigma:=\Sigma_R$.
We first prove that $X(\Ac)=X_\Sigma$ is projective by
constructing a polytope $P$ such that $X_P\cong X_\Sigma$.
\begin{propo}\label{polytop}
Let $\Ac$ be a crystallographic arrangement. For a chamber $K$ let
\[ \rho^K:=\frac{1}{2} \sum_{\alpha\in R_+^K} \alpha. \]
Then the set $\{\rho^K\mid K \in \Kc(\Ac)\}$ is the set of vertices of
an integral convex polytope $P$ in $\frac{1}{2}M$.
\end{propo}
\begin{proof}
For each chamber $K$ define a simplicial cone by
\[ S^K:=\rho^K-\langle\alpha\mid \alpha\in B^K\rangle_{\RR_{\ge 0}}. \]
Let $P$ be the polytope
\[ P:=\bigcap_{K \in \Kc(\Ac)} S^K. \]
Let $K$ be a chamber. Since $\rho^K$ is the apex of the strongly convex cone $S^K$
and $P\subseteq S^K$, it suffices to show $\rho^K\in P$ in order to prove that
$\rho^K$ is a vertex of $P$. So let $K'$ be a chamber.
Notice first that for $\alpha\in R$ we have
\[ \alpha\in R_+^K \quad\Longleftrightarrow\quad -\alpha\in R\smallsetminus R_+^K,\]
which implies $R_+^{K'}\smallsetminus R_+^K=-(R_+^K\smallsetminus R_+^{K'})$. Thus
\[ \rho^K = \rho^{K'}
-\frac{1}{2}\sum_{\alpha\in R_+^{K'}\smallsetminus R_+^K}\alpha
+\frac{1}{2}\sum_{\alpha\in R_+^K\smallsetminus R_+^{K'}}\alpha
= \rho^{K'}-\sum_{\alpha\in R_+^{K'}\smallsetminus R_+^K}\alpha\in S^{K'}. \]
\end{proof}
\begin{remar}
The set $\{\rho^K\mid K \in \Kc(\Ac)\}$ of the last proposition is
the orbit of one fixed $\rho^K$ under the action of the Weyl groupoid $\Wc(\Ac)$
since for a simple root $\alpha\in B^K$ we have $\sigma_\alpha(\rho^K)=\rho^K-\alpha$
(see \cite{p-CH09a}).
\end{remar}
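In the example of type $A_2$, the polytope $P$ can be written down explicitly.
\begin{examp}
Let $(\Ac,R)$ be the crystallographic arrangement of type $A_2$, so
$R=\{\pm\alpha_1,\pm\alpha_2,\pm(\alpha_1+\alpha_2)\}$.
For the chamber $K$ with $B^K=\{\alpha_1,\alpha_2\}$ we have
$R_+^K=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$ and hence $\rho^K=\alpha_1+\alpha_2$.
Passing through the six chambers one obtains the vertices
\[ \pm(\alpha_1+\alpha_2),\quad \pm\alpha_1,\quad \pm\alpha_2, \]
i.e.\ $P$ is a hexagon and $X_P$ is the surface $\tilde\PP_2$ of Section \ref{ranktwo}.
\end{examp}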
\begin{corol}
Let $\Ac$ be a crystallographic arrangement. Then $X_\Sigma$ is a projective variety
isomorphic to $X_P$, where $P$ is the polytope of Prop.\ \ref{polytop}.
\end{corol}
\begin{proof}
This is Prop.\ \ref{polytop} and \cite[Thm.\ 2.22]{Oda}.
\end{proof}
We now describe an explicit immersion of $X_\Sigma$ into $\PP_1^R\cong\PP_1^{2n}$.
\begin{defin}
For any $\sigma\in\Sigma$, $\alpha\in R$ let
\[ s_\alpha(\sigma) = \begin{cases}
+1 & \text{ if } \alpha(\sigma)=\RR_{\ge 0} \\
\ \ 0 & \text{ if } \alpha(\sigma)=\{0\} \\
-1 & \text{ if } \alpha(\sigma)=\RR_{\le 0}
\end{cases} \]
and let $s(\sigma)=(s_\alpha(\sigma))_{\alpha\in R}$.
\end{defin}
\begin{defin} Let $2n=|R|$, let $V'$ be a $2n$-dimensional vector space
over $\RR$ and $(e_\alpha)_{\alpha\in R}$ be a basis of $V'^*$. Further,
let $M':=\ZZ\{e_\alpha\mid \alpha\in R\}\subseteq V'^*$ be the lattice
generated by this basis and let $N'$ be the dual lattice.
Then $\Ac':=\{e_{\alpha}^\perp\mid \alpha\in R\}$ is a Boolean arrangement
and we call the corresponding fan $\Sigma':=\Sigma(\Ac')$ a \emph{Boolean fan}.
Notice that
\[ X_{\Sigma'}\cong \PP_1^{2n}. \]
\end{defin}
Consider the homomorphism $M'\rightarrow M$, $e_\alpha\mapsto \alpha$ for
$\alpha\in R$ and its dual
\[ \varphi : N\rightarrow N',\quad n \mapsto (\alpha(n))_{\alpha\in R}. \]
\begin{lemma}
Choose a chamber $K$. Then with respect to the basis ${B^K}^*$ of $N$ the map
$\varphi$ is represented by a matrix of the form
\[ \begin{pmatrix} 1&&0\\&\ddots&\\0&&1\\ *&\cdots &*\\\vdots &&\vdots \end{pmatrix}. \]
It follows that $\varphi$ is a split monomorphism and in particular
$N'/\varphi(N)$ is torsion free.
\end{lemma}
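For instance, for the arrangement of type $A_2$ with
$R=\{\pm\alpha_1,\pm\alpha_2,\pm(\alpha_1+\alpha_2)\}$, the chamber $K$ with
$B^K=\{\alpha_1,\alpha_2\}$ and the ordering
$\alpha_1,\alpha_2,\alpha_1+\alpha_2,-\alpha_1,-\alpha_2,-(\alpha_1+\alpha_2)$ of $R$,
the matrix of $\varphi$ is
\[ \begin{pmatrix} 1&0\\0&1\\1&1\\-1&0\\0&-1\\-1&-1 \end{pmatrix}. \]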
\begin{lemma}\
\begin{enumerate}
\item The map $\varphi$ is a map of fans $(N,\Sigma)\rightarrow (N',\Sigma')$.
\item For any $\sigma'\in\Sigma'$, $\varphi(V)\cap\sigma'\in\Sigma$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Let $\sigma\in\Sigma$ and let $\sigma'\in\Sigma'$ be the cone
with $s(\sigma')=s(\sigma)$. Then $\varphi(\sigma)\subseteq \sigma'$.\\
(2) If $\sigma'\in\Sigma'$ is maximal, let $s(\sigma')=(\varepsilon_1,\ldots,\varepsilon_{2n})$
with $\varepsilon_\nu\in\{\pm 1\}$, and let
\[ \tau=\bigcap_\nu\{x\in V\mid \varepsilon_\nu \alpha_\nu(x)\ge 0 \}. \]
Then $\tau\in\Sigma$ and $\tau=\varphi^{-1}(\sigma')$.
If $\sigma'$ is arbitrary, then $\sigma'=\sigma_1'\cap\ldots\cap\sigma_k'$
for maximal $\sigma_i'$ and then $\varphi^{-1}(\sigma')=\bigcap\varphi^{-1}(\sigma_i')\in\Sigma$.
\end{proof}
\begin{corol}\label{f_inj}
The induced toric morphism $f=\varphi_* : X_\Sigma\rightarrow X_{\Sigma'}$ is proper
and $X_\Sigma\twoheadrightarrow f(X_{\Sigma})$ is the normalization of the closed (reduced) image.
\end{corol}
\begin{proof}
See \cite[Prop.\ 1.14]{Oda}.
\end{proof}
\begin{propo}\label{closed_imm}
The map $X_\Sigma\rightarrow X_{\Sigma'}$ is a closed embedding of nonsingular
toric varieties.
\end{propo}
\begin{proof}
Let $\sigma$ be a maximal cone, $K$ the corresponding chamber and
$B^K\subseteq R$ the basis of $M$. If $\sigma'\in\Sigma'$ is the cone
with $s(\sigma)=s(\sigma')$ ($\sigma=\varphi(V)\cap\sigma'$), then
the dual cone to $\sigma'$ is
\[ \sigma'^\vee = \langle s_\alpha(\sigma')\,e_\alpha \mid \alpha\in R \rangle_{\RR_{\ge 0}}. \]
The map $\sigma'^\vee\cap M' \rightarrow \langle B^K\rangle_{\ZZ_{\ge 0}}$ is surjective,
so $\CC[\sigma'^\vee\cap M'] \rightarrow \CC[\langle B^K\rangle_{\ZZ_{\ge 0}}]$ is a surjective
homomorphism of $\CC$-algebras giving rise to the closed embedding
\[ f|_{U_\sigma} : U_\sigma \rightarrow U'_{\sigma'}, \]
where $f=\varphi_*$ as in Cor.\ \ref{f_inj}.
Because $U_\sigma$ is dense in $X_\Sigma$, the closure of $f(U_\sigma)$
equals $f(X_\Sigma)$, hence $f(U_\sigma)=f(X_\Sigma)\cap U'_{\sigma'}$.
It follows that $f(X_\Sigma)$ is smooth and that $X_\Sigma\rightarrow f(X_\Sigma)$
is an isomorphism. The injectivity of $f$ follows from that of $f|_{U_\sigma}$
because then $f|_{\orb(\sigma)}$ is an injective map $\orb(\sigma)\rightarrow\orb(\sigma')$
for each cone $\sigma$ of the orbit decomposition of $X_\Sigma$.
\end{proof}
\section{Remarks on surfaces}\label{ranktwo}
For $2$-dimensional fans of complete toric surfaces, strongly symmetric\ is
obviously the same as centrally symmetric. The classification of smooth complete toric surfaces,
see \cite[Cor.\ 1.29]{Oda}, can be specialized as follows. It turns out that
this classification coincides with the classification of crystallographic arrangements
of rank two \cite{p-CH09b,p-CH09d}.
Let $\Sigma$ be the fan of a smooth complete toric surface with rays
$\rho_1,\ldots,\rho_s$ ordered counterclockwise with primitive
generators $n_1,\ldots,n_s$. There are integers $a_1,\ldots,a_s$ such that
\[ n_{j-1}+n_{j+1}+a_jn_j = 0 \]
for $1\le j\le s$ where $n_{s+1}:=n_1$, $n_0:=n_s$.
The integers $a_j$ are the self-intersection
numbers of the divisors $D_j$ associated to the rays $\rho_j$.
The circular weighted graph $\Gamma(\Sigma)$ has as its vertices on $S^1$ the rays
$\rho_j$ with weights $a_j$. These weights satisfy the identity
\[ \begin{pmatrix}0&-1\\ 1&-a_s\end{pmatrix} \cdots
\begin{pmatrix}0&-1\\ 1&-a_1\end{pmatrix} =
\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \]
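For example, $\PP_2$ has $s=3$ and $a_1=a_2=a_3=1$, and indeed
\[ \begin{pmatrix}0&-1\\ 1&-1\end{pmatrix}^3 =
\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \]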
Conversely, to any circular weighted graph with this identity there is a
smooth complete toric surface with this graph, unique up to toric isomorphisms.
All these surfaces are obtained from the basic surfaces $\PP_2$, $\PP_1\times\PP_1$,
and the Hirzebruch surfaces $\FF_a$, $a\ge 2$, by a finite succession of
blowing-ups. If the surface $X_\Sigma$ is centrally symmetric, then the number $s$ of rays
is even, $s=2t$, and $a_{t+j}=a_j$ for $1\le j\le t$. In this case
\[ \begin{pmatrix}0&-1\\ 1&-a_t\end{pmatrix} \cdots
\begin{pmatrix}0&-1\\ 1&-a_1\end{pmatrix} =
\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}, \]
which is ``dual'' to the formula of the classification of crystallographic
arrangements of rank two (see \cite{p-CH09b}).
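For instance, for $t=3$ and $a_1=a_2=a_3=-1$, the weights of the surface
$\tilde\PP_2$ below, one checks
\[ \begin{pmatrix}0&-1\\ 1&1\end{pmatrix}^3 =
\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}. \]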
Note further that sequences $a_1,\ldots,a_t$ satisfying this formula are
in bijection with triangulations of a convex $t$-gon by non-intersecting
diagonals. The numbers in Fig.\ \ref{trianggon} are
$-a_1,\ldots,-a_t$; these are certain entries of the Cartan matrices
of the corresponding Weyl groupoid (see \cite{p-CH09d} for more details).
Attaching a triangle to the $t$-gon corresponds to a double blowing-up
on the variety.
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth,natwidth=690,natheight=683]{wg2s1223124.png}
\end{center}
\caption{Triangulation of a $t$-gon\label{trianggon}}
\end{figure}
One can subdivide a smooth complete $2$-dimensional fan $\Sigma$
by filling in the opposite $-\rho$ of each ray $\rho$ in order to get
a complete centrally symmetric\ fan $\Sigma_C$. However, $\Sigma_C$ need not be smooth,
as Example \ref{cnotsmooth} shows. But by inserting further pairs
$\rho,-\rho$ of rays one can desingularize the surface $X_{\Sigma_C}$ by
an even number of blowing-ups to obtain a smooth complete centrally symmetric\
surface $X_{\tilde \Sigma}$ with a surjective toric morphism
$X_{\tilde \Sigma}\rightarrow X_\Sigma$.
\begin{examp}\label{cnotsmooth}
Let $\Sigma$ be the fan of the Hirzebruch surface $\FF_a$, $a\ge 2$, with
the primitive generators
\[ n_1=(1,0),\quad n_2=(0,1),\quad n_3=(-1,a),\quad n_4=(0,-1). \]
The fan $\Sigma_C$ is then obtained by adding the rays spanned by $(-1,0)$
and $(1,-a)$. This fan is no longer smooth: the cone spanned by $(-1,a)$
and $(-1,0)$ has multiplicity $a\ge 2$. After filling in the rays
spanned by $\pm(1,-\nu)$ for $1\le\nu<a$, we obtain a smooth complete centrally symmetric\
fan $\tilde \Sigma$ with $2a+4$ rays. In case $a=2$ its circular graph has
the weights $(-1,-2,-1,-2;-1,-2,-1,-2)$ (this corresponds to the
reflection arrangement of type $B_2=C_2$).
\end{examp}
\begin{examp}
In good cases the centrally symmetric\ fan $\Sigma_C$ may already be smooth.
As an example let $\Sigma$ be the fan of $\PP_2$ spanned
by $(1,0)$, $(0,1)$ and $(-1,-1)$. Then the fan $\Sigma_C$ is spanned,
in counterclockwise order, by
\[ (1,0),(1,1),(0,1),(-1,0),(-1,-1),(0,-1). \]
This is the fan of the blow-up $\tilde \PP_2$ of $\PP_2$ at the
three fixed points of the torus action. The corresponding arrangement
is the reflection arrangement of type $A_2$.
Its circular graph has the weights
\[ (-1,-1,-1;-1,-1,-1). \]
The same surface can be obtained by blowing up $\PP_1\times\PP_1$
in two points corresponding to the enlargement of the weighted graph
$(0,0,0,0)$ by inserting $-1$ after the first and third place,
see \cite[Cor.\ 1.29]{Oda}.
Notice that $\PP_1\times\PP_1$ corresponds
to the reducible reflection arrangement of type $A_1\times A_1$.
One should also note here that
$\tilde\PP_2$ and $\PP_1\times\PP_1$ are the only toric Del Pezzo
surfaces which are centrally symmetric.
\end{examp}
\section{Parabolic subgroupoids and toric arrangements}
If $(\Ac,R)$ is a crystallographic arrangement in $V$ and $E$
is an intersection of hyperplanes of $\Ac$, then by Cor.\ \ref{restrU}
the restriction $\Ac^E$ is again crystallographic. The dual
statement is that $\Star(\delta)$ for $\delta\in\Sigma_R$ is the
fan of a crystallographic arrangement which corresponds to a parabolic
subgroupoid, see below.
Both constructions may be translated to the corresponding toric
varieties in a compatible way. This gives rise to posets of
toric varieties which we call \emph{toric arrangements} (see Section \ref{torarr}).
\subsection{Star fans and parabolic subgroupoids}
Let $(\Ac,R)$ be a crystallographic arrangement, $\Sigma:=\Sigma_R$
the corresponding smooth strongly symmetric\ fan in $\RR^r$, $\delta\in\Sigma$,
$E:=\langle\delta\rangle_\RR$ and $d:=\dim(E)$.
Let $R_E:=R\cap E^\perp$ and
\[ \Ac_E:=\{\overline{\alpha^\perp}\subseteq V/E\mid \alpha\in R_E\}, \]
and notice that the $\overline{\alpha^\perp}$ are hyperplanes in $V/E$ because
$\alpha\in E^\perp$. Note also that $\Ac_E$ depends only on $E$.
By \cite[Cor.\ 2.5]{p-CH09c}, $R_E$ is a set of real roots of a parabolic subgroupoid
of $\Wc(\Ac(\Sigma))$ (see \cite[Def.\ 2.3]{p-HW-10} for the precise definition
of a parabolic subgroupoid). Here, $\Wc(\Ac(\Sigma))$ is the Weyl groupoid
of the Cartan scheme given by the crystallographic arrangement $\Ac(\Sigma)$
as described in \cite[Prop.\ 4.5]{p-C10}.
Thus $(\Ac_E,R_E)$ is a crystallographic arrangement.
It corresponds to the fan $\Star(\delta)$:
\begin{propo}\label{Udelta}
Let $(\Ac,R)$ be a crystallographic arrangement and let $\delta$ be
a $d$-dimensional cone of the fan $\Sigma_R$.
Then the orbit closure $V(\delta)\subseteq X(\Ac)$ of $\orb(\delta)$
corresponds to the crystallographic arrangement
\[ \Ac_E = \{\overline{H}\subseteq V/E\mid H\in\Ac,\ E\subseteq H\}
= \{\langle\overline{\tau}\rangle_\RR \mid \overline{\tau}\in\Star(\delta)(r-d-1)\}, \]
where $E=\langle\delta\rangle_\RR$ as above.
\end{propo}
\begin{proof}
Let $\overline{H}$ be in the left-hand set. Then $\delta\subseteq E\subseteq H$,
thus there exists a $\tau\in\Sigma(r-1)$ with $\delta\subseteq\tau\subseteq H$.
Hence $\langle\overline{\tau}\rangle_\RR$ is in the right-hand set.
Now let $\langle\overline{\tau}\rangle_\RR$ be in the right-hand set.
Then $E\subseteq \langle\tau\rangle_\RR\subseteq H$ for an $H\in\Ac$
and so $\langle\overline{\tau}\rangle_\RR\subseteq\overline{H}$. But since these
have the same dimension, they are equal.
\end{proof}
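The smallest case illustrates Prop.\ \ref{Udelta} as follows.
\begin{examp}
Let $(\Ac,R)$ be of type $A_2$ and let $\delta\in\Sigma_R$ be a ray.
Then $E=\langle\delta\rangle_\RR$ lies on exactly one hyperplane of $\Ac$,
so $\Ac_E$ is the rank one arrangement of type $A_1$ in $V/E\cong\RR$,
$\Star(\delta)$ is the complete fan in $\RR$, and $V(\delta)\cong\PP_1$.
\end{examp}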
\begin{corol}
Let $\Sigma$ be a strongly symmetric fan in $\RR^r$ and $\delta,\delta'\in\Sigma$
with $\langle\delta\rangle_\RR=\langle\delta'\rangle_\RR$.
Then $\Star(\delta)=\Star(\delta')$ and $V(\delta)\cong V(\delta')$, even though
$V(\delta)\ne V(\delta')$ when $\delta\ne\delta'$.
\end{corol}
\begin{proof}
As in Prop.\ \ref{Udelta}, $\Star(\delta)$ only depends on $\langle\delta\rangle_\RR$
because $\Star(\delta)$ is strongly symmetric. Note that here smoothness is not used.
\end{proof}
\begin{corol}
Let $\Sigma$ be a smooth strongly symmetric fan in $\RR^r$,
$\Wc(\Ac(\Sigma))$ the corresponding Weyl groupoid, and
$\delta\in\Sigma$.
Then the Weyl groupoid $\Wc(\Ac(\Star(\delta)))$ is equivalent to a connected component of
a parabolic subgroupoid of $\Wc(\Ac(\Sigma))$.
\end{corol}
\subsection{Associated toric arrangements}\label{torarr}
Let as before $\Sigma$ be the fan of a crystallographic arrangement $(\Ac,R)$
and as in \cite[Def.\ 2.1]{OT} let $L(\Ac)$ be the poset of nonempty
intersections of elements of $\Ac$.
By Lemma \ref{lemrestr}, for any $E\in L(\Ac)$
we are given the strongly symmetric\ smooth subfan
\[ \Sigma^E = \{\sigma\cap E\mid \sigma\in\Sigma\}
= \{\sigma\in\Sigma\mid \sigma\subseteq E\} \]
of $\Sigma$.
Let $X^E$ denote its toric variety. The inclusion
$\iota : N^E=N\cap E\hookrightarrow N$ is then a sublattice and compatible with the fans
$\Sigma^E$ and $\Sigma$ and induces a toric morphism
\[ f^E : X^E \rightarrow X(\Ac)=X_\Sigma. \]
\begin{lemma}\label{Eimm}
The map $f^E$ is a closed immersion with image $Y^E\subseteq X(\Ac)$
of dimension $\dim E$.
\end{lemma}
\begin{proof}
The subspace $E$ is spanned by any cone $\tau\in\Sigma^E$ of
maximal dimension $s:=\dim E$. Using a $\ZZ$-basis of $\tau$
as in the proof of Lemma \ref{lemrestr} one finds that
$N^E=N\cap E$ is a sublattice of $N$ of rank $s$
and that the inclusion $\iota : N^E\hookrightarrow N$ is split.
The induced map $\iota_\RR$ sends a cone $\sigma$ to itself
and thus gives rise to a proper toric morphism $f^E$.
Let $M^E$ be the dual lattice of $N^E$ and $\sigma\in \Sigma^E$.
Using the duals of bases of $N^E$ and $N$, one finds that the induced
dual map $\iota^*:M\cap \sigma^\vee \rightarrow M^E\cap\sigma^\vee$
is surjective. Then
\[ f^E|_{U^E_\sigma} : U^E_\sigma \rightarrow U_\sigma \]
is a closed immersion, where $U^E_\sigma\subseteq X^E$ and
$U_\sigma\subseteq X_\Sigma$ denote the open affine spectra defined
by $M^E\cap\sigma^\vee$ resp.\ $M\cap\sigma^\vee$.
As in the proof of Prop.\ \ref{closed_imm} we conclude that $f^E$
is globally a closed immersion.
\end{proof}
\begin{remar}
Note that $Y^E$ is not invariant under the torus action on $X_\Sigma$
but is a strongly symmetric\ smooth toric variety on its own with torus $T^E=N^E\otimes\CC^*\subseteq T$.
\end{remar}
\begin{propo}\label{tor_arr}
With the above notation the subvarieties $Y^E\subseteq X_\Sigma$ have the
following properties.
\begin{enumerate}[(i)]
\item Each $Y^E$, $E\in L(\Ac)$, is invariant under the involution of $X_\Sigma$
defined by the central symmetry of $\Sigma$.
\item For each cone $\sigma\in\Sigma$,
\[ Y^E\cap\orb(\sigma) =
\begin{cases} \orb^E(\sigma) & \text{if } \sigma\subseteq E,\\
\emptyset & \text{if } \sigma\not\subseteq E,\end{cases}\]
and
\[ Y^E\cap V(\sigma) =
\begin{cases} V^E(\sigma) & \text{if } \sigma\subseteq E,\\
\emptyset & \text{if } \sigma\not\subseteq E,\end{cases}\]
where $\orb^E(\sigma)$ resp.\ $V^E(\sigma)$ denote the images of the
orbit of $\sigma$ resp.\ its closure in $X^E$.
\item When $F,E\in L(\Ac)$ with $F\subseteq E$, then
the composition $X^F\hookrightarrow X^E\hookrightarrow X_\Sigma$
is the inclusion $X^F\hookrightarrow X_\Sigma$.
\item\label{posiso} For any $E,F\in L(\Ac)$, $Y^{E\cap F}=Y^E\cap Y^F$.
\item The intersections $Y^E\cap T$ of $Y^E$ with the torus $T$ of $X_\Sigma$ are
the subtori $T^E=N^E\otimes\CC^*$ of $T$ of dimension $\dim(E)$ and constitute a toric arrangement.
\end{enumerate}
\end{propo}
\begin{defin}
We call the system $\{Y^E\}_{E\in L(\Ac)}$ the associated \emph{toric arrangement}
of the strongly symmetric smooth toric variety $X(\Ac)$.
\end{defin}
\begin{remar}
Prop.\ \ref{tor_arr} (\ref{posiso}) shows that the assignment $E\mapsto Y^E$ is
an isomorphism of posets.
\end{remar}
\begin{remar}\label{georem}
Prop.\ \ref{tor_arr} yields a geometric interpretation of the strong
symmetry of $X(\Ac)$ by its toric arrangement:
For any hyperplane $H\in\Ac$ the union of the curves $V(\tau)$, $\tau\subseteq H$,
$\dim(\tau)=r-1$, is the set of fixed points of $X(\Ac)$ under the action of the subtorus
$T^H=N^H\otimes\CC^*=Y^H\cap T$ of $T$. This union meets the hypersurface
$Y^H$ exactly in the set of its fixed points under the action of its torus $T^H$,
and does not meet any other $Y^{H'}$.
The same holds for any $E\in L(\Ac)$ for $Y^E$ and the varieties $V(\tau)$,
$\tau\subseteq E$, $\dim(\tau)=\dim E$, inside any other $Y^F$, $E\subseteq F\in L(\Ac)$.
\end{remar}
\begin{proof}[Proof of Prop.\ \ref{tor_arr}]
(i) follows from the fact that $f^E$ is induced by the map
$\iota$ between strongly symmetric fans.
(ii) follows from the orbit decompositions of $Y^E$ and $V(\sigma)$ and the
fact that $f^E$ maps $\orb^E(\sigma)$ into $\orb(\sigma)$, because
$\iota_\RR(\sigma)=\sigma$ for $\sigma\in\Sigma^E$.
If $\sigma\not\subseteq E$, no $\orb^E(\tau)$, $\tau\subseteq E$, can
meet $\orb(\sigma)$. If $\sigma\subseteq E$, $\orb^E(\sigma)=\orb(\sigma)\cap Y^E$.
(iii) follows directly from the definition of the morphisms $f^E$.
(iv) It is sufficient to assume that $F$ is a hyperplane $H\in\Ac$ with
$E\not\subseteq H$. Let $s=\dim E$. Then $\dim Y^{E\cap H}=s-1$ and
$Y^{E\cap H}\subseteq Y^E\cap Y^H$. Suppose that there is a point
$x\in Y^E\cap Y^H$ and $x\notin Y^{E\cap H}$. Let then $\sigma\in\Sigma$
be a maximal cone with $x\in\orb(\sigma)$.
Then $Y^{E\cap H}\cap\orb(\sigma)\subsetneq Y^E\cap Y^H\cap \orb(\sigma)$.
By property (ii), $\sigma\subseteq E\cap H$ and
\[ Y^{E\cap H}\cap\orb(\sigma)=\orb^{E\cap H}(\sigma),\quad
Y^E\cap\orb(\sigma) = \orb^E(\sigma),\]
\[ Y^H\cap\orb(\sigma) = \orb^H(\sigma) \]
are subtori of $\orb(\sigma)$ of dimensions $s-1-\dim(\sigma)$,
$s-\dim(\sigma)$, $r-1-\dim(\sigma)$ and
$\dim(\orb(\sigma))=r-\dim(\sigma)$.
It follows that $Y^E\cap Y^H\cap \orb(\sigma)$ is a subtorus
of dimension $s-1-\dim(\sigma)$, too.
Hence $Y^{E\cap H}\cap\orb(\sigma)= Y^E\cap Y^H\cap \orb(\sigma)$,
a contradiction.
(v) follows from (ii) for the special case $T=\orb(\{0\})$. Then
the definition of a toric arrangement as in \cite{CP05} is satisfied.
\end{proof}
Property (ii) of Prop.\ \ref{tor_arr} also implies that the
intersections $Y^E\cap V(\sigma)$ are smooth, irreducible and
proper of dimension $\dim E-\dim \sigma$. Moreover, we have the following.
\begin{propo}With the above notation:
\begin{enumerate}
\item For any fixed orbit closure $V(\tau)\subseteq X(\Ac)$
the intersections $Y^E\cap V(\tau)$, $\tau\subseteq E$
constitute the toric arrangement $\{Y^{E/\langle\tau\rangle_\RR}\}$
of the variety $V(\tau)$ corresponding to the crystallographic
arrangement $\Ac_D$, $D=\langle\tau\rangle_\RR$ with
fan $\Star(\tau)$ as in Prop.\ \ref{Udelta}.
\item\label{p68_2} The intersections $Y^E\cap\orb(\tau)$, $\tau\subseteq E$,
form a toric arrangement of subtori in each orbit $\orb(\tau)$ of $X(\Ac)$.
\end{enumerate}
\end{propo}
\begin{proof}
Let $D=\langle\tau\rangle_\RR\subseteq E$ and $\overline{E}=E/D$.
Under the isomorphism $X_{\Star(\tau)}\cong V(\tau)$ an orbit
$\orb(\sigma)\subseteq V(\tau)$, $\tau\subseteq\sigma$, is
identified with the orbit $\orb(\overline{\sigma})$ with
$\overline{\sigma}\subseteq V/D$ the image of $\sigma$.
Likewise, an orbit $\orb^E(\sigma)$ in $X^E$ with
$\tau\subseteq\sigma\subseteq E$ can be identified
with the orbit $\orb^{\overline{E}}(\overline{\sigma})$ in the
variety $X_{\Star(\tau)^{\overline{E}}}\cong V^E(\tau)$ in $X^E$.
It follows that the embeddings $X^E\hookrightarrow X(\Ac)$
and $X_{\Star(\tau)^{\overline{E}}}\hookrightarrow X_{\Star(\tau)}=V(\tau)$
are compatible and thus that $Y^E\cap V(\tau)$ is the image of the latter.
(\ref{p68_2}) follows from (v) of Prop.\ \ref{tor_arr} since $\orb(\tau)$
is the torus of $V(\tau)$.
\end{proof}
\begin{examp}\label{exsurfarr}
The system $\{Y^E\}_{E\in L(\Ac)}$ for strongly symmetric\ toric
surfaces has the following special features (see Fig.\ \ref{exsurf}).
Here each $E$ is a line of $\Ac$.
\begin{enumerate}
\item For $\rho\subseteq E$, $Y^E\cap D_\rho=\orb^E(\rho)$
is a point $p_\rho\in\orb(\rho)$.\vspace{3pt}
\item $Y^E\smallsetminus(D_\rho\cup D_{-\rho})$ is the torus
$T^E\cong k^\ast$ of $Y^E$.\vspace{3pt}
\item $Y^E\cap D_{\rho'}=\emptyset$ for $\rho'\not\subseteq E$.\vspace{3pt}
\item $Y^E\cap Y^F=\{1\}\subseteq T$ for any $E,F\in L(\Ac)$.
\end{enumerate}
Notice here that all the divisors $D_\rho$ and $Y^E$ are
isomorphic to $\PP_1$ and that the intersections are transversal.
\begin{figure}
\begin{center}
\setlength{\unitlength}{0.4pt}
\begin{picture}(400,400)(120,500)
{{\moveto(305.48109,872.03413262)
\curveto(344.13249,828.35887262)(391.38165,788.06130262)(528.87255,783.21584262)
\strokepath}}
{{\moveto(462.9317,839.73657262)
\curveto(438.02479,765.57269262)(489.88627,677.88020262)(544.02133,665.33414262)
\strokepath}}
{{\moveto(536.94694,706.50914262)
\curveto(510.76744,691.83490262)(458.88784,635.55443262)(480.42621,546.36706262)
\strokepath}}
{{\moveto(166.87073,819.55060262)
\curveto(198.75795,772.44999262)(173.03507,693.05182262)(122.46158,645.95121262)
\strokepath}}
{{\moveto(115.73293,702.47194262)
\curveto(193.79234,656.19396262)(183.22661,606.96180262)(154.75914,557.13292262)
\strokepath}}
{{\moveto(127.84451,598.85060262)
\curveto(215.96429,600.21721262)(268.90654,563.34737262)(274.52927,475.04328262)
\strokepath}}
{{\moveto(458.8945,884.14572262)\lineto(180.32805,481.77194262)\strokepath}}
{{\moveto(100.92988,614.99938262)\lineto(592.12194,763.02987262)\strokepath}}
{{\moveto(109.00427,754.95548262)\lineto(558.47865,594.81340262)\strokepath}}
{{\put(598.12194,753.02987262){$Y^E$}}}
{{\put(553.72265625,651.55078125){$D_{\rho}$}}}
{{\put(122.203125,538.1640625){$D_{-\rho}$}\strokepath}}
\end{picture}
\end{center}
\caption{Example \ref{exsurfarr}\label{exsurf}}
\end{figure}
\end{examp}
There is an interesting formula for the divisor classes of the curves $Y^E$
in terms of the toric divisors $D_\rho$ as follows.
Keeping the notation of Section \ref{ranktwo}, let $a_1,\ldots,a_{2t}$ be
a chosen order of the weights of the circular graph of the surface $X(\Ac)$ with
corresponding divisors $D_1,\ldots,D_{2t}$, and let $Y_1=Y^E$ in case $E:=\langle n_1\rangle_\RR$.
Then the standard sequence
$0\rightarrow M\rightarrow \ZZ^{\Sigma(1)}\rightarrow\Pic X(\Ac)\rightarrow 0$
can be represented by the exact sequence
\[ 0\longrightarrow \ZZ^2 \stackrel{(Q,-Q)}{\longrightarrow}\ZZ^t\oplus\ZZ^t
\stackrel{\tiny\begin{pmatrix}A&I\\0&I\end{pmatrix}}{\longrightarrow}\ZZ^{t-2}\oplus\ZZ^t \longrightarrow 0 \]
where $Q^\vee=(n_1,\ldots,n_t)$ is the matrix of the first $t$ primitive elements
and $A^\vee$ is the matrix
\[ A^\vee = \begin{pmatrix}
a_1 & 1 & & & -1 \\
1 & a_2 & \ddots & & \\
& \ddots & \ddots & \ddots & \\
& & \ddots & \ddots & 1 \\
-1 & & & 1 & a_t
\end{pmatrix}\]
of rank $t-2$ expressing the relations $n_{j-1}+a_jn_j+n_{j+1}=0$.
To deduce the formula for $Y_1$ we choose $n_1,n_t$ as the basis of the
lattice $N$. Then
\[ Q^\vee=\begin{pmatrix}1&x_2&\cdots&x_{t-1}&0\\0&y_2&\cdots&y_{t-1}&1\end{pmatrix} \]
and $y_2=1$ since $A\cdot Q=0$.
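As a quick numerical sanity check (our own illustration, not part of the original text), one can verify the circular relations $n_{j-1}+a_jn_j+n_{j+1}=0$ and the stated shape of $Q^\vee$ on the smallest interesting example: the hexagonal strongly symmetric fan of type $A_2$, where $t=3$ and all weights $a_j$ equal $-1$. The choice of rays below is our own.

```python
# Primitive ray generators of the hexagonal (type A_2) fan, listed cyclically.
n = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]
a = [-1] * 6  # all weights of the circular graph equal -1 for this surface

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

# Circular relations n_{j-1} + a_j n_j + n_{j+1} = 0 (indices taken mod 2t).
for j in range(6):
    assert add(add(n[j - 1], scale(a[j], n[j])), n[(j + 1) % 6]) == (0, 0)

# With n_1 = (1,0) and n_t = (0,1) (t = 3) as lattice basis, the columns of
# Q^T are n_1, n_2, n_3, so the middle column gives (x_2, y_2) = (1, 1),
# confirming y_2 = 1.
assert (n[0], n[1], n[2]) == ((1, 0), (1, 1), (0, 1))
print("circular relations and the shape of Q verified for the A_2 hexagon")
```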
\begin{propo}
With the above notation,
\begin{equation}\label{relD}
Y_1\sim D_2+\sum_{\nu=3}^{t-1} y_\nu D_\nu+D_t
\sim D_{t+2}+\sum_{\nu=3}^{t-1} y_\nu D_{\nu+2}+D_{2t}
\end{equation}
up to rational equivalence and $Y_1$ has self-intersection $Y_1^2=0$.
\end{propo}
\begin{remar}
Choosing $n_1$, $n_t$ as a basis, the columns of $Q^\vee$
become the positive roots of the associated Weyl groupoid at the
object corresponding to $Y_1$.
\end{remar}
The formula for the other $Y_\nu=Y^E$, $n_\nu\in E$, follows
by cyclic permutation of the indices. Note that the classes of
$D_2,\ldots,D_t$ are part of a basis of $\Pic X(\Ac)$.
The formula can be derived as follows. If $Y_1$ is equivalent
to $\sum c_\nu D_\nu$,
the intersection numbers $D_\nu^2=a_\nu$, $D_\mu D_\nu\in\{0,1\}$
for $\mu\ne\nu$ and
\[ Y_1 D_\nu =\begin{cases}1 & \nu\in\{1,t+1\}\\ 0 & \text{else} \end{cases} \]
yield a system of equations for the coefficients $c_2,\ldots,c_{2t}$.
This system has a unique solution modulo $(Q,-Q)$ such that $c_1=0$, $c_2=1$,
which is
\[ (c_2,\ldots,c_{2t})=(y_2,\ldots,y_{t-1},1,0,\ldots,0) \text{ mod } (Q,-Q). \]
For that one has to use the relations between the weights $a_1,\ldots,a_t$,
see Section \ref{ranktwo}.
The proof of $Y_1^2=0$ follows from the second equivalence in (\ref{relD}).
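As a worked check (our own addition), consider again the hexagonal surface with $t=3$ and all weights equal to $-1$: there (\ref{relD}) reads $Y_1\sim D_2+D_3\sim D_5+D_6$, and the intersection numbers can be computed directly from the rules $D_\nu^2=a_\nu$ and $D_\mu D_\nu=1$ exactly for cyclically adjacent rays.

```python
t = 3                      # 2t = 6 toric divisors D_1, ..., D_6 arranged in a cycle
a = [-1] * (2 * t)         # all self-intersection numbers equal -1 here

def D_dot(i, j):
    """Intersection number D_i . D_j (1-indexed; cyclically adjacent rays meet once)."""
    if i == j:
        return a[i - 1]
    return 1 if (i - j) % (2 * t) in (1, 2 * t - 1) else 0

def dot(c1, c2):
    """Intersection of divisor classes given as coefficient dicts {index: coeff}."""
    return sum(u * v * D_dot(i, j) for i, u in c1.items() for j, v in c2.items())

Y1 = {2: 1, 3: 1}          # formula (relD) with t = 3: Y_1 ~ D_2 + D_3
Y1_alt = {5: 1, 6: 1}      # second representative: D_{t+2} + D_{2t} = D_5 + D_6

# Y_1 . D_nu = 1 for nu in {1, t+1} = {1, 4} and 0 otherwise, for both representatives.
assert [dot(Y1, {nu: 1}) for nu in range(1, 7)] == [1, 0, 0, 1, 0, 0]
assert [dot(Y1_alt, {nu: 1}) for nu in range(1, 7)] == [1, 0, 0, 1, 0, 0]
# Self-intersection via the two rationally equivalent representatives: Y_1^2 = 0.
assert dot(Y1, Y1_alt) == 0
print("divisor class formula and Y_1^2 = 0 verified on the hexagonal surface")
```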
\begin{remar}
The relations between the weights $a_1,\ldots,a_{2t}$ naturally lead to the
Grassmannian and to cluster algebras of type $A$, see \cite{p-CH09d}
for more details.
\end{remar}
\section{Further remarks}
\subsection{Reducibility}
An arrangement $(\Ac,V)$ is called \emph{reducible} if there exist
arrangements $(\Ac_1,V_1)$ and $(\Ac_2,V_2)$ such that $V=V_1\oplus V_2$ and
\[ \Ac = \Ac_1\times\Ac_2:=\{H\oplus V_2\mid H\in \Ac_1\}
\cup \{V_1\oplus H\mid H\in \Ac_2\}, \]
compare \cite[Def.\ 2.15]{OT}.
It is easy to see that a crystallographic arrangement $(\Ac,V)$ is reducible
if and only if the corresponding Cartan scheme is reducible
in the sense of \cite[Def.\,4.3]{p-CH09a}, i.e.\ the generalized
Cartan matrices are decomposable.
For the fan $\Sigma$ corresponding to $\Ac$, reducibility translates to the fact
that there are fans $\Sigma_1$ and $\Sigma_2$ such that
\[ \Sigma = \Sigma_1\times \Sigma_2 =\{\sigma\times\tau \mid \sigma\in\Sigma_1,\tau\in\Sigma_2\}. \]
Notice that by Lemma \ref{lemrestr} the fans $\Sigma_1$ and $\Sigma_2$ are strongly symmetric\ and smooth as well.
\subsection{Inserting one hyperplane and blowups}
In higher dimension, the situation is much more complicated.
There are only finitely many crystallographic arrangements for each
rank $r>2$. Whether the insertion of new hyperplanes corresponds to
a series of blowing-ups is unclear. The case of a single new hyperplane
may be explained in the following way:
\begin{propo}
Let $(\Ac,R)$ and $(\Ac',R')$ be crystallographic arrangements of rank $r$ with $\Ac'=\Ac\dot\cup\{H\}$.
Then the toric morphism $X_{\Sigma'}\rightarrow X_{\Sigma}$ induced by the subdivision
is a blowup along two-dimensional torus invariant subvarieties of $X_{\Sigma}$.
\end{propo}
\begin{proof}
Let $\sigma\in\Sigma:=\Sigma_R$ be a maximal cone with $H\cap\stackrel{\circ}{\sigma} \ne\emptyset$.
We prove that $H$ star subdivides $\sigma$.
The hyperplane $H$ divides $\sigma$ into two parts $\sigma_1'$ and $\sigma_2'$ which
intersect in a codimension one cone $\tau'$.
Note that $|\sigma(1)|=r$, $|\sigma_1'(1)\cup\sigma_2'(1)|=r+1$,
thus there is exactly one ray $\rho'$ involved which is not in $\Sigma$.
Let $\rho_1\subseteq\sigma_1',\rho_2\subseteq\sigma_2'$ be the rays which are not
subsets of $\tau'$, and $\tau\subseteq\sigma$ be the cone generated by $\rho_1,\rho_2$.
Then $H\cap\tau=\rho'$.
But by Cor.\ \ref{restrU}, $\Ac'^{\langle\tau\rangle_\RR}$ is a crystallographic arrangement in which
$\langle\rho_1\rangle_\RR$, $\langle\rho'\rangle_\RR$, $\langle\rho_2\rangle_\RR$
are subsequent hyperplanes. By Section \ref{ranktwo} we obtain that $\rho'$ is generated by the
sum of the generators of $\rho_1$ and $\rho_2$.
\end{proof}
\subsection{Automorphisms}
Let $\Sigma$ be a strongly symmetric\ smooth fan,
$(\Ac,R)$ the corresponding crystallographic arrangement.
\begin{defin} If $\Ac$ comes from the connected simply connected Cartan scheme
$\Cc =\Cc (I,A,(\rho_i)_{i\in I},(C^a)_{a\in A})$, and $a\in A$, then we call
\[ \Aut(\Cc,a):=\{w \in \Hom(a,b) \mid b\in A,\:\: R^a=R^b \} \]
the \emph{automorphism group of $\Cc$ at $a$}. This is a finite subgroup
of $\Aut(\ZZ^r)\cong\Aut(M)$ because the number of all morphisms is finite.
\end{defin}
Since $\Cc$ is connected, $\Aut(\Cc,a)\cong \Aut(\Cc,b)$ for all $a,b\in A$.
The choice of $a\in A$ corresponds to the choice of a chamber and thus of
an isomorphism $\ZZ^r\cong M$.
Every element of $\Aut(\Cc,a)$ clearly induces a toric automorphism of $\Sigma$.
The groups $\Aut(\Cc,a)$ have been determined in \cite{p-CH10}, see \cite[Thm.\ 3.18]{p-CH10}
and \cite[A.3]{p-CH10}.
However, sometimes there are elements of $\Aut(\Sigma)$ which are not induced by an element
of $\Aut(\Cc,a)$.
For example, we always have the toric automorphism
\[ N\rightarrow N, \quad v\mapsto -v, \]
but there is a sporadic Cartan scheme of rank three with trivial automorphism group.
\section{Introduction}
The rapid integration of sensor technologies, fueled by computational algorithms, creates a unique opportunity for remote health monitoring, long-term fitness tracking, and fall detection. A core task to support these applications is activity recognition. However, differences in sensing platforms and user behavior have limited the generalizability of activity recognition models. For example, a user may replace an old mobile device with a new model. While the user wants to maintain the usability of a well-trained motion analysis app installed on the old device, the user would like to avoid providing additional manual annotations for model re-training on the new device.
\begin{figure}[tbh!]
\vspace{-3mm}
\centering
\includegraphics[width=2.6in]{figs/cross-p.jpg}
\vspace{-2mm}
\caption{Sensor readings from two smartphone models.}
\label{fig:cross}
\vspace{-3mm}
\end{figure}
Activity recognition performance is also adversely impacted by sensor biases from low-quality modules and sampling rate instability \cite{hharshort}. For example, the F1-score declines by $34.4$\% when training and test data belong to different devices (e.g., Samsung Galaxy S3 vs.\ LG Nexus). Figure \ref{fig:cross} shows acceleration readings for one subject's walking behavior gathered by two smartphones. Such data divergence can propagate through the data processing pipeline, leading to a decline in accuracy.
\begin{figure}[tbh!]
\centering
\includegraphics[width=2.6in]{figs/framework.jpg}
\vspace{-2mm}
\caption{TransFall's sequential transfer learning design.}
\label{fig:frame}
\vspace{-3mm}
\end{figure}
We propose \textit{TransFall} to overcome the challenges caused by cross-domain variations while reducing the dependency on labeled training data. As shown in Figure \ref{fig:frame}, the framework starts with a two-tier data transformation layer based on marginal distribution matching approaches, followed by a label estimation layer using a kernel method encoded in a weighted least-mean-squares fitting, and ends with a model generation layer using the previous step's label set.
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figs/dt}
\vspace{-2mm}
\caption{Digital twin vision.}
\label{fig:dt}
\vspace{-3mm}
\end{figure}
Development of transfer learning algorithms for activity recognition is also central in the long-term goal of creating a person's {\em digital twin}. A digital twin is a digital replica of the human subject, built from multiple information sources including mobile sensor data. This quantified self provides a platform to understand the relationship between human behavior and influencing factors including health, genetics, and the environment. A digital twin offers the basis for automating health assessment and evaluating potential interventions on digital prototypes. Fusing data from multiple information sources, and building a model that is robust enough to accurately depict a diverse population, relies on the ability to effectively transfer data between domains. TransFall thus represents one component for completing the digital twin vision.
\section{The Transfer Learning Framework}
\subsection{Problem Statement}\label{sec:problem}
Let $X_s$ be a set of $N_s$ labeled data samples from source domain $\mathcal{S}$, where $x^s \in X_s$ is a $d$-dimensional variable drawn from marginal distribution $P_s(x)$ and label set $Y_s$ represents $L$ activities. Furthermore, let $X_t$ be a set of $N_t$ unlabeled data samples from target domain $\mathcal{T}$, and $x^t \in X_t$ be drawn from marginal distribution $P_t(x)$ with the same set of activities. TransFall offers an activity recognition model $\mathcal{M}: \mathcal{X} \mapsto \mathcal{Y}$ capable of accurately estimating the corresponding labels for the data samples collected on $\mathcal{T}$. With changes to the sensing platform, there exists a distribution shift in covariates $x$ between $X_s$ and $X_t$, resulting in $P_s(x) \neq P_t(x)$. Therefore, the first task is to address the covariate shift through data transformation.
\subsection{Vertical Transformation}\label{sec:vertical}
The vertical transformation layer matches the marginal distributions of individual column variables between $X_s$ and $X_t$. Due to the multi-dimensional nature of data sample $x \in \mathcal{R}^d$, the marginal distribution $P(x)$ can be determined by a joint probability distribution $P(x_1, \cdots, x_d)$. Raw signals are first converted into vector objects by projecting the input signals onto a designated feature space $\mathcal{F}$. Each feature $f_i \in \mathcal{F}$ is computed independently. Hence, we use the naive Bayes approximation to factorize the joint probability distribution, or
$P(x) = P(x_1, \cdots, x_d) = \prod_{i = 1}^{d} P(x_i)$.
Therefore, the optimization objective of TransFall can be viewed as the summation of $d$ subordinate optimization problems for $d$ column variables, where $\phi_i \in \Phi$ is a mapping function that converts the original probability distribution of a one-dimensional variable into a different distribution.
\begin{equation}
\underset{\Phi}{\text{Minimize}} \sum_{i = 1}^{d} \int_{x_i} |P_s(x_i) - \phi_i (P_t(x_i))| dx_i
\end{equation}
In practice, the true distribution of a random variable is unattainable, and hence the normal distribution is commonly adopted to approximate the marginal distribution of sensor data. As a result, each column variable $x_i$ is assumed to be drawn from a normal distribution with mean $\mu_i$ and variance $\sigma_i^2$, denoted as $x_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$.
To minimize the difference of the means and variances between column variables in $X_s$ and $X_t$, we perform a linear transformation on each dimension $i \in [1,d]$ in $X_t$ with respect to $X_s$. The output of the vertical transformation is the transformed target dataset $\tilde{X_t}$, where $\tilde{x}_i^t \sim \mathcal{N}(\mu_i^s, {\sigma_i^s}^2)$ for each $i \in [1,d]$.
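The per-column linear transformation can be sketched as a standardize-and-rescale step. This is our own minimal illustration, not the authors' code; the synthetic Gaussian features and dimensions are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(500, 2))   # source features
Xt = rng.normal(loc=[0.3, 4.0], scale=[1.5, 1.0], size=(400, 2))   # shifted target features

def vertical_transform(Xs, Xt):
    """Linearly rescale each target column to the source column's mean and variance."""
    mu_s, sd_s = Xs.mean(axis=0), Xs.std(axis=0)
    mu_t, sd_t = Xt.mean(axis=0), Xt.std(axis=0)
    return (Xt - mu_t) / sd_t * sd_s + mu_s

Xt_tilde = vertical_transform(Xs, Xt)
# After the transformation, each target column matches the source moments.
assert np.allclose(Xt_tilde.mean(axis=0), Xs.mean(axis=0))
assert np.allclose(Xt_tilde.std(axis=0), Xs.std(axis=0))
```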
\subsection{Horizontal Transformation} \label{sec:horizontal}
The horizontal transformation layer further reduces the discrepancies in multi-variate variables between $X_s$ and $\tilde{X_t}$ using importance sampling, a common method to address covariate shift \cite{imsample-1}. This technique finds a weight factor $\beta$ for $X_s$ that assigns higher weight to those source data samples that are more representative of the target dataset. The goal of this transformation is as follows:
\begin{equation}\label{eq:opt2}
\underset{\beta}{\text{Minimize}} \int_x |\beta(x)P_s(x) - P_t(x)| dx \; .
\end{equation}
Because distributions $P_s(x)$ and $P_t(x)$ are unknown in practice, we use a kernel-based algorithm, empirical Kernel Mean Matching (eKMM) \cite{datashift}, to find the optimal weight factor $\beta$ with the use of Reproducing Kernel Hilbert Space (RKHS) technique.
Let $\Phi: \mathcal{X} \mapsto \mathcal{F}$ be a function that maps a vector variable $X$ onto a feature space $\mathcal{F}$. The output of the eKMM algorithm is the optimal weight factor $\beta$, which minimizes the distance between the $\beta$-weighted empirical mean of $X_s$ and the empirical mean of $\tilde{X_t}$ in the feature space $\mathcal{F}$, as shown in the following equations.
\begin{equation}\label{eq:kmm}
\begin{aligned}
& \underset{\beta}{\text{Minimize}}
& & \| \frac{1}{N_s}\sum_{i = 1}^{N_s}\beta_i \Phi(x_i^s) - \frac{1}{N_t}\sum_{j = 1}^{N_t}\Phi(x_j^t) \|^2 \\
& \text{Subject to}
& & \beta_i \geq 0 , \; i \in [1, N_s] \\
&&& |\frac{1}{N_s} \sum_{i = 1}^{N_s}\beta_i - 1| \leq \epsilon.
\end{aligned}
\end{equation}
The first constraint in (\ref{eq:kmm}) refers to the non-negative probability property and the second constraint guarantees that the re-weighted distribution $\beta(x)P_s(x)$ is close to a valid probability distribution that sums to $1$. We use the RKHS technique \cite{rkhs-2} to solve the optimization problem in (\ref{eq:kmm}) based on an important property:
\begin{proposition}
Given a positive definite kernel $k$ over a vector space $\mathcal{X}$, we can find a Hilbert space $\mathcal{H}$ and a mapping function $\Phi: \mathcal{X} \mapsto \mathcal{H}$, such that
$k(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle_{\mathcal{H}}$, where $x_i, x_j \in \mathcal{X}$.
\end{proposition}
With the use of a kernel function that is positive definite on the Euclidean space $\mathcal{R}^{d}$, optimization (\ref{eq:kmm}) can be solved without explicitly defining the mapping function $\Phi$. For this purpose, we use the Gaussian kernel. Therefore, the objective function (\ref{eq:kmm}) can be rephrased as:
\begin{equation}\label{eq:quad}
\underset{\beta}{\text{Minimize }} \frac{1}{2} \beta^\top K \beta - \kappa^\top \beta
\end{equation}
\noindent where the kernel matrix $K$ and the kernel expansion $\kappa$ are given by
\begin{equation} \label{eq:kernelmatrix}
\begin{aligned}
K_{ij} & := k(x_i^s, x_j^s); \;\;\;\;\;\;
\kappa_i & := \frac{N_s}{N_t} \sum_{j = 1}^{N_t} k(x_i^s, x_j^t)
\end{aligned}
\end{equation}
As a result, the optimal $\beta$ can be determined by solving (\ref{eq:quad}) with the constraints listed in (\ref{eq:kmm}) using quadratic programming. Note that, similar to the vertical transformation module, the horizontal transformation module also has the potential to be coupled with existing machine learning algorithms that support sample re-weighting.
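The following sketch (our own, not the authors' implementation) approximates the eKMM weights: it minimizes the quadratic objective (\ref{eq:quad}) by projected gradient descent rather than an off-the-shelf quadratic programming solver, and tightens the second constraint of (\ref{eq:kmm}) to an exact mean-one normalization. The Gaussian bandwidth, learning rate, and synthetic one-dimensional data are arbitrary choices for illustration.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ekmm_weights(Xs, Xt, sigma=1.0, lr=1e-3, steps=2000):
    """Approximate eKMM: minimize 0.5 b^T K b - kappa^T b subject to b >= 0
    and mean(b) = 1, via projected gradient descent (a QP-solver stand-in)."""
    Ns, Nt = len(Xs), len(Xt)
    K = gaussian_kernel(Xs, Xs, sigma)
    kappa = (Ns / Nt) * gaussian_kernel(Xs, Xt, sigma).sum(axis=1)
    b = np.ones(Ns)
    for _ in range(steps):
        b -= lr * (K @ b - kappa)   # gradient of the quadratic objective
        b = np.clip(b, 0.0, None)   # enforce b_i >= 0
        b *= Ns / b.sum()           # enforce the (tightened) mean-one constraint
    return b

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (200, 1))   # source: standard normal
Xt = rng.normal(1.5, 0.5, (150, 1))   # target: shifted and narrower
b = ekmm_weights(Xs, Xt)
# Source samples lying where the target density is high get larger weights.
near = b[np.abs(Xs[:, 0] - 1.5) < 0.5].mean()
far = b[Xs[:, 0] < 0.0].mean()
assert b.min() >= 0 and abs(b.mean() - 1.0) < 1e-9 and near > far
```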
\subsection{Label Estimation}\label{sec:label}
Given the transformed target dataset $\tilde{X_t}$ and the weight factor $\beta$, which approximates the distribution of $\tilde{X_t}$ using the source dataset $X_s$, the label estimation module estimates the label set $\hat{Y}_t$ for $\tilde{X_t}$ in preparation for training an activity recognition model for the target domain. The label estimation objective can be written as follows, for $x_i \in X_s$: \[\underset{f}{\text{Minimize}} \sum_{i = 1}^{N_s} \beta(x_i)\,|y_i - f(x_i)|\]
We can rewrite this optimization problem using a weighted least-mean-squares (LMS) fitting technique with a 2-norm regularization term as shown below. The LMS technique is commonly used for parameter estimation in linear models \cite{datashift}.
\begin{equation}\label{eq:minf}
\underset{f}{\text{Minimize }} \sum_{i = 1}^{N_s} \beta_i (y_i^s - f(x_i^s))^2 + \lambda \|f\|^2
\end{equation}
However, the optimal function $f$ in (\ref{eq:minf}) is not necessarily linear. Therefore, we convert (\ref{eq:minf}) using a linear model, based on the representer theorem \cite{representer}, to the following:
\begin{displaymath}
\underset{\alpha}{\text{Minimize }} \sum_{i = 1}^{N_s} \beta_i (y_i^s - \sum_{j = 1}^{N_s}\alpha_j k(x_j^s, x_i^s))^2 + \lambda \|\sum_{j = 1}^{N_s} \alpha_j k(x_j^s, \cdot)\|^2
\end{displaymath}
which can be written (after extension) as shown in (\ref{eq:mina}).
\begin{equation}\label{eq:mina}
\underset{\alpha}{\text{Minimize }} (Y_s - K\alpha)^\top \overline{\beta} (Y_s - K\alpha) + \lambda \alpha^\top K\alpha
\end{equation}
\noindent where $K$ represents the kernel matrix in (\ref{eq:kernelmatrix}) and $\overline{\beta}$ is an $N_s \times N_s$ diagonal matrix with the entries of $\beta$ on its diagonal. If $K$ and $\overline{\beta}$ are full rank matrices, the optimal solution for $\alpha$ can be derived using:
\begin{equation}\label{eq:aresult}
\alpha = (\lambda {\overline{\beta}}^{ -1} + K)^{-1}Y_s
\end{equation}
Because the label set in activity recognition often contains multiple activity classes, we use a one-to-all approach for label estimation, by first solving $L$ optimal linear models $\alpha^m$ for all activity labels, and then combining the $L$ corresponding estimations for each data sample $x_i^t \in \tilde{X_t}$ to make the final prediction.
\begin{equation}
\hat{y}^t_i = \underset{m}{\text{argmax }} \sum_{j = 1}^{N_s} \alpha_j^m k(x_j^s, x_i^t)
\end{equation}
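A compact sketch of the label estimation step (our own illustration): solve (\ref{eq:aresult}) once per class against indicator targets and take the argmax of the kernel expansion scores. The regularization value, bandwidth, and two-blob synthetic data are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def estimate_labels(Xs, ys, Xt, beta, lam=0.1, sigma=1.0):
    """One-to-all label estimation: alpha = (lam * diag(beta)^{-1} + K)^{-1} Y
    per class, then pick the class with the largest kernel expansion score."""
    K = gaussian_kernel(Xs, Xs, sigma)
    B_inv = np.diag(1.0 / np.clip(beta, 1e-8, None))     # guard against beta_i = 0
    classes = np.unique(ys)
    Y = (ys[:, None] == classes[None, :]).astype(float)  # one indicator column per class
    alpha = np.linalg.solve(lam * B_inv + K, Y)          # solves all classes at once
    scores = gaussian_kernel(Xt, Xs, sigma) @ alpha
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(1)
Xs = np.vstack([rng.normal(-2, 0.5, (60, 2)), rng.normal(2, 0.5, (60, 2))])
ys = np.array([0] * 60 + [1] * 60)
Xt = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
yhat = estimate_labels(Xs, ys, Xt, beta=np.ones(len(Xs)))
accuracy = (yhat == np.array([0] * 40 + [1] * 40)).mean()
assert accuracy > 0.9   # well-separated classes should be labeled almost perfectly
```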
\begin{table}[t!]
\caption{Dataset and experiment abbreviations.}
\small
\label{tab:notation}
\centering
\begin{tabular}{c|c|l}
\hline
&\textbf{Notation} & \multicolumn{1}{c}{\textbf{Description}} \\
\hline
\multirow{3}{*}{Dataset} & Phone & 8 smartphones and 9 subjects\\
& Watch & 4 smartwatches and 5 subjects\\
& HART & one smartphone and 30 subjects\\
\hline
\multirow{4}{*}{Cross-Platform} & P2P-S & Same-model phone-to-phone\\
& P2P-D & Different-model phone-to-phone\\
& W2W-S & Same-model watch-to-watch\\
& W2W-D & Different-model watch-to-watch\\
\hline
\multirow{6}{*}{Comparison Group} & NN & Nearest neighbor\\
& DT & Decision tree\\
& LR & Logistic regression\\
& SVM & Support vector machine\\
& Upper & Using ground truth data\\
& IWLSPC & Method in \cite{tl-23} \\
\hline
\end{tabular}
\end{table}
\section{Experimental Results}\label{sec:results}
We conducted experiments on three publicly available datasets \cite{hharshort, hart2short}, as listed in Table \ref{tab:notation}. We evaluated the performance of TransFall in three scenarios: \textit{cross-platform}, \textit{cross-subject}, and \textit{hybrid}. We use the notations shown in Table \ref{tab:notation} to refer to the various datasets, transfer learning scenarios, and comparison approaches. The comparison approach IWLSPC refers to the importance-weighted least-squares probabilistic classifier introduced in \cite{tl-23}. IWLSPC combines a least-squares probabilistic model with a sample re-weighting approach to handle the changes in data distribution between the source and the target datasets. While the sample re-weighting technique in TransFall is similar to that of IWLSPC, TransFall performs a two-tier data transformation on both datasets to empirically match their distributions for more accurate label estimation.
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figs/label-cross-p.jpg}
\vspace{-2mm}
\caption{Label estimation results for cross-platform scenario sub-cases.}
\label{fig:label-cp}
\vspace{-5mm}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figs/har-cross-p.jpg}
\vspace{-2mm}
\caption{Activity recognition results for cross-platform scenario sub-cases.}
\label{fig:ar-cp}
\vspace{-5mm}
\end{figure}
\subsection{Cross-Platform Transfer Learning}
\label{sec:crossp}
This scenario refers to the case when source data and target data are gathered using different devices. We further divide this scenario into four cases as shown in Table \ref{tab:notation}.
Figure \ref{fig:label-cp} shows the results of label estimation using different approaches in the four sub-cases, where the red central mark on each box indicates the mean value of labeling accuracy, the bottom and top edges indicate the 25\textsuperscript{th} and 75\textsuperscript{th} percentiles, respectively, and the outliers are denoted by plus symbols.
Overall, all approaches achieve higher accuracy on the data gathered by smartphones than on smartwatch data, mainly due to the lower stability of data collection on smartwatches. On the smartphone data, the labeling accuracy is higher in the P2P-S case than in the P2P-D case. This result matches general expectation: in the P2P-D case the source and target devices are different models, which leads to larger divergence between the obtained datasets due to differences such as sampling frequency and platform configuration.
Compared to the other five approaches shown in Figure \ref{fig:label-cp}, TransFall achieves the highest labeling accuracy on average, with a correct labeling rate of $0.88$, $0.79$, $0.41$ and $0.56$ in the four cases, respectively. The increase in the labeling accuracy of TransFall compared to other approaches is $>2.7$\% for P2P-S, $>6.3$\% for P2P-D, $>16.6$\% for W2W-S, and $>16.1$\% for W2W-D.
In this task, a logistic regression model was empirically chosen to carry out activity recognition after label information transfer. Figure \ref{fig:ar-cp} shows the results in the four transfer learning cases. In general, the classification accuracy is consistent with the labeling accuracy, because the quality of the training dataset is determined by the precision of labeling the target data in the previous task.
In Figure \ref{fig:ar-cp}, the performance upper bound (referred to as ``Upper'') of a machine learning model trained with the ground truth is dramatically lower on the data gathered by smartwatches than on smartphone data. The accuracy upper bound is $93.3$\% in the P2P-S case, but $75.9$\% in the W2W-S case. This performance decline reflects the lower quality of sensor data samples collected using smartwatches.
TransFall still achieves improved performance over other approaches. Its classification accuracy is $88.4$\%, $76.6$\%, $35.9$\% and $47.5$\% for the four sub-cases, respectively. Moreover, the performance improvement of the machine learning model trained by TransFall is $>7.7$\% compared to the machine learning models trained by other approaches on smartphone data, and $>19$\% on smartwatch data.
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figs/label-cross-s.jpg}
\vspace{-2mm}
\caption{Label estimation results for the cross-subject scenario.}
\label{fig:label-cs}
\vspace{-4mm}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width = \linewidth]{figs/har-cross-s.jpg}
\vspace{-2mm}
\caption{Activity recognition results for the cross-subject scenario.}
\label{fig:ar-cs}
\vspace{-4mm}
\end{figure}
\subsection{Cross-Subject Transfer Learning}
\label{sec:crosss}
In this scenario, source data and target data are collected from two subjects using the same type of mobile device. Figure \ref{fig:label-cs} shows the results of label estimation on each dataset separately. Almost all approaches perform better on the HART dataset than on the other two datasets. One possible explanation is that the HART dataset contains less noise, which results in a more informative classification task. Moreover, TransFall performs better than the other approaches in the comparison group, with an accuracy increase of $>6.9$\% on the Phone dataset, $>9.5$\% on the Watch dataset, and $>3.4$\% on the HART dataset.
Given the label sets obtained using the alternative approaches, we can examine the performance of activity models trained with each label set. Figure \ref{fig:ar-cs} shows the activity recognition results. Again, most approaches perform best on the HART dataset. The empirical upper bound of classification accuracy is $95.9$\% on the HART dataset, $92.1$\% on the Phone dataset, and $88.0$\% on the Watch dataset. TransFall again exhibits the best recognition performance, with an accuracy increase of $>7.4$\% on the Phone dataset, $>20.2$\% on the Watch dataset, and $>4.4$\% on the HART dataset.
\section{Conclusion and Future Work}
\label{sec:conclusion}
TransFall integrates a two-tier data transformation to perform transfer learning. Experimental results demonstrate that TransFall consistently improves activity recognition accuracy compared to several alternative approaches. A limitation of TransFall is the assumption of consistent feature alignment between the source and target datasets. Our future work involves improving the generalizability of the current framework to handle variations in feature space between source and target datasets.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
A grand challenge problem at the forefront of
space physics and astrophysics is to understand how the energy of
turbulent plasma flows and electromagnetic fields is converted into
energy of the plasma particles, either as heat or some other form of
particle energization. Under the typically low-density and
high-temperature conditions of turbulent plasmas in the heliosphere,
such as the solar wind, the turbulent dynamics is weakly collisional,
requiring the application of six-dimensional (3D-3V) kinetic plasma
theory to follow the evolution of the turbulence, where the damping of
the turbulent fluctuations occurs due to collisionless interactions
between the electromagnetic fields and the individual plasma
particles. Although \emph{in situ} spacecraft measurements in the
solar wind provide detailed information about the electromagnetic and
plasma fluctuations, these measurements are typically limited to one point
(or, at most, a few points) in space. Of great benefit to plasma
turbulence research would be a scheme to use single-point measurements
of the electromagnetic fields and particle velocity distribution
functions (VDFs) to diagnose the collisionless damping of the
turbulent fluctuations and to characterize how the damped turbulent
energy is distributed to particles with different velocities.
Here we present an innovative technique to identify and characterize
the collisionless mechanisms that govern the net transfer of energy
from the electromagnetic fields to the plasma particles by correlating
measurements of the electromagnetic fields and particle VDFs at a
single point in space. These \emph{field-particle correlations} yield
a local estimate of the rate of particle heating, and further provide
a characteristic \emph{velocity-space signature} of the collisionless
damping mechanism that leads to the energization of the plasma
particles.
Early attempts to explore wave-particle interactions using spacecraft
measurements sought the spatial or temporal coincidence of wave fields
with enhanced particle fluxes
\citep{Gough:1981,Park:1981,Kimura:1983}. Later, wave-particle
correlators were flown on rockets and spacecraft to identify the
phase-bunching of electrons by correlating the counts of electrons in
a single energy and angle bin with the phase of the dominant wave
\citep{Ergun:1991a,Ergun:1991b,Muschietti:1994,Watkins:1996,Ergun:1998,Ergun:2001,Kletzing:2005,Kletzing:2006}. Motivated
by modern particle instrumentation with improved temporal and
phase-space resolution, the field-particle correlation technique
described here takes a significant leap forward by recovering the
correlation as a function of particle velocity, generating a much more
detailed velocity-space signature of the collisionless interactions.
Although the novel field-particle correlation technique devised here
is intended for use in diagnosing the damping of turbulent
fluctuations in the weakly collisional solar wind, to illustrate the
concept in a simplified framework, we present here its application to
the 1D-1V Vlasov-Poisson system to explore the collisionless damping
of electrostatic fluctuations in an unmagnetized plasma. After this
demonstration of the fundamental concept of using field-particle
correlations to investigate collisionless damping of fluctuations, we
discuss the application of this technique to spacecraft observations
of solar wind turbulence.
\section{Particle Energization in a Vlasov-Poisson Plasma}
The dynamics of electrostatic fluctuations in a collisionless plasma is
governed by the Vlasov-Poisson equations, where the Vlasov equation
determines the collisionless evolution of the distribution function
for each species $s$, $f_s(x,v,t)$, and the Poisson equation
determines the self-consistent evolution of the electric field, $E(x,t)
= -\partial \phi(x,t)/\partial x$, dictated by the fluctuating charge
density in the plasma.
To diagnose the collisionless transfer of energy between fields and
particles, we define the \emph{phase-space energy density} for a
particle species $s$ as $w_s(x,v,t) = m_s v^2 f_s(x,v,t)/2$, the
energy density per unit length per unit velocity. Integrating $w_s$
over velocity yields the standard \emph{spatial energy density}, and
integrating over volume produces the total microscopic kinetic energy
of the species, $W_s$. Splitting $f_s$ into equilibrium and perturbed
components, $f_s(x,v,t)= f_{s0}(v) + \delta f_s(x,v,t)$---where the
magnitude of $\delta f_s$ is limited only by the physical constraint
$f_s\ge 0$---we can use the Vlasov equation to obtain an equation for
the rate of change of $w_s$,
\begin{eqnarray}
\nonumber
\frac{\partial w_s(x,v,t)}{\partial t} & = - \frac{m_sv^3}{2}\frac{\partial \delta f_s(x,v,t)}{\partial x} - \frac{q_sv^2}{2}
\frac{\partial f_{s0}(v)}{\partial v} E(x,t) \\ & - \frac{q_sv^2}{2}
\frac{\partial \delta f_{s}(x,v,t)}{\partial v} E(x,t).
\label{eq:dwxvdt}
\end{eqnarray}
The rate of change of $w_s$ is governed by three terms: from left to
right, the (linear) ballistic term, the linear wave-particle
interaction term, and the nonlinear wave-particle interaction term.
When integrated over space using either periodic or infinitely distant
boundary conditions, the ballistic and linear wave-particle
interaction terms yield zero net energy transfer. Only the nonlinear
wave-particle interaction term produces a net change in particle
energy. Therefore, the term $-q_sv^2 (\partial \delta f_{s}/\partial
v) E/2$ governs the net rate of energy transfer between the
electromagnetic fields and plasma particles that is associated with
collisionless damping \citep{Howes:2016prep}.
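The three-term decomposition in \eqref{eq:dwxvdt} follows directly from substituting $f_s = f_{s0}(v) + \delta f_s(x,v,t)$ into the Vlasov equation and multiplying by $m_s v^2/2$. The short symbolic check below (using the third-party \texttt{sympy} package, purely as an illustration of the algebra, not part of the original analysis) confirms the identity:

```python
import sympy as sp

x, v, t = sp.symbols('x v t')
m, q = sp.symbols('m_s q_s', positive=True)
f0 = sp.Function('f0')(v)            # equilibrium part, independent of x and t
df = sp.Function('deltaf')(x, v, t)  # perturbed part
E = sp.Function('E')(x, t)

f = f0 + df
# Vlasov: df/dt = -v df/dx - (q E / m) df/dv, so for w = m v^2 f / 2:
dwdt = m * v**2 / 2 * (-v * sp.diff(f, x) - q * E / m * sp.diff(f, v))

ballistic = -m * v**3 / 2 * sp.diff(df, x)
linear_wp = -q * v**2 / 2 * sp.diff(f0, v) * E
nonlinear_wp = -q * v**2 / 2 * sp.diff(df, v) * E

# the three terms reproduce dw/dt exactly
assert sp.simplify(dwdt - (ballistic + linear_wp + nonlinear_wp)) == 0
```

The equilibrium contribution to the ballistic term vanishes because $f_{s0}$ carries no $x$ dependence, which is why only $\delta f_s$ appears in the first term.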
Taking the average of \eqref{eq:dwxvdt} over the entire spatial domain---the approach taken
in quasilinear theory---provides a rigorous approach to determine the
net transfer of energy between the fields and particles, but the
spatial information necessary to perform this average is not
observationally accessible using single-point measurements. At a
single point $x_0$, all three terms of \eqref{eq:dwxvdt} are nonzero.
These terms describe both the \emph{oscillatory energy transfer}
associated with wave motion and the \emph{secular energy transfer}
associated with a net transfer of energy between fields and particles.
Unless the collisionless damping rate is particularly strong, the
magnitude of the oscillatory energy transfer described by these terms
is generally much larger than that of the secular energy transfer, so
the key challenge is to devise a procedure to isolate the
small-amplitude rate of secular energy transfer governed by the
nonlinear wave-particle interaction term.
Note that this local approach is valuable even in numerical
simulations where full spatial information is accessible, because
there is significant evidence that energy dissipation is often
highly localized in space
\citep{Wan:2012,Karimabadi:2013,TenBarge:2013a,Wu:2013a,Zhdankin:2013,Zhdankin:2015a},
so spatial averaging may obscure the details of the local dissipation
mechanism, making it more difficult to identify the physical mechanism
responsible.
\section{Field-Particle Correlation}
The form of the nonlinear
wave-particle interaction term in \eqref{eq:dwxvdt} suggests that the
rate of change of phase-space energy density can be estimated by
correlating single-point measurements of the electric field and
particle VDFs. Below we specify the procedure to isolate the local
secular transfer of energy associated with the collisionless damping of
electrostatic fluctuations in a 1D-1V Vlasov-Poisson plasma.
Labeling discrete measurement times as $t_j \equiv j\Delta t$ for
$j=0,1,2,\ldots$, we define the single-point measurements at position
$x_0$ and time $t_j$ of the field as $E_j \equiv E(x_0,t_j)$ and the
perturbed distribution function as $\delta f_{sj}(v) \equiv \delta
f_{s} (x_0,v,t_j)$. For a correlation interval of $\tau=N\Delta t$, we
define the field-particle correlation at time $t_i$ at position $x_0$ by
\begin{equation}
C_1(x_0,v,t_i, \tau)\equiv \frac{1}{N}\sum_{j=i}^{i+N}-q_s\frac{v^2}{2}
\frac{\partial\delta f_{sj}(v)}{\partial v}E_j.
\label{eq:cfp_sum}
\end{equation}
Note that this correlation is not normalized since the product
directly corresponds to the rate of energy transfer, so the amplitude
variation of this product as a function of velocity yields valuable
information about the nature of the collisionless field-particle
interaction.
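As a concrete illustration of how \eqref{eq:cfp_sum} can be evaluated from single-point time series, the following minimal Python sketch (our own illustrative implementation; the array layouts and names are assumptions, not the authors' code) accumulates the unnormalized correlation over the window $j = i, \ldots, i+N$:

```python
import numpy as np

def correlation_c1(dfs, E, v, q_s, i, N):
    """Discrete field-particle correlation C_1(x0, v, t_i, tau = N*dt).

    dfs : (n_t, n_v) array, perturbed VDF delta f_s(x0, v, t_j)
    E   : (n_t,) array, electric field E(x0, t_j)
    v   : (n_v,) velocity grid
    """
    acc = np.zeros_like(v)
    for j in range(i, i + N + 1):            # j = i, ..., i + N, as in the sum
        ddfdv = np.gradient(dfs[j], v)       # velocity derivative of delta f
        acc += -q_s * v**2 / 2.0 * ddfdv * E[j]
    return acc / N
```

As emphasized above, the correlation is deliberately left unnormalized, so its variation with $v$ retains the energy-transfer-rate information.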
For single-point measurements, the general idea of diagnosing the
energy transfer at each point in phase space reduces to determining
the distribution of the energy transfer rate in velocity space,
producing a velocity-space signature characteristic of the physical
mechanism. Different collisionless mechanisms are likely to have
distinct velocity-space signatures of the energy transfer. We
illustrate this field-particle correlation analysis method for the
case of the Landau damping of Langmuir waves in a 1D-1V Vlasov-Poisson
plasma, but \emph{the concept of using field-particle correlations to
diagnose collisionless energy transfer is extremely general}. In
principle, this method can use single-point spacecraft measurements to
examine the energization of particles in any weakly collisional
heliospheric plasma.
\section{Numerical Results}
Using the Nonlinear Vlasov-Poisson Simulation Code \T{VP}
\citep{Howes:2016prep}, we apply field-particle correlations to
examine collisionless damping in three cases: (I) a moderately damped
standing Langmuir wave pattern with $k \lambda_{de}=0.5$; (II) a
weakly damped standing Langmuir wave pattern with $k
\lambda_{de}=0.25$; and (III) a moderately damped single propagating
Langmuir wave mode with $k \lambda_{de}=0.5$, where
$\lambda_{de}=\sqrt{k_BT_e/4 \pi n_e q^2}$ is the electron Debye
length. For cases I \& II, $\delta f_e(t=0)$ is a sine wave with
wavelength $k \lambda_{de}$; for case III, $\delta f_e(t=0)$ satisfies
the Langmuir wave linear dispersion relation.
The \T{VP} code evolves the nonlinear Vlasov-Poisson
system of equations for ion and electron species using second-order
centered finite differencing for spatial and velocity derivatives and
a third-order Adams-Bashforth scheme in time. Spatial boundary
conditions are periodic and a Green's function solution is used to
determine $\phi$. All cases have plasma parameters $T_i/T_e=1$ and
$m_i/m_e=100$ and numerical resolution $n_x=128$ and $n_v=256$ with a
simulation domain of length $L=2 \pi/k$. The cases with $k
\lambda_{de} = 0.5(0.25)$ have a resonant velocity of $\omega/k = 2.86
(4.4)v_{te}$ and a linear damping rate of $1.59\times 10^{-1}
(2.05\times 10^{-3})\omega_{pe}$.
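For readers wishing to reproduce the numerics, the two building blocks of the scheme, second-order centered differencing and third-order Adams-Bashforth time stepping, can be sketched as follows (a schematic re-implementation under our own conventions, not the actual \T{VP} code):

```python
import numpy as np

def ddx_centered(f, dx):
    """Second-order centered derivative on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def ab3_step(y, f_n, f_nm1, f_nm2, dt):
    """Third-order Adams-Bashforth step:
    y_{n+1} = y_n + dt/12 * (23 f_n - 16 f_{n-1} + 5 f_{n-2})."""
    return y + dt * (23.0 * f_n - 16.0 * f_nm1 + 5.0 * f_nm2) / 12.0
```

The multistep scheme needs the right-hand side at two earlier times, so in practice the first two steps must be bootstrapped with a self-starting method.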
\begin{figure}
\resizebox{3.4in}{!}{\includegraphics*[0.85in,0.75in][5.25in,6.05in]
{fig1.eps}
}
\caption{ \label{fig:Energy} Rate of energy transfer in velocity space
at $x=0$ between the fields and electrons for the cases I (a), II
(b), and III (c) as a function of velocity $v/v_{te}$ and time $t
\omega_{pe}$. Positive (negative) rates signify transfer
to (from) the particle distribution. Evolution of field energy $W_\phi$
(long-dashed gray), perturbed electron energy $\delta W_e$ (dashed
red), phase-space and time integrated energy transfer rate (dotted
blue), and the velocity and time integrated single-point energy
transfer rate (solid black) corresponding to each case (d)--(f). }
\end{figure}
In Fig.~\ref{fig:Energy}, we plot the instantaneous rate of change of
$w_e$ due to the nonlinear wave-particle interaction term, $-q_e v^2
\partial_v \delta f_e E/2$, at $x=0$ for the three cases
(a)--(c). Without calculating the correlation $C_1$ over an
appropriate time interval $\tau$, the largest rates of energy transfer
do not necessarily correspond to the resonant velocities, $v=\omega/k$
(dot-dashed black lines). The reason is that the larger amplitude
oscillating energy transfer of the Langmuir waves masks the smaller
amplitude secular energy transfer of the collisionless damping.
Also plotted in Fig.~\ref{fig:Energy} is the time evolution of the
electrostatic field energy $W_\phi=\int dx \ E^2/8\pi$ (long-dashed
gray) and the perturbed electron energy $\delta W_e=\int dx \int dv
\ m_e v^2 \delta f_e/2$ (dashed red), showing that $\partial \delta
W_e/\partial t \simeq -\partial W_\phi/\partial t$ because the Landau
damping of Langmuir waves transfers little of the electrostatic field
energy to the ions for $m_i/m_e=100$. Thus, we focus here strictly on
energy transferred to electrons. We also plot the nonlinear
wave-particle interaction term integrated over all phase-space and
time, $-\int_0^t dt'\int dx \int dv \ q_e v^2 (\partial \delta
f_e(x,v,t')/\partial v)E(x,t')/2 $ (dotted blue), demonstrating that
this term alone contains all of the net energy transfer to the
electrons. Finally, at the single-point $x=0$, we plot the
time-integrated transfer rate, $-\int_0^t dt'\int dv q_e v^2
\partial_v \delta f_e(0,v,t') E(0,t')/2$ (solid black), showing that
we obtain a significant net transfer of energy from the field to the
electrons for both moderately damped cases with $k \lambda_{de}=0.5$.
\begin{figure}
\resizebox{3.4in}{!}{\includegraphics*[0.85in,.7in][5.65in,3.5in]
{fig2.eps}
}
\caption{ \label{fig:corr} Field-particle
correlation~\eqref{eq:cfp_sum} at $x=0$ for case I with varying
correlation interval $\tau$ (colorbar) for (a) off-resonant and (b)
on-resonant velocities, along with the corresponding (c)
off-resonant and (d) on-resonant time-integrated energy transfer
rates $\int_0^t dt'C_1$.}
\end{figure}
To isolate the small-amplitude secular energy transfer in the presence
of a much larger amplitude oscillating energy transfer, we must select
an appropriate correlation interval $\tau$. In
Fig.~\ref{fig:corr}, we plot the correlation $C_1(v_0,t,\tau)$ from
Eq.~\eqref{eq:cfp_sum} for a range of correlation intervals
$0 \le \omega_{pe} \tau \le 12$ (colorbar) for case I both for (a) an
off-resonance velocity $v_0=1.25 v_{te}$ and (b) an on-resonance
velocity $v_0=2.85 v_{te}$. The $\tau=0$ curve (dark blue) corresponds
to a vertical slice along Fig.~\ref{fig:Energy}(a) at the selected
velocity $v_0$. As the correlation interval $\tau$
increases, the large amplitude signal of the oscillating energy
transfer is increasingly averaged out. For this case, the normalized
wave period is $T \omega_{pe}=4.39$, and we find that for
correlation intervals $\tau>T$, the large-amplitude oscillating
energy transfer rate is significantly reduced, revealing the smaller
amplitude secular energy transfer rate beneath. Integrating the
correlation in time, $\int_0^tdt'\ C_1(v_0,t',\tau)$, we find (c)
little net energy at the non-resonant velocity, and (d) significant
particle energization at the resonant velocity $v_0=2.85 v_{te}$.
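The role of the correlation interval can be illustrated with a toy signal (amplitudes here are made up for illustration; only the separation of scales matters): a large oscillatory transfer rate at the case-I period $T\omega_{pe}=4.39$ hides a much smaller constant secular rate, and a running average over $\tau > T$ recovers the latter.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 50.0, dt)
T = 4.39                                             # wave period of case I
signal = 10.0 * np.sin(2 * np.pi * t / T) + 0.05     # oscillatory + small secular rate

def running_mean(sig, tau, dt):
    """Average over a sliding window of length tau = N * dt."""
    N = int(round(tau / dt))
    return np.convolve(sig, np.ones(N) / N, mode='valid')

short = running_mean(signal, 0.5 * T, dt)  # tau < T: oscillation dominates
long_ = running_mean(signal, 3.0 * T, dt)  # tau = 3T > T: secular rate revealed
```

With $\tau$ an integer multiple of the period, the oscillatory part averages to essentially zero and only the secular rate of $0.05$ survives, mirroring the behavior seen in Fig.~\ref{fig:corr}.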
\begin{figure*}[ht]
\resizebox{7.1in}{!}{\includegraphics*[0.85in,0.7in][8.55in,4.6in]
{fig3.eps}
}
\caption{ \label{fig:fullCorr} Velocity space structure of the
field-particle correlation $C_1$ (top row) and $\int_0^t dt'C_1$
(bottom) as well as the velocity-integration of these quantities
(offsets) for case I, panels (a) and (d), II, (b) and (e), and III,
(c) and (f). The correlation interval $\tau \omega_{pe}$ is set to
$6.28$.}
\end{figure*}
In Fig.~\ref{fig:fullCorr}, we plot the key results of this Letter,
the field-particle correlations $C_1$ for $\tau \omega_{pe}=6.28$ as a
function of velocity and time for cases I--III, (a)--(c). With a
suitably long correlation interval $\tau>T$, the large
amplitude signal of the oscillating energy transfer, dominating
Fig.~\ref{fig:Energy}, is diminished, revealing the secular transfer
of energy. This velocity-space signature of the secular energy
transfer rate is concentrated around the resonant velocity for the
moderately damped cases, indicating a resonant process. Integrating
$C_1$ over velocity yields the net energy transfer rate at that point
in space (offset panels), equal to $jE$. This velocity integration
demonstrates a net transfer of energy to the particles, but loses all
velocity-space information that can be used to identify the nature of
the collisionless energy transfer mechanism. The weakly damped case
has a relatively insignificant energy transfer rate. In panels
(d)--(f), we plot the accumulated change in the electron phase-space
energy density, $\Delta w_e(x_0,v,t)=\int_0^tdt'\ C_1(v,t',\tau)$,
showing a loss of energy at $v < \omega /k$ and gain of energy at $v >
\omega /k$ for the moderately damped cases. This velocity-space
signature corresponds physically to a flattening of the distribution
function at the resonant velocity, consistent with the evolution of
the spatially averaged electron VDF predicted by quasilinear theory
\citep{Howes:2016prep}. The nearly-monotonic increase of $\int dv
dt' C_1$ for the moderately damped cases shows that $C_1$ serves as a
measure of collisionless damping rate and not merely the presence of
monochromatic waves.
\section{Application to Solar Wind Turbulence}
Proposed collisionless
damping mechanisms in the solar wind fall into three classes: (i)
coherent collisionless wave-particle interactions, such as Landau
damping, transit-time damping, or cyclotron damping
\citep{Landau:1946,Barnes:1966,Leamon:1998b,Quataert:1998,Leamon:1999,
Quataert:1999,Leamon:2000,Howes:2008b,Schekochihin:2009,TenBarge:2013a};
(ii) incoherent collisionless wave-particle interactions, primarily
leading to stochastic ion heating
\citep{Chen:2001,White:2002,Voitenko:2004,Bourouaine:2008,Chandran:2010a,Chandran:2010b,Chandran:2011,Bourouaine:2013};
and (iii) dissipation in coherent structures, specifically current
sheets, generally involving collisionless magnetic reconnection
\citep{Dmitruk:2004,Markovskii:2011,Matthaeus:2011,Osman:2011,Servidio:2011a,Osman:2012a,Osman:2012b,Wan:2012,Zhdankin:2013,Karimabadi:2013,Osman:2014a,Osman:2014b,Zhdankin:2015a}.
Under weakly collisional conditions, all of these mechanisms are
mediated by interactions between the electromagnetic fields and the
individual plasma particles, and therefore all will lead to a correlation
between the fields and particle VDFs. Each mechanism is likely to
generate a distinct velocity-space signature that can be diagnosed
using the general approach of field-particle correlations.
For the case of the damping of solar wind turbulence, the appropriate
form of the correlation will depend on the specific mechanism. For
example, ion transit-time damping
\citep{Barnes:1966,Quataert:1999}---the magnetic analogue of Landau
damping---will involve a correlation of the parallel perturbed
magnetic field $\delta B_\parallel$ and the ion parallel VDF $\delta
f_i(v_\parallel)$. In addition, the appropriate component of the field
may be difficult to measure in space, such as the parallel component
of the electric field, $E_\parallel$, that leads to Landau damping.
In this case, since the electromagnetic components are related by
Maxwell's equations, another field component may be used as a proxy
(since, at least in some instances, the fields have been shown to
satisfy linear eigenfunction relationships
\citep{Salem:2012,Howes:2012a,Klein:2012,Chen:2013a}). Although the
proxy correlation no longer corresponds directly to the transfer rate
of phase-space energy density, it may nonetheless indicate the order
of magnitude of the net energy transfer and its velocity-space
signature may reveal the resonant nature of the interaction.
The super-\Alfvenic flow of the solar wind is often
exploited to interpret the temporal fluctuations measured by the
spacecraft as the result of spatial fluctuations being swept past
the spacecraft by the solar wind flow, an approximation known as the
Taylor hypothesis \citep{Taylor:1938}. How does this solar wind flow
impact the field-particle correlation technique? The key step is to
perform the correlation over a suitably long correlation interval
$\tau$ in order to average out the generally larger-amplitude
oscillating energy transfer. Fundamentally, to average out the
oscillatory component, all that is necessary is that the
measurements span more than $2 \pi$ of the wave phase $\alpha$, a
function of time and position, $\alpha(x,t)=kx - \omega t$. If the
point of measurement is moving in space, $x_0(t)$, then the method
simply requires that the phase $\alpha(t)=k x_0(t)- \omega t$ span
more than $2 \pi$ over the correlation interval $\tau$, so the
technique is essentially insensitive to the solar wind flow.
The confirmation of this
assertion in a fully turbulent system is the focus of ongoing work.
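The phase-span condition can be made quantitative in a few lines (illustrative values only): for a probe advected at speed $V_{\rm sw}$, the phase $\alpha(t) = k V_{\rm sw} t - \omega t$ advances at the Doppler-shifted frequency $|\omega - k V_{\rm sw}|$, which sets the minimum correlation interval.

```python
import numpy as np

def min_correlation_interval(k, omega, V_sw):
    """Smallest tau for which alpha(t) = (k*V_sw - omega)*t spans 2*pi,
    i.e. tau = 2*pi / |k*V_sw - omega| (the Doppler-shifted period)."""
    return 2.0 * np.pi / abs(k * V_sw - omega)

omega, k = 2.0 * np.pi, 1.0           # a wave with period T = 1 in these units
tau_rest = min_correlation_interval(k, omega, 0.0)   # stationary probe: tau = T
tau_flow = min_correlation_interval(k, omega, 2.0)   # moving probe (illustrative V_sw)
```

A flow only rescales the required interval; it never prevents the phase from spanning $2\pi$, so the averaging procedure carries over unchanged.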
The broadband nature of turbulent fluctuations could potentially smear
out the velocity-space signature associated with damping at a
particular wavelength. Preliminary studies
\citep{Howes:2016prep} indicate that the narrow
range of length scales over which certain damping mechanisms operate may
alleviate this potential problem.
Finally, it may be impractical to
compute the velocity derivative of the perturbed distribution function
due to the noise and limited resolution of spacecraft measurements, so
we may use an alternative correlation
\begin{equation}
C_2(x_0,v,t_i, \tau)= \frac{1}{N}\sum_{j=i}^{i+N} q_sv \delta f_{sj}(v)
E_j.
\label{eq:cfp2_sum}
\end{equation}
Note that this correlation is related to $C_1$ by an integration by
parts in velocity, so the velocity-integrated energy transfer rate is
identical to that of $C_1$ (see offset panels in Fig.~\ref{fig:C2}).
Both $C_2(v,t,\tau)$ and the time-integrated correlation $\int_0^t
C_2(v,t', \tau) dt'$ with $\tau \omega_{pe}=6.28$ for case I are
plotted in Fig.~\ref{fig:C2}, yielding a velocity-space signature that
indeed indicates a resonant process.
\begin{figure}
\begin{center}
\resizebox{2.9in}{!}{\includegraphics*[0.85in,0.7in][3.5in,4.15in]
{fig4.eps}
}
\caption{ \label{fig:C2} Velocity space structure of $C_2$, top
panel, and $\int_0^t dt' C_2$, bottom, for case I, which may serve as
an alternative observable to $C_1$. Integration over velocity of
$C_1$, black line in offset, and $C_2$, red line, are in agreement.}
\end{center}
\end{figure}
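The equivalence of the velocity-integrated rates from $C_1$ and $C_2$ rests on the boundary term $q_s v^2 \delta f_s E/2$ vanishing as $\delta f_s \to 0$ at large $|v|$. A quick numerical check (with an arbitrary Gaussian $\delta f$ and an illustrative field value of our own choosing) makes this concrete:

```python
import numpy as np

v = np.linspace(-6.0, 6.0, 401)
df = np.exp(-(v - 1.0) ** 2)     # arbitrary delta f_s, vanishing at the v-boundaries
E, q = 0.3, 1.0                  # illustrative single-point field value and charge

c1 = -q * v**2 / 2.0 * np.gradient(df, v) * E   # C_1 integrand (with d(delta f)/dv)
c2 = q * v * df * E                             # C_2 integrand (no v-derivative)

I1, I2 = np.trapz(c1, v), np.trapz(c2, v)       # velocity-integrated transfer rates
```

The two integrals agree to within the discretization error of the numerical velocity derivative, while the integrands themselves, and hence the velocity-space signatures, differ.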
\section{Conclusion}
Here we present a novel field-particle correlation technique that
requires only single-point measurements of the electromagnetic fields
and particle VDFs to return an estimate of the net rate of energy
transfer between fields and particles. Furthermore, this innovative
method yields valuable information about the distribution of this
energy transfer in velocity space, providing a vital new means to
identify the dominant collisionless mechanisms governing the damping
of the turbulent fluctuations beyond that provided by measurements of
velocity-integrated quantities such as $\V{j} \cdot \V{E}$.
This field-particle correlation technique fully exploits the vast
treasure of information contained in the \emph{fluctuations} of the
particle VDFs, potentially enabling new discoveries using single-point
spacecraft measurements. We believe this very general technique of
field-particle correlations will transform our ability to maximize the
scientific return from current, upcoming, and proposed spacecraft
missions, including the \emph{Magnetospheric Multiscale}
(\emph{MMS})\citep{Burch:2016}, \emph{Solar Probe Plus}
\citep{Fox:2015}, \emph{Turbulent Heating ObserveR} (\emph{THOR}), and
\emph{ElectroDynamics and Dissipation Interplanetary Explorer}
(\emph{EDDIE}) missions. Further testing and refinement of this
technique will characterize its sensitivity to the noise, limited
velocity-space resolution, and limited cadence of spacecraft
measurements, as well as its ability to extract a meaningful
velocity-space signature of the collisionless damping mechanism in the
presence of the broadband spectrum of fluctuations that is
characteristic of a turbulent plasma.
This work was supported by NSF AGS-1331355, NSF CAREER Award
AGS-1054061, and DOE DE-SC0014599.
\bibliographystyle{apj}
\section{Introduction}
\label{sec:introduction}
\subsection{Neutrino astronomy}
Ultra-high-energy (UHE) neutrino astronomy at energies above
$10^{17}$~eV is based on new, very efficient methods of neutrino
detection and on exciting theories for neutrino production. The most
interesting range of this astronomy covers tremendously high energies
above $10^{19} - 10^{20}$~eV. In fact, this energy scale gives only
the low-energy threshold, where the new observational methods, such as
space-based observations of fluorescent light and radio and acoustic methods,
start to operate. These methods allow observation of very large areas
and so detection of tiny fluxes of neutrinos. For example the exposure
of the space detector JEM-EUSO \cite{jemeuso} is planned to reach
$\sim 10^6$~km$^2$yr~sr. The upper limits obtained by radio
observations are presented in Fig.~\ref{fig:upper-limits}.
The basic idea of detection by EUSO is similar to the fluorescence
technique for observations of extensive air showers (EAS) from the
surface of the Earth. The UHE neutrino entering the Earth's atmosphere
produces an EAS. A known fraction of its energy, which reaches 90\%,
is radiated in the form of isotropic fluorescent light, which can be
detected by an optical telescope in space. There is little absorption
of up-going photons, so the fraction of flux detected is known, and
thus EUSO provides a calorimetric measurement of the primary
energy. In the JEM-EUSO project \cite{jemeuso} a telescope with
diameter 2.5 m will observe an area $\sim 10^5$~km$^2$ and will have a
threshold for EAS detection $E_{\rm th}\sim 1\times 10^{19}$~eV. The
observations are planned to start in 2012--2013.
UHE neutrinos may also be very efficiently detected by observations of
radio emission by neutrino-induced showers in ice or lunar regolith.
This method was originally suggested by G.~Askaryan in the 1960s
\cite{askarian}. Propagating in matter the shower acquires excess
negative electric charge due to scattering of the matter
electrons. The coherent Cerenkov radiation of these electrons produces
a radio pulse. Recently this method has been confirmed by laboratory
measurements \cite{saltz}. Experiments have searched for such
radiation from neutrino-induced showers in the Greenland and Antarctic
ice and in the lunar regolith. In all cases the radio emission can be
observed only for neutrinos of extremely high energies. Upper
limits on the flux of these neutrinos have been obtained in the GLUE
experiment \cite{glue} by radiation from the moon, in the FORTE
experiment \cite{forte} by radiation from the Greenland ice, and in the
ANITA \cite{anita} and RICE \cite{rice} experiments from the Antarctic
ice.
Probably the first proposal for detection of UHE neutrinos with
energies higher than $10^{17}$~eV was made in \cite{BS}. It was
proposed there to use horizontal EAS for
neutrino detection. Later this idea was transformed into the
Earth-skimming effect \cite{Fargion} for $\tau$ neutrinos. Recently
the Auger detector \cite{tau-auger} put an upper limit on UHE
neutrino flux using the Earth-skimming effect (see
Fig.~\ref{fig:upper-limits}).
\subsection{UHE neutrino sources}
What might these new large-area UHE neutrino observatories detect? On
the one hand, there are without doubt {\em cosmogenic neutrinos},
produced by UHECR particles interacting with the CMB photons. On the
other hand, there may be neutrinos produced in decays or annihilation
of superheavy particles; this is referred to as the {\em top-down
scenario}.
Cosmogenic neutrinos were first discussed in \cite{BZ}, soon after the
prediction of the GZK cutoff \cite{GZK}. There, it was shown that UHE
neutrino fluxes much higher than the observed UHECR flux can be
produced by protons interacting with CMB photons at large redshifts.
The predicted flux depends on the cosmological evolution of the
sources of UHE protons and on the assumed acceleration mechanisms.
Recent calculations of cosmogenic neutrino fluxes (see
e.g. \cite{Kal}--\cite{Sato}) are normalized to the observed UHECR
flux, with different assumptions about the sources.
The energies of cosmogenic neutrinos are limited by the maximum energy
of acceleration, $E_{\rm acc}^{\rm max}$. To provide neutrinos with
energies above $1\times 10^{20}$~eV, the energies of accelerated
protons must exceed $2\times 10^{21}$~eV. For non-relativistic shocks,
the maximum energy of acceleration $E_p^{\rm max}$ can optimistically
reach $1\times10^{21}$~eV. For relativistic shocks this
energy can be somewhat higher. Production of cosmogenic neutrinos
with still higher energies depends on less-developed ideas, such as
acceleration in strong electromagnetic waves, exotic plasma mechanisms of
acceleration and unipolar induction.
The top-down scenarios, on the other hand, naturally provide neutrinos
with energies higher and much higher than $1\times 10^{20}$~eV
\cite{HSW}. The mechanism common to many models assumes the
existence of superheavy particles with very large masses up to the GUT
scale $\sim 10^{16}$~GeV. Such particles can be produced by
Topological Defects (TD) (see \cite{CosmStrBookVilenkin} for a general review).
They then rapidly decay and produce a parton cascade, which is
terminated by production of pions and other hadrons. Neutrinos are
produced in hadron decays.
The production of unstable superheavy particles --- the constituent
fields of TD --- is a very common feature of the TD \cite{BS98}.
However, the dynamics of TD is highly nonlinear and complicated, the
distance between TDs is model-dependent, and the calculation of UHE
particle fluxes requires special consideration for different types of
TD \cite{BBV}.
{\em Cosmic strings} can release particles in the process of
self-interaction, and in the final evaporation of tiny loops, but only
a few particles are produced by each such interaction. Of more
interest are cosmic string {\em cusps}, where the string doubles back
on itself and moves with a huge Lorentz factor \cite{McGibbon}.
Particles emitted by cusps have energies much higher than their rest
masses, because of the boost. However, the flux from such events is
too low to be observed \cite{OBP1, OBP2}.
{\em Monopole-antimonopole pairs connected by strings}
\cite{Hill,Bh,OBP3} can release superheavy particles when the monopole
and antimonopole finally annihilate. However, such defects, similar
to superheavy dark matter (see below), would be accumulated inside
galaxies, and in particular in the Milky Way. The resulting UHECR
flux would be dominated by photons, which can reach us easily from
short distances. Such photons are not observed
\cite{gamma-Auger} at the level that would be necessary if top-down
production were to account for the observed UHECR.
If each monopole is attached to two strings, we have {\em necklaces}.
Necklaces are an attractive source for UHE neutrinos \cite{BV,ABK},
but simple models of necklaces may lead to rapid annihilation of the
monopoles \cite{Olum}. In other models, however, the monopoles may
survive for much longer, providing a detectable flux of UHE
neutrinos.\footnote{The main point of Ref.~\cite{Olum} is that the
relativistic motion of strings causes monopoles to develop large
velocities along the string. As a result monopoles frequently run
into one another and annihilate. A possible way to avoid this is to
consider light strings, which remain overdamped till very late times
and therefore move slowly. Another possibility is that the strings
have zero modes, which act as a one-dimensional gas on the strings and
slow the monopoles down. These models need further investigation.}
In a wide class of particle physics models, cosmic strings can be {\it
superconducting}, in which case they respond to external
electromagnetic fields as thin superconducting wires \cite{Witten}.
String superconductivity arises when a condensate of charged particles
(which can be either bosons or fermions) is bound to the string.
These particles have zero mass in the bound state, whereas away from
the string they have some mass $m_X$. Loops of superconducting string
develop electric currents as they oscillate in cosmic magnetic fields.
Near a cusp, a section of string acquires a large Lorentz boost
$\gamma_c$, and simultaneously the string current is increased by a
factor $\gamma_c$. If the current grows to a critical value $J_{max}$
charge carriers rapidly scatter off each other and are ejected from
the string.
The decay products of these particles can then be observed as
cosmic rays. This model will be the subject of the present paper.
Apart from TDs, superheavy particles can naturally be produced by
thermal processes \cite{BKV,KR} and by time-varying gravitational
fields \cite{Kolb-grav,Kuz-grav} shortly after the end of inflation.
These particles can survive until present and produce neutrinos in
their decays. Protected by symmetry (e.g. discrete gauge symmetry, in
particular R-parity in supersymmetric theories), these particles can
have very long lifetimes exceeding the age of the universe. The
resulting neutrino flux may exceed the observed flux of UHECR.
However, like any other form of CDM, superheavy particles accumulate
in the Milky Way halo and produce a large flux of UHE photons. The
non-observation of these photons puts an upper limit on the neutrino
flux from intergalactic space.
\subsection{The cascade bound}
The neutrino fluxes are limited from above. The most general upper
bound for UHE neutrinos, valid for both cosmogenic neutrinos and
neutrinos from top-down models, is given by the {\em cascade upper
limit}, first considered in \cite{BS,book}. The production of
neutrinos in these scenarios is accompanied by production of high
energy photons and electrons. Colliding with low-energy target
photons, a primary photon or electron produces an electromagnetic
cascade due to the reactions $\gamma+\gamma_{\rm target} \to e^++e^-$,
$e+\gamma_{\rm target} \to e'+\gamma'$, etc. The cascade spectrum is
very close to the EGRET observations in the range 3~MeV -- 100~GeV
\cite{EGRET}. The observed energy density in this range is
$\omega_{\rm EGRET} \approx (2 - 3)\times 10^{-6}$~eV/cm$^3$. To be
conservative, we will use the lower end of this range. It
provides the upper limit for the cascade energy density. The upper
limit on UHE neutrino flux $J_{\nu}(>E)$ (sum of all flavors) is given
by the following chain of inequalities \begin{equation} \omega_{\rm
cas}>\frac{4\pi}{c}\int_E^{\infty}E'J_{\nu}(E')dE'>
\frac{4\pi}{c}E\int_E^{\infty}J_{\nu}(E')dE'\equiv
\frac{4\pi}{c}EJ_{\nu}(>E)\,.
\label{eq:int-limit}
\end{equation}
Here $c$ is the speed of light; below we will generally work in units
where $c = 1$ and $\hbar = 1$. In terms of the differential neutrino
spectrum $J_{\nu}(E)$, Eq.\ (\ref{eq:int-limit}) gives
\begin{equation}
E^2 J_{\nu}(E) < \frac{c}{4\pi}\omega_{\rm cas },~~ {\rm with}~
\omega_{\rm cas} <\omega_{\rm EGRET}
\label{cas-rig}
\end{equation}
\begin{figure*}[t]
\begin{center}
\mbox{\includegraphics[width=12cm,height=7cm]{uhenu-limits1.eps}}
\end{center}
\caption{
The experimental upper limits on UHE neutrino fluxes
in comparison with the electromagnetic cascade upper limit in assumption of
$E^{-2}$ generation spectrum (labeled ``$E^{-2}$ cascade'') and
with predictions for cosmogenic neutrinos.
Neutrino fluxes are given for one neutrino flavor $\nu_i+\bar{\nu}_i$.
}
\label{fig:upper-limits}
\end{figure*}
Eq.~(\ref{cas-rig}) gives a {\em rigorous} upper limit on the neutrino flux.
It is valid for neutrinos produced by HE protons, by topological defects, by
annihilation and decays of superheavy particles, i.~e., in all cases
when neutrinos are produced through decay of pions and kaons. It holds
for an arbitrary neutrino spectrum decreasing with energy. If one assumes
some specific shape of neutrino spectrum, the cascade limit becomes
stronger. For a generation spectrum proportional to $E^{-2}$, which is
used for analysis of observational data, one obtains a stronger upper
limit. Given for one neutrino flavor it reads \cite{pylos}
\begin{equation}
E^2J_i(E) \leq \frac{1}{3} \frac{c}{4\pi}\frac{\omega_{\rm cas}}
{\ln (E_{\rm max}/E_{\rm min})},
\label{cas-E2}
\end{equation}
where $E_{\rm max}$ and $E_{\rm min}$ give the range of neutrino
energies to which the $E^{-2}$ spectrum extends, and
$i=\nu_{\mu}+\bar{\nu}_{\mu}$, or $i=\nu_e+\bar{\nu}_e$, or
$i=\nu_{\tau}+\bar{\nu}_{\tau}$. This upper limit is shown in Fig.~\ref{fig:upper-limits}.
One can see that the observations almost reach the cascade upper
limit and thus almost enter the region of allowed fluxes.
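These bounds are straightforward to evaluate numerically. The sketch below (the only assumed inputs are $\omega_{\rm cas}=2\times 10^{-6}$~eV/cm$^3$, the lower end of the EGRET range, and an illustrative $\ln(E_{\rm max}/E_{\rm min})\sim 30$, the value adopted later in the text) reproduces the rigorous bound (\ref{cas-rig}) and the per-flavor $E^{-2}$ bound (\ref{cas-E2}):

```python
# Numerical sketch of the cascade upper limits, Eqs. (cas-rig) and (cas-E2).
# Assumed inputs: omega_cas = 2e-6 eV/cm^3 (lower end of the EGRET range)
# and an illustrative ln(E_max/E_min) ~ 30.
import math

c = 3e10                      # speed of light, cm/s
omega_cas = 2e-6 * 1e-9       # cascade energy density, GeV/cm^3

# Rigorous bound: E^2 J_nu(E) < (c/4pi) * omega_cas
bound_rigorous = c / (4 * math.pi) * omega_cas     # GeV cm^-2 s^-1 sr^-1

# Per-flavor bound for an E^-2 generation spectrum
ln_ratio = 30.0
bound_E2 = bound_rigorous / (3 * ln_ratio)

print(f"rigorous: {bound_rigorous:.2e}, E^-2 per flavor: {bound_E2:.2e}")
```

The rigorous bound comes out at a few $\times 10^{-6}$~GeV\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$, consistent with the ``$E^{-2}$ cascade'' curve of Fig.~\ref{fig:upper-limits} after the factor $1/(3\ln)$ is applied.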
The most interesting energy range in Fig.~\ref{fig:upper-limits}
corresponds to
$E_{\nu} > 10^{21}$~eV, where acceleration cannot provide protons
with sufficient energy for production of these neutrinos.
At present the region of $E_{\nu} > 10^{21}$~eV, and especially
$E_{\nu} \gg 10^{21}$~eV is considered as a signature of
top-down models, which provide these energies quite naturally.
\subsection{Model assumptions}
In this paper we consider superconducting string loops as a source of
UHE neutrinos. We consider a simple model in which a magnetic
field of magnitude $B$, occupying a fraction of space $f_B$, is
generated at some epoch $z_{max}\sim$ 2--3.
The strings are characterized by two parameters: the fundamental
symmetry breaking scale $\eta$ and the critical current $J_{max}$. We
take the mass per unit length of string to be $\mu = \eta^2$.
The predicted flux of UHE neutrinos depends on the typical length
of loops produced by the string network. This issue has been a
subject of much recent debate, with different simulations
\cite{Shellard,Maria,OVV,OV} and analytic studies \cite{Polchinski,Vitaly}
yielding different answers. Here we shall adopt the picture suggested
by the largest and, in our view, the most accurate simulations of
string evolution performed to date
\cite{OVV,OV}. According to this picture, the characteristic length of
loops formed at cosmic time $t$ is given by the scaling relation
\begin{equation}
l\sim \alpha t,
\label{alpha}
\end{equation}
with $\alpha\sim 0.1$.
For simplicity and transparency of the formulae obtained in this paper
we use several simplifications. We assume a cosmology without a $\Lambda$
term, with $\Omega_{cdm} + \Omega_b=1$,
the age of the universe $t_0 = (2/3)H_0^{-1} = 3\times 10^{17}$~s,
$t_{eq} \sim 1\times 10^{12}$~s,~ and $(1+z)^{3/2}=t_0/t$
for the connection of age $t$ and redshift $z$ in the matter era.
We also assume the fragmentation function for the decay of
superheavy $X$ particle into hadrons is
\begin{equation}\label{E-2}
dN/dE \propto E^{-2},
\end{equation}
while Monte Carlo simulations and the DGLAP method give a spectrum
closer to $E^{-1.92}$ \cite{DGLAP}.
These simplifications give us a great advantage in
understanding the dependence of calculated physical quantities on the
basic parameters of our model, in particular on fundamental string
parameter $\eta$.
Our aim in this paper is to obtain the order of magnitude of the flux
of UHE neutrinos and to indicate the signatures of the model. We
believe our simplified model assumptions are justified, given the
uncertainties of string evolution and of the evolution of cosmic
magnetic fields.
\section{Particle emission from superconducting strings}
\subsection{Particle bursts from cusps}
As first shown by Witten \cite{Witten}, cosmic strings are
superconducting in many elementary-particle models. As they
oscillate in cosmic magnetic fields, such strings develop electric
currents.
Assuming that the
string loop size is smaller than the coherence length of the field
$l\alt l_{B} \sim 1$~Mpc, the electric current can be estimated as
\cite{Witten,CosmStrBookVilenkin}
\begin{equation}
J \sim 0.1 e^2 B l .
\label{CCurrent}
\end{equation}
Particles are ejected from highly accelerated parts of superconducting
strings, called cusps, where large electric currents can be induced
\cite{Spergel1,Babul}. The current near a cusp region is boosted as
\begin{equation}
J_{cusp} \sim \gamma_c J,
\end{equation}
where $J$ is the current away from the cusp region and $\gamma_c$ is the
Lorentz factor of the corresponding string segment. Particles are
ejected from portions of the string that develop Lorentz factors
\begin{equation}
\gamma_c \sim J_{max} \slash J,
\end{equation}
where the current reaches the critical value $J_{max}$. This maximum
current is model-dependent, but is bounded by $J_{max}\alt e\eta$,
where $\eta$ is the symmetry breaking scale of the string and $e\sim
0.1$ is the elementary electric charge in Gaussian units, renormalized
to take into account self-inductance \cite{CosmStrBookVilenkin}.
One may parametrize $J_{max}$ by introducing the parameter $i_c < 1$:
\begin{equation}
J_{max}=i_c e\eta .
\label{Jmaxeta}
\end{equation}
If the charge carrier is a superheavy particle $X$ with mass $m_X$
(the case considered here), one may use $\epsilon_X^r$
for the energy of $X$-particle in the rest system of the cusp and
$\epsilon_X$ in the laboratory system. Then $\epsilon_X^r=\gamma m_X= i_c
\eta$ and
\begin{equation}
\epsilon_X \sim i_c \gamma_c \eta,
\label{E_X}
\end{equation}
respectively, where $\gamma$ is the average Lorentz factor of
X-particle in the rest system of the cusp. In Eq.~(\ref{E_X}) we took
into account that the energy of $X$-particle in the laboratory
system is boosted by the Lorentz factor of the cusp $\gamma_c$.
The number of $X$ particles per unit invariant length of the string is
$\sim J\slash e$, and the segment that develops Lorentz factor
$\gamma_c$ includes a fraction $1\slash \gamma_c$ of the total
invariant length $l$ of the loop. Hence, the number of $X$ particles
ejected in one cusp event (burst) is
\begin{equation}
N_{X}^b \sim (J \slash e) (l\slash \gamma_c) \sim J^2 l/eJ_{max}\; .
\label{NX}
\end{equation}
The oscillation period of the loop is $l/2$, so assuming one
cusp per oscillation, the average number of $X$ particles emitted per unit
time is
\begin{equation}
\dot{N}_X \sim 2J^2/eJ_{max},
\label{dotN_X}
\end{equation}
and the luminosity of the loop is
\begin{equation}
L_{tot} \sim \dot{N}_X \epsilon_X .
\label{Ltot}
\end{equation}
The $X$ particles are short-lived.
They decay producing the parton cascade which is developed due
to parton splitting in the perturbative regime, until at the
confinement radius the partons are converted into hadrons, mostly pions
and kaons, which then decay producing gamma rays, neutrinos, and electrons.
These particles together with less numerous nucleons
give the observational signatures of superconducting cusps.
The neutrino spectrum at present epoch $z=0$, produced by the decay
of one X-particle with energy $\epsilon_X \sim i_c \gamma_c \eta$ at
epoch $z$ can be calculated using the fragmentation function
(\ref{E-2}) for an X-particle at rest:
\begin{equation}
\label{Fragmentation}
\xi_{\nu}(E) \approx \frac{i_{c} \eta \gamma_{c}}{2 (1+z) \ln
(E_{max}^{rest}/E_{min}^{rest})} \frac{1}{E^{2}} ,
\end{equation}
where
$E_{max}^{rest}$ and $E_{min}^{rest}$ are the maximum and minimum
neutrino energies in the rest system of X-particle.
Particle emission from a cusp occurs within a narrow cone
of opening angle
\begin{equation}
\label{OpeningAngle}
\theta_c \sim \gamma_c^{-1} \sim J \slash J_{max}
\end{equation}
The duration of a cusp event is \cite{Babul}
\begin{equation}
t_{burst} \sim l \gamma_c^{-3}
\end{equation}
\subsection{Superconducting loops in the universe}
In any horizon-size volume of the universe at arbitrary time there are a
few long strings crossing the volume and a large number of small
closed loops. As loops oscillate under the force of string tension, they
lose energy by emitting gravitational waves at the rate
\begin{equation}
{\dot E}_g\sim \Gamma G\mu^2,
\label{grav-loss}
\end{equation}
where $\mu \sim \eta^2$ is the string mass per unit length,
$G=1/m_{Pl}^2$ is the gravitational constant and
$\Gamma \sim 50$ is a numerical coefficient.
The number density of loops with lengths in the interval from $l$ to
$l+dl$ at time $t$ can be expressed as $n(l,t)dl$. Of greatest
interest to us are the
loops that formed during the radiation era $t < t_{eq}$ and still survive at
$t>t_{eq}$. The density
of such loops at time $t$ is given by \cite{CosmStrBookVilenkin}
\begin{equation}
n(l,t)dl \sim t_{eq}^{1\slash 2} t^{-2} l^{-5\slash2}dl,
\label{LoopDensity}
\end{equation}
in the range from the minimum length $l_{min}$ to the maximum
length $l \sim \alpha t_{eq}$, where
\begin{equation}
l_{min} \sim \Gamma G \mu t \sim 3\times 10^{11} \eta_{10}^2 (1+z)^{-3/2}
{\rm cm}
\label{lmin}
\end{equation}
and $\eta_{10} = \eta/10^{10}$~GeV. Here and below we assume that the
loop length parameter in (\ref{alpha}) is $\alpha \sim 0.1$, as
suggested by simulations \cite{OVV,OV}.
Loops of the minimum length are of most importance in our calculations
because they are the most numerous.
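As a cross-check of the numerical coefficient in Eq.~(\ref{lmin}), the sketch below evaluates $\Gamma G\mu t$ at $z=0$ for $\eta_{10}=1$ (assumed constants, as in the text: $\Gamma\sim 50$, $m_{Pl}\simeq 1.22\times 10^{19}$~GeV, $t_0 = 3\times 10^{17}$~s):

```python
# Order-of-magnitude check of Eq. (lmin): l_min ~ Gamma * G*mu * t.
# Assumed constants: Gamma ~ 50, m_Pl ~ 1.22e19 GeV, t0 = 3e17 s.
Gamma = 50.0
m_pl = 1.22e19        # Planck mass, GeV
eta = 1e10            # symmetry breaking scale, GeV (eta_10 = 1)
t0 = 3e17             # age of the universe, s
c = 3e10              # cm/s

G_mu = (eta / m_pl)**2            # dimensionless G*mu for mu = eta^2
l_min = Gamma * G_mu * t0 * c     # cm, evaluated at z = 0
print(f"l_min ~ {l_min:.1e} cm")  # ~3e11 cm, as in Eq. (lmin)
```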
For a loop of length $l$ at redshift $z$, the Lorentz factor at the cusp
$\gamma_{c}$ can be expressed as
\begin{equation}
\gamma_{c} = \frac{J_{max}}{J} = \frac{i_{c} e \eta}{0.1 e^2 B l} =
\gamma_{c} (l_{min}) \frac{l_{min}}{l}
\end{equation}
where $\gamma_{c} (l_{min}) = \gamma_{0} (1+z)^{3/2}$ and
\begin{equation}
\gamma_{0} = \frac{10 i_{c} \eta}{e B t_{0} \Gamma G \mu}
= \frac{1.1\times 10^{12} i_c}{B_{-6}\eta_{10}}
\end{equation}
with $B_{-6}$ the magnetic field in microgauss.
\subsection{Limits on $\eta$}
The string motion is overdamped at early cosmic times, as a result of
friction due to particle scattering on moving strings. The
friction-dominated epoch ends at
\begin{equation}
t_*\sim (G\mu)^{-2}t_p,
\end{equation}
where $t_p$ is the Planck time. In the above analysis we have assumed
that loops of interest to us are formed at $t>t_*$. The corresponding
condition,
\begin{equation}
\Gamma G\mu t_0/\alpha \agt t_* ,
\end{equation}
yields
\begin{equation}
\eta\agt 10^9 ~{\rm GeV}.
\label{eta*}
\end{equation}
For strings with $\eta<10^9$~GeV, loops of the size given by
(\ref{lmin}) never form. Instead, the smallest loops are those that
form at time $t_*$ with length
\begin{equation}\label{lminfriction}
l_{min}\sim \alpha t_*\,,
\end{equation}
and then survive until the present day.
We should also verify that energy losses due to particle emission and to
electromagnetic radiation in recent epochs (after magnetic fields have
been generated) are sufficiently small, so the lifetimes of the loops
(which we estimated assuming that gravitational radiation is the
dominant energy loss mechanism) are not significantly modified.
The average rate of energy loss due to particle emission is
\begin{equation}
{\dot E}_{part}\sim f_B \dot{N}_X \epsilon_X \sim 2 f_B JJ_{max}/e^2\,
\label{Epart}
\end{equation}
where we have
used Eqs.~(\ref{dotN_X}) and (\ref{E_X}). The electromagnetic radiation
power is smaller by a factor $e^2\sim 10^{-2}$.
The factor $f_B$ in Eq.~(\ref{Epart}) is the filling factor -- the
fraction of space filled with the magnetic field. It gives the
fraction of time that cosmic string loops spend in magnetized regions.
We assume that loop velocities are sufficiently high that they do not
get captured in magnetized cosmic structures (such as galaxy clusters
or LSS filaments). To justify this assumption, we note that particle
emission can start only after the cosmic magnetic fields are
generated, that is, at $z\sim 3$ or so. Before that, gravitational
radiation is the dominant energy loss mechanism, and the loops are
accelerated to high speeds by the gravitational rocket effect
\cite{Vachaspati,Hogan}. The smallest loops of length (\ref{lmin})
have velocities $v\sim 0.1$, certainly large enough to avoid capture.
The particle emission energy rate (\ref{Epart}) should be compared to
the gravitational radiation rate (\ref{grav-loss}).
The ratio of the two rates is zero at $z > z_{max}$, where $z_{max} \sim$
2--3 is the red-shift of magnetic field production. At $z < z_{max}$
it is given by
\begin{equation}
{\dot E}_{part}/{\dot E}_g \sim 50 f_{-3}B_{-6} i_c \eta_{10}^{-1}
\left(\frac{l}{l_{min}}\right)(1+z)^{-3/2}.
\label{EpEg}
\end{equation}
where $f_{-3} = f_B/10^{-3}$ and $l_{min}$ is given by (\ref{lmin}).
If particle emission is the dominant energy loss mechanism, then the
lifetime of a loop is
\begin{equation}
\tau_{part}\sim \frac{\mu l}{{\dot E}_{part}} \sim \frac{5\eta}{ei_c f_B B}
\sim 0.025 \frac{t_0\eta_{10}}{f_{-3}B_{-6}i_c} .
\label{taupart}
\end{equation}
Note that $\tau_{part}$ is independent of $l$. This means that all loops
surviving from the radiation era decay at about the same time.
For the time being, we shall assume that particle radiation is
subdominant. We shall discuss the opposite regime in Section II.G.
\subsection{Rate of cusp events}
The rate of observable cusp bursts (i.e., the bursts
whose spot hits the Earth) is given by
\begin{equation}
d\dot{N_{b}} = f_{B} \frac{d\Omega} {4\pi} \nu(l, z) dl \frac{dV(z)}{1+z}
\label{BurstRate}
\end{equation}
where, as before, $f_{B}$ is the fraction of space with magnetic field
$B$, $d\Omega = 2\pi \theta d\theta$ is the solid angle element,
with $\theta$ limited by the angle of cusp emission $\theta_c \sim
1/\gamma_{c}$; $\nu(l, z) = n(l, z)/(l/2)$ is the frequency of the
bursts with $n(l,z)$ given by Eq.~(\ref{LoopDensity}), and
$dV(z)$ is the proper volume of space between redshifts $z$ and $z+dz$,
\begin{equation} \label{ProperVolume}
dV(z) = 54 \pi t_{0}^{3} [(1+z)^{1/2}-1]^{2} (1+z)^{-11/2} dz .
\end{equation}
Integrating Eq.~(\ref{BurstRate}) over $\theta$, $l$ and $z$,
we obtain
\begin{equation}
\dot{N_{b}} = 54 \pi (t_{eq} t_{0})^{1/2} (\Gamma G \mu)^{-1/2}
(e/10i_c \eta)^2 \int_{0}^{z_{max}} dz \frac{[(1+z)^{1/2} -
1]^{2}}{(1+z)^{11/4}} f_B(z) B^2(z) ,
\label{NBf}
\end{equation}
where $z_{max}$ is the redshift at which the magnetic fields are
generated. Since the Earth is opaque to neutrinos with
the energies we are considering,
only half of these bursts can actually be detected by any given
detector at the surface of the Earth or using the atmosphere.
The value of the integral in (\ref{NBf}) depends on one's assumptions
about the evolution of the magnetic field $B$ and of the volume
fraction $f_B$. This evolution is not well understood.
If we take these values out of the integral in Eq.~(\ref{NBf})
as the average and characterize them by the effective values of parameters
$B_{-6}$ and $f_{-3}$ in the range $0<z<z_{max}$,
then Eq.~(\ref{NBf}) reduces to
\begin{equation} \label{NBurst}
\dot{N_{b}} = 2.7 \times 10^{2} \frac{B_{-6}^2 f_{-3}}{i_{c}^2 \eta_{10}^{3}}
\frac{I}{0.066}\,\,{\rm yr}^{-1} ,
\end{equation}
where the integral
\begin{equation}
I = \int_{0}^{z^{\prime}} dz \frac{[(1+z)^{1/2} - 1]^{2}}{(1+z)^{11/4}}
= \frac{4}{3} [1 - (1+z^{\prime})^{-3/4}] - \frac{8}{5} [1 - (1+z^{\prime})^{-5/4}] +
\frac{4}{7}[1 - (1+z^{\prime})^{-7/4}],
\label{int}
\end{equation}
is equal to $0.015$, $0.042$ and $0.066$ for
$z^{\prime} = z_{max} = 1$, $2$ and $3$, respectively.
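The closed form (\ref{int}) and the quoted values of $I$ are easy to verify; the sketch below compares the analytic expression with a simple trapezoidal integration of the redshift integrand of Eq.~(\ref{NBf}):

```python
# Check of the closed form and the quoted values of the integral I, Eq. (int).
import math

def I_closed(zp):
    # analytic antiderivative evaluated between 0 and z'
    x = 1.0 + zp
    return (4/3 * (1 - x**-0.75)
            - 8/5 * (1 - x**-1.25)
            + 4/7 * (1 - x**-1.75))

def I_numeric(zp, n=100000):
    # trapezoidal integration of [(1+z)^{1/2} - 1]^2 (1+z)^{-11/4}
    h = zp / n
    f = lambda z: (math.sqrt(1 + z) - 1)**2 * (1 + z)**-2.75
    return h * (0.5 * f(0) + 0.5 * f(zp) + sum(f(i * h) for i in range(1, n)))

for zp in (1, 2, 3):
    print(zp, round(I_closed(zp), 3), round(I_numeric(zp), 3))
```

The two evaluations agree, reproducing $I = 0.015$, $0.042$ and $0.066$ for $z' = 1$, $2$ and $3$.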
The integrand in Eq.~(\ref{NBf}) includes the product $f_B(z)B^2(z)$.
In the calculations of other physical quantities below, similar
integrals will have different combinations of $f_B(z)$ and
$B(z)$. Nevertheless, we shall assume that the average values taken
out of the integral are characterized by approximately the same values
of $f_{-3}$ and $B_{-6}$.
All cosmic structures --- galaxies, clusters, and filaments of the
large-scale structure --- are magnetized and contribute to the rate of
cusp bursts. In the recent epoch, $z\alt 1$, the dominant
contribution is given by clusters of galaxies with $B_{-6}^{2} f_{-3}
\sim 1$. The magnetic fields of galaxies have about the same magnitude,
but the corresponding filling factor $f_B$ is orders of magnitude
smaller. We shall assume that this holds in the entire interval
$0<z<z_{max}$. The sources in our model are then essentially clusters
of galaxies.
\subsection{Diffuse flux of UHE neutrinos}
The diffuse differential neutrino flux, summed over all produced
neutrino flavors, is given by the formula
\begin{equation}
J_{\nu}(E) = \frac{1}{4\pi} \int d\dot{N_{b}} N_{X}^{b} \xi_{\nu} (E)
\frac{1}{\Omega_{jet} r^2(z)},
\end{equation}
where $d\dot{N_{b}}$
is the rate of cusp bursts (\ref{BurstRate}),
$N_{X}^{b}$ is the number of $X$ particles produced per burst, given
by Eq.~(\ref{NX}),
$\xi_\nu(E)$ is the neutrino spectrum produced by the decay of one
$X$-particle, given by (\ref{Fragmentation}),
\begin{equation}
\Omega_{jet} = \pi \theta_c^{2} = \frac{\pi}{\gamma_{c}^{2}},
\end{equation}
\begin{equation}
r(z) = 3 t_{0} [1 - (1+z)^{-1/2}]
\end{equation}
is the distance between a source at redshift $z$ and the
observation point at $z = 0$, and
$\Omega_{jet} r^{2}$ is the area of the burst spot at the Earth from
a source at redshift $z$.
Using expressions (\ref{LoopDensity}) and (\ref{ProperVolume}),
and assuming that the product $f_B(z) B(z)$ does not change much in
the interval $0<z<z_{max}$, we obtain\footnote{We note that
numerical simulations of the magnetic field evolution performed by Ryu
et al.~\cite{Ryu} do indicate that the space average of the magnetic
field $\langle B(z)\rangle = f_B(z) B(z)$ remains roughly constant at
$\sim 10^{-9}$~G for $0<z\alt 3$ and decreases at larger values of
$z$. The effective values $B_{-6}$ and $f_{-3}$ could be
different from those in Eq.~(\ref{NBurst}) for the rate of bursts,
but we neglect the possible difference. }
\begin{equation} \label{DiffuseFlux}
E^{2} J_{\nu}(E) = \frac{0.3 i_{c} m_{pl} (t_{eq}/t_{0})^{1/2}
(e B t_{0}^{2})f_B }{7\pi (\Gamma)^{1/2}\, t_{0} (c t_{0})^2\,
\ln (E_{max}^{rest}/E_{min}^{rest})} [1- (1+z_{max})^{-7/4}].
\end{equation}
Numerically, this gives for the neutrino flux summed over neutrino
flavors
\begin{equation} \label{NFlux}
E^{2} J_{\nu}(E) = 6.6 \times 10^{-8} i_{c} B_{-6} f_{-3}\,\,\,\,{\rm GeV\,cm^{-2}\,s^{-1}\,sr^{-1}},
\end{equation}
where we have set $z_{max} =3$ and estimated the logarithmic
factor as $\sim 30$.
For $i_c \sim 1$, the flux (\ref{NFlux}) is close to the cascade
upper limit shown in Figure \ref{fig:upper-limits}. Notice that the
diffuse neutrino flux (\ref{DiffuseFlux}) does not depend on
$\eta$. The neutrino flux must correlate with clusters of galaxies.
To detect this flux, we need to monitor a target with some large
mass M. The effective cross-section of the detector is then
\begin{equation}
\Sigma = \sigma_{\nu N} M/m_N
\end{equation}
where $\sigma_{\nu N} \sim 3\times 10^{-32}\,\,\, cm^{2}$ is the
neutrino-nucleon cross section at $E \agt 10^{10}$~GeV and $m_N$ the
mass of a nucleon. Because of the opacity of the Earth, the
detector will see a solid angle of about $2\pi$~sr. The detection rate
of particles with energy above E is
\begin{equation}
2 \pi E J_\nu(E) \Sigma \approx 23
\left(\frac{M}{10^{18}g}\right)\left(\frac{10^{10}~GeV}{E}\right) i_{c}
B_{-6} f_{-3}~\text{yr}^{-1}
\label{Ndotic}
\end{equation}
In the
case of JEM-EUSO in tilt mode, $M\sim 5\times10^{18} g$, and thus
we expect about $100 i_c$ detections per year, so events can be
expected for $i_c\agt 0.01$.
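The numbers in Eq.~(\ref{Ndotic}) and the JEM-EUSO estimate follow directly from the flux (\ref{NFlux}); the sketch below reproduces them (assumed inputs, as in the text: $\sigma_{\nu N} = 3\times 10^{-32}$~cm$^2$, $m_N = 1.67\times 10^{-24}$~g, one year $= 3.15\times 10^{7}$~s, and $i_c = B_{-6} = f_{-3} = 1$):

```python
# Rough check of the single-neutrino detection rate, Eq. (Ndotic),
# and of the "~100 i_c per year" estimate for JEM-EUSO in tilt mode.
import math

E = 1e10                      # GeV
E2J = 6.6e-8                  # E^2 J_nu, GeV cm^-2 s^-1 sr^-1 (Eq. NFlux)
sigma = 3e-32                 # neutrino-nucleon cross section, cm^2
m_N = 1.67e-24                # nucleon mass, g
year = 3.15e7                 # s

def rate_per_year(M_grams):
    Sigma = sigma * M_grams / m_N                    # effective area, cm^2
    return 2 * math.pi * (E2J / E) * Sigma * year    # detections per year

print(rate_per_year(1e18))    # ~23 per year for M = 1e18 g
print(rate_per_year(5e18))    # ~1.2e2 per year, "about 100 i_c" in the text
```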
\subsection{Neutrino fluence and the number of neutrinos detected
from a burst}
The fluence of neutrinos incident on the detector from a burst at
redshift $z$ can be calculated as
\begin{equation}
\Phi (>E) = \frac{N_{X}^{b} \xi_\nu(>E)}{\Omega_{jet} r^2(z)}
\end{equation}
Consider a neutrino burst from a loop of length $l$ at redshift
$z$. Using $N_{X}^{b}$ from (\ref{NX}), $l_{min}$ from
(\ref{lmin}) and $\xi_{\nu}(>E)$ from (\ref{Fragmentation}), we
obtain for a loop of any length $l$,
\begin{equation}
\Phi (>E) \approx \frac{10 i_{c}^{3} \eta^{3}}{18 \pi e B
t_{0}^{2}\, E\, \ln (E_{max}^{rest}/E_{min}^{rest}) [(1+z)^{1/2}
-1]^{2}} ,
\end{equation}
which numerically results in
\begin{equation} \label{NNumber}
\Phi(>E) \approx 1.2\times 10^{-2} \frac{i_{c}^{3}
\eta_{10}^{3}}{B_{-6}} \left(\frac{10^{10}\,\,GeV}{E}\right)
\frac{1}{[(1+z)^{1/2} -1]^{2}}~ \text{km}^{-2}
\end{equation}
The number of neutrinos detected in a burst is
\begin{equation}
N_{\nu}^{det} \sim \Phi(>E)\Sigma
\end{equation}
With $M\sim 5\times10^{18} g$ as above,
\begin{equation}
N_{\nu}^{det}(>E) \approx 0.11 \frac{10^{10}~ {\rm GeV}}{E}
\frac{i_c^3 \eta_{10}^3} {B_{-6}}
\frac{1}{[(1+z)^{1/2} -1]^2}
\label{Ndet}
\end{equation}
Therefore, for a certain range of $i_c\eta_{10}$ values and source
redshifts $z$, multiple neutrinos can be detected as parallel tracks
from a single burst. For example, for $i_c \eta_{10} \sim 3$ and
$z \sim 1$, $N_{\nu}^{det} \sim 17$.
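This multiplicity estimate can be checked by evaluating Eq.~(\ref{Ndet}) directly:

```python
# Check of the multiplicity example: Eq. (Ndet) with i_c*eta_10 ~ 3,
# z ~ 1, E = 1e10 GeV and B_-6 = 1 should give N_nu^det ~ 17.
import math

def N_det(E_GeV, z, ic_eta10, B_minus6=1.0):
    return (0.11 * (1e10 / E_GeV) * ic_eta10**3 / B_minus6
            / (math.sqrt(1.0 + z) - 1.0)**2)

print(N_det(1e10, 1.0, 3.0))   # ~17
```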
For neutrino energies of interest, $E_{\nu} \agt 1\times 10^{20}$~eV,
the neutrino Lorentz factor is so large that there is practically no
arrival delay for neutrinos with smaller energies. All neutrinos from
a burst arrive simultaneously and produce atmospheric
showers with parallel axes, separated by large distances.
For other sets of parameters $N_{\nu}^{det} < 1$, i.e., only one
neutrino from a burst (or no neutrino) is detectable. As $\eta$
increases, the rate of bursts (\ref{NBurst}) diminishes while the
number of neutrinos per burst increases, so that the total neutrino flux
remains unchanged.
\begin{figure*}[t]
\begin{center}
\mbox{\includegraphics[width=11cm,angle=-90]{burst.eps}}
\end{center}
\caption{The region of parameter space where neutrinos can be seen by
a detector with the parameters of JEM-EUSO. The curved lines show the
left edges of the regions in which bursts containing at least 2, 3,
and 10 neutrinos can be expected at least once per year. Below
the dotted line, particle radiation is the dominant channel of
energy loss from loops.}
\label{fig:parameters}
\end{figure*}
The rate of detected neutrino bursts with the number of detected
neutrinos $N_{\nu}^{det} > \zeta$ for each burst, is given by
Eqs~(\ref{NBurst}) and (\ref{int}), with $z_{max}$ determined by
$N_{\nu}^{det}(>E,z_{max})=\zeta$. Using Eq.~(\ref{Ndet}) we obtain
for $x_{max} \equiv (1+z_{max})$:
\begin{equation}
x_{max}(>E,\zeta)= \left [1+ \left (\frac{0.11}{\zeta}
\frac{i_c^3\eta_{10}^3}{B_{-6}} \frac{10^{10}~{\rm GeV}}{E}
\right )^{1/2}\right ]^2,
\label{Xmax}
\end{equation}
if (\ref{Xmax}) is less than
4, and $x_{max}=4$ if (\ref{Xmax}) is larger than 4.
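Equation (\ref{Xmax}) is simply Eq.~(\ref{Ndet}) solved for $(1+z)$ at fixed multiplicity; the sketch below verifies the inversion for an arbitrary (purely illustrative) parameter set:

```python
# Consistency check: Eq. (Xmax) is Eq. (Ndet) solved for (1+z) at
# N_nu^det = zeta, so plugging x_max back into N_det must return zeta.
import math

def N_det(E, z, ic, eta10, B=1.0):
    return 0.11 * (1e10 / E) * ic**3 * eta10**3 / B / (math.sqrt(1 + z) - 1)**2

def x_max(E, zeta, ic, eta10, B=1.0):
    s = math.sqrt(0.11 / zeta * ic**3 * eta10**3 / B * 1e10 / E)
    return (1 + s)**2

# illustrative parameters only
E, zeta, ic, eta10 = 1e10, 3.0, 0.5, 2.0
x = x_max(E, zeta, ic, eta10)
print(N_det(E, x - 1, ic, eta10))  # recovers zeta
```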
Introducing into Eq.~(\ref{NBurst}) a coefficient $1/2$, which
approximately takes into account the absorption of UHE neutrinos
crossing the Earth, we obtain for the rate of detected bursts with
$N_{\nu}^{det} \geq \zeta$
\begin{equation}
\dot{N}_b^{det}(\geq \zeta) = 2.1 \times 10^3 \frac{f_{-3} B_{-6}^2}
{i_c^2 \eta_{10}^3} I(z_{max}) ~~ {\rm yr}^{-1},
\label{Nbzeta}
\end{equation}
where $I(z_{max})$ is given by Eq.~(\ref{int}) with $z_{max}$ from
Eq.~(\ref{Xmax}).
In Fig.~\ref{fig:parameters}, we have shaded the region of the
parameter space $(\eta,i_c)$ corresponding to a detectable flux of
neutrinos. Curved lines in the figure mark the regions where we
expect a burst with a given multiplicity of neutrinos, $\zeta =$ 2, 3 or
10, detected simultaneously by a detector with the parameters of
JEM-EUSO tilted. To the left of the 2-neutrino-burst line, only a
diffuse flux of single neutrinos can be observed. This flux depends
only on $i_c$, and the vertical left boundary of the shaded region
marks the value of $i_c$ at which it drops below one particle detected
per year.
Note that the regions shown for multiple events are those where
we expect at least one burst per year whose average multiplicity is
the given $\zeta$ or more. But it is possible even if the parameters
are to the left of the $\zeta = 2$ line that we would happen to
observe multiple neutrinos from a single burst, which would give a
clear signature of neutrino-jet emission from cusps.
Another quantity of interest is the {\em rate of detected neutrinos}
$f_{\nu}(\geq \zeta)$ in the events with neutrino multiplicity greater
than $\zeta$. It is given by
\begin{equation}
f_{\nu}(\geq \zeta)= \frac{1}{2} \int \frac{f_B}{2} \frac{1}{\gamma_c^2}
\frac{n(l,z) dl}{l} \frac{dV(z)}{1+z} N_{\nu}^{det}(>E,z,l).
\label{f_nu1}
\end{equation}
The important feature of the calculations is the
independence of $N_{\nu}^{det}(>E,z,l)$ from $l$. This allows us to
integrate over $l$ in Eq.~(\ref{f_nu1}) to obtain
\begin{equation}
f_{\nu}(\geq \zeta) = 2.1 \times 10^3 \frac{f_{-3} B_{-6}^2}
{i_c^2\eta_{10}^3} \int_0^{z_{max}(\zeta)} dz (1+z)^{-\frac{11}{4}}
\left [ (1+z)^{1/2} - 1 \right ]^2 N_{\nu}^{det} (>E,z) ,
\label{f_nu2}
\end{equation}
where $z_{max}(\zeta)$ is given by Eq.~(\ref{Xmax}). Using
Eq.~(\ref{Ndet}) for $N_{\nu}^{det}(>E,z)$ results in
\begin{equation}
f_{\nu}(\geq \zeta) = 1.3 \times 10^2 i_c f_{-3} B_{-6}
[1 - x_{max}^{-7/4}(i_c,\eta_{10})]~ {\rm yr}^{-1}
\label{f_nu3}
\end{equation}
for $E > 1\times 10^{19}$~eV. The asymptotic expression at
$0.11 i_c^3 \eta_{10}^3 / B_{-6}\zeta \ll 1$ gives
\begin{equation}
f_{\nu}(\geq \zeta) = \frac{1.5 \times 10^2}{\sqrt{\zeta}}
i_c^{5/2} \eta_{10}^{3/2} f_{-3} B_{-6}^{1/2}~ {\rm yr}^{-1}
\label{f_nu4}.
\end{equation}
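The asymptotic form can be checked against the full expression; the sketch below compares the two for a small expansion parameter, with $f_{-3}=1$ assumed:

```python
# Check that the asymptotic form (f_nu4) follows from (f_nu3) when
# 0.11 i_c^3 eta_10^3 / (B_-6 zeta) << 1; here f_-3 = 1 is assumed.
import math

def f_full(ic, eta10, B, zeta):
    s = math.sqrt(0.11 / zeta * ic**3 * eta10**3 / B)
    x_max = min((1.0 + s)**2, 4.0)
    return 1.3e2 * ic * B * (1.0 - x_max**-1.75)

def f_asym(ic, eta10, B, zeta):
    return 1.5e2 / math.sqrt(zeta) * ic**2.5 * eta10**1.5 * B**0.5

ic, eta10, B, zeta = 0.1, 1.0, 1.0, 1.0   # small expansion parameter
print(f_full(ic, eta10, B, zeta), f_asym(ic, eta10, B, zeta))
```

For these parameters the two expressions agree to a few percent, as expected from the leading-order expansion $1 - x_{max}^{-7/4} \approx 3.5\,[0.11\, i_c^3\eta_{10}^3/(B_{-6}\zeta)]^{1/2}$.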
\subsection{Neutrino fluxes in the particle-emission dominated regime}
So far we have assumed that gravitational radiation is the
dominant energy loss mechanism of strings.
In the opposite regime, where the particle emission energy
losses dominate, the loop's lifetime $\tau_{part}$ is
independent of its length and is given by Eq.~(\ref{taupart}).
We shall analyze this regime in the present section.
As before, we shall adopt the idealized model where the magnetic field
$B$ is turned on at time $t=t_B$, corresponding to redshift $z_{max}$,
\begin{equation}
t_B\sim t_0(1+z_{max})^{-3/2} .
\end{equation}
The loops decay at the time $t_{dec}\sim t_B + \tau_{part}$. The rate
of observable bursts ${\dot N}_b$ is given by Eq.~(\ref{NBurst}) with
$I$ from Eq.~(\ref{int}), where the integration is taken between
$z_{dec}$ and $z_{max}$ and $z_{dec}$ is the redshift corresponding to
the time $t_{dec}$.
If $\tau_{part} \agt t_B$, the redshift $z_{dec}$ is significantly
different from $z_{max}$, with $\Delta z = z_{max} - z_{dec} \agt
1$, and the value of $I$ is not much different from that evaluated in
Sec. II.D. This is an intermediate regime, in which the results we
obtained in Sections II.D and II.E for the rate of bursts and for the
diffuse flux can still be used as order of magnitude estimates.
For $\tau_{part}\ll t_B$, the loops lose all their energy to particle
emission in less than a Hubble time. The condition $\tau_{part}\sim
t_B$ can also be expressed as ${\dot E}_{part}/{\dot E}_g (z_{max})\sim
1$. Using Eq.~(\ref{EpEg}) with $z_{max}\sim 3$, we find this
condition is met for the smallest loops when
\begin{equation}
\eta \sim 6\times 10^{10} i_c f_{-3} B_{-6}~~{\rm GeV}.
\label{partradcond}
\end{equation}
It marks the boundary of the strong particle-emission domination
regime and is shown by the inclined dotted line in Fig.~2. Below this
line, the results of the preceding sections do not apply even by order
of magnitude, but as we shall see, detectable neutrino fluxes can
still be produced.
The redshift interval $\Delta z = z_{max}-z_{dec}$ for
$\tau_{part}\ll t_B$ can be estimated as
\begin{equation}
\Delta z \approx \frac{2}{3}\frac{\tau_{part}}{t_B}(1+z_{max}) \ll 1 ,
\label{Deltaz}
\end{equation}
and the integral $I$ in Eq.~(\ref{int}) is given by
\begin{equation}
I \approx \Delta z {[(1+z_{max})^{1/2}-1]^2 \over{(1+z_{max})^{11/4}}} .
\end{equation}
With $z_{max}\sim 3$, we have $t_B \sim t_0/8$, and
\begin{equation}
{\tau_{part}\over{t_B}}\sim 0.2 {\eta_{10}\over{f_{-3}B_{-6}i_c}} .
\end{equation}
The rate of bursts that are actually detected, ${\dot N}_b^{det}$,
can be expressed as a product of ${\dot N}_b$ and the probability
$p_\nu^{det}$ that at least one neutrino from the burst will be
detected. This probability is simply related to the average number of
detected neutrinos per burst $N_\nu^{det}$, given by Eq.~(\ref{Ndet}),
\begin{equation}
p_\nu^{det} = 1-\exp(-N_\nu^{det}) .
\end{equation}
For $N_{\nu}^{det}\ll 1$, we have
\begin{equation}
p_\nu^{det} \approx N_\nu^{det}
\label{pdet}
\end{equation}
and again taking $E > 1\times 10^{19}$~eV,
\begin{equation}
{\dot N}_b^{det} \sim {\dot N}_b N_\nu^{det} \sim {60
\eta_{10}\over{(1+z_{max})^{7/4}}}~ yr^{-1}
\sim 5\eta_{10} ~yr^{-1},
\label{Ndoteta}
\end{equation}
where in the last step we have used $z_{max}\sim 3$. Requiring that
${\dot N}_b^{det} \agt 1~yr^{-1}$, we obtain the condition
\begin{equation}
\eta\agt 10^9 ~GeV.
\label{cond2}
\end{equation}
Note that at the boundary of detectability, where $\eta\sim 10^9$ GeV,
we always have $N_\nu^{det}\ll 1$, and thus the approximation
(\ref{pdet}) is justified. This boundary is the lower horizontal line
bounding the observable parameter range in Fig.~2. Note also that
Eq.~(\ref{cond2}) coincides with the condition (\ref{eta*}) for
the burst-producing loops to be unaffected by friction.
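The numbers in Eq.~(\ref{Ndoteta}) and the threshold (\ref{cond2}) can be cross-checked with a short Python sketch (illustrative only; not from the original calculation):

```python
def burst_rate(eta10, z_max=3.0):
    # Eq. (Ndoteta): dN_b^det/dt ~ 60 * eta10 / (1 + z_max)^{7/4}  per year
    return 60.0 * eta10 / (1.0 + z_max) ** 1.75

rate = burst_rate(1.0)                    # ~5 yr^-1 for eta10 = 1, z_max = 3
eta10_min = (1.0 + 3.0) ** 1.75 / 60.0    # threshold where the rate drops to 1/yr
eta_min_GeV = eta10_min * 1e10            # ~2e9 GeV, i.e. eta >~ 10^9 GeV
```

The last step reproduces the detectability boundary $\eta \agt 10^9$~GeV quoted in Eq.~(\ref{cond2}) at the order-of-magnitude level.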
It is interesting to note that the detection rate (\ref{Ndoteta}) in
the particle-emission dominated regime is independent of $i_c$ and
depends only on the symmetry breaking scale $\eta$. This is in
contrast with Eq.~(\ref{Ndotic}) for the case of gravitational
radiation dominance, where the rate is proportional to $i_c$ and
independent of $\eta$.
\subsection{Cascade upper limit on neutrino flux in the superconducting
string model}
In Subsection I.C of the Introduction, we gave a very general upper
limit for the UHE neutrino flux. The presence of such a limit does
not contradict the existence of stronger upper limits in some
particular models with additional assumptions.
In this section, we calculate the energy density of the cascade
radiation in our model and compare it with $\omega_{cas} = 2 \times
10^{-6}\,\,\, eV\,cm^{-3}$ allowed by EGRET measurements.
The cascade energy density can be calculated as
\begin{equation}
\omega_{cas} = \int_0^{z_{max}} \frac{dz}{(1+z)^{4}} \int_{l_{min}(z)}^{l_{max}(z)}
dl\, f_{B}\, n(l,t) L_{em}(l,t) ,
\end{equation}
where $L_{em}(l, t) \sim \frac{1}{2} L_{tot}(l, t)$ is the loop
luminosity in the form of UHE electrons and photons produced by pion
decays. The standard calculation (for $z_{max}=3$) results in
\begin{equation} \label{CascadeEnergyDensity}
\omega_{cas} \approx \frac{1.2 i_{c} (e B t_{0}^{2}) (t_{eq} / t_{0})^{1/2}
f_{B} \eta}{7 (\Gamma G \mu)^{1/2} t_{0}^{3}}
\left [ 1- (1+z_{max})^{-7/4} \right ]
\approx 8.3 \times 10^{-7}
i_c f_{-3} B_{-6}\,\,\, eV\,cm^{-3}
\end{equation}
The energy density (\ref{CascadeEnergyDensity}) does not depend on
$\eta$ and since $\omega_{cas} < \omega_{EGRET}$, it respects the
general upper limit (\ref{cas-E2}). For $i_c\sim 1$, the
predicted neutrino flux (\ref{NFlux}) is close to the upper limit
shown in Fig.~\ref{fig:upper-limits}.
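A one-line check (illustrative Python, fiducial parameter values assumed) confirms that the cascade energy density (\ref{CascadeEnergyDensity}) stays below the EGRET-allowed value quoted above:

```python
def omega_cas(i_c=1.0, f_m3=1.0, B_m6=1.0):
    # Eq. (CascadeEnergyDensity): omega_cas ~ 8.3e-7 * i_c * f_-3 * B_-6  eV/cm^3
    return 8.3e-7 * i_c * f_m3 * B_m6

omega_egret = 2e-6                   # eV/cm^3, EGRET-allowed cascade density
ratio = omega_cas() / omega_egret    # ~0.42 of the cascade limit for i_c ~ 1
```

For $i_c \sim 1$ the model sits at about $40\%$ of the cascade limit, consistent with the $J_{\nu}/J_{\nu}^{max} = 0.42$ entry of Table~\ref{table1}.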
\section{Gamma-ray jets and single gamma-rays from the cusps}
\subsection{Bursts from loops in the Milky Way}
In each galaxy, including the Milky Way, there are approximately
$N_{l}$ loops with $l \agt l_{min}$,
\begin{equation}
N_{l} \sim n(>l_{min}) V_g \sim 2.5 \times 10^{5} \eta_{10}^{-3}
\left( V_g / 10^3~{\rm kpc}^3 \right) ,
\end{equation}
where $V_g$ is the volume of the magnetized part of the galaxy. A narrow
jet of particles emanating from a cusp on such a loop can in
principle hit the Earth. The probability of such a catastrophic event
is very small because of the smallness of solid angle $\Omega_{jet}$
of jet emission. The number of jets hitting
an area $S$ on the Earth per unit time does not depend on $S$ if
$S\ll\Omega_{jet} r^{2}$, where $r$ is the distance from the
source. This rate is given by
\begin{equation}
\dot{N_{b}} = P V_g \int dl \frac{2n(l)}{l} ,
\end{equation}
where again we assume one cusp per oscillation, and
$P = \Omega_{jet}/4\pi = 1/(4 \gamma_{c})^{2}$ is the probability to hit
the detector. After the
standard calculations, we obtain for
$V_g \sim 1\times 10^3~ kpc^{3}$,
\begin{equation}
\dot{N_{b}} = 1\times 10^{-13} B_{-6}^{2} \eta_{10}^{-3}
i_{c}^{-2}\,\,\, yr^{-1} .
\end{equation}
Thus, for particles propagating rectilinearly, the jets from
cusps in our galaxy are unobservable.
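This unobservability is easy to quantify (a one-line estimate, with an assumed Hubble time of $1.4\times 10^{10}$~yr; not part of the original text):

```python
rate = 1e-13        # yr^-1, jet-hit rate for B_-6 = eta_10 = i_c = 1 (from the text)
t_hubble = 1.4e10   # yr, rough age of the universe (assumed value)
expected_hits = rate * t_hubble   # ~1e-3 expected hits even over a Hubble time
```

Even integrated over the age of the universe, the expected number of direct jet hits is of order $10^{-3}$.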
The most important components of the galactic jets are photons and
neutrinos. A photon jet at the highest energies undergoes widening of
the jet angle due to photon absorption in the galactic magnetic field
\cite{Berez70}. Absorption of photons, $\gamma + B \to B + e^{+} +
e^{-}$, is followed by energy loss by electrons and positrons in the
magnetic field, with the emission of synchrotron photons in directions
different from that of the primary photon. This results in the widening
of the solid angle $\Omega_{jet}$ \cite{Berez70}.
The widening of photon jets in the Milky Way is negligible. This can
be illustrated by a numerical example. The highest energy of a photon
in a jet is $E_{\gamma}^{max} \sim \gamma_{c} \eta \sim 10^{31}
i_{c}/B_{-6}\,\,eV$. Photons with $E_{\gamma} \geqslant
10^{25}\,\,eV$ are absorbed in galactic magnetic fields. The produced
electrons and positrons with $E_{e} \sim 10^{25} \,\, eV$ have
lifetime $\tau \sim 10^{3}\,\,s$ for synchrotron energy losses and
attenuation length $l_{att} \sim 3\times 10^{13}\,\,cm$. Since the
Larmor radius of such electrons is $r_{L} \sim 3 \times 10^{28}\,\,
cm$, the deflection angle $\theta \sim l_{att}/r_{L} \sim 10^{-15}$ is
of no consequence.
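The smallness of the deflection quoted above follows directly from the two length scales in the text (trivial numerical check, for illustration only):

```python
l_att = 3e13   # cm, attenuation length of the ~1e25 eV electrons (from the text)
r_L   = 3e28   # cm, Larmor radius of such electrons in the galactic field
theta = l_att / r_L   # ~1e-15 rad: jet widening in the Milky Way is negligible
```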
\subsection{Cascade gamma-radiation from Virgo cluster}
As discussed above, the photon jets from galactic cusps do not
widen and are thus invisible. For cusps at large distances,
the photon jet widens efficiently through cascading on
CMB photons, $\gamma + \gamma_{CMB} \to e^{+} + e^{-}$, $e +
\gamma_{CMB} \to e^{\prime} + \gamma^{\prime}$, etc., and the source can
be seen in gamma-radiation. As in the case of diffuse cascade
radiation (see section I.C), all primary photons with energy higher
than the absorption energy $\epsilon_{a}$ are absorbed on CMB
radiation and converted into low-energy cascade photons. Thus, cusps
can be seen in $100\,\,GeV - 100\,\,TeV$ gamma radiation, similar to
the sources of UHE protons which can be seen in $TeV$ gamma-radiation
\cite{Blasi}.
The nearest source from which this radiation can be expected is the
Virgo cluster. It is located at distance $r = 18\,\,Mpc$, and the
number of loops within the core of radius $R_{c} \sim 3\,\,Mpc$, where
a magnetic field $B\sim 10^{-6}\,\,G$ can be reliably assumed, can reach
$n_{l} R_{c}^{3} \sim 7 \times 10^{12} \eta_{10}^{-3}$, with the
luminosity in $\gamma\, e^{+}\,e^{-}$ component for each loop
$L_{loop}^{\gamma} \sim 2 \times 10^{29}\,\, i_{c}
\eta_{10}^{3}B_{-6}\,\, erg\,s^{-1}$. Half of this energy goes into
cascade radiation, $L_{cas} \sim 0.5 L_{loop}^{\gamma}$.
The spectrum of the cascade photons at distance $r \sim 20\,\,Mpc$
has two characteristic energies \cite{book}: the absorption
energy $\epsilon_{a} \sim 100\,\,TeV$ and the energy
$\epsilon_{x}$. The latter is the energy of a photon produced by an
electron born in $\gamma + \gamma_{CMB} \to e^{+} + e^{-}$ scattering
by a photon with $E_{\gamma} = \epsilon_{a}$. The energy of this
electron is $E_{e}
\sim 0.5 \epsilon_{a} \sim 50\,\,TeV$ and the second characteristic
energy is $\epsilon_{x} \sim 20\,\,TeV$ for $r
\sim 20\,\, Mpc$.
The spectrum of cascade photons at observation is calculated in \cite{book} as
\begin{equation} \label{SpectrumCascade}
J_{\gamma}(E_{\gamma}) = \begin{cases}K(E_{\gamma}/ \epsilon_{x})^{-3/2}, & E_{\gamma} \leqslant \epsilon_{x} \\
K(E_{\gamma}/ \epsilon_{x})^{-2.0}, & \epsilon_{x} \leqslant E_{\gamma} \leqslant \epsilon_{a}\end{cases}
\end{equation}
The spectrum constant $K$ in (\ref{SpectrumCascade}) can be expressed
in terms of the cascade luminosity $L_{cas}$ and the distance to the
source $r$ as
\begin{equation}
K = \frac{L_{cas}}{\Omega_{eff} r^{2}} \frac{1}{\epsilon_{x}^{2} (2 +
\ln (\epsilon_{a} / \epsilon_{x}))}\,,
\end{equation}
where $\Omega_{eff}$ is the effective solid angle produced by
scattering of cascade electrons in the extragalactic magnetic field. In
the case of full isotropization, $\Omega_{eff} \sim 4\pi$. Cascade
luminosity can be estimated as $1/4$ of the total luminosity of cusps
in a cluster, $L_{cas} \sim \frac{1}{4} L_{loop} N_{loop}$. Using
$L_{loop} = 4.4 \times 10^{29} i_{c} \eta_{10}^{3} B_{-6} \,\,\, erg\,
s^{-1}$ and $N_{loop} \sim 2.5 \times 10^{11} \eta_{10}^{-3}$, valid
for a cluster core with $R_{c}\sim 3\,\,Mpc$, one obtains for the flux
\begin{equation} \label{FluxVirgo}
J_{\gamma}(>\epsilon_{x}) = \int_{\epsilon_{x}}^{\epsilon_{a}} dE_{\gamma} J_{\gamma}(E_{\gamma}) \sim 1\times 10^{-13} i_{c} B_{-6} (R_{c}/ 3\,\,Mpc)^{3}\,\,\, cm^{-2}\,s^{-1}
\end{equation}
which is marginally detectable by present telescopes. Note that
$L_{cas}$ and the flux $J_{\gamma}$ do not depend on $\eta$. We
consider the estimate (\ref{FluxVirgo}) as a very rough indication of
detectability of the gamma-ray flux from the Virgo cluster. Much
more accurate calculations are needed for a reliable prediction of
this flux.
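The normalization constant $K$ can be cross-checked by integrating the energy content of the spectrum (\ref{SpectrumCascade}). In the sketch below (illustrative Python, arbitrary normalization), the high-energy branch is integrated numerically, while the low-energy branch contributes $2K\epsilon_x^2$ analytically; together they reproduce the factor $\epsilon_x^2\,(2+\ln(\epsilon_a/\epsilon_x))$ appearing in the expression for $K$:

```python
import math

eps_x, eps_a, K = 20.0, 100.0, 1.0   # TeV units; K arbitrary here

def EJ(E):
    # E * J_gamma(E) on the branch eps_x <= E <= eps_a, J = K (E/eps_x)^-2
    return K * E * (E / eps_x) ** -2.0

# trapezoidal integration of the energy flux over [eps_x, eps_a]
n = 100_000
h = (eps_a - eps_x) / n
num = h * (0.5 * (EJ(eps_x) + EJ(eps_a)) + sum(EJ(eps_x + i * h) for i in range(1, n)))

exact = K * eps_x ** 2 * math.log(eps_a / eps_x)
# the low-energy branch adds 2 * K * eps_x**2 analytically, giving the total
# K * eps_x**2 * (2 + ln(eps_a / eps_x)) that fixes K in terms of L_cas
```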
\section{UHE protons from superconducting strings}
The cusps of superconducting strings in clusters of galaxies produce
UHE nucleons at fragmentation of parton jets with a fraction of
nucleons $\epsilon_{N} = 0.12$ \cite{ABK} relative to the total number
of hadrons. The generation rate $Q_{p}(\Gamma_{p})$ of UHE protons
with Lorentz factor $\Gamma_{p}$ per unit comoving volume and unit
time can be expressed through emissivity,
\begin{equation}
\mathcal{L}_{0} = \int_{\Gamma_{p}^{min}}^{\Gamma_{p}^{max}}
d\Gamma_{p} m_{N} \Gamma_{p} Q_{p}(\Gamma_{p})\,,
\end{equation}
where the emissivity $\mathcal{L}_{0}$ is the energy released in UHE
protons at $z=0$ per unit comoving volume per unit time,
$\Gamma_{p}^{max}$ and $\Gamma_{p}^{min} \sim 1$ are the maximum and
minimum Lorentz factors of the protons, respectively, and $m_{N}$ is the
nucleon mass. For a power-law generation spectrum $Q_{p}(\Gamma_{p})
\sim \Gamma_{p}^{-2}$, we have
\begin{equation} \label{GenerationRate}
Q_{p}(\Gamma_{p}) = \frac{\mathcal{L}_{0}}{m_{N}\, \ln
\Gamma_{p}^{max}} \Gamma_{p}^{-2} .
\end{equation}
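The $\ln \Gamma_p^{max}$ in Eq.~(\ref{GenerationRate}) is fixed by requiring that the energy integral recover $\mathcal{L}_0$. This can be verified numerically (a minimal sketch in arbitrary units; $\Gamma_p^{min}=1$ assumed):

```python
import math

L0, mN, Gmax = 1.0, 1.0, 1.0e10      # arbitrary units; Gamma_p^min = 1
norm = L0 / (mN * math.log(Gmax))
Q = lambda G: norm * G ** -2          # Eq. (GenerationRate)

# check that int_1^{Gmax} mN * Gamma * Q(Gamma) dGamma recovers L0,
# integrating on a log grid (Gamma = e^u, dGamma = Gamma du)
n = 100_000
h = math.log(Gmax) / n
vals = [mN * math.exp(u * h) ** 2 * Q(math.exp(u * h)) for u in range(n + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

On the log grid the integrand is constant, which makes explicit why the normalization involves $\ln \Gamma_p^{max}$.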
The emissivity is calculated as
\begin{equation}
\mathcal{L}_{0} = \epsilon_{N} f_{B} \int_{l_{min}}^{l_{max}} dl n(l)
L_{tot}^{cusp}(l) ,
\end{equation}
where $l_{min}$ is given by (\ref{lmin}), while $n(l)$ and $L_{cusp}$
are given by (\ref{LoopDensity}) and (\ref{Ltot}) respectively. For
$L_{tot}^{cusp}$ one readily obtains
\begin{equation}
L_{tot}^{cusp} = \frac{J^{2} l}{e J_{c}} \frac{i_{c} \gamma_{c} \eta}{l/2} =
0.2 i_{c} e B l \eta ,
\end{equation}
and after a simple calculation we have
\begin{equation}
\mathcal{L}_{0} \approx 0.4 i_{c} \epsilon_{N} f_{B}
\frac{(t_{eq}/t_{0})^{1/2} e B t_0^2}{(\Gamma G \mu)^{1/2}}
\frac{\eta}{t_0} \frac{1}{(t_0)^3}
\approx 1.4 \times 10^{45} i_{c} f_{-3} B_{-6}\,\,\,erg\,Mpc^{-3}\,yr^{-1} .
\end{equation}
One more parameter relevant for the calculation of $Q_{p}(\Gamma_{p})$ is
$\Gamma_{p}^{\rm max} = E_{p}^{max} / m_{N}$. It can be estimated using
$E_{p}^{max} \sim 0.1 \epsilon_{X}$, where
$\epsilon_{X}=i_c\gamma_c \eta$ is the
energy of the boosted $X$ particles in the laboratory system,
which being estimated for loops of length $l_{min}$, gives
\begin{equation}
\Gamma_{p}^{max} = 1 \times 10^{10} \eta_{10} i_{c}^{2}
\frac{1}{\Gamma G \mu}\frac{\eta}{eBt_0} \left(\frac{1\,\,GeV}{m_{N}}\right) .
\end{equation}
Notice that $\Gamma_{p}^{max}$ does not depend on $\eta$ and that it
enters $Q_{p}(\Gamma_{p})$ through $\ln \Gamma_{p}^{max}$.
Now we can calculate the space density of UHE protons using the
generation
rate $Q_{p}(\Gamma_{p})$ given by (\ref{GenerationRate}) and taking
into account propagation through CMB radiation with the help of the
kinetic equation \cite{Longaire,BGGprd}
\begin{equation} \label{Kinetic1}
\frac{\partial}{\partial t} n_{p}(\Gamma_{p}, t) -
\frac{\partial}{\partial \Gamma_{p}}[b(\Gamma_{p}, t) n_{p}
(\Gamma_{p}, t)] = Q_{p}(\Gamma_{p}, t) ,
\end{equation}
where $b(\Gamma_{p}, t) = -d\Gamma/dt$ describes energy losses of UHE
protons interacting with CMB photons. For $\Gamma \geqslant 3 \times
10^{10}$, the proton energy losses become large and one can neglect
the first term in the lhs of equation (\ref{Kinetic1}). Then
Eq.~(\ref{Kinetic1}) becomes stationary and its solution for $t =
t_{0}$ reads
\begin{equation}
n_{p}(\Gamma_{p}) = \frac{1}{b(\Gamma_{p})}
\int_{\Gamma_{p}}^{\Gamma_{p}^{max}} Q_{p}(\Gamma_{p}) d\Gamma_{p} \approx
\frac{\mathcal{L}_{0}}{m_{N} \Gamma_{p}\, b(\Gamma_{p})\,
\ln\Gamma_{p}^{max}} .
\end{equation}
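The last approximation replaces $1/\Gamma_p - 1/\Gamma_p^{max}$ by $1/\Gamma_p$, which is accurate whenever $\Gamma_p \ll \Gamma_p^{max}$. A quick check (illustrative Python, sample Lorentz factors assumed):

```python
def source_integral(Gamma, Gmax):
    # exact integral of Q ~ Gamma^-2 from Gamma to Gmax (up to normalization)
    return 1.0 / Gamma - 1.0 / Gmax

Gamma, Gmax = 3e10, 1e12
exact = source_integral(Gamma, Gmax)
approx = 1.0 / Gamma                  # the 1/Gamma approximation used above
rel_err = abs(approx - exact) / exact # ~Gamma/Gmax ~ 3% here
```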
In terms of the proton energy $E = m_{N}\Gamma_{p}$ and the diffuse
flux $J_{p}(E) = (c/4\pi) n_{p}(E)$, we have, in the standard form of
presentation,
\begin{equation}
E^{3} J_{p}(E) \approx \frac{c}{4\pi} \frac{\mathcal{L}_{0}}{\ln \Gamma_p^{max}}
\frac{E^{2}}{b(E)} ,
\end{equation}
where $b(E)=-dE/dt$ is the proton energy loss rate.
With $b(E)$ taken from \cite{BGGprd} a numerical estimate at
$E = 3\times 10^{19}\,\,\,eV$ gives
\begin{equation} \label{PFlux}
E^{3} J_{p}(E) \approx 1.3 \times 10^{24} i_{c} f_{-3} B_{-6}\,\,\,
eV^{2}\,m^{-2}\,s^{-1}\,sr^{-1} .
\end{equation}
With $i_c\sim1$, the calculated flux (\ref{PFlux}) agrees well with
the measurements at the same energy, e.g., with the HiRes \cite{HiRes}
flux $ E^{3}J_{p}(E) = 2.0 \times 10^{24}\,\,\,
eV^{2}\,m^{-2}\,s^{-1}\,sr^{-1}$,
so the cusp emission may account for the observed events at the
highest energies. For $i_c \alt 0.1$ the UHE proton flux from
superconducting strings is subdominant.
The UHE proton spectrum from superconducting strings has a sharper GZK
cutoff than the standard spectrum for homogeneously distributed
sources. This is due to the absence of clusters of galaxies in the
vicinity of our galaxy. The nearest cluster, Virgo, is located at
$18\,\,Mpc$ from the Milky Way; other clusters are located at much
larger distances. Nearby sources affect the spectrum at $E \geqslant 1
\times 10^{20}\,\,eV$, where the proton spectrum from superconducting
strings is predicted to be steeper than the standard one. The
experimental data at present have too low statistics to distinguish
the two cases.
In contrast, homogeneously distributed sources such as necklaces
\cite{BV}, give the dominant contribution at $E \geqslant (7-8) \times
10^{19}\,\,eV$ in the form of UHE photons, coming from nearby
sources. In the case of superconducting strings such component is
absent. The UHE photon component from superconducting strings is not
dominant at energy lower than $5 \times 10^{19}\,\,eV$, because
absorption of photons at these energies is stronger than for protons.
\section{Conclusions}
Superconducting cosmic strings produce high energy particles in the
decay of charge carriers, $X$ particles, ejected from the string
cusps. The large Lorentz factor $\gamma_{c}$ of the cusp boosts the
energies of these particles and collimates them in a narrow beam with
opening angle $\theta \sim 1/\gamma_{c}$. The basic string parameter
is $\eta$, the scale of symmetry breaking, which we parametrize as
$\eta = \eta_{10} 10^{10}\,\,GeV$. Another free parameter $i_{c}
\alt 1$ determines the critical electric current in the cusp,
$J_{max} = i_{c} e \eta$, and the mean energy of the charge carriers $X$
escaping from the string, $\epsilon_{X} = i_{c} \gamma_c \eta$.
The astrophysical parameter which determines the electric current
induced in the string is the magnitude of the magnetic field $B$
in the relevant cosmic structures.
The fraction $f_{B}$ of the universe occupied by magnetic field $B$
determines the flux of high-energy particles produced by
superconducting strings. The most favorable values of $B$ and $f_{B}$
for the generation of a large flux of UHE neutrinos are $B \sim
10^{-6}\,\,G$ and $f_{B} \sim 10^{-3}$. They correspond to clusters of
galaxies.
The main uncertainties of our model are related to the uncertainties
in our understanding of the evolution of cosmic strings and of the
origin and evolution of cosmic magnetic fields. On the cosmic string
side, the key unknown quantity is the parameter $\alpha$ which sets
the characteristic length of string loops in Eq.~(\ref{alpha}). Here,
we used the value of $\alpha\sim 0.1$, as suggested by numerical
simulations in Refs.~\cite{OVV,OV}. We have also disregarded the
effects of loop fragmentation. Toward the end of its life, the
loop's configuration may be sufficiently modified by radiation
back-reaction that the loop will self-intersect and break up into
smaller pieces. These smaller loops will have more frequent cusps,
shorter lifetimes, higher velocities, and smaller induced currents.
The effect of such loops on the neutrino fluxes is hard to assess without a quantitative model of loop fragmentation. This will have to await the
next generation of string evolution simulations.
On the astrophysical side,
basically unknown is the cosmological evolution of the magnetic field
parameters $f_B(z)$ and $B(z)$ in the redshift interval $0 < z <
z_{max}$, where $z_{max} \sim$ 2 -- 3 is the redshift when the magnetic
field was generated. For the space average value $\langle f_B(z) B(z)
\rangle$ we use the numerical simulation by Ryu et al. \cite{Ryu},
according to which this value remains roughly constant at $0 < z <
3$. Some important quantities, such as the diffuse neutrino flux
$J_{\nu}(E)$, the cascade energy density $\omega_{cas}$, and the UHE
proton emissivity are determined by the evolution of the product
$f_B(z)B(z)$. However, some other quantities, such as the rate of
neutrino bursts and fluence depend on the evolution of $f_B(z)$ and
$B(z)$ in other combinations. In these cases we consider the
parameters $f_{-3}$ and $B_{-6}$ as effective values, using $f_{-3}
\sim B_{-6} \sim 1$.
In addition, we adopted the following simplifying assumptions. The
Lorentz factor of the cusp is characterized by a single fixed value
$\gamma_{c}$, while in reality there is a distribution of Lorentz
factors along the cusp. The spectrum of particles in a jet is
approximated as $E^{-2}$, while a QCD calculation
\cite{DGLAP} gives a spectrum which is not a power law, with
the best power-law fit as $E^{-1.92}$.
We use cosmology with $\Lambda = 0$. The spectrum of photons from
Virgo cluster and the diffuse spectrum of UHE protons are calculated
using very rough approximations. Given the uncertainties of
string and magnetic field evolution, these simplifications are rather
benign. On the other hand, they have the advantage of yielding
analytic formulae, which allow us to clearly see the dependence of the
results on the parameters involved in the problem. In particular,
with the assumed particle spectrum $\sim E^{-2}$, the diffuse
flux of neutrinos, the cascade upper limit, the flux of $TeV$ photons
from Virgo cluster and the diffuse flux of UHE protons do not depend on
$\eta$. Since the realistic spectrum is very close to $E^{-2}$, this
means that the quantities listed above depend on $\eta$ very weakly.
We summarize the results obtained in this work as follows.
As our calculations show, among different sources, such as galaxies,
groups of galaxies, filaments, etc., the largest diffuse flux is
produced by clusters of galaxies with $B\sim 10^{-6}\,\,G$ in a
cluster core and $f_{B} \sim 10^{-3}$. The calculated diffuse
neutrino flux for three neutrino flavors and for $z_{max}=3$ is
\begin{equation} \label{NFlux2}
E^{2} J_{\nu} (E) \sim 6.6 \times 10^{-8} i_{c} f_{-3}
B_{-6}\,\,\,GeV\,cm^{-2}\,s^{-1}\,sr^{-1}.
\end{equation}
This flux respects the cascade upper limit, provided by the energy
density of electrons, positrons and photons, which initiate
electromagnetic cascades in collisions with CMB photons. The cascade
energy density is calculated from Eq.~(\ref{NFlux2}) as
\begin{equation} \label{CascadeEnergyDensity2}
\omega_{cas} \approx 8.3 \times 10^{-7}i_c f_{-3} B_{-6}\,\,\, eV\,cm^{-3}
\end{equation}
and is close to the cascade limit for $i_c\sim 1$.
It is the same as given by Eq.~(\ref{CascadeEnergyDensity}).
At energies $E \alt 10^{22}\,\,eV$, the flux (\ref{NFlux2}) is
detectable by future detectors JEM-EUSO and Auger
(South + North). The signature of the superconducting string model is
the correlation of neutrinos with clusters of galaxies. We note,
however, that the neutrino flux from the nearest cluster, Virgo, is
undetectable by the above-mentioned detectors.
Another signature of the model is the possibility of multiple
events, when several showers appear simultaneously in the field of
view of the detector, e.g. JEM-EUSO. They are produced by neutrinos from
the same jet. The time delay in arrival of neutrinos with different
energies is negligibly small. Such multiple events are expected
to appear for a certain range of parameters, as indicated in Fig.~2.
As an illustration, in Table \ref{table1} we show, for a representative
value $\eta = 5\times 10^{10}$~GeV, the diffuse neutrino flux,
in units of the cascade upper limit $J_{\nu}^{max}$,
the rate of bursts, and the average shower multiplicity for several
values of $i_c$. Note that the bottom row in the table is the {\it
average} multiplicity, that is, the average number of neutrinos
detected per burst. For example, the low multiplicity at $i_c=0.1$
indicates that only a small number (about 5) out of the 220 bursts per
year will actually be detected. For $i_c=1/3$, the average
multiplicity is below 1, but Fig.~2 shows that we can expect at least
one 2-neutrino burst per year.
\begin{table}[ht]
\begin{center}
\caption{The diffuse flux $J_{\nu}(E)$ in units of the cascade upper limit
$J_{\nu}^{max}$ for 3 neutrino flavors, found from (\ref{cas-E2}),
the rate of neutrino bursts, and the shower multiplicity
(the average number of neutrinos detected in one burst), for
$\eta=5\times 10^{10}~$GeV, $z_{max}=3$ and different values of $i_c$. The
multiplicity is shown for neutrinos with $E \agt 10^{10}$~GeV from a
burst at $z=2$.
}
\vspace{3mm}
\begin{tabular}{c|c|c|c|c}
\hline
$i_c$ & $1.0$ & $1/2$ & $1/3$ & $0.1$ \\
\hline
$J_{\nu}/J_{\nu}^{max}$ & $0.42$ & $0.21$ & $0.14$ & $0.042$ \\
\hline
rate of bursts & $2.2~{\rm yr}^{-1}$ & $8.7~{\rm yr}^{-1} $ &
$19.6~{\rm yr}^{-1} $ & $220~{\rm yr}^{-1} $ \\
\hline
multiplicity & $26$ & $3.2$ & $0.95$ & 0.026 \\
\hline
\end{tabular}
\label{table1}
\end{center}
\end{table}
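The rows of Table~\ref{table1} follow the scalings of the analytic formulas: at fixed $\eta$, the burst rate scales as $i_c^{-2}$ and the multiplicity as $i_c^{3}$. This can be verified directly (illustrative Python reproducing the table entries):

```python
# Table I rows (eta = 5e10 GeV): i_c -> rate of bursts [yr^-1] and multiplicity
rates = {1.0: 2.2, 0.5: 8.7, 1.0 / 3.0: 19.6, 0.1: 220.0}
mults = {1.0: 26.0, 0.5: 3.2, 1.0 / 3.0: 0.95, 0.1: 0.026}

# scalings anchored at i_c = 1: rate ~ 2.2 * i_c^-2, multiplicity ~ 26 * i_c^3
rate_dev = max(abs(r - 2.2 / ic ** 2) / r for ic, r in rates.items())
mult_dev = max(abs(m - 26.0 * ic ** 3) / m for ic, m in mults.items())
```

Both maximum deviations are below $2\%$, i.e. the table is consistent with the quoted power laws to rounding accuracy.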
A photon jet from the cusp initially propagates together with
the neutrino jet, within the same solid angle. However, at large
enough distance, photons from the jet can be absorbed in collisions
with CMB photons ($\gamma + \gamma_{CMB} \to e^{+} + e^{-}$), the
produced electrons (positrons) emit high-energy photons in
inverse-Compton scattering ($e + \gamma_{CMB} \to e^{\prime} +
\gamma^{\prime}$), and thus an electromagnetic cascade
develops. Electrons are deflected in magnetic fields, and photon
radiation is isotropized. Due to this effect, $10-100\,\,TeV$ gamma
radiation from the nearby cluster of galaxies, Virgo, can be marginally
detectable. The corresponding photon flux
is given by
\begin{equation} \label{FluxVirgo2}
J_{\gamma}(>\epsilon_{x}) \sim 1\times 10^{-13} i_{c} B_{-6}\,\,\,
cm^{-2}\,s^{-1} ,
\end{equation}
where $\epsilon_{x} \sim 20\,\,TeV$.
In the Milky Way, there may be a large number of loops with radiating
cusps, but because of the very small jet opening angle, the
probability to observe UHE particle jets coming from these loops is
extremely small.
The diffuse flux of UHE protons is suppressed by the small fraction of
nucleons produced at decay of $X$ particles (the factor $\epsilon_{N}
= 0.12$ is obtained in MC and DGLAP calculations \cite{DGLAP}),
and by energy losses of protons interacting with the CMB during
propagation.
The calculated flux at energy $E \geqslant 3\times 10^{19}\,\,eV$
is given by the approximate formula
\begin{equation} \label{PFlux2}
E^{3} J_{p}(E) \approx \frac{c}{4\pi}\frac{\mathcal{L}_{0}}
{\ln \Gamma_{p}^{max}} \frac{E^{2}}{b(E)}
\end{equation}
where $b(E) = - dE/dt$ is the energy loss rate of protons,
$\Gamma_{p}^{max}$ is the maximum Lorentz factor of a proton at
production, and $\mathcal{L}_{0}$ is the emissivity (energy in the form
of protons emitted per unit comoving volume per unit time),
given by
\begin{equation} \label{Emissivity2}
\mathcal{L}_{0} \approx 1.4 \times 10^{45} i_{c} f_{-3} B_{-6}\,\,\,erg\, Mpc^{-3}\,yr^{-1} .
\end{equation}
For $i_c\sim 1$ and $E \sim 3\times 10^{19}\,\,eV$, the proton flux
can reach the value $1.3\times
10^{24}\,\,eV^{2}\,m^{-2}\,s^{-1}\,sr^{-1}$, which can be compared for
example with $2 \times 10^{24}\,\,eV^{2}\,m^{-2}\,s^{-1}\,sr^{-1}$
measured by HiRes \cite{HiRes}. Thus, radiation from cusps may account for
observed events at the highest energies. The predicted spectrum at
$E> 8\times 10^{19}\,\,eV$ is steeper than the standard UHECR spectrum
with homogeneous distribution of sources. The accompanying UHE gamma
radiation is very low, due to large distances between the sources
(clusters of galaxies).
As already mentioned, practically all predicted quantities, such
as the diffuse neutrino flux (\ref{NFlux2}), the cascade energy
density (\ref{CascadeEnergyDensity2}), the UHE gamma-ray flux from
Virgo cluster (\ref{FluxVirgo2}), the diffuse flux of UHE protons
(\ref{PFlux2}) and the proton emissivity (\ref{Emissivity2}),
do not depend on the basic string parameter $\eta$. There are
only two observable quantities that do,
the rate of neutrino bursts $\dot{N_b}$ and the
neutrino fluence $\Phi (>E)$:
\begin{equation}
\dot{N_{b}} \sim 3\times 10^{2} \frac{B_{-6}^{2} f_{-3}}{i_{c}^{2} \eta_{10}^{3}}
\,\,yr^{-1} ,
\end{equation}
\begin{equation}
\Phi(>E) \approx 1\times 10^{-2} \frac{i_{c}^{3}
\eta_{10}^{3}}{B_{-6}} \left(\frac{10^{10}\,\,GeV}{E}\right)
\frac{1}{[(1+z)^{1/2} -1]^{2}}~ \text{km}^{-2} .
\end{equation}
As $\eta$ decreases (at a fixed $i_c$), the rate of neutrino
bursts goes up and the number of neutrinos detected in a burst,
\begin{equation}
N_{\nu}^{det}(>E) \approx 0.11 \frac{10^{10}~ {\rm GeV}}{E}
\frac{i_c^3 \eta_{10}^3} {B_{-6}}
\frac{1}{[(1+z)^{1/2} -1]^2}
\end{equation}
goes down, while the product $\dot{N_{b}} N_{\nu}^{det}$ remains
$\eta$-independent.
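The $\eta$-independence of the product is a simple cancellation of the $\eta_{10}^{\pm 3}$ factors, which can be confirmed numerically (illustrative Python, fiducial parameter values assumed):

```python
def burst_rate(eta10, B_m6=1.0, f_m3=1.0, i_c=1.0):
    # dN_b/dt ~ 3e2 * B_-6^2 * f_-3 / (i_c^2 * eta10^3)  per year
    return 3e2 * B_m6 ** 2 * f_m3 / (i_c ** 2 * eta10 ** 3)

def n_det(eta10, E_GeV=1e10, B_m6=1.0, i_c=1.0, z=2.0):
    # N_nu^det(>E) ~ 0.11 * (1e10 GeV / E) * i_c^3 * eta10^3 / (B_-6 * [(1+z)^1/2 - 1]^2)
    geom = ((1.0 + z) ** 0.5 - 1.0) ** 2
    return 0.11 * (1e10 / E_GeV) * i_c ** 3 * eta10 ** 3 / (B_m6 * geom)

p1 = burst_rate(1.0) * n_det(1.0)
p5 = burst_rate(5.0) * n_det(5.0)   # same product: the eta10^3 factors cancel
```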
We have considered here only ``regular'', field theory cosmic strings.
Recent developments in superstring theory suggest
\cite{Tye,Copeland,Dvali} that the role of cosmic strings can also be
played by fundamental (F) strings and by $D$-branes. Such strings may
be superconducting, in which case they will also emit bursts of
relativistic particles from their cusps. The main difference from the
case of ordinary strings is that the probability for two intersecting
strings to reconnect, which is $p=1$ for ordinary strings, can be
$p<1$ and even $p\ll 1$ for F or D-strings. A low reconnection
probability results in an enhanced density of loops; the particle
production by loops is increased correspondingly.
UHE neutrinos from superconducting strings may have three
important signatures: correlation with clusters of galaxies, multiple
neutrino-induced showers observed simultaneously in the field of view
of a detector, e.g. JEM-EUSO, and detection of $\sim 10\,\,TeV$
gamma-radiation from Virgo, the nearest cluster of
galaxies.
\section{Acknowledgments}
We would like to thank J. J. Blanco-Pillado for useful discussions,
and A. Gazizov for preparing Fig.~1 and discussions. This work was
supported in part by the National Science Foundation under grants
0353314 and 0457456 (USA), and by the contract ASI-INAF I/088/06/0 for
theoretical studies in High Energy Astrophysics (Italy).
\section{Introduction}
In this paper, we consider a two-dimensional Stokes interface problem. For simplicity, we describe it using the following model. We consider a bounded domain $\Omega \subset \mathbb{R}^2$ that is divided into two subdomains $\Omega_1$ and $\Omega_2$. In each subdomain $\Omega_i,\ i=1,2$, the fluid flow is governed by the Stokes equations, i.e.,
\begin{eqnarray}
-\nabla\cdot(A_i \nabla {\bf u}_i)+\nabla p_i &=&{\bf f}_i ,\,\, \qquad\text{in}\ \Omega_i,\label{model 1} \\
\nabla\cdot {\bf u}_i &=&0 , \,\,\,\qquad\text{in}\ \Omega_i,\\
{\bf u}_i&=&{\bf g}_i , \qquad\text{on}\ \partial \Omega_i \backslash \Gamma ,
\end{eqnarray}
where the viscosity coefficient $A_i$ is piecewise constant, and $\Gamma=\partial\Omega_1 \cap \partial\Omega_2$ denotes the interface between the two subdomains. The interface conditions on $\Gamma$ are described by the following equations:
\begin{eqnarray}
{\bf u}_1-{\bf u}_2&=&\bm{\psi}, \qquad\text{on}\ \Gamma \label{interface 1}\\
(A_1 \nabla {\bf u}_1-p_1 I) \mathbf{n_1} +(A_2 \nabla {\bf u}_2-p_2 I) \mathbf{n_2}&=&\bm{\phi}, \qquad\text{on}\ \Gamma \label{interface 2}
\end{eqnarray}
where $\mathbf{n_1}$ and $\mathbf{n_2}$ denote unit outward normals of $\Omega_1$ and $\Omega_2$. The problem is usually used in geology and geophysics. For example, the mantle flow problem is Stokes equations with discontinuous viscosity $A$. This type of problem is investigated by many geophysicists and mathematicians \cite{Stokesinterfacebackground4,Stokesinterfacebackground1,Stokesinterfacebackground6,Stokesinterfacebackground2,Stokesinterfacebackground5,Stokesinterfacebackground3}.
In practical problems, the interface is usually a complex curve or surface. In two dimensions, such interfaces are commonly approximated by straight edges, and a method on straight-edged elements is then used to solve the problem numerically. However, when higher-degree polynomials are used to approximate the solution, the geometric error of the straight-edge approximation reduces the convergence order of the error between the numerical and exact solutions \cite{reduceerror3,reduceerror1,reduceerror2}. Therefore, for domains with curved edges, the geometric error should be reduced as much as possible. Many numerical methods solve the problem on the curved domain directly. For example, the p-FEM \cite{pFEM} establishes a mapping between local and Cartesian coordinates, and isogeometric methods \cite{isogeometricmethods} use NURBS (non-uniform rational B-splines) to represent both the computational domain and the basis of the approximation space. The NURBS-enhanced finite element method (NEFEM) \cite{NEFEM,reduceerror3} follows a similar idea: it employs NURBS to represent the curved boundary and uses the finite element method to solve the equations. The standard finite element method is used on interior elements (elements not intersecting the curved boundary), while NURBS-based approximation of the solution is used on boundary elements (elements intersecting the curved boundary).
In this paper, we use the weak Galerkin (WG) finite element method to solve the interface problem on a domain with a curved boundary. The WG method was first proposed in \cite{wang2013weak} for second-order elliptic equations in the primal formulation. It is characterized by replacing continuous functions with weak functions and differential operators with weak operators. The WG method has since been applied to the Stokes equations \cite{WGStokes1,WGStokes2}, the Brinkman equation \cite{WGBrinkman1,WGBrinkman2}, and the linear elasticity equations \cite{elasticityequation}. Owing to the discontinuity of weak functions, the WG method is flexible in dealing with interface problems \cite{Stokesinterfacebackground6,ellipticinterface,stokesDarcy}. For example, for elliptic interface problems, the weak function can take a single value on the interface \cite{ellipticinterface}, with the interface condition incorporated into the variational problem, or two values, with the interface condition transformed into a Dirichlet boundary condition. In this paper, we adopt the idea of NEFEM. On interior elements, we use the standard WG treatment: basis functions are defined on the reference element and obtained on the physical element by an affine transformation \cite{nonaffineisoparametric1,nonaffineisoparametric2}. On interface elements, we define the basis functions directly on the physical elements. For numerical integration on such an element, we construct a non-affine transformation from the reference element and then carry out the quadrature on the reference element.
An outline of this paper is as follows. In Section 2, we introduce the notation used in this paper, define the weak finite element spaces and weak operators, and present the weak Galerkin numerical scheme. Section 3 is devoted to the proof of the existence, uniqueness, and stability of the solution of the weak Galerkin numerical scheme. In Sections 4 and 5, the error estimates in the energy norm and the $L^2$ norm are proved, respectively. Finally, some numerical examples verifying the proposed theory are given in Section 6.
\section{The numerical scheme}In this section, we first introduce the notation used in the paper. Next, we define the weak Galerkin finite element spaces on a partition of the domain $\Omega$ and the corresponding weak operators. Finally, the weak Galerkin numerical scheme for the Stokes interface problem is given.
\subsection{The notations}
Assume that $\mathcal{T}_h$ is a triangulation of the domain $\Omega$ containing both straight and curved triangular cells. For $T \in \mathcal{T}_h$, when all three edges of $T$ are straight, $T$ is called a straight cell; the set of such triangles is denoted by $\mathcal{T}_h^S$. When $T$ has a curved edge lying on the interface, it is called a curved cell; the set of curved elements is denoted by $\mathcal{T}_h^I$. The set of curved elements in $\Omega_1$ is denoted by $\mathcal{T}_{1h}^I$, and the set of curved elements in $\Omega_2$ by $\mathcal{T}_{2h}^I$. The straight elements, together with the straight triangles formed by the three vertices of the curved elements, are assumed to satisfy the shape regularity condition \cite{WGStokes1}.
For $T \in \mathcal{T}_h$, let $|T|$ and $h_T$ be the area and the diameter of the element $T$, respectively, and denote by $h=\max_{T \in \mathcal{T}_h} h_T$ the mesh size. Let $\mathcal{E}_h$ be the set of all edges in the partition $\mathcal{T}_h$ and $\mathcal{E}_h^0$ the set of all interior edges. Denote by $\mathcal{E}_h^S$ the set of all straight edges and by $\mathcal{E}_h^I$ the set of all interface edges.
\subsection{Weak finite element space}
We define the weak function space on the partition $\mathcal{T}_h$. For $T \in \mathcal{T}_h$, a weak function on the element $T$ has the form ${\mathbf v}= \{{\mathbf v}_0, {\mathbf v}_b \}$, where ${\mathbf v}_0$ represents the value in the interior of $T$ and ${\mathbf v}_b$ represents the value on the boundary $\partial T$. Note that ${\mathbf v}_b$ takes a single value on each edge $e \in \mathcal{E}_h$, and no relation between ${\mathbf v}_0$ and ${\mathbf v}_b$ is assumed. Next, we give the weak Galerkin finite element space for the velocity ${\bf u}$. For the pressure $p$, a piecewise polynomial space is used for the approximation.
For a given integer $k \geqslant 1$,
\begin{align*}
V_h=&\{{\mathbf v}=\{ {\mathbf v}_0, {\mathbf v}_b\}: {\mathbf v}_0|_{T} \in [P_k(T)]^2, \,T \in \mathcal{T}_h,\\
&{\mathbf v}_b|_e \in [P_{k-1}(e)]^2, \, e \in \mathcal{E}_h^S,\,
\tilde{{\mathbf v}}_b|_e \in [P_k(e)]^{2 \times 2}, \, e \in \mathcal{E}_h^I
\},\\
V_h^0=&\{{\mathbf v}=\{ {\mathbf v}_0, {\mathbf v}_b\} \in V_h: {\mathbf v}_b=\mathbf{0} ~\text{on}~ \partial \Omega \},\\
W_h=&\{q \in L_0^2(\Omega): q|_T \in P_{k-1}(T),\, T \in \mathcal{T}_h\}.
\end{align*}
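For illustration (our remark; the dimension counts below follow directly from the definitions above), consider the lowest-order case $k=1$. The local numbers of unknowns are
\begin{align*}
\dim [P_1(T)]^2 = 6,\quad \dim [P_0(e)]^2 = 2,\quad \dim [P_1(e)]^{2 \times 2} = 8,\quad \dim P_0(T) = 1,
\end{align*}
that is, six velocity unknowns per element interior, two per straight edge, eight for the matrix-valued unknown per interface edge, and one pressure unknown per element.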
\subsection{Discrete weak operators}
In this subsection, we introduce some weak differential operators.
\begin{definition}\label{weak gradient}
For each ${\mathbf v} \in V_h$, its discrete weak gradient $\nabla_w {\mathbf v} \in [P_{k-1}(T)]^{2\times2}$ is defined by the following equations:
(1) for $T \in \mathcal{T}_h^S$ and $\forall q \in [P_{k-1}(T)]^{2\times2}$,
\begin{equation}\label{weak gradient 1}
(\nabla_w {\mathbf v},q)_T=-({\mathbf v}_0,\nabla \cdot q)_T + \langle {\mathbf v}_b,q {\bf n} \rangle_{\partial T},
\end{equation}
\qquad (2) for $T \in \mathcal{T}_{1h}^I$ and $\forall q \in [P_{k-1}(T)]^{2\times2}$,
\begin{eqnarray}\label{weak gradient 2}
(\nabla_w {\mathbf v},q)_T=-({\mathbf v}_0,\nabla \cdot q)_T +\langle {\mathbf v}_{b},q {\bf n} \rangle_{\partial T \backslash \Gamma}
+\langle \tilde{{\mathbf v}}_{b},q \rangle_{\partial T \cap \Gamma},
\end{eqnarray}
\qquad (3) for $T \in \mathcal{T}_{2h}^I$ and $\forall q \in [P_{k-1}(T)]^{2\times2}$,
\begin{eqnarray}\label{weak gradient 3}
(\nabla_w {\mathbf v},q)_T=-({\mathbf v}_{0},\nabla \cdot q)_T +\langle {\mathbf v}_{b},q {\bf n} \rangle_{\partial T \backslash \Gamma}
-\langle \tilde{{\mathbf v}}_{b},q \rangle_{\partial T \cap \Gamma}.
\end{eqnarray}
\end{definition}
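As a consistency check (our remark, obtained from Eq.(\ref{weak gradient 1}) by integration by parts), for $T \in \mathcal{T}_h^S$ and any $q \in [P_{k-1}(T)]^{2\times2}$ the weak gradient satisfies
\begin{align*}
(\nabla_w {\mathbf v},q)_T=(\nabla {\mathbf v}_0,q)_T-\langle {\mathbf v}_0-{\mathbf v}_b,q {\bf n} \rangle_{\partial T},
\end{align*}
so $\nabla_w {\mathbf v}$ deviates from the elementwise gradient of ${\mathbf v}_0$ only through the boundary mismatch ${\mathbf v}_0-{\mathbf v}_b$.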
\qquad Similarly, we can give the definition of discrete weak divergence.
\begin{definition}\label{weak divergence}
For each ${\mathbf v} \in V_h$, its discrete weak divergence is denoted by $\nabla_w \cdot {\mathbf v} \in P_{k-1}(T)$ and satisfies the following equations:
(1) for $T \in \mathcal{T}_h^S$ and $\forall q \in P_{k-1}(T)$,
\begin{eqnarray}\label{weak divergence 1}
(\nabla_w \cdot {\mathbf v},q)_T=-({\mathbf v}_0, \nabla q)_T + \langle{\mathbf v}_b \cdot {\bf n}, q \rangle_{\partial T},
\end{eqnarray}
\qquad (2) for $T \in \mathcal{T}_{1h}^I$ and $\forall q \in P_{k-1}(T)$,
\begin{eqnarray}\label{weak divergence 2}
(\nabla_w \cdot {\mathbf v},q)_T=-({\mathbf v}_0,\nabla q)_T + \langle {\mathbf v}_{b} \cdot \mathbf{n},q \rangle_{\partial T \backslash \Gamma}
+\langle \tilde{{\mathbf v}}_{b},qI \rangle_{\partial T \cap \Gamma},
\end{eqnarray}
\qquad (3) for $T \in \mathcal{T}_{2h}^I$ and $\forall q \in P_{k-1}(T)$,
\begin{eqnarray}\label{weak divergence 3}
(\nabla_w \cdot {\mathbf v},q)_T=-({\mathbf v}_{0},\nabla q)_T + \langle {\mathbf v}_{b} \cdot \mathbf{n},q \rangle_{\partial T \backslash \Gamma}
-\langle \tilde{{\mathbf v}}_{b},qI \rangle_{\partial T \cap \Gamma}.
\end{eqnarray}
where $I$ is the identity matrix.
\end{definition}
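As a consistency check (our remark, obtained from Eq.(\ref{weak divergence 1}) by integration by parts), for $T \in \mathcal{T}_h^S$ and any $q \in P_{k-1}(T)$,
\begin{align*}
(\nabla_w \cdot {\mathbf v},q)_T=(\nabla \cdot {\mathbf v}_0,q)_T-\langle ({\mathbf v}_0-{\mathbf v}_b) \cdot {\bf n},q \rangle_{\partial T},
\end{align*}
so the weak divergence differs from $\nabla \cdot {\mathbf v}_0$ only through the boundary mismatch ${\mathbf v}_0-{\mathbf v}_b$.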
\subsection{The numerical scheme}
Let $Q_0$ be the $L^2$ projection operator from $[L^2(T)]^2$ onto $[P_k(T)]^2$, $T \in \mathcal{T}_h$. For each edge $e \in \mathcal{E}_h^S$, let $Q_b$ be the $L^2$ projection operator from $[L^2(e)]^2$ onto $[P_{k-1}(e)]^2$. Denote by $\hat{Q}_b$ the $L^2$ projection operator from $[L^2(e)]^{2 \times 2}$ onto $[P_{k}(e)]^{2 \times 2}$, $e \in \mathcal{E}_h^I$. Next, we introduce the following bilinear forms:
\begin{align*}
s({\bf u},{\mathbf v})=&\sum_{T \in \mathcal{T}_h^S }h_T^{-1} \langle Q_b {\bf u}_0 -{\bf u}_b,Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T}\\
+&\sum_{T \in \mathcal{T}_{1h}^I}h_T^{-1} \Big( \langle Q_b {\bf u}_0 -{\bf u}_b,Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \backslash \Gamma}\\
+&\langle \hat{Q}_b({\bf u}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\bf u}}_b,\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma} \Big)\\
+&\sum_{T \in \mathcal{T}_{2h}^I}h_T^{-1} \Big(\langle Q_b {\bf u}_0 -{\bf u}_b,Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \backslash \Gamma}\\
+&\langle \hat{Q}_b({\bf u}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\bf u}}_b,\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}\Big),\\
a({\bf u},{\mathbf v})=&\sum_{T \in \mathcal{T}_h}(A\nabla_w {\bf u},\nabla_w {\mathbf v})_T+s({\bf u},{\mathbf v}),\\
b({\mathbf v},p)=&\sum_{T \in \mathcal{T}_h}(\nabla_w \cdot {\mathbf v},p)_T.
\end{align*}
\begin{algorithm}
For the Stokes interface problem (\ref{model 1})-(\ref{interface 2}), find $\mathbf{u}_h \in V_h$ with ${\bf u}_b=Q_b {\bf g}$ on $\partial \Omega$, and $p_h \in W_h$, such that
\begin{eqnarray}\label{algorithm 1}
\begin{split}
a({\bf u}_h,{\mathbf v}_h)-&b({\mathbf v}_h,p_h)=({\bf f},{\mathbf v}_0)-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v}_h \rangle_{\Gamma}\\
+& \sum_{T \in \mathcal{T}_{2h}^I}h_T^{-1} \langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}+\langle \bm{\phi} {\bf n}_1^{\mathrm{T}},\tilde{{\mathbf v}}_b \rangle_{\Gamma},
\end{split}
\end{eqnarray}
\begin{eqnarray}\label{algorithm 2}
b({\bf u}_h,q_h)=-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),q_hI \rangle_{\Gamma}.
\end{eqnarray}
for all ${\mathbf v}_h \in V_h^0$ and $q_h \in W_h$, where ${\bf n}_1$ and ${\bf n}_2$ are the unit outward normals of $\Omega_1$ and $\Omega_2$, respectively.
\end{algorithm}
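As a sanity check (our remark, not part of the derivation), when the interface data vanish, $\bm{\psi}=\mathbf{0}$ and $\bm{\phi}=\mathbf{0}$, all interface terms on the right-hand sides of (\ref{algorithm 1})-(\ref{algorithm 2}) drop out and the scheme takes the standard saddle-point form of the WG scheme for the Stokes problem (cf. \cite{WGStokes1}):
\begin{align*}
a({\bf u}_h,{\mathbf v}_h)-b({\mathbf v}_h,p_h)&=({\bf f},{\mathbf v}_0),\\
b({\bf u}_h,q_h)&=0,
\end{align*}
for all ${\mathbf v}_h \in V_h^0$ and $q_h \in W_h$.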
\section{Stability}
For $T \in \mathcal{T}_h$, let $Q_h$ be the $L^2$ projection operator onto $V_h$. When $T \in \mathcal{T}_h^S$, $Q_h {\bf u}=\{Q_0 {\bf u}, Q_b {\bf u} \}$. When $T \in \mathcal{T}_h^I$, $Q_h {\bf u}= \{Q_0 {\bf u},\tilde{Q}_b {\bf u}\}$, where $\tilde{Q}_b {\bf u} = Q_b {\bf u}$ for $e \in \mathcal{E}_h^S$ and $\tilde{Q}_b {\bf u} = \hat{Q}_b ({\bf u} {\bf n}^{\mathrm{T}})$ for $e \in \mathcal{E}_h^I$. To ensure a single value on the interface, we set $\hat{Q}_b ({\bf u} {\bf n}^{\mathrm{T}})=\hat{Q}_b ({\bf u}_1 {\bf n}_1^{\mathrm{T}})$. Therefore, when $T \in \mathcal{T}_{2h}^I$, $\hat{Q}_b ({\bf u}_2 {\bf n}_2^{\mathrm{T}})=\hat{Q}_b (\bm{\psi} {\bf n}_1^{\mathrm{T}})-\hat{Q}_b ({\bf u}_1 {\bf n}_1^{\mathrm{T}})$. The operators $\mathcal{Q}_h$ and $\mathbb{Q}_h$ are defined as the $L^2$ projections onto $P_{k-1}(T)$ and $[P_{k-1}(T)]^{2 \times 2}$, respectively.
\begin{lemma}\label{weak gradient exchange}
On each element $T \in \mathcal{T}_h$, for the discrete weak gradient operator, we have the following properties:
$\forall \tau \in [P_{k-1}(T)]^{2 \times 2}$,
(1) for $T\nsubseteq \Omega_2$ or $ \partial T \cap \Gamma = \emptyset$, we have
\begin{eqnarray}\label{weak gradient exchange 1}
(\nabla_w(Q_h {\bf u}),\tau)_T=(\mathbb{Q}_h(\nabla {\bf u}),\tau)_T,
\end{eqnarray}
(2) for $T\subseteq \Omega_2 $ and $\partial T \cap \Gamma \neq \emptyset$, we derive
\begin{eqnarray}\label{weak gradient exchange 2}
(\nabla_w(Q_h {\bf u}),\tau)_T=(\mathbb{Q}_h(\nabla {\bf u}),\tau)_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}} ),\tau \rangle_{\partial T \cap \Gamma}.
\end{eqnarray}
\end{lemma}
\begin{proof}According to Eq.(\ref{weak gradient 1}), the definitions of $Q_0$, $Q_b$ and $\mathbb{Q}_h$, and integration by parts, for $T \in \mathcal{T}_h^S $ and $\forall \tau \in [P_{k-1}(T)]^{2 \times 2}$, we have
\begin{align*}
(\nabla_w(Q_h {\bf u}), \tau)_T=&-(Q_0 {\bf u}, \nabla \cdot \tau)_T+\langle Q_b {\bf u}, \tau \cdot {\bf n} \rangle_{\partial T}\\
=&-({\bf u},\nabla \cdot \tau)_T+\langle {\bf u},\tau \cdot {\bf n} \rangle_{\partial T}\\
=&(\nabla {\bf u},\tau)_T\\
=&(\mathbb{Q}_h(\nabla {\bf u}),\tau)_T.
\end{align*}
Next, using Eq.(\ref{weak gradient 2}), we obtain for $T \in \mathcal{T}_{1h}^I$,
\begin{align*}
(\nabla_w(Q_h {\bf u}), \tau)_T=&-(Q_0 {\bf u}, \nabla \cdot \tau)_T+\langle Q_b {\bf u}, \tau {\bf n}_1 \rangle_{\partial T \backslash \Gamma}+
\langle \hat{Q}_b({\bf u} {\bf n}_1^{\mathrm{T}}),\tau \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u},\nabla \cdot \tau)_T+\langle {\bf u} ,\tau {\bf n}_1 \rangle_{\partial T \backslash \Gamma}+\langle {\bf u} {\bf n}_1^{\mathrm{T}},\tau \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u},\nabla \cdot \tau)_T+\langle {\bf u},\tau {\bf n}_1 \rangle_{\partial T}\\
=&(\nabla {\bf u},\tau)_T\\
=&(\mathbb{Q}_h(\nabla {\bf u}),\tau)_T.
\end{align*}
Combining the above two equations completes the proof of Eq.(\ref{weak gradient exchange 1}). Similarly, using (\ref{weak gradient 3}) and the interface conditions, for $T \in \mathcal{T}_{2h}^I$ and $\forall \tau \in [P_{k-1}(T)]^{2 \times 2}$, we have
\begin{align*}
(\nabla_w(Q_h {\bf u}), \tau)_T=&-(Q_0 {\bf u}_2, \nabla \cdot \tau)_T+\langle Q_b {\bf u}_2, \tau {\bf n}_2 \rangle_{\partial T \backslash \Gamma}-
\langle \hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),\tau \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_2,\nabla \cdot \tau)_T+\langle {\bf u}_2 ,\tau {\bf n}_2 \rangle_{\partial T \backslash \Gamma}-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau \rangle_{\partial T \cap \Gamma}\\
&+\langle \hat{Q}_b({\bf u}_2 {\bf n}_2^{\mathrm{T}}),\tau \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_2,\nabla \cdot \tau)_T+\langle {\bf u}_2,\tau {\bf n}_2 \rangle_{\partial T}-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau \rangle _{\partial T \cap \Gamma}\\
=&(\nabla {\bf u}_2,\tau)_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau \rangle _{\partial T \cap \Gamma}\\
=&(\mathbb{Q}_h(\nabla {\bf u}_2),\tau)_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau \rangle _{\partial T \cap \Gamma}.
\end{align*}
\end{proof}
\begin{lemma}\label{weak divergence exchange}
On each element $T \in \mathcal{T}_h$, for the weak divergence operator, we have the following properties:
$\forall \tau \in P_{k-1}(T)$,
(1) for $T\nsubseteq \Omega_2$ or $\partial T \cap \Gamma = \emptyset$, we obtain that
\begin{eqnarray}\label{weak divergence exchange 1}
(\nabla_w \cdot (Q_h {\bf u}),\tau)_T=(\mathcal{Q}_h(\nabla \cdot {\bf u}),\tau)_T,
\end{eqnarray}
(2) for $T\subseteq \Omega_2$ and $\partial T \cap \Gamma \neq \emptyset$, we derive that
\begin{eqnarray}\label{weak divergence exchange 2}
(\nabla_w \cdot (Q_h {\bf u}),\tau)_T=(\mathcal{Q}_h(\nabla \cdot {\bf u}),\tau)_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}} ),\tau I\rangle_{\partial T \cap \Gamma}.
\end{eqnarray}
\end{lemma}
\begin{proof}According to (\ref{weak divergence 1}), the definitions of $Q_0$, $Q_b$ and $\mathcal{Q}_h$, and integration by parts, for $T \in \mathcal{T}_h^S$ we have
\begin{align*}
(\nabla_w \cdot (Q_h {\bf u}),\tau)_T=&-(Q_0 {\bf u}, \nabla \tau)_T+\langle Q_b {\bf u} ,\tau {\bf n} \rangle_{\partial T}\\
=&-({\bf u},\nabla \tau)_T+\langle {\bf u} ,\tau {\bf n} \rangle_{\partial T}\\
=&(\nabla \cdot {\bf u},\tau)_T\\
=&(\mathcal{Q}_h(\nabla \cdot {\bf u}),\tau)_T.
\end{align*}
Next, using (\ref{weak divergence 2}), for $T \in \mathcal{T}_{1h}^I$,
\begin{align*}
(\nabla_w \cdot (Q_h {\bf u}),\tau)_T=&-(Q_0 {\bf u},\nabla \tau)_T+\langle Q_b {\bf u}, \tau {\bf n}_1
\rangle_{\partial T \backslash \Gamma}
+\langle \hat{Q}_b({\bf u} {\bf n}_1^{\mathrm{T}}),\tau I \rangle _{\partial T \cap \Gamma}\\
=&-({\bf u},\nabla \tau)_T+\langle {\bf u},\tau {\bf n}_1 \rangle_{\partial T \backslash \Gamma}+\langle {\bf u} {\bf n}_1^{\mathrm{T}},\tau I \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u},\nabla \tau)_T+\langle {\bf u},\tau {\bf n}_1 \rangle_{\partial T}\\
=&(\nabla \cdot {\bf u},\tau)_T\\
=&(\mathcal{Q}_h(\nabla \cdot {\bf u}),\tau )_T.
\end{align*}
Using (\ref{weak divergence 3}) and the interface conditions, for $T \in \mathcal{T}_{2h}^I$,
\begin{align*}
(\nabla_w \cdot (Q_h {\bf u}),\tau)_T=&-(Q_0 {\bf u}_2,\nabla \tau)_T+\langle Q_b {\bf u}_2, \tau {\bf n}_2
\rangle_{\partial T \backslash \Gamma}
-\langle \hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),\tau I \rangle _{\partial T \cap \Gamma}\\
=&-({\bf u}_2,\nabla \tau)_T+\langle {\bf u}_2,\tau {\bf n}_2 \rangle_{\partial T \backslash \Gamma}\\
&+\langle \hat{Q}_b({\bf u}_2 {\bf n}_2^{\mathrm{T}}),\tau I \rangle_{\partial T \cap \Gamma}-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau I \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_2,\nabla \tau)_T+\langle {\bf u}_2,\tau {\bf n}_2 \rangle_{\partial T}
-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau I \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\bf u}_2,\tau)_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau I \rangle_{\partial T \cap \Gamma}\\
=&(\mathcal{Q}_h(\nabla \cdot {\bf u}_2),\tau )_T-\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),\tau I \rangle_{\partial T \cap \Gamma}.
\end{align*}
\end{proof}
We now define a semi-norm on the weak Galerkin finite element space $V_h$ as follows:
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|}{\mathbf v}{|\hspace{-.02in}|\hspace{-.02in}|}^2=a({\mathbf v},{\mathbf v}).
\end{eqnarray}
Then, we have the following property.
\begin{lemma}
${|\hspace{-.02in}|\hspace{-.02in}|} \cdot {|\hspace{-.02in}|\hspace{-.02in}|}$ provides a norm in $V_h^0$.
\end{lemma}
\begin{proof}
The positivity and linearity properties of ${|\hspace{-.02in}|\hspace{-.02in}|} \cdot {|\hspace{-.02in}|\hspace{-.02in}|}$ are straightforward. Next, assume that ${|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} =0$ for some ${\mathbf v} \in V_h^0$; then it follows that
\begin{align*}
0=&\sum_{T \in \mathcal{T}_h}(A\nabla_w {\mathbf v},\nabla_w {\mathbf v})_T+\sum_{T \in \mathcal{T}_h^S}h_T^{-1}\langle Q_b {\mathbf v}_0-{\mathbf v}_b,Q_b {\mathbf v}_0-{\mathbf v}_b \rangle_{\partial T}\\
+&\sum_{T \in \mathcal{T}_{1h}^I} h_T^{-1} \Big( \langle Q_b {\mathbf v}_0 -{\mathbf v}_b,Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \backslash \Gamma}\\
+&\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b,\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma} \Big)\\
+&\sum_{T \in \mathcal{T}_{2h}^I }h_T^{-1} \Big( \langle Q_b {\mathbf v}_0 -{\mathbf v}_b,Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \backslash \Gamma}\\
+&\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b,\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma} \Big).
\end{align*}
The above equation implies the following conclusions:
\begin{enumerate}
\item $\nabla_w {\mathbf v} =0$ on all $T \in \mathcal{T}_h $;\\
\item $Q_b {\mathbf v}_0={\mathbf v}_b $ for $ e \in \mathcal{E}_h^S$;\\
\item $\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})=\tilde{{\mathbf v}}_b $ for $ e \in \mathcal{E}_h^I $;\\
\item $\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})=-\tilde{{\mathbf v}}_b $ for $ e \in \mathcal{E}_h^I $.
\end{enumerate}
Therefore, by the above facts, for any $q \in [P_{k-1}(T)]^{2 \times 2}$ we have the following.
(1) For $T \in \mathcal{T}_{h}^S$,
\begin{align*}
0=&(\nabla_w {\mathbf v} ,q)_T\\
=&-({\mathbf v}_0,\nabla \cdot q)_T+\langle {\mathbf v}_b ,q {\bf n} \rangle_{\partial T}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle {\mathbf v}_0 ,q {\bf n} \rangle_{\partial T}+\langle {\mathbf v}_b ,q {\bf n} \rangle_{\partial T}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle Q_b {\mathbf v}_0 -{\mathbf v}_b ,q {\bf n} \rangle_{\partial T}\\
=&(\nabla {\mathbf v}_0,q)_T.
\end{align*}
Letting $q=\nabla {\mathbf v}_0$ in the above equation yields $\nabla {\mathbf v}_0=0$ for $T \in \mathcal{T}_{h}^S$. \\
(2) For $T \in \mathcal{T}_{1h}^I$,
\begin{align*}
0=&(\nabla_w {\mathbf v} ,q)_T\\
=&-({\mathbf v}_0,\nabla \cdot q)_T+\langle {\mathbf v}_b ,q {\bf n}_1 \rangle_{\partial T \backslash \Gamma}+\langle \tilde{{\mathbf v}}_b , q \rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle {\mathbf v}_0 ,q {\bf n}_1 \rangle_{\partial T}+\langle {\mathbf v}_b ,q {\bf n}_1 \rangle_{\partial T \backslash \Gamma}
+\langle \tilde{{\mathbf v}}_b, q \rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle Q_b {\mathbf v}_0 -{\mathbf v}_b ,q {\bf n}_1 \rangle_{\partial T \backslash \Gamma}-\langle
\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b,q \rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T.
\end{align*}
Letting $q=\nabla {\mathbf v}_0$ in the above equation yields $\nabla {\mathbf v}_0=0$ for $T \in \mathcal{T}_{1h}^I$. \\
(3) For $T \in \mathcal{T}_{2h}^I$,
\begin{align*}
0=&(\nabla_w {\mathbf v} ,q)_T\\
=&-({\mathbf v}_0,\nabla \cdot q)_T+\langle {\mathbf v}_b ,q {\bf n}_2 \rangle_{\partial T \backslash \Gamma}-\langle \tilde{{\mathbf v}}_b , q \rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle {\mathbf v}_0 ,q {\bf n}_2 \rangle_{\partial T}+\langle {\mathbf v}_b ,q {\bf n}_2 \rangle_{\partial T \backslash \Gamma}
-\langle \tilde{{\mathbf v}}_b , q \rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T-\langle Q_b {\mathbf v}_0 -{\mathbf v}_b ,q {\bf n}_2 \rangle_{\partial T \backslash \Gamma}-\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b,q\rangle_{\partial T \cap \Gamma}\\
=&(\nabla {\mathbf v}_0,q)_T.
\end{align*}
Letting $q=\nabla {\mathbf v}_0$ in the above equation yields $\nabla {\mathbf v}_0=0$ for $T \in \mathcal{T}_{2h}^I$. \\
It follows that ${\mathbf v}_0$ is constant on each $T \in \mathcal{T}_{h}$. This, together with the fact that ${\mathbf v}_b=\mathbf{0}$ on $\partial \Omega$, implies that ${\mathbf v}_0=\mathbf{0}$ and ${\mathbf v}_b=\mathbf{0}$.
\end{proof}
\begin{lemma}\label{Abounded}
For all ${\mathbf v},{\bf w} \in V_h^0$,
\begin{eqnarray}
|a({\mathbf v},{\bf w})|\leqslant {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \cdot {|\hspace{-.02in}|\hspace{-.02in}|} {\bf w} {|\hspace{-.02in}|\hspace{-.02in}|};\\
a({\mathbf v},{\mathbf v})= {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}^2.
\end{eqnarray}
\end{lemma}
\begin{lemma}\cite{WGStokes1}
Let $\mathcal{T}_h$ be a finite element partition of $\Omega$ satisfying the shape regularity assumption, and let ${\bf w} \in [H^{r+1}(\Omega)]^d$ and $\rho \in H^r(\Omega)$ with
$1 \leqslant r \leqslant k$. Then, for $0 \leqslant s \leqslant 1$ we have
\begin{eqnarray}
\sum_{T \in \mathcal{T}_h} h_T^{2s} \| {\bf w}- Q_0 {\bf w}\|_{T,s}^2 \leqslant Ch^{2(r+1)}\|{\bf w}\|_{r+1}^2,\label{projectorestimate1}\\
\sum_{T \in \mathcal{T}_h} h_T^{2s} \|\nabla {\bf w}- \mathbb{Q}_h(\nabla {\bf w}) \|_{T,s}^2 \leqslant C h^{2r}\|{\bf w}\|_{r+1}^2,\label{projectorestimate2}\\
\sum_{T \in \mathcal{T}_h} h_T^{2s} \|\rho- Q_h^p \rho \|_{T,s}^2 \leqslant Ch^{2r}\|\rho\|_{r}^2 \label{projectorestimate3}.
\end{eqnarray}
Here $C$ denotes a generic constant independent of the mesh size $h$ and the functions in the estimates.
\end{lemma}
\begin{lemma}(inf-sup condition)\label{InfSupCondition}
There exists a positive constant C such that\\
\begin{eqnarray}\label{InfSup}
\sup_{{\mathbf v} \in V_h^0}\frac{b({\mathbf v},\rho)}{{|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}} \geqslant C \| \rho\|
\end{eqnarray}
for all $\rho \in W_h $, where $C$ is independent of the mesh size $h$.
\end{lemma}
\begin{proof}
According to \cite{bookfiniteelementmethod1,bookfiniteelementmethod2,bookfiniteelementmethod3,bookfiniteelementmethod5,bookfiniteelementmethod4,WGStokes1}, for any given $\rho \in W_h\subset L_0^2(\Omega)$, there exists a vector-valued function $\tilde{{\mathbf v}} \in [H_0^1(\Omega)]^d$ such that
\begin{align*}
\frac{(\nabla \cdot \tilde{{\mathbf v}},\rho) }{\| \tilde{{\mathbf v}} \|_1} \geqslant C \| \rho\|,
\end{align*}
where $C > 0$ is a constant depending only on the domain $\Omega$. Let ${\mathbf v} = Q_h \tilde{{\mathbf v}} \in V_h$; then we need to verify that the following inequality holds:
\begin{eqnarray}\label{InfSup1}
{|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \leqslant C_0 \| \tilde{{\mathbf v}} \|_1.
\end{eqnarray}
Firstly, it follows from the definition of ${|\hspace{-.02in}|\hspace{-.02in}|} \cdot {|\hspace{-.02in}|\hspace{-.02in}|}$ and Lemma \ref{weak gradient exchange} that for $T \in \mathcal{T}_h$,
\begin{eqnarray}\label{InfSup1.1}
A\| \nabla_w {\mathbf v} \|_T^2 =A\| \nabla_w (Q_h \tilde{{\mathbf v}}) \|_T^2 = A\|\mathbb{Q}_h(\nabla \tilde{{\mathbf v}}) \|_T^2 \leqslant A\| \nabla \tilde{{\mathbf v}} \|_T^2 \leqslant C
\| \tilde{{\mathbf v}} \|_{1,T}^2.
\end{eqnarray}
Next, by using the definition of $Q_h$, the trace inequality and the estimate (\ref{projectorestimate1}), we can derive that for $T \in \mathcal{T}_h^S $,
\begin{eqnarray}\label{SvvS}
\begin{split}
s({\mathbf v},{\mathbf v})|_T=&h_T^{-1}\|Q_b {\mathbf v}_0 -{\mathbf v}_b \|_{\partial T}^{2}\\
=&h_T^{-1}\|Q_b (Q_0 \tilde{{\mathbf v}}) -Q_b{\tilde{{\mathbf v}}} \|_{\partial T}^{2}\\
\leqslant &h_T^{-1}\|Q_0 \tilde{{\mathbf v}}-\tilde{{\mathbf v}}\|_{\partial T}^{2}\\
\leqslant &C\Big( h_T^{-2}\|Q_0 \tilde{{\mathbf v}}-\tilde{{\mathbf v}}\|_T^2+\|\nabla(Q_0 \tilde{{\mathbf v}}-\tilde{{\mathbf v}}) \|_T^2 \Big)\\
\leqslant &C\|\tilde{{\mathbf v}}\|_{1,T}^2.
\end{split}
\end{eqnarray}
When $T \in \mathcal{T}_{1h}^I$, the stabilizer term takes the form
\begin{align*}
s({\mathbf v},{\mathbf v})|_T=h_T^{-1}\|Q_b{{\mathbf v}_0}-{\mathbf v}_b \|_{\partial T \backslash \Gamma}^2+h_T^{-1}\|\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma}^2.
\end{align*}
The first term is estimated as in (\ref{SvvS}), so we consider the second term. Proceeding as in the proof of (\ref{SvvS}), we have
\begin{eqnarray}
\begin{split}
&h_T^{-1}\|\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma}^2\\
=&h_T^{-1}\|\hat{Q}_b((Q_0 \tilde{{\mathbf v}}) {\bf n}_1^{\mathrm{T}})-\hat{Q}_b(\tilde{{\mathbf v}} {\bf n}_1^{\mathrm{T}})\|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|(Q_0 \tilde{{\mathbf v}}) {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}} {\bf n}_1^{\mathrm{T}} \|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|Q_0 \tilde{{\mathbf v}}- \tilde{{\mathbf v}}\|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|Q_0 \tilde{{\mathbf v}}- \tilde{{\mathbf v}}\|_{\partial T}^2\\
\leqslant &C\| \tilde{{\mathbf v}}\|_{1,T}^2.
\end{split}
\end{eqnarray}
Similarly, for $T \in \mathcal{T}_{2h}^I$, we have
\begin{align*}
s({\mathbf v},{\mathbf v})|_T=&h_T^{-1}\| Q_b {\mathbf v}_0- {\mathbf v}_b\|_{\partial T \backslash \Gamma}^2+h_T^{-1} \| \hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b\|_{\partial T \cap \Gamma}^2,
\end{align*}
then using the same estimation technique as in (\ref{SvvS}), we obtain
\begin{eqnarray}\label{InfSup1.2}
\begin{split}
&h_T^{-1}\|\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma}^2\\
=&h_T^{-1}\|\hat{Q}_b((Q_0 \tilde{{\mathbf v}}) {\bf n}_2^{\mathrm{T}})+\hat{Q}_b(\tilde{{\mathbf v}} {\bf n}_1^{\mathrm{T}})\|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|(Q_0 \tilde{{\mathbf v}}) {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}} {\bf n}_1^{\mathrm{T}} \|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|Q_0 \tilde{{\mathbf v}}- \tilde{{\mathbf v}}\|_{\partial T \cap \Gamma}^2\\
\leqslant & h_T^{-1}\|Q_0 \tilde{{\mathbf v}}- \tilde{{\mathbf v}}\|_{\partial T}^2\\
\leqslant &C\| \tilde{{\mathbf v}}\|_{1,T}^2.
\end{split}
\end{eqnarray}
Consequently, Eq.(\ref{InfSup1}) follows by using (\ref{InfSup1.1})-(\ref{InfSup1.2}).
The remaining step is to estimate $b({\mathbf v},\rho)$. On the basis of Lemma \ref{weak divergence exchange} and the definition of $\mathcal{Q}_h$, it is easy to see that
\begin{eqnarray}\label{InfSup2}
(\nabla_w \cdot (Q_h \tilde{{\mathbf v}}),\rho)_T=(\mathcal{Q}_h(\nabla \cdot \tilde{{\mathbf v}}),\rho)_T=(\nabla \cdot \tilde{{\mathbf v}},\rho)_T, ~ \forall \rho \in W_h
\end{eqnarray}
for all $T \in \mathcal{T}_h$. Combining (\ref{InfSup1}) with (\ref{InfSup2}), we have
$$
\frac{b({\mathbf v},\rho)}{{|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}} \geqslant \frac{\sum_{T \in \mathcal{T}_h}(\nabla \cdot \tilde{{\mathbf v}},\rho)_T}{C_0 \|\tilde{{\mathbf v}} \|_1}
\geqslant \frac{\beta}{C_0} \| \rho \| =C \| \rho \| .
$$
This indicates that (\ref{InfSup}) holds true.
\end{proof}
According to Lemma \ref{Abounded} and Lemma \ref{InfSupCondition}, the weak Galerkin finite element scheme (\ref{algorithm 1})-(\ref{algorithm 2}) has a unique solution, and the solution is stable.
\section{Error Equation}In this section, we derive the error equations for ${\bf u}$ and $p$. First, we assume that $A$ is piecewise constant on the domain $\Omega$. We use $\{{\bf u}_h; p_h\}$ to denote the solution of (\ref{algorithm 1})-(\ref{algorithm 2}), and let $\{{\bf u};p\}$ be the exact solution of (\ref{model 1})-(\ref{interface 2}). Denote by $Q_h {\bf u} \in V_h$ the $L^2$ projection of ${\bf u}$, and let $Q_h^p p$ be the $L^2$ projection of $p$ onto $W_h$. The errors of ${\bf u}$ and $p$ are defined by
\begin{eqnarray}
e_h=Q_h {\bf u} - {\bf u}_h, ~ \varepsilon_h=Q_h^p p-p_h.
\end{eqnarray}
\begin{lemma}\label{EE}
For the $({\bf u}_i; p_i)\in [H^1(\Omega_i)]^2 \times H^1(\Omega_i)$ with $i=1,2$ satisfying the equations (\ref{model 1})-(\ref{interface 2}), we can derive the following equation
\begin{eqnarray}\label{EEE}
\begin{split}
&\sum_{T \in \mathcal{T}_h}(\nabla_w(Q_h {\bf u}), A\nabla_w {\mathbf v})_T-\sum_{T \in \mathcal{T}_h}(\nabla_w \cdot {\mathbf v}, Q_h^p p)_T \\
=&({\bf f},{\mathbf v}_0)+\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})\\
+&\sum_{e \in \mathcal{E}_h^I}
\langle \tilde{{\mathbf v}}_b,\bm{\phi}{\bf n}_1^{\mathrm{T}} \rangle_e - \sum_{e \in \mathcal{E}_h^I} \langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),
A_2 \nabla_w {\mathbf v} \rangle_e, \\
\end{split}
\end{eqnarray}
where
\begin{eqnarray}
\ell_1({\bf u},{\mathbf v})=\sum_{T \in \mathcal{T}_h^S} \langle {\mathbf v}_0 -{\mathbf v}_b, A \nabla {\bf u}-A \mathbb{Q}_h(\nabla {\bf u}) \rangle_{\partial T},
\end{eqnarray}
\begin{eqnarray}
\begin{split}
\tilde{\ell}_1({\bf u},{\mathbf v})=\sum_{T \in \mathcal{T}_{1h}^I}
\Big( \langle {\mathbf v}_0 -{\mathbf v}_b, A_1 \nabla {\bf u}-A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \backslash \Gamma}\\
+\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}} -\tilde{{\mathbf v}}_b, A_1 \nabla {\bf u}_1-A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma} \Big)\\
+\sum_{T \in \mathcal{T}_{2h}^I}
\Big( \langle {\mathbf v}_0 -{\mathbf v}_b, A_2 \nabla {\bf u}-A_2 \mathbb{Q}_h(\nabla {\bf u}_2) \rangle_{\partial T \backslash \Gamma}\\
+\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b, A_2 \nabla {\bf u}_2-A_2 \mathbb{Q}_h(\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma} \Big),
\end{split}
\end{eqnarray}
\begin{eqnarray}
\ell_2(p,{\mathbf v})=\sum_{T \in \mathcal{T}_h^S} \langle {\mathbf v}_0 -{\mathbf v}_b, p{\bf n}-(Q_h^p p) {\bf n} \rangle_{\partial T},
\end{eqnarray}
\begin{eqnarray}
\begin{split}
\tilde{\ell}_2(p,{\mathbf v})=\sum_{T \in \mathcal{T}_{1h}^I}
\Big( \langle {\mathbf v}_0 -{\mathbf v}_b,p_1{\bf n}_1-(Q_h^p p_1) {\bf n}_1 \rangle_{\partial T \backslash \Gamma}
+ \langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}} -\tilde{{\mathbf v}}_b,p_1 I-(Q_h^p p_1) I \rangle_{\partial T \cap \Gamma} \Big)\\
+\sum_{T \in \mathcal{T}_{2h}^I} \Big(
\langle {\mathbf v}_0 -{\mathbf v}_b,p_2{\bf n}_2-(Q_h^p p_2) {\bf n}_2 \rangle_{\partial T \backslash \Gamma}
+\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b,p_2I- (Q_h^p p_2) I \rangle_{\partial T \cap \Gamma} \Big).
\end{split}
\end{eqnarray}
\end{lemma}
\begin{proof}
According to the (\ref{weak gradient 1}), the definition of the weak gradient and the integration by parts, it follows that for $T \in \mathcal{T}_h^S$,
\begin{eqnarray}\label{EEP1}
\begin{split}
&(\nabla_w (Q_h {\bf u}),A \nabla_w {\mathbf v})_T\\
=&(\mathbb{Q}_h (\nabla {\bf u}),A \nabla_w {\mathbf v})_T\\
=&-({\mathbf v}_0,\nabla \cdot (A \mathbb{Q}_h (\nabla {\bf u})))_T+\langle {\mathbf v}_b,A \mathbb{Q}_h (\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
=&(A \mathbb{Q}_h (\nabla {\bf u}),\nabla {\mathbf v}_0)_T-\langle{\mathbf v}_0-{\mathbf v}_b,A \mathbb{Q}_h (\nabla {\bf u}) {\bf n}\rangle_{\partial T}\\
=&A(\nabla {\bf u},\nabla {\mathbf v}_0)_T-\langle{\mathbf v}_0-{\mathbf v}_b,A \mathbb{Q}_h(\nabla {\bf u}) {\bf n}\rangle_{\partial T}.\\
\end{split}
\end{eqnarray}
Similarly, by using the definition of the weak divergence and the integration by parts, we have
\begin{eqnarray}\label{EEP2}
\begin{split}
&(\nabla_w \cdot {\mathbf v}, Q_h^p p )_T\\
=&-({\mathbf v}_0,\nabla(Q_h^p p ))_T+\langle {\mathbf v}_b \cdot {\bf n},Q_h^p p \rangle_{\partial T}\\
=&(\nabla \cdot {\mathbf v}_0,Q_h^p p)_T-\langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p){\bf n}\rangle_{\partial T}\\
=&(\nabla \cdot {\mathbf v}_0,p)_T-\langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p){\bf n}\rangle_{\partial T}.
\end{split}
\end{eqnarray}
For $T \in \mathcal{T}_{1h}^I$, it follows from (\ref{weak gradient 2}), (\ref{weak gradient exchange 1}) that
\begin{eqnarray}\label{EEP3}
\begin{split}
&(\nabla_w (Q_h {\bf u}),A \nabla_w {\mathbf v})_T\\
=&(\mathbb{Q}_h (\nabla {\bf u}),A \nabla_w {\mathbf v})_T\\
=&-({\mathbf v}_0,\nabla \cdot (A \mathbb{Q}_h (\nabla {\bf u})))_T+\langle {\mathbf v}_b,A \mathbb{Q}_h (\nabla {\bf u}) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}+\langle \tilde{{\mathbf v}}_b,A \mathbb{Q}_h(\nabla {\bf u}) \rangle_{\partial T \cap \Gamma}\\
=&(A \mathbb{Q}_h(\nabla {\bf u}), \nabla {\mathbf v}_0)_T-\langle {\mathbf v}_0,A \mathbb{Q}_h(\nabla {\bf u}) {\bf n}_1 \rangle_{\partial T}\\
+&\langle {\mathbf v}_b,A \mathbb{Q}_h(\nabla {\bf u}) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}+\langle \tilde{{\mathbf v}}_b,A \mathbb{Q}_h(\nabla {\bf u})\rangle_{\partial T \cap \Gamma}\\
=&(A\nabla {\bf u},\nabla {\mathbf v}_0)_T-\langle {\mathbf v}_0 - {\mathbf v}_b,A \mathbb{Q}_h(\nabla {\bf u}) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}
-\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,A \mathbb{Q}_h(\nabla {\bf u}) \rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
From (\ref{weak divergence 2}), (\ref{weak divergence exchange 1}), we have
\begin{eqnarray}\label{EEP4}
\begin{split}
(\nabla_w \cdot {\mathbf v}, Q_h^p p_1 )_T=&-({\mathbf v}_0,\nabla(Q_h^p p_1 ))_T+\langle {\mathbf v}_b \cdot {\bf n}_1,Q_h^p p_1 \rangle_{\partial T \setminus \Gamma}\\
+&\langle \tilde{{\mathbf v}}_b,(Q_h^p p_1)I \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\mathbf v}_0,Q_h^p p_1)_T-\langle {\mathbf v}_0 -{\mathbf v}_b,
(Q_h^p p_1) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}\\
-&\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,(Q_h^p p_1)I \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\mathbf v}_0,p_1)_T-\langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p_1) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}\\
-&\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,(Q_h^p p_1)I \rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Similarly, by using Eq.(\ref{weak gradient 3}) and (\ref{weak gradient exchange 2}), we have for
$T \in \mathcal{T}_{2h}^I$,
\begin{eqnarray}\label{EEP5}
\begin{split}
&(\nabla_w (Q_h {\bf u}),A \nabla_w {\mathbf v})_T\\
=&(\mathbb{Q}_h (\nabla {\bf u}),A_2 \nabla_w {\mathbf v})_T+\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v} \rangle_{\partial T \cap \Gamma}\\
=&-({\mathbf v}_0,\nabla \cdot (A_2 \mathbb{Q}_h (\nabla {\bf u})))_T+\langle {\mathbf v}_b,A_2 \mathbb{Q}_h (\nabla {\bf u}) {\bf n}_2 \rangle_{\partial T \setminus \Gamma}\\
-&\langle \tilde{{\mathbf v}}_b,A_2 \mathbb{Q}_h(\nabla {\bf u}) \rangle_{\partial T \cap \Gamma}+\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v} \rangle_{\partial T \cap \Gamma}\\
=&(A_2 \nabla {\bf u},\nabla {\mathbf v}_0)_T-\langle {\mathbf v}_0-{\mathbf v}_b,A_2 \mathbb{Q}_h(\nabla {\bf u}) {\bf n}_2 \rangle_{\partial T \setminus \Gamma}\\
-&\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b, A_2 \mathbb{Q}_h (\nabla {\bf u})\rangle_{\partial T \cap \Gamma}+\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v} \rangle_{\partial T \cap \Gamma},
\end{split}
\end{eqnarray}
\begin{eqnarray}\label{EEP6}
\begin{split}
(\nabla_w \cdot {\mathbf v}, Q_h^p p_2 )_T=&-({\mathbf v}_0,\nabla(Q_h^p p_2 ))_T+\langle {\mathbf v}_b \cdot {\bf n}_2,Q_h^p p_2 \rangle_{\partial T \setminus \Gamma}\\
-&\langle \tilde{{\mathbf v}}_b,(Q_h^p p_2)I \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\mathbf v}_0,Q_h^p p_2)_T-\langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p_2) {\bf n}_2 \rangle_{\partial T \setminus \Gamma}\\
-&\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+ \tilde{{\mathbf v}}_b,(Q_h^p p_2)I \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\mathbf v}_0,p_2)_T-\langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p_2) {\bf n}_2 \rangle_{\partial T \setminus \Gamma}\\
-&\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b,(Q_h^p p_2)I \rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Combining (\ref{EEP1}) and (\ref{EEP3}) with (\ref{EEP5}), we have
\begin{eqnarray}\label{EEP7}
\begin{split}
&\sum_{T \in \mathcal{T}_h}(\nabla_w (Q_h {\bf u}),A \nabla_w {\mathbf v})_T\\
=&\sum_{T \in \mathcal{T}_h}(A \nabla {\bf u}, \nabla {\mathbf v}_0)_T -\sum_{T \in \mathcal{T}_{h}^S} \langle {\mathbf v}_0 -{\mathbf v}_b, A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
-&\sum_{T \in \mathcal{T}_{1h}^I} \Big( \langle {\mathbf v}_0 -{\mathbf v}_b,A_1 \mathbb{Q}_h(\nabla {\bf u}) {\bf n}_1 \rangle_{\partial T \setminus \Gamma}+\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,A_1 \mathbb{Q}_h(\nabla {\bf u})\rangle_{\partial T \cap \Gamma} \Big)\\
-&\sum_{T \in \mathcal{T}_{2h}^I} \Big( \langle {\mathbf v}_0 -{\mathbf v}_b,A_2\mathbb{Q}_h(\nabla {\bf u}) {\bf n}_2 \rangle_{\partial T \setminus \Gamma}+\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b,A_2\mathbb{Q}_h(\nabla {\bf u})\rangle_{\partial T \cap \Gamma} \\
+&\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v} \rangle_{\partial T \cap \Gamma}\Big).
\end{split}
\end{eqnarray}
It follows from Eqs.(\ref{EEP2}),(\ref{EEP4}) and (\ref{EEP6}) that
\begin{eqnarray}\label{EEP8}
\begin{split}
&(\nabla_w \cdot {\mathbf v},Q_h^p p)\\
=&\sum_{T \in \mathcal{T}_h}(\nabla \cdot {\mathbf v}_0, p)_T - \sum_{T \in \mathcal{T}_{h}^S} \langle {\mathbf v}_0-{\mathbf v}_b, (Q_h^p p) {\bf n} \rangle_{\partial T}\\
-&\sum_{T \in \mathcal{T}_{1h}^I}\Big( \langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p_1) {\bf n}_1 \rangle_{\partial T \setminus \Gamma} +\langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b, (Q_h^p p_1) I\rangle_{\partial T \cap \Gamma} \Big)\\
-&\sum_{T \in \mathcal{T}_{2h}^I}\Big( \langle {\mathbf v}_0 -{\mathbf v}_b, (Q_h^p p_2) {\bf n}_2 \rangle_{\partial T \setminus \Gamma} +\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b, (Q_h^p p_2) I\rangle_{\partial T \cap \Gamma} \Big).\\
\end{split}
\end{eqnarray}
Next, we test (\ref{model 1}) with the component ${\mathbf v}_0$ of ${\mathbf v}=\{ {\mathbf v}_0, {\mathbf v}_b\} \in V_h^0$ to obtain
\begin{eqnarray}\label{EEP9}
-(\nabla \cdot (A \nabla {\bf u}),{\mathbf v}_0)+(\nabla p, {\mathbf v}_0)=({\bf f}, {\mathbf v}_0).
\end{eqnarray}
According to the integration by parts and the interface condition (\ref{interface 2}), it follows that
\begin{eqnarray}\label{EEP10}
\begin{split}
-(\nabla \cdot (A \nabla {\bf u}),{\mathbf v}_0)=&\sum_{T \in \mathcal{T}_h} (-\nabla \cdot (A \nabla {\bf u}),{\mathbf v}_0)_T\\
=&\sum_{T \in \mathcal{T}_h}(A\nabla {\bf u}, \nabla {\mathbf v}_0)_T-\sum_{T \in \mathcal{T}_h} \langle A\nabla {\bf u} \cdot {\bf n} ,{\mathbf v}_0 \rangle_{\partial T}\\
=&\sum_{T \in \mathcal{T}_h}(A\nabla {\bf u}, \nabla {\mathbf v}_0)_T-\sum_{e \in \mathcal{E}_h^S} \langle {\mathbf v}_0 - {\mathbf v}_b ,A \nabla {\bf u} \cdot {\bf n} \rangle_e\\
-&\sum_{e \in \mathcal{E}_h^I} \Big( \langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,A_1 \nabla {\bf u}_1 \rangle_e
+\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b,A_2 \nabla {\bf u}_2 \rangle_e \Big)\\
-&\sum_{e \in \Gamma} \langle \tilde{{\mathbf v}}_b,\bm{\phi}{\bf n}_1^{\mathrm{T}}+p_1I-p_2I \rangle_e,\\
\end{split}
\end{eqnarray}
where we have used the fact that $\sum_{e \in \mathcal{E}_h^S} \langle {\mathbf v}_b,A\nabla {\bf u} \cdot {\bf n}\rangle_e=0$. Then, by the fact that $\sum_{e \in \mathcal{E}_h^S} \langle {\mathbf v}_b, p {\bf n} \rangle_e =0$, it follows that
\begin{eqnarray}\label{EEP11}
\begin{split}
(\nabla p, {\mathbf v}_0)=&\sum_{T \in \mathcal{T}_h}(\nabla p, {\mathbf v}_0)_T\\
=&\sum_{T \in \mathcal{T}_h}-(\nabla \cdot {\mathbf v}_0,p)_T+\sum_{T \in \mathcal{T}_h} \langle p, {\mathbf v}_0 \cdot {\bf n} \rangle_{\partial T}\\
=&\sum_{T \in \mathcal{T}_h}-(\nabla \cdot {\mathbf v}_0,p)_T+\sum_{e \in \mathcal{E}_h^S} \langle {\mathbf v}_0-{\mathbf v}_b, p{\bf n} \rangle_e\\
+&\sum_{e \in \mathcal{E}_h^I} \Big( \langle {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b,p_1 I \rangle_e+\langle {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b,p_2 I \rangle_e \Big)\\
+&\sum_{e \in \mathcal{E}_h^I}\langle \tilde{{\mathbf v}}_b,p_1 I- p_2 I \rangle_e.
\end{split}
\end{eqnarray}
By substituting (\ref{EEP10}) into (\ref{EEP7}) and (\ref{EEP11}) into (\ref{EEP8}), and then combining (\ref{EEP7}) with (\ref{EEP8}), the proof of Eq.(\ref{EEE}) is completed.
\end{proof}
\begin{lemma}\label{error equation}
The errors ${\bf e}_h$ and $\varepsilon_h$ of the weak Galerkin finite element solution satisfy the following equations:
\begin{eqnarray}\label{error equation 1}
a({\bf e}_h,{\mathbf v})-b({\mathbf v},\varepsilon_h)&=&\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})+\tilde{s}(Q_h {\bf u} ,{\mathbf v}),\\
b({\bf e}_h,q)&=&0, \label{error equation 2}
\end{eqnarray}
for all ${\mathbf v} \in V_h^0$ and $q \in W_h$.
\end{lemma}
\begin{proof}
Since the exact solution $({\bf u}; p)$ satisfies Eqs.(\ref{model 1})-(\ref{interface 2}), according to Lemma \ref{EE} we have
\begin{align*}
&\sum_{T \in \mathcal{T}_h}(\nabla_w (Q_h {\bf u}),A \nabla_w {\mathbf v})_T-\sum_{T \in \mathcal{T}_h}(\nabla_w \cdot {\mathbf v}, Q_h^p p)_T\\
=&({\bf f},{\mathbf v}_0)+\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})\\
+&\sum_{e \in \mathcal{E}_h^I}\langle \tilde{{\mathbf v}}_b,\bm{\phi}{\bf n}_1^{\mathrm{T}} \rangle_e -\sum_{e \in \mathcal{E}_h^I}\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}), A_2 \nabla_w {\mathbf v} \rangle_e.
\end{align*}
Adding $s(Q_h {\bf u},{\mathbf v})$ to both sides of the above equation yields
\begin{eqnarray}\label{eep1}
\begin{split}
&a(Q_h{\bf u},{\mathbf v})-b({\mathbf v},Q_h^p p)\\
=&({\bf f},{\mathbf v}_0)+\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})+s(Q_h {\bf u},{\mathbf v})\\
+&\sum_{e \in \mathcal{E}_h^I}\langle \tilde{{\mathbf v}}_b,\bm{\phi}{\bf n}_1^{\mathrm{T}} \rangle_e -\sum_{e \in \mathcal{E}_h^I}\langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}), A_2 \nabla_w {\mathbf v} \rangle_e.
\end{split}
\end{eqnarray}
For $s(Q_h {\bf u},{\mathbf v})$, we need to pay special attention to the elements $T \in \mathcal{T}_{2h}^I$. According to the definition of $Q_h$, we have
\begin{eqnarray}\label{eep2}
\begin{split}
s(Q_h {\bf u},{\mathbf v})|_T=&h_T^{-1}\langle Q_b(Q_0 {\bf u})-Q_b {\bf u},Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \setminus \Gamma}\\
+&h_T^{-1}\langle \hat{Q}_b((Q_0 {\bf u}_2){\bf n}_2^{\mathrm{T}})+\hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),
\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}\\
=&h_T^{-1}\langle Q_b(Q_0 {\bf u})-Q_b {\bf u},Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T \setminus \Gamma}\\
+&h_T^{-1}\langle \hat{Q}_b((Q_0 {\bf u}_2){\bf n}_2^{\mathrm{T}})-\hat{Q}_b({\bf u}_2 {\bf n}_2^{\mathrm{T}}),
\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}\\
+&h_T^{-1}\langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b
\rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Therefore, we denote
\begin{eqnarray}\label{eep3}
\begin{split}
\tilde{s}(Q_h {\bf u},{\mathbf v})=&\sum_{T \in \mathcal{T}_h^S}h_T^{-1} \langle Q_b(Q_0 {\bf u})-Q_b {\bf u},
Q_b {\mathbf v}_0 -{\mathbf v}_b \rangle_{\partial T}\\
+&\sum_{ T \in \mathcal{T}_{1h}^I} h_T^{-1}\Big( \langle Q_b(Q_0 {\bf u})-Q_b {\bf u}, Q_b {\mathbf v}_0 -{\mathbf v}_b\rangle_{\partial T \setminus \Gamma} \\
+ &\langle \hat{Q}_b((Q_0 {\bf u}_1){\bf n}_1^{\mathrm{T}})-\hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),
\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma} \Big)\\
+&\sum_{T \in \mathcal{T}_{2h}^I} h_T^{-1}\Big( \langle Q_b(Q_0 {\bf u})-Q_b {\bf u}, Q_b {\mathbf v}_0 -{\mathbf v}_b\rangle_{\partial T \setminus \Gamma} \\
+ &\langle \hat{Q}_b((Q_0 {\bf u}_2){\bf n}_2^{\mathrm{T}})-\hat{Q}_b({\bf u}_2 {\bf n}_2^{\mathrm{T}}),
\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma} \Big).\\
\end{split}
\end{eqnarray}
Then, substituting (\ref{eep2}) and (\ref{eep3}) into (\ref{eep1}), we obtain
\begin{eqnarray}\label{eep4}
\begin{split}
a(Q_h {\bf u}, {\mathbf v})-b({\mathbf v},Q_h^p p)=&({\bf f},{\mathbf v}_0)+\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})+\tilde{s}(Q_h {\bf u},{\mathbf v})\\
+&\sum_{e \in \mathcal{E}_h^I} \langle \tilde{{\mathbf v}}_b, \bm{\phi}{\bf n}_1^{\mathrm{T}} \rangle_e-\sum_{e \in \mathcal{E}_h^I} \langle \hat{Q}_b(\bm{\psi} {\bf n}_1^{\mathrm{T}}),A_2 \nabla_w {\mathbf v} \rangle_e\\
+&\sum_{T \in \mathcal{T}_{2h}^I} h_T^{-1}
\langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}})+\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Subtracting Eq.(\ref{eep4}) from Eq.(\ref{algorithm 1}) leads to
\begin{eqnarray}\label{eep5}
\begin{split}
&a({\bf e}_h, {\mathbf v})-b({\mathbf v},\varepsilon_h)\\
=&\ell_1({\bf u},{\mathbf v})+\tilde{\ell}_1({\bf u},{\mathbf v})-\ell_2(p,{\mathbf v})-\tilde{\ell}_2(p,{\mathbf v})+\tilde{s}(Q_h {\bf u}, {\mathbf v}).
\end{split}
\end{eqnarray}
The remaining step in the proof is to verify Eq.(\ref{error equation 2}). By the definition of the weak divergence, for instance Eq.(\ref{weak divergence 1}), we have the following equations: for $T \in \mathcal{T}_h^S$,
\begin{eqnarray}\label{eep6}
\begin{split}
(\nabla_w \cdot (Q_h {\bf u}), q)_T=&-(Q_0 {\bf u}, \nabla q)_T + \langle Q_b {\bf u} \cdot {\bf n}, q \rangle_{\partial T}\\
=&-({\bf u},\nabla q)_T + \langle {\bf u}, q{\bf n} \rangle_{\partial T}\\
=&(\nabla \cdot {\bf u} , q)_T,
\end{split}
\end{eqnarray}
and for $T \in \mathcal{T}_{1h}^I$,
\begin{eqnarray}\label{eep7}
\begin{split}
&(\nabla_w \cdot (Q_h {\bf u}), q)_T\\
=&-(Q_0 {\bf u}_1, \nabla q)_T+\langle Q_b {\bf u}_1 ,q{\bf n}_1\rangle_{\partial T \setminus \Gamma}+\langle \hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),
qI \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_1, \nabla q)_T + \langle {\bf u}_1, q{\bf n}_1 \rangle_{\partial T \setminus \Gamma}+\langle {\bf u}_1 {\bf n}_1^{\mathrm{T}},qI \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_1, \nabla q)_T+\langle {\bf u}_1, q{\bf n}_1 \rangle_{\partial T}\\
=&(\nabla \cdot {\bf u}_1, q)_T,
\end{split}
\end{eqnarray}
and for $T \in \mathcal{T}_{2h}^I$, using the interface condition (\ref{interface 1}) leads to
\begin{eqnarray}\label{eep8}
\begin{split}
&(\nabla_w \cdot (Q_h {\bf u}), q)_T\\
=&-(Q_0 {\bf u}_2, \nabla q)_T+\langle Q_b {\bf u}_2 ,q{\bf n}_2\rangle_{\partial T \setminus \Gamma}-\langle \hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),
qI \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_2, \nabla q)_T + \langle {\bf u}_2, q{\bf n}_2 \rangle_{\partial T \setminus \Gamma}+\langle {\bf u}_2 {\bf n}_2^{\mathrm{T}},qI \rangle_{\partial T \cap \Gamma}-\langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),qI \rangle_{\partial T \cap \Gamma}\\
=&-({\bf u}_2, \nabla q)_T+\langle {\bf u}_2, q{\bf n}_2 \rangle_{\partial T}-\langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),qI \rangle_{\partial T \cap \Gamma}\\
=&(\nabla \cdot {\bf u}_2, q)_T-\langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}),qI \rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Combining Eqs.(\ref{eep6}), (\ref{eep7}) and (\ref{eep8}), we see that
\begin{eqnarray}\label{eep9}
\begin{split}
(\nabla_w \cdot (Q_h {\bf u}), q)=&\sum_{T \in \mathcal{T}_h}(\nabla_w \cdot (Q_h {\bf u}), q)_T\\
=&\sum_{T \in \mathcal{T}_h} (\nabla \cdot {\bf u}, q)_T-\sum_{T \in \mathcal{T}_{2h}^I} \langle \hat{Q}_b(\bm{\psi}{\bf n}_1^{\mathrm{T}}), qI\rangle_{\partial T \cap \Gamma}.
\end{split}
\end{eqnarray}
Then, subtracting Eq.(\ref{eep9}) from Eq.(\ref{algorithm 2}), we derive that
\begin{eqnarray}
b({\bf e}_h,q)=0.
\end{eqnarray}
Hence the error equations are proved.
\end{proof}
\section{Error Estimates in an Energy Norm}
In this section, we derive optimal-order estimates for the velocity error ${\bf e}_h$ in an energy norm and for the pressure error $\varepsilon_h$ in the $L^2$ norm.
\begin{lemma}
For any ${\mathbf v}=\{ {\mathbf v}_0, {\mathbf v}_b\} \in V_h$, we obtain
\begin{eqnarray}\label{estimate 1}
\sum_{T \in \mathcal{T}_h} \| \nabla {\mathbf v}_0 \|^2_T \leqslant C {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}^2.
\end{eqnarray}
\end{lemma}
\begin{proof}
For $T \in \mathcal{T}_{h}^S$, the proof can be found in \cite{WGStokes1}. For $T \in \mathcal{T}_{1h}^I$,
\begin{align*}
(\nabla {\mathbf v}_0 ,\nabla {\mathbf v}_0)_T &= -({\mathbf v}_0, \nabla \cdot (\nabla {\mathbf v}_0))_T +
\langle {\mathbf v}_0, \nabla {\mathbf v}_0 \cdot {\bf n} \rangle_{\partial T}\\
&=-({\mathbf v}_0, \nabla \cdot (\nabla {\mathbf v}_0))_T+
\langle {\mathbf v}_b, \nabla {\mathbf v}_0 \cdot {\bf n} \rangle_{\partial T \setminus \Gamma}
+\langle \tilde{{\mathbf v}}_b, \nabla {\mathbf v}_0 \rangle_{\partial T \cap \Gamma}\\
&+\langle Q_b {\mathbf v}_0 -{\mathbf v}_b, \nabla {\mathbf v}_0 \cdot {\bf n} \rangle_{\partial T \setminus \Gamma}
+\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})- \tilde{{\mathbf v}}_b ,\nabla {\mathbf v}_0 \rangle_{\partial T \cap \Gamma}\\
&=(\nabla_w {\mathbf v}, \nabla {\mathbf v}_0)_T+\langle Q_b {\mathbf v}_0 -{\mathbf v}_b, \nabla {\mathbf v}_0 \cdot {\bf n} \rangle_{\partial T \setminus \Gamma}+\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})- \tilde{{\mathbf v}}_b ,\nabla {\mathbf v}_0 \rangle_{\partial T \cap \Gamma}.
\end{align*}
Then it follows from the Cauchy-Schwarz inequality and the trace inequality that
\begin{align*}
\| \nabla {\mathbf v}_0\|_T^2 &\leqslant \|\nabla_w {\mathbf v} \|_T \| \nabla {\mathbf v}_0 \|_T +\| Q_b {\mathbf v}_0 -{\mathbf v}_b\|_{\partial T \setminus \Gamma} \| \nabla {\mathbf v}_0 \|_{\partial T \setminus \Gamma}
+\| \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})- \tilde{{\mathbf v}}_b\|_{\partial T \cap \Gamma}
\|\nabla {\mathbf v}_0 \|_{\partial T \cap \Gamma}\\
&\leqslant \|\nabla_w {\mathbf v} \|_T \| \nabla {\mathbf v}_0 \|_T+ h_T^{-\frac{1}{2}}\| Q_b {\mathbf v}_0 -{\mathbf v}_b\|_{\partial T \setminus \Gamma} \| \nabla {\mathbf v}_0 \|_T
+h_T^{-\frac{1}{2}}\| \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})- \tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma} \| \nabla {\mathbf v}_0 \|_T.
\end{align*}
Therefore,
\begin{align*}
\| \nabla {\mathbf v}_0\|_T \leqslant \|\nabla_w {\mathbf v} \|_T+h_T^{-\frac{1}{2}}\| Q_b {\mathbf v}_0 -{\mathbf v}_b\|_{\partial T \setminus \Gamma}+h_T^{-\frac{1}{2}}\| \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})- \tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma},
\end{align*}
and it follows that $\| \nabla {\mathbf v}_0\|_T^2 \leqslant C {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}^2$. For $T \in \mathcal{T}_{2h}^I$, the same argument yields $\| \nabla {\mathbf v}_0\|_T^2 \leqslant C {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}^2$. Summing over all elements completes the proof.
\end{proof}
\begin{lemma}\label{normal vector}
Suppose $T \in \mathcal{T}_h^I$ is a curved element that does not contain the origin, and let the interface be given in polar form by $r=r(\theta)$, $a \leqslant \theta \leqslant b$. Then the normal vector ${\bf n}$ defined on the interface extends to the whole element $T$, and ${\bf n}$ is twice continuously differentiable and bounded in $T$.
\end{lemma}
\begin{proof}
According to the polar equation of the interface, the Cartesian coordinates of points on the interface can be expressed as
\begin{eqnarray}\label{nvp1}
x(\theta)=r(\theta) \cos \theta,~ y(\theta)=r(\theta) \sin \theta.
\end{eqnarray}
Thus a (not necessarily unit) normal vector ${\bf n}$ on the interface can be written as
\begin{eqnarray}\label{nvp2}
{\bf n}=\left(\begin{array}{ccc}
-r^{'}(\theta) \sin \theta-r(\theta) \cos \theta\\
r^{'}(\theta) \cos \theta-r(\theta) \sin \theta
\end{array}\right).
\end{eqnarray}
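As a simple illustration of formula (\ref{nvp2}), consider a circular interface $r(\theta)=R$ with constant $R>0$ (an illustrative special case, not required by the proof). Then $r^{'}(\theta)=0$ and the normal vector reduces to

```latex
% Illustrative special case: circular interface r(theta) = R, so r'(theta) = 0.
{\bf n}=\left(\begin{array}{c}
-R \cos \theta\\
-R \sin \theta
\end{array}\right)
=-R\left(\begin{array}{c}
\cos \theta\\
\sin \theta
\end{array}\right),
```

which is a (non-unit) normal of the circle pointing toward the origin; normalization plays no role in the extension argument.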
Since the interface does not pass through the origin, the following inverse functions can be determined from (\ref{nvp1}):
\begin{eqnarray}\label{nvp3}
\left \{\begin{array}{l}
r=\sqrt{x^2+y^2} \\
\theta={\left \{\begin{array}{ll}
\arctan \frac{y}{x} & x > 0 \\
\frac{\pi}{2} & x = 0 \\
\pi+\arctan \frac{y}{x} & x < 0
\end{array}\right.}
\end{array}\right..
\end{eqnarray}
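As a sanity check (not part of the proof), the inverse map (\ref{nvp3}) can be verified numerically. The following Python sketch, with illustrative names, confirms that it reconstructs sample Cartesian points; note that the branch $x=0 \mapsto \theta=\pi/2$ implicitly assumes $y>0$:

```python
import math

def inverse_map(x, y):
    """Inverse of (r, theta) -> (r cos(theta), r sin(theta)), following (nvp3).
    The x == 0 branch assumes y > 0, as in the text."""
    r = math.hypot(x, y)
    if x > 0:
        theta = math.atan(y / x)
    elif x == 0:
        theta = math.pi / 2
    else:
        theta = math.pi + math.atan(y / x)
    return r, theta

# The inverse map should reconstruct the original Cartesian point.
for (x, y) in [(1.0, 2.0), (1.0, -0.5), (0.0, 3.0), (-2.0, 1.0), (-1.0, -1.0)]:
    r, theta = inverse_map(x, y)
    assert abs(r * math.cos(theta) - x) < 1e-12
    assert abs(r * math.sin(theta) - y) < 1e-12
print("all sample points reconstructed")
```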
By plugging (\ref{nvp3}) back into (\ref{nvp2}), we can write the normal vector as an expression in $x$ and $y$. Since the functions $r(x,y)$ and $\theta(x,y)$ are well defined inside the element, the normal vector ${\bf n}$ can be extended from the boundary to the whole element.
The next step in the proof is to verify that ${\bf n}$ is twice continuously differentiable. Because $r(x, y)$ and $\theta(x, y)$ are smooth, ${\bf n}$ is smooth as well. When $(x, y) \in T$, we have $\theta(x, y) \in [a,b]$, so $r^{'}(\theta(x,y))$ and ${\bf n}$ are bounded in $T$. For $\nabla {\bf n}$, a direct computation gives
\begin{align*}
\nabla {\bf n} =\left(\begin{array}{cc}
\big(-r^{''}(\theta) \sin \theta -2r^{'}(\theta) \cos \theta +r(\theta) \sin \theta \big)\frac{\partial \theta}{\partial x} & \big(-r^{''}(\theta) \sin \theta -2r^{'}(\theta) \cos \theta +r(\theta) \sin \theta \big)\frac{\partial \theta}{\partial y} \\
\big(r^{''}(\theta) \cos \theta -2r^{'}(\theta) \sin \theta -r(\theta) \cos \theta \big)\frac{\partial \theta}{\partial x} & \big(r^{''}(\theta) \cos \theta -2r^{'}(\theta) \sin \theta -r(\theta) \cos \theta \big)\frac{\partial \theta}{\partial y}
\end{array}\right).
\end{align*}
Since $r(\theta)$ and $\theta (x, y)$ are smooth, $\nabla {\bf n}$ is continuous in the element $T$. Moreover, we have
\begin{align*}
|\frac{\partial \theta}{\partial x}|=|\frac{-y}{x^2+y^2}|\leqslant M_1,\\
|\frac{\partial \theta}{\partial y}|=|\frac{x}{x^2+y^2}|\leqslant M_2.
\end{align*}
Therefore, $\nabla {\bf n}$ is bounded in the element $T$. The boundedness of the second derivatives of ${\bf n}$ can be proved by an argument analogous to that used above.
\end{proof}
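The algebra behind Lemma \ref{normal vector} can also be checked numerically. The short Python sketch below (illustrative only; the radius function $r(\theta)$ used here is an arbitrary smooth example, not one from the text) verifies that the vector (\ref{nvp2}) is orthogonal to the tangent $(x^{'}(\theta), y^{'}(\theta))$ of the parametrized interface:

```python
import math

def r(t):        # an arbitrary smooth, positive radius function (illustrative)
    return 2.0 + 0.5 * math.sin(3.0 * t)

def dr(t):       # its derivative r'(theta)
    return 1.5 * math.cos(3.0 * t)

def tangent(t):  # tangent (x'(theta), y'(theta)) of x = r cos(t), y = r sin(t)
    return (dr(t) * math.cos(t) - r(t) * math.sin(t),
            dr(t) * math.sin(t) + r(t) * math.cos(t))

def normal(t):   # the vector n from (nvp2)
    return (-dr(t) * math.sin(t) - r(t) * math.cos(t),
            dr(t) * math.cos(t) - r(t) * math.sin(t))

# The dot product tangent . n vanishes (up to rounding) at every sampled angle.
for k in range(100):
    t = 0.02 + 0.06 * k
    tx, ty = tangent(t)
    nx, ny = normal(t)
    assert abs(tx * nx + ty * ny) < 1e-10
print("n is orthogonal to the tangent at all sampled angles")
```

The cancellation is purely algebraic, so the same check succeeds for any differentiable choice of $r(\theta)$.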
\begin{lemma}\label{projector inequality}
Assume that $e$ is the curved edge of the element $T$. Then there exists a positive constant $C$ such that
\begin{eqnarray}
\| {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}) \|_e^2 \leqslant C h_T^{2k-1} \| \nabla {\mathbf v}_{10} \|^2_{k,T},~\forall \, T \in \mathcal{T}_{1h}^I,\label{projector inequality 1}\\
\| {\mathbf v}_0 {\bf n}_2^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}}) \|_e^2 \leqslant C h_T^{2k-1} \| \nabla {\mathbf v}_{20} \|_{k,T}^2,~\forall \, T \in \mathcal{T}_{2h}^I,\label{projector inequality 2}
\end{eqnarray}
where $k=1,2$.
\end{lemma}
\begin{proof}
First, let $\bar{Q}_b$ and $\bar{Q}_0$ be the $L^2$ projections onto $P_k(e)$ and $P_k(T)$, respectively. We introduce some notation used in the proof:
\begin{align*}
{\mathbf v}_0 =\left(\begin{array}{ccc}
v_{10}(x,y)\\
v_{20}(x,y)
\end{array}\right),
~{\bf n}_1=\left(\begin{array}{ccc}
n_{11}\\
n_{12}
\end{array}\right),
~\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})=\left(\begin{array}{ccc}
\bar{Q}_b(v_{10}n_{11}) & \bar{Q}_b(v_{10}n_{12})\\
\bar{Q}_b(v_{20}n_{11}) & \bar{Q}_b(v_{20}n_{12})
\end{array}\right).
\end{align*}
With the above notation, we have
\begin{align*}
\| {\mathbf v}_0 {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}) \|_e^2&=\| v_{10}n_{11}-\bar{Q}_b(v_{10}n_{11})\|_e^2+\| v_{10}n_{12}-\bar{Q}_b(v_{10}n_{12})\|_e^2\\
&+\| v_{20}n_{11}-\bar{Q}_b(v_{20}n_{11})\|_e^2+
\| v_{20}n_{12}-\bar{Q}_b(v_{20}n_{12})\|_e^2.
\end{align*}
By Lemma \ref{normal vector}, the trace inequality, and the approximation property of $\bar{Q}_0$, we have
\begin{align*}
&\| v_{10}n_{11}-\bar{Q}_b(v_{10}n_{11})\|_e^2
\leqslant \| v_{10}n_{11}-\bar{Q}_0(v_{10}n_{11})\|_e^2\\
& \leqslant \| v_{10}n_{11}-\bar{Q}_0(v_{10}n_{11}) \|_{\partial T}^2\\
& \leqslant C_1\Big(h_T^{-1} \| v_{10}n_{11}-\bar{Q}_0(v_{10}n_{11})\|_T^2
+h_T \|\nabla (v_{10}n_{11}-\bar{Q}_0(v_{10}n_{11})) \|_{T}^2 \Big)\\
& \leqslant C_1 h_T^{2k-1} | v_{10}n_{11}|_{k,T}^2.
\end{align*}
For $k=1$, we get
\begin{align*}
&\| v_{10}n_{11}-\bar{Q}_b(v_{10}n_{11})\|_e^2 \\
&\leqslant 2 C_1 h_T \Big( \| n_{11} \nabla v_{10}\|_T^2 +\| v_{10} \nabla n_{11} \|_T^2 \Big)\\
& \leqslant 2 C_1 h_T \Big( \| n_{11} \|_{\infty,T}^2 \| \nabla v_{10} \|_T^2
+ \| \nabla n_{11} \|_{\infty,T}^2 \| v_{10} \|_T^2 \Big)\\
& \leqslant C h_T \| \nabla v_{10}\|_T^2.
\end{align*}
Similarly, for $k=2$, we obtain
\begin{align*}
&\| v_{10}n_{11}-\bar{Q}_b(v_{10}n_{11})\|_e^2 \\
&\leqslant C h_T^3 | v_{10}n_{11}|_{2,T}^2\\
&\leqslant C h_T^3 \big(\| v_{10}\|_T^2 \| \vartriangle n_{11}\|_{\infty,T}^2+\| \vartriangle v_{10}\|_T^2 \|n_{11}\|_{\infty,T}^2+\| \nabla v_{10}\|_T^2\| \nabla n_{11}\|_{\infty,T}^2\big)\\
&\leqslant C_1 h_T^3 \big(\| v_{10}\|_T^2+\| \vartriangle v_{10}\|_T^2+\| \nabla v_{10}\|_T^2\big)\\
&\leqslant C h_T^3 \| v_{10}\|^2_{2,T}.
\end{align*}
Similarly, the proof of (\ref{projector inequality 2}) can be completed.
\end{proof}
\begin{lemma}\label{H1 estimates}
Suppose that the partition $\mathcal{T}_h$ of the domain $\Omega$ is shape regular. Then for ${\bf u}_i \in [H^{k+1}(\Omega_i)]^2$, $p_i \in H^r(\Omega_i)$ with $i=1,2$, we have
\begin{eqnarray}
|\ell_1({\bf u},{\mathbf v})| \leqslant C h^k(\|{\bf u}_1\|_{k+1, \Omega_1}+\| {\bf u}_2 \|_{k+1, \Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|},\label{H1 estimates 1}\\
|\tilde{\ell}_1({\bf u},{\mathbf v})| \leqslant C h^k(\|{\bf u}_1\|_{k+1, \Omega_1}+\|{\bf u}_2\|_{k+1, \Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|},\label{H1 estimates 2}\\
|\ell_2(p,{\mathbf v})| \leqslant C h^r(\| p_1 \|_{r, \Omega_1}+\| p_2 \|_{r, \Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|},\label{H1 estimates 3}\\
|\tilde{\ell}_2(p,{\mathbf v})| \leqslant C h^r(\| p_1 \|_{r, \Omega_1}+\| p_2 \|_{r, \Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|},\label{H1 estimates 4}\\
|\tilde{s}(Q_h {\bf u}, {\mathbf v})| \leqslant C h^k(\|{\bf u}_1\|_{k+1, \Omega_1}+\| {\bf u}_2 \|_{k+1, \Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|},\label{H1 estimates 5}
\end{eqnarray}
where ${\mathbf v}=\{{\mathbf v}_0, {\mathbf v}_b\} \in V_h$.
\end{lemma}
\begin{proof}
As to (\ref{H1 estimates 1}), by the Cauchy-Schwarz inequality, we have
\begin{eqnarray}\label{HEP1}
\begin{split}
|\ell_1({\bf u},{\mathbf v})|=&\Big|\sum_{T \in \mathcal{T}_h^S} \langle {\mathbf v}_0 -{\mathbf v}_b, A (\nabla {\bf u}){\bf n} -A \mathbb{Q}_h(\nabla {\bf u}){\bf n} \rangle_{\partial T}\Big|\\
&\leqslant C \sum_{T \in \mathcal{T}_h^S} \| {\mathbf v}_0 -{\mathbf v}_b \|_{\partial T} \| (\nabla {\bf u}){\bf n} -\mathbb{Q}_h(\nabla {\bf u}){\bf n} \|_{\partial T}\\
&\leqslant C \Big( \sum_{T \in \mathcal{T}_h^S} \| {\mathbf v}_0 -{\mathbf v}_b \|^2_{\partial T} \Big)^{\frac{1}{2}} \Big( \sum_{T \in \mathcal{T}_h^S} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|^2_{\partial T} \Big)^{\frac{1}{2}}.
\end{split}
\end{eqnarray}
It follows from the trace inequality and the estimate (\ref{projectorestimate2}) that
\begin{eqnarray}\label{HEP2}
\begin{split}
& \sum_{T \in \mathcal{T}_h^S} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|^2_{\partial T}\\
& \leqslant \sum_{T \in \mathcal{T}_h^S} C \Big( h_T^{-1} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|^2_T
+ h_T \| \nabla(\nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}))\|^2_T \Big)\\
& \leqslant C h^{2k-1}\| {\bf u} \|^2_{k+1}.
\end{split}
\end{eqnarray}
Using the triangle inequality, the trace inequality, the Cauchy-Schwarz inequality, the approximation properties of the $L^2$ projection operators, and the estimate (\ref{estimate 1}), we obtain
\begin{eqnarray}\label{HEP3}
\begin{split}
& \sum_{T \in \mathcal{T}_h^S} \| {\mathbf v}_0 -{\mathbf v}_b \|^2_{\partial T}\\
&\leqslant \sum_{T \in \mathcal{T}_h^S} 2 \Big( \|{\mathbf v}_0 - Q_b{\mathbf v}_0 \|^2_{\partial T}+ \|Q_b{\mathbf v}_0-{\mathbf v}_b \|^2_{\partial T} \Big)\\
&\leqslant C_1\Big(\sum_{T \in \mathcal{T}_h^S} h_T \| \nabla {\mathbf v}_0 \|_T^2 \Big)
+C_2 h \Big(\sum_{T \in \mathcal{T}_h^S} h_T^{-1}\|Q_b{\mathbf v}_0-{\mathbf v}_b \|^2_{\partial T} \Big)\\
&\leqslant C h {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}^2.
\end{split}
\end{eqnarray}
Combining (\ref{HEP1})-(\ref{HEP3}) completes the proof of (\ref{H1 estimates 1}). \\
Next, as to (\ref{H1 estimates 2}), we only need to estimate the terms on the interface, i.e.,
\begin{eqnarray}\label{HEP4}
\begin{split}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle {\mathbf v}_0{\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}), A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
&\leqslant \sum_{T \in \mathcal{T}_{1h}^I} \| {\mathbf v}_0{\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}) \|_{\partial T \cap \Gamma} \| \nabla {\bf u}_1 - \mathbb{Q}_h (\nabla {\bf u}_1) \|_{\partial T \cap \Gamma}\\
&\leqslant \Big(\sum_{T \in \mathcal{T}_{1h}^I} \|{\mathbf v}_0{\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})\|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}
\Big(\sum_{T \in \mathcal{T}_{1h}^I} \|\nabla {\bf u}_1 - \mathbb{Q}_h (\nabla {\bf u}_1)\|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}\\
&\leqslant C h^{k} \|\nabla {\mathbf v}_0\| \|{\bf u}_1\|_{k+1,\Omega_1} \\
&\leqslant C h^{k} {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf u}_1\|_{k+1,\Omega_1}.
\end{split}
\end{eqnarray}
Similarly, we obtain
\begin{eqnarray}\label{HEP5}
\begin{split}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b, A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
&\leqslant \Big(\sum_{T \in \mathcal{T}_{1h}^I} h_T^{-1} \| \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}\\
&\Big(\sum_{T \in \mathcal{T}_{1h}^I} h_T \| A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1) \|^2_{\partial T \cap \Gamma}\Big)^{\frac{1}{2}}\\
&\leqslant C h^k {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf u}_1\|_{k+1,\Omega_1}.
\end{split}
\end{eqnarray}
Combining (\ref{HEP4}) and (\ref{HEP5}), we have
\begin{eqnarray}\label{HEP6}
\begin{split}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle {\mathbf v}_0{\bf n}_1^{\mathrm{T}}-\tilde{{\mathbf v}}_b, A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
&= \sum_{T \in \mathcal{T}_{1h}^I} \langle {\mathbf v}_0{\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}), A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1)
\rangle_{\partial T \cap \Gamma}\\
&+\langle \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}}) -\tilde{{\mathbf v}}_b,A_1 \nabla {\bf u}_1 - A_1 \mathbb{Q}_h (\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
& \leqslant C h^k {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf u}_1\|_{k+1,\Omega_1}.
\end{split}
\end{eqnarray}
Using the same method for $T \in \mathcal{T}_{2h}^I$ leads to
\begin{eqnarray}\label{HEP7}
\begin{split}
&\sum_{T \in \mathcal{T}_{2h}^I} \langle {\mathbf v}_0{\bf n}_2^{\mathrm{T}}+\tilde{{\mathbf v}}_b, A_2 \nabla {\bf u}_2 - A_2 \mathbb{Q}_h (\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma}\\
&= \sum_{T \in \mathcal{T}_{2h}^I} \langle {\mathbf v}_0{\bf n}_2^{\mathrm{T}}-\hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}}), A_2 \nabla {\bf u}_2 - A_2 \mathbb{Q}_h (\nabla {\bf u}_2)
\rangle_{\partial T \cap \Gamma}\\
&+\sum_{T \in \mathcal{T}_{2h}^I} \langle \hat{Q}_b({\mathbf v}_0 {\bf n}_2^{\mathrm{T}}) + \tilde{{\mathbf v}}_b,A_2 \nabla {\bf u}_2 - A_2 \mathbb{Q}_h (\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma}\\
& \leqslant C h^k {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf u}_2\|_{k+1,\Omega_2}.
\end{split}
\end{eqnarray}
Therefore, the proof of Eq.(\ref{H1 estimates 2}) is completed.
For (\ref{H1 estimates 3}), the same techniques used to prove (\ref{H1 estimates 1}) can be applied to obtain the following estimate:
\begin{eqnarray}\label{HEP8}
\begin{split}
|\ell_2(p,{\mathbf v})|=&\Big|\sum_{T \in \mathcal{T}_h^S} \langle {\mathbf v}_0 -{\mathbf v}_b,p{\bf n}-(Q_h^p p ){\bf n} \rangle_{\partial T}\Big|\\
&\leqslant C h^r (\|p_1\|_{r,\Omega_1}+\|p_2\|_{r,\Omega_2}){|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{eqnarray}
Meanwhile, the proof of (\ref{H1 estimates 4}) is the same as that of Eq.(\ref{H1 estimates 2}) and is thus omitted. For $\tilde{S}(Q_h {\bf u},{\mathbf v})$, we divide the proof into four steps.
Firstly, we consider $T \in \mathcal{T}_h^S$,
\begin{eqnarray}\label{HEP9}
\begin{split}
&\sum_{T \in \mathcal{T}_h^S} h_T^{-1} \langle Q_b(Q_0 {\bf u})-Q_b {\bf u}, Q_b{\mathbf v}_0-{\mathbf v}_b \rangle_{\partial T}\\
=&\sum_{T \in \mathcal{T}_h^S}h_T^{-1} \langle Q_0 {\bf u}-{\bf u}, Q_b{\mathbf v}_0-{\mathbf v}_b \rangle_{\partial T}\\
& \leqslant \sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf u}-{\bf u} \|_{\partial T} \| Q_b{\mathbf v}_0-{\mathbf v}_b \|_{\partial T}\\
& \leqslant \Big( \sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf u}-{\bf u} \|_{\partial T}^2 \Big)^{\frac{1}{2}} \Big( \sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_b{\mathbf v}_0-{\mathbf v}_b \|_{\partial T}^2 \Big)^{\frac{1}{2}}.
\end{split}
\end{eqnarray}
For the estimate of the first term, we derive, by using the trace inequality and the estimate (\ref{projectorestimate1}), that
\begin{eqnarray}\label{HEP10}
\begin{split}
&\sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf u}-{\bf u} \|_{\partial T}^2\\
&\leqslant \sum_{T \in \mathcal{T}_h^S} C h_T^{-1} (h_T^{-1}\|Q_0 {\bf u} -{\bf u}\|_T^2
+h_T \|\nabla (Q_0 {\bf u} -{\bf u})\|_T^2)\\
& \leqslant C h^{2k} (\|{\bf u}_1\|_{k+1,\Omega_1}^2+\|{\bf u}_2\|_{k+1,\Omega_2}^2).
\end{split}
\end{eqnarray}
Next, for $T \in \mathcal{T}_{1h}^I$, we see that
\begin{eqnarray}\label{HEP11}
\begin{split}
\tilde{s}(Q_h {\bf u},{\mathbf v})=&\sum_{T \in \mathcal{T}_{1h}^I } h_T^{-1}\langle Q_b(Q_0 {\bf u})-Q_b {\bf u}, Q_b{\mathbf v}_0-{\mathbf v}_b
\rangle_{\partial T \backslash \Gamma}\\
+&\sum_{T \in \mathcal{T}_{1h}^I } h_T^{-1} \langle \hat{Q}_b((Q_0{\bf u}_1){\bf n}_1^{\mathrm{T}})-\hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}), \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma},
\end{split}
\end{eqnarray}
where the first term is estimated as on a triangular element, so we only need to consider the term on the curved edge,
\begin{eqnarray}\label{HEP12}
\begin{split}
&\sum_{T \in \mathcal{T}_{1h}^I } h_T^{-1} \langle \hat{Q}_b((Q_0{\bf u}_1){\bf n}_1^{\mathrm{T}})-\hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}), \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}\\
=&\sum_{T \in \mathcal{T}_{1h}^I } h_T^{-1} \langle (Q_0{\bf u}_1){\bf n}_1^{\mathrm{T}}- {\bf u}_1 {\bf n}_1^{\mathrm{T}}, \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \rangle_{\partial T \cap \Gamma}\\
\leqslant &\sum_{T \in \mathcal{T}_{1h}^I } h_T^{-1} \| Q_0{\bf u}_1- {\bf u}_1\|_{\partial T \cap \Gamma} \| \hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \|_{\partial T \cap \Gamma}\\
\leqslant &\Big( \sum_{T \in \mathcal{T}_{1h}^I} h_T^{-1} \| Q_0{\bf u}_1- {\bf u}_1 \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}
\Big( \sum_{T \in \mathcal{T}_{1h}^I} h_T^{-1} \|\hat{Q}_b({\mathbf v}_0 {\bf n}_1^{\mathrm{T}})-\tilde{{\mathbf v}}_b \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}\\
\leqslant & C h^k \| {\bf u}_1 \|_{k+1,\Omega_1} {|\hspace{-.02in}|\hspace{-.02in}|} {\mathbf v} {|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{eqnarray}
Finally, for $T \in \mathcal{T}_{2h}^I$, the proof is similar to the above. Combining Eqs.(\ref{HEP9})-(\ref{HEP12}) completes the proof of Eq.(\ref{H1 estimates 5}).
\end{proof}
\begin{theorem}\label{H1ERROR}
Assume that the exact solution $({\bf u}_i ,p_i) \in [H^{k+1}(\Omega_i)]^2 \times H^k (\Omega_i)$, $i=1,2$, of the model (\ref{model 1})-(\ref{interface 2}) exists, and let $({\bf u}_h,p_h) \in V_h \times W_h$ be the weak Galerkin finite element solution. Then the following inequality holds:
\begin{eqnarray}\label{H1error}
\begin{split}
&{|\hspace{-.02in}|\hspace{-.02in}|} Q_h {\bf u} - {\bf u}_h {|\hspace{-.02in}|\hspace{-.02in}|} +\| Q_h^p p -p_h\| \\
\leqslant & C h^k (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1} +\|p_2\|_{k,\Omega_2}).
\end{split}
\end{eqnarray}
\end{theorem}
\begin{proof}
Let ${\mathbf v}= {\bf e}_h$ in Eq.(\ref{error equation 1}) and $q = \varepsilon_h$ in Eq.(\ref{error equation 2}), and add the two equations. Then we have
\begin{eqnarray}\label{H1errorP1}
a({\bf e}_h, {\bf e}_h)=\ell_1({\bf u},{\bf e}_h)+\tilde{\ell}_1 ({\bf u},{\bf e}_h)-\ell_2(p,{\bf e}_h)-\tilde{\ell}_2 (p,{\bf e}_h)+\tilde{s}(Q_h {\bf u},{\bf e}_h).
\end{eqnarray}
According to Eqs.(\ref{H1 estimates 1})-(\ref{H1 estimates 5}), we derive the following estimate
\begin{eqnarray}\label{H1errorP2}
{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|}^2 \leqslant C h^k (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1} +\|p_2\|_{k,\Omega_2}) {|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|},
\end{eqnarray}
and hence ${|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \leqslant C h^k (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1} +\|p_2\|_{k,\Omega_2})$. This completes the estimate of the first term ${|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|}$ in Eq.(\ref{H1error}).
To estimate the second term $ \| \varepsilon_h \|$, we derive from Eq.(\ref{error equation 1}) that
\begin{eqnarray}\label{H1errorP3}
b({\mathbf v},\varepsilon_h)=a({\bf e}_h,{\mathbf v})-\ell_1({\bf u},{\mathbf v})-\tilde{\ell}_1({\bf u},{\mathbf v})+\ell_2(p,{\mathbf v})+\tilde{\ell}_2(p,{\mathbf v})-\tilde{s}(Q_h {\bf u} ,{\mathbf v}).
\end{eqnarray}
Using the equation above, (\ref{H1 estimates 1})-(\ref{H1 estimates 5}) and (\ref{H1errorP2}), we have
\begin{eqnarray}\label{H1errorP4}
|b({\mathbf v},\varepsilon_h)| \leqslant C h^k (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1} +\|p_2\|_{k,\Omega_2}).
\end{eqnarray}
Combined with the inf-sup condition (\ref{InfSupCondition}), the estimate (\ref{H1errorP4}) gives
\begin{eqnarray}\label{H1errorP5}
\|\varepsilon_h\| \leqslant C h^k (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1} +\|p_2\|_{k,\Omega_2}).
\end{eqnarray}
Combining (\ref{H1errorP2}) and (\ref{H1errorP5}) leads to the estimate (\ref{H1error}).
\end{proof}
\section{Error Estimate in the $L^2$ Norm} We use a duality argument to derive an $L^2$-error estimate for ${\bf e}_0=Q_0 {\bf u} - {\bf u}_0$. Consider the following dual problem: find $({\bf w}, \rho)$ satisfying
\begin{eqnarray}
- \nabla \cdot (A \nabla {\bf w})+\nabla \rho = {\bf e}_0 \quad \mbox{in}~\Omega, \label{dual problem 1} \\
\nabla \cdot {\bf w} =0 \quad \mbox{in}~\Omega, \label{dual problem 2} \\
{\bf w} = 0 \quad \mbox{on}~\partial \Omega. \label{dual problem 3}
\end{eqnarray}
Assume that the dual problem (\ref{dual problem 1})-(\ref{dual problem 3}) has the $[H^2(\Omega)]^2 \times H^1(\Omega)$-regularity property, i.e., its solution $({\bf w}, \rho) \in [H^2(\Omega)]^2 \times H^1(\Omega)$ satisfies
\begin{eqnarray}
\| {\bf w} \|_2 +\| \rho \|_1 \leqslant C \|{\bf e}_0 \|.
\end{eqnarray}
\begin{theorem}
Under the assumptions of Theorem \ref{H1ERROR}, the following optimal-order error estimate holds:
\begin{eqnarray}
\| Q_0 {\bf u} -{\bf u}_0\| \leqslant C h^{k+1}(\| {\bf u}_1 \|_{k+1,\Omega_1}+\| {\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1}+\|p_2\|_{k,\Omega_2}).
\end{eqnarray}
\end{theorem}
\begin{proof}
Since $({\bf w},\rho)$ satisfies Eq.(\ref{dual problem 1}) with ${\bf f} = {\bf e}_0=Q_0 {\bf u} -{\bf u}_0$, letting ${\mathbf v} = {\bf e}_h $ in Eq.(\ref{error equation 1}) leads to
\begin{align*}
\begin{split}
&(\nabla_w(Q_h {\bf w}), A\nabla_w {\bf e}_h)-(\nabla_w \cdot {\bf e}_h, Q_h^p \rho)\\
=&({\bf e}_0,{\bf e}_0)+\ell_1({\bf w},{\bf e}_h)+\tilde{\ell}_1({\bf w},{\bf e}_h)-\ell_2(\rho,{\bf e}_h)-\tilde{\ell}_2(\rho,{\bf e}_h).
\end{split}
\end{align*}
Therefore, from the above equation we obtain
\begin{align*}
\begin{split}
\| {\bf e}_0\|^2=&(\nabla_w(Q_h {\bf w}), A\nabla_w {\bf e}_h)-(\nabla_w \cdot {\bf e}_h, Q_h^p \rho)\\
-&\ell_1({\bf w},{\bf e}_h)-\tilde{\ell}_1({\bf w},{\bf e}_h)+\ell_2(\rho,{\bf e}_h)+\tilde{\ell}_2(\rho,{\bf e}_h)\\
=&a(Q_h {\bf w},{\bf e}_h)-b({\bf e}_h,Q_h^p \rho)-\ell_1({\bf w},{\bf e}_h)-\tilde{\ell}_1({\bf w},{\bf e}_h)\\
+&\ell_2(\rho,{\bf e}_h)+\tilde{\ell}_2(\rho,{\bf e}_h)-\tilde{s}(Q_h {\bf w},{\bf e}_h),
\end{split}
\end{align*}
where we have used the fact that $ S(Q_h {\bf w},{\bf e}_h)=\tilde{S}(Q_h {\bf w}, {\bf e}_h)$.
Next, combining (\ref{error equation 2}) and the definition of $Q_h^p$, we have
$$
b({\bf e}_h,Q_h^p \rho)=0, \qquad b(Q_h {\bf w}, \varepsilon_h)=0.
$$
Then the above equation becomes
\begin{align*}
\begin{split}
\| {\bf e}_0\|^2=&a(Q_h {\bf w},{\bf e}_h)-b(Q_h {\bf w}, \varepsilon_h)-\ell_1({\bf w},{\bf e}_h)-\tilde{\ell}_1({\bf w},{\bf e}_h)\\
+&\ell_2(\rho,{\bf e}_h)+\tilde{\ell}_2(\rho,{\bf e}_h)-\tilde{s}(Q_h {\bf w},{\bf e}_h).
\end{split}
\end{align*}
By substituting (\ref{error equation 1}) into the above equation, we obtain
\begin{align*}
\| {\bf e}_0\|^2=&\ell_1({\bf u},Q_h {\bf w})+\tilde{\ell}_1({\bf u},Q_h {\bf w})-\ell_2(p,Q_h {\bf w})-\tilde{\ell}_2(p,Q_h {\bf w})+\tilde{s}(Q_h {\bf u},Q_h {\bf w})\\
-&\ell_1({\bf w},{\bf e}_h)-\tilde{\ell}_1({\bf w},{\bf e}_h)+\ell_2(\rho,{\bf e}_h)+\tilde{\ell}_2(\rho,{\bf e}_h)-\tilde{s}(Q_h {\bf w},{\bf e}_h).
\end{align*}
According to Lemma \ref{H1 estimates}, we have
\begin{align*}
&|\ell_1({\bf w},{\bf e}_h)+\tilde{\ell}_1({\bf w},{\bf e}_h)-\ell_2(\rho,{\bf e}_h)-\tilde{\ell}_2(\rho,{\bf e}_h)+\tilde{s}(Q_h {\bf w},{\bf e}_h)|\\
\leqslant & Ch(\| {\bf w}\|_2+\| \rho \|_1){|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \\
\leqslant &Ch{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf e}_0\|.
\end{align*}
Each of the remaining terms is handled as follows.\\
(1) For $\ell_1({\bf u},Q_h {\bf w})$, we first use the Cauchy-Schwarz inequality, trace inequality and the approximation property of the $L^2$ projection operator to derive
\begin{align*}
&\sum_{T \in \mathcal{T}_h} \langle Q_0 {\bf w} -{\bf w}, A (\nabla {\bf u}){\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
\leqslant & A \sum_{T \in \mathcal{T}_h} \|Q_0 {\bf w} -{\bf w}\|_{\partial T} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|_{\partial T}\\
\leqslant & A\Big(\sum_{T \in \mathcal{T}_h} \| Q_0 {\bf w} -{\bf w}\|^2_{\partial T} \Big)^{\frac{1}{2}}
\Big(\sum_{T \in \mathcal{T}_h} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|^2_{\partial T} \Big)^{\frac{1}{2}}\\
\leqslant & \Big( C \sum_{T \in \mathcal{T}_h} (h_T^{-1} \| Q_0 {\bf w} -{\bf w} \|_T^2
+ h_T \| \nabla(Q_0 {\bf w} -{\bf w})\|_T^2 )\Big)^{\frac{1}{2}}\\
&\Big( C \sum_{T \in \mathcal{T}_h} (h_T^{-1} \| \nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}) \|_T^2
+h_T \| \nabla(\nabla {\bf u} - \mathbb{Q}_h(\nabla {\bf u}))\|_T^2 )\Big)^{\frac{1}{2}}\\
\leqslant & \Big( \sum_{T \in \mathcal{T}_h} C h_T^3 \|{\bf w}\|_2^2 \Big)^{\frac{1}{2}}
\Big( \sum_{T \in \mathcal{T}_h} C h_T^{2k-1} \| {\bf u} \|_{k+1}^2 \Big)^{\frac{1}{2}}\\
\leqslant & Ch^{k+1} \|{\bf w}\|_2 \| {\bf u} \|_{k+1}.
\end{align*}
Next, we use the same techniques and the fact that $\sum_{e \in \mathcal{E}_h^S} \langle {\bf w}-Q_b {\bf w},A (\nabla {\bf u} ){\bf n}-A \mathbb{Q}_h(\nabla {\bf u}){\bf n} \rangle_e=0$ to derive
\begin{align*}
&\sum_{T \in \mathcal{T}_h} \langle {\bf w}-Q_b {\bf w}, A(\nabla {\bf u} ) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}){\bf n} \rangle_{\partial T}\\
=&\sum_{e \in \mathcal{E}_h^I} \langle {\bf w} -Q_b {\bf w}, A (\nabla {\bf u} ) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_e\\
\leqslant & \sum_{e \in \mathcal{E}_h^I }\| {\bf w} -Q_b {\bf w}\|_e \| A \nabla {\bf u} -A \mathbb{Q}_h(\nabla {\bf u}) \|_e\\
\leqslant & A \Big( \sum_{e \in \mathcal{E}_h^I } \| {\bf w} -Q_b {\bf w}\|_e^2 \Big)^{\frac{1}{2}}
\Big( \sum_{e \in \mathcal{E}_h^I} \| \nabla {\bf u} -\mathbb{Q}_h(\nabla {\bf u}) \|_e^2 \Big)^{\frac{1}{2}}\\
\leqslant & \Big( \sum_{e \in \mathcal{E}_h^I} \| {\bf w} -Q_b {\bf w}\|_e^2 \Big)^{\frac{1}{2}} \Big( C h^{2k-1} \| {\bf u} \|_{k+1}^2 \Big)^{\frac{1}{2}},
\end{align*}
where
\begin{align*}
\| {\bf w} -Q_b {\bf w}\|_e^2 \leqslant \| {\bf w} - Q_0 {\bf w}\|_e^2 \leqslant \|{\bf w} - Q_0 {\bf w} \|_{\partial T}^2 \leqslant C h_T^3 \|{\bf w}\|_{2}^2.
\end{align*}
Using the above inequality, we have
\begin{align*}
&\sum_{T \in \mathcal{T}_h} \langle {\bf w}-Q_b {\bf w}, A (\nabla {\bf u}) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
\leqslant& Ch^{k+1}\|{\bf w}\|_2 \| {\bf u} \|_{k+1}.
\end{align*}
Therefore,
\begin{align*}
\ell_1({\bf u},Q_h {\bf w})=&\sum_{T \in \mathcal{T}_h} \langle Q_0 {\bf w} -Q_b {\bf w}, A (\nabla {\bf u}) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
=&\sum_{T \in \mathcal{T}_h} \langle Q_0 {\bf w} -{\bf w}, A (\nabla {\bf u}) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
+&\sum_{T \in \mathcal{T}_h} \langle {\bf w} -Q_b {\bf w} , A (\nabla {\bf u}) {\bf n} -A \mathbb{Q}_h(\nabla {\bf u}) {\bf n} \rangle_{\partial T}\\
\leqslant & Ch^{k+1} \|{\bf w}\|_2 \| {\bf u} \|_{k+1}.
\end{align*}
(2) For $\tilde{\ell}_1 ({\bf u},Q_h {\bf w})$, we have
\begin{align*}
\tilde{\ell}_1({\bf u},Q_h {\bf w})=&\sum_{T \in \mathcal{T}_{1h}^I}\Big(
\langle Q_0 {\bf w} -Q_b {\bf w}, A_1 (\nabla {\bf u}_1){\bf n}_1 - A_1 \mathbb{Q}_h(\nabla {\bf u}_1){\bf n}_1 \rangle_{\partial T \backslash \Gamma} \\
+ &\langle (Q_0{\bf w}) {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_1^{\mathrm{T}}),A_1 \nabla {\bf u}_1 -A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}
\Big)\\
+&\sum_{T \in \mathcal{T}_{2h}^I}\Big(
\langle Q_0 {\bf w} -Q_b {\bf w}, A_2 (\nabla {\bf u}_2){\bf n}_2-A_2 \mathbb{Q}_h(\nabla {\bf u}_2){\bf n}_2 \rangle_{\partial T \backslash \Gamma} \\
+ &\langle (Q_0{\bf w}) {\bf n}_2^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_2^{\mathrm{T}}),A_2 \nabla {\bf u}_2 -A_2 \mathbb{Q}_h(\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma}
\Big).
\end{align*}
Since the terms on the straight edges are similar to those in $\ell_1({\bf u},Q_h {\bf w})$, we only need to estimate the terms on the interface. According to the trace inequality, the Cauchy-Schwarz inequality and Lemma \ref{projector inequality 1}, we have
\begin{align*}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle (Q_0{\bf w}) {\bf n}_1^{\mathrm{T}}-{\bf w} {\bf n}_1^{\mathrm{T}}, A_1 (\nabla {\bf u}_1){\bf n}_1 - A_1 \mathbb{Q}_h(\nabla {\bf u}_1){\bf n}_1 \rangle_{\partial T \cap \Gamma}\\
\leqslant & \sum_{T \in \mathcal{T}_{1h}^I} \| Q_0 {\bf w} -{\bf w} \|_{\partial T \cap \Gamma}
\| A_1 (\nabla {\bf u}_1)- A_1 \mathbb{Q}_h(\nabla {\bf u}_1)\|_{\partial T \cap \Gamma}\\
\leqslant &\Big(\sum_{T \in \mathcal{T}_{1h}^I} \|Q_0 {\bf w} -{\bf w} \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}} \Big(\sum_{T \in \mathcal{T}_{1h}^I}
\|A_1 (\nabla {\bf u}_1)- A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}\\
\leqslant & C h^{k+1} \| {\bf u}_1\|_{k+1} \| {\bf w} \|_2.
\end{align*}
Similarly, we have
\begin{align*}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle {\bf w} {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_1^{\mathrm{T}}), A_1 \nabla {\bf u}_1 -A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
\leqslant &\Big(\sum_{T \in \mathcal{T}_{1h}^I} \|{\bf w} {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_1^{\mathrm{T}}) \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}} \Big(\sum_{T \in \mathcal{T}_{1h}^I}
\|A_1 (\nabla {\bf u}_1)- A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \|^2_{\partial T \cap \Gamma} \Big)^{\frac{1}{2}}\\
\leqslant & C h^{k+1} \| {\bf u}_1\|_{k+1} \| {\bf w} \|_2.
\end{align*}
Therefore, we have
\begin{align*}
&\sum_{T \in \mathcal{T}_{1h}^I} \langle (Q_0{\bf w}) {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_1^{\mathrm{T}}),A_1 \nabla {\bf u}_1 -A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
=&\sum_{T \in \mathcal{T}_{1h}^I} \langle (Q_0{\bf w}) {\bf n}_1^{\mathrm{T}}-{\bf w} {\bf n}_1^{\mathrm{T}}, A_1 (\nabla {\bf u}_1){\bf n}_1 - A_1 \mathbb{Q}_h(\nabla {\bf u}_1){\bf n}_1 \rangle_{\partial T \cap \Gamma}\\
+&\sum_{T \in \mathcal{T}_{1h}^I} \langle {\bf w} {\bf n}_1^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_1^{\mathrm{T}}), A_1 \nabla {\bf u}_1 -A_1 \mathbb{Q}_h(\nabla {\bf u}_1) \rangle_{\partial T \cap \Gamma}\\
\leqslant & C h^{k+1} \| {\bf u}_1\|_{k+1} \| {\bf w} \|_2.
\end{align*}
Using the same method leads to
\begin{align*}
&\sum_{T \in \mathcal{T}_{2h}^I} \langle (Q_0{\bf w}) {\bf n}_2^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_2^{\mathrm{T}}),A_2 \nabla {\bf u}_2 -A_2 \mathbb{Q}_h(\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma}\\
=&\sum_{T \in \mathcal{T}_{2h}^I} \langle (Q_0{\bf w}) {\bf n}_2^{\mathrm{T}}-{\bf w} {\bf n}_2^{\mathrm{T}}, A_2 (\nabla {\bf u}_2){\bf n}_2 - A_2 \mathbb{Q}_h(\nabla {\bf u}_2){\bf n}_2 \rangle_{\partial T \cap \Gamma}\\
+&\sum_{T \in \mathcal{T}_{2h}^I} \langle {\bf w} {\bf n}_2^{\mathrm{T}}-\hat{Q}_b({\bf w} {\bf n}_2^{\mathrm{T}}), A_2 \nabla {\bf u}_2 -A_2 \mathbb{Q}_h(\nabla {\bf u}_2) \rangle_{\partial T \cap \Gamma}\\
\leqslant & C h^{k+1} \| {\bf u}_2\|_{k+1} \| {\bf w} \|_2.
\end{align*}
In conclusion, we obtain
\begin{align*}
\tilde{\ell}_1({\bf u},Q_h {\bf w}) \leqslant Ch^{k+1}(\| {\bf u}_1 \|_{k+1}+\| {\bf u}_2 \|_{k+1}) \| {\bf w} \|_2.
\end{align*}
(3) For $\ell_2(p,Q_h {\bf w})$, we use the same method as in the proof of $\ell_1({\bf u},Q_h {\bf w})$ to obtain
\begin{align*}
&\sum_{T \in \mathcal{T}_h} \langle Q_0 {\bf w} -{\bf w}, (p-Q_h^p p){\bf n} \rangle_{\partial T}\\
\leqslant & \sum_{T \in \mathcal{T}_h} \| Q_0 {\bf w} -{\bf w} \|_{\partial T} \| p-Q_h^p p \|_{\partial T}\\
\leqslant & \Big( \sum_{T \in \mathcal{T}_h} \| Q_0 {\bf w} -{\bf w} \|_{\partial T}^2 \Big)^{\frac{1}{2}}
\Big( \sum_{T \in \mathcal{T}_h} \| p-Q_h^p p \|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leqslant & \Big( C \sum_{T \in \mathcal{T}_h} (h_T^{-1} \| Q_0 {\bf w} -{\bf w} \|_T^2
+ h_T \| \nabla(Q_0 {\bf w} -{\bf w})\|_T^2 )\Big)^{\frac{1}{2}}\\
&\Big( C \sum_{T \in \mathcal{T}_h} (h_T^{-1} \| p-Q_h^p p \|_T^2
+ h_T \| \nabla(p-Q_h^p p)\|_T^2 ) \Big)^{\frac{1}{2}}\\
\leqslant &\Big( \sum_{T \in \mathcal{T}_h} C h_T^3 \| {\bf w} \|_2^2 \Big)^{\frac{1}{2}} \Big( \sum_{T \in \mathcal{T}_h} Ch^{2k-1}
\| p\|_k^2 \Big)^{\frac{1}{2}}\\
\leqslant & C h^{k+1} \| {\bf w} \|_2 \| p \|_k.
\end{align*}
Similarly, we can arrive at
\begin{align*}
&\sum_{T \in \mathcal{T}_h} \langle {\bf w} - Q_b {\bf w}, (p-Q_h^p p){\bf n} \rangle_{\partial T}
\leqslant C h^{k+1} \| {\bf w} \|_2 \|p\|_k.
\end{align*}
Therefore, for $\ell_2(p,Q_h {\bf w})$ we have the following estimate
\begin{align*}
|\ell_2(p,Q_h {\bf w})| \leqslant C h^{k+1} \| {\bf w} \|_2 \| p \|_k.
\end{align*}
(4) For $\tilde{\ell}_2(p,Q_h {\bf w})$, we similarly have
\begin{align*}
\tilde{\ell}_2(p,Q_h {\bf w}) \leqslant Ch^{k+1}(\| p_1 \|_k+\| p_2 \|_k) \| {\bf w} \|_2.
\end{align*}
(5) For $\tilde{s}(Q_h {\bf u}, Q_h {\bf w})$, we first consider the elements $T \in \mathcal{T}_h^S$,
\begin{align*}
&\sum_{T \in \mathcal{T}_h^S} h_T^{-1} \langle Q_b(Q_0 {\bf u})- Q_b {\bf u} ,Q_b(Q_0 {\bf w})-Q_b {\bf w} \rangle_{\partial T}\\
=&\sum_{T \in \mathcal{T}_h^S} h_T^{-1} \langle Q_0 {\bf u} -{\bf u} ,Q_b(Q_0 {\bf w})-Q_b {\bf w} \rangle_{\partial T}\\
\leqslant & \sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf u} -{\bf u} \|_{\partial T}
\| Q_0 {\bf w} -{\bf w}\|_{\partial T}\\
\leqslant & \Big( \sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf u} -{\bf u} \|_{\partial T}^2 \Big)^{\frac{1}{2}}
\Big(\sum_{T \in \mathcal{T}_h^S} h_T^{-1} \| Q_0 {\bf w} -{\bf w} \|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leqslant & \Big( C \sum_{T \in \mathcal{T}_h^S} (h_T^{-1} \| Q_0 {\bf u} -{\bf u} \|_T^2
+ h_T\| \nabla(Q_0 {\bf u} -{\bf u})\|_T^2 ) \Big)^{\frac{1}{2}} \\
&\Big( C \sum_{T \in \mathcal{T}_h^S} (h_T^{-1} \| Q_0 {\bf w} -{\bf w} \|_T^2
+ h_T \| \nabla(Q_0 {\bf w} -{\bf w})\|_T^2 ) \Big)^{\frac{1}{2}}\\
\leqslant & C h^{k+1} \| {\bf u} \|_{k+1} \| {\bf w} \|_2.
\end{align*}
Next, for the element $T \in \mathcal{T}_{1h}^I$, we have
\begin{align*}
&\tilde{s}(Q_h {\bf u},Q_h {\bf w})|_T \\
= &h_T^{-1} \langle Q_b(Q_0 {\bf u})- Q_b {\bf u} ,Q_b(Q_0 {\bf w})-Q_b {\bf w} \rangle_{\partial T \backslash \Gamma}\\
+&h_T^{-1} \langle \hat{Q}_b ((Q_0 {\bf u}_1){\bf n}_1^{\mathrm{T}}) - \hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}),
\hat{Q}_b ((Q_0 {\bf w}_1){\bf n}_1^{\mathrm{T}}) - \hat{Q}_b({\bf w}_1 {\bf n}_1^{\mathrm{T}}) \rangle_{\partial T \cap \Gamma}.
\end{align*}
The first term is similar to the stabilization term on a triangular element, so we only deal with the second term:
\begin{align*}
&h_T^{-1} \langle \hat{Q}_b((Q_0 {\bf u}_1){\bf n}_1^{\mathrm{T}})-\hat{Q}_b({\bf u}_1 {\bf n}_1^{\mathrm{T}}), \hat{Q}_b ((Q_0 {\bf w}_1){\bf n}_1^{\mathrm{T}}) - \hat{Q}_b({\bf w}_1 {\bf n}_1^{\mathrm{T}}) \rangle_{\partial T \cap \Gamma}\\
=&h_T^{-1} \langle (Q_0 {\bf u}_1){\bf n}_1^{\mathrm{T}}- {\bf u}_1 {\bf n}_1^{\mathrm{T}},
\hat{Q}_b ((Q_0 {\bf w}_1){\bf n}_1^{\mathrm{T}}) - \hat{Q}_b({\bf w}_1 {\bf n}_1^{\mathrm{T}}) \rangle_{\partial T \cap \Gamma}\\
\leqslant & h_T^{-1} \| Q_0 {\bf u}_1 - {\bf u}_1\|_{\partial T \cap \Gamma}
\| \hat{Q}_b ((Q_0 {\bf w}_1){\bf n}_1^{\mathrm{T}}-{\bf w}_1 {\bf n}_1^{\mathrm{T}}) \|_{\partial T \cap \Gamma}\\
\leqslant & h_T^{-1} \| Q_0 {\bf u}_1 - {\bf u}_1\|_{\partial T \cap \Gamma}
\| (Q_0 {\bf w}_1){\bf n}_1^{\mathrm{T}}-{\bf w}_1 {\bf n}_1^{\mathrm{T}} \|_{\partial T \cap \Gamma}\\
\leqslant & Ch^{k+1}\| {\bf u}_1\|_{k+1} \| {\bf w} \|_2.
\end{align*}
In addition, for the element $T \in \mathcal{T}_{2h}^I$, we can obtain
\begin{align*}
&\tilde{s}(Q_h {\bf u},Q_h {\bf w})|_T \\
= &h_T^{-1} \langle Q_b(Q_0 {\bf u})- Q_b {\bf u} ,Q_b(Q_0 {\bf w})-Q_b {\bf w} \rangle_{\partial T \backslash \Gamma}\\
+&h_T^{-1} \langle \hat{Q}_b ((Q_0 {\bf u}_2){\bf n}_2^{\mathrm{T}}) -\hat{Q}_b({\bf u}_2 {\bf n}_2^{\mathrm{T}}),
\hat{Q}_b ((Q_0 {\bf w}_2){\bf n}_2^{\mathrm{T}}) - \hat{Q}_b({\bf w}_2 {\bf n}_2^{\mathrm{T}}) \rangle_{\partial T \cap \Gamma}\\
\leqslant & Ch^{k+1} \|{\bf u}_2\|_{k+1,\Omega_2}\| {\bf w}\|_2.
\end{align*}
Therefore,
$$
\tilde{s}(Q_h {\bf u}, Q_h {\bf w}) \leqslant Ch^{k+1} (\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2})\|{\bf w}\|_2.
$$
Combining the five estimates above and the regularity condition, we obtain
\begin{align*}
&\|{\bf e}_0\|^2\\
\leqslant &C h^{k+1}(\| {\bf u}_1 \|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1}+\|p_2\|_{k,\Omega_2}) \| {\bf w} \|_2
+Ch{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf e}_0\|\\
\leqslant &C h^{k+1}(\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1}+\|p_2\|_{k,\Omega_2})\| {\bf e}_0\|
+Ch{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \|{\bf e}_0\|.
\end{align*}
Finally, according to Theorem \ref{H1ERROR}, it follows that
\begin{align*}
\| {\bf e}_0 \|
\leqslant & C h^{k+1}(\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1}+\|p_2\|_{k,\Omega_2})
+Ch{|\hspace{-.02in}|\hspace{-.02in}|} {\bf e}_h {|\hspace{-.02in}|\hspace{-.02in}|} \\
\leqslant & C h^{k+1}(\|{\bf u}_1\|_{k+1,\Omega_1}+\|{\bf u}_2\|_{k+1,\Omega_2}+\|p_1\|_{k,\Omega_1}+\|p_2\|_{k,\Omega_2}).
\end{align*}
The proof of the theorem is completed.
\end{proof}
\section{Numerical Results} In this section, we present several numerical examples to validate the preceding theory. The tests cover examples with a discontinuous coefficient $A$, a discontinuous pressure $p$, or a discontinuous velocity ${\bf u}$ across the interface, for different interface shapes.
\subsection{Test Problem 1} In Test Problem 1, we consider a problem with discontinuous velocity and discontinuous pressure on the domain $\Omega=[-1,1] \times [-1,1]$, where the interface is described by
\begin{align*}
x^2+y^2=\frac{1}{4}.
\end{align*}
The exact solutions are
\begin{eqnarray*}
{\bf u}_1=\left(\begin{array}{ccc}
2 \sin y \cos y \cos x \\
(\sin^2 y-2)\sin x
\end{array}\right),
\end{eqnarray*}
\begin{eqnarray*}
{\bf u}_2=\left(\begin{array}{ccc}
- \cos(\pi x) \sin(\pi y)\\
\sin(\pi x) \cos(\pi y)
\end{array}\right),
\end{eqnarray*}
\begin{eqnarray}
p=\left \{\begin{array}{rcl}
1 & \mbox{in} & \Omega_1 \\
\frac{\pi}{16-\pi} & \mbox{in} & \Omega_2
\end{array}\right.,
A=\left \{\begin{array}{rcl}
1 & \mbox{in} & \Omega_1 \\
1 & \mbox{in} & \Omega_2
\end{array}\right.
\end{eqnarray}
\begin{table}[H]
\caption{Test Problem 1: curved triangular mesh}
\centering
\begin{tabular}{ccccccccc}
\hline
n&${|\hspace{-.02in}|\hspace{-.02in}|} Q_h {\bf u} -{\bf u}_h {|\hspace{-.02in}|\hspace{-.02in}|}$&order&$\|Q_0 {\bf u} -{\bf u}_0\|$&order&$\|Q_h^p p- p_h\|$&order&$\|{\bf u}-{\bf u}_h \|_{L^\infty(\Omega)}$&order\\
\hline
&&&&$k=1$&&&&\\
\hline
1 & 3.9611e+00 &-- & 6.5572e-01 & --& 8.6957e-01&--&7.4688e-01&-- \\
2 & 1.9447e+00 & 1.026 & 1.5798e-01& 2.053 & 4.1120e-01&1.080&1.3056e-01 & 2.516 \\
3 & 9.6473e-01& 1.041 &3.9173e-02&2.072 & 1.9511e-01& 1.108&3.5175e-02& 1.948 \\
4 & 4.8011e-01 &1.055 &9.7663e-03 &2.101 & 9.5708e-02&1.077&8.8538e-03 &2.086\\
\hline
&&&&$k=2$&&&&\\
\hline
1 & 9.9607e-01 &--& 1.2002e-01 &--& 1.3309e-01 &-- & 5.7368e-02 & --\\
2 & 2.5843e-01 & 1.946 & 1.6144e-02 & 2.894 & 2.9759e-02 &2.161 & 9.1152e-03& 2.654\\
3 & 6.4942e-02 & 2.052 & 2.0504e-03 & 3.066 & 6.8283e-03 &2.187& 1.2027e-03 & 3.009 \\
4 & 1.6237e-02 & 2.097 & 2.5745e-04 &3.138& 1.6139e-03 &2.182 & 1.5031e-04 & 3.145\\
\hline
&&&&$k=3$&&&&\\
\hline
1 & 1.9557e-01 &--& 2.2154e-02 &--& 1.8260e-02 &-- & 5.7239e-03&--\\
2 & 2.4560e-02 & 2.993 & 1.3796e-03 &4.005& 2.1947e-03 &3.057 &5.5005e-04 & 3.379\\
3 & 3.0687e-03 & 3.089 & 8.6147e-05 &4.120& 2.6353e-04 &3.149 & 3.8595e-05& 3.947\\
4 & 3.8338e-04 & 3.146 & 5.3835e-06 &4.197& 3.2198e-05 &3.179 & 2.4991e-06&4.139\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/C432-1.jpg}
\caption{}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/C432-2.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.5\linewidth}
\centering \includegraphics[width=0.8\linewidth]{pics/C432-3.jpg}
\caption{}
\end{subfigure}
\caption{Test problem 1: (a) x-component of ${\bf u}_h$, (b) y-component of ${\bf u}_h$, (c) pressure $p_h$}
\end{figure*}
\begin{table}[H]
\caption{Test Problem 1: straight triangular mesh}
\centering
\begin{tabular}{ccccccccc}
\hline
n&${|\hspace{-.02in}|\hspace{-.02in}|} Q_h {\bf u} -{\bf u}_h {|\hspace{-.02in}|\hspace{-.02in}|}$&order&$\|Q_0 {\bf u} -{\bf u}_0\|$&order&$\|Q_h^p p- p_h\|$&order&$\|{\bf u}-{\bf u}_h \|_{L^\infty(\Omega)}$&order\\
\hline
&&&&$k=1$&&&&\\
\hline
1 & 3.8965e+00 &--& 6.9519e-01 &--& 8.9801e-01 &-- &8.7996e-01&--\\
2 & 1.9151e+00 & 1.025 & 1.6636e-01 & 2.063& 4.1484e-01&1.114&2.3388e-01 &1.912\\
3 &9.5547e-01 &1.033& 4.1203e-02 &2.073 & &1.118&5.9402e-02 &2.036\\
4 & 4.7758e-01 & 1.049 & 1.0277e-02&2.100& 9.5744e-02 &1.079&1.4933e-02&2.088\\
\hline
&&&&$k=2$&&&&\\
\hline
1 &1.0515 &--& 1.4146e-01 &--& 2.1288e-01 &--&1.7603e-01 &--\\
2 &2.7967e-01 &1.911& 2.2582e-02 &2.647&6.4754e-02 & 1.7170&3.1879e-02 &2.465\\
3 & 7.4373e-02 & 1.968 & 4.2713e-03 &2.474 & 2.0841e-02 &1.684&6.1469e-03&2.445\\
4 & 2.0444e-02 &1.953 & 9.5717e-04 &2.262 & 7.0023e-03 &1.649&1.7066e-03&1.938\\
\hline
&&&&$k=3$&&&&\\
\hline
1 & 3.7183e-01 &--& 6.1249e-02 &--& 1.6822e-01 &--&6.7480e-02 &--\\
2 & 9.7064e-02 & 1.938 &1.4642e-02 &2.065& 5.7458e-02 &1.549&1.7359e-02& 1.959\\
3 & 2.9548e-02 &1.767 & 3.6609e-03 &2.059& 1.9566e-02 &1.600&4.5858e-03& 1.978
\\
4 & 9.6547e-03 &1.692 & 9.1541e-04 &2.096& 6.7761e-03 &1.604 &1.16013e-03& 2.079
\\
\hline
\end{tabular}
\end{table}
\qquad We report the weak Galerkin approximation results on the curved triangular mesh in Table 7.1 and on the straight triangular mesh in Table 7.2. The convergence rates on the curved triangular mesh reach the optimal order for both linear and higher-order weak Galerkin finite elements, whereas the convergence rates on the straight triangular mesh are at most second order. These results show the advantage of using curved elements.
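The ``order'' columns in the tables can be reproduced from the errors at successive refinement levels; a minimal Python sketch (the function name is ours, and we assume the mesh size is halved between consecutive levels, which may differ from the actual refinement used here, so the values need not match the tables exactly):

```python
import math

def observed_orders(errors, ratio=2.0):
    """Observed convergence orders between successive refinement levels:
    order_n = log(e_{n-1} / e_n) / log(ratio)."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(ratio)
            for i in range(1, len(errors))]

# Errors |||Q_h u - u_h||| on the curved mesh for k = 1 (Table 7.1).
errors = [3.9611e+00, 1.9447e+00, 9.6473e-01, 4.8011e-01]
print([round(o, 3) for o in observed_orders(errors)])  # all close to 1
```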
\subsection{Test Problem 2} In Test Problem 2, we consider a problem with discontinuous velocity and the following interface:
\begin{align*}
r=\frac{1}{7}+\frac{\sin(5 \theta)}{7}.
\end{align*}
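For mesh generation or plotting, this polar interface can be sampled in Cartesian coordinates; a minimal sketch (the function name and sample count are illustrative, not part of the method):

```python
import math

def flower_interface(n=200):
    """Sample the interface r(theta) = 1/7 + sin(5*theta)/7 at n equispaced angles."""
    pts = []
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        r = 1.0 / 7.0 + math.sin(5.0 * theta) / 7.0
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = flower_interface()
# Every sampled point lies within the disc of radius 2/7 about the origin.
```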
The exact solutions are
\begin{eqnarray*}
{\bf u}_1=\left(\begin{array}{ccc}
2 \pi \sin(\pi x) \sin(\pi x) \cos(\pi y) \sin(\pi y) \\
-2 \pi \sin(\pi x) \sin(\pi y) \cos(\pi x) \sin(\pi y)
\end{array}\right)
\end{eqnarray*}
\begin{eqnarray*}
{\bf u}_2=\left(\begin{array}{ccc}
x^2 y^2+e^{-y}\\
-\frac{2}{3} xy^3+2-\pi \sin(\pi x)
\end{array}\right)
\end{eqnarray*}
\begin{eqnarray}
p=\left \{\begin{array}{rcl}
0 & \mbox{in} & \Omega_1 \\
0 & \mbox{in} & \Omega_2
\end{array}\right.,
A=\left \{\begin{array}{rcl}
1 & \mbox{in} & \Omega_1 \\
1 & \mbox{in} & \Omega_2
\end{array}\right.
\end{eqnarray}
\begin{table}[H]
\caption{Test Problem 2}
\centering
\begin{tabular}{ccccccccc}
\hline
n&${|\hspace{-.02in}|\hspace{-.02in}|} Q_h {\bf u} -{\bf u}_h {|\hspace{-.02in}|\hspace{-.02in}|}$&order&$\|Q_0 {\bf u} -{\bf u}_0\|$&order&$\|Q_h^p p- p_h\|$&order&$\|{\bf u}-{\bf u}_h \|_{L^\infty(\Omega)}$&order\\
\hline
&&&&$k=1$&&&&\\
\hline
1 & 8.3523e+00 &--& 6.8303e-01 &--& 1.3395e+00 &--&1.6144e+00&-- \\
2 & 4.2334e+00 & 0.973 & 1.8409e-01 &1.876& 7.0321e-01 & 0.922&5.7339e-01&-- \\
3 & 2.0738e+00 & 1.141 & 4.6527e-02 &2.198& 3.5766e-01 &1.081&1.5624e-01 &-- \\
4 & 1.0517e+00 & 0.988 & 1.2162e-02 & 1.952 &1.4259e-01 & 1.338&4.5628e-02 &-- \\
\hline
&&&&$k=2$&&&&\\
\hline
1 & 1.8413e+00 &--& 1.0910e-01 &--& 2.0817e-01 &-- &3.9152e-01&--\\
2 & 4.9217e-01 &1.888& 1.5530e-02 &2.789& 4.5704e-02 &2.169&9.2244e-02&2.069\\
3 & 1.2483e-01 & 2.193 & 2.0196e-03 &3.260& 1.1769e-02 &2.168&1.0086e-02 &3.538 \\
4 & 3.1483e-02 & 2.004 & 2.5750e-04 &2.997& 2.6264e-03 &2.182 &1.3225e-03&2.956\\
\hline
&&&&$k=3$&&&&\\
\hline
1 & 2.8573e-01 &--& 1.4530e-02 &--& 2.7239e-02 &-- &9.0719e-02&--\\
2 & 3.8777e-02 & 2.858 & 1.0229e-03 &3.798& 3.2468e-03 &3.044 &7.0040e-03&3.665\\
3 & 5.1489e-03 & 3.227 & 6.9235e-05 &4.304& 4.0691e-04 &3.319 &6.6249e-04&3.769\\
4 & 6.6696e-04 & 2.974 & 4.5695e-06 &3.955& 4.6397e-05 &3.159 &4.6409e-05& 3.868\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/F432-1.jpg}
\caption{}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/F432-2.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/F432-3.jpg}
\caption{}
\end{subfigure}
\caption{Test problem 2: (a) x-component of ${\bf u}_h$, (b) y-component of ${\bf u}_h$, (c) pressure $p_h$}
\end{figure*}
\subsection{Test Problem 3} In Test Problem 3, we consider a problem with a discontinuous viscosity coefficient and the following interface:
\begin{align*}
r=\frac{1}{2}+\frac{\sin(2 \theta)}{4}.
\end{align*}
The exact solutions are
\begin{eqnarray*}
{\bf u}_1=\left(\begin{array}{ccc}
2 \pi \sin(\pi x) \sin(\pi x) \cos(\pi y) \sin(\pi y) \\
-2 \pi \sin(\pi x) \sin(\pi y) \cos(\pi x) \sin(\pi y)
\end{array}\right)
\end{eqnarray*}
\begin{eqnarray*}
{\bf u}_2=\left(\begin{array}{ccc}
x^2 y^2+e^{-y}\\
-\frac{2}{3} xy^3+2-\pi \sin(\pi x)
\end{array}\right)
\end{eqnarray*}
\begin{eqnarray}
p=\left \{\begin{array}{rcl}
0 & \mbox{in} & \Omega_1 \\
0 & \mbox{in} & \Omega_2
\end{array}\right. \qquad
A=\left \{\begin{array}{rcl}
1 & \mbox{in} & \Omega_1 \\
10 & \mbox{in} & \Omega_2
\end{array}\right.
\end{eqnarray}
\begin{table}[H]
\caption{Test Problem 3}
\centering
\begin{tabular}{ccccccccc}
\hline
n&${|\hspace{-.02in}|\hspace{-.02in}|} Q_h {\bf u} -{\bf u}_h {|\hspace{-.02in}|\hspace{-.02in}|}$&order&$\|Q_0 {\bf u} -{\bf u}_0\|$&order&$\|Q_h^p p- p_h\|$&order&$\|{\bf u}-{\bf u}_h \|_{L^\infty(\Omega)}$&order\\
\hline
&&&&$k=1$&&&&\\
\hline
1 & 3.7339e+01 &-- & 3.4134e+00 &--& 7.7369e+00 &--&3.0523e+01&-- \\
2 & 1.9557e+01 & 0.8879 & 9.3621e-01& 1.7762 & 3.3129e+00 &1.1646 & 1.2689e+00& 4.367\\
3 & 9.9253e+00 &1.0701& 2.4258e-01 &2.1307& 1.3139e+00 &1.4591 & 2.3145e-01 &2.685 \\
4 & 4.9796e+00 & 1.0953 & 6.1255e-02 &2.1855 & 5.3379e-01 &1.4303 & 6.0317e-02& 2.135 \\
\hline
&&&&$k=2$&&&&\\
\hline
1 & 4.7365e+00 &--& 3.3909e-01 &--& 6.5770e-01 &--& 7.4812e-01&--\\
2 & 1.2581e+00 & 1.8202 & 4.5822e-02 & 2.7482& 1.2309e-01 &2.3010 &1.3168e-01& 2.3852 \\
3 & 3.1669e-01 & 2.1764 & 5.8042e-03 &3.2598& 2.6955e-02 &2.3961 &1.5557e-02& 3.3698 \\
4 & 7.9984e-02 & 2.1852 & 7.3506e-04 &3.2813& 5.7394e-03 &2.4563 &2.5042e-03&2.901 \\
\hline
&&&&$k=3$&&&&\\
\hline
1 & 4.7047e-01 &--& 2.7703e-02 &--& 7.6521e-02 &--&1.2870e-01&-- \\
2 & 6.6885e-02 & 2.6785 & 1.9608e-03 &3.6361& 8.8654e-03 &2.9595& 9.8594e-03&3.527 \\
3 & 8.6199e-03 & 3.2325 & 1.2957e-04 &4.2864& 7.0084e-04 &4.0036&6.8631e-04&4.204 \\
4 & 1.0955e-03 & 3.2757 & 8.2702e-06 &4.3694& 7.6611e-05 &3.5150 &4.8863e-05&4.196\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/P432-1.jpg}
\caption{}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/P432-2.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{pics/P432-3.jpg}
\caption{}
\end{subfigure}
\caption{Test problem 3: (a) x-component of ${\bf u}_h$, (b) y-component of ${\bf u}_h$, (c) pressure $p_h$}
\end{figure*}
\section{Conclusion}In this paper, we apply the weak Galerkin finite element method to Stokes interface problems with complex interfaces. We present a weak Galerkin numerical scheme that uses only a single value at the interface. It is proved that the optimal convergence order is attained in both the energy norm and the standard $L^2$ norm. The numerical examples likewise exhibit optimal convergence orders in the energy norm and the $L^2$ norm, in agreement with the theory.
\bibliographystyle{siam}
\label{sec:1}
In this section, we introduce the basic notation and terminology
which we will use throughout this paper.
Most of our notation and definitions, including those
originating from the general theory of stochastic processes and
stochastic analysis, are standard. We refer the reader to the
monographs \cite{CW1990}, \cite{HWY1992}, \cite{JS2003}
and \cite{P2004}.
Since at most countable unions of pairwise disjoint sets play an important
role in this paper, we use a well-known symbolic abbreviation. For example, if
$A : = \bigcup_{n=1}^{\infty} A_n$, where $(A_n)_{n \in \N}$ is a
sequence of sets such that $A_i \cap A_j = \emptyset$ for all $i
\not= j$, we simply write $A : = \dju{n=1}{\infty}{7.0}{6.0}A_n$.
Throughout this paper, $(\Omega , \mathcal{F}, {\bf{F}}, {\mathbb{P}})$
denotes a fixed probability space, together with a fixed filtration
${\bf{F}}$. Even if it is not explicitly emphasized, the filtration
${\bf{F}} = (\mathcal{F}_t)_{t \geq 0}$ always is supposed to
satisfy the usual conditions\footnote{$\mathcal{F}_0$ contains all
$\mathbb{P}$-null sets and ${\bf{F}}$ is right-continuous.}.
A real-valued (stochastic) process $X : \Omega \times \R^+
\longrightarrow \R$ (which may be identified with the family of
random variables $(X_t)_{t \geq 0}$, where $X_t(\omega) : =
X(\omega, t)$)\footnote{$\R^{+} : = [0, \infty)$.} is called \textit{adapted} (with respect to
${\bf{F}}$) if $X_t$ is ${\mathcal{F}_t}$-measurable for all $t \in \R^+$. $X$
is called \textit{right-continuous} (respectively
\textit{left-continuous}) if for all $\omega \in \Omega$ the
trajectory $X_{\bullet}(\omega) : \R^+ \longrightarrow \R, t \mapsto
X_t(\omega)$ is a right-continuous (respectively left-continuous)
real-valued function. If all trajectories of $X$ do have left-hand
limits (respectively right-hand limits) everywhere on $\R^+$, $X^{-}
= (X_{t -})_{t \geq 0}$ (respectively $X^{+} = (X_{t +})_{t \geq
0})$ denotes the \textit{left-hand} (respectively
\textit{right-hand}) \textit{limit process}, where $X_{0 -} : =
X_{0+}$ by convention. If all trajectories of $X$ do have left-hand
limits and right-hand limits everywhere on $\R^+$, the \textit{jump
process} $\Delta X = (\Delta X_t)_{t \geq 0}$ is well-defined on
$\Omega \times \R^+$. It is given by $\Delta X : = X^{+} - X^{-}$.
A right-continuous process whose trajectories do have left limits
everywhere on $\R^+$, is known as a
\textit{c\`{a}dl\`{a}g} process. If $X$ is $\mathcal{F} \otimes
\mathcal{B}({\mathbb{R}}^+)$-measurable, $X$ is said to be
\textit{measurable}. $X$ is said to be \textit{progressively
measurable} (or simply \textit{progressive}) if for each $t \geq 0$,
its restriction $X\vert_{\Omega \times [0, t]}$ is $\mathcal{F}_t
\otimes \mathcal{B}([0, t])$-measurable. Obviously, every
progressive process is measurable and (thanks to Fubini) adapted.
A random variable $T : \Omega \longrightarrow [0, \infty]$ is said
to be a \textit{stopping time} or \textit{optional time} (with
respect to ${\bf{F}}$) if for each $t \geq 0$, $\{T \leq t\} \in
{\mathcal{F}}_t$. Let $\mathcal{T}$ denote the set of all stopping
times, and let $S, T \in \mathcal{T}$ such that $S \leq T$. Then
$\rsi S, T \rsi : = \{ (\omega , t) \in \Omega \times \R^+ :
S(\omega) \leq t < T(\omega)\}$ is an example of a
\textit{stochastic interval}. Similarly, one defines the stochastic
intervals $\lsi S, T \lsi$, $\lsi S, T \rsi$ and $\rsi S, T \lsi$.
Note again that $\rsi T \lsi : = \rsi T, T \lsi =
\textup{Gr}(T)\vert_{\Omega \times \R^+}$ is simply the graph of the
stopping time $T : \Omega \longrightarrow [0, \infty]$ restricted to
$\Omega \times \R^+$. $\mathcal{O} = \sigma\big\{[\![T,\infty [\![
\hspace{1mm}: T \in \mathcal{T}\big\}$ denotes the \textit{optional
$\sigma$-field} which is generated by all c\`{a}dl\`{a}g adapted
processes. The \textit{predictable $\sigma$-field} $\mathcal{P}$ is
generated by all left-continuous adapted processes. An
$\mathcal{O}$- (respectively $\mathcal{P}$-) measurable process is
called \textit{optional} or \textit{well-measurable} (respectively
\textit{predictable}).
All optional or predictable processes are adapted.
For the convenience of the reader, we recall and summarise the precise
relation between those different types of processes in the following
\begin{theorem}\label{thm:POPA}
Let $(\Omega , \mathcal{F}, {\bf{F}}, {\mathbb{P}})$ be a filtered
probability space such that ${\bf{F}}$ satisfies the usual
conditions. Let $X$ be a (real-valued) stochastic process on $\Omega \times
{\mathbb{R}}^+$. Consider the following statements:
\begin{description}[(iii)]
\item[(i)] $X$ is predictable;
\item[(ii)] $X$ is optional;
\item[(iii)] $X$ is progressive;
\item[(iv)] $X$ is adapted.
\end{description}
Then the following implications hold:
\[
\textstyle{(i)} \Rightarrow \textstyle{(ii)} \Rightarrow
\textstyle{(iii)} \Rightarrow \textstyle{(iv)}.
\]
If $X$ is right-continuous, then the following implications hold:
\[
\textstyle{(i)} \Rightarrow \textstyle{(ii)} \iff \textstyle{(iii)}
\iff \textstyle{(iv)}.
\]
If $X$ is left-continuous, then all statements are equivalent.
\end{theorem}
\begin{proof}
\smartqed
The general chain of implications $\textstyle{(i)} \Rightarrow
\textstyle{(ii)} \Rightarrow \textstyle{(iii)} \Rightarrow
\textstyle{(iv)}$ is well-known (for a detailed discussion cf.
e.\,g. \cite[Chapter 3]{CW1990}). If $X$ is left-continuous and
adapted, then $X$ is predictable. Hence, in this case, all four
statements are equivalent. If $X$ is right-continuous and adapted,
then $X$ is optional (cf. e.\,g. \cite[Theorem 4.32]{HWY1992}). In particular, $X$ is
progressive. \qed
\end{proof}
Recall that a function $f : \R^{+} \longrightarrow \R$ is said to be \textit{regulated on $\R^{+}$} if $f$ has right- and left-limits everywhere on $(0, \infty)$ and $f(0+)$ exists (cf. e.\,g. \cite[Ch. 7.6]{D1960}).
Let us also record the following
\begin{lemma}\label{lemma:optional and regulated paths}
Let $X : \Omega \times \R^+ \longrightarrow \R$ be a stochastic process such that its trajectories are regulated. Then all trajectories of the left limit process $X^{-}$ $($respectively of the right limit process $X^{+}$$)$ are left-continuous $($respectively right-continuous$)$. If in addition $X$ is optional, then $X^{-}$ is predictable and $X^+$ is adapted.
\end{lemma}
Given an optional process $X$ with regulated trajectories, we put
\[
\{\Delta X \not= 0 \} : = \{(\omega, t) \in \Omega \times {{\R}^+} : \Delta X_t(\omega) \not= 0\}\,.
\]
Recall the important fact that for any $\varepsilon > 0$ and any regulated function $f : \R^+ \longrightarrow \R$ the set $J_f(\varepsilon) := \{t >0 : \vert \Delta f(t) \vert > \varepsilon\}$ is at most countable, implying that
\[
J_f : = \{t > 0: \Delta f(t) \not= 0\} = \{t > 0 : \vert
\Delta f(t) \vert > 0\} = \bigcup\limits_{n \in \N}J_f(\frac{1}{n})
\]
is at most countable as well (cf. \cite[p. 286-288]{Ho1921} and \cite[Theorem 1.3]{K2004}).
\section{Construction of Thin Sets of Jumps of C\`{a}dl\`{a}g Adapted Processes}
\label{sec:2}
In the general framework of semimartingales with jumps (such as e.\,g. L\'{e}vy processes) there are several ways to describe a stochastic integral with respect to a (random) jump measure $j_X$ of a c\`{a}dl\`{a}g adapted stochastic process $X = (X_t)_{t\geq 0}$. One approach is to implement the important subclass of ``thin'' subsets of ${\Omega \times \mathbb{R}}^{+}$ (cf. \cite[Def. 1.30]{JS2003}) in order to analyse the set $\{\Delta X \not= 0 \}$:
\begin{theorem}[Dellacherie, 1972]\label{thm:Dellacherie}
Let $X = (X_t)_{t \geq 0}$ be an arbitrary ${\bf{F}}$-adapted c\`{a}dl\`{a}g stochastic process on $(\Omega , {\mathcal{F}}, {\bf{F}}, {\mathbb{P}})$. Then there exists a sequence $(T_n)_{n \in {\mathbb{N}}}$ of ${\bf{F}}$-stopping times such that $\rsi T_n \lsi \cap \rsi T_k \lsi = \emptyset$ for all $n \not= k$ and
\[
\{\Delta X \not= 0 \} = \dju{n=1}{\infty}{3.3}{2.0}\rsi T_n \lsi\,.
\]
In particular, $\Delta X_{T_n(\omega)}(\omega) \not= 0$ for all $\omega \in \Omega$ and $n \in {\mathbb{N}}$.
\end{theorem}
A naturally appearing, iterative and hence implementable exhausting representation is given in the following important special case (cf. e.\,g. \cite[p. 25]{P2004} or the proof of \cite[Lemma 2.3.4.]{A2009}):
\begin{prop}\label{thm:iterative exhausting representation}
Let $X = (X_t)_{t \geq 0}$ be an arbitrary ${\bf{F}}$-adapted c\`{a}dl\`{a}g stochastic process on $(\Omega , {\mathcal{F}}, {\bf{F}}, {\mathbb{P}})$ and $A \in {\mathcal{B}}({\mathbb{R}})$ such that $0 \notin \overline{A}$. Put
\[
T_1^A(\omega) : = \inf\{t > 0 : \Delta X_t(\omega) \in A\}
\]
and
\[
T_n^A(\omega) : = \inf\{t > T_{n-1}^A(\omega) : \Delta X_t(\omega) \in A\} \hspace{5mm} (n \geq 2).
\]
Up to an evanescent set, $(T_n^A)_{n \in {\mathbb{N}}}$ defines a strictly increasing sequence of ${\bf{F}}$-stopping times, satisfying
\[
\{\Delta X \in A \} = \dju{n=1}{\infty}{3.3}{2.4}\rsi S_n^A \lsi \,,
\]
where
\[
S_n^A : = T_n^A \,\ind_{A}\big(\Delta X_{T_n^A}\big) + (+\infty) \,\ind_{A^c}\big(\Delta X_{T_n^A}\big) \,.
\]
\end{prop}
\begin{proof}
\smartqed
In virtue of \cite[Chapter 4, p. 25ff]{P2004} each $T_n^A$ is a ${\bf{F}}$-stopping time and $\Omega_0 \times {\mathbb{R}}^{+}$ is an evanescent set, where $\Omega_0 : = \{\omega \in \Omega : \lim\limits_{n \to \infty} T_n^A(\omega) < \infty\}$.
Fix $(\omega, t) \notin \Omega_0 \times {\mathbb{R}}^{+}$.
{}
Assume by contradiction that $T_{m_0}^A(\omega) = T_{m_0 + 1}^A(\omega) = : t^\ast$ for some $m_0 \in \mathbb{N}$. By definition of $t^\ast = T_{m_0 + 1}^A(\omega)$, there exists a sequence $(t_n)_{n \in \mathbb{N}}$ with $\lim\limits_{n \to \infty} t_n = t^\ast$ such that $\Delta X_{t_n}(\omega) \in A$ and $t^\ast = T_{m_0}^A(\omega) < t_{n+1} \leq t_n$ for all $n \in \mathbb{N}$. Consequently, since $X$ has right-continuous paths, it follows that $\Delta X_{t^\ast}(\omega) = \lim\limits_{n \to \infty} \Delta X_{t_n}(\omega) \in \bar{A}$, implying that $\Delta X_{t^\ast}(\omega) \not= 0$ (since $0 \notin \overline{A}$). Thus $\lim\limits_{n \to \infty} t_n = t^\ast$ is an accumulation point of the at most countable set $\{t > 0 : \Delta X_t(\omega) \not=0 \}$, a contradiction.
To prove the set equality let firstly $\Delta X_t(\omega) \in A$. Assume by contradiction that for all $m \in {\mathbb{N}}$ $T_m^A(\omega) \not= t$. Since $\omega \notin \Omega_0$, there is some $m_0 \in {\mathbb{N}} \cap [2, \infty)$ such that $T_{m_0}^A(\omega) > t$. Choose $m_0$ small enough, so that $T_{m_0 - 1}^A(\omega) \leq t < T_{m_0}^A(\omega)$. Consequently, since $\Delta X_t(\omega) \in A$, we must have $t \leq T_{m_0 - 1}^A(\omega)$ and hence $T_{m_0 - 1}^A(\omega) = t$. However, the latter contradicts our assumption. Thus, $\{\Delta X \in A \} \subseteq \bigcup_{n=1}^{\infty} \rsi T_n^A \lsi \,$.
{}
The claim now follows from \cite[Theorem 3.19]{HWY1992}.\qed
\end{proof}
\begin{rem}
Note that $\{S_n^A < +\infty \} \subseteq \{\Delta X_{T_n^A} \in A\} \subseteq \{S_n^A = T_n^A\}$. Hence,
\[
\ind_{A}\big(\Delta X_{T_n^A}\big)\ind_{\{T_n^A \leq t\}} = \ind_{\{S_n^A \leq t\}}
\]
for all $n \in {\mathbb{N}}$.
\end{rem}
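For a single trajectory whose jumps are stored as a finite list of pairs $(t, \Delta X_t)$, the recursion defining $T_1^A, T_2^A, \ldots$ in Proposition \ref{thm:iterative exhausting representation} can be sketched as follows; this is a toy discrete illustration, and the jump list and the set $A$ below are illustrative assumptions:

```python
def iterated_entry_times(jumps, in_A):
    """T_1^A < T_2^A < ...: successive jump times whose jump size
    lies in A, for a finite list of (time, jump_size) pairs."""
    times, last = [], 0.0
    for t, dx in sorted(jumps):
        if t > last and in_A(dx):
            times.append(t)
            last = t
    return times

# Toy jump list of a cadlag path; A = {x : |x| >= 1}, so 0 is not in cl(A).
jumps = [(0.5, 2.0), (1.2, 0.1), (2.0, -1.5), (3.7, 0.05), (4.1, 1.0)]
print(iterated_entry_times(jumps, lambda x: abs(x) >= 1.0))  # -> [0.5, 2.0, 4.1]
```

Only finitely many jumps are modeled here, which corresponds to the almost sure local finiteness of jumps in a set bounded from below.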
Next, we recall and rewrite equivalently the construction of a random measure on ${\mathcal{B}}{}({\mathbb{R}}^{+} \times {\mathbb{R}}{})$ (cf. e.\,g. \cite[Def. 1.3]{JS2003}):
\begin{definition}
A random measure on $\R^{+} \times \R$ is a family $\mu \equiv (\mu(\omega; d(s, x)) : \omega \in \Omega)$ of non-negative measures on $(\R^{+} \times \R, {\mathcal{B}}{}({\mathbb{R}}^{+} \times {\mathbb{R}}{}))$, satisfying $\mu(\omega; \{0\} \times \R) = 0$ for all $\omega \in \Omega$.
\end{definition}
Given an adapted $\R$-valued c\`{a}dl\`{a}g process $X$, a particular (integer-valued) random measure (cf. e.\,g. \cite[Prop. 1.16]{JS2003}) is given by the \textit{jump measure of $X$}, defined as
\begin{eqnarray*}
j_X(\omega, B) & : = & \sum_{s
>0}\ind_{\{\Delta X \not= 0\}}(\omega, s)\,\varepsilon_{\big(s, \Delta
X_{s}(\omega)\big)}(B)\\
& = & \sum_{s > 0} \ind_B\big(s, \Delta X_s(\omega)\big)\ind_{{\mathbb{R}}^\ast}(\Delta X_s(\omega))\\
& = & \#\big\{s > 0 : \Delta X_s(\omega) \not= 0 \mbox{ and } \big(s, \Delta X_s(\omega)\big) \in B \big\}\,,
\end{eqnarray*}
where $\varepsilon_{a}$ denotes the Dirac measure at point $a$ and $B \in {\mathcal{B}}{}({\mathbb{R}}^{+} \times {\mathbb{R}}{})$.
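For a trajectory with finitely many jumps, evaluating $j_X(\omega, B)$ amounts to counting the pairs $(s, \Delta X_s(\omega))$ with $\Delta X_s(\omega) \not= 0$ that fall into $B$. A minimal sketch, where the jump list and the test set $B$ are illustrative assumptions:

```python
def jump_measure(jumps, in_B):
    """j_X(omega, B) = #{s > 0 : Delta X_s != 0 and (s, Delta X_s) in B},
    for a finite list of (time, jump_size) pairs."""
    return sum(1 for s, dx in jumps if dx != 0 and in_B(s, dx))

jumps = [(0.5, 2.0), (1.2, 0.1), (2.0, -1.5), (4.1, 1.0)]
# B = (0, 3] x A with A = {x : |x| >= 1}.
print(jump_measure(jumps, lambda s, dx: 0 < s <= 3 and abs(dx) >= 1.0))  # -> 2
```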
Keeping the above representation of the jump measure $j_X$ in mind, we now are going to consider an important special case of a Borel set $B$ on ${\mathbb{R}}^{+} \times {\mathbb{R}}$, leading to the construction of ``stochastic'' integrals with respect to the jump measure $j_X$ including the construction of stochastic jump processes which play a fundamental role in the theory and application of L\'{e}vy processes.
{}
To this end, let us consider all Borel sets $B$ on ${\mathbb{R}}^{+} \times {\mathbb{R}}$ of type $B = [0, t] \times A$, where $t \geq 0$ and
\[
A \in {\mathcal{B}}^\ast : = \{A : A \in {\mathcal{B}}({\mathbb{R}}), 0 \notin \overline{A}\}\,.
\]
Obviously, $A \subseteq {\mathbb{R}}\setminus(-\varepsilon, \varepsilon)$ for \textit{some} $\varepsilon > 0$, implying in particular that $A \in {\mathcal{B}}^\ast$ is bounded from below. Let us recall the following
\begin{lemma}\label{thm:Applebaum}
Let $X = (X_t)_{t\geq 0}$ be a c\`{a}dl\`{a}g process. Let $A \in {\mathcal{B}}^\ast$ and $t > 0$. Then $N_X^A(t) : = j_X(\cdot, [0, t] \times A) < \infty$ a.\,s.
\end{lemma}
\begin{proof}
This is \cite[Lemma 2.3.4.]{A2009}.
\qed
\end{proof}
\begin{prop}\label{thm:integral}
Let $X = (X_t)_{t\geq 0}$ be a c\`{a}dl\`{a}g process and $f : {\mathbb{R}}^{+} \times {\mathbb{R}} \rightarrow {\mathbb{R}}$ be measurable. Let $A \in {\mathcal{B}}^\ast$ and $t > 0$. Then for all $\omega \in \Omega$ the function $\ind_{[0, t] \times A} \, f$ is a.\,s. integrable with respect to the jump measure $j_X(\omega, d(s,x))$, and
\begin{eqnarray*}
& {} & \int\limits_{[0, t] \times A} f(s, x) j_X(\omega, d(s,x)) {}\\
& = & \sum_{0 < s \leq t} f\big(s, \Delta X_s(\omega)\big)\ind_{A}(\Delta X_s(\omega)){}\\
& = & \sum_{n=1}^\infty f\big(T_n(\omega), \Delta X_{T_n(\omega)}(\omega)\big)\ind_{A}\big(\Delta X_{T_n(\omega)}\big)\ind_{\{T_n \leq t\}}(\omega).
\end{eqnarray*}
{}
Moreover, given $\omega \in \Omega$ there exists $c_t^A(\omega) \in \R^{+}$ such that
\[
\int\limits_{[0, t] \times A} \vert f(s, x) \vert j_X(\omega, d(s,x)) \leq c_t^A(\omega) \, j_X(\omega, [0, t] \times A)\,.
\]
\end{prop}
\begin{proof}
Fix $\omega \in \Omega$ and consider the measurable function $g_t^A : = \ind_{[0, t] \times A}\, f$. Then ${\mathbb{R}}^{+} \times {\mathbb{R}} = B_1(\omega) \dotcup B_2(\omega)$, where $B_1(\omega) : = \{(s, \Delta X_s(\omega)) : s > 0\}$ and $B_2(\omega) : = {\mathbb{R}}^{+} \times {\mathbb{R}}\setminus B_1(\omega)$. Obviously, we have
\[
j_X(\omega, B_2(\omega)) = \sum\limits_{s > 0} \ind_{B_2(\omega)}\big(s, \Delta X_s(\omega)\big)\ind_{{\mathbb{R}}^\ast}(\Delta X_s(\omega)) = 0\,,
\]
implying that $I_2 : = \int\limits_{B_2(\omega)} \vert g_t^A(s, x) \vert j_X(\omega, d(s,x)) = 0$. Put $I_1 : = \int\limits_{B_1(\omega)} \vert g_t^A(s, x) \vert j_X(\omega, d(s,x))$. Since on $[0, t]$ the c\`{a}dl\`{a}g path $s \mapsto X_s(\omega)$ has only finitely many jumps in $A \in {\mathcal{B}}^\ast$, there exist finitely many elements $(s_1, \Delta X_{s_1}(\omega)), \ldots, (s_N, \Delta X_{s_N}(\omega))$, all lying in $\big([0, t] \times A\big) \cap B_1(\omega)$ (for some $N = N(\omega, t, A) \in {\mathbb{N}}$). Put
\[
0 \leq c_t^A(\omega) : = \max\limits_{1 \leq k \leq N}{}\vert f(s_k, \Delta X_{s_k}(\omega))\vert < \infty \,.
\]
Then
\[
\vert g_t^A \vert = \ind_{[0, t] \times A} \, \vert f \vert \leq c_t^A(\omega) \, \ind_{[0, t] \times A} \mbox{ on } B_1\,,
\]
and it follows that $I_1 \leq c_t^A(\omega) \, j_X(\omega, [0,t] \times A)$. A standard monotone class argument finishes the proof.\qed
\end{proof}
{}
\begin{rem}
Note that in terms of the previously discussed stopping times $S_n^A$ we may write
\[
\int\limits_{[0, t] \times A} f(s, x) j_X(\omega, d(s,x)) = \sum_{n=1}^\infty f\big(S_n^A(\omega), \Delta X_{S_n^A(\omega)}(\omega)\big)\ind_{\{S_n^A \leq t\}}(\omega)\,.
\]
\end{rem}
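Since $A \in {\mathcal{B}}^\ast$ traps only finitely many jumps on $[0, t]$, the integral of Proposition \ref{thm:integral} reduces to a finite sum. A toy sketch, with an illustrative jump list, horizon and set $A$:

```python
def jump_integral(jumps, f, t, in_A):
    """Integral of f over [0, t] x A against the jump measure: the
    finite sum of f(s, Delta X_s) over jump times s <= t with
    Delta X_s in A."""
    return sum(f(s, dx) for s, dx in jumps if 0 < s <= t and in_A(dx))

jumps = [(0.5, 2.0), (1.2, 0.1), (2.0, -1.5), (4.1, 1.0)]
in_A = lambda x: abs(x) >= 1.0
print(jump_integral(jumps, lambda s, x: x, 3.0, in_A))    # f(s,x) = x -> 0.5
print(jump_integral(jumps, lambda s, x: 1.0, 3.0, in_A))  # f(s,x) = 1 -> 2.0
```

The choice $f(s,x) = 1$ recovers the counting value $j_X(\omega, [0,t] \times A)$, in line with the special cases discussed next.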
{}
In the case of a L\'{e}vy process $X$ the important special cases $f(s,x) : = 1$ and $f(s,x) : = x$ are covered by the following crucial result (cf. e.\,g. \cite{A2009}):
\begin{thm}
Let $X = (X_t)_{t\geq 0}$ be a (c\`{a}dl\`{a}g) L\'{e}vy process and $A \in {\mathcal{B}}^\ast$.
\begin{description}[(ii)]
\item[(i)] Given $t \geq 0$
\begin{eqnarray*}
N_X^A(t) & = & \int\limits_{A} N_X^{dx}(t) := j_X(\cdot, [0, t] \times A) = \int\limits_{[0, t] \times A} j_X(\cdot, d(s,x)) \\
& \,\,= & \sum_{0 < s \leq t} \ind_{A}(\Delta X_s) \, = \, \sum_{n=1}^\infty \ind_{A}\big(\Delta X_{T_n}\big)\ind_{\{T_n \leq t\}} \, = \, \sum_{n=1}^\infty \ind_{\{S_n^A \leq t\}}
\end{eqnarray*}
induces a Poisson process $N_X^A = \big(N_X^A(t)\big)_{t \geq 0}$ with intensity measure $\nu_X(A) : = {\mathbb{E}}[N_X^A(1)] < \infty$.
\item[(ii)] Given $t \geq 0$ and a Borel measurable function $g : {\mathbb{R}} \longrightarrow {\mathbb{R}}$
\begin{eqnarray*}
Z_X^A(t) & : = & \int\limits_{A} g(x)\, N_X^{dx}(t) = \int\limits_{[0, t] \times A} g(x)\, j_X(\cdot, d(s,x))\\
& \,\, = & \sum_{0 < s \leq t} g\big(\Delta X_s\big) \, \ind_{A}(\Delta X_s) = \sum_{n=1}^\infty g\big(\Delta X_{T_n}\big)\ind_{A}\big(\Delta X_{T_n}\big)\ind_{\{T_n \leq t\}}\\
& \,\, = & \sum_{n=1}^\infty g\big(\Delta X_{S_n^A}\big)\ind_{\{S_n^A \leq t\}} = \sum_{n = 1}^{N_X^A(t)}g\big(\Delta X_{S_n^A}\big)
\end{eqnarray*}
induces a compound Poisson process $Z_X^A = \big(Z_X^A(t)\big)_{t \geq 0}$. Moreover, if $g \in L^1(A, \nu_X)$ then ${\mathbb{E}}[Z_X^A(t)] = t\nu_X(A){\mathbb{E}}[g\big(\Delta X_{S_1^A}\big)]$.
\end{description}
\end{thm}
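The Poisson structure in (i) and (ii) can be illustrated by direct simulation. A minimal sketch, assuming exponential interarrival times with rate $\nu_X(A)$ and constant jump sizes, so that $Z_X^A = N_X^A$; all names and parameters below are illustrative assumptions:

```python
import random

def compound_poisson_jumps(rate, jump_sampler, t, rng):
    """Jumps of a compound Poisson path on (0, t]: exponential
    interarrival times with the given rate, i.i.d. jump sizes."""
    jumps, s = [], rng.expovariate(rate)
    while s <= t:
        jumps.append((s, jump_sampler(rng)))
        s += rng.expovariate(rate)
    return jumps

rng = random.Random(0)
rate, t = 2.0, 1.0
# Constant jump size 1 and A = {x : x >= 1}, so Z_X^A(t) = N_X^A(t)
# and E[Z_X^A(t)] = t * nu_X(A) = rate * t.
totals = [sum(x for _, x in compound_poisson_jumps(rate, lambda r: 1.0, t, rng))
          for _ in range(5000)]
print(sum(totals) / len(totals))  # close to rate * t = 2.0
```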
\section{Jump Measures of Optional Processes with Regulated Trajectories}
\label{sec:3}
One of the aims of our paper is to transfer, in particular, Theorem \ref{thm:Dellacherie} to the class of optional processes with regulated trajectories in order to construct a well-defined jump measure for such optional processes.
As we have seen, the right-continuity of the paths of $X$ plays a significant role in the proof of Proposition \ref{thm:iterative exhausting representation}. We will see that a similar result holds for optional processes with regulated trajectories. However, it seems that we cannot simply implement the above sequence $(S_n^A)_{n \in \N}$ if the paths of $X$ are not right-continuous.
Our next contribution shows that we are not working with ``abstract nonsense'' only:
\begin{ex}
Optional processes which do not necessarily have right-continuous paths have emerged as naturally appearing candidates in the framework of enlarged filtration in financial mathematics (formally either describing ``insider trading information'' or ``extended information by inclusion of the default time of a counterparty'') including the investigation of the problem whether the no-arbitrage conditions are stable with respect to a progressive enlargement of filtration and how an arbitrage-free semimartingale model is affected when stopped at a random horizon (cf. \cite{ACDJ_1_2013}, \cite{ACDJ_2_2013} and \cite{ACDJ_3_2013}).
Given a random time $\tau$, one can construct the smallest right-continuous filtration $\mathbb{G}$ which contains the given filtration $\mathbb{F}$ and makes $\tau$ a $\mathbb{G}$-stopping time (known as \textit{progressive enlargement of $\mathbb{F}$ with $\tau$}). Then one can associate to $\tau$ the two $\mathbb{F}$-supermartingales $Z$ and $\widetilde{Z}$, defined through
\[
Z_t : = {\mathbb{P}}(\tau > t \vert {\mathcal{F}}_t ) \textup { and } {\widetilde{Z}}_t : = {\mathbb{P}}(\tau \geq t \vert {\mathcal{F}}_t )\,.
\]
$Z$ is c\`{a}dl\`{a}g, while $\widetilde{Z}$ is an optional process with regulated trajectories only.
\end{ex}
A first step towards the construction of a similar iterative and implementable exhausting representation of the set $\{\Delta X \not= 0\}$ for optional processes is encoded in the following
\begin{prop}\label{thm:finitely_layered_jump_rep}
Let $f : {\R^{+}} \longrightarrow {\mathbb{R}}$ be an arbitrary regulated function. Then
\[
J_f = \dju{n=1}{\infty}{3.3}{2.0}D_n\,,
\]
where each $D_n$ is a finite set.
\end{prop}
\begin{proof}
\smartqed
Since $(0, \infty) = \dju{n=1}{\infty}{7.0}{6.0}(n-1, n]$, it follows that $J_f = \dju{n=1}{\infty}{7.0}{6.0}J_{f_n}$, where $f_n : = f\vert_{(n-1, n]}$ denotes the restriction of $f$ to the interval $(n-1, n]$. Fix $n \in \N$. Since every bounded infinite set of real numbers has a limit point (by Bolzano-Weierstrass), the at most countable set
\[
J_{f_n}(\frac{1}{m}) = \big\{t: n-1 < t \leq n \textup{ and } \vert \Delta f(t) \vert > \frac{1}{m}\big\}
\]
must be already finite for each $m \in \N$ (cf. \cite[Theorem 2.6]{B1998} and \cite[p. 286-288]{Ho1921}). Moreover, $J_{f_n}(\frac{1}{m}) \subseteq J_{f_n}(\frac{1}{m+1})$ for all $m \in \N$. Consequently, we have
\[
J_{f_n} = \bigcup_{m = 1}^{\infty} J_{f_n}(\frac{1}{m}) = \dju{m=1}{\infty}{3.4}{1.8}A_{m,n},
\]
where $A_{1, n} : = J_{f_n}(1) = \{\vert \Delta f_n \vert > 1\}$ and $A_{m+1, n} := \{\vert \Delta f_n \vert \in \big(\frac{1}{m+1}, \frac{1}{m}\big]\}$ for all $m \in \N$, and hence
\[
J_f = \dju{n=1}{\infty}{3.4}{1.8}\dju{m=1}{\infty}{3.4}{1.8}A_{m,n}\,.
\]
Since $A_{m, n} \subseteq J_{f_n}(\frac{1}{m})$ for all $m \in \N$, each set $A_{m,n}$ consists of finitely many elements only.
\qed
\end{proof}
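The double decomposition in the proof can be mirrored directly: a jump at time $t$ with size $\Delta f(t)$ belongs to the time window $(n-1, n]$ and to the size band $\vert \Delta f(t) \vert > 1$ or $\vert \Delta f(t) \vert \in (\frac{1}{m+1}, \frac{1}{m}]$. A toy sketch with illustrative inputs:

```python
import math

def layer(t, dx):
    """Indices (m, n) with t in (n-1, n] and |dx| in the m-th size band:
    m = 1 for |dx| > 1, and |dx| in (1/(m+1), 1/m] gives band m + 1."""
    n = math.ceil(t)
    a = abs(dx)
    m = 1 if a > 1 else math.floor(1 / a) + 1
    return m, n

print(layer(0.5, 2.0))   # -> (1, 1)
print(layer(1.2, 0.4))   # -> (3, 2)
print(layer(2.0, 0.25))  # -> (5, 2)
```

Each layer $A_{m,n}$ is finite, so enumerating all jumps layer by layer yields the at most countable exhaustion of $J_f$.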
\begin{lemma}\label{lemma:recursive construction of finite sets}
Let $\emptyset \not= D$ be a finite subset of $\R$, consisting of
$\kappa_D$ elements. Consider
\[
s_1^D : = \min(D)
\]
and, if $\kappa_D \geq 2$,
\[
s_{n}^D := \min\big(D \cap (s_{n-1}^D, \infty)\big) = \min\{t > s_{n-1}^D: t \in D\},
\]
where $n \in \{2, 3, \ldots, \kappa_D\}$. Then $D \cap (s_{n-1}^D, \infty) \not= \emptyset$ and $s_{n-1}^D < s_{n}^D$ for all $n \in \{2, 3, \ldots, \kappa_D\}$. Moreover, we have
\[
D = \big\{s_1^D, s_2^D, \ldots, s_{\kappa_D}^D\big\}\,.
\]
\end{lemma}
\begin{proof}
\smartqed
Let $\kappa_D \geq 2$. Obviously, it follows that $D \cap (s_{1}^D, \infty) \not= \emptyset$. Now assume by contradiction that there exists $n \in \{2, \ldots, \kappa_D - 1\}$ such that $D \cap (s_{n}^D, \infty) = \emptyset$. Let $m^\ast$ be the smallest $m \in \{2, \ldots, \kappa_D - 1\}$ such that $D \cap (s_{m}^D, \infty) = \emptyset$. Then $s_{k}^D : = \min(D \cap (s_{k-1}^D, \infty)\big)
\in D$ is well-defined for all $k \in \{2, \ldots, m^\ast\}$, and we obviously have $s_{1}^D < s_{2}^D < \ldots < s_{m^\ast}^D$. Moreover, by construction of $m^\ast$, it follows that
\begin{equation}\label{eq:induction statement}
s \leq s_{m^\ast}^D \textup{ for all } s \in D.
\end{equation}
Assume now that there exists $\widetilde{s} \in D$ such that $\widetilde{s} \not\in \{s_{1}^D, s_{2}^D, \ldots, s_{m^\ast}^D\}$. Then, by (\ref{eq:induction statement}), there must exist $l \in \{1, 2, \ldots, m^\ast-1 \}$ such that $s_{l}^D < \widetilde{s} < s_{l+1}^D$, which is a contradiction, due to the definition of $s_{l+1}^D$. Hence, $\widetilde{s}$ cannot exist, and it consequently follows that $D = \{s_{1}^D, s_{2}^D, \ldots, s_{m^\ast}^D\}$. But then $m^\ast = \#(D) \leq \kappa_D - 1 < \kappa_D$, which is a contradiction. Hence, $D \cap (s_{n}^D, \infty) \not= \emptyset$ for any $n \in \{2, \ldots, \kappa_D - 1\}$, implying that $s_{n}^D \in D$ is well-defined and $s_{n}^D < s_{n+1}^D$ for all $n \in \{1, 2, \ldots, \kappa_D - 1\}$. Clearly, we must have $D = \big\{s_1^D, s_2^D, \ldots, s_{\kappa_D}^D\big\}$.
\qed
\end{proof}
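Lemma \ref{lemma:recursive construction of finite sets} is the familiar fact that a finite set is exhausted by iterated minima; a minimal sketch, where the input set is illustrative:

```python
def extract_increasing(D):
    """s_1 = min(D), s_n = min of D intersected with (s_{n-1}, infinity):
    recovers a finite set as a strictly increasing sequence."""
    out = [min(D)]
    while any(t > out[-1] for t in D):
        out.append(min(t for t in D if t > out[-1]))
    return out

print(extract_increasing({3.0, 1.0, 2.5}))  # -> [1.0, 2.5, 3.0]
```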
Let $A \subseteq \Omega \times {\mathbb{R}}^{+}$ and $\omega \in
\Omega$. Consider
\[
D_A(\omega) : = \inf \{t \in {\mathbb{R}}^{+} : (\omega , t) \in A\}
\in [0, \infty]\,.
\]
$D_A$ is said to be the \textit{d\'{e}but} of $A$. Recall that
$\inf(\emptyset) = + \infty$ by convention. $A$ is called a
\textit{progressive set} if $\ind_{A}$ is a progressively measurable
process. For a better understanding of the main ideas in the proof
of Theorem \ref{thm:jumps of optional processes and stopping
times}, we need the following non-trivial result (a detailed proof
of this statement can be found in e.\,g. \cite{HWY1992}):
\begin{theorem}\label{thm:optional debut is stopping time}
Let $A \subseteq \Omega \times {\mathbb{R}}^{+}$. If $A$ is
a progressive set, then $D_A$ is a stopping time.
\end{theorem}
Next, we reveal how these results enable a transfer of the jump measure for c\`{a}dl\`{a}g and adapted processes to optional processes with infinitely many jumps and regulated trajectories which need not necessarily be right-continuous. To this end, we firstly generalise Theorem \ref{thm:Dellacherie} in the following sense:
\begin{theorem}\label{thm:jumps of optional processes and stopping times}
Let $X : \Omega \times \R^+ \longrightarrow \R$ be an
optional process such that all trajectories of $X$ are regulated
and $\Delta X_0 = 0$. Then $\Delta X$ is also optional.
If the set of jumps of each trajectory of $X$ is infinite, then there
exists a sequence of stopping times $(T_n)_{n \in \N}$ such that
$(T_n(\omega))_{n \in \N}$ is a strictly increasing sequence in $(0,
\infty)$ for all $\omega \in \Omega$ and
\[
J_{X_{\bullet}(\omega)} = \dju{n=1}{\infty}{3.3}{2.4}\{T_n(\omega)\} \textup{
for all } \omega \in \Omega,
\]
or equivalently,
\[
\{\Delta X \not= 0\} = \dju{n = 1}{\infty}{3.3}{2.4}\rsi T_n \lsi\,.
\]
In particular $\{\Delta X \not= 0\}$ is a thin set.
\end{theorem}
\begin{proof}
\smartqed
Due to the assumption on $X$ and Lemma \ref{lemma:optional and regulated paths},
$X^{-}$ is predictable, $X^{+}$ is adapted
and all trajectories of $X^{+}$ are
right-continuous on $\R^+$. Hence, by Theorem \ref{thm:POPA} both,
$X^{-}$ and $X^+$ are optional processes, implying that
the jump process $\Delta X = X^{+} - X^{-}$ is optional as well.
Fix $\omega \in \Omega$. Consider the trajectory $f : =
X_{\bullet}(\omega)$. Due to Proposition
\ref{thm:finitely_layered_jump_rep} we may represent $J_f$ as
\[
J_f = \dju{m = 1}{\infty}{3.4}{2.2} D_m(\omega),
\]
where $\kappa_m(\omega) : = {\#}(D_m(\omega)) < +\infty$
for all $m \in \N$.
Let ${\mathbb{M}}(\omega) : =\{m \in \N : D_{m}(\omega) \not=
\emptyset\}$. Fix an arbitrary $m \in {\mathbb{M}}(\omega)$. Consider
\[
0 < S_1^{(m)}(\omega) : = \min(D_{m}(\omega))
\]
and, if $\kappa_m(\omega) \geq 2$,
\[
0 < S_{n+1}^{(m)}(\omega) := \min\big(D_{m}(\omega) \cap
(S_n^{(m)}(\omega), \infty)\big),
\]
where $n \in \{1, 2, \ldots, \kappa_m(\omega) - 1\}$.
Since $\Delta X$ is optional, it follows that $\{\Delta X \in B\}$
is optional for all Borel sets $B \in \mathcal{B}(\R)$. Moreover,
since $\Delta f(0) = \Delta X_0(\omega) : = 0$ (by assumption), it
actually follows that $\{s \in \R^+ : (\omega, s) \in \{\Delta X \in
C\}\} = \{s \in (0, \infty) : (\omega, s) \in \{\Delta X \in C\}\}$
for all Borel sets $C \in \mathcal{B}(\R)$ which do not contain $0$.
Hence, as the construction of the sets $D_{m}(\omega)$ in the proof
of Proposition \ref{thm:finitely_layered_jump_rep} clearly
shows, $S_1^{(m)}$ is the d\'{e}but of an optional set.
Consequently, due to Theorem \ref{thm:optional debut is stopping
time}, it follows that $S_1^{(m)}$ is a stopping time. If
$S_n^{(m)}$ is a stopping time, the stochastic interval $\lsi
S_n^{(m)}, \infty \rsi$ is optional too (cf. \cite{HWY1992},
Theorem 3.16). Thus, by construction, $S_{n+1}^{(m)}$ is the
d\'{e}but of an optional set and hence a stopping time. Due to Lemma
\ref{lemma:recursive construction of finite sets}, we have
\[
J_f = \dju{m \in {\mathbb{M}}(\omega)}{}{6.0}{2.4} D_{m}(\omega) =
\dju{m \in {\mathbb{M}}(\omega)}{}{6.0}{4.0} \dju{n =
1}{\kappa_m(\omega)}{4.5}{3.0}\{S_n^{(m)}(\omega)\}.
\]
Hence, since the set of jumps of each trajectory of $X$ is infinite,
the at most countable set ${\mathbb{M}}(\omega)$ is
infinite, hence countably infinite, and a simple relabeling of the stopping
times $S_n^{(m)}$ finishes the proof.
\qed
\end{proof}
\begin{theorem}\label{thm:optional random measures}
Let $X : \Omega \times \R^+ \longrightarrow \R$ be an
optional process such that all trajectories of $X$ are regulated,
$\Delta X_0 = 0$ and the set of jumps of each trajectory of $X$ is
not finite. Then the function
\begin{eqnarray*}
j_X : \Omega \times \mathcal{B}(\R^+) \otimes
\mathcal{B}(\R) & \longrightarrow & \Z^+ \cup \{+ \infty\}\\
(\omega, G) & \mapsto & \sum_{s > 0} \ind_{G}\big(s, \Delta
X_{s}(\omega) \big) \ind_{\{\Delta X \not= 0\}}(\omega, s)
\end{eqnarray*}
is an integer-valued random measure.
\end{theorem}
\begin{proof}
\smartqed
We only have to combine Theorem \ref{thm:jumps of optional
processes and stopping times} and \cite{HWY1992}, Theorem
11.13.
\qed
\end{proof}
Implementing the exhausting series of stopping times $(T_n)_{n \in \N}$ of the thin set $\{\Delta X \not= 0\}$ from Theorem \ref{thm:jumps of optional processes and stopping times}, we immediately obtain
\begin{corollary}
Let $B \in {\mathcal{B}}{}({\mathbb{R}}^{+} \times {\mathbb{R}}{})$ and $\omega \in \Omega$. Then
\begin{eqnarray*}
j_X(\omega, B) & = & \int_{{\mathbb{R}}^{+} \times {\mathbb{R}}} \ind_{B}(s, x) j_X(\omega, d(s,x))\\
& = & \sum_{n=1}^\infty \ind_B\big(T_n(\omega), \Delta X_{T_n(\omega)}(\omega)\big)\\
& = & \#\big\{n \in {\mathbb{N}} : \big(T_n(\omega), \Delta X_{T_n(\omega)}(\omega)\big) \in B \big\}\,.
\end{eqnarray*}
\end{corollary}
\begin{proof}
\smartqed
Since $\ind_{\{\Delta X \not= 0\}}(\omega, s) = \sum_{n=1}^\infty \ind_{\,\csbl T_n \csbr}(\omega, s) = \sum_{n=1}^\infty \ind_{\{T_n(\omega)\}}(s)$, we just have to permute the two sums.
\qed
\end{proof}
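As a simple illustration of the corollary, choose $B = (0, t] \times B'$ for some $t > 0$ and a Borel set $B' \in \mathcal{B}(\R)$ with $0 \notin B'$ (this particular choice of $B$ is ours, purely for illustration). Then the corollary returns the familiar jump-counting quantity
\[
j_X\big(\omega, (0, t] \times B'\big) = \#\big\{n \in \N : T_n(\omega) \leq t \textrm{ and } \Delta X_{T_n(\omega)}(\omega) \in B'\big\},
\]
i.e. the number of jumps of the trajectory of $X$ up to time $t$ whose sizes lie in $B'$.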
We finish our paper with the following two natural questions:
\begin{prb}
Let $X : \Omega \times \R^+ \longrightarrow \R$ be an
optional process such that all trajectories of $X$ are regulated
and $\Delta X_0 = 0$. Does Lemma \ref{thm:Applebaum} hold for $X$?
\end{prb}
\begin{prb}
Let $X : \Omega \times \R^+ \longrightarrow \R$ be an
optional process such that all trajectories of $X$ are regulated
and $\Delta X_0 = 0$. Does Proposition \ref{thm:integral} hold for $X$?
\end{prb}
\begin{acknowledgement}
The author would like to thank Monique Jeanblanc for the indication of the very valuable references \cite{ACDJ_1_2013}, \cite{ACDJ_2_2013} and \cite{ACDJ_3_2013}.
\end{acknowledgement}
\input{referenc_FO_2}
\end{document}
The history of the universe is encoded in the matter and radiation that it contains. Within the current leading paradigm of the early universe all deviations from homogeneity and isotropy today were sourced by quantum fluctuations of some, currently unknown, fields during an initial phase of approximately de Sitter expansion known as inflation. We have access to these fields through their statistical properties, which are imprinted onto everything we see in the universe today. This gives us a window through which we can learn about the physics of our universe at energies far higher than we can recreate on earth. So, understanding the statistics of these fields, in particular their correlation functions and the wavefunction from which they originate is a major goal of theoretical cosmology.
In recent years much progress has been made towards developing bootstrap techniques\cite{Creminelli:2011mw,Kehagias:2012pd,Mata:2012bx,Bzowski:2013sza,Ghosh:2014kba,Kundu:2014gxa,Kundu:2015xta,Pajer:2016ieg,Bzowski:2019kwd,Baumann:2019oyu,PSS,Isono:2020qew,Green:2020ebl,Baumann:2020dch,benincasa2022wavefunctionals} for both wavefunction coefficients and correlators. By now the combination of unitarity\cite{COT,sCOTt,Cespedes:2020xqq,goodhew2021cutting}, locality\cite{MLT} and the flat space amplitude limit\cite{Raju:2012zr,Maldacena:2011nz,Arkani-Hamed:2018kmz,benincasa2022amplitudes} have been successful in constraining the most general form of three point functions for both scalars\cite{BBBB} and tensors\cite{cabass2022bootstrapping,cabass2022graviton} as well as specific four point functions\cite{bonifacio2021amplitudes} and loop corrections to the power spectrum\cite{sCOTt}, for a review see\cite{Baumann:2022jpr}. However, to fix these functions at tree level we require a very restrictive, polynomial, ansatz\cite{BBBB,cabass2022bootstrapping}. This is analogous in scope and constraining power to the flat space result that tree level amplitudes can only contain at worst simple poles of invariants built from the exchanged 4-momenta\cite{Benincasa:2007xk,TASI}. A priori we cannot make this assumption and in fact we find logarithmic terms in, for example, non-derivative contact interactions involving both massless and conformally coupled fields\cite{falk1992angular,Arkani-Hamed:2018kmz}. If present, such divergences significantly weaken the constraining power of both the cosmological optical theorem\cite{COT} and related cutting rules\cite{sCOTt,goodhew2021cutting} as well as the manifestly local test\cite{MLT}. This is because there are many more ways to match the discontinuities and singularities they require if more general functions are allowed.
In this paper we prove that, for a theory that is:
\begin{itemize}
\item Massless,
\item Bosonic,
\item Scale invariant,
\item In even spacetime dimension,
\item Parity even,
\item Built from interactions with at least two derivatives, and
\item Tree level
\end{itemize}
all wavefunction coefficients are rational functions in the energies of the internal and external particles (plus perhaps some contractions of helicity tensors and three momenta). The conditions that are required on these fields may seem quite restrictive. However, in even spacetime dimension, they cover all gravitational interactions, which are built from second derivatives of the metric through the Riemann tensor. Furthermore, they are also sufficient to capture the effective field theory (EFT) of single field inflation. This is because the EFT of inflation is a theory of a massless Goldstone mode and so is built entirely from its derivatives in the scale invariant limit\cite{pich2018effective}. This work builds on and generalises the results in \cite{anninos2015late}, where it is shown that the tree level wavefunction is free from logarithmic divergences for gauge fields and gravity in $3+1$ spacetime dimensions.
To do this we will start in \cref{sec:Free} by demonstrating that it is possible to express the solution to the equations of motion as a series containing no odd powers of the conformal time less than the number of spatial dimensions. As a corollary of this we will derive an extension to the manifestly local test\cite{MLT} that is valid in arbitrary spacetime dimension. We then show, in \cref{sec:Contact}, that this result is incompatible with logarithmic divergences from contact interactions of two derivative theories. This is done by first considering the time dependence arising from the time integrals. Then using an altered ansatz for the solution to demonstrate that the only possible irrational function of the momentum that can contribute must be accompanied by a logarithmic time divergence. In \cref{sec:Exchange}, we extend these arguments to exchange diagrams. In \cref{sec:Conclusion}, we first briefly discuss how breaking the assumptions outlined above invalidates these arguments and reintroduces the possibility of logarithmic divergences. Then, finally, we highlight some potential future directions for extending this work and make some concluding remarks.
\section{The Free Theory}\label{sec:Free}
In this section we find a series solution to the equations of motion which, for massless particles, contain no odd powers of $k$ or $\eta$ less than the spatial dimension. In order to make these discussions concrete whilst staying as theory agnostic as possible, we will consider a general class of free theories. In $d+1$ dimensional spacetime, for traceless\footnote{If we also wish to consider fields with a non-zero trace we can subtract the trace and treat it as an additional scalar field.}, integer spin fields we use the free action developed in \cite{bordin2018light} and discussed in the context of the bootstrap method in \cite{goodhew2021cutting},
\begin{equation}
S=\int d^{d+1}x a^{d-1}\frac{1}{2s!}\left[{\Phi_{i_1\dots i_s}'}^2-c_s^2\left(\partial_j\Phi_{i_1\dots i_s}\right)^2-\delta c_s^2 \left(\partial^j\Phi_{ji_2\dots i_s}\right)^2-m^2a^2\left(\Phi_{i_1\dots i_s}\right)^2\right].
\end{equation}
Here, $'$ indicates derivatives with respect to conformal time $\eta$ and the latin indices, $i,j$, span the $d$ dimensional spacelike hypersurface orthogonal to this coordinate. We have enforced scale invariance by including inverse factors of the scale factor for each coordinate derivative. Just as in \cite{goodhew2021cutting} we Fourier transform and diagonalise this using the helicity modes, $\Phi_h$, defined by
\begin{equation}
\Phi_{i_1\dots i_s}=\Phi_h \epsilon^h_{i_1\dots i_s}.
\end{equation}
These helicity tensors are defined as an outer product of helicity vectors,
\begin{equation}
\epsilon^h_{i_1\dots i_s}=\epsilon^{h_1}_{i_1}\dots \epsilon^{h_s}_{i_s},
\end{equation}
which are themselves defined so that they satisfy
\begin{align}
\epsilon^h_i(-\textbf{k})&=\left[\epsilon^h_i(\textbf{k})\right]^*,& \left[\epsilon_i^h(\textbf{k})\right]^*\epsilon_i^{h'}(\textbf{k})&=4\delta^{h h'}.
\end{align}
Note that these fields are not assumed to be transverse; $h$ is allowed to take $d$ different values, including $0$, where $\epsilon^0$ is proportional to the momentum. The contributions from the other helicity modes are then transverse by the orthogonality condition. The equations of motion in terms of these variables are therefore
\begin{equation}
\eta^2 \Phi''_{h}+p(\eta)\eta \Phi'_{h}+q_{h}(\eta)\Phi_{h}=0,
\end{equation}
where
\begin{align}
p(\eta)&=1-d,&q_{h}(\eta)&=\frac{m^2}{H^2}+\eta^2 c_s^2k^2+\eta^2\delta c_s^2 k^2 \lambda_h,
\end{align}
and $k^2\lambda_h$ are the eigenvalues corresponding to each helicity mode $h$; the $k^2$ has been factored out so that $\lambda_h$ is a pure number, independent of $k$ and $\eta$. From this point onwards we set $H=1$ for notational simplicity. Due to the physical origin of each of the terms in $p$ and $q$, they are guaranteed to be analytic everywhere except, perhaps, in the infinite past.
This equation is of the form studied by Frobenius\cite{boas1999mathematical} and therefore has a solution that converges everywhere in the range $[-\infty,0)$ of the form
\begin{equation}
\Phi_h=\eta^\Delta\sum_{n=0}^\infty A_n \eta^n.
\end{equation}
where $\Delta$ satisfies the indicial equation,
\begin{equation}
I(\Delta)=\Delta(\Delta-1)+(1-d)\Delta+m^2=0 \Rightarrow \Delta_\pm= \frac{d}{2}\pm\sqrt{\frac{d^2}{4}-m^2}=\frac{d}{2}\pm\nu.
\end{equation}
When $\Delta_+-\Delta_-=2\nu$ is not an integer we therefore have two linearly independent solutions that can be defined recursively
\begin{align}
\Phi_h^\pm&=\eta^{\Delta_\pm}\sum_{n=0}^\infty A_n^{\pm}\eta^n,& A_n^{\pm}=\frac{-1}{I(n+\Delta_\pm)}\sum_{m=0}^{n-1}\frac{q_h^{(n-m)}(0)}{(n-m)!}A_m^{\pm}.
\end{align}
where $A_0^{\pm}$ are fixed by the initial (or boundary) conditions.
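Because the recursion is purely algebraic, it can be verified symbolically. The sketch below is an illustrative cross-check rather than part of the derivation: it assumes the scale-invariant form $q_h(\eta) = m^2 + c_h^2 k^2 \eta^2$ used later in this section, with the sample values $d = 3$ and $m = 0$, and confirms that the truncated Frobenius series solves the equation of motion up to the truncation order (the names `I_pol` and `q_taylor` are ours).

```python
# Symbolic cross-check of the Frobenius recursion for A_n (illustrative values:
# d = 3, m = 0, and the scale-invariant q_h(eta) = m^2 + c^2 k^2 eta^2).
import sympy as sp

eta, k, c = sp.symbols('eta k c', positive=True)
d, m2 = 3, 0
Delta = sp.Rational(d, 2) + sp.sqrt(sp.Rational(d**2, 4) - m2)   # Delta_+

def I_pol(D):
    """Indicial polynomial I(Delta) = Delta(Delta - 1) + (1 - d) Delta + m^2."""
    return D*(D - 1) + (1 - d)*D + m2

# Taylor coefficients q_h^{(j)}(0)/j! of q_h at eta = 0
q_taylor = {0: m2, 2: c**2*k**2}

N = 8
A = {0: sp.Integer(1)}
for n in range(1, N + 1):
    A[n] = -sum(q_taylor.get(n - m, 0)*A[m] for m in range(n))/I_pol(n + Delta)

# Truncated series solution and the residual of the equation of motion
Phi = eta**Delta*sum(A[n]*eta**n for n in range(N + 1))
q = m2 + c**2*k**2*eta**2
residual = eta**2*sp.diff(Phi, eta, 2) + (1 - d)*eta*sp.diff(Phi, eta) + q*Phi

# Every power of eta up to the truncation order must cancel in the residual.
tail = sp.expand(residual/eta**Delta)
```

All odd coefficients come out zero here, anticipating the scale-invariance argument given below.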
When $2\nu$ is an integer these two solutions are not guaranteed to be linearly independent and it is possible for there to be a logarithmic term. In these cases, the method of Frobenius tells us to take a pair of solutions of the form
\begin{align}
\Phi_h^1&=\Phi_h^+,\\
\Phi_h^2&=C\Phi_h^1\log(\eta)+\eta^{\Delta_-}\sum_{n=0}^\infty B_n\eta^n.
\end{align}
The coefficients $B_n$ and $C$ are fixed by the equations of motion,
\begin{equation}
\sum_{n=0}^\infty 2C\eta^{2\nu}A_n^{+}\eta^n(\nu+n)+\eta^nB_n\left((n+\Delta_-)(n-\Delta_+)+\sum_{m=0}^\infty \frac{q_h^{(m)}(0)}{m!}\eta^m\right)=0.
\end{equation}
For $n<2\nu$ this expression is exactly the same as the equation fixing $A_n^{-}$ and so $B_n=A_n^{-}$ for $n<2\nu$. However, for $n=2\nu$ we find
\begin{equation}
2CA_0^{+}\nu+B_{2\nu}(-\Delta_+\Delta_-+q_{h}^{(0)})+\sum_{m=1}^{2\nu}\frac{q_h^{(m)}(0)}{m!}A_{2\nu-m}^{-}=0.
\end{equation}
The term multiplying $B_{2\nu}$ is zero, so $B_{2\nu}$ is unconstrained and can be set arbitrarily. In fact, any non-zero contribution coming from $B_{2\nu}$ is degenerate with $\Phi^1_h$, so we can absorb this arbitrariness into $A_0$. Furthermore, for finite $A_{2\nu}^{-}$, the sum vanishes,
\begin{equation}
\sum_{m=1}^{2\nu}\frac{q_h^{(m)}(0)}{m!}A_{2\nu-m}^{-}=\sum_{m=0}^{2\nu-1}\frac{q_h^{(2\nu-m)}(0)}{(2\nu-m)!}A_{m}^{-}=-I(\Delta_+)A_{2\nu}^{-}=0,
\end{equation}
which implies that
\begin{equation}
2CA_0^{+}\nu=0.
\end{equation}
We know that $A_0^{+}\neq0$, as $A_0^{+}=0$ would result in $\Phi_h^1=0$. Then we must have either $C=0$ or $\nu=0$. Therefore, for $\nu\neq0$ and finite $A_{2\nu}^{-}$, we will be free from these logarithmic contributions to the mode function and the solution can be written as a sum of powers of $\eta$.
Scale invariance fixes $m,\ c_s$ and $\delta c_s$ to be constants and so all odd coefficients will vanish. To see this, first note that only the 0th and 2nd derivatives of $q$ are non-zero,
\begin{equation}
q_h^{(n)}(0)=m^2\delta_{n0}+2(c_s^2+\delta c_s^2 \lambda_h)k^2 \delta_{n2}.
\end{equation}
This gives us a closed form expression for the coefficients,
\begin{align}
A_{2n}^{\pm}&=A_0^{\pm}k^{2n}\left(\frac{c_h^2}{-4}\right)^n\frac{\Gamma(1\pm\nu)}{\Gamma(1+n)\Gamma(1\pm \nu+n)},\\A_{2n+1}^\pm&=0,
\end{align}
where $c_h^2=c_s^2+\delta c_s^2\lambda_h$. For integer $\nu$ we can see that this expression diverges for $2n=2\nu$ and so we cannot conclude that $C=0$. This can also be seen as insisting that $B_{2\nu}=0$ terminates this sum but the resulting polynomial is not a solution to the equations of motion. For odd integer $2\nu$ we can replace $A_n^-$ with $B_n$ for all $n$. Fixing $B_{2\nu}=0$ in this case ensures that the two are equal beyond $n=\nu$. The general solution to the equations of motion for non-integer $\nu$ can be written as a linear combination of these two sums,
\begin{equation}
\Phi_h=
B_0\sum_{n=0}^{\infty} \frac{ \Gamma(1-\nu)(c_hk\eta)^{2n+\Delta_-}}{(-4)^n\Gamma(1+n)\Gamma(1-\nu+n)}+A_0 \sum_{n=0}^\infty \frac{\Gamma(1+\nu)(c_hk\eta)^{2n+\Delta_+}}{(-4)^n\Gamma(1+n)\Gamma(1+\nu+n)}.
\end{equation}
Here we have redefined the arbitrary coefficients to include powers of $c_hk$ that will be convenient later and dropped the $+$ label. This expression is valid for all non-integer values of $\nu$ as we recovered these series solutions when $2\nu$ is an odd integer. The bulk-boundary propagator coming from this solution is
\begin{equation}\label{eq:K}
K_k^h(\eta)=\left(\frac{\eta}{\eta_0}\right)^{\Delta_-}\left(\sum_{n=0}^{\infty}\frac{ \Gamma(1-\nu)(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma(1-\nu+n)}+\frac{A_0}{B_0}\sum_{n=0}^\infty \frac{\Gamma(1+\nu)(c_hk\eta)^{2n+2\nu}}{(-4)^n\Gamma(1+n)\Gamma(1+\nu+n)} \right).
\end{equation}
The important point to note here is that, for massless particles, $\Delta_-=0$, there are no odd powers of $\eta$ less than $\eta^d$ in this expression. This will be the key observation that will allow us to exclude the possibility of logarithmic divergences\footnote{I would like to thank Enrico Pajer for sharing an unpublished manuscript suggesting this approach to the problem of identifying divergences.}. It is interesting that this absence of odd powers of $\eta$ is precisely what is guaranteed by the Fefferman-Graham expansion of the metric. This was used in \cite{anninos2015late} to conclude that gravity contains no logarithmic divergences. The formalism used here allows us to extend this result to higher (and lower) spins. The bulk-bulk propagator is
\begin{equation}
G_p^h(\eta,\eta')=\begin{cases}\displaystyle
\frac{\Phi_h^1(p,\eta')\Phi_h^2(p,\eta)}{(-\eta')^{1-d}W(\Phi_h^1,\Phi_h^2)}&\eta'\leq\eta,\\\displaystyle
\frac{\Phi_h^1(p,\eta)\Phi_h^2(p,\eta')}{(-\eta')^{1-d}W(\Phi_h^1,\Phi_h^2)}&\eta\leq\eta',
\end{cases}
\end{equation}
where $\Phi_h^{1/2}$ are solutions with coefficients chosen so that they satisfy the specified boundary conditions. The factor of $(-\eta')^{1-d}$ arises because the propagator is the Green's function for the equation
\begin{equation}
a^{d-1}\left(G''+\frac{p(\eta)}{\eta}G'+\frac{q(\eta)}{\eta^2}G\right)=\delta(\eta-\eta'),
\end{equation}
so we must adjust the junction condition accordingly.
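The explicit expansion \cref{eq:K} can be cross-checked against the familiar Bunch--Davies result for $d = 3$ and a massless field. In the $\eta_0 \rightarrow 0$ limit the bulk-boundary propagator becomes $K = (1 - ik\eta)e^{ik\eta}$ (with $c_h = 1$), which corresponds to the boundary-condition choice $A_0/B_0 = i/3$; both identifications are our own illustrative assumptions rather than statements from the text. The sketch below compares Taylor coefficients and, in particular, confirms the absence of a term linear in $\eta$.

```python
# Check that the series (eq. K) with nu = 3/2 reproduces the d = 3 massless
# Bunch-Davies propagator K = (1 - i k eta) e^{i k eta} (eta_0 -> 0, c_h = 1).
# The ratio A_0/B_0 = i/3 is an assumed boundary-condition choice.
import sympy as sp

eta, k = sp.symbols('eta k')
nu = sp.Rational(3, 2)
ratio = sp.I/3

s_minus = sum(sp.gamma(1 - nu)*(k*eta)**(2*n)
              /((-4)**n*sp.factorial(n)*sp.gamma(1 - nu + n)) for n in range(5))
s_plus = ratio*sum(sp.gamma(1 + nu)*(k*eta)**(2*n + 3)
                   /((-4)**n*sp.factorial(n)*sp.gamma(1 + nu + n)) for n in range(4))
K_series = sp.expand(s_minus + s_plus)

# Taylor expansion of the known Bunch-Davies propagator, to the same order
K_BD = sp.series((1 - sp.I*k*eta)*sp.exp(sp.I*k*eta), eta, 0, 9).removeO()
difference = sp.expand(K_series - K_BD)
```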
\subsection{The Manifestly Local Test}\label{sec:MLT}
We now make a brief aside to the main topic of the paper to consider the Manifestly Local Test (MLT) for more general theories. Consider an arbitrary wavefunction coefficient. This is built out of bulk-bulk and bulk-boundary propagators plus some differential operators, $\mathcal{O}_a$ that act on them,
\begin{equation}
\psi_N=\int \prod_{a}\frac{d\eta_a}{\eta_a^{d+1}} \prod_b \mathcal{O}_b K_{k_b}^{h_b} \prod_c \mathcal{O}_c G_{p_c}^{h_c}(\eta,\eta').
\end{equation}
The operators $\mathcal{O}_a$ are arbitrary except they cannot contain any inverse Laplacians (this is the sense in which the theory is ``Manifestly'' Local). For massless particles in $3+1$ spacetime dimensions the MLT\cite{MLT} constrains the second term in the Taylor expansion about zero of all such wavefunction coefficients,
\begin{equation}
\left.\frac{\partial \psi_N}{\partial k_a}\right\rvert_{k_a=0}=0.
\end{equation}
The MLT may appear to some readers like a consistency relation. However, it is distinct as it holds away from physical momenta.
This property is inherited from the fact that there is no term linear in $k$ in the bulk-boundary or bulk-bulk propagators. It is not spoiled by the presence of the operators $\mathcal{O}$ as momentum and time derivatives commute whilst spatial derivatives just bring down additional even powers of the momentum,
\begin{equation}
\textbf{k}_a\cdot\textbf{k}_a=k_a^2,
\end{equation}
or
\begin{equation}
\textbf{k}_a\cdot\textbf{k}_b=\frac{1}{2}\left[(\textbf{k}_a+\textbf{k}_b)\cdot(\textbf{k}_a+\textbf{k}_b)-k_a^2-k_b^2\right]=\frac{1}{2}\left(p^2-k_a^2-k_b^2\right).
\end{equation}
Note that the internal momenta, $p$, are regarded as independent variables when performing this derivative. There may also be some polarisation factors that contract with the momenta but these are explicitly stripped off in the application of the MLT and so we ignore them from the outset here.
In the general dimensional case we find that the first odd power of $k$ in the bulk-boundary propagator is $k^{2\nu}$ and so
\begin{equation}\label{eq:MLT}
\left.\frac{\partial^n K_{k}^{h}(\eta)}{\partial k^n}\right\rvert_{k=0}=0,\ \forall\textrm{ odd } n<2\nu.
\end{equation}
In particular, for massless fields this is true for all odd $n<d$. This follows straightforwardly from the series solution in \cref{eq:K}. Just as in the $d=3$ case, this property of the propagators is passed on to the wavefunction coefficients. One might worry about the potential for $A_0$ and $B_0$ to depend on $k$. However, if we look at the equations of motion we can, by making the substitution $\eta\rightarrow x=c_hk\eta$, conclude that all $k$ dependence in the solution is through $x$. Therefore, the only additional $k$ dependence possible is through an overall scaling which is explicitly canceled in the bulk-boundary propagator by its late time limit.
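As a quick illustration of \cref{eq:MLT}, consider the massless $d = 5$ case. Assuming the standard Bunch--Davies bulk-boundary propagator in the $\eta_0 \rightarrow 0$ limit, $K = e^{ik\eta}(1 - ik\eta - k^2\eta^2/3)$ (a closed form we take as given here rather than derive), the first and third $k$-derivatives vanish at $k = 0$, while the fifth, the first odd derivative not protected by \cref{eq:MLT}, does not:

```python
# Extended MLT check for the (assumed) d = 5 massless Bunch-Davies propagator:
# all odd k-derivatives below k^d = k^5 must vanish at k = 0.
import sympy as sp

eta, k = sp.symbols('eta k')
K5 = sp.exp(sp.I*k*eta)*(1 - sp.I*k*eta - k**2*eta**2/3)

d1 = sp.simplify(sp.diff(K5, k, 1).subs(k, 0))
d3 = sp.simplify(sp.diff(K5, k, 3).subs(k, 0))
d5 = sp.simplify(sp.diff(K5, k, 5).subs(k, 0))   # first unprotected odd derivative
```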
To see the power of this result consider the three point interactions of a massless field in $d=5$ with two derivatives. The wavefunction coefficient will have a total energy pole of order $3$\cite{COT,MLT}. The most general time independent contribution to the solution that we can write down with the correct scaling\footnote{Scale invariance demands that all wavefunction coefficients for massless fields scale like the momentum to the power of the dimension.} is thus
\begin{multline}
k_T^3\psi_3=C_{00}k_T^8+C_{10}e_2k_T^6+C_{01}e_3k_T^5+C_{20}e_2^2k_T^4+C_{11}e_2e_3k_T^3+C_{30}e_2^3k_T^2+C_{02}e_3^2k_T^2\\+C_{21}e_2^2e_3k_T+C_{40}e_2^4+C_{12}e_2e_3^2,
\end{multline}
where $k_T,\ e_2,\ e_3$ are the elementary symmetric polynomials,
\begin{align}
k_T&=k_1+k_2+k_3\\
e_2&=k_1 k_2+k_1 k_3+k_2 k_3\\
e_3&=k_1 k_2 k_3.
\end{align}
We expect to find two distinct wavefunction coefficients, arising from the interactions $(\partial_i\phi)^2\phi$ and ${\phi'}^2\phi$. However, the $d=3$ MLT is only enough to reduce the number of coefficients from $10$ to $5$. The second condition that this extended MLT gives us is exactly sufficient to further reduce the number of free coefficients to $2$,
\begin{multline}
k_T^3\psi_3=\frac{1}{3}(C_{01}-5C_{00})e_2e_3^2+\frac{1}{2}(C_{01}-5C_{00})e_2^2e_3k_T+\frac{1}{6}(C_{01}-5C_{00})(3e_2^3-e_3^2)k_T^2\\-C_{01}e_2e_3k_T^3+\frac{1}{2}(15C_{00}-C_{01})e_2^2k_T^4+C_{01}e_3k_T^5-5C_{00}e_2k_T^6+C_{00}k_T^8.
\end{multline}
We recover results proportional to $(\partial_i\phi)^2\phi$ for $C_{00}=C_{01}$ and to ${\phi'}^2\phi$ for $C_{00}=0$. Note that we have ignored the possibility of time divergences in this ansatz. If we had included them we would have been able to constrain the allowed terms to
\begin{equation}
\psi_3\supset \frac{A(k_T^2-2e_2)}{\eta_0^3}+\frac{B(2e_3k_T-e_2^2)+C(k_T^2-2e_2)^2}{\eta_0}.
\end{equation}
These are precisely the divergent contributions to the two derivative interactions considered.
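The counting above, ten coefficients reduced to two by the $n=1$ and $n=3$ conditions of the extended MLT, can be reproduced mechanically. The sketch below is an independent cross-check (variable names are ours): by the symmetry of the ansatz it suffices to impose both conditions on a single external energy, here $k_3$, and solve the resulting linear system.

```python
# Reproduce the reduction of the 10-parameter d = 5 ansatz to a 2-parameter
# family by imposing the extended MLT (first and third k3-derivatives vanish).
import sympy as sp

k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)
C = sp.symbols('C00 C10 C01 C20 C11 C30 C02 C21 C40 C12')
(C00, C10, C01, C20, C11, C30, C02, C21, C40, C12) = C

kT = k1 + k2 + k3
e2 = k1*k2 + k1*k3 + k2*k3
e3 = k1*k2*k3

psi = (C00*kT**8 + C10*e2*kT**6 + C01*e3*kT**5 + C20*e2**2*kT**4
       + C11*e2*e3*kT**3 + C30*e2**3*kT**2 + C02*e3**2*kT**2
       + C21*e2**2*e3*kT + C40*e2**4 + C12*e2*e3**2)/kT**3

eqs = []
for order in (1, 3):
    cond = sp.cancel(sp.diff(psi, k3, order).subs(k3, 0))
    numerator = sp.fraction(cond)[0]
    # each coefficient of the polynomial in k1, k2 must vanish identically
    eqs += sp.Poly(numerator, k1, k2).coeffs()

solution = sp.solve(eqs, C, dict=True)[0]
```

Eight of the ten coefficients are fixed, leaving the two-parameter family quoted above.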
The absence of odd powers of $k$ less than $k^{2\nu}$ in the bulk-boundary propagator is true for all real $\nu$\footnote{The series expansion also holds for imaginary $\nu$ but this size comparison loses meaning in that case.}. Furthermore, for integer $\nu$ the logarithmic terms enter first at $(k\eta)^{2\nu}\log(k\eta)$ and all lower powers of $k$ are even. This formalism thus further demonstrates that the MLT extends beyond massless fields and holds for all sufficiently light fields. We can see this in $d=3$ for the case of two conformally coupled fields interacting with a massive field which has a three point function given by\cite{Arkani-Hamed:2015bza,COT}
\begin{equation}
\psi_3^{\varphi\varphi\sigma}(k_1,k_2,k_3)\propto k_3^{-\frac{1}{2}+\nu}\left._2F_1\right.\left[\frac{1}{2}-\nu,\frac{1}{2}+\nu,1,\frac{k_3-k_1-k_2}{2k_3}\right].
\end{equation}
Here $k_1$ and $k_2$ are the momenta of the conformally coupled fields and $k_3$ is the momentum of the massive field whose mass gives us $\nu$.
Conformally coupled scalars have $2\nu=1$ and so we have no expectation that the first derivative with respect to $k_1$ or $k_2$ will vanish. However, the first derivative with respect to $k_3$ is
\begin{equation}
\lim_{k_3\rightarrow 0}\partial_{k_3}\psi_3^{\varphi\varphi\sigma}(k_1,k_2,k_3)\propto \lim_{k_3\rightarrow 0}k_3^{2\nu-1}(k_1+k_2)^{-\frac{1}{2}-\nu}+\mathcal{O}(k_3),
\end{equation}
which vanishes for $2\nu>1$ exactly as expected from \cref{eq:MLT}.
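This scaling can also be seen numerically. The sketch below is our own check of the hypergeometric representation above: $\nu = 0.8$ and $k_1 = k_2 = 1$ are illustrative choices, the overall normalisation drops out of the ratio, and any $2\nu > 1$ away from the integer (logarithmic) cases would do.

```python
# Numerical check that d(psi_3)/d(k3) ~ k3^(2 nu - 1) as k3 -> 0 for the
# phi-phi-sigma wavefunction coefficient quoted above. nu = 0.8 and
# k1 = k2 = 1 are illustrative choices.
from mpmath import mp, mpf, hyp2f1

mp.dps = 30
nu, k1, k2 = mpf('0.8'), mpf(1), mpf(1)

def psi3(k3):
    z = (k3 - k1 - k2)/(2*k3)
    return k3**(nu - mpf(1)/2)*hyp2f1(mpf(1)/2 - nu, mpf(1)/2 + nu, 1, z)

def dpsi3(k3, h=mpf('1e-8')):
    # central finite difference; h is far below the sampled k3 values
    return (psi3(k3 + h) - psi3(k3 - h))/(2*h)

# Expect |dpsi3(k3)| ~ k3^(2 nu - 1): the ratio between k3 = 1e-3 and
# k3 = 1e-4 should be close to 10^(2 nu - 1) = 10^0.6, roughly 3.98.
scaling_ratio = abs(dpsi3(mpf('1e-3'))/dpsi3(mpf('1e-4')))
```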
\section{Contact Diagrams}\label{sec:Contact}
In this section we demonstrate that for theories satisfying the assumptions in \cref{sec:Intro} all contact interactions generate wavefunction coefficients that are rational functions of the energy. For theories involving gravity some of these terms will come from solving the constraint equations which are, generically, differential equations relating the transverse, traceless component of the metric to its other components on the final slice of inflation. This is potentially problematic as it drastically increases the complexity of the allowed interaction terms. However, if we choose to decompose the metric in the ADM formalism in the unitary gauge\cite{arnowitt2008republication,Maldacena2003} the lapse and shift enter into these equations with no time derivatives. So, in Fourier space, these equations become non-dynamical and can be solved algebraically. We are, therefore, in a specific gauge but the absence of divergences in the correlation functions is a physical statement and so must be gauge independent. This does not completely remove the complication however, as it requires us to take into consideration the possibility of inverse spatial derivatives. One might worry that this spoils the assumption that our theory contains interactions that involve at least two derivatives. Fortunately, the net number of derivatives on all the interaction terms (counting these inverse derivatives negatively) entering the equations will remain at least two. This is ensured by the dimensionality of the constraint equations.
To begin with we consider only contact diagrams. We will assume throughout that we have removed all time derivatives higher than the first using the equations of motion to arbitrarily high order in perturbation theory. This avoids the complications that higher derivatives bring to the definition of the conjugate momentum. For massive fields the equations of motion allow us to replace a term containing a second time derivative with the mass term which has no derivatives. Therefore, such theories will generically contain interactions involving no derivatives. This will spoil our arguments and so we must at this point restrict ourselves to massless fields. In odd spacetime dimension $2\nu=d$ is even and so we expect logarithms in the propagator and, by extension, potentially in the correlators. In even spacetime dimension, however, the propagator is given by
\begin{equation}\label{eq:propK}
K_k^h(\eta)=\left(\sum_{n=0}^{\infty}\frac{ \Gamma\left(1-\frac{d}{2}\right)(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1-\frac{d}{2}+n\right)}+\frac{A_0}{B_0}\sum_{n=0}^\infty \frac{\Gamma\left(1+\frac{d}{2}\right)(c_hk\eta)^{2n+d}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)} \right).
\end{equation}
Even for massless fields we are able to reduce the number of time derivatives by one as the equations of motion contain both a first and a second time derivative. In the scale invariant limit of the effective field theory of single-clock inflation, all interaction terms are built from derivatives of fields and so there are always at least three fields that have derivatives acting on them (as interaction terms involve at least three fields multiplying each other). Because of this, the equations of motion cannot reduce the number of derivatives to zero and so, there will always be at least three derivatives acting on the fields even after removing all higher time derivatives. For multifield inflation this is not guaranteed as the other fields can have non-derivative couplings with the inflaton. However, we will assume that these interactions between fields also contain at least two derivatives and can be reduced to single time derivatives to arbitrary order in perturbation theory.
Likewise, in the case of gravity, as was discussed in \cite{cabass2022graviton}, the corrections to the action are constructed from derivatives of the perturbation to the extrinsic curvature and Riemann curvature tensor on the hypersurface at the end of inflation. These perturbations to the extrinsic curvature are built from first time derivatives of the metric whilst the Riemann curvature tensor is built out of derivatives and products of the Christoffel symbols, which each contain first spatial derivatives of the metric. Therefore, any rotation invariant combination of the two will contain at least first derivatives on a minimum of two terms in the interaction. This ensures that it is not possible to reduce any such interaction terms to ones including fewer than two derivatives. One could, in principle, further reduce the number of derivatives acting on individual terms through integration by parts at the level of the action. However, we will avoid doing this as it produces needless complications due to boundary terms and the cancellation of divergences between the terms that result from the integration by parts.
\subsection{The Absence of Logarithms in Time}\label{sec:Time}
In this section we argue that contact correlation functions in theories satisfying the assumptions in \cref{sec:Intro} cannot have any logarithmic divergences in time. This will allow us to prove that there can be no irrational functions of momenta in the wavefunction coefficients in the following section. To start with, consider a parity even theory with only spatial derivatives. The contribution to the wavefunction coefficient from an interaction with some number, $2m$,\footnote{Note that we have now set the mass to zero and here $m$ is just some arbitrary positive integer.} of spatial derivatives acting on $N$ fields is
\begin{equation}
\Psi_N\supset\lim_{\eta_0\rightarrow 0}\int_{-\infty}^{\eta_0}d\eta\frac{1}{\eta^{d+1}}\eta^{2m}F(\textbf{k}_a)\prod_a^N K_{k_a}^{h_a}(\eta).
\end{equation}
The function $F(\textbf{k}_a)$ is constructed from the appropriate contractions of the $N$ polarisation tensors and $2m$ momenta. We allow each of the fields present to be distinct, i.e.\ have a different sound speed or helicity state, but all fields must be massless for this analysis to hold.
When $d$ is even we already have logarithmic terms in the propagator and so generically expect logarithmic divergences; this case will not be considered further. However, for odd $d$, the only way to generate a logarithmic divergence from this integral is for it to contain an $\eta^{-1}$ term\footnote{This assumes that we can exchange the sum with the integral, which we will prove in \cref{sec:Momentum}.}. This requires the product over mode functions to contain an $\eta^{d-2m}$ term. The first odd power of $\eta$ in this product is $\eta^d$. Therefore, we cannot have an $\eta^{d-2m}$ term for any integer $m>0$ and there will not be a logarithm. A further important point here is that for parity odd interactions this analysis breaks down and it is once again possible to generate logarithmic divergences in even spacetime dimensions, unless there are more derivatives than there are dimensions \cite{cabass2022bootstrapping}, in which case all powers of $\eta$ are positive.
We also do not generate logarithms in theories with at least two time derivatives. To see this we replace some of the propagators with their first time derivative,
\begin{align}\nonumber
\frac{\partial_\eta K_k(\eta)}{a(\eta)}&=-\left(\sum_{n=1}^{\infty}\frac{ \Gamma\left(1-\frac{d}{2}\right)2n(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1-\frac{d}{2}+n\right)}+\frac{A_0}{B_0}\sum_{n=0}^\infty \frac{\Gamma\left(1+\frac{d}{2}\right)(2n+d)(c_hk\eta)^{2n+d}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)} \right)\\\label{eq:Kdot}
&=\eta^2\mathcal{K}_k(\eta).
\end{align}
The factor of $a$ in the denominator is included to ensure scale invariance. Notice that factorising out $\eta^2$ leaves $\mathcal{K}_k$ regular in the late time limit but changes the lowest odd power of $\eta$ to $d-2$. The wavefunction for an interaction with $n$ time derivatives is then
\begin{equation}\label{eq:timeonly}
\Psi_N\supset\lim_{\eta_0\rightarrow 0}\int_{-\infty}^{\eta_0}d\eta\frac{1}{\eta^{d+1}}\eta^{2n}\prod_i^n\mathcal{K}_{k_i}(\eta)\prod_j^N K_{k_j}(\eta).
\end{equation}
Note that, unlike in the case of spatial derivatives, $n$ and not $2n$ is the number of time derivatives. The factor of $2$ arises here due to the presence of the $\eta^2$ term in \cref{eq:Kdot}. In even spacetime dimensions, the lowest odd power of $\eta$ that is present in the product over the mode functions is now $d-2$ rather than $d$. This means that if we want to avoid logarithmic divergences we now need $d-2>d-2n$ and so we are guaranteed to avoid such terms provided $n>1$.
Finally, we show that interactions with both $2m$ spatial derivatives and $n$ time derivatives cannot have logarithmic divergences. The spatial derivatives introduce a factor of $\eta^{2m}F(\textbf{k}_a)$ to \cref{eq:timeonly} which changes the above condition to $m+n>1$. We can combine the two cases with and without time derivatives to give a single condition which guarantees that we have no logarithmic divergences arising from parity even contact interactions in even spacetime dimensions,
\begin{equation}
2m+n\geq 2,
\end{equation}
i.e. any theory with at least two derivatives.
In deriving this result we have been completely agnostic to the possibility of other divergences in time. This is because the only other time divergences allowed by this ansatz are polynomial and such divergences are consistent with rational wavefunction coefficients. It also appears that we have been similarly agnostic to the initial conditions. However, insisting on the convergence of the integral (required for exchanging the sum and integral) turns out to fix early time behaviour, which can be understood as an initial condition.
\subsection{Divergences in Momenta}\label{sec:Momentum}
In this section we explore the potential divergences in the momenta to prove that the absence of logarithmic divergences in time ensures the rationality of the wavefunction. After performing the integrals in time there remains a time independent piece that could, theoretically, contain a term that depends logarithmically on the energies. As an example of such an integral, consider
\begin{equation}
\int_{-\infty}^0 \frac{e^{ik_1\eta}-e^{ik_2\eta}}{\eta}d\eta=\log(-ik_1)-\log(-ik_2).
\end{equation}
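As a quick cross-check of this example (a sympy sketch of ours, not part of the derivation): substituting $\eta\rightarrow -x$ and $a_j=ik_j$, with $\mathrm{Re}\,a_j>0$ supplied by the usual $i\epsilon$ rotation, turns this into a Frullani integral whose value $\log(a_1/a_2)$ continues back to $\log(-ik_1)-\log(-ik_2)=\log(k_1/k_2)$.

```python
import sympy as sp

# Frullani form of the example integral: substituting eta -> -x and
# a_j = i*k_j (the Bunch-Davies rotation supplies Re a_j > 0) gives
#   int_0^oo (exp(-a2*x) - exp(-a1*x))/x dx = log(a1/a2),
# which continues back to log(-i*k1) - log(-i*k2) = log(k1/k2).
a1, a2, x = sp.symbols('a1 a2 x', positive=True)

result = sp.integrate((sp.exp(-a2*x) - sp.exp(-a1*x))/x, (x, 0, sp.oo))
print(sp.simplify(sp.expand_log(result - sp.log(a1/a2))))
```

The positivity assumptions on $a_{1,2}$ stand in for the rotated contour; the printed difference reduces to zero, confirming the quoted difference of logarithms.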
The series solution cannot say anything about the time independent contributions to the integral. So, it is necessary to consider a different ansatz to complete the proof that all contact diagrams will be rational functions of the energy,
\begin{equation}
\Phi_h=e^{i\kappa\eta}P(\eta),
\end{equation}
where $P$ satisfies
\begin{equation}
\eta^2P''+\eta(2i\kappa\eta -d+1 )P'+(i(1-d)\kappa\eta-\kappa^2\eta^2+c_s^2k^2\eta^2+\lambda_hk^2\delta c_s^2\eta^2)P=0.
\end{equation}
We then choose $\kappa^2=c_s^2k^2+\lambda_hk^2\delta c_s^2=c_h^2k^2$ to cancel the $\eta^2$ terms which reduces the equation to the form
\begin{equation}
\eta^2P''+\eta(2ic_h k\eta -d+1 )P'+(i(1-d)c_h k\eta)P=0.
\end{equation}
This is an equation to which we can, once again, apply the method of Frobenius,
\begin{equation}
\eta^2P''+\eta\tilde{p}P'+\tilde{q}P=0.
\end{equation}
Moreover, the indicial equation is unchanged as $p(0)=\tilde{p}(0)$ and $q(0)=\tilde{q}(0)$ and so our two solutions are
\begin{align}
P_1&=\eta^d\sum_{n=0}^\infty a_n\eta^n\,,\\
P_2&=CP_1\log(\eta)+\sum_{n=0}^\infty b_n\eta^n\,.
\end{align}
Restricting to even spacetime dimension, we must have $C=0$ for consistency with the previous results and
\begin{align}
a_n&=-\frac{1}{n(n+d)}\sum_{m=1}^{n-1}\frac{(m+d)\tilde{p}^{(n-m)}(0)+\tilde{q}^{(n-m)}(0)}{(n-m)!}a_m\\&=-\frac{2n-1+d}{n(n+d)}ic_h ka_{n-1},\\
b_n&=-\frac{1}{n(n-d)}\sum_{m=0}^{n-1}\frac{mp^{(n-m)}(0)+q^{(n-m)}(0)}{(n-m)!}b_m\\&=-\frac{2n-1-d}{n(n-d)}ic_h k b_{n-1}.
\end{align}
We can see from this that, in even spacetime dimension, $b_{\frac{1+d}{2}}=0$ and so this solution is just a polynomial. Unlike the previous instance where we found a polynomial solution, here the equations tell us that the series terminates and so this is a solution. Rather than choosing our two solutions to be $P_1$ and $P_2$ as found here, we instead take $P_2$ and then use the fact that our initial differential equation for $\Phi_h$ was real to guarantee that if $\Phi_h$ is a solution then so too is $\Phi_h^*$ to give a second, linearly independent solution\footnote{The independence of these two solutions is not guaranteed but in this case was checked by calculating their Wronskian which is non-zero, as required for linear independence.}
\begin{align}
\Phi_h^\pm=e^{\pm ic_h k \eta} \sum_{n=0}^{\frac{d-1}{2}}\frac{\Gamma\left(\frac{1-d}{2}+n\right)\Gamma\left(1-d\right)}{n!\Gamma\left(\frac{1-d}{2}\right)\Gamma\left(1-d+n\right)}(\mp 2ic_hk \eta)^n.
\end{align}
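As a sanity check of this recursion (our own sketch, with the normalisation $b_0=1$ assumed): for $d=3$ the factor $2n-1-d$ vanishes at $n=(d+1)/2=2$, the series terminates before the singular point $n=d$, and the resulting polynomial is $1-ic_hk\eta$, the familiar polynomial of the massless de Sitter mode function, in agreement with the closed form above.

```python
import sympy as sp

# Iterate the recursion b_n = -(2n-1-d)/(n(n-d)) * i*c_h*k * b_{n-1} for
# d = 3 (even spacetime dimension), normalising b_0 = 1 (our choice).
# The factor 2n-1-d vanishes at n = (d+1)/2 = 2, so the series terminates
# before the singular point n = d and P is a genuine polynomial.
kappa, eta = sp.symbols('kappa eta')  # kappa stands for c_h*k

d = 3
b = [sp.Integer(1)]
for n in range(1, 5):
    if b[-1] == 0:  # series has already terminated
        b.append(sp.Integer(0))
        continue
    b.append(sp.simplify(-sp.Rational(2*n - 1 - d, n*(n - d)) * sp.I * kappa * b[-1]))

P = sp.expand(sum(bn * eta**n for n, bn in enumerate(b)))
print(b, P)
```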
Acting on either of these solutions with some derivative in space or time we will bring down some factors of the momentum but will not change the order of the polynomial,
\begin{equation}
\partial_\eta \Phi_h^+=ic_h k e^{ic_h k\eta}\sum_{n=0}^{\frac{d-1}{2}}\frac{\Gamma\left(\frac{1-d}{2}+n\right)\Gamma\left(1-d\right)}{n!\Gamma\left(\frac{1-d}{2}\right)\Gamma\left(1-d+n\right)}\left((- 2ic_h k \eta)^n-2n(-2ic_h k\eta)^{n-1}\right).
\end{equation}
The $N$ particle wavefunction coefficient generated from just the solutions with positive energy can be written as
\begin{equation}\label{eq:PProduct}
\psi_N=\lim_{\substack{L\rightarrow \infty\\\eta_0\rightarrow 0}}\int_{-L}^{\eta_0} d\eta \frac{\eta^{n+2m}}{\eta^{d+1}}F(\textbf{k}_a)\prod_{a=1}^N \tilde{P}_{k_a}(\eta) e^{ic_T k_T \eta},
\end{equation}
where $c_Tk_T=\sum_{a=1}^N c_{h_a}k_a$ is the total energy and, just as before, $n$ is the number of time derivatives whilst $2m$ is the number of spatial derivatives. The polynomials $\tilde{P}_k(\eta)$ can be either $P_k(\eta)$ or its first time derivative as the precise coefficients of the polynomial will not be required for the following arguments. This product of polynomials will generate a new polynomial whilst the time dependent prefactor may generate some negative powers of $\eta$ multiplying the exponential.
The computation of this integral reduces to several integrals of the form
\begin{equation}
\lim_{\substack{L\rightarrow \infty\\\eta_0\rightarrow 0}} \int_{-L}^{\eta_0}d\eta \eta^n e^{ik \eta}.
\end{equation}
This is just the incomplete gamma function. The limit as we take $L\rightarrow \infty$ is poorly defined for real $k$ and so we must impose a boundary condition. The most standard boundary condition, which we assume here, is that we start in the so called ``Bunch-Davies'' vacuum \cite{bunch1978quantum}. This is achieved by rotating our time coordinate at infinity so that there is a brief period of Euclidean time evolution which causes the integral to converge. It also requires us to only consider positive $k$ and so all external lines must be represented by the positive energy solution $\Phi_h^+$. This, likewise, fixes one of the coefficients in our method of Frobenius expansion in terms of the other but the details of how that is done will not be relevant to this discussion.
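These building blocks can be checked directly. After the rotation $\eta\rightarrow -x$ the convergent integrals reduce to $\int_0^\infty x^n e^{-ax}\,dx=n!/a^{n+1}$ for $\mathrm{Re}\,a>0$; a short sympy sketch of ours verifying the first few cases:

```python
import sympy as sp

# Building-block check: after rotating eta -> -x the basic contact
# integrals reduce to int_0^oo x**n * exp(-a*x) dx = n!/a**(n+1) for
# Re a > 0; the continuation a -> i*k then reproduces the (incomplete)
# gamma functions discussed in the text.
a, x = sp.symbols('a x', positive=True)

values = [sp.integrate(x**n * sp.exp(-a*x), (x, 0, sp.oo)) for n in range(4)]
expected = [sp.factorial(n) / a**(n + 1) for n in range(4)]
print(values)
```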
Ensuring that this integral converges in the infinite past is sufficient to allow us to exchange the infinite sum in our series expansion with the integration. We don't need the integral to be finite in the limit $\eta_0\rightarrow 0$ because there are only finitely many terms in the sum that diverge in this limit. We can therefore separate the sum into terms that might diverge in this limit and the remaining terms that don't. The terms that don't are well behaved in the infinite past (after imposing the correct boundary condition) and so the sum can be taken outside of the integral.
We now explore the $\eta_0\rightarrow 0$ limit. For $n\geq0$ this limit just gives a constant. However, for negative powers of $\eta$ we have
\begin{equation}\label{eq:Gamman}
\lim_{\eta_0\rightarrow 0} \int_{-\infty(1-i\epsilon)}^{\eta_0}d\eta \eta^{-n} e^{ik \eta}=\lim_{\eta_0\rightarrow 0}\frac{e^{ik\eta_0}}{\eta_0^{n-1}}\sum_{m=0}^{n-2}C_m (ik\eta_0)^m+\frac{(ik)^{n-1}}{(n-1)!}\left(Ei(ik\eta_0)+i\pi\right).
\end{equation}
Here $Ei$ is the exponential integral, which contributes a logarithmic divergence in this limit. Therefore, for consistency with the previous observation that the integral doesn't contain any logarithmic divergences, the coefficients of the polynomial must exactly conspire to cancel any of these exponential integral terms (plus the $i\pi$). Furthermore, these logarithmic divergences are the only non-rational contributions to this integral. Their absence therefore ensures that the final wavefunction coefficient will be a rational function of the energies.
To see this cancellation in a simple case consider the three point wavefunction coefficient coming from an interaction with two spatial derivatives in $3+1$ dimensional spacetime,
\begin{equation}\label{eq:psi3}
\psi_3=i \int_{-\infty}^{\eta_0}d\eta\textbf{k}_1\cdot\textbf{k}_2\left(\frac{1}{\eta^2}-i\frac{k_T}{\eta}-e_2+ie_3\eta\right)e^{ik_T\eta}+\textrm{ perms}.
\end{equation}
We can evaluate this integral exactly,
\begin{equation}
\psi_3=\textbf{k}_1\cdot\textbf{k}_2\frac{i}{k_T^2 \eta_0}e^{ik_T\eta_0}(-k_T^2+i(k_T e_2+e_3)\eta_0+k_T e_3\eta_0^2)+\textrm{ perms}.
\end{equation}
If we had just considered the first term in the bracket in \cref{eq:psi3} we would have found
\begin{equation}
\psi_3\supset -\frac{i}{\eta_0}e^{ik_T\eta_0}-k_T\left(Ei(i k_T \eta_0)+i\pi\right).
\end{equation}
The resulting exponential integral cancels with an identical expression coming from the second term in the bracket in \cref{eq:psi3} to produce the final answer above which is free from logarithmic divergences. We are therefore left with a rational function in the momenta plus, perhaps some terms that diverge polynomially in time.
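This cancellation can be confirmed without tracking the exponential integrals explicitly: the bracketed closed form is an antiderivative of the integrand, so the $Ei$ pieces are guaranteed to cancel. A sympy sketch of ours, with the overall $\textbf{k}_1\cdot\textbf{k}_2$ factor and the permutations stripped off:

```python
import sympy as sp

# Verify that the closed form quoted for psi_3 is an antiderivative of the
# integrand (overall k1.k2 factor and permutations stripped off).  The
# exponential exp(i*k_T*eta) makes the antiderivative vanish in the rotated
# infinite past, so no further constant is needed and the Ei pieces must
# cancel between the 1/eta^2 and 1/eta terms.
eta, kT, e2, e3 = sp.symbols('eta k_T e_2 e_3')

integrand = sp.I*(1/eta**2 - sp.I*kT/eta - e2 + sp.I*e3*eta)*sp.exp(sp.I*kT*eta)
claimed = (sp.I/(kT**2*eta))*sp.exp(sp.I*kT*eta) \
    * (-kT**2 + sp.I*(kT*e2 + e3)*eta + kT*e3*eta**2)

diff = sp.simplify(sp.diff(claimed, eta) - integrand)
print(diff)  # 0
```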
The rational nature of this result also ensures the absence of logs in a related class of integrals that will be important later which take the form
\begin{equation}\label{eq:ConjugateP}
\int \frac{d\eta}{\eta^{d-1}}P^*_{k_1}(\eta)\prod_{a=2}^NP_{k_a}(\eta)e^{i\left(\sum_{a=2}^N c_{h_a}k_a-c_{h_1}k_1\right)\eta},
\end{equation}
provided the exponent is positive. This is because this integral can be found by switching the sign of $k_1$ in the previous result and changing the sign of a term in a polynomial will not generate logs.
\section{Exchange Diagrams}\label{sec:Exchange}
We now extend the arguments in the previous section to tree level exchange diagrams. The calculation of such diagrams is significantly more complicated due to the presence of nested time integrals but, as we show, the result will always be a rational function of the energies. It is important to emphasise at this point that, although we allow for interactions between different fields, each of these fields must be massless for our arguments to apply. We do not consider the possibility of the exchange of massive particles and the extension to include such interactions reintroduces the possibility of logarithmic divergences.
To show the absence of logarithms in time we once again return to the Frobenius ansatz. As we established when looking at the exponential ansatz, we need to specify an initial condition in order for our integrals to converge in the infinite past, which fixes $A_0=a_0B_0$,\footnote{The linearity of this condition follows from the linear equations of motion.}
\begin{equation}
\Phi_h^+=B_0\sum_{n=0}^{\infty}\frac{\Gamma\left(1-\frac{d}{2}\right)(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1-\frac{d}{2}+n\right)}+a_0B_0\sum_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{d}{2}\right) (c_hk\eta)^{2n+d}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)} .
\end{equation}
Note, that we must now enforce all the assumptions in \cref{sec:Intro}. If we cannot exclude logarithmic divergences in the contact diagram then we expect logarithms (or worse) in the exchange diagrams. Indeed, this is exactly what is seen in conformally coupled theories in $d=3$ where the non-derivative four point single exchange diagram contains di-logarithms\cite{Arkani-Hamed:2015bza,Hillman:2019wgh}. The other, linearly independent, solution is its complex conjugate,
\begin{equation}
\Phi_h^-=B_0^*\sum_{n=0}^{\infty}\frac{\Gamma\left(1-\frac{d}{2}\right)(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1-\frac{d}{2}+n\right)}+a_0^*B_0^*\sum_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{d}{2}\right) (c_hk\eta)^{2n+d}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)}.
\end{equation}
We need the Green's function to vanish in both the infinite past and at the end of inflation and so we take the two solutions
\begin{align}
\lim_{\eta\rightarrow -\infty(1-i\epsilon)}\Phi^1_h(\eta)=0&\Rightarrow\Phi^1_h=\Phi_h^+,\\\lim_{\eta\rightarrow 0}\Phi^2_h(\eta)=0&\Rightarrow\Phi^2_h=\Phi_h^--\frac{B_0^*}{B_0}\Phi_h^+.
\end{align}
The Wronskian of these solutions is
\begin{equation}
W(\Phi_h^1,\Phi_h^2)=W(\Phi_h^+,\Phi_h^-)=c_hk(c_hk\eta)^{d-1}dB_0^*B_0(a_0^*-a_0).
\end{equation}
So, the appropriate Green's function is
\begin{equation}\label{eq:PolyGreen}
G_p^h(\eta,\eta')=\frac{{\eta'}^{d}}{d}K_p^h(\eta)\sum_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{d}{2}\right)(c_hp\eta')^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)}\theta(\eta'-\eta)+\eta\leftrightarrow \eta'.
\end{equation}
The contribution from a general exchange diagram with $I$ internal lines to the $N$ point wavefunction coefficient is
\begin{equation}
\Psi_N\supset\lim_{\eta_0\rightarrow0}\int_{-\infty}^{\eta_0} \prod_a^{I+1}\frac{d\eta_a}{\eta_a^{d+1}}\eta_a^{2m_a}F_a(\{\textbf{k}\}_a)\prod_b^N K_{k_b}^{h_b}(\eta)\prod_c^IG_{p_c}^{h_c}(\eta,\eta')\,,
\end{equation}
where $\eta$ and $\eta'$ are arbitrary times in the set $\{\eta_a\}$, $2m_a$ is the number of spatial derivatives acting at the vertex $\eta_a$ and $\{\textbf{k}\}_a$ is the set of momenta entering the vertex at $\eta_a$. We don't consider time derivatives explicitly here but, in an identical way to before, these arguments generalise to terms with time derivatives on the bulk-boundary propagators. We separately consider the case of derivatives on the bulk-bulk propagator as this can lead to an additional complication.
It is always possible\cite{COT} to choose a vertex that is connected to only one other vertex. We will label this vertex $\eta$ and the vertex it is attached to $\eta'$. Isolating this term leaves us with an integral of the form
\begin{align}
I&=\lim_{\eta_0\rightarrow0}\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d+1-2m}}F(\textbf{k}_a)\prod_a^{N_\eta}K_{k_a}^{h_a}(\eta)G_{p}^h(\eta,\eta')\\\nonumber&=
\lim_{\eta_0\rightarrow0}\int_{\eta'}^{\eta_0} d\eta\eta^{2m-1}F(\textbf{k}_a)\prod_a^{N_\eta}K_{k_a}^{h_a}(\eta)\frac{1}{d}K_p^h(\eta')\sum_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{d}{2}\right)(c_hp\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)}\\\label{eq:exchange}&+\int_{-\infty}^{\eta'} \frac{d\eta}{\eta^{d+1-2m}}F(\textbf{k}_a)\prod_a^{N_\eta}K_{k_a}^{h_a}(\eta)K_p^h(\eta)\frac{{\eta'}^{d}}{d}\sum_{n=0}^{\infty}\frac{\Gamma\left(1+\frac{d}{2}\right)(c_hp\eta')^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1+\frac{d}{2}+n\right)}.
\end{align}
This integral, like $K$, contains no logarithmic divergences nor odd powers of $\eta$ less than $\eta^d$. This property allows us to replace any of the $K$'s in this integral with $I$ and conclude that the resulting integral will be free from such terms too. Repeating this for all the vertices in the diagram we can therefore conclude that we can never generate logarithmic divergences in this way. The absence of these small powers of $\eta$ can be seen in each term individually. To start with consider the final line. This is an integral of exactly the same form as we covered for the contact case and so we are guaranteed that the integral will vanish in the infinite past. Furthermore, at the upper limit any logarithmically divergent terms will vanish and we will just be left with a polynomial expression in $k$ with potentially some poles in $\eta'$. All such poles come from integrating even powers of $\eta$ and so will all be odd negative powers of $\eta'$. None of these terms can diverge faster than ${\eta'}^{2m-d}$ with $m\geq1$ and they are all multiplied by $\eta'^d$. Therefore, this final line can only contain positive powers of $\eta'$ and any powers of $\eta'$ less than ${\eta'}^d$ will be even. This is precisely the condition that prevented the generation of logarithmic divergences in the contact case. Therefore, we can guarantee that performing additional integrals over $\eta'$ will not contribute any logarithmic divergences.
The first term is even more straightforward to deal with, it contains no negative powers of $\eta$ and, as $2m-1$ is odd, all the powers of $\eta$ less than $\eta^d$ in the integrand will be odd. Integrating such terms will only produce even powers of $\eta'$ which ensures that subsequent integrals cannot generate logarithmic divergences. Having removed this singly connected vertex we are left with a new diagram which must also have at least one singly connected vertex. All removed vertices contribute factors like $I$ which preserve all the relevant properties of $K$. Therefore, we can repeat this procedure for each vertex in the diagram and we will never generate a logarithmic divergence in time.
As was mentioned previously, allowing single time derivatives to act on the bulk-boundary propagators will not alter this conclusion for the same reasons as in the contact case. However, we have not allowed for the possibility of time derivatives on the bulk-bulk propagator. It may seem straightforward to account for this by integrating the expression by parts to remove any time derivatives on singly connected vertices. This is true when there are no other time derivative terms but if there are other propagators with time derivatives this introduces second derivatives. Removing such terms with the equations of motion is the natural solution to this that we employed previously. This is only problematic in the instance that there is exactly one other term with a time derivative. If there is more than one other time derivative (or some spatial derivatives) then after application of the equations of motion we will still have at least two time derivatives and we return to familiar territory. However, if there is only one other time derivative then we have
\begin{align}\label{eq:Gderiv}
I&=\lim_{\eta_0\rightarrow0}\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d-1}}\partial_\eta K_{k_1}^{h_1}(\eta)\prod_{a=2}^{N_\eta}K_{k_a}^{h_a}(\eta)\partial_{\eta}G_{p}^h(\eta,\eta')\\\nonumber&=\lim_{\eta_0\rightarrow0}\left[\frac{1}{\eta^{d-1}}\partial_\eta K_{k_1}^{h_1}(\eta)G_p^h(\eta,\eta')\right]_{-\infty}^{\eta_0}-\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d-1}}\partial_\eta K_{k_1}^{h_1}(\eta)\partial_\eta\prod_{a=2}^{N_\eta}K_{k_a}^{h_a}(\eta)G_{p}^h(\eta,\eta')\\&-\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d}}\left((1-d)\partial_\eta K_{k_1}^{h_1}(\eta)+\eta\partial^2_\eta K_{k_1}^{h_1}(\eta)\right)\prod_{a=2}^{N_\eta}K_{k_a}^{h_a}(\eta)G_{p}^h(\eta,\eta').
\end{align}
The boundary term vanishes due to the powers of $\eta$ in the $\eta\rightarrow0$ limit of $G$. The second term on this middle line contains two derivatives acting on different bulk-boundary propagators and so can be understood using the previous arguments. The final line contains only a single term with a time derivative which violates the two derivative condition we set earlier. Fortunately, the equations of motion relate this specific combination to the second spatial derivative and so
\begin{align}\nonumber
I&=\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d-1}}\left(\left(c_{h_1}k_1\right)^2\prod_{a=1}^{N_\eta}K_{k_a}^{h_a}(\eta)-\partial_\eta K_{k_1}^{h_1}(\eta)\partial_\eta\prod_{a=2}^{N_\eta}K_{k_a}^{h_a}(\eta)\right)G_{p}^h(\eta,\eta').
\end{align}
Therefore, all terms with time derivatives acting on the bulk-bulk propagator can be integrated by parts to remove this time derivative in such a way that the remaining terms all contain at least two derivatives with no time derivatives higher than one. Thus, we were justified in ignoring the possibility of time derivatives on the propagator.
Just as in the contact case we haven't fully removed the possibility of logarithmic divergences in the momentum, only in time. However, recall the exponential ansatz for which the Green's function with internal momentum $p$ is
\begin{equation}\label{eq:Green}
G_p(\eta,\eta')=\frac{iP_p(\eta)P^*_p(\eta')}{2 p^d (d-2)!!^2}e^{ic_hp (\eta-\eta')}\theta(\eta'-\eta)+\eta\leftrightarrow\eta'-\frac{iP_p(\eta)P_p(\eta')}{2 p^d (d-2)!!^2}e^{ic_h p(\eta+\eta')},
\end{equation}
where $n!!=n(n-2)\dots1$ for odd $n$. We can see from this expression that when we consider a singly connected vertex the necessary integral will be of the form \cref{eq:PProduct} or \cref{eq:ConjugateP} and we have already established that it will be free from any logarithmic divergences. Moreover, the resulting expression will still be of the form of a polynomial multiplied by an exponential, potentially with some negative powers of $\eta$. However, we know that no integral in the nested integral can contribute a logarithmic divergence in time. So, there is no way to generate an exponential integral term and all such contributions must necessarily cancel.
One may worry that the exponent has the wrong sign in theories that have multiple speeds of sound\footnote{For equal sound speeds we can guarantee that the exponent has the correct sign because $\textbf{p}=\sum_{a}^{N_\eta}\textbf{k}_a$ and so
\begin{equation}
p^2=\left(\sum_{a}^{N_\eta}\textbf{k}_a\right)\cdot \left(\sum_{a}^{N_\eta}\textbf{k}_a\right)\leq \left(\sum_a^{N_\eta}k_a\right)^2\Rightarrow \sum_a^{N_\eta}k_a-p>0,
\end{equation}
which is just a straightforward generalisation of the triangle inequality.} and so exchanging the order of the sum and the integral is invalid. However, the term in which $p\eta$ enters with a negative sign in the exponential isn't taken to the infinite past due to the Heaviside theta function. Therefore, the early time limit of this exponential is trivial and we are justified in performing this operation. The effect of taking this early time to be finite is to change the $i\pi$ in \cref{eq:Gamman} to $-Ei(ik\eta')$. We have already argued that the coefficient of this term vanishes and so all our conclusions are still valid.
This observation also protects the case of linear mixing between massless particles\cite{pimentel2022boostless,jazayeri2022slow} from logarithmic divergences. In these theories it is necessary to calculate a mixed propagator which is given (in the absence of time derivatives) by
\begin{align}
\tilde{K}_k(\eta')&=\int_{-\infty}^{\eta_0} \frac{d\eta}{\eta^{d+1}}\eta^{2m}F(\textbf{k})K_k^h(\eta) G_k^{h'}(\eta,\eta')\\&=e^{-ic_{h'}k\eta'}P_k^*(\eta')\int_{-\infty}^{\eta'} \frac{d\eta}{\eta^{d+1}}\eta^{2m}F(\textbf{k})\frac{iP_k(\eta)P_k(\eta)}{2k^d(d-2)!!^2}e^{ik\eta(c_{h'}+c_h)}\\&+e^{ic_{h'}k\eta'}P_k(\eta')\int_{\eta'}^{\eta_0} \frac{d\eta}{\eta^{d+1}}\eta^{2m}F(\textbf{k})\frac{iP_k^*(\eta)P_k(\eta)}{2k^d(d-2)!!^2}e^{ik\eta(c_h-c_{h'})} \\&-e^{ic_{h'}k\eta'}P_k(\eta')\int_{-\infty}^{\eta_0}\frac{d\eta}{\eta^{d+1}}\eta^{2m}F(\textbf{k})\frac{iP_k(\eta)P_k(\eta)}{2k^d(d-2)!!^2}e^{ik\eta(c_{h'}+c_h)}.
\end{align}
Each of these terms has a form to which all of our previous arguments apply. Therefore, this mixed propagator can be expressed as a polynomial multiplied by an exponential with a Taylor expansion containing no odd powers of $\eta$ less than $\eta^d$. If we allow time derivatives to act on only the bulk-boundary propagator then nothing changes in this argument. We can still ensure that no time derivatives act on the bulk-bulk propagator. This follows from an argument proceeding exactly as in \cref{eq:Gderiv} except the product of other propagators is absent and we will be left with a term that looks identical to a second spatial derivative. An unfortunate point to note is that the linear mixing considered in both \cite{pimentel2022boostless} and \cite{jazayeri2022slow} is not exclusively built from terms with at least two derivatives. In addition to terms with additional derivatives they consider interactions of the form $\dot{\phi}\sigma$ and this analysis will not apply to such theories. Indeed, in \cite{pimentel2022boostless}, it was shown that this single derivative case had a logarithmic divergence but that all terms with additional derivatives did not, in agreement with our result.
Simple exchange diagrams for $3+1$ dimensional de Sitter are well known in the literature, so the cancellation of their divergences will come as little surprise. Therefore, as an illustrative example consider the $4$ point exchange diagram from the interaction ${\phi'}^2\phi$ for a massless scalar in $5+1$ dimensions. We look exclusively at the diagram with all derivatives on external lines for simplicity. Computing the contribution for $\eta>\eta'$ gives
\begin{align}
\psi_4\supset\lim_{\eta_0\rightarrow 0}i\int_{-\infty}^{\eta_0}\frac{d\eta}{\eta^4}\int_{-\infty}^\eta \frac{d\eta'}{{\eta'}^4}K_{k_1}'(\eta')K_{k_2}'(\eta')K_{k_3}'(\eta)K_{k_4}'(\eta)\left(\phi^-_s(\eta)-\phi^+_s(\eta)\frac{\phi^-_s(\eta_0)}{\phi^+_s(\eta_0)}\right)\phi^+_s(\eta').
\end{align}
In this expression we have introduced the mode functions $\phi^\pm_k$ for this field, which are
\begin{equation}
\phi^\pm_k(\eta)=\frac{3}{\sqrt{2k^5}}\left(1\mp ik\eta-\frac{1}{3}k^2\eta^2\right)e^{\pm ik \eta}\rightarrow K_k(\eta)=\frac{\phi^+_k(\eta)}{\phi^+_k(\eta_0)}.
\end{equation}
Performing just this part of the integral gives a series of rational terms and a logarithmic divergence
\begin{equation}
\psi_4\supset \frac{k_1^2k_2^2k_3^2k_4^2}{36S^5}(k_1^2+k_2^2-k_3^2-k_4^2)\log\left(\frac{k_1+k_2+k_3+k_4+2s}{k_1+k_2+k_3+k_4}\right).
\end{equation}
Just as we argued below \cref{eq:exchange}, this term is free from logarithmic divergences in time, but it does have this logarithmic divergence in the energies.
This term individually avoids the arguments outlined above because, in the exponential ansatz, it arises from the combination of two terms with different exponents. However, the contribution from $\eta<\eta'$ can be obtained by exchanging $k_1\leftrightarrow k_3$ and $k_2\leftrightarrow k_4$, exactly cancelling this divergence. This was guaranteed as the logarithm generated from integrals over $e^{i(k_1+k_2+k_3+k_4+2s)\eta}$ can only be seen in this truncated expression. Looking at \cref{eq:Green} we recognise that this term is the same for both $\eta>\eta'$ and $\eta<\eta'$ and so is separable into two contact diagrams neither of which permit such a divergence. This returns us to the situation where there is only a single exponent that could give logarithmic divergences in energy which is forbidden in the absence of such divergences in time.
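For concreteness, applying this exchange to the truncated term above gives the $\eta<\eta'$ contribution
\begin{equation}
\psi_4\supset \frac{k_1^2k_2^2k_3^2k_4^2}{36S^5}(k_3^2+k_4^2-k_1^2-k_2^2)\log\left(\frac{k_1+k_2+k_3+k_4+2s}{k_1+k_2+k_3+k_4}\right),
\end{equation}
which is equal and opposite to the $\eta>\eta'$ term: the prefactor and the argument of the logarithm are symmetric under $k_1\leftrightarrow k_3$, $k_2\leftrightarrow k_4$ while the bracket is antisymmetric, so the sum of the two time orderings is free of this logarithm.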
\subsection{Polynomial Time Divergences}
In addition to forbidding logarithmic time divergences we can also draw some conclusions about the remaining polynomial time divergences. All negative powers of $\eta$ in the integrand are generated by products arising from the first $d$ terms of the sum,
\begin{equation}
\sum_{n=0}^{\infty}\frac{ \Gamma\left(1-\frac{d}{2}\right)(c_hk\eta)^{2n}}{(-4)^n\Gamma(1+n)\Gamma\left(1-\frac{d}{2}+n\right)},
\end{equation}
or its derivatives. All coefficients in this sum are real and so for parity even interactions their contribution to the wavefunction coefficients, which is built from $iS$, will be imaginary. Similarly, parity odd interactions will have real divergent contributions. Therefore, in both parity odd and even cases the correlation functions can have no contribution from these divergent terms. This is still true for exchange diagrams as the contribution to the integral in \cref{eq:PolyGreen} is real. So, having integrated out all the vertices, the coefficients of the polynomial in the final time integral will be real for parity even interactions (and imaginary for parity odd ones). Thus the divergent contributions to the action will be absent from the correlation functions.
This result was also noticed in \cite{anninos2015late} and, via an analytic continuation to EAdS, was related in \cite{Maldacena2003} to the ability to holographically renormalise the theory with real counterterms \cite{Skenderis_2002}. Relating these divergences to holographic renormalisation in this way suggests an interpretation for their source. Holographic renormalisation can be thought of as subtracting off divergences that arise due to the infinite size of the spacetime. Thus, these divergences can be understood as coming from the infinite time and volume in which the fields can interact. In parity even theories, the only possible interactions that can lead to physical divergences are thus those that are logarithmically divergent. As we have seen, such divergences are absent in two, or higher, derivative theories. The requirement of scale invariance means that such interactions become less common as the spacetime expands. So, in spite of the interactions carrying on forever, they become so rare that there are no divergences.
\section{Extensions and Conclusion}\label{sec:Conclusion}
We have established that there is no way to generate logarithmic divergences at tree level for scale-invariant, parity-even theories involving interactions with at least two derivatives in even spacetime dimensions. What happens when we break these assumptions? In parity odd theories we are allowed odd numbers of spatial derivatives and so our arguments will fail in general. The time integrand can have negative odd powers of $\eta$ provided the number of derivatives is less than $d+1$. Unfortunately, this spoils our argument and so it is possible for the time integral to generate logarithmic divergences as was shown in \cite{cabass2022bootstrapping}. However, having sufficiently many derivatives (at least $d+1$) acting on each term will remove all negative powers of $\eta$ in the integrand and prevent logarithmic divergences. Our arguments fail even more catastrophically in odd spacetime dimension due to the presence of logarithms in the propagators. This is not saved in a straightforward way by introducing additional derivatives.
For theories with fewer than two derivatives the arguments straightforwardly break down as there will, generically, be $\frac{1}{\eta}$ terms in the integrand. This is already known and is seen, for example, in the case of $\phi^3$ interactions \cite{falk1992angular}. In the case of massive fields, the propagators that we generate from the Frobenius ansatz are made out of non-integer powers of $\eta$ and so this analysis of divergences is invalid, with the exception of a finite number of specific masses that have odd half-integer $\nu$. Likewise, in the exponential ansatz, the function multiplying the exponential will not terminate and so we cannot write it as a polynomial. Therefore, we need to worry about the exchange of the sum and the integral. The same happens if we alter the background theory, for example by allowing the mass or speed of sound to vary in time, or if we introduce a parity breaking term in the quadratic action. A potential extension to this work would be to consider the implications for fermionic fields, which were ignored here. It may also be constructive to explore the behaviour of loop diagrams; however, logarithmic divergences are expected to be completely generic in such cases.
In conclusion, all tree level wavefunction coefficients of scale invariant, massless, integer spin fields, involving parity even interactions with at least two derivatives in even spacetime dimensions, contain no logarithmic divergences in time or momentum and so are rational functions of the momenta, plus potentially some polynomial time divergences in the limit. Furthermore, any such time divergences are guaranteed to be imaginary and thus do not affect correlation functions. These assumptions, whilst restrictive, are not just made for the sake of convenience: any massive fields present during the early universe are expected to decay, and so masslessness is required to make predictions for observations. Furthermore, in spite of the insistence on scale invariance restricting us to a de Sitter background, we allow boost breaking both through arbitrary sound speeds and non de Sitter invariant interactions.
This is an important ingredient in the bootstrap program as it heavily constrains the allowed forms of wavefunction coefficients by reducing the space of allowed functions that they can take to a polynomial basis with an order fixed by the number of derivatives and the requirement of scale invariance. This result breaks down in the presence of massive fields and so a discovery of this sort of divergence in the data would be a powerful result for the cosmological collider which seeks to understand the spectrum of massive states in the early universe.
\section*{Acknowledgements}
I would like to thank Enrico Pajer for initial collaboration and helpful comments about the draft, James Bonifacio and David Stefanyszyn for useful discussions over the course of the work, and Aaron Hillman and Dong-Gang Wang for their comments on the draft. The author is supported jointly by the Science and Technology Facilities Council through a postgraduate studentship and the Cambridge Trust Vice Chancellor's Award.
\bibliographystyle{JHEP}
\section{Introduction}
In the past half century, evolutionary game theory has made a series of remarkable advances~\cite{ETG,GTE,BAMS40479} and has matured into a widely accepted research tool in the natural and social sciences, effectively promoting the development of these disciplines. Compared with well-mixed unstructured populations~\cite{GTE,EGPD}, recent results indicate that cooperators obtain a larger living space in complex networks~\cite{PRL95098104,PRL98108103,PRE72056128,PRE66021907,EPL7730004,NATURE359826,NATURE433312,NATURE441502,SCIENCE3141560}. Previous numerical studies show the influence of topological structures on the level of cooperation, and a variety of structures have been intensively investigated~\cite{PR44697}, such as regular graphs~\cite{NATURE359826,NATURE441502}, small-world networks~\cite{PRL95098104,PRE72056128}, random graphs~\cite{NATURE433312,NATURE441502}, and scale-free networks~\cite{NATURE433312,NATURE441502,PRL95098104,PRL98108103}. In these topological structures, the group of opponents surrounding a node is interpreted as its neighbors, and the limited local interactions are the interactions restricted to a node and its neighbors.
In this paper, we investigate the role of fence-sitters in evolutionary games in complex topologies. Fence-sitters are the individuals who change their strategies at least once in the strategy evolutionary process. Given the difficulty of formulating the strategy updating process in complex topologies, we provide a tool for deriving the payoffs of pure cooperators, pure defectors, and fence-sitters.
To stay consistent with the previous studies, we adopt the prisoner's dilemma (PD) as the game model. As a heuristic framework, the prisoner's dilemma describes a commonly identified paradigm in many real-world situations~\cite{GTE,ETG,EC,JCR,GCEC,NATURE398441}. It has been widely studied as a standard model for the confrontation between cooperative and selfish behaviors.
The selfish behavior here is manifested by a defective strategy, aspiring to obtain the greatest benefit from the interaction with others. For a two-strategies game such as the PD, an individual gains $T$ (temptation to defect) for defecting against a cooperator, $R$ (reward for mutual cooperation) for cooperating with a cooperator, $P$ (punishment for mutual defection) for defecting against a defector, and $S$ (sucker's payoff) for cooperating with a defector. For the PD, the four payoff values are ordered as $T>R>P\geqslant S$. In the next round, each individual knows the strategy the other played in the previous round and can then adjust its own strategy according to the updating rule.
\section{Payoff of Individuals}
To derive the payoffs of the pure strategists and fence-sitters, we introduce a mean-field approach in the following. In the PD, $i$'s strategy can be denoted by $\Omega_i$, which takes the vectors ${(1,0)}^\textrm{tr}$ and ${(0,1)}^\textrm{tr}$ for the cooperative and defective strategies, respectively.
In a two-strategies evolutionary game, $i$'s strategy $\Omega_i(n)$ can be represented as
\begin{equation}
\left(
\begin{array}{cc}
X_i(n) \\
1-X_i(n) \\
\end{array}
\right).\label{X_n}
\end{equation}
$X_i(n)$ can only take the value $1$ or $0$ at the $n$th round; $i$ is a cooperator (defector) for $X_i(n)=1$ ($0$). Note that this vectorial formulation is completely different from the ``mixed strategy'' in replicator equations~\cite{BAMS40479,PRL97158701,PRL103198702,JSTAT08007}. The elements of a mixed strategy denote the probabilities of adopting the corresponding pure strategies, while our definition of $\Omega_i(n)$ is introduced to facilitate the following vector calculations. Thus $X_i(n)$ itself does not carry any physical meaning.
At the $n$th round of game playing with the other prisoner $j$, $i$'s payoff $G_i(n)$ can be rewritten as
\begin{equation}
G_i(n)={\Omega_i^\textrm{tr}}\left(
\begin{array}{cc}
R & S\\
T & P\\
\end{array}
\right){\Omega_j}.
\end{equation}
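The two-player payoff evaluation above can be sketched in a few lines of NumPy (our own illustration, not code from the paper; the parameter values $T=1.2$, $R=1$, $P=S=0$, used later in the simulations, are chosen for concreteness):

```python
import numpy as np

# Prisoner's dilemma payoff matrix with T > R > P >= S.
T, R, P, S = 1.2, 1.0, 0.0, 0.0
M = np.array([[R, S],
              [T, P]])

# Strategy vectors: (1,0)^tr for a cooperator, (0,1)^tr for a defector.
C = np.array([1, 0])
D = np.array([0, 1])

def payoff(omega_i, omega_j):
    """Payoff of player i against player j: Omega_i^tr M Omega_j."""
    return omega_i @ M @ omega_j

# The four entries reproduce the payoff structure of the PD:
assert payoff(C, C) == R   # reward for mutual cooperation
assert payoff(C, D) == S   # sucker's payoff
assert payoff(D, C) == T   # temptation to defect
assert payoff(D, D) == P   # punishment for mutual defection
```

Summing `payoff` over a node's neighbors then reproduces the structured-population payoff of Eq.~(\ref{G_i}).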
In a structured group, an individual $i$ plays with $k_i$ neighbors, where $k_i=\sum_jA_{ij}$ denotes $i$'s degree or connectivity. $A_{ij}$ is an entry of the adjacency matrix of the network, taking the value $A_{ij}=1$ ($i,j=1,2,\ldots,N$, with $N$ the size of the network) whenever nodes $i$ and $j$ are connected and $A_{ij}=0$ otherwise.
In the evolutionary process, the PD is repeated in the following way: in the first stage, a PD is played by every pair of nodes connected by a link in the network. For a node $i$ in the $n$th round, its payoff reads as
\begin{equation}
G_i(n)={\Omega_i^\textrm{tr}(n)}\left(
\begin{array}{cc}
R & S\\
T & P\\
\end{array}
\right)\sum_jA_{ij}{\Omega_j(n)}.\label{G_i}
\end{equation}
Each node then updates its accumulated payoff, which is the sum of the payoffs it has received over the rounds held in memory. We define $\lambda$ as the span of the payoff memory. For $\lambda=5$, an individual's payoff keeps aggregating for five rounds; at the beginning of the sixth round, the whole payoff system is reset. The purpose of introducing the payoff memory into the model is to simulate a points system that aggregates the players' payoffs. For example, a season in the English Premier League normally lasts $38$ rounds, in which case $38$ denotes the memory span. In the second stage, each node updates its strategy based on its aggregated payoff. According to various replicator dynamics, the node will adjust its strategy in the next round. Note that the payoff memory mentioned here is different from the memory in ``memory loss'' defined in Refs.~\cite{PRL103198702,JSTAT08007}; in those two references, the memory denotes the aggregated fitness of individuals. It is also different from the ``time scales'' in Ref.~\cite{PRL97158701}, which denote the time spans of selections and interactions in one round of the game.
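The bookkeeping of the payoff memory can be sketched as follows (our own illustrative Python; the function name and reset convention at the season boundary follow the description in the text):

```python
def accumulate(round_payoffs, lam):
    """Accumulated payoff after each round, resetting every lam rounds
    (illustrative sketch of the payoff-memory bookkeeping)."""
    acc, history = 0, []
    for n, g in enumerate(round_payoffs, start=1):
        acc += g
        history.append(acc)
        if n % lam == 0:   # season over: the whole payoff system is reset
            acc = 0
    return history

# With lam = 5 the payoff aggregates for five rounds and restarts at round 6:
assert accumulate([1, 1, 1, 1, 1, 1, 1], lam=5) == [1, 2, 3, 4, 5, 1, 2]
```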
For $i$'s local gaming environment, we define $W_i(n)=\frac{\sum_jA_{ij}{\Omega_j^\textrm{tr}(n)}\left(1~0\right)^\textrm{tr}}{k_i}$
as $i$'s local frequency of cooperators at the $n$th round. For the global gaming environment, we define $Q(n)=\frac{\sum_i{\Omega_i^\textrm{tr}(n)}\left(1~0\right)^\textrm{tr}}{N}$ as the global frequency of cooperators at the $n$th round, where $N$ denotes the size of the network. To understand exactly the role of fence-sitters, we choose random graphs (networks) as the gaming platform: for one thing, the individuals playing a given strategy are distributed essentially evenly over the network; for another, the large number of connections between pure cooperators and fence-sitters~\cite{PRL98108103} makes the fraction of defectors sensitive to the fence-sitters' higher payoff. A fence-sitter is an individual changing its strategy at least once in the evolutionary process. A numerical comparison among the degree distributions of cooperators, defectors, and all the individuals will be shown in the simulation section to clarify this point. In this case,
\begin{equation}
W_i(n)\simeq Q(n).\label{Approxiation}
\end{equation}
By this mean-field approximation, $\sum_jA_{ij}{\Omega_j(n)}$ in a random graph can be rewritten as
\begin{equation}
\sum_j\!\! A_{ij}{\Omega_j(n)}\!=\!{\!\langle{k}\rangle}\!\left(\!\!\left(
\begin{array}{cc}
1 \\
0 \\
\end{array}
\right)Q(n)
+\left(
\begin{array}{cc}
0 \\
1 \\
\end{array}
\right)(1-Q(n))\!\!\right),\label{sum_j}
\end{equation}
where $\langle{k}\rangle$ denotes the average degree of all the individuals.
Inserting Eq.~(\ref{sum_j}) and Eq.~(\ref{X_n}) into Eq.~(\ref{G_i}), we have
\begin{eqnarray}
&&G_i(n)=\langle{k}\rangle\left\{\left[S-P+(R-T+P-S)Q(n)\right]\right.\nonumber\\
&&\times X_i(n)+\left.TQ(n)+P(1-Q(n))\right\}.
\end{eqnarray}
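This expansion can be checked numerically against the unexpanded form $\Omega_i^\textrm{tr} M \sum_jA_{ij}\Omega_j$ under the mean-field substitution (our own sketch; the parameter values are illustrative):

```python
import numpy as np

# Numerical check of the mean-field payoff expansion above.
# T is from the range studied in the paper; <k> = 6 matches the simulated
# graphs; Q = 0.37 is an arbitrary illustrative cooperator frequency.
T, R, P, S = 1.3, 1.0, 0.0, 0.0
k_avg, Q = 6, 0.37
M = np.array([[R, S],
              [T, P]])
env = k_avg * np.array([Q, 1 - Q])   # mean-field neighborhood <k>(Q, 1-Q)^tr

for X, omega in ((1, np.array([1, 0])), (0, np.array([0, 1]))):
    direct = omega @ M @ env
    expanded = k_avg * ((S - P + (R - T + P - S) * Q) * X
                        + T * Q + P * (1 - Q))
    assert abs(direct - expanded) < 1e-12   # both strategies agree
```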
To clarify the function of fence-sitters, we consider a payoff memory to amplify their effect to an observable level.
Payoff memory is widely adopted in points systems, which are composed of the individuals' scores aggregated over a number of rounds of games in a season.
Aggregating the payoff gained from the starting round $n_0$ to round $n_0+\lambda-1$ ($\lambda>1$), one can derive three classes of average payoffs:
pure cooperators' average payoff ($\alpha$), pure defectors' average payoff ($\beta$) and fence-sitters' average payoff ($\gamma$).
\begin{equation}
\alpha(n)=\langle{k}\rangle\left[S\cdot \lambda+(R-S)\sum_{n=n_0}^{\lambda+n_0-1}Q(n)\right],
\end{equation}
\begin{equation}
\beta(n)=\langle{k}\rangle\sum_{n=n_0}^{\lambda+n_0-1}\left[(T-P)Q(n)+P\right],
\end{equation}
and
\begin{eqnarray}
\gamma(n)=\langle{k}\rangle\left\{\sum_{n=n_0~and~X_i(n)=1}^{\lambda+n_0-1}\left[(R-T+P-S)Q(n)+(S-P)\right]+\sum_{n=n_0}^{\lambda+n_0-1}\left[TQ(n)+P(1-Q(n))\right]\right\}.
\end{eqnarray}
For the PD, when the system reaches a relatively stable status for a certain updating rule, these three variables can be rewritten as
\begin{equation}
\alpha_{\infty}=\lambda\langle{k}\rangle \left[(R-S)Q(\infty)+S\right],\label{PCP}
\end{equation}
\begin{equation}
\beta_{\infty}=\lambda\langle{k}\rangle \left[(T-P)Q(\infty)+P\right],\label{PDP}
\end{equation}
and
\begin{equation}
\gamma^{\omega}_{\infty}\!\!\!=\!\lambda\langle{k}\rangle\!\!\left(\frac{S-P+(R-T+P-S){Q(\infty)}}{\omega}+(T-P)Q(\infty)+P\right),\label{FSP}
\end{equation}
where $\infty$ denotes $n\rightarrow\infty$ and $\omega\in\left[1,+\infty\right)$ denotes the average shifting period of cooperation. For example, if $\omega=3$, the fence-sitter cooperates once every three rounds on average. For the PD, $\alpha_{\infty}\leq \beta_{\infty}$. Before the extinction of pure cooperators in the population, pure defectors cannot die out. Similarly, for $\gamma_{\infty}>\alpha_{\infty}$, if pure cooperators still exist in the population, fence-sitters can survive as well. In an evolutionary game, the number of fence-sitters is determined by the updating rule of strategies.
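A quick numerical check of Eqs.~(\ref{PCP})--(\ref{FSP}) shows that $\gamma^{\omega}_{\infty}$ interpolates between $\alpha_{\infty}$ at $\omega=1$ and $\beta_{\infty}$ as $\omega\rightarrow\infty$. The following sketch is our own illustration; $\lambda=10$, $\langle k\rangle=6$ and $Q(\infty)=0.48$ correspond to the $T=1.2$ simulations reported below, with $R=1$ and $P=S=0$ as throughout:

```python
# Stationary average payoffs of the three classes (our own numerical sketch).
lam, k_avg, Q = 10, 6, 0.48
T, R, P, S = 1.2, 1.0, 0.0, 0.0

alpha = lam * k_avg * ((R - S) * Q + S)   # pure cooperators
beta  = lam * k_avg * ((T - P) * Q + P)   # pure defectors

def gamma(omega):
    """Fence-sitters' stationary payoff for average shifting period omega."""
    return lam * k_avg * ((S - P + (R - T + P - S) * Q) / omega
                          + (T - P) * Q + P)

# gamma interpolates between alpha (omega = 1) and beta (omega -> infinity):
assert abs(gamma(1) - alpha) < 1e-9
assert alpha <= gamma(2) <= gamma(5) <= beta
```

For these parameter values the coefficient of $1/\omega$ is negative, so $\gamma^{\omega}_{\infty}$ grows monotonically with $\omega$: fence-sitters are never worse off than pure cooperators, consistent with the discussion above.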
We define $F_{\infty}^\textrm{PC}$ and $F_{\infty}^{S(\omega)}$ as the frequencies of the pure cooperators and of the fence-sitters with period $\omega$ in the relatively stable status, respectively. One can then derive the final frequency of cooperators
\begin{equation}
Q(\infty)=F_{\infty}^\textrm{PC}+\sum_{\omega}\frac{F_{\infty}^{S(\omega)}}{\omega}.\label{Qinf}
\end{equation}
The payoff memory amplifies the fence-sitters' payoff advantage. The higher payoff of the fence-sitters brings them a larger living space, and the existence of more fence-sitters enhances the frequency of cooperators. Hence, the payoff memory indirectly promotes the level of cooperation.
\section{Simulations}
To test our theoretical prediction, we run the PD extensively on random graphs with an updating rule proposed by Santos and Pacheco~\cite{PRL95098104}. This rule simulates a local, randomized strategy-optimization process. In this process, an individual $i$ chooses a randomly picked neighbor $j$ as its reference. At the $n$th round, if $j$'s payoff is higher than that of $i$, $i$ will play $j$'s strategy in the next round with a probability directly proportional to the difference of their payoffs $\sum_{t=r\times \lambda+1}^n\left(G_j(t)-G_i(t)\right)$ and inversely proportional to $Max\{k_i,k_j\}\cdot T$. The parameter $r=\lfloor\frac{n}{\lambda}\rfloor-1$ for $\frac{n}{\lambda}\in\mathbb{N}$, while $r=\lfloor\frac{n}{\lambda}\rfloor$ for $\frac{n}{\lambda}\notin\mathbb{N}$, so that $t$ lies in the time region $\left[r\times \lambda+1,(r+1)\times \lambda\right]$.
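A minimal sketch of one imitation step of this rule reads as follows (our own illustrative Python, not the original simulation code; keeping the strategy unchanged on equal payoffs is an assumption of ours):

```python
import random

def update_strategy(acc_i, acc_j, k_i, k_j, s_i, s_j, T):
    """One imitation step of the rule described above (illustrative sketch).

    acc_i, acc_j: payoffs accumulated since the last memory reset;
    k_i, k_j: degrees of i and of its randomly chosen neighbor j;
    s_i, s_j: their current strategies; T: temptation payoff.
    """
    if acc_j <= acc_i:
        return s_i   # only strictly richer neighbors are imitated
    p = (acc_j - acc_i) / (max(k_i, k_j) * T)
    return s_j if random.random() < p else s_i

# i never imitates a poorer neighbor; a much richer neighbor is always copied.
assert update_strategy(5.0, 3.0, 4, 4, 'C', 'D', 1.2) == 'C'
assert update_strategy(0.0, 10.0, 4, 4, 'C', 'D', 1.2) == 'D'
```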
For different initial conditions, after a period of initial turbulence, the network game always reaches a dynamical equilibrium, where the number of cooperators (or defectors) stabilizes around a particular value with small fluctuations. As shown in Fig.~\ref{F1}(a) and Fig.~\ref{F1}(b), fence-sitters are the individuals which change their color with time in the dynamical equilibrium. Comparing Fig.~\ref{F1}(a) with Fig.~\ref{F1}(b), one can observe that the colors of a couple of individuals shift from step $10~001$ to $10~002$; these are a part of the fence-sitters in question.
Figure~\ref{F2}(a) and Fig.~\ref{F2}(b) show the degree distributions of cooperators, defectors, and all the individuals. For both panels, one can find that the three distributions essentially overlap, which justifies the approximation of Eq.~(\ref{Approxiation}). Figure~\ref{F2}(c) shows the relation between the frequency of cooperators $Q(\infty)$ and the payoff memory $\lambda$. For $T\in[1.2,1.9]$, one can observe that the level of cooperation grows monotonically with $\lambda$. For $T\leq1.1$, all the individuals are pure cooperators; for these cases, the simulation results are thus not shown in this figure.
Why does payoff memory enhance $Q(\infty)$? Based on Eq.~(\ref{Qinf}), one can find that there are only two possibilities: either more pure cooperators emerge, or more fence-sitters emerge. To identify the actual reason, we measure the average payoff and the distributions of individuals with different average shifting periods $\omega$ in Fig.~\ref{F3}. Comparing Fig.~\ref{F3}(a) and Fig.~\ref{F3}(b) with Fig.~\ref{F3}(c) and Fig.~\ref{F3}(d), one can observe the dramatic increase of the differences between the dash-dot lines and dotted lines, which denote the pure cooperators' and fence-sitters' payoffs, respectively. The increase leads to a high probability for the fence-sitters' pure cooperative neighbors to copy the fence-sitters' strategy. Naturally, these pure cooperative neighbors turn out to be new fence-sitters in the later rounds. The number of fence-sitters grows with $\lambda$, but it is bounded from above because the fence-sitters' payoffs always remain lower than those of the pure defectors, denoted by the solid lines in Fig.~\ref{F3}(a), Fig.~\ref{F3}(b), Fig.~\ref{F3}(c), and Fig.~\ref{F3}(d).
In Fig.~\ref{F3}(e) and Fig.~\ref{F3}(f), for $\lambda=1$, one can observe that the fence-sitters are more numerous than the pure cooperators, which correspond to $\omega=1$ in Fig.~\ref{F3}, for both $T=1.2$ and $T=1.3$. For $\lambda=10$, more fence-sitters emerge in both panels. The increase of fence-sitters originates from the nonlinear increase of the difference between their average payoff and the pure cooperators' average payoff.
Notice that the plots in this figure are binned; a large number of fence-sitters with a short average shifting period ($\omega<1.5$) are counted together with the pure cooperators ($\omega=1$). Actually, for games with a long payoff memory and a large $T$, the cooperators in the dynamical equilibrium are mainly composed of the cooperating fence-sitters, that is, the second term in Eq.~(\ref{Qinf}). Figure~\ref{F3} indicates that the fitness of fence-sitters is not less than that of the pure cooperators for all the parameters investigated here. When $\lambda>1$, the payoff advantage highly promotes the frequency of fence-sitters. Based on Eq.~(\ref{Qinf}), the promotion of the frequency of fence-sitters leads to a remarkable increase in $Q(\infty)$.
The example of random graphs suggests that the payoff memory can highly promote the level of cooperation in one type of network, but the influence is not restricted to this type. For other networks, such as Watts and Strogatz's small-world networks~\cite{NATURE393440}, Barab\'{a}si and Albert's scale-free networks~\cite{SCIENCE286509}, and regular networks, the influence is still clearly apparent. In addition, we also observe a series of similar results for the updating rule proposed by Nowak and May~\cite{NATURE359826}. All these observations indicate that fence-sitters can promote the level of cooperation in evolutionary games in complex networks. Note that our vectorial formalization can also be applied to other two-strategies games, such as the hawk-dove game (also known as the snow-drift or chicken game)~\cite{ETG,GTE,NATURE428643,EL8748}, Hisashi Ohtsuki's model~\cite{NATURE441502}, and so forth.
\section{Conclusion}
In summary, we have investigated the role of fence-sitters in evolutionary games in complex networks. Fence-sitters are the individuals who change their strategies at least once in the strategy evolutionary process. We introduce the first vectorial formulation to derive the payoffs of three classes of individuals analytically: pure cooperators, pure defectors, and fence-sitters. To clarify the function of the fence-sitters, we define a control parameter, the payoff memory, as the number of rounds over which the individuals' payoffs are aggregated. We observe that the fence-sitters' effects and the level of cooperation grow appreciably with the payoff memory.
Previous studies concentrated on the influences of topological structures on the level of cooperation in the complex networks. In these numerical studies, the fence-sitters were observed in all the systems.
In this paper, we have analytically demonstrated that the existence of fence-sitters can promote the level of cooperation. We have introduced a vectorial formulation of individuals' payoffs, which can be applied to all two-strategies games, and have proposed a mean-field approach to derive these payoffs analytically. For the prisoner's dilemma, we found that the average payoff of the individuals with a shifting strategy is not less than the pure cooperators' payoff. Because of this payoff advantage, the fence-sitters are more robust than the pure cooperators to the invasion of pure defectors. Although the fence-sitters' payoff is lower than that of the pure defectors, pure cooperators tend to become fence-sitters first. Before the extinction of cooperators, the majority of cooperators are composed of a part of the fence-sitters. Thus the existence of fence-sitters promotes the fitness of cooperators in an indirect way.
In complex networks, our observations indicate that cooperation is protected by the individuals with a shifting strategy. Our results may provide a better understanding of the composition of cooperators in a circumstance where defectors always obtain a higher payoff.
Y. Z. and M. A. A.-A. and C. B. are supported by the region Haute Normandie, France, and the ERDF RISC. S. Z. is supported by the UK Royal Academy of Engineering and the Engineering and Physical Sciences Research Council (EPSRC) under Grant No. 10216/70.
\begin{figure}
\scalebox{0.5}[0.4]{\includegraphics[trim=0 50 0 0]{F1.eps}}
\caption{Illustrations of fence-sitters in networks. Light gray circles and dark gray squares denote cooperators and defectors, respectively. The sizes of the symbols denote the individuals' degrees. The simulation results are obtained with the updating rule proposed by Santos and Pacheco~\cite{PRL95098104} for the PD with $T=1.2$, $P=S=0$ and $R=1$. We set $\lambda=1$ to show the fence-sitters in the original model.
(a) and (b) show the strategies of individuals at steps $10~001$ and $10~002$, respectively. Hollow circles or squares connected by solid lines denote a part of the fence-sitters in question. One can observe that their strategies in (a) are different from those in (b).
The random graphs are generated by randomly rewiring all of the links in the regular graphs, which are formed by $128$ identical individuals with average degree $6$.
}\label{F1}
\end{figure}
\begin{figure}
\scalebox{0.35}[0.33]{\includegraphics[trim=0 50 0 0]{F2.eps}}
\caption{Degree distributions of cooperators (squares), defectors (circles) and all the individuals (diamonds). (a) and (b) show the simulation results for $T=1.2$ and $1.3$ respectively. The degree distributions show that cooperators are evenly distributed in the random graphs.
(c) shows the frequency of cooperators $Q(\infty)$ as a function of the payoff memory $\lambda$ for $T\in[1.2,1.9]$. In these simulations, the size of random graphs is set to $1024$.
}\label{F2}
\end{figure}
\begin{figure}
\scalebox{0.36}[0.4]{\includegraphics[trim=0 30 0 0]{F3.eps}}
\caption{Average payoff of different classes of individuals and the distributions of average shifting period $\omega$ in random graphs.
(a), (b), (c), and (d) show the analytic results obtained by Eq.~(\ref{PCP}), Eq.~(\ref{PDP}), and Eq.~(\ref{FSP}). Solid lines, dotted lines, and dash-dot lines denote the payoff of the pure defectors, fence-sitters, and pure cooperators, respectively. (a) and (c) show the cases of $\lambda=1$ and $10$ for $T=1.2$, respectively. The analytic results are based on $Q(\infty)=0.21$ and $0.48$ for $\lambda=1$ and $10$, respectively. The values of $Q(\infty)$ are obtained by corresponding simulations in (e). Similarly, (b) and (d) show the cases of $\lambda=1$ and $10$ for $T=1.3$, respectively. The analytic results are based on $Q(\infty)=0.07$ and $0.27$ for $\lambda=1$ and $10$, respectively. The values of $Q(\infty)$ are obtained by corresponding simulations in (f). The random graphs are formed by $1024$ identical individuals with average degree $6$. We run ten simulations for each of the parameter values for the game on each of the ten networks. Thus each plot in the figure corresponds to $100$ simulations.
(e) and (f) show the distributions of $\omega$ for $T=1.2$ and $1.3$, respectively. (f) shares the same legend with (e). Comparing the data points of $\omega=1$ and $2$ in the ellipses, one can find the number of fence-sitters with $\omega\leq 10$ grows with $\lambda$ dramatically. This behavior originates from the fact that their payoffs are much higher than that of pure cooperators for $\lambda=10$. The differences of the payoffs are shown clearly in (a), (b), (c), and (d).
Note that (a), (b), (c), and (d) are semilog graphs. (e) and (f) are log-log graphs, in which the scales on distributions are different.
}\label{F3}
\end{figure}
\section{Introduction}
The future linear colliders CLIC \cite{CLIC} and ILC \cite{ILC} are designed for precision measurements in elementary-particle physics, complementing measurements performed at the Large Hadron Collider (LHC) now operating at CERN. Despite significant differences in accelerator technology, as well as in center of mass (CM) energy and charge density, the detector design is to a large extent common to both projects \cite{CDR12, ILD}. This is, in particular, true for the instrumentation of the forward region of the detector, including the luminosity calorimeter LumiCal \cite{Abr10, Abr09}. The present study is part of this common effort, and is applicable with small differences in both contexts. The results for ILC have been reported elsewhere \cite{Luk12b}.
Luminosity, $L$, and luminosity spectrum, $\mathcal{L}(E_{CM})$\footnote{The precise definition of the term "luminosity spectrum" as used in this work is given in section \ref{sec-phys}}, are key inputs to many measurements at collider experiments, including mass and cross-section measurements, as well as production-threshold scans. Precision of the luminosity measurement is critical at linear colliders in order to match the inherent precision potential of the lepton machines.
The most precise luminosity measurement method at linear colliders to date is
to count Bhabha-scattering events recognized by coincident detection of showers in the fiducial volume (FV) of both halves of the luminometer in the very forward region, in a given energy range. The number of events, $N$, is then divided by the Bhabha cross section, $\sigma$, integrated over the corresponding region of phase space. Bhabha scattering is a well-known QED process, and at several experiments at LEP this technique allowed reaching sub-permille precision \cite{Opal00, Aleph00, L3_00, Arb96, Jad03}. At future linear colliders, however, the CM energy will be 3 to 30 times higher, and the instantaneous luminosity up to three orders of magnitude higher \cite{CLIC, ILC}. At such high beam power density, the energies and the polar angles of the Bhabha particles are strongly influenced by beam-beam effects \cite{Yok91, Sch96}, leading to severe Bhabha counting losses.
The expression for measured luminosity can be formally written as follows,
\begin{equation}
\label{eq-luminosity}
L = \frac{N(\Xi(\Omega^{lab}_{1,2}, E^{lab}_{1,2}))}{\sigma(Z(\Omega^{CM}_{1,2}, E^{CM}_{1,2}))}.
\end{equation}
Here $\Xi(\Omega^{lab}_{1,2}, E^{lab}_{1,2})$ is a function describing the selection criteria for counting the detected events based on the angles $\Omega^{lab}_{1,2}$ and energies $E^{lab}_{1,2}$ of the final particles in the lab frame, and $Z(\Omega^{CM}_{1,2}, E^{CM}_{1,2})$ is a function describing the corresponding region of phase space where the cross section is integrated. These functions can be expressed as products $\Xi = \prod_i \xi_i$ and $Z = \prod_i \zeta_i$, where the functions $\xi_i$ and $\zeta_i$ are based on specific topological and kinematical properties of the detected/generated pair. For each $i$, the physical meaning of $\xi_i$ and $\zeta_i$ corresponds to each other, although their mathematical form may be different (see in particular section \ref{sec-BS} and Eqs. \ref{eq-xi-ang} and \ref{eq-zeta-ang}). The set of functions $\xi_i$ and $\zeta_i$ includes the angular selection requiring both particles to be detected in the FV, as well as the energy range selection and possible further cuts to eliminate background.
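As an aside, the multiplicative structure $\Xi = \prod_i \xi_i$ maps directly onto code. The following Python sketch is purely illustrative; the event representation and the cut values are hypothetical, not taken from this analysis:

```python
def make_selection(cuts):
    """Combine per-cut selection functions multiplicatively: Xi = prod_i xi_i.
    Each cut maps an event to a factor -- 1/0 for a pass/fail cut, or a
    continuous weight, as for the acceptance correction of Eq. (eq-xi-ang)."""
    def Xi(event):
        factor = 1.0
        for cut in cuts:
            factor *= cut(event)
            if factor == 0.0:  # a failed cut vetoes the event outright
                return 0.0
        return factor
    return Xi

# Hypothetical cuts: a reconstructed-energy selection and a per-event weight
cuts = [
    lambda ev: 1.0 if ev["E_cm_rec"] > 2850.0 else 0.0,  # illustrative energy cut
    lambda ev: ev.get("w", 1.0),                          # acceptance weight, if any
]
Xi = make_selection(cuts)
```

An event passing all cuts contributes its weight to the count; any failed cut yields zero.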
Because of the random and asymmetric momentum loss when electrons emit Beamstrahlung, the CM frame of the Bhabha process moves with respect to the lab frame with axial velocity different for every colliding pair. As a consequence, $\Xi$ and $Z$ operate on kinematical arguments in different reference frames. Thus, if $\Xi$ and $Z$ have the same form, different regions of the phase space will be covered, leading to a systematic bias in the luminosity measurement. This systematic bias cannot be neglected at the future linear colliders, and is particularly acute at the 3 TeV CLIC \cite{CLIC}.
A way around this problem is to define $\Xi$ and $Z$ such that the counting rate is independent of the reference frame. Some of the functions $\xi_i$ and $\zeta_i$ can be defined invariant to the boost along the beam axis. This is, for example, the case with the cuts on the reconstructed CM energy. However, the requirement that the outgoing particles hit the FV of the detector on both sides does not possess such invariance. In this paper, a definition of $\xi_{FV}$ and $\zeta_{FV}$ is proposed such that both the experimental count $N$ and the cross-section $\sigma$ are reconstructed in the same reference frame, namely the collision frame, which will be defined in section \ref{sec-phys}.
The physical processes affecting the luminosity measurement are outlined and the used terms and notation defined in section \ref{sec-phys}. The analysis method with the correction procedures, as well as the test results are described in section \ref{sec-corr}. In the conclusions, the main advantages of the presented method are restated, and the final uncertainties are listed and briefly discussed.
\section{The physical processes affecting the luminosity measurement and an outline of the correction procedure}
\label{sec-phys}
The sequence of physical processes relevant to the present discussion is schematically represented in figure \ref{fig-scattering}. Due to the pinch effect during the bunch collision, both particles may emit Beamstrahlung photons and so lose energy and momentum before the interaction. Thus in general, $E_{CM} < E_0 \equiv 2 E_{beam}$. The CM energy distribution at this stage is the actual luminosity spectrum $\mathcal{L}(E_{CM})$. The probability of the Bhabha scattering scales with $1/s \equiv 1/E^2_{CM}$, resulting in the CM energy distribution of the Bhabha events $\mathcal{B}(E_{CM}) \propto \mathcal{L}(E_{CM})/E^2_{CM}$. The Bhabha process is itself accompanied by emission of initial-state radiation (ISR) that is nearly collinear with the initial particle momenta, as well as final-state radiation (FSR) that is approximately collinear with the outgoing particle momenta. Since ISR is nearly collinear with the beam axis, it misses the luminometer, so that the CM energy reconstructed from the detected particles is $E_{CM,rec} < E_{CM}$, and the corresponding spectrum is,
\begin{equation}
\label{eq-ecm-det}
h(E_{CM,rec}) = \int\limits_0^{E_{\max}}{\mathcal{B}(E_{CM}) \frac{1}{E_{CM}} \mathcal{I}(\frac{E_{CM,rec}}{E_{CM}}) \,\mathrm{d} E_{CM}}
\end{equation}
where $\mathcal{I}(x)$ is the distribution of the fractional CM energy losses due to ISR. $\mathcal{I}(x)$ is approximately independent of $E_{CM}$.
Due to the finite energy resolution of the luminometer, the reconstructed spectrum is smeared, which can be represented as a convolution with a normalized Gaussian\footnote{Strictly speaking, the smearing width depends on the deposited energy of the showers. However, as only a relatively narrow energy range is being analyzed here, the smearing width will be treated as being approximately constant.}.
\begin{equation}
\label{eq-ecm-rec}
h^*(E_{CM,rec}) = \frac{1}{\sqrt{2\pi} \sigma} \int\limits_{0}^{\infty} h(E'_{CM,rec}) \exp \left(-\frac{(E_{CM,rec}-E'_{CM,rec})^2}{2 \sigma^2} \right) \,\mathrm{d} E'_{CM,rec}
\end{equation}
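A minimal numerical sketch of the two folding steps above, assuming toy input distributions, a uniform energy grid, and a constant smearing width (all illustrative choices):

```python
import numpy as np

def fold_isr_and_smear(E, B, isr_pdf, sigma):
    """Discretize Eq. (eq-ecm-det): fold the Bhabha spectrum B(E_CM) with the
    fractional ISR loss distribution I(x), x = E_CM,rec / E_CM, then apply
    the Gaussian smearing of Eq. (eq-ecm-rec) with constant width sigma."""
    dE = E[1] - E[0]
    h = np.zeros_like(B)
    for j, Ecm in enumerate(E):            # loop over true E_CM bins
        if B[j] == 0.0:
            continue
        x = E / Ecm                         # reconstructed fraction for every bin
        h += B[j] * isr_pdf(x) / Ecm * dE   # (1/E_CM) * I(E_rec/E_CM) kernel
    # convolution with a normalized Gaussian (luminometer energy resolution)
    k = E - E[len(E) // 2]
    g = np.exp(-k**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.convolve(h, g, mode="same")
```

The sketch makes explicit that the ISR fold redistributes events only downwards in reconstructed energy, while the Gaussian smearing acts symmetrically.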
\begin{figure}[ht]
\centering
\includegraphics{scattering}
\caption{\label{fig-scattering}Schematic representation of the physical processes affecting the luminosity measurement}
\end{figure}
The term \emph{collision frame} will be used here for the frame of the two-electron system\footnote{Unless stated otherwise, electron always refers to electron or positron} after emission of Beamstrahlung and ISR and before emission of FSR\footnote{In reality, ISR and FSR cannot be cleanly separated even theoretically, due to the quantum interference between them. Thus in practice the collision frame is defined as the CM frame of the final electrons together with all radiation within a given tolerance angle with respect to the final electron momenta. The assumption of clean separation between ISR and FSR introduces a small uncertainty in the final result.}. The scattering angle in the collision frame is denoted $\theta^{coll}$. Due to the radiation prior to the collision, the collision frame has a non-zero velocity $\vec\beta _{coll}$, and the outgoing particle angles in the lab frame, $\theta^{lab}_1$ and $\theta^{lab}_2$, are not symmetric. In a significant fraction of events, the acollinearity is so large that the two particles are not detected in coincidence within the FV of the luminometer. In this way, Beamstrahlung and ISR induce an \emph{angular counting loss} of Bhabha events.
Finally the electromagnetic deflection (EMD) of the outgoing electrons in the field of the opposing bunch induces a small additional angular counting loss.
The outline of the procedure of the Bhabha-count analysis is as follows:
\begin{enumerate}
\item Reconstruct the CM energy $E_{CM,rec}$ and the collision-frame velocity $\beta_{coll}$ for each pair detected in the FV of the luminometer, from the angles and the measured particle energies.
\item Assign weights to events to correct for the acceptance reduction due to $\vec\beta _{coll}$, as shown in section \ref{sec-BS}.
\item Deconvolute the ISR energy loss $\mathcal{I}(x)$ from the spectrum $h^*(E_{CM,rec})$, in order to restore the CM energy spectrum of the Bhabha events, $\mathcal{B}^*(E_{CM})$ (section \ref{sec-deconvolution}).
\item Integrate $\mathcal{B}^*(E_{CM})$ over the energy range of measurement.
\item Correct the systematic effect of the finite energy resolution of the luminometer on the number of counts in the peak (section \ref{sec-Eresp}).
\end{enumerate}
The absolute luminosity in the measured energy range is then given by equation \ref{eq-luminosity}, and the approximate differential form of the luminosity spectrum with the luminometer energy smearing can be obtained as $\mathcal{L}^*(E_{CM}) \propto \mathcal{B}^*(E_{CM}) E^2_{CM}$. In the following section the precision of different correction steps will be tested by MC simulation, and expressed as relative contribution to the luminosity uncertainty $\Delta L_\alpha/L$ for each step $\alpha$.
\section{Analysis and correction procedures}
\label{sec-corr}
\subsection{Simulation methods used to test the analysis procedure}
\label{sec-sim}
To test the analysis procedure, Bhabha events in the bunch collision were simulated with the Guinea-Pig software \cite{Sch96}. The initial bunch coordinate and momentum distributions were taken from the simulation results of D. Schulte et al. \cite{Sch02}. The coordinate distribution covered more than 10 $\sigma$ bunch widths both in the horizontal and the vertical directions. The angular distribution of the particles in the bunch was quasi-Gaussian, with transverse emittance of 660 nm rad in the horizontal, and 20 nm rad in the vertical direction. The bunch collision was simulated in the CM frame of the colliding bunches, which is equivalent to a head-on collision with zero crossing angle. The beam overlap reduction due to the crossing angle is offset by the crab-crossing scheme.
The Bhabha events were produced using a method resembling that used by C. Rimbault et al. \cite{Rim07}:
\begin{itemize}
\item The initial four-momenta of the colliding $e^- e^+$ pairs are generated in Guinea-Pig by beam-beam simulation
\item The decision is made whether the Bhabha scattering will be realized in the collision, based on the $1/s$ proportionality of the Bhabha cross section.
\item If a Bhabha event is to be realized, the final four-momenta are picked from a file generated at 3 TeV by the BHLUMI generator \cite{Jad97}.
\item The final momenta are scaled to the CM energy of the colliding pair, rotated to match the collision axis, and boosted back to the lab frame.
\item Finally the outgoing Bhabha electrons are tracked to simulate the electromagnetic deflection.
\end{itemize}
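The $1/s$ scaling used in the realization decision can be sketched as follows; the reference probability $p_{ref}$ is an arbitrary illustrative normalization, not a value used in the simulation:

```python
def bhabha_probability(E_cm, E_ref=3000.0, p_ref=1.0e-3):
    """Probability that a Bhabha event is realized for a colliding pair,
    scaling as 1/s = 1/E_cm^2 relative to a reference energy E_ref.
    p_ref, the probability at E_ref, is a hypothetical normalization."""
    return p_ref * (E_ref / E_cm) ** 2
```

In the simulation loop, the event is then kept when a uniform random number falls below this probability, so pairs that have lost more energy to Beamstrahlung scatter more often.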
Nearly four million Bhabha events were generated. As the polar angles of the two electrons are often severely shifted in the opposite directions when the momenta are boosted into the lab frame, the polar-angle cuts in the generator frame were kept very wide, between 10 and 200 mrad. On the other hand, in order to avoid simulating a large number of events with very low scattering angles, post-generator cuts were applied in the collision frame, and only events with the scattering angle between 37 and 90 mrad were kept in the file. As the limiting angles of the luminometer FV at CLIC are 43 and 80 mrad \cite{Schw11}, these cuts leave safety margins of 6 and 10 mrad, respectively, to accommodate the small parallel shift of the polar angles that can be induced by the EMD and by the off-axis ISR.
The interaction with the detector was approximated in the following way:
\begin{itemize}
\item The four-momenta of all electrons and photons within 5 mrad of the most energetic shower were summed together on each side. The 5 mrad criterion corresponds closely to the Moli\`ere radius of the high-energy showers in the luminometer \cite{Sad08}. The resulting four-momenta were taken to represent the detected final particles.
\item The energy resolution of the luminometer was included by adding random Gaussian fluctuations to the final particle energies. The standard deviation of energy was parametrized as $\sigma_E / E = \sqrt{a^2/E + b^2}$. The value of the stochastic term is $a = 0.21$ in all relevant analyses \cite{Schw11, CDR12, Agu11}. The constant term $b$ is zero in Ref. \cite{Schw11}, and 1.1\% in Ref. \cite{Agu11}. The correction procedure was tested with three different values of the constant term $b$: 0, 0.35\% and 1.1\%. The results of these three tests agree within their respective statistical uncertainties. Only results for $b=0$ are presented in this paper.
\item The finite angular resolution of the luminometer was included by adding random fluctuations to the final particle polar angles. The nominal value of $\sigma_\theta = 2.2 \times 10^{-5} \text{\, rad}$ estimated for the ILC version of LumiCal \cite{Sad08} was used. Higher values for $\sigma_\theta$ were also tested, but no significant effect on the final uncertainties was found for $\sigma_\theta < 2 \times 10^{-4} \text{\, rad}$.
\end{itemize}
\subsection{Invariant counting in the collision frame}
\label{sec-BS}
The movement of the collision frame with respect to the lab frame is responsible for the acollinearity leading to the angular counting loss. The velocity of the collision frame with respect to the lab frame $\vec{\beta} _{coll}$, can be calculated from the measured polar angles. If $\beta_{coll}$ is taken to be collinear with the $z$-axis, the expressions for the boost of the Bhabha scattering angles into the lab frame give,
\begin{equation}
\label{eq-beta}
\beta_{coll} = \frac{\sin (\theta^{lab}_1 + \theta^{lab}_2)}{ \sin \theta^{lab}_1 + \sin \theta^{lab}_2 }
\end{equation}
Equation \ref{eq-beta} does not depend on any assumptions about the number of emitted ISR and Beamstrahlung photons, nor on their direction, apart from the assumption that the vector sum of their momenta is collinear with the z-axis\footnote{\label{foot-radial} Strictly speaking, $\vec{\beta} _{coll}$ has a small radial component $\beta_\rho$, which is larger than 0.01 in only about 5 permille of cases. However, the influence of $\beta_\rho$ on the polar angles of the Bhabha pair is almost indistinguishable from an additional axial boost. Thus for the purpose of recovering the counting loss due to the acollinearity, $\beta_{coll}$ is approximately treated as a scalar quantity.}.
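Equation \ref{eq-beta} translates directly into code; here $\theta^{lab}_1$ and $\theta^{lab}_2$ are the standard polar angles of the two particles in the lab frame (one near 0, the other near $\pi$):

```python
import math

def beta_coll(theta1_lab, theta2_lab):
    """Axial velocity of the collision frame, Eq. (eq-beta), reconstructed
    from the measured lab-frame polar angles of the Bhabha pair."""
    return (math.sin(theta1_lab + theta2_lab)
            / (math.sin(theta1_lab) + math.sin(theta2_lab)))
```

For a perfectly collinear pair ($\theta^{lab}_2 = \pi - \theta^{lab}_1$) the numerator vanishes and $\beta_{coll} = 0$, as expected.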
If events from a subset characterized by a given $\beta_{coll}$ are plotted in the $|\tan \theta_2|$ vs. $|\tan \theta_1|$ graph, they lie on a line displaced from the central diagonal, as schematically represented by the dashed line in figure \ref{fig-asymmetry}. As can be seen from the figure, the range of accepted scattering angles decreases with increasing $\beta_{coll}$. The effective limiting angles $\theta^{coll}_{\min}$ and $\theta^{coll}_{\max}$ for the subset of events characterized by a given $\beta_{coll}$ are obtained by boosting $\theta_{\min}$ and $\theta_{\max}$ into the collision frame.
\begin{figure}[ht]
\centering
\includegraphics{asymmetry}
\caption{\label{fig-asymmetry}Schematic representation of the distortion of the polar angles due to the movement of the collision frame. The box represents the region in which both electrons hit the FV, and the dashed line represents the event subset characterized by a given $\beta_{coll}$. $\theta^{coll}_{\min}$ and $\theta^{coll}_{\max}$ denote the effective limiting scattering angles for this subset.}
\end{figure}
To account for the smaller acceptance of the events characterized by a given $\beta_{coll}$, every event has to be weighted with the appropriate correction factor. In this way, the number of events between $\theta_{\min}$ and $\theta_{\max}$ in the collision frame is recovered for each $\beta_{coll}$ subset separately. The weighting factor is defined as,
\begin{equation}
\label{eq-w}
w(\beta_{coll}) = \frac{\int\limits^{\theta_{\max}}_{\theta_{\min}} \frac{\,\mathrm{d}\sigma}{\,\mathrm{d}\theta} \,\mathrm{d}\theta }{\int\limits^{\theta^{coll}_{\max}}_{\theta^{coll}_{\min}} \frac{\,\mathrm{d}\sigma}{\,\mathrm{d}\theta} \,\mathrm{d}\theta}.
\end{equation}
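With the angular differential cross section approximated as $\mathrm{d}\sigma/\mathrm{d}\theta \approx \theta^{-3}$ (the approximation used for the test later in this section), both integrals in equation \ref{eq-w} are analytic, and the weight can be sketched as follows, with the effective collision-frame limits taken as inputs:

```python
def acceptance_weight(th_min, th_max, th_min_coll, th_max_coll):
    """Correction weight of Eq. (eq-w) for dsigma/dtheta ~ theta^-3,
    for which the integral is analytic:
    int theta^-3 dtheta = (th_lo^-2 - th_hi^-2) / 2.
    th_min_coll and th_max_coll are the FV limits boosted into the
    collision frame for the event's beta_coll."""
    num = th_min**-2 - th_max**-2
    den = th_min_coll**-2 - th_max_coll**-2
    return num / den
```

Since the effective angular range shrinks with growing $\beta_{coll}$, the weight is $\geq 1$, recovering the counts lost in each $\beta_{coll}$ subset.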
The FV selection function is thus defined as\footnote{By the standard definition of the polar angle $\theta$, the interval corresponding to the FV on the forward side of the IP is $(\theta_{\min}, \theta_{\max})$, and on the backward side, $(\pi - \theta_{\max}, \pi - \theta_{\min})$},
\begin{equation}
\label{eq-xi-ang}
\xi_{FV} = \left\{
\begin{array}{cl}
w & ; \theta^{lab}_{1,2} \in FV \\
0 & ; otherwise \\
\end{array} \right.
\end{equation}
Using this FV selection function, the number of events $N$ satisfying the condition $\theta^{coll} \in (\theta_{\min}, \theta_{\max})$ in the collision frame is reconstructed. The corresponding function $\zeta_{FV}$ for the cross-section integration is thus,
\begin{equation}
\label{eq-zeta-ang}
\zeta_{FV} = \left\{
\begin{array}{cl}
1 & ; \theta^{coll} \in (\theta_{\min}, \theta_{\max}) \\
0 & ; otherwise \\
\end{array} \right.
\end{equation}
\subsubsection{Test of the collision-frame counting method}
To test the counting method, histograms of $E_{CM,rec}$ reconstructed from kinematic parameters of the detected particles were generated as follows:
\begin{description}
\item [Control histogram]: All events with the scattering angle in the collision frame $\theta^{coll}$ such that $\theta_{\min} < \theta^{coll} < \theta_{\max}$ are accepted. Therefore this histogram is not affected by counting losses due to Beamstrahlung and ISR. This is, of course, only possible in the simulation.
\item [Uncorrected histogram]: Events hitting the FV of the luminometer in the lab frame.
\item [Corrected histogram]: Events hitting the FV of the luminometer in the lab frame, stored with the weight $w$ calculated according to equation \ref{eq-w}.
\end{description}
The full kinematical information, including the energy of the detected final particle, was used for the reconstruction of the CM energy. To calculate the correction weight $w$, the approximate expression for the angular differential cross section $d\sigma / d\theta \approx \theta^{-3}$ was used. The results are shown in figure \ref{fig-BS-corr}: the control spectrum is plotted in black, the spectrum affected by the counting loss in red, and the corrected spectrum in green.
The blue line in figure \ref{fig-BS-corr} represents the events inaccessible to the correction due to their high values of $\beta_{coll}$. In the subsets of events characterized by $\beta_{coll}$ above a certain threshold, at least one electron is always lost (see figure \ref{fig-asymmetry}). However, for such events, the Beamstrahlung-ISR energy loss is also above a certain minimum, so that they are only present in significant number below 2200 GeV. A small number of high-$\beta_{coll}$ events are also present at energies above 2200 GeV, as seen in the zoomed figure on the right, where these events are scaled by a factor 100. In these events, $\vec{\beta} _{coll}$ has a relatively high radial component, due to the off-axis radiation before collision. This increases the acollinearity of such events relative to other events with similar energy loss (see footnote \ref{foot-radial}). The relative contribution of these events to the peak integral above 95\% of the nominal CM energy is of the order of $2\times 10^{-5}$.
\begin{figure}[b]
\centering
\includegraphics{BS-Corr-loss}
\caption{\label{fig-BS-corr}Correction of the counting loss due to Beamstrahlung and ISR. Left: whole spectrum; right: zoom on energies above 2200 GeV. Black: Simulated control spectrum without counting loss due to Beamstrahlung and ISR; red: Reconstructed $E_{CM}$ spectrum affected by the counting loss; green: Reconstructed spectrum with correction for the counting loss due to Beamstrahlung and ISR; blue: events inaccessible to the correction due to high $\beta_{coll}$ (see text).}
\end{figure}
Before correction, the counting loss in the peak integral above 95\% of the nominal CM energy was 3.8\%. After correction, the remaining relative deviation in the peak with respect to the control spectrum is $(-0.1 \pm 0.4 (\text{stat.})) \times 10^{-3}$. In the tail between 80\% and 90\% of the nominal CM energy, the counting loss before correction was 43.1\%. After correction, the remaining relative deviation in the tail is $(-3.6 \pm 1.8 (\text{stat.})) \times 10^{-3}$, which includes a deviation of $(-2.7 \pm 0.1) \times 10^{-3}$ due to the lost events. The statistical uncertainty of the remaining deviation was estimated taking into account the correlations between the corrected and the control spectra. The precision of the Beamstrahlung-ISR correction is of the order of permille despite the presence of the following sources of systematic uncertainty of the correction:
\begin{itemize}
\item The assumption that the deformation of the Bhabha angles induced by Beamstrahlung and ISR is well described as a Lorentz boost along the beam axis (this assumption is the source of the "lost" events in the peak),
\item The implicit assumption that the cluster around the most energetic shower always contains the Bhabha electron. In a small fraction of events, this is not the case, and the reconstructed polar angles $\theta_{1,2}^{lab}$ do not correspond to the final electron angles.
\item The use of the approximate angular differential cross section for the Bhabha scattering in the calculation of $w$,
\item Assumption that all ISR is lost, and all FSR is detected (this assumption has, in principle, an influence on the calculation of $\beta_{coll}$, and consequently on $w$).
\end{itemize}
\subsection{Deconvolution of the ISR energy loss}
\label{sec-deconvolution}
After correcting for the angular counting loss, the ISR energy loss can be deconvoluted from the resulting spectrum $h(E_{CM,rec})$ to restore the CM energy spectrum of the Bhabha events $\mathcal{B}^*(E_{CM})$\footnote{The Bhabha-event spectrum is marked with a star here, because it is smeared by the finite energy resolution of the luminometer. See section \ref{sec-Eresp}.}. When data according to equation \ref{eq-ecm-det} is binned in $N$ sufficiently narrow bins, it takes approximately the discrete form,
\begin{equation}
\label{eq-h-disc}
h_i \approx \sum\limits_{j=1}^N \mathcal{I}_{ij} \mathcal{B}^*_j
\end{equation}
As the $\mathcal{I}_{ij}$ matrix has triangular form, equation \ref{eq-h-disc} can be solved for $\mathcal{B}^*_j$ exactly, using the Jacobi method. The solution proceeds from the high-energy towards the lower-energy bins, introducing an increasing uncertainty towards lower energies.
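With bins ordered by increasing energy, $\mathcal{I}_{ij}$ is upper triangular (the reconstructed energy never exceeds the true energy), so the system can be solved exactly by substitution from the highest-energy bin downward, as in this sketch:

```python
import numpy as np

def deconvolve_isr(I, h):
    """Solve h_i = sum_j I_ij B_j for B (Eq. eq-h-disc), with I upper
    triangular and nonzero on the diagonal.  The solution proceeds from
    the highest-energy bin towards the lower-energy bins."""
    n = len(h)
    B = np.zeros(n)
    for i in range(n - 1, -1, -1):
        B[i] = (h[i] - I[i, i + 1:] @ B[i + 1:]) / I[i, i]
    return B
```

Each bin inherits the uncertainties of all higher-energy bins, which is the origin of the growing uncertainty towards lower energies.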
To obtain $\mathcal{I}_{ij}$, the function $\mathcal{I}(x)$ was parametrized by the beta distribution used for the parametrization of the beam spectra of linear colliders \cite{Ohl97},
\begin{equation}
\label{eq-g}
\mathcal{I}(x) = a_0 \delta(x-1) + \left\{
\begin{array}{cl}
a_1 x^{a_2} (1-x)^{a_3} & ; x<1 \\
0 & ; x \ge 1 \\
\end{array} \right.
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics{isr}
\caption{\label{fig-isr}Fit of the relative energy-loss distribution due to ISR.}
\end{figure}
The parameters were obtained by fitting equation \ref{eq-g} to the fractional CM energy distribution after ISR, reconstructed from the same BHLUMI data set as used in Guinea-Pig. The fit was performed with variable binning in order to have sufficiently fine binning near $x=1$, while avoiding large differences in statistical uncertainties for individual bins. The data histogram was first normalized to the unit integral. The results are shown in figure \ref{fig-isr}. The parameter $a_0$ was obtained as the ratio of the number of counts in the narrow peak above $x = 0.99995$ to the number of counts in the entire spectrum, and the remaining coefficients were obtained by fitting the function to the data in the range (0.7, 0.99995)\footnote{\label{ft-cut} The angular generator cuts in the lab frame cause significant losses in the distribution for $x<0.5$, because high energy loss in ISR emission correlates with high acollinearity. This affects the overall normalization, and thus the value of $a_0$. The value of $a_0$ obtained here is appropriate for the deconvolution of the simulated spectrum, where the same set of BHLUMI samples was used. However, for the analysis of real experimental data, ideally the distribution without cuts in the lab frame should be used.}\footnote{The functional form of equation \ref{eq-g} suggests that the ratio $a_0/a_1$ can be fixed by the normalization requirement. However, the beta distribution fails to properly describe the form of $\mathcal{I}(x)$ for $x < 0.7$ (regardless of the angular cuts in the lab frame discussed above), so that the overall norm is different from the integral of the beta distribution extrapolated from the fit. Therefore, $a_1$ was allowed to vary freely in the fit.}.
\subsubsection{Test of the deconvolution procedure}
In this test, the following histograms were generated:
\begin{description}
\item[Control histogram] was filled with simulated CM energies before ISR emission, and then smeared with a normalized Gaussian with constant width corresponding to the luminometer energy-resolution at the peak energy.
\item[Histogram with ISR energy loss] $h(E_{CM,rec})$ is the same as the control histogram from section \ref{sec-BS} -- filled with energies reconstructed from the final-state kinematics, and with inclusion of the luminometer energy resolution.
\item[Deconvoluted histogram] was obtained by solving the system of linear equations represented by equation \ref{eq-h-disc}, taking the binned data of the affected histogram as $h_j$.
\end{description}
For each histogram, event selection was made on the scattering angles in the collision frame, so that the Beamstrahlung-ISR angular counting loss is not present. This was done in order to assess the accuracy of the deconvolution separately from the Beamstrahlung-ISR counting-loss correction. Results are shown in figure \ref{fig-deconv}.
\begin{figure}
\centering
\includegraphics{deconv}
\caption{\label{fig-deconv}Deconvolution of the ISR deformation of the luminosity spectrum. Yellow: the control histogram -- simulated $E_{CM}$ before emission of ISR, smeared with a normalized Gaussian; black: the histogram affected by the ISR energy loss -- reconstructed $E_{CM}$ from the detected showers; green: deconvoluted spectrum.}
\end{figure}
Before deconvolution, the relative counting loss in the peak above 95\% of the nominal CM energy was 23.4\%. After deconvolution, the relative remaining deviation of the peak integral with respect to the control histograms is $(+1.3 \pm 2.1) \times 10^{-3}$. In the tail between 80\% and 90\% of the nominal CM energy, the ISR energy loss increases the count by 14.5\%. After deconvolution, the remaining deviation in the tail is $(-2.3 \pm 3.9) \times 10^{-3}$.
The contributions from the uncertainties of the fitted parameters of the ISR energy-loss function $\mathcal{I}(x)$ were added to the statistical uncertainty of the remaining deviation after deconvolution. The full covariance matrix of the fit parameters was used, together with the partial derivatives of the count estimated by variation of the fit parameters by one sigma, one parameter at a time. With the statistics of about four million generated Bhabha events, the uncertainties due to the fit parameters are $(\Delta N/N)_{peak,ISRfit} = 0.53 \times 10^{-3}$ for the peak, and $(\Delta N/N)_{tail,ISRfit} = 0.07 \times 10^{-3}$ for the tail.
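The propagation described above reduces to the standard quadratic form with the fit covariance matrix; a minimal sketch, with the one-sigma derivative estimates taken as inputs:

```python
import numpy as np

def count_uncertainty(dN_da, cov):
    """Propagate fit-parameter uncertainties to the count N:
    (Delta N)^2 = sum_ij (dN/da_i) C_ij (dN/da_j), with dN/da_i
    estimated by varying one parameter at a time by one sigma."""
    d = np.asarray(dN_da, dtype=float)
    return float(np.sqrt(d @ np.asarray(cov, dtype=float) @ d))
```

Using the full covariance matrix rather than the diagonal alone matters here because the fitted parameters are strongly correlated.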
\subsection{Effect of the luminometer energy resolution on the counting rate in the peak}
\label{sec-Eresp}
The finite energy resolution of the luminometer introduces a counting bias in two ways:
\begin{enumerate}
\item \label{slope-cut} By asymmetric redistribution of events from each side of the sharp energy cut $E_{cut}$ used to define the energy range, due to the slope of the underlying distribution at the position of the cut.
\item \label{tail-cut} By smearing the luminosity peak so that a small part of it is cut off below $E_{cut}$.
\end{enumerate}
The second effect is difficult to correct precisely because of the strong dependence on the position of the energy cut, the uncertainties of the inherent width of the luminosity peak and of the energy resolution, and the strong correlations between the fitted parameters that dominate the spectrum in the peak area (see Eqs. \ref{eq-lumispec} and \ref{eq-F-param}). However, if the energy cut is made at a sufficient distance from the peak, the second effect becomes negligible, and the energy-resolution effect can be precisely corrected based on the parametrization of the functional form of the experimental spectrum after deconvolution of ISR. The deconvoluted spectrum $\mathcal{B}^*(E_{CM})$ is related to the underlying Bhabha spectrum $\mathcal{B}(E_{CM})$ by,
\begin{equation}
\label{eq-lumispec}
\mathcal{B}^*(E_{CM}) = \frac{1}{\sigma \sqrt{2 \pi}} \int\limits_{0}^{\infty} \mathcal{B}(E') \exp \left(- \frac{(E_{CM}-E')^2 }{ 2 \sigma^2 } \right) \,\mathrm{d} E'
\end{equation}
If the inherent width of the luminosity peak is neglected, $\mathcal{B}(E_{CM})$ can be parametrized by the beta distribution,
\begin{equation}
\label{eq-F-param}
\mathcal{B}(E_{CM}) = b_0 \delta(E_{CM}-E_0) + \left\{
\begin{array}{cl}
b_1 E_{CM}^{b_2} (E_0-E_{CM})^{b_3} & ; E_{CM}<E_0 \\
0 & ; E_{CM} \ge E_0 \\
\end{array} \right.
\end{equation}
One may recall here that the use of a constant standard deviation $\sigma$ in equation \ref{eq-lumispec} is an approximation, as $\sigma$ depends on the particle energy, and is thus different for different $E_{CM}$. The systematic error induced by the energy resolution of the luminometer can now be expressed as,
\begin{equation}
\label{eq-err-Eres}
\frac{\Delta N_{Eres}}{N} = \frac{ \int\limits_{E_{cut}}^{ \infty } \frac{E_{CM}^2}{E_0^2} (\mathcal{B}^*(E_{CM}) - \mathcal{B}(E_{CM})) \,\mathrm{d} E_{CM} }{\int\limits_{E_{cut}}^{ \infty } \frac{E_{CM}^2}{E_0^2} \mathcal{B}(E_{CM}) \,\mathrm{d} E_{CM} }
\end{equation}
This expression can now be estimated by numerical integration based on the fitted parameters of $\mathcal{B}^*(E_{CM})$ (Eqs. \ref{eq-lumispec} and \ref{eq-F-param}). Even though the reproduction of the integral count by integration of the fitted function has limited accuracy in principle, the relative error (equation \ref{eq-err-Eres}) is predicted rather accurately. The fit was performed on the deconvoluted histogram with the fixed parameters $E_0 = 3 \text{ TeV}$ and $\sigma = 13.7 \text{ GeV}$, while $b_{0-3}$ were allowed to vary freely. The value of $\sigma$ was obtained by fitting the data in the region of the luminosity peak. It contains contributions from both the energy resolution of the luminometer and the inherent width of the peak. Neglecting the inherent width introduces an uncertainty of 20\% in the magnitude of the correction. This represents a conservative estimate of the precision with which $\sigma$ can be known and, as shown below, the results obtained with this assumption are acceptable.
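The numerical estimate of equation \ref{eq-err-Eres} can be sketched as follows, assuming the spectra $\mathcal{B}$ and $\mathcal{B}^*$ are tabulated on a uniform energy grid:

```python
import numpy as np

def energy_resolution_bias(E, B, B_star, E_cut, E0):
    """Relative counting bias of Eq. (eq-err-Eres): the (E/E0)^2-weighted
    difference between the smeared spectrum B* and the unsmeared B,
    integrated above E_cut, relative to the unsmeared count."""
    dE = E[1] - E[0]
    w = (E / E0) ** 2
    sel = E >= E_cut
    num = np.sum((w * (B_star - B))[sel]) * dE
    den = np.sum((w * B)[sel]) * dE
    return num / den
```

The $(E/E_0)^2$ factor carries the $1/s$ weighting of the Bhabha rate into the integrals, as in equation \ref{eq-err-Eres}.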
The relative deviation of the count in the reconstructed peak is shown in the left pane of figure \ref{fig-Eresp} as a function of the relative distance of the energy cut to the peak energy in percent (black line). The predicted deviation according to equation \ref{eq-err-Eres} is also shown for comparison (blue line). There is an excellent agreement between the predicted and the simulated deviations.
To keep a safe distance from the peak, only points for which $E_{cut}$ is more than 2.5\% away from $E_0$, corresponding to about 5 $\sigma$ of the fitted peak, will be considered in the following.
The fluctuations of the simulated deviation curve in figure \ref{fig-Eresp} are of statistical nature. These fluctuations can be used as an external measure of the statistical uncertainty of the counting bias in the simulation. In the right pane of figure \ref{fig-Eresp}, the histogram of these fluctuations is shown, calculated as remaining deviations after correction, for $E_{cut}$ more than 2.5\% away from $E_0$. The RMS of the fluctuations corresponds to a relative statistical uncertainty of $0.24 \times 10^{-3}$ with respect to the peak count in the top 5\%. The relative deviation in the top 5\% estimated from equation \ref{eq-err-Eres} is $-0.29 \times 10^{-3}$. The mean remaining bias after correction is $(0.05 \pm 0.03) \times 10^{-3}$.
A similar procedure was applied to estimate the relative bias and the remaining uncertainty in the tail region from 80\% to 90\% of $E_0$. The RMS of the fluctuations is $0.79 \times 10^{-3}$, the uncorrected deviation is $+0.32 \times 10^{-3}$, and the remaining deviation after correction is $(0.09 \pm 0.09) \times 10^{-3}$.
\begin{figure}
\centering
\includegraphics{Eres-plot}
\caption{\label{fig-Eresp}Left: relative deviation of the peak count induced by the luminometer energy resolution in the reconstructed spectrum as a function of the peak region expressed as fraction of the nominal CM energy $E_0$, compared to the predicted value based on the fitted spectrum. Right: histogram of the normalized remaining deviations of the peak count after correction (see text), for $E_{cut}$ more than 2.5\% away from $E_0$.}
\end{figure}
\subsection{The Electromagnetic Deflection}
\label{sec-EMD}
To estimate the counting loss due to the EMD, the angular selection was applied once before and once after the deflection in the simulation, and the relative difference in the resulting number of events was calculated. The EMD counting loss above 95\% of the nominal CM energy is $(-0.50 \pm 0.05) \times 10^{-3}$. In the tail from 80 to 90\% of the nominal CM energy, the EMD counting loss is $(-1.08 \pm 0.08) \times 10^{-3}$.
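The quoted loss is simply the relative difference between the event counts before and after the deflection. A minimal sketch of how such a relative loss and a simple binomial estimate of its statistical uncertainty could be computed is given below; the function name and the event numbers in the usage note are illustrative, not those of the simulation.

```python
import math

def relative_loss(n_before, n_after):
    """Relative counting loss (n_after - n_before) / n_before, with a
    simple binomial estimate of its statistical uncertainty; assumes
    the 'after' sample is a subset of the 'before' sample."""
    loss = (n_after - n_before) / n_before
    p = n_after / n_before                     # surviving fraction
    err = math.sqrt(p * (1.0 - p) / n_before)  # binomial error on p
    return loss, err
```

For instance, 999500 surviving events out of $10^6$ would give a loss of $-0.50 \times 10^{-3}$ with a statistical error of about $0.02 \times 10^{-3}$.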
\section{Conclusions}
A method of invariant counting of Bhabha events was presented. The number of Bhabha events within a given range of scattering angles in the collision frame, and in a given range of $E_{CM}$ is reconstructed. The corresponding limits can be used for the cross-section integration in a straightforward way. In this way the luminosity expression (equation \ref{eq-luminosity}) is essentially insensitive to the beam-beam effects.
The remaining systematic uncertainties of the Bhabha count with the presented methods were estimated by MC simulations. In addition, the systematic uncertainty due to the EMD-induced counting loss was estimated and found to be small. The remaining relative errors in the top 5\%, as well as in the tail from 80 to 90\% of the nominal CM energy are listed in table \ref{tab-unc}. Beam-beam effects in the luminosity measurement at 3 TeV CLIC can be corrected and the luminosity spectrum reconstructed with a few permille precision above ca. 73\% of the nominal CM energy.
\begin{table}[ht]
\caption{\label{tab-unc}Relative remaining error after correction of different systematic effects in luminosity measurement in the peak above 95\% and the tail from 80 to 90\% of the nominal CM energy. The fraction of events with high $\beta_{coll}$ constitutes part of the remaining bias of the Beamstrahlung-ISR angular loss correction. The last column gives the total remaining error when the high $\beta_{coll}$ contribution is corrected.}
\begin{center}
\begin{threeparttable}
\begin{tabular}{@{} c l R-{2}{3} @{$\pm$} l R-{2}{2} @{$\pm$} l @{}}
\# & Effect & \multicolumn{2}{c}{Top 5\%} & \multicolumn{2}{c}{80 - 90\% of $E_0$} \\
& & \multicolumn{2}{c}{($10^{-3}$)} & \multicolumn{2}{c}{($10^{-3}$)} \\
\hline
1 & Beamstrahlung-ISR angular loss & -0.1 &0.4 & -3.6&1.8 \\
2 & High $\beta_{coll}$ \tnote{a} & -0.019&0.008 & -2.7&0.1 \\
3 & ISR energy-loss & +1.3 &2.0 & -2.3&3.9 \\
4 & Energy resolution & 0.05&0.03 & 0.09&0.09 \\
5 & EMD counting loss (uncorrected) & -0.50&0.05 & -1.08&0.08 \\
\hline
& Total & 1.4 &2.0 & 4.4 &4.3 \\
& Total (corrected for \#2) & 1.4 &2.0 & 2.7 &4.3 \\
\hline
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}
In a separate study for the case of ILC \cite{Luk12b}, it was shown that the method presented here is robust with respect to unaccounted-for bunch size and charge variations up to 20\%, as well as the vertical and horizontal offset up to 1 $\sigma$ bunch height and width, respectively.
\section{Acknowledgements}
This work has been funded by the Ministry of science and education of the Republic of Serbia under the project Nr. OI 171012, "Physics and detector studies in HEP experiments". Additional travel grants from the CLIC Physics and Detectors study are gratefully acknowledged. Thanks to A. Sailer and D. Schlatter for many useful remarks during the review of the LCD note.
\bibliographystyle{JHEP}
\section{Introduction}
Melting, or thermal denaturation of DNA, is the process by which the
two strands of the DNA molecule become fully separated upon an
increase of the temperature \cite{review1,Kafri-2002}. At low
temperatures the strands are partially unbound by forming
fluctuating loops where the two strands are locally separated. As
the melting temperature $T_M$ is approached the average loop size
increases, yielding full denaturation at $T_M$. Melting of DNA has
been extensively studied over the years both theoretically and
experimentally. The natural order-parameter of the denaturation
transition is the fraction of bound base-pairs. This was measured
using specific-heat and UV absorption experiments. Simple models
have yielded theoretical expressions for thermodynamic properties.
Two main approaches have been developed. One, known as the
Peyrard-Bishop model, considers the two strands as directed polymers
interacting via a short-ranged potential \cite{PB}. One then focuses
on the distance between complementary pairs as the melting
transition is approached. The other, known as the Poland-Scheraga
(PS) model, represents the DNA molecule as an alternating sequence
of bound segments and open loops, and focuses on the fraction of
bound base-pairs \cite{ps}. Within the PS approach self-avoiding
interactions, which are inherently long-range, may be taken into
account. As has been shown, these interactions affect the loop
entropy, which controls the nature of the melting transition
\cite{Fisher,Kafri-2000}.
Recently, single-molecule techniques such as optical tweezers
\cite{Bockelmann1,Bockelmann2}, magnetic traps
\cite{Prentiss1,Prentiss2} and Fluorescence Correlation Spectroscopy
(FCS) \cite{altan-bonnet,MetlerOleg,MetlerOleg2} have been used to
probe properties of the melting process. Other techniques, such as quenching, have also
been applied \cite{Zocchi}. Some of the experiments utilize an
external force to induce unzipping of the two strands and study
their dynamics. In others, the distance between two complementary
base-pairs is probed by FCS without applying an external force.
These experimental methods enable one to study not only bulk
properties but also microscopically fluctuating quantities.
Inspired by these experiments, theoretical treatments of dynamical
properties of DNA have been developed. Several studies have focused
on the dynamics of isolated loops away from the melting transition
\cite{hanke,kats} and at the transition \cite{Bar}. The survival
probability of an isolated loop has been calculated. A toy model for
the dynamics of interacting loops has also been introduced and
analyzed \cite{Livi}.
It has recently been shown that studying the loop dynamics may yield
information on the loop entropy \cite{Bar}. Within the PS approach
the dependence of the entropy of a loop on its length plays a
dominant role in determining the thermodynamic behavior near the
transition. On general grounds one can argue that the entropy of a
loop of length $n$ takes the form $S=k_B \log(\Omega(n))$, where
$\Omega(n)\sim s^n/{n^c}$ is the number of loop configurations. Here
$s$ is a model-dependent constant and $c$ is a universal exponent.
The numerical value of $c$ has been debated over the years. It was
found to be modified when the excluded-volume interactions, which
are long ranged in nature, are taken into account
\cite{ps,Fisher,Kafri-2000}. When interactions between loops are
neglected, and excluded volume interactions are taken into account
only within each loop an exponent $c \simeq 1.76$ was found
\cite{Fisher}. On the other hand, when excluded volume interactions
both within a loop and between the loop and the rest of the chain
are taken into account, the entropy exponent was found to increase
to $c \simeq 2.12$ \cite{Kafri-2002,Kafri-2000}. This latter result,
which predicts a first order denaturation transition for
homopolymers, has been verified numerically \cite{carlon}. While
numerical studies of the homopolymer model with excluded volume
interactions yield a clear first order transition \cite{Causo-2000},
a direct experimental measurement of $c$ is rather difficult and has
not been carried out so far. In \cite{Bar} it was shown that at the
melting transition the time dependence of the base-pair
autocorrelation function depends on the parameter $c$. The base-pair
autocorrelation function is defined as $C_i(t)=\langle
u_i(t+\tau)u_i(\tau) \rangle$ where $u_i(t)=1,0$ is a variable which
indicates if base pair $i$ is open $(1)$ or closed $(0)$ at time
$t$, and $\langle \cdot \rangle$ denotes an average over $\tau$. The
behavior of the autocorrelation function was studied theoretically
away from the melting transition \cite{hanke,kats} and may, in
principle, be obtained experimentally by FCS studies. In these
experiments the state of a specific base-pair is monitored. So far,
FCS experiments have been restricted to short molecules
\cite{altan-bonnet}. Measuring the exponent $c$ requires extending
these studies to longer molecules.
In the present paper we elaborate on and extend the analysis
presented in \cite{Bar} for the dynamical behavior of homopolymers
at the melting transition. The dynamics of a single loop is studied
using a simple model, whose validity is then verified in detail
using numerical simulations. Within this model the entropy exponent
$c$ is introduced as a free parameter which may be chosen at will.
While the studies in \cite{Bar} were tested numerically only for the
case $c=3/2$, here we test the robustness of the results for models
with arbitrary values of $c$. We then consider a toy model, similar
to the one considered in \cite{Livi}, which indicates that the
results still hold when the interaction between loops is taken into
account.
The paper is organized as follows: In Sec. 2 we study the single loop model using both a scaling
argument and microscopic models. In Sec. 3 results for the many loops model are presented. Finally,
we end with a brief summary.
\section{Single Loop Dynamics}
We start by considering the dynamics of an isolated loop. In this
approach one ignores processes like merging of loops and the
splitting of a large loop into two or more smaller ones. This may be
justified by the fact that the cooperativity parameter, which
controls the statistical weight of opening a new loop, is estimated
to be rather small, $\sigma_0 \approx 10^{-4}$ \cite{carlon2}. Thus
splitting a loop into two is unfavorable. Also, the average distance
between loops, which within the PS model is proportional to
$1/\sigma_0$, is large, making the independent loop approximation
plausible. In Sec. \ref{secManyLoop} we introduce a simple model to
effectively take into account the interactions between loops and
show that these interactions do not modify the results obtained
within the single loop approach.
Within the single loop dynamics, we assume that a loop may change
its length by closing or opening of base pairs at its two ends. It
survives as long as its two ends do not meet. Let $G(n_0,t)$ be the
survival probability of a loop of initial length $n_0$ at time $t$.
As discussed above, the quantity of interest is the equilibrium
autocorrelation function
\begin{equation}
C(t) \approx \frac{\sum_{n_0=1}^{\infty}P_{eq}(n_0)n_0 G(n_0,t)
}{\sum_{n_0=1}^{\infty}P_{eq}(n_0)n_0} \;,\label{eqn:corr}
\end{equation}
where for simplicity of notation we have dropped the site index $i$.
Here $P_{eq}(n_0)$ is the probability of having a loop of length
$n_0$ in equilibrium. Hence, $n_0P_{eq}(n_0)$ is the probability of
a particular site to belong to a loop of length $n_0$. Note that we
assume that site $i$ remains open as long as the loop survives. This
approximation does not affect the behavior of the autocorrelation
function in the scaling limit.
We proceed by first presenting a scaling analysis demonstrating that
in the case of a homopolymer and at criticality, the autocorrelation
function decays at large $t$ as $C(t)\sim t^{1-c/2}$ for $c>2$,
while it remains finite, $C(t)=1$, for $c<2$. These results are then
tested and verified using numerical simulations for various values
of $c$.
\subsection{Scaling Analysis}
In the case of a homopolymer and at criticality it has been shown
that the equilibrium loop size distribution is $P_{eq}(n)\sim
1/n^{c}$. To estimate the survival probability of a loop of length
$n_0$, we consider dynamics under which the loops are
non-interacting and do not split into a number of smaller loops.
Similar to \cite{hanke,kats} we further assume that the loop is in a
local thermal equilibrium at any given time during its evolution.
The validity of this assumption will be discussed in detail below.
The loop free energy is thus given by $f \propto c \ln n$. Within
the framework of the Fokker-Planck equation, the probability
distribution of finding a loop of size $n$ at time $t$, $P(n,t)$, is
given by
\begin{equation}
\frac{dP(n,t)}{dt} = D \ddn{} \left[\frac{c}{n} +
\ddn{}\right]P(n,t) \;,
\label{eqn:FPE}
\end{equation}
where $D$ is the diffusion constant. Here we have taken the
continuum limit and assumed the dynamics to be over-damped. This
equation has to be solved with the boundary condition $P(0,t)=0$ and
initial condition $P(n,0)=\delta(n-n_0)$. The survival probability
of the loop is then given by $G(n_0,t)=\int_{0}^{\infty}dnP(n,t)$.
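Besides the scaling analysis below, Eq. (\ref{eqn:FPE}) can be integrated directly. The following is a minimal explicit finite-difference sketch; the grid size, time step, and right-boundary treatment are illustrative choices, not those of the paper's discrete solution.

```python
import numpy as np

def survival_probability(n0, c, D=1.0, t_max=200.0, N=400, dt=0.05):
    """Explicit finite-difference integration of the loop Fokker-Planck
    equation dP/dt = D d/dn [ (c/n) P + dP/dn ], with an absorbing
    boundary at n = 0 and a delta-function initial condition at n = n0.
    Returns the times and the survival probability G(n0,t) = sum_n P(n,t)."""
    n = np.arange(1, N + 1, dtype=float)
    P = np.zeros(N)
    P[n0 - 1] = 1.0
    times, G = [0.0], [1.0]
    for step in range(1, int(t_max / dt) + 1):
        Pl = np.concatenate(([0.0], P[:-1]))     # P(n-1); P(0) = 0 absorbs
        Pr = np.concatenate((P[1:], [P[-1]]))    # P(n+1); zero gradient at n = N
        nl = np.concatenate(([1.0], n[:-1]))     # dummy 1.0: multiplied by 0
        nr = np.concatenate((n[1:], [n[-1] + 1.0]))
        drift = 0.5 * c * (Pr / nr - Pl / nl)    # centred d/dn [(c/n) P]
        diff = Pr - 2.0 * P + Pl                 # centred d^2 P / dn^2
        P = P + dt * D * (drift + diff)
        times.append(step * dt)
        G.append(P.sum())
    return np.array(times), np.array(G)
```

Comparing $G$ for different $n_0$ at a fixed time reproduces the expected trend that larger loops survive longer.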
Within the scaling approach the survival probability is written in
the form
\begin{equation}
G(n_0,t) = g\left(Dt/n_0^z\right) \;,
\label{eqn:scalinsurvivh}
\end{equation}
with $z=2$. In Appendix 1 we show that the asymptotic behavior of
the scaling function for small and large values of the argument is
\begin{eqnarray}
g(x)\sim 1 & \;\;\;\;\;\;\;\; & {\rm for} \; x \ll 1\\
g(x)\sim x^{-\frac{1+c}{2}} && {\rm for} \; x \gg 1 \;. \label{eq:asympsurvivh}
\end{eqnarray}
The autocorrelation function (Eq. (\ref{eqn:corr})) may thus be
written as
\begin{equation}
C(t)\approx \frac{\int_1^{N}n_0^{1-c}g(Dt/n_0^2)dn_0}
{\int_1^{N}n_0^{1-c}dn_0} \;,
\end{equation}
where the system size $N$ is taken to infinity in the thermodynamic
limit. We first consider the long time behavior for $c\leq 2$. In
this case the integrals are controlled by the upper limit $N$, where
$g(Dt/n_0^2)\sim 1$. Both numerator and denominator diverge as
$N^{2-c}$ so that $C(t)\sim 1$ for $t \gg 1$. On the other hand for
$c>2$ both integrals are independent of the upper limit. Changing
variables to $y=n_0/\sqrt{Dt}$ yields
\begin{equation}
C(t)\approx
(Dt)^{1-c/2}\frac{1}{\avg{n_0}}\int_{1/\sqrt{Dt}}^{\infty}y^{1-c}g(y^{-2})dy
\;, \label{eqn:OLM_Crit_AC}
\end{equation}
where $\avg{n_0}$ is the average loop size. The asymptotic behavior
of $g(y^{-2})$ at small $y$ (Eq. (\ref{eq:asympsurvivh})) implies
that the integral converges for $t\rightarrow\infty$, yielding
$C(t)\sim t^{1-c/2}$. Hence
\begin{eqnarray}
C(t) \sim \left\{%
\begin{array}{ll}
1 & \hbox{for}\;\;\; c\leq 2 \\
t^{1-c/2} & \hbox{for}\;\;\; c > 2 \;.
\end{array}%
\right. \label{eq:autocorrhomo}
\end{eqnarray}
This expression suggests that measuring $C(t)$ at criticality may be
used to determine the entropy exponent $c$. In particular it can be
used to distinguish between the case of a continuous transition ($c
\leq 2$), where $C(t)=1$, and a first order phase transition
($c>2$), where $C(t)$ decays to zero at long times.
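The prediction of Eq. (\ref{eq:autocorrhomo}) can be checked by evaluating the integral in Eq. (\ref{eqn:OLM_Crit_AC}) numerically. The sketch below uses the model scaling function $g(x)=(1+x)^{-(1+c)/2}$, chosen only because it interpolates between the two asymptotic limits of Eq. (\ref{eq:asympsurvivh}); it is not the exact solution of Eq. (\ref{eqn:FPE}).

```python
import numpy as np

def autocorr_exponent(c, t1=1.0e3, t2=1.0e5, N=1.0e8):
    """Effective decay exponent of C(t) between t1 and t2, evaluating
    C(t) = int n^(1-c) g(t/n^2) dn / int n^(1-c) dn on a logarithmic
    grid, with the model scaling function g(x) = (1+x)^(-(1+c)/2).
    The prediction is 1 - c/2 for c > 2 and 0 for c <= 2."""
    u = np.linspace(0.0, np.log(N), 4000)   # u = ln(n)
    n = np.exp(u)

    def trap(f):                            # trapezoidal rule in u
        return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))

    def C(t):
        g = (1.0 + t / n**2) ** (-(1.0 + c) / 2.0)
        w = n ** (2.0 - c)                  # n^(1-c) dn = n^(2-c) du
        return trap(w * g) / trap(w)

    return np.log(C(t2) / C(t1)) / np.log(t2 / t1)
```

For $c=2.5$ and $c=3$ this gives exponents close to $-0.25$ and $-0.5$, while for $c=1.5$ the effective exponent is consistent with zero, i.e. $C(t)$ saturates, as expected for $c\leq 2$.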
In the above analysis it is assumed that the loop is at local
equilibrium at any given time. For this assumption to be valid, the
survival time of large loops has to be much longer than its
equilibration time. A typical survival time of a loop of length $n$
scales as $n^2$. On the other hand, simple models for the dynamics
of microscopic loop configurations, which are usually based on
diffusion processes, yield relaxation times which also scale as
$n^2$. Thus the two typical times scale in the same way with the
loop size, and it is not a priori clear that during the evolution of
the loop it is at local equilibrium. Note that off criticality the
loop size changes linearly in time and therefore the assumption of
local equilibrium is clearly not valid.
In the following we introduce and study a model for the loop
dynamics. This is done in two steps: First, we consider the simpler
case of $c=3/2$ discussed in \cite{Bar}. We then generalize this
approach to arbitrary values of $c$. We find strong evidence that
the local equilibrium assumption holds asymptotically within the
model.
\subsection{Microscopic Dynamical Model for $c=3/2$}
In this section we introduce and analyze a simple model of loop
dynamics corresponding to $c=3/2$. Within the model, the loop is
described by a fluctuating interface (or a string), interacting with
an attractive substrate in $d=1+1$ dimensions. Here the interface
height variable corresponds to the distance between complementary
bases. The interface configurations are those of a restricted solid
on solid (RSOS) model defined as follows (see Fig.
(\ref{fig:model})): Let $h_i=0,1,2 \ldots$ be the height of the
interface at site $i$. The heights satisfy $|h_i-h_{i+1}|= \pm 1$.
Consider a loop between sites $0$ and $n$ (where $n$ is even) as
shown in Fig. (\ref{fig:model}). Outside the loop the interface is
bound to the substrate so that $h_{-2k}=h_{n+2k}=0$ and
$h_{-2k-1}=h_{n+2k+1}=1$ for $k=0,1,\ldots$. Inside the loop the
heights $h_1...h_{n-1}$ can take any non-negative value which is
consistent with the RSOS conditions. For simplicity we allow only
one end of the loop to fluctuate while the other is held fixed. This
should not modify any of our results, since the dynamics of the two
ends of long loops are uncorrelated with each other. We consider a
random sequential dynamics in which the loop configuration and its
length are free to fluctuate. Thus the dynamical moves are as
follows:
\begin{equation} h_i \to h_i \pm 2 \;\;\; {\rm with \; rate \; 1 \; for \; sites \;\; } 1
\leq i \leq n-1, \label{eq:looprate}
\end{equation}
as long as the resulting heights are non-negative and the RSOS condition is satisfied. For $i=n$
the loop length is changed according to the rules
\begin{eqnarray}
n &\to& n+2 \;\;\; {\rm with \; rate} \;\;\; \overline{\alpha}/4
\nonumber \\
n &\to& n-2 \;\;\; {\rm with \; rate} \;\;\; \overline{\alpha} \;,
\label{eq:endrate}
\end{eqnarray}
where $n$ can decrease only if $h_{n-2}=0$. At the other end the
height is fixed, $h_0=0$. It is straightforward to verify that the
number of configurations of a loop of size $n$ is given by $2^n/n^c$
with $c=3/2$ for large $n$. This is a result of the fact that the
number of walks of length $n$ in $d=1+1$ dimensions is $2^n$ and the
probability of first return is $n^{-3/2}$. The ratio, $1/4$,
between the two length changing processes in Eq. (\ref{eq:endrate})
is chosen such that in the large $n$ limit the loop is not biased to
either grow or shrink. This corresponds to the model being at the
denaturation transition point, which is determined by equating the
free energies of the pinned segment and that of the open loop.
Combining this with detailed balance yields the ratio between the
rates. The parameter $\overline{\alpha}$ determines the rate of the
length changing processes: $\overline{\alpha}=0$ corresponds to the
dynamics of a loop of a fixed length. As $\overline{\alpha}$ is
increased the length changing processes become faster. In the
following subsection this model is generalized to include a power
law potential between the interface and the substrate. This will
allow us to study other values of $c$.
In a realization of this dynamics one of the $n+1$ attempts defined
above, Eqs. (\ref{eq:looprate}) and (\ref{eq:endrate}), is chosen at any given
time. Of these, $n-1$ are attempts to update the height at sites
$1,2, \ldots, n-1$. The other two are attempts to update the
position of the edge by a move either to the right or to the left.
One attempted move of the edge defines a Monte Carlo sweep.
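A minimal Monte Carlo sketch of the rules of Eqs. (\ref{eq:looprate}) and (\ref{eq:endrate}) is given below. For concreteness the edge rates are implemented as acceptance probabilities, so the sketch assumes $\overline{\alpha}\leq 1$; it illustrates the update rules and is not the code used for the figures.

```python
import random

def simulate_loop(n0, alpha=1.0, t_max=10000, seed=0):
    """Random-sequential dynamics of the RSOS loop: interior heights
    move by +-2 under the RSOS constraint |h_i - h_{i+1}| = 1, and the
    right edge grows (n -> n+2, rate alpha/4) or shrinks (n -> n-2,
    rate alpha, only if h_{n-2} = 0).  Returns the sweep at which the
    loop closes (n = 0), or None if it survives past t_max sweeps."""
    rng = random.Random(seed)
    n = n0
    h = [i % 2 for i in range(n + 1)]      # zig-zag initial condition
    for sweep in range(t_max):
        for _ in range(n + 1):             # n+1 attempts per sweep
            i = rng.randrange(n + 1)
            if i < n - 1:                  # interior move at site i+1
                j = i + 1
                hn = h[j] + rng.choice((-2, 2))
                if hn >= 0 and abs(hn - h[j - 1]) == 1 \
                           and abs(hn - h[j + 1]) == 1:
                    h[j] = hn
            elif i == n - 1:               # growth attempt, rate alpha/4
                if rng.random() < alpha / 4.0:
                    h.extend((1, 0))       # bound sites join the loop
                    n += 2
            else:                          # shrink attempt, rate alpha
                if rng.random() < alpha and h[n - 2] == 0:
                    del h[-2:]
                    n -= 2
                    if n == 0:
                        return sweep
    return None
```

Note that the shrink move is only accepted when the edge is in contact with the substrate, $h_{n-2}=0$; since the equilibrium contact probability tends to $1/4$ for large loops, the ratio $1/4$ of the attempt rates leaves the loop length unbiased, as discussed above.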
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{Model}
\caption{A typical microscopic configuration of the loop in the RSOS model. Dashed lines indicate
possible dynamical moves of the interface.} \label{fig:model}
\end{figure}
In order to test the validity of Eq. (\ref{eqn:FPE}) we compare its predictions with results
obtained from numerical simulations of the model above. In the numerical simulation we find good
data collapse, when plotted against $t/n_0^z$ with $z\gtrsim 2$, depending on the value of
$\overline{\alpha}$, rather than the expected Fokker-Planck value $z=2$. With these modified $z$
exponents the survival probability agrees well with the results obtained from the discrete version
of the Fokker-Planck equation. The results are summarized in Fig. (\ref{CollapseZ2p2}) where the
survival probability is plotted as a function of the scaling variable $t/n_0^{2.2}$ and
$t/n_0^{2.07}$ for $\overline {\alpha}=1$ and $\overline {\alpha}=0.1$ respectively, for several
values of the loop size $n_0$. The question is whether the discrepancy in the value of $z$ is a
result of a finite size effect or does it persist in the large $n_0$ limit. For the Fokker-Planck
equation to properly describe the system it is essential to show that $z$ approaches $2$ in the
large $n_0$ and $t$ limit.
\begin{figure}[h]
\centering
$\begin{array}{ll}
(a) & (b)\\
\includegraphics[scale=0.4]{CollapseZ2p2} &
\includegraphics[scale=0.4]{surva0p1z2p07}
\end{array}$
\caption{ Data collapse of the survival probability (averaged over
$4 \cdot 10^4$ realizations) for some values of $n_0$ with (a)
$\overline {\alpha}=1$ and $z=2.2$ , and (b) $\overline
{\alpha}=0.1$ and $z=2.07$. The line corresponds to a numerical
solution of Eq. (\ref{eqn:FPE}). } \label{CollapseZ2p2}
\end{figure}
In the following we argue that in fact the value $z=2.2$ in the case
of $\overline {\alpha}=1$ (and $z=2.07$ for $\overline
{\alpha}=0.1$) is a result of finite size effects. For large systems
the value $z=2$ is expected to be recovered. To check this point we
calculate numerically the variance of the loop size
\begin{equation}
w^2(t)=\langle (n(t)-\langle n(t) \rangle )^2 \rangle \;,
\label{eq:variancedef}
\end{equation}
where $\langle \cdot \rangle$ denotes an average over realizations
of the dynamics. In order to evaluate the temporal growth of
$w^2(t)$ we define a variable $\sigma_+(t)$ which takes the value
$1$ if the length of the loop increases at time $t$ and $0$
otherwise. Similarly, we define $\sigma_-(t)$ and $\sigma_0(t)$ for
steps which decrease the loop size and steps in which the loop size
does not change, respectively. Clearly
$\sigma_+(t)+\sigma_-(t)+\sigma_0(t)=1$. The dynamics of the chain, Eq.
(\ref{eq:endrate}), implies that in the limit of large $n_0$ one has
\begin{equation}
\langle \sigma_+(t) \rangle=\langle \sigma_-(t) \rangle = \alpha/8
\;\; ; \;\; \langle \sigma_0(t) \rangle=1-\alpha/4 \;,
\label{eq:moveprob}
\end{equation}
where $\alpha=\overline{\alpha}/\max \{ 1, \overline{\alpha} \}$
in accordance with the random sequential dynamics. Denoting $U(t)
\equiv \sigma_+(t) - \sigma_-(t)$, it is easy to see that
\begin{eqnarray}
\frac{\Delta w^2(t)}{\Delta t} & \equiv & w^2(t)-w^2(t-1)
\nonumber \\
&=& 4\langle U(t)^2 \rangle +8 \sum_{\tau=1}^{t-1} \langle U(\tau)
U(t) \rangle \;, \label{eq:defvariancediff}
\end{eqnarray}
where
\begin{eqnarray}
\langle U(\tau) U(t) \rangle &=& \langle \sigma_+(\tau) \sigma_+(t) \rangle+\langle
\sigma_-(\tau)
\sigma_-(t) \rangle \nonumber \\
&-&\langle \sigma_-(\tau) \sigma_+(t) \rangle -\langle
\sigma_+(\tau) \sigma_-(t) \rangle \;. \label{eq:sigmaSrelation}
\end{eqnarray}
It is evident that a loop increasing step at time $t$
($\sigma_+(t)=1$) is uncorrelated with steps which took place at
time $\tau<t$. Thus $\langle \sigma_+(\tau) \sigma_+(t) \rangle =
\langle \sigma_-(\tau) \sigma_+(t) \rangle = \alpha^2/64$.
Numerically we find $\langle \sigma_-(\tau) \sigma_-(t)
\rangle=\alpha^2/64$ (see Fig. (\ref{Correlations})). Using these
results we finally obtain
\begin{equation}
\frac{\Delta w^2(t)}{\Delta t}= \alpha - 8 \sum_{\tau=1}^{t-1}
\left[ \langle \sigma_+(\tau) \sigma_-(t) \rangle_c \right] \;,
\label{eq:varfinal}
\end{equation}
with $ \langle \sigma_+(\tau) \sigma_-(t) \rangle_c \equiv \langle
\sigma_+(\tau) \sigma_-(t) \rangle - \alpha^2/64$. Numerical
simulations of the dynamics show strong correlation between
$\sigma_+(\tau)$ and $\sigma_-(t)$ with an algebraic decay in
$t-\tau$ (see Fig. (\ref{Correlations})). It is interesting to note
that the dynamics of the chain induces such long range temporal
correlations between steps of the edge mediated by the loop
dynamics.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{Correlations.eps}
\caption{Correlation functions of the $\sigma$ variables as obtained
by averaging over $1.9 \cdot 10^5$ realizations, for $n_0$=4000.}
\label{Correlations}
\end{figure}
By extrapolating the sum on the right hand side of Eq.
(\ref{eq:varfinal}) using the asymptotic form $B(t-\tau)^{-\gamma}$ with
$B\approx 0.015$ and $\gamma\approx 1.2$, deduced from Fig.
(\ref{Correlations}), we find that the sum converges to a non-zero
value. This is demonstrated in Fig. (\ref{fig:DiffusionCoef}) for
$\overline {\alpha}=1$ and $\overline {\alpha}=0.1$. For example, in
the case $\overline {\alpha}=1$ the sum converges to $\approx
0.84<\alpha=1$ indicating that $w^2(t)\approx 0.16t$ at large $t$,
which in turn yields $z=2$. The slow power-law convergence towards
the asymptotic value implies that it may require large systems to
observe the long time behavior of Eq. (\ref{eq:autocorrhomo}).
\begin{figure}[h]
\centering
$\begin{array}{ll}
(a) & (b)\\
\includegraphics[scale=0.75]{DiffusionCoef} &
\includegraphics[scale=0.75]{DiffusionCoef_a__0_1}
\end{array}$
\caption{The diffusion coefficient $D\equiv \Delta w^2(t)/\Delta t$ , as calculated by
Eq. (\ref{eq:varfinal}), averaged over $270,000$ runs with $n_0=2000$ for (a) $\overline{\alpha}=1$ and (b) $\overline{\alpha}=0.1$. The slow decay of $D$
can be easily observed. }
\label{fig:DiffusionCoef}
\end{figure}
\subsection{Microscopic Dynamical Model for Arbitrary $c$}
In this section we generalize the model of the previous section to
consider the case of arbitrary $c$. This can be done within a
$d=1+1$ dimensional model by introducing a repulsive interaction
between the substrate and the interface. Taking an interaction of
the form $A/h^2$, where $h$ is the distance between the interface and
the substrate and $A$ is a constant, results in an equilibrium
weight of a loop of the form $1/n^c$. The exponent $c$ is related to
the interaction strength $A$ \cite{Lipowsky2,Lipowsky}.
In order to derive the relation between $A$ and $c$ one notes that
at the critical point the distribution of the distance between the
interface and the substrate decays algebraically at large distances,
$Q(h) \sim 1/h^\kappa$. It has been shown that for an interface
model for which self-avoiding interactions play no role, the
exponent $\kappa$ is related to the loop exponent $c$ by
\cite{Baiesi2002}
\begin{equation}
c=(\kappa+3)/2.
\label{eqn:ckapparelation}
\end{equation}
We proceed by introducing a specific model and evaluate $\kappa$ in
terms of the interaction parameter $A$. One then obtains the loop
exponent $c$ from Eq. (\ref{eqn:ckapparelation}). We consider an
RSOS interface model with the Hamiltonian
\begin{equation}
H(h_1,h_2,...,h_n) = \sum_{i}\left[{-\varepsilon\delta_{h_i,0} +
\frac{A}{h_i^2}(1-\delta_{h_i,0})}\right] \;,
\label{eqn:arbchamiltonian}
\end{equation}
where as before, $h_i=0,1,2...$, and $h_i-h_{i+1}=\pm 1$. In this
Hamiltonian $\varepsilon>0$ represents the binding energy between
the substrate and the interface. To evaluate $Q(h)$ we write down
the eigenvalue equation of the transfer matrix corresponding to the
Hamiltonian Eq. (\ref{eqn:arbchamiltonian}). For $h>1$ the equation
is
\begin{equation}
e^{-\beta A/h^2}\Psi_{h-1} + e^{-\beta A/h^2}\Psi_{h+1} =
\lambda\Psi_h \;.\label{eqn:arbceigenvalues}
\end{equation}
Here $\lambda$ is the eigenvalue and $\Psi_h$ are the components of
the eigenvector. The distance distribution is given by $Q(h)\propto
\Psi_h^2$. At criticality the eigenvector component, at large $h$,
has a form $\Psi(h)=1/h^{\kappa/2}$. By using this form in Eq.
(\ref{eqn:arbceigenvalues}) we find the relation
\begin{equation}
\beta A=\frac{1}{8}\kappa(\kappa+2) \;.
\end{equation}
Combining this with Eq. (\ref{eqn:ckapparelation}) yields
\begin{equation}
\beta A=\frac{1}{8}(2c-3)(2c-1) \;.
\end{equation}
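This relation can be verified by inserting the ansatz $\Psi_h = h^{-\kappa/2}$ into Eq. (\ref{eqn:arbceigenvalues}) with $\lambda=2$ and expanding in $1/h$: the residual, scaled by $h^{\kappa/2+2}$, tends to $2\left(\kappa(\kappa+2)/8-\beta A\right)$ and hence vanishes precisely when $\beta A=\kappa(\kappa+2)/8$. A small numerical sketch of this check:

```python
import math

def scaled_residual(kappa, betaA, h=1.0e4):
    """h^(kappa/2 + 2) times the residual of the ansatz
    Psi_h = h^(-kappa/2) in the bulk eigenvalue equation
    exp(-betaA/h^2) (Psi_{h-1} + Psi_{h+1}) = 2 Psi_h; it tends to
    2*(kappa*(kappa+2)/8 - betaA) as h grows."""
    psi = lambda x: x ** (-kappa / 2.0)
    res = (math.exp(-betaA / h**2) * (psi(h - 1.0) + psi(h + 1.0))
           - 2.0 * psi(h))
    return h ** (kappa / 2.0 + 2.0) * res
```

As a consistency check, $c=2.12$ corresponds to $\kappa=2c-3\approx 1.24$, for which the relation gives $\beta A\approx 0.5$, and $c=1.74$ gives $\beta A\approx 0.15$, matching the values of $A$ used in the simulations below.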
We now use the model, Eq. (\ref{eqn:arbchamiltonian}), to study
numerically the dynamics of a loop with $c \neq 3/2$. The dynamics
of the model is similar to that introduced in Sec. 2.2, but with the
transition rates of Eq. (\ref{eq:looprate}) modified according to
the Hamiltonian Eq. (\ref{eqn:arbchamiltonian}). Namely, the
updating rates are given by
\begin{eqnarray}
h_i \to h_i+2 \;\;\; {\rm with \; rate \; 1} \nonumber \\
h_i \to h_i-2 \;\;\; {\rm with \; rate \;} e^{-\beta A \left( (h_i-2)^{-2} -h_i^{-2} \right)} \;,
\label{eq:arbclooprate}
\end{eqnarray}
as long as the resulting heights are non-negative and the RSOS
condition is satisfied. The presence of the long-range interactions
also changes the ratio between the rates by which the loop grows
($R(n\to n+2)$) and shrinks ($R(n+2\to n)$). This ratio is given by
\begin{equation}
\frac{R(n\to n+2)}{R(n+2\to n)} = e^{-\beta \varepsilon} \;.
\label{eq:arbcratesratio}
\end{equation}
The critical temperature is found by equating the free energy of the loop with that of the bound
segment. This yields
\begin{equation} 4 =
e^{-\beta (A-\varepsilon)} \;,
\end{equation}
where $A-\varepsilon$ is the energy of a pair of sites in the bound
segment and $4$ is the statistical weight of a pair of sites in the
open loop. Combining this with Eq. (\ref{eq:arbcratesratio}) gives
\begin{eqnarray}
n \to n+2 &\;\;\;\;\;\;& {\rm with \; rate} \;\;\; \overline{\alpha}e^{-\beta A}/4
\nonumber \\
n \to n-2 && {\rm with \; rate} \;\;\; \overline{\alpha} \;. \label{eq:arbcendrate}
\end{eqnarray}
We have simulated the dynamics of Eqs. (\ref{eq:arbclooprate}) and (\ref{eq:arbcendrate}) for
$A=0.15$ and $A=0.5$. These values of $A$ correspond to $c\approx 1.74 \; (<2)$ and $c\approx 2.12
\; (>2)$ respectively. In Fig. (\ref{fig:arbcACrelation}) we present the loop size distribution for
these two values of the parameter $A$. The resulting $c$ values fit well with the predictions.
\begin{figure}[h]
\centering
\includegraphics[scale=0.75]{A_c_relations}
\caption{The loop size distribution for $A=0.15$ and $A=0.5$ as measured in
numerical simulations, together with the theoretical exponents $c\approx 1.74$ and $c\approx 2.12$,
respectively. The theoretical curves fit the measured data well.}
\label{fig:arbcACrelation}
\end{figure}
In studying the survival probability of a loop we follow the same approach which was applied in the
previous section for $c=3/2$. Similar results were obtained for the case of $c>3/2$. In Fig.
(\ref{fig:arbcSurvCollapse}) we present the survival probability as obtained from numerical
simulations of the model. We find good data collapse, but again with a modified exponent $z=2.2$
for $\overline{\alpha}=1$. The scaling function fits well with that obtained from a numerical
integration of a discrete version of Eq. (\ref{eqn:FPE}).
We have also calculated the step-step autocorrelation function as
for the case $c=3/2$ and found similar results. In particular, we
find that the exponent $\gamma$ seems to have a weak dependence on
$A$, with $\gamma\sim 1.4$ for both $A=0.15$ and $A=0.5$ (Figures
not shown).
\begin{figure}[h]
\centering
$\begin{array}{ll}
(a) & (b)\\
\includegraphics[scale=0.4]{survc1p74z2p2} &
\includegraphics[scale=0.4]{survc2p12z2p2}
\end{array}$
\caption{ Data collapse of the survival probability (averaged over
$4 \cdot 10^5$ realizations) for some values of $n_0$ with $z=2.2$,
with (a) $c=1.74$ ($A=0.15$) and (b) $c=2.12$ ($A=0.5$). The line
corresponds to a numerical solution of a discrete version of Eq.
(\ref{eqn:FPE}) with the corresponding values of $c$.}
\label{fig:arbcSurvCollapse}
\end{figure}
\section{Many loops model}
\label{secManyLoop} In the previous section we analyzed the dynamics
of a single loop. We found that it is well described by the
Fokker-Planck Eq. (\ref{eqn:FPE}) for asymptotically large loops. In
the present section we extend this model to consider interaction
between loops. This is done by considering a chain composed of an
alternating series of loops and bound segments. Each loop and bound
segment is characterized only by its length. In contrast to the study
of the dynamics of a single loop, here no
internal degrees of freedom are associated with a loop. Within this model loops
evolve by growing, shrinking, splitting, merging, together with
creation and annihilation processes. The rates of the various
processes are chosen so that the system evolves to the equilibrium
loop length distribution at large times. While the choice of rates
is not unique they are taken to be compatible with the single loop
dynamics whenever applicable. A similar approach has recently been
applied to study dynamical features such as the approach to
equilibrium near the denaturation transition \cite{Livi}. From this
analysis we extract the behavior of the autocorrelation function of
a base-pair inside a dsDNA where many interacting loops coexist.
\subsection {Definition of the Model}
The DNA configurations can be represented by an alternating sequence
of bound base-pairs and loops. We denote by $[k]$ a bound segment
with length $k$ and $(l)$ a loop of length $l$, with $k,l>0$. A
given configuration of the DNA is thus represented by
$[k_1](l_1)[k_2](l_2) \ldots$. In terms of these variables the
dynamics of the model is defined by the following rates:
\begin{itemize}
\item Motion of a loop edge. This corresponds to the same processes
which were considered in the dynamics of an isolated loop in the
previous section.
\begin{equation}
\begin{array}{lll}
\; [k](l) \to [k-1](l+1) &\;\;\;\;\;\;\;\;& {\rm \;with\; rate}\;\;\; \left(\frac{l}{l+1}\right)^c \nonumber \\
\; [k-1](l+1) \to [k](l) && {\rm \;with\; rate}\;\;\; 1 \nonumber \\
\; (l)[k] \to (l+1)[k-1] && {\rm \;with\; rate}\;\;\; \left(\frac{l}{l+1}\right)^c \\
\; (l+1)[k-1] \to (l)[k] && {\rm \;with\; rate}\;\;\; 1 \nonumber
\end{array}
\label{eqn:mlloopsizerates}
\end{equation}
\end{itemize}
These processes are executed as long as the lengths of the resulting
loops and bound segments are non-zero.
\begin{itemize}
\item Splitting and merging of loops
\begin{equation}
\begin{array}{lll}
\; (l_1+l_2+1) \to (l_1)[1](l_2) &\;\;\;\;\;\;\;\;& {\rm \;with\; rate}\;\;\;
\frac{\sigma_0}{\zeta(c)} \left(\frac{l_1+l_2+1}{l_1l_2}\right)^c \\
\; (l_1)[1](l_2)\to (l_1+l_2+1) && {\rm \;with\; rate}\;\;\; 1
\end{array}
\label{eqn:mlsplitmergerates}
\end{equation}
\end{itemize}
In addition we consider creation and annihilation of loops.
\begin{itemize}
\item Creation and annihilation of loops
\begin{equation}
\begin{array}{lll}
\; [k_1+k_2+1] \to [k_1](1)[k_2] &\;\;\;\;\;\;\;\;& {\rm \;with\; rate}\;\;\; \frac{\sigma_0}{\zeta(c)(1-\sigma_0)}\nonumber \\
\; [k_1](1)[k_2]\to [k_1+k_2+1] && {\rm \;with\; rate}\;\;\; 1
\end{array}
\label{eqn:manyloopsrates}
\end{equation}
\end{itemize}
Here $\sigma_0$ is the cooperativity parameter, and
$\zeta(c)=\sum_{n=1}^{\infty}n^{-c}$. It is straightforward to
verify that the choice of rates satisfies detailed balance with
respect to the equilibrium weight for the loop sizes at criticality
$P(n)= \sigma_0 \frac{n^{-c}}{\zeta(c)}$.
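This detailed-balance property of the loop-edge rates in Eq. (\ref{eqn:mlloopsizerates}) is easy to verify directly in code: the ratio of forward to backward rates must equal the ratio of equilibrium weights $P(l+1)/P(l)=\left(\frac{l}{l+1}\right)^c$. The following Python sketch is our own illustration (the function names are not from any published simulation code):

```python
def edge_rates(l, c):
    """Rates for the moves (l) -> (l+1) and (l+1) -> (l), Eq. (mlloopsizerates)."""
    forward = (l / (l + 1.0)) ** c   # loop grows by one unit
    backward = 1.0                   # reverse move
    return forward, backward

def check_detailed_balance(c, l_max=100, tol=1e-12):
    """Verify that the rate ratio equals P(l+1)/P(l) for P(n) ~ n^{-c}."""
    for l in range(1, l_max):
        forward, backward = edge_rates(l, c)
        weight_ratio = ((l + 1) / l) ** (-c)   # P(l+1)/P(l)
        if abs(forward / backward - weight_ratio) > tol:
            return False
    return True
```

The same check applies to the splitting/merging and creation/annihilation rates, with the appropriate equilibrium weights.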
\subsection{Numerical Simulation}
To check that indeed interactions between loops do not modify the
asymptotic behavior of the autocorrelation function $C(t)$ we
simulate the model, Eqs.
(\ref{eqn:mlloopsizerates})--(\ref{eqn:manyloopsrates}). We use the
experimentally relevant value $\sigma_0=10^{-4}$ \cite{carlon2} and
consider a DNA length of $100,000$ base-pairs. The autocorrelation is evaluated
by monitoring the state of $1000$ base-pairs uniformly distributed
within the DNA. Fig. (\ref{fig:ManyLoopsRes1}) shows the results for
various values of $c$ along with the theoretically expected slopes.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{ManyLoopsRes1}
\caption{Normalized autocorrelation functions as measured in the simulation of the many loops
model, for $\sigma_0=0.0001$, $L=100000$ and 50000 repetitions. The
thin lines indicate the expected behavior of the autocorrelations for the appropriate
values of $c$.}
\label{fig:ManyLoopsRes1}
\end{figure}
While the results for large values of $c$ agree well with the
theory, there is a systematic deviation from the predicted slopes
for smaller values of $c$ close to 2. These deviations could be
attributed to the finite length of the simulated system. For example
it is clear that for $c<2$ the autocorrelation function of a finite
system decays to zero at long times rather than remaining constant.
This is due to the fact that there is an upper cutoff on the loop
size available. Only for an infinite system is $C(t)$ expected to
remain constant ($=1$) at long times. In order to check this point
we introduce an upper cutoff $N_{max}$ to the loop size in the
equation for the autocorrelation function
\begin{equation}
C(t) \approx \frac{\sum_{n_0=1}^{N_{max}}P_{eq}(n_0)n_0 G(n_0,t)
}{\sum_{n_0=1}^{N_{max}}P_{eq}(n_0)n_0} \;.\label{eqn:corrcutoff}
\end{equation}
The loop size $N_{max}$ is chosen so that it appears roughly once
during a run. For runs which are not too long it can be
estimated using $\sigma_0 L R P(N_{max})=1$, where $L$ is the system size
and $R$ is the number of Monte-Carlo repetitions performed. In Fig.
(\ref{fig:ManyLoopsRes2}) the results of the simulations are
compared with the theoretical expression, Eq. (\ref{eqn:corrcutoff}),
which is summed numerically.
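The cutoff condition $\sigma_0 L R P(N_{max})=1$ is straightforward to invert numerically. The sketch below is a minimal illustration assuming $P(n)=\sigma_0 n^{-c}/\zeta(c)$ as given above; the helper names and the direct-summation evaluation of $\zeta(c)$ are our own choices:

```python
def zeta(c, n_terms=200000):
    """Riemann zeta function by direct summation (adequate for c well above 1)."""
    return sum(n ** -c for n in range(1, n_terms + 1))

def loop_cutoff(sigma0, L, R, c):
    """Solve sigma0 * L * R * P(N) = 1 with P(n) = sigma0 * n^{-c} / zeta(c).

    Returns the (real-valued) cutoff loop size N_max."""
    z = zeta(c)
    return (sigma0 ** 2 * L * R / z) ** (1.0 / c)
```

In practice the returned value would be rounded to an integer loop size before being used as the upper limit of the sum in Eq. (\ref{eqn:corrcutoff}).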
\begin{figure}[h]
\centering
\includegraphics[scale=1]{C_with_cutoff}
\caption{Same as Fig. (\ref{fig:ManyLoopsRes1}), compared with the numerical calculation of the sum with a cutoff, Eq. (\ref{eqn:corrcutoff}).}
\label{fig:ManyLoopsRes2}
\end{figure}
In summary, we find that our scaling predictions are generally
confirmed by the numerical simulations of the many loops model.
However, for values of $c$ close to $2$, deviations are found. These
seem to be related to finite-size effects.
\section{Conclusions }
In this paper the dynamics of loops at the denaturation transition was studied both within
a single loop model and a many loop approach. In particular, special care was given to the
applicability of the Fokker-Planck equation. It was shown that the long-time decay of the
autocorrelation function of the state of complementary bases (closed or open) is sensitive to the
value of the loop exponent. In particular, for $c<2$ it remains finite while for $c>2$ it decays as
$t^{1-c/2}$.
Throughout the paper we have considered homopolymers where the
binding energy between different base-pairs is constant. In typical
DNA molecules the binding energy is not homogeneous. While a
preliminary treatment of the effects of disorder was given in
\cite{Bar}, it remains an important and interesting question.
{\bf Acknowledgments:} The support of the Israeli Science Foundation
(ISF) and the Albert Einstein Minerva Center for Theoretical Physics
is gratefully acknowledged. YK also acknowledges support by the
US-Israel Binational Science Foundation (BSF).
\section{Appendix I: Asymptotic behavior of the return probability}
In this appendix we derive the asymptotic behavior of the survival probability corresponding to the
Fokker-Planck equation
\begin{equation}
\frac{dP(n,t)}{dt} = D \ddn{} \left[\frac{c}{n} + \ddn{}\right]P(n,t)
\end{equation}
with the boundary conditions
\begin{equation}
P(0,t)=0\;\;\; ; \;\;\; P(\infty,t)=0 \;\;\; ; \;\;\; P(n,0)=\delta(n-n_0) \;.
\label{eqn:OLM_BC}
\end{equation}
To do so, we first perform a Laplace Transform
\begin{equation}
\overline{P}(n,s)=\int_0^\infty e^{-st}P(n,t)dt
\end{equation}
to obtain
\begin{equation}
s\overline{P}(n,s)-\delta(n-n_0) = D\ddn{}\left[\frac{c}{n} + \ddn{}\right]\overline{P}(n,s) \;.
\label{eqn:OLM_FP_LT}
\end{equation}
Integrating over a small interval around $n_0$ yields
\begin{equation}
\left.\partial_n \overline{P}_<(n)\right|_{n=n_0} - \left.\partial_n \overline{P}_>(n)\right|_{n=n_0} =
\frac{1}{D}
\label{eqn:OLM_EQUATING_N0}
\end{equation}
where $\overline{P}_<(n)$ and $\overline{P}_>(n)$ are the solutions of Eq. (\ref{eqn:OLM_FP_LT}) for $n<n_0$ and
$n>n_0$ respectively. By defining $x=\sqrt{s/D}\,n$ and $\overline{P}(n,s)=(Ds)^{-\frac{1}{2}}f(\sqrt{s/D}\,n)$, Eq. (\ref{eqn:OLM_FP_LT}) becomes:
\begin{equation}
f''(x) + \frac{c}{x}f'(x) - \left(1+\frac{c}{x^2}\right)f(x) = 0
\label{eqn:OLM_SingleVar}
\end{equation}
which has the solution
\begin{equation}
f(x) = A x^{\frac{1-c}{2}}I_{\frac{1+c}{2}}(x) + B
x^{\frac{1-c}{2}}K_{\frac{1+c}{2}}(x) \;.
\end{equation}
Here $I_{\nu}$ and $K_{\nu}$ are modified Bessel functions of the
first and second kind \cite{Abramowitz}. Using their asymptotic
behavior and the boundary conditions (\ref{eqn:OLM_BC}) we find
$B=0$ for $x<x_0=\sqrt{s/D}\,n_0$ and $A=0$ for $x>x_0$. Denoting $x_< =
\min(x,x_0)$ and $x_> = \max(x,x_0)$ and using
Eq.(\ref{eqn:OLM_EQUATING_N0}) the Laplace Transform of the loop
size distribution is given by
\begin{equation}
\overline{P}(s,x) = \frac{\left(\frac{x}{x_{0}}\right)^{(1-c)/2}I_{(1+c)/2}(x_{<})K_{(1+c)/2}(x_{>})}
{\sqrt{Ds}\left(I'_{(1+c)/2}(x_{0})K_{(1+c)/2}(x_{0}) - I_{(1+c)/2}(x_{0})K'_{(1+c)/2}(x_{0})\right)}
\end{equation}
Using standard methods \cite{redner} we integrate this expression to find the Laplace transform of
the survival probability
\begin{eqnarray}
\overline{G}(s,x_0) &=& \int_0^{\infty}\overline{P}(s,x)\sqrt{D/s}\;\;\;dx = \nonumber\\
&&\frac{K_{\frac{1+c}{2}}(x_0)\left(I_{\frac{c-1}{2}}(x_0) - \frac{(\frac{1}{2} x_{0})^{\frac{c-1}{2}}}{\Gamma((1+c)/2)}\right) + I_{\frac{1+c}{2}}(x_{0})K_{\frac{1-c}{2}}(x_{0}) }
{s\left(I'_{\frac{1+c}{2}}(x_{0})K_{\frac{1+c}{2}}(x_{0}) - I_{\frac{1+c}{2}}(x_{0})K'_{\frac{1+c}{2}}(x_{0})\right)} \;.
\label{eqn:OLM_Surv_Anal}
\end{eqnarray}
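Equation (\ref{eqn:OLM_Surv_Anal}) can be evaluated directly with standard Bessel-function routines. The following sketch is our own illustration using SciPy (not code from this work); it also relies on the Wronskian identity $I'_\nu(x)K_\nu(x)-I_\nu(x)K'_\nu(x)=1/x$, which simplifies the denominator:

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp, gamma

def survival_laplace(s, n0, c, D=1.0):
    """Laplace-transformed survival probability, Eq. (OLM_Surv_Anal)."""
    x0 = np.sqrt(s / D) * n0
    nu = (1.0 + c) / 2.0
    num = (kv(nu, x0) * (iv((c - 1.0) / 2.0, x0)
                         - (0.5 * x0) ** ((c - 1.0) / 2.0) / gamma(nu))
           + iv(nu, x0) * kv((1.0 - c) / 2.0, x0))
    # Wronskian in the denominator: I'_nu(x0) K_nu(x0) - I_nu(x0) K'_nu(x0) = 1/x0
    den = s * (ivp(nu, x0) * kv(nu, x0) - iv(nu, x0) * kvp(nu, x0))
    return num / den
```

As a consistency check, for $c>1$ the $s\to 0$ limit of this expression reproduces the constant term $n_0^2/[2D(c-1)]$ obtained in the small-$s$ expansion below.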
The asymptotic behavior of
$\overline{G}(s,n_0=\sqrt{D/s}x_0)$ for long and short times can be extracted from the behavior of the Bessel functions.
For small $s$ Eq. (\ref{eqn:OLM_Surv_Anal}) turns into
\begin{equation}
\overline{G}(s,n_0)\approx \left\{\begin{array}{ll}
\Phi(c) s^{\frac{c-1}{2}} & c\leq1 \\
\frac{n_0^2}{2D(c-1)} + \Phi(c) s^{\frac{c-1}{2}} &
c> 1
\end{array} \right.
\end{equation}
where $\Phi(c)$ is a constant which depends on $c$. From this we can extract the asymptotic form
of the survival probability for long times: $G(n_0,t\gg n_0^2/D)=g(\xi=Dt/n_0^2\gg 1)\sim
\xi^{-\frac{1+c}{2}}$. The behavior for short times can be obtained in a similar fashion, yielding $g(\xi\ll 1)\approx 1$. In sum, we find that the survival probability for a loop of
initial size $n_0$ has the scaling form
\begin{equation}
G(n_0,t) = g\left(\frac{Dt}{n_0^2}\right) \;,
\label{eqn:OLM_Crit_Surv_Scal}
\end{equation}
with the asymptotic behavior
\begin{eqnarray}
g(\xi\gg 1)\sim \xi^{-\frac{1+c}{2}} \nonumber \\
g(\xi\ll 1)\sim 1 \;.
\label{eqn:OLM_Crit_Surv_Long}
\end{eqnarray}
\section{Introduction}
\CC\ is an important molecule in the fields of astronomy, combustion science and materials science. It has often been observed in comets~\citep{1968Mayer-a, 1976Jackson-a, 1983Lambert-a, 1983Johnson-a, 1997Sorkhabi-a, 2003Kaiser-a} and in other astronomical environments such as interstellar clouds~\cite{1977Souza-a,1978Chaffee-a,1981Green-a,1982Hobbs-a,1989Federman-a,2010Kazmierczak-a,2012Casu-a}, late-type stars~\citep{1970Vardya-a,1971Querci-a,2000Klochkova-a,2012Hema-a} and the Sun~\citep{1973Grevesse-a,1982Brault-a}. Its reactions are believed to be involved in the formation of hydrocarbons and other organic compounds in interstellar clouds~\citep{2002Kaiser-a}. \CC\ has also been found in flames~\citep{1957Gaydon-Book-a,1965Bleekrode-a,1977Baronavski-a} and from the irradiation of soot~\citep{2010Goulay-a}, and can be formed in carbon plasmas and used to make carbon nanostructures~\citep{2011Nemes-Book-a}.
The most prominent electronic system in the visible region is the Swan system, which involves the electronic transition \upperstate-\lowerstate, with the (0,0) band near 19400 \wavenumbers. The \lowerstate\ state was originally believed to be the ground state~\citep{1969Herzberg-a} as it is easily excited, lying only 1536 \wavenumbers\ above the actual X\super{1}$\Sigma^+_g$ ground state.
The Swan system has been investigated extensively. Early vibrational band intensity analyses include those of King~\citep{1948King-a}, Phillips~\citep{1957Phillips-a} and Hagan~\citep{1963Hagan-Report-a}. In 1965, Mentall and Nicholls~\citep{1965Mentall-a} reanalysed the data of three previous works to provide an updated list of absolute band strengths, oscillator strengths and Einstein \textit{A} values for most vibrational bands up to {\em{v\primed}}=4. A full review of previous work was given in 1967 by Tyte \emph{et al.}~\citep{1967Tyte-Report-a}. In 1968, Phillips and Davis~\citep{1968Phillips-Book-a} combined earlier published data with their most recent rotational analysis. They calculated spectroscopic constants for the Swan system, and published a full rotational line list including relative intensities. Danylewych and Nicholls published a list of absolute band strengths, oscillator strengths and Einstein \textit{A} values covering most vibrational bands of up to {\em{v\primed}}=9, with $\Delta${\em{v}} $\leq$ 4~\citep{1974Danylewych-a}. The properties of \CC\ were extensively reviewed by Huber and Herzberg in 1979~\citep{1979Huber-Book-a}.
As new experimental techniques have become available, new studies of the lower vibrational bands have been conducted at high resolution. These include the work of Amiot~\citep{1983Amiot-a}, Curtis and Sarre~\citep{1985Curtis-a} and Suzuki \emph{et al.}~\citep{1985Suzuki-a}, who investigated the (0-0), (0-1) and (1-0) bands, respectively, using laser excitation techniques. Dhumwad \emph{et al.}~\citep{1981Dhumwad-a} observed the Swan system using a quartz discharge tube with tungsten electrodes for the excitation of CO. In 1994, Prasad and Bernath~\citep{1994Prasad-a} analysed nine low vibrational bands ({\em{v\primed}} $\leq$ 3 and {\em{v\primed\primed}} $\leq$ 4) of the Swan system of jet-cooled \CC\ (for low-J), and of \CC\ produced in a composite wall hollow cathode (for \textit{J} up to 25-46) with a Fourier Transform Spectrometer (FTS). The previous observations of Amiot \etal\ and Prasad and Bernath on the (0,0) band were improved upon by Lloyd and Ewart in 1999~\citep{1999Lloyd-a}, using degenerate four-wave mixing spectroscopy. These and other investigations have improved the accuracy of the line assignments originally published by Phillips and Davis in 1968~\citep{1968Phillips-Book-a}.
The higher vibrational bands had not been analyzed with modern high resolution instrumentation until 2002, when Tanabashi and Amano~\citep{2002Tanabashi-a} observed the Swan system by a direct absorption technique using a tunable dye laser. They measured three bands, assigned as (7,9), (6,8) and (5,7). They found that their line positions did not agree with those reported by Phillips and Davis~\citep{1968Phillips-Book-a}, which was the most recent rotational analysis of the high vibrational bands. These discrepancies led to the reanalysis of the entire Swan system with a high resolution FTS~\citep{2007Tanabashi-a}. The assigned line positions from this new comprehensive analysis agreed with their previous one for the (7,9), (6,8) and (5,7) bands. Their line positions for bands involving the higher vibrational levels differed significantly from those of Phillips and Davis~\citep{1968Phillips-Book-a}.
The old rotational line list reported by Phillips and Davis in 1968~\citep{1968Phillips-Book-a} has recently been used in deriving the carbon abundance and \super{12}C/\super{13}C ratio in R Coronae Borealis and hydrogen-deficient carbon stars~\citep{2012Hema-a}, and in comets~\citep{2012Rousselot-a}. Line positions and intensities must be known to derive \CC\ abundance from observed spectra, and it would be beneficial to have a new rotational line list, based on recent measurements and calculations. The purpose of this work is to use mainly the data of Tanabashi \etal\ \citep{2007Tanabashi-a} to calculate theoretical line intensities, and publish an extensive line list. Similar work has recently been carried out for the E$^2\Pi$-X$^2\Sigma^+$ transitions of CaH by Li \etal\ \citep{2012-Li-a}.
Tanabashi \emph{et al.} assigned around 5700 observed rotational lines, for 34 vibrational bands belonging to the $\Delta$\textit{v} = -3 to +2 sequences. Transitions up to between \textit{J}=30 and 80 were assigned. Perturbations were found in the \upperstate\ state for \emph{v} = 0, 1, 2, 4, 6, 8, 9 and 10, and for \emph{v} = 4, 6 and 9 they affected almost all of the observed lines. They calculated molecular constants (Tables 3 and 4 in ref~\citep{2007Tanabashi-a}) for both electronic states.
Deperturbation studies of the \upperstate\ \textit{v}\primed=4~\citep{2010Bornhauser-a} and \textit{v}\primed=6~\citep{2011Bornhauser-a} states were performed by Bornhauser \textit{et al.} in 2010 and 2011, respectively, using double-resonant four-wave mixing spectroscopy. This enabled them to assign lines unambiguously, and calculate perturbation constants (Table \ref{TABperts}) and molecular constants of the interacting \bSigma\ (\textit{v}=16 and \textit{v}=19) and \OnePi\ states. They also gave a list of the few transitions that were assigned incorrectly by Tanabashi \etal.
\section{Recalculation of Molecular Constants}
With the recent publication of perturbation constants for the \upperstate\ \textit{v}=4 and \textit{v}=6 levels by Bornhauser \etal\ (Table \ref{TABperts}), there was an opportunity to improve the molecular constants reported by Tanabashi \etal\ (Tables 3 and 4 in ref~\citep{2007Tanabashi-a}). In their calculation of molecular constants in 2007, Tanabashi \etal\ included lines from several other studies. For nine bands up to (3,4), gaps in observations were filled in by using lines from Prasad and Bernath~\citep{1994Prasad-a}. High resolution measurements of the (0,1) band by Curtis and Sarre~\citep{1985Curtis-a} were included, as were cross transitions ($\Delta\Omega\neq$0) from Suzuki \etal\ \citep{1985Suzuki-a} for the (1,0) band. Some cross transitions were also observed by Curtis and Sarre, and these are particularly useful for the accurate calculation of the spin-orbit coupling and $\Lambda$-doubling constants. All lines from Tanabashi and Amano~\citep{2002Tanabashi-a} for the (5,7), (6,8) and (7,9) bands were also included.
Prasad and Bernath also calculated molecular constants, and included in their fit all lines from Curtis and Sarre, Suzuki \etal\ and Amiot (for the (0,0) band). Our recalculation is based mainly on that of Tanabashi \etal, and also this fit by Prasad and Bernath. An explanation of the specific differences is presented below.
The computer program \emph{PGOPHER}~\citep{2010Western-Misc-a}, written by C. M. Western (University of Bristol) was used to recalculate the molecular constants, with the inclusion of the \textit{v\primed}=4 and \textit{v\primed}=6 perturbations, using the standard $N^2$ Hamiltonian for a $^3\Pi$ state~\citep{1979Brown-a, 1994Hirota-a}. A global least squares fit was performed including all lines from Tanabashi \etal, Tanabashi and Amano, Prasad and Bernath, Curtis and Sarre, Suzuki \etal, Lloyd and Ewart~\citep{1999Lloyd-a} (for the (0,0) band) and the two deperturbation studies by Bornhauser \etal~\citep{2010Bornhauser-a, 2011Bornhauser-a}.
The weights for the lines from Tanabashi \etal\ (including those from Tanabashi and Amano) were unchanged here, except for those involving the \textit{v\primed}=4 and \textit{v\primed}=6 states. These had mostly been deweighted, and were weighted more strongly in this study as the perturbations had been included in the fit. In their calculation of the perturbation constants, Bornhauser \etal\ observed lines involving \textit{J}\primed=1-6, 10-12 and 17-23 for the \textit{v\primed}=6 state, and \textit{J}\primed=4-14 for the \textit{v\primed}=4 state. Lines involving these \textit{J} levels were weighted highly, and other line weights were decreased with greater difference between \textit{J}\primed\ and these ranges. The actual lines observed by Bornhauser \etal\ were weighted similarly to those of Tanabashi \etal\ for the same bands.
The five remaining sets of lines were treated as follows. The sets from Prasad and Bernath (two sets), Curtis and Sarre and Suzuki \etal\ were given the same weights as in the fit performed by Prasad and Bernath. Lloyd and Ewart lines were assigned weights to be similar to those of Tanabashi \etal\ for the (0,0) band. To ensure that all lines were on the same wavenumber scale, transitions from these five sets were then compared to matching Tanabashi \etal\ transitions, and a weighted average wavenumber difference (one for each set) using matching lines was calculated, based on the assigned weights. This was added to all of the lines from each set as a wavenumber offset, to compensate for any systematic differences between studies. In their fit, Tanabashi \etal\ deweighted many lines due to the extensive perturbations, and those lines were also deweighted here if they were present in these five sets. This process excluded approximately 11\% of these lines. To further decrease the possibility of using any misassigned lines in the fit, any line whose wavenumber differed from a matching Tanabashi \etal\ line by more than 0.03 \wavenumbers\ was deweighted, excluding a further $\sim$6\%. A preliminary fit was then performed to obtain calculated values of each transition. Lines that had not been matched to Tanabashi \etal\ transitions were then deweighted if their observed-calculated values, as a result of this fit, were greater than 0.03 \wavenumbers.
A final global weighted least squares fit was performed, in which all reported molecular constants for the \lowerstate\ and \upperstate\ were floated, except for $A_D$ for \textit{v}\primed=8, 9 and 10. These were fixed at a value based on those calculated for the lower levels to obtain a good fit. The updated molecular constants are shown in Tables \ref{TABnewConstantsd} and \ref{TABnewConstantsa}. The magnitudes of the perturbation constants reported by Bornhauser \etal\ were also floated to improve the fit, and both the previous and changed values are shown in Table \ref{TABperts}.
\section{Calculation of Line Intensities}
The intensities of the rovibronic transitions are reported here as both Einstein \textit{A} values and oscillator strengths (\emph{f}-values).
Einstein \textit{A} values are calculated with the equation \citep{2005Bernath-Book-a}
\vspace{-0.5cm}
\begin{eqnarray}\label{EQNEinA}
A_{J^\prime\rightarrow J^{\prime\prime}} & = & \frac{16\pi \nu^3S_{J^{\prime\prime}}^{\Delta J}}{3\epsilon_0 hc^3(2J^\prime+1)}|\langle \psi_{v^\prime J^\prime}|R_e(r)|\psi_{v^{\prime\prime}J^{\prime\prime}}\rangle|^2\\
& = & 3.136\ 189\ 32\ \times 10^{-7} \frac{\bar{\nu}^3S_{J^{\prime\prime}}^{\Delta J}}{(2J^\prime+1)}|\langle \psi_{v^\prime J^\prime}|R_e(r)|\psi_{v^{\prime\prime}J^{\prime\prime}}\rangle|^2,
\end{eqnarray}
where $S_{J^{\prime\prime}}^{\Delta J}$ is the H\"{o}nl-London factor and $|\langle \psi_{v^\prime J^\prime}|R_e(r)|\psi_{v^{\prime\prime}J^{\prime\prime}}\rangle|$ is the transition dipole moment (TDM) matrix element, $A_{J^\prime\rightarrow J^{\prime\prime}}$ is in s\super{-1}, $\bar{\nu}$ in cm\super{-1} and R\sub{e} in debye. These have been converted into \emph{f}-values using the equation
\vspace{-0.5cm}
\begin{eqnarray}\label{EQNAtof}
f_{J^\prime\leftarrow J^{\prime\prime}} & = & \frac{m_e \epsilon_0 c (2J^{\prime}+1)}{2\pi e^2 (100\bar{\nu})^2 (2J^{\prime\prime}+1) } A_{J^\prime\rightarrow J^{\prime\prime}}\\
& = & {1.499\ 193\ 68\ }\frac{1}{\bar{\nu}^2} \frac{(2J^{\prime}+1)}{(2J^{\prime\prime}+1) }A_{J^\prime\rightarrow J^{\prime\prime}}.
\end{eqnarray}
Band intensities are reported as \Avv\ values, and can be converted from these to \fvv\ values using the equation:
\begin{equation}\label{EQNAvvtofvv}
f_{v^\prime\leftarrow v^{\prime\prime}} = {1.499\ 193\ 68\ }\frac{1}{\bar{\nu}^2} A_{v^\prime\rightarrow v^{\prime\prime}}
\end{equation}
where $\bar{\nu}$ is the average wavenumber for the band \citep{1983Larsson-a}.
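Equations (\ref{EQNAtof}) and (\ref{EQNAvvtofvv}) amount to one-line conversions. A minimal Python transcription (our own helper names; the numerical constant is taken from the equations above) is:

```python
def f_from_A(A, nu_bar, J_upper, J_lower):
    """Oscillator strength from an Einstein A value, Eq. (EQNAtof).

    A in s^-1, nu_bar (transition wavenumber) in cm^-1; J_upper = J',
    J_lower = J''."""
    return 1.49919368 / nu_bar ** 2 * (2 * J_upper + 1) / (2 * J_lower + 1) * A

def band_f_from_A(A_band, nu_bar):
    """Band oscillator strength f_{v'v''} from A_{v'v''}, Eq. (EQNAvvtofvv)."""
    return 1.49919368 / nu_bar ** 2 * A_band
```

For a Q-type line ($J'=J''$) the rotational and band conversions coincide, since the degeneracy ratio is unity.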
\textit{PGOPHER} was used to calculate Einstein \textit{A} values. It is able to calculate the necessary rotational TDMs and H\"{o}nl-London factors (Eq. \ref{EQNEinA}) for several types of electronic transitions, including \tPI-\tPI, if provided with a set of molecular constants and band strengths for each vibrational band.
For a diatomic molecule, the wavefunctions $\psi_{vJ}$ used in the calculation of the TDM can be described as a one-dimensional function of internuclear distance~\citep{2005Bernath-Book-a}. Rotationless TDMs were calculated using the computer program \emph{LEVEL}~\citep{2007LeRoy-Report-a}, written by R. J. Le Roy, which is able to calculate eigenfunctions and eigenvalues by solving the one-dimensional Schr\"{o}dinger equation for diatomic molecules. \emph{LEVEL} is able to calculate TDMs for rotational levels above \textit{J}=0, but it assumes a singlet-singlet transition. For this reason, a single TDM (for Q(0)) for each vibrational band was taken from \emph{LEVEL} and input into \emph{PGOPHER}.
\emph{LEVEL} must be provided with a potential energy curve, \emph{V}(\emph{r}), and an electronic TDM, both as a function of internuclear distance.
\subsection{Electronic Transition Dipole Moment}
Our calculation of the electronic TDM of the Swan system has been reported previously~\citep{2007Kokkin-a,2007Schmidt-a,2009Nakajima-a}, with the results shown in Table \ref{TABeTDM} and Figure \ref{FIGeTDM}. A brief description is given here. Wavefunctions were computed using the multi-reference configuration interaction (MRCI) method~\citep{1988Werner-a,1988Knowles-a}, whereby all single and double excitations from a complete active space self-consistent field (CASSCF)\citep{1985Werner-a,1985Knowles-a} reference state are included in the MRCI wave functions. The active space included all molecular orbitals (MO) arising from the C atoms' $2s$ and $2p$ valence orbitals. The basis set is the augmented correlation-consistent polarized aug-cc-pV6Z set of Dunning and co-workers~\citep{1989Dunning-a,1992Kendall-a,1995Woon-a,1996Wilson-a} and de Jong \emph{et al} .\citep{2001deJong-a} Core and core-valence (CV) correlation corrections were obtained using the aug-cc-pCVQZ basis set~\citep{1989Dunning-a,1992Kendall-a,1995Woon-a}. Scalar relativistic energy corrections (Rel) were evaluated via the Douglas-Kroll-Hess approach~\citep{1974Douglas-a,1985Hess-a,1986Hess-a}, in conjunction with the appropriate cc-pVQZ basis sets. The quantum chemical calculations were carried out using the MOLPRO2006.1 program~\citep{2006Werner-Misc-a}.
\subsection{Potential Energy Curves}
The potentials \emph{V}(\emph{r}) (Fig. \ref{FIGpotentials}) were calculated using the computer program \emph{RKR1}~\citep{2004LeRoy-Report-a}, which utilizes the first-order semiclassical Rydberg-Klein-Rees procedure~\citep{1932Rydberg-a,1933Rydberg-a,1932Klein-a,1947Rees-a} to determine a set of classical turning points for each potential, using equilibrium expansion constants $\omega_{e}$, $\omega_{e}x_{e}$, $\omega_{e}y_{e}$, $\omega_{e}z_{e}$, $\alpha_{e}$, $\gamma_{e}$ and $\delta_{e}$ for \Bv\ and \Gv. Values for these constants were calculated (Table \ref{TABconst}) in a weighted least squares fit using the energy level expressions for a vibrating rotator,
\begin{equation}\label{Gv}
G(v) = \omega _{e}(v+\frac{1}{2}) - \omega _{e}x_{e}(v+\frac{1}{2})^2 + \omega _{e}y_{e}(v+\frac{1}{2})^3 + \omega _{e}z_{e}(v+\frac{1}{2})^4
\end{equation}
and
\begin{equation}\label{Bv}
B_{v} = B_e -\alpha_{e}(v+\frac{1}{2}) + \gamma _{e}(v+\frac{1}{2})^2 + \delta _{e}(v+\frac{1}{2})^3,
\end{equation}
and the updated \Bv\ and \Gv\ values in Tables \ref{TABnewConstantsd} and \ref{TABnewConstantsa}. The weightings of the vibrational levels were based on the standard deviation of \Bv\ and \Gv\ values from the \emph{PGOPHER} line position fit.
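Equations (\ref{Gv}) and (\ref{Bv}) are simple polynomials in $(v+\frac{1}{2})$ and can be transcribed directly. The sketch below is illustrative only; the constants in any example call are placeholders, not the fitted values of Table \ref{TABconst}:

```python
def G_v(v, we, wexe, weye=0.0, weze=0.0):
    """Vibrational term value G(v), Eq. (Gv), in cm^-1."""
    x = v + 0.5
    return we * x - wexe * x ** 2 + weye * x ** 3 + weze * x ** 4

def B_v(v, Be, alpha_e, gamma_e=0.0, delta_e=0.0):
    """Rotational constant B_v, Eq. (Bv), in cm^-1."""
    x = v + 0.5
    return Be - alpha_e * x + gamma_e * x ** 2 + delta_e * x ** 3
```

These two expansions supply the $G(v)$ and $B_v$ input required by the RKR inversion.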
\subsection{Vibrational Band Intensities}
\Avv\ values for each vibrational band were also calculated, and are shown in Table \ref{TABAvv}. They were calculated as the sum of all single rotational Einstein \textit{A} values for possible transitions within the relevant band from the \textit{J}\primed=1, $\Omega$\primed=0 level. These were converted into \fvv\ values using Equation \ref{EQNAvvtofvv}.
\section{Analysis and Discussion}
Our final line list including positions, \emph{f}-values and Einstein \textit{A} values, is available at http://bernath.uwaterloo.ca/download/AutoIndex.php?dir=/C2/ (C2SwanLineList2012.txt). Calculated line positions and lower state energies are included for all lines, and observed positions are present when available. Line intensities are reported as both Einstein \textit{A} values and \textit{f}-values. Positions and intensities were calculated for all possible bands involving the observed vibrational levels (i.e. up to (0,9) and (10,0)).
\emph{PGOPHER} was also used for the purpose of validation, as it is able to calculate and plot spectra based on its line list, which can be compared to the observed spectrum. In all of the spectra shown, a constant Gaussian instrument function was added to best match the observed broadening. The experimental procedure that Tanabashi \emph{et al}. used involved observing \CC\ emission from a microwave discharge in a flow of acetylene (C\sub{2}H\sub{2}) diluted in argon through a discharge tube. In such a system the molecular vibration, and to a lesser extent the rotation, will not be at thermal equilibrium. For this reason, the rotational and vibrational temperatures of the simulation were also adjusted for best agreement. Two spectra were recorded, one for the $\Delta$\emph{v}=-1 to +2 sequences and another for the $\Delta$\emph{v}=-2 and -3 sequences. Rotational and vibrational temperatures of 1140 K and 6800 K, and 940 K and 5000 K were used for the $\Delta$\emph{v}=-1 to +2 and $\Delta$\emph{v}=-2 to -3 spectra, respectively. The final parameter that had to be added manually was a linear scaling factor, as the y-axis units of the recorded spectrum are arbitrary. This value could not be kept constant during the production of each figure given below (Figs. \ref{fig(0,0)} to \ref{fig(2,0)}). This is due to the presence of an instrument response function, which cannot be corrected for at this point.
There are numerous perturbations in the \upperstate\ state, which have caused many of the line positions calculated by \emph{PGOPHER} to be slightly inaccurate, and in turn have also had a small effect on the reported intensities. With the inclusion of the perturbation constants for the \textit{v\primed}=4 and \textit{v\primed}=6 levels, the average error for lines involving those upper levels is improved from 0.203 \wavenumbers\ to 0.057 \wavenumbers, and 0.569 \wavenumbers\ to 0.038 \wavenumbers, respectively. The first values were calculated using the molecular constants of Tanabashi \etal, and the second using those in Tables \ref{TABnewConstantsd} and \ref{TABnewConstantsa}, excluding any lines that had been heavily deweighted in the final fit. A more detailed description of the observed perturbations is available in ref~\citep{2007Tanabashi-a}.
The spectra match very well for the lower vibrational and rotational levels; most of the inaccuracies mentioned are present in the higher vibrational and rotational levels. Three small sections of the spectra are shown in Figures \ref{fig(0,0)} to \ref{fig(2,0)} that are typical of the rest of the range.
For further validation, lifetimes of vibrational levels have been calculated and are compared to previous theoretical and experimental results in Table \ref{TABLifetimes}. For each upper vibrational level, lifetimes were calculated as the reciprocal of the sum of the Einstein \textit{A} values for all possible transitions from the \textit{J}\primed=1, $\Omega$\primed=0 level. Good agreement is shown with both sets of data. The theoretical values of Schmidt and Bacskay include transitions to the c$^3\Sigma\rm{_g^+}$ state. They state that this system contributes 3-4\% to their radiative lifetimes, and if this is taken into account, excellent agreement with our values is shown. Our \Avv\ values were converted into \fvv\ values for comparison with those of Schmidt \etal\ (for up to \textit{v}\primed=5 and \textit{v}\primed\primed=5) \citep{2007Schmidt-a}, using Equation \ref{EQNAvvtofvv} and the wavenumber of the P(0) transition as the band wavenumber. It should be noted that slightly different \fvv\ values would be obtained with a different choice of band wavenumber. Excellent agreement is shown for most bands; however, some of the higher vibrational bands disagree by up to $\sim$60\%, as shown in Table \ref{TABfvv}.
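The lifetime computation described here is simply the reciprocal of a sum of Einstein \textit{A} values over all downward transitions from a given upper level. As a sketch (our own helper, not code used in this work):

```python
def radiative_lifetime(A_values):
    """Radiative lifetime (s) of an upper level.

    A_values: Einstein A coefficients (s^-1) for all transitions
    out of the level; the lifetime is 1 / sum(A)."""
    return 1.0 / sum(A_values)
```

In this work the sum runs over all transitions from the \textit{J}\primed=1, $\Omega$\primed=0 level of each upper vibrational state.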
\section{Conclusion}
Many perturbations are present in the \upperstate\ state, and only those shown in Table \ref{TABperts} (involving the \upperstate, \textit{v}=4 and \textit{v}=6 levels) have been accounted for in these calculations. While many of the lines in the new line list do not match experiment precisely, the positions and intensities reported here are an improvement on previously available data, where results have been based on the partly incorrect assignments made by Phillips and Davis~\citep{1968Phillips-Book-a}. The calculated vibrational level lifetimes show good agreement with experimental and theoretical studies. The line list produced is an improvement over what is currently available, and will be of use to astronomers, materials and combustion scientists in the analysis of the \CC\ Swan system.
\section{Acknowledgements}
Support for this work was provided by a Research Project Grant from the Leverhulme Trust and a Department of Chemistry (University of York) studentship.
\section{Introduction}
\label{intro}
The opportunities of space experimentation are rare and their waiting time is often very long. For this reason,
other ways of achieving reduced gravity (or simulated reduced gravity) are often used as a replacement. Drop
tower and parabolic flight experiments provide short time low-gravity conditions, 4-9 s (Bremen drop tower) and
25 s (ESA Zero-g aircraft). For experiments that require several minutes of low gravity, sounding rockets are
available (ESA Maxus program, 13 min). For experiments that require long low-gravity duration as e.g. in life
sciences, simulation devices like random positioning machines or clinostats can be used. However, all those
means are prohibited in some cases because of security considerations. This concerns flight experiments
with highly flammable fluids like hydrogen and especially oxygen, whose study is extremely important because they are
the fuel components for space propulsion engines.
Another means is used more and more often to achieve long-duration low gravity conditions: magnetic gravity
compensation. Compared to the other approaches, this means has several undeniable advantages.
\begin{itemize}
\item It is performed in a ground-based facility with no moving parts so that a good security level can be achieved.
\item The low gravity duration is unlimited.
\item In principle, no waiting time.
\item Reasonable cost.
\item Possibility of controlling gravity levels (such as corresponding to the Moon, Mars etc.).
\item Possibility of controlling time variation of gravity, which can reproduce the acceleration (or deceleration) of
space vehicles.
\end{itemize}
However, drawbacks and important limitations do exist. They will be discussed below. Some additional
explanations and definitions need to be given first.
\subsection{Magnetic gravity compensation versus magnetic levitation}
Magnetic gravity compensation means (total or partial) controlled reduction of the gravity force \emph{at
each point of the object}. This definition is not equivalent to that of magnetic levitation. The latter
requires that the object be suspended, which does not necessarily mean that the gravity is compensated
\emph{inside} the object when it is rigid. An example of levitation without gravity compensation is a
transparent bowl placed on a superconductive disk. The bowl contains water with a goldfish. The whole system
is levitated. The photo by \cite{goldfish90} shows that the meniscus of the water is flat, which means that
both water and fish still experience the strong gravity. In what follows, the magnetic gravity compensation
inside \emph{fluids} will be considered. The term magnetic levitation will be rather applied to solid
objects.
\subsection{Magnetic field and magnetic forces}
The magnetic field is characterized by two variables, the magnetic field intensity $\vec{H}$ [A/m] and the
magnetic induction (called also magnetic flux density) $\vec{B}$ [T]. In vacuum, they are related to each other
by the expression
\begin{equation}\label{BHv}
\vec{B}=\mu_0\vec{H},
\end{equation}
where $\mu_0=4\pi\cdot 10^{-7}$ [T$\cdot$m/A] is a constant called vacuum permeability.
The action of the magnetic field $\vec{H}$ on the matter provokes its own magnetic field called
magnetization:
\begin{equation}\label{M}
\vec{M}=\chi\vec{H},
\end{equation}
where the coefficient of proportionality $\chi$ is the magnetic susceptibility of the matter. The total magnetic
field is equal to the sum of the external and induced fields,
\begin{equation}\label{Bm}
\vec{B}=\mu_0(\vec{H}+\vec{M})=\mu\mu_0\vec{H},
\end{equation}
where $\mu=1+\chi$ is the magnetic permeability. The susceptibility defines the magnetic properties of the
matter. When its absolute value is comparable or larger than unity, the matter is strongly magnetic. This is the
case of ferromagnetic ($\chi\gg 1$) or superconductive ($\chi\approx -1$) substances. In what follows we will
consider only weakly magnetic substances ($|\chi|\ll 1$) that can be either diamagnetic ($\chi<0$) or
paramagnetic ($\chi>0$).
It is important to note that for weakly magnetic substances, $\chi\propto\rho$, where $\rho$ is the mass
density. We will introduce the specific magnetic susceptibility,
\begin{equation}\label{al}
\alpha=\chi/\rho,\end{equation} which characterizes such substances.
Since $\mu\approx 1$ with high accuracy for weakly magnetic substances such as air, the magnetic field created
by a given installation in air is equal to that created in vacuum. For this reason, the $\vec{H}$ value is
related to $\vec{B}$ by the universal relation (\ref{BHv}) and $\vec{B}$ is also often called magnetic field.
Most pure fluids (e.g. H$_2$O, H$_2$, N$_2$) and organic substances are diamagnetic. Some fluids (e.g. O$_2$,
NO) are paramagnetic. The magnetic susceptibility of paramagnetic substances varies with temperature; that of
diamagnetic substances is almost independent of temperature.
The magnetic force that acts on the unit volume of a substance is
\begin{equation}\label{Fm}
\vec{F}_m=\frac{\chi}{2\mu_0}\nabla (\vec{B}^2),
\end{equation}
where $\nabla$ is the vector gradient operator. The gravity force per unit volume is
\begin{equation}\label{Fg}
\vec{F}_g=\rho\vec{g},
\end{equation}
where $\vec{g}$ is the Earth gravity acceleration. An ideal compensation is achieved when
\begin{equation}\label{ideal}
\vec{F}_m+\vec{F}_g=0.
\end{equation}
In a cylindrical $r-z$ reference system where the $z$ axis is directed upwards, this expression is equivalent
to two equations,
\begin{eqnarray}
\frac{\partial(B^2)}{\partial r}&=&0 \label{Br}\\
\frac{\partial(B^2)}{\partial z}&\equiv&\nabla(B^2)_z=\frac{2\mu_0g}{\alpha}\equiv G,\label{comp}
\end{eqnarray}
where $\alpha$ is defined by (\ref{al}). It means that for ideal compensation, the magnetic field would need
to satisfy the equation $B=\sqrt{c+Gz}$, where $c$ is an arbitrary constant. It has been shown by
\cite{Quettier} that the Maxwell equations for the magnetic field admit no such solution, so that
ideal compensation in any finite volume is impossible. In practice, ideal compensation is achieved at a
single point or, at most, at several points.
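As a quick numerical sanity check, the ideal profile $B=\sqrt{c+Gz}$ indeed yields a constant $\nabla(B^2)_z=G$; the values of $c$ and $G$ below are arbitrary illustration numbers:

```python
# Check numerically that B(z) = sqrt(c + G*z) gives d(B^2)/dz = G,
# i.e. a vertically uniform magnetic force density (Eq. for ideal
# compensation above). The values of c and G are arbitrary.
import math

C_CONST = 25.0    # T^2, arbitrary
G_CONST = 1000.0  # T^2/m, arbitrary

def b_ideal(z):
    return math.sqrt(C_CONST + G_CONST * z)

z, dz = 0.01, 1e-6
grad_b2 = (b_ideal(z + dz)**2 - b_ideal(z - dz)**2) / (2 * dz)
print(f"grad(B^2)_z = {grad_b2:.1f} T^2/m")  # equals G_CONST
```

The point of the impossibility result above is that no field satisfying the Maxwell equations can reproduce this profile throughout a finite volume, only at isolated points.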
The stability of levitation is an important issue and is discussed by many authors starting from
\cite{Braunbek39}. For the purposes of the present study, it is important to mention that the levitation of a
drop (or, generally, of a denser phase) in the surrounding gas is stable for diamagnetic fluids and unstable for
paramagnetic fluids. On the contrary, the levitation of a bubble (or, generally, of a less dense phase) inside
the liquid is stable for paramagnetic and unstable for diamagnetic fluids \citep{MST09}.
\subsection{Required magnetic fields}
It is important to underline that the magnetic compensation does not work for all substances at the same
time. The $\nabla(B^2)_z$ value required to compensate the gravity for a particular substance is given by the
material constant $G$ from Eq. \ref{comp}. This value for different substances is shown in Fig.~\ref{gradB2}.
Note that the $G$ value for oxygen is the smallest. In most installations, the magnetic field is created with
one or several co-axial solenoids, for which the radial component of the magnetic force is zero at the axis
so that $|\nabla(B^2)|=|\nabla(B^2)_z|$, where $\nabla(B^2)_z$ can be positive or negative. For this reason,
one speaks often of $|\nabla(B^2)|$ instead of $\nabla(B^2)_z$.
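The material constant $G$ from Eq. \ref{comp} can be estimated for any weakly magnetic fluid from handbook susceptibility and density data. The $\chi$ and $\rho$ values below are approximate handbook values assumed for illustration, not values taken from this paper; they reproduce the orders of magnitude shown in Fig.~\ref{gradB2}:

```python
# Estimate the material constant G = 2*mu0*g / alpha (Eq. for ideal
# compensation) that an installation must match to levitate a fluid.
# The susceptibilities and densities are approximate handbook values,
# assumed here for illustration only.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
G_EARTH = 9.81            # m/s^2

def required_grad_b2(chi, rho):
    """Return |grad(B^2)| (T^2/m) needed to levitate a substance."""
    alpha = chi / rho  # specific magnetic susceptibility, m^3/kg
    return abs(2 * MU0 * G_EARTH / alpha)

# Liquid oxygen (paramagnetic): chi ~ +3.5e-3, rho ~ 1141 kg/m^3
print(f"O2:    {required_grad_b2(3.5e-3, 1141.0):7.1f} T^2/m")  # ~8
# Water (diamagnetic): chi ~ -9.0e-6, rho ~ 1000 kg/m^3
print(f"water: {required_grad_b2(-9.0e-6, 1000.0):7.1f} T^2/m")  # ~2700
```

The roughly 8 T$^2$/m obtained for oxygen, compared to nearly 3000 T$^2$/m for water, is why oxygen was the fluid of choice for the earliest compensation experiments.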
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{gradB2_Fluids}
\caption{The values of $|\nabla(B^2)|$ required for gravity compensation for different fluids. The value for
O$_2$ is about 8 T$^2$/m, which is so small that the corresponding bar is almost invisible. The signs of the
required $\nabla(B^2)_z$ are opposite for paramagnetic (O$_2$ and NO) and diamagnetic (all other) fluids.}
\label{gradB2}
\end{figure}
Generally speaking, if a sample is subjected to the $\nabla(B^2)_z$ needed to compensate the gravity in a given
substance, the gravity is not compensated for the others.
Note that Eq. \ref{comp} does not involve the density nor the mass of the sample. It means that the gravity will
be compensated independently of the sample mass. If the gravity is compensated for the liquid phase of a
substance, it is also compensated for the gas phase of the same substance, i.e. the buoyancy force for the gas
bubbles or solid crystals in the liquid is compensated as well.
In agreement with Eq. \ref{comp}, the ability of a given magnetic installation to compensate the gravity is
characterized by the maximum $|\nabla(B^2)|$ that the installation is able to generate.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Solenoid}
\caption{An example of the variation of $B$ and $\nabla(B^2)_z$ along the axis $z$ of a solenoid. The
locations appropriate for levitation of dia- and para-magnetic samples are indicated. The HYLDE solenoid
\citep{Cryo02} data are shown. $z$ is measured in cm.} \label{Sol}
\end{figure}
The variation of this value along the axis of a typical solenoid (Fig.~\ref{Sol}) shows that it has two
extrema situated near the ends of the solenoid. These extrema are the most suitable places for the gravity
compensation because they provide the maximum value of $|\nabla(B^2)|$ for a given current in the solenoid.
The upper extremum is a minimum and is suitable for the levitation of diamagnetic substances. The lower
extremum is a maximum and is suitable for paramagnetic substances (Fig.~\ref{Sol}).
\section{Past and present of magnetic gravity compensation}
The bases of magnetic levitation were put forward by \cite{Braunbek39}. He succeeded in levitating
diamagnetic bismuth, which has a very low required $|\nabla(B^2)|$. He also provided the theory of levitation.
The first gravity compensation experiments have been realized in the 1960's independently in Berkeley (USA)
by \cite{Lyon65} and in Kharkov (USSR, Ukraine at present) by \cite{Kirichenko68}. They dealt with the
studies of the boiling heat transfer in oxygen that had a very small required $|\nabla(B^2)|$. The magnetic
field was created with resistive solenoids. These studies have been motivated by the importance of oxygen as
a rocket fuel.
The development of superconductive solen\-oids opened the way to their wide use for gravity compensation. It has
been pioneered by \cite{B&T91}, who levitated multiple organic samples, both solid and fluid. The levitation of
the frog embryos by \cite{LevFrogUS} is, to our knowledge, the first application of magnetic gravity compensation
in the life sciences. A large number of works on magnetic gravity compensation have been published since then.
\begin{table*}
\centering \caption{Available magnetic gravity compensation installations worldwide} \label{tabI}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Location& $B$, T& $|\nabla\,(B^2)|$, T$^2$/m& Bore {\O}, mm& Latest citation \\
\hline\noalign{\smallskip}
Nottingham, UK& 16.5& 2940& 50&\cite{Hill08} \\
Nijmegen, NL& $\sim 17$& $\sim 3000$& 40&\cite{Geim00} \\
Gainesville FL, USA& 15& $ 3000$ & 66& \cite{Mouse09} \\&& 760& 195&\cite{Brooks01}\\
Providence RI, USA& 9.5& $ 3200$ & 11& \cite{Brown04} \\
Xi'an, China&16.12& 3026&51&\cite{China08}\\
Hiroshima, Japan& 15& $\sim 3000$& 50& \cite{Hiroshima} \\
Tohoku, Japan& ~& $\sim 4000$& 52& \cite{Tohoku1} \\
Tsukuba, Japan& 8.5 & 448 & 50&\cite{Tsukuba1} \\& 17& 1600&&\\
Grenoble, France& 10 & 1000 & 50 &\cite{PRL06}\\
& 2& 10& 180& \cite{MST09} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
The experimental installations known to us that are currently available for magnetic gravity compensation are presented
in Table \ref{tabI} with their main parameters such as the maximum attainable $|\nabla(B^2)|$ value and the bore
diameter. The latter defines the maximum sample size that can be used. Installations that can attain $\sim 3000$
T$^2$/m may be used for gravity compensation in water or biological tissues that consist mainly of water (cf.
Fig.~\ref{gradB2}); their bore diameters correspond to the thermally insulated part of the bore at room
temperature. The last two lines in the table refer to the installations developed in our group, the HYdrogen
Levitation DEvice (HYLDE) and Oxygen Low Gravity Apparatus (OLGA), respectively.
The physical sciences studies performed with the magnetic gravity compensation in fluids concerned most\-ly
the shape and motion of bubbles and drops. The studies performed at isothermal conditions dealt with the drop
shape \citep{H20SolLev,WunenH2,Hiroshima,Hill08}, drop vibrations \citep{B&T91,HeOscill}, drop coalescence
\citep{B&T91,HeNoncoal}, applications in microfluidics \citep{Lyuksyutov04}, surface instability in the
magnetic field \citep{O2inst03}, and, more recently, a study of the liquid meniscus under fast acceleration
change \citep{GvarELGRA09,OLGA}. The non-isothermal studies concerned boiling
\citep{Lyon65,Kirichenko68,PRL06,MST09}, drop behavior under temperature gradients
\citep{Tohoku1,GradELGRA09}, phase transitions under vibrations
\citep{BeysVib05,BeysVib05a,ActaAstro07,BeysVib08} and gravity influence on flame \citep{Flame08}.
The life sciences studies are concerned with the microgravity influence on protein crystals' growth
\citep{Tsukuba2,China08}, expression of genes \citep{ArabLev07,Coleman2007,ExprELGRA09}, growth of living cells
\citep{Brown04,CellELGRA09,HumELGRA09,Hammer09}, levitation of small creatures
\citep{LevFrogUS,Mouse09}, and plant morphology \citep{Manzano09}.
\section{Magnetic force heterogeneity issue}
It has already been mentioned that the ideal compensation is achieved only in isolated points. However, it is
possible to approach the ideal compensation conditions within a given accuracy in any volume. The effective
gravity spatial heterogeneity is thus the most important issue that limits the applicability of magnetic gravity
compensation.
The compensation quality can be characterized by the spatial distribution of the effective gravity acceleration
\begin{equation}\label{aeff}
\vec{a}_{eff}=(\vec{F}_m+\vec{F}_g)/\rho=\vec{g}+\frac{\alpha}{2\mu_0}\nabla (\vec{B}^2)
\end{equation}
defined with (\ref{Fm},\ref{Fg}). In practice, the non-dimensional acceleration heterogeneity
$\vec{\varepsilon}=\vec{a}_{eff}/g$ is more convenient.
There are several possible causes for $\vec{a}_{eff}$ spatial variation. Those related to the spatial
variation of $\nabla ({B}^2)$ manifest themselves even in a single-component system, i.e. in a pure fluid sample
where its gas and liquid phases might coexist. Additional spatial variation of the effective gravity appears
in multi-phase samples where $\alpha$ varies. This variation might lead to internal mechanical stresses or
even component displacement in such systems and needs to be analyzed separately for each specific case, for
which the magnetic susceptibility $\chi$ for each of the components needs to be known with precision. Note
that the magnetic susceptibility of life sciences systems is well studied for high-frequency magnetic fields.
However, the $\chi$ value for a \emph{constant} field might be very different and has yet to be determined
experimentally. In the rest of this section we consider only the single-component fluids.
A spatial variation of $\nabla ({B}^2)$ may appear in such fluids for several reasons. First, there is
the variation of the background force field of the magnetic installation (sec. \ref{Solf}). Second, distortions
can be induced by the sample. These include the variation caused by the experimental cell structure (see
below) and by the fluid itself. Let us consider each of the heterogeneity causes separately.
\subsection{Background field heterogeneity}\label{Solf}
The spatial variation of the field heterogeneity $\vec{\varepsilon}$ can be calculated numerically when the
magnetic field is known with certainty. This is the case of the magnetic field of a solenoid
(Fig.~\ref{Sol}). In Fig.~\ref{HomSol}, one can locate two compensation points at the axis ($r=0$) at which
$\vec{\varepsilon}=0$. One of them corresponds to the stable levitation of a bubble; another is unstable
\citep{MST09}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth, clip]{epsz3D_sansInsert1}\\(a)\\
\includegraphics[width=\columnwidth, clip]{epsr3D_sansInsert1}\\(b)
\caption{Magnetic force heterogeneity axial (a) and radial (b) components for the solenoid (height: 555 mm)
of the OLGA installation \citep{MST09}. $r=0$, $z=z_0=-248$~mm is one of two compensation points.
Fig.~\ref{HomSol}a corresponds to the zoomed lower portion of the complete $\nabla (B^2)_z$ curve for the
OLGA solenoid (similar to that of Fig.~\ref{Sol}).} \label{HomSol}
\end{figure}
Because of the cylindrical symmetry, the vector $\vec{\varepsilon}$ has only two components: axial
$\varepsilon_z$ and radial $\varepsilon_r$. It is important to know, for estimation purposes, which field
value $B$ is necessary to obtain a given $\vec{\varepsilon}$ inside a sphere of the radius $R$ for a
substance requiring the $|\nabla ({B}^2)|$ value $G$ (see Eq. \ref{comp} and Fig. \ref{gradB2}). The answer
\citep{Quettier} is given by the expression
\begin{equation}\label{mag}
B=\frac{1}{2}\sqrt{\frac{3GR}{2\varepsilon_r+\varepsilon_z}}.
\end{equation}
In spite of its simplicity, it gives quite accurate results. Two examples can be given for compensation in
water, a case particularly important for life sciences applications. To obtain the gravity heterogeneity
$\varepsilon_z=\varepsilon_r=1$\% inside a sphere of $2R=50$~mm diameter, the magnetic installation should
create, according to (\ref{mag}), the field $B=41$~T. This is close to the world field record obtained with
the hybrid (superconductive+resistive) installations. Such an installation would be extremely expensive. For
$B=16.5$~T, Eq. (\ref{mag}) results in $\varepsilon=1.2$\% for $2R=10$~mm, which corresponds to the existing
installations (Table \ref{tabI}).
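Both estimates above follow directly from Eq. (\ref{mag}) once the material constant $G$ for water is known. In the sketch below, the water susceptibility and density are assumed handbook values ($\chi\approx-9.0\times10^{-6}$, $\rho\approx1000$ kg/m$^3$), not values from this paper:

```python
# Field required by B = 0.5*sqrt(3*G*R / (2*eps_r + eps_z)) for a sphere
# of radius R with target heterogeneities eps_r, eps_z.
# chi and rho for water are approximate handbook values (assumptions).
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A
G_EARTH = 9.81            # m/s^2

def field_for_sphere(G, R, eps_r, eps_z):
    """Return B (T) needed for heterogeneity (eps_r, eps_z) in a sphere."""
    return 0.5 * math.sqrt(3.0 * G * R / (2.0 * eps_r + eps_z))

# Material constant for water: G = 2*mu0*g/|alpha|, alpha = chi/rho
chi_water, rho_water = -9.0e-6, 1000.0
G_water = abs(2 * MU0 * G_EARTH / (chi_water / rho_water))

# 1% heterogeneity inside a 50 mm diameter sphere -> roughly 41 T
print(f"B = {field_for_sphere(G_water, 0.025, 0.01, 0.01):.1f} T")
```

Inverting the same expression for $B=16.5$~T and a 10~mm sphere gives a heterogeneity of roughly 1.2\%, consistent with the estimate quoted above for existing installations.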
Eq. (\ref{mag}) helps in finding ways to improve the gravity homogeneity of an existing installation. The local
$B$ increase can be achieved by using ferromagnetic inserts inside the solenoid \citep{Quettier}. It is well
known that the field increases in the vicinity of a ferromagnetic component. The force homogeneity calculated
in presence of the insert from Fig. \ref{ins}a is shown in Figs. \ref{ins}b,c. The improvement of the radial
heterogeneity is especially large. The calculation of the field has been performed with the Radia freeware
package \citep{Radia} available from the ESRF web site together with its complete description. Compared to
the case with no insert (Figs. \ref{HomSol}), one obtains an increase of the compensation volume by a factor
5 to 8.
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{insert}\\
(a)\\
\includegraphics[width=\columnwidth, clip]{epsz3D_avecInsert}\\
(b)\\\includegraphics[width=\columnwidth, clip]{epsr3D_avecInsert}\\
(c) \caption{Insert scheme (a). Axial (b) and radial (c) magnetic force heterogeneities for the same solenoid
as that of Figs. \ref{HomSol} but with the insert. $r=0$, $z=z_0=-155$ mm is the compensation point. The
exact position of the insert with respect to the solenoid center is shown in Fig. \ref{geom-reelle}a below.}
\label{ins}
\end{figure}
\subsection{Fluid-induced distortion of effective gravity}\label{DistFl}
Let us first consider a two-phase fluid in the \emph{constant} magnetic field and under Earth gravity. Since
$|\chi|\ll 1$ both for liquid and gas phases, a distortion of the background field induced by the liquid and
gas domains and by the interface separating them is usually small. However, it is well known that, in the
electric field, the field distortion can be strongly amplified near the regions of high interface curvature.
Since the equations for the static magnetic field are similar to their electrostatic counterparts (they can
also be expressed in terms of the scalar potential), an analogous effect exists in the magnetic field. The
field distortion is localized in the vicinity of the high curvature interface points. We explain below that
such points can appear in paramagnetic fluids like oxygen.
It is well known \citep{Ferromagn67} that the surface of a \emph{ferromagnetic} fluid becomes corrugated when
$B$ exceeds a threshold value $B_c\sim [\sigma g(\rho_L -\rho_V)]^{1/4}$. Here $\sigma$ is the interface
tension; the indices L and V refer to liquid and vapor, respectively. The period of corrugation is
$\lambda=2\pi l_c$, where $l_c=[\sigma/g(\rho_L -\rho_V )]^{1/2}$ is the capillary length. The $\chi$ sign for
paramagnetic substances is the same as for ferromagnetic substances, but the absolute value is much smaller.
For this reason, the same instability occurs for the paramagnetic fluids but at much larger fields, see Fig.
\ref{ferro}a.
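For orientation, the corrugation scale $\lambda=2\pi l_c$ can be estimated from approximate liquid-oxygen properties near its normal boiling point; the surface tension and density difference below are assumed handbook values, used for illustration only:

```python
# Capillary length l_c = sqrt(sigma / (g*(rho_L - rho_V))) and the
# corrugation period lambda = 2*pi*l_c discussed above.
# Fluid properties are approximate handbook values for liquid oxygen
# near its normal boiling point (~90 K), assumed for illustration.
import math

def capillary_length(sigma, delta_rho, g=9.81):
    """Return l_c (m) for surface tension sigma (N/m), density step (kg/m^3)."""
    return math.sqrt(sigma / (g * delta_rho))

sigma = 13e-3        # N/m
delta_rho = 1140.0   # kg/m^3
l_c = capillary_length(sigma, delta_rho)
wavelength = 2 * math.pi * l_c
print(f"l_c ~ {l_c * 1e3:.1f} mm, lambda ~ {wavelength * 1e3:.1f} mm")
```

A millimetre-scale $\lambda$ explains why the corrugation is visible in a centimetre-scale cell far from compensation, while under compensation ($a_{eff}\to 0$) $l_c$ and hence $\lambda$ diverge and the instability appears instead as a global bubble deformation.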
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{O2corrug}\\
(a)\\
\includegraphics[width=0.6\columnwidth, clip]{O2conic}\\
(b) \caption{Surface corrugation (a) and conical bubble shape (b) for oxygen at $T=154.5$K, close to its
critical point ($T_c=154.8$K, $p_c=50$ bar) in OLGA. Two vertical threaded rods that keep the cell together are
visible. Both $B$ and $|\nabla ({B}^2)|$ values for the case (b) are slightly larger than for (a).}
\label{ferro}
\end{figure}
When $\lambda/2$ is larger than the cell size, this effect leads to a distortion of the bubble shape. Under some
conditions, the bubble may develop a conical end (or cusp) \citep{O2sharp2}, see Fig. \ref{ferro}b. This means that
the field is distorted in the vicinity of this end. It is the mutual amplification of the field distortion
and interface deformation that leads to the cusp geometrical singularity \citep{Conical}. This effect is
absent in diamagnetic fluids where an interface deformation induces the field change that causes the
interface smoothing.
When the field with a strong gradient is applied, the situation becomes even more complicated. It is now the
effective gravity acceleration $a_{eff}$ that needs to be used instead of $g$ for the calculation of $B_c$
and $\lambda$. Thus $B_c\to 0$ at compensation conditions \citep{O2inst03} so that the instability always
occurs at compensation. Since $\lambda\to\infty$, the interface corrugation is not observed. According to our
observations, the instability manifests itself by the bubble shape deformation. Far from the critical point,
bubble is of elongated oval shape. Since the instability strength is controlled by the difference $(B-B_c)$,
the elongation should grow with $B$. This leads to an apparent paradox that appears when one uses a
ferromagnetic insert to improve the homogeneity of the background force field (sec. \ref{Solf}). One might
expect an improvement of the bubble sphericity. On the contrary, the bubble deformation grows because the
insert increases $B$ and thus strengthens the instability. Close to the critical point, a cusp appears (Fig.
\ref{ferro}b) because the instability becomes especially strong with the decrease of $B_c\sim [\sigma (\rho_L
-\rho_V)]^{1/4}$.
As mentioned above, this instability is absent in diamagnetic fluids.
\subsection{Container-induced distortion of effective gravity}\label{DistCell}
One needs to be particularly cautious about the materials used for the fabrication of the experimental cell
and its fixation. In practice, stainless steel is often used because of its strength and high chemical
resistance to corrosion. It is considered to be a non-ferromagnetic material. This is true for a raw piece of
stainless steel. Any mechanical or thermal stress converts at least some superficial layer, adjacent to the
treated surface, to the ferromagnetic state. As an example, one can mention the welding joints. However, the
magnetic strength (i.e. the saturation field) of such components is rather weak. Such a conversion can be
easily demonstrated e.g. with a small but strong rare earth magnet.
To estimate the influence of the weakly ferromagnetic structural elements on the effective gravity field,
another field heterogeneity calculation was necessary. Several cell components (Fig. \ref{geom-reelle}a) with
a saturation field of 0.1 T (exaggerated for estimation purposes) have been simulated.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{geom-reelle}\\
(a)\\
\includegraphics[width=\columnwidth, clip]{epsz3D_geom-re_Ins}\\
(b) \caption{(a) Several stainless steel cell components inside the OLGA ferromagnetic insert. The position of
the insert with respect to the solenoid center (Fig. \ref{ins}a) is shown with the coordinates in mm. The RADIA
grid used for numerical calculations is visible on the insert. (b) The axial gravity heterogeneity corresponding
to the geometry shown in Fig. \ref{geom-reelle}a, to be compared with Fig. \ref{ins}b.} \label{geom-reelle}
\end{figure}
One can see that there is practically no long-range force field distortion. It is limited to a range of several mm
around the element. This means that one can use such elements provided that they are far enough from the
working region. It is better however to avoid the potentially ferromagnetic materials; we replaced stainless
steel by brass or titanium wherever possible.
The influence of one of the simulated elements, a flat stainless steel ring (the smallest of the three rings
shown in Fig. \ref{geom-reelle}a) situated at the bottom of the experimental cell was found experimentally. The
vapor bubble was attracted to the ring if the distance between them was small enough. This attraction
corresponds to the negative $\varepsilon_z$ (Fig. \ref{geom-reelle}b) in the vicinity of the ring, at
$z-z_0\in[0,5]$ mm.
\subsection{Experimental measurement of $\varepsilon$}\label{Exp}
The experimental measurement of $\vec{\varepsilon}$ is desirable under the compensation conditions. The
background force field testing is possible using a container filled with the liquid and vapor phases of the
same substance. Let the container be placed into the magnetic field in such a way that the bubble does not touch
the container walls. Under the ideal (space) weightlessness conditions, the shape of the bubble would be
spherical. Under magnetic compensation, the bubble center is situated at the compensation point or close to
it. Because of the spatial variation of the magnetic force, the bubble becomes elongated (Fig. \ref{bubble}).
\begin{figure}\sidecaption
\centering
\includegraphics[width=0.3\columnwidth]{ElBubble}
\caption{Sketch of a vapor bubble magnetically levitated inside the liquid. The bubble is deformed by the
residual effective gravity field.} \label{bubble}
\end{figure}
From the bubble image, one can measure the bubble surface curvatures $K_H$ and $K_C$ at the points H and C
respectively. The heterogeneity of the effective gravity acceleration can be estimated (see Appendix A) with the
expression that follows from (\ref{est}),
\begin{equation}\label{eps}
\varepsilon \approx l_c^2\frac{K_H-K_C}{\delta z}.
\end{equation}
Since $\varepsilon_r$ is neglected in such an estimation, its accuracy is best for small bubbles. Note that
the sensitivity of this method increases with the decrease of the surface tension. For large $\sigma$, the
bubble remains spherical even at large gravity heterogeneity and it is difficult to measure the difference of
$K_H$ and $K_C$.
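A minimal numerical sketch of the estimate above follows; the curvatures, capillary length, and bubble height used here are hypothetical values, not measurements from this work:

```python
# Residual gravity heterogeneity estimated from the curvature difference
# between the top (H) and bottom (C) points of a levitated bubble:
#     eps ~ l_c^2 * (K_H - K_C) / delta_z
# All numerical inputs below are hypothetical, for illustration only.

def heterogeneity(l_c, k_h, k_c, delta_z):
    """Return the dimensionless residual gravity eps."""
    return l_c**2 * (k_h - k_c) / delta_z

l_c = 1.1e-3              # capillary length, m (hypothetical)
k_h, k_c = 510.0, 490.0   # surface curvatures at H and C, 1/m (hypothetical)
delta_z = 4e-3            # vertical distance between H and C, m (hypothetical)
print(f"eps ~ {heterogeneity(l_c, k_h, k_c, delta_z):.4f}")
```

As noted above, the method gains sensitivity as $\sigma$ (and hence $l_c$) decreases, since a given residual gravity then produces a larger measurable curvature difference.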
More detailed information on the field configuration is obtained if the temperature and pressure of the fluid
can be kept very close to the fluid's liquid-gas critical point (which requires, in general, a precise
thermal regulation). In this case the surface tension can be made extremely small and the corresponding term
can be neglected in (\ref{sh1}). The liquid-vapor interface then follows an equipotential surface for the
"magneto-gravitational" potential \citep{Lorin09},
\begin{equation}\label{U}
U=\frac{\alpha}{2\mu _0g}B^2-z.
\end{equation}
An equipotential surface (or rather its intersection with the image plane) can thus be visualized directly
\citep{Lorin09}. Different equipotential surfaces are obtained by varying the field or the cell position. The
spatial distribution of the gravity heterogeneity can be found from the shape of the equipotential lines as
$\vec{\varepsilon}=\nabla U$. The latter equation can be established by comparison of (\ref{U}) with
(\ref{aeff}).
Note that the above described methods are not applicable to the case of paramagnetic fluids (sec.
\ref{DistFl}), where the interface deformation and magnetic field distortion are coupled and lead to a strong
self-induced interface deformation even in highly homogeneous effective gravity field.
\section{Concluding remarks}
The magnetic gravity compensation method presents a powerful alternative to the classical low-gravity
experimentation methods involving fluids. It has become increasingly popular in recent years, especially for life
sciences applications. About ten installations are available worldwide. The residual gravity heterogeneity
imposes limitations on the applicability of magnetic gravity compensation. The homogeneity of the effective
gravity is related to the magnetic field intensity and can be improved with ferromagnetic inserts. We propose
an original method of measurement of the residual gravity from the bubble shape.
The gravity heterogeneity depends not only on the installation, but also on the sample composition and
structural elements. To reduce the gravity heterogeneity, compounds containing ferromagnetic substances
(Fe, Co, Ni, etc.) need to be avoided. The heterogeneity is smallest for single-component samples and
needs to be carefully evaluated for multicomponent systems (e.g. for life sciences applications) using the
magnetic susceptibility data for each of the components.
For paramagnetic substances like oxygen, an additional cause of gravity heterogeneity appears because of the
coupling of the magnetic field and deformation of the gas-liquid interfaces. In this case, the effective
residual gravity is more difficult to evaluate because it depends on the interface shape.
\begin{acknowledgements}
The partial financial support by CNES and Air Liquide is gratefully acknowledged.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
A key prediction of Cold Dark Matter (CDM) is that the halo mass function should follow an unbroken power-law with $dN/dM \propto M^{-\alpha}$, where $\alpha=1.9\pm0.1$ from cluster down to planet mass halos \citep{Diemand++08, Springel++08}. CDM models of the distribution and growth of structure match observations with remarkable success over an enormous range of distance and size scales \citep[e.g.][]{Planck++13}. On smaller scales, tests of CDM become more difficult owing to the uncertain physics of star formation in sub-Milky Way mass halos ($M_{vir}<10^{12} M_\odot$). A famous example of this is the `Missing Satellite Problem', so called due to the fact that CDM simulations predict that thousands of subhalos should be gravitationally bound to the Milky Way, while only $\sim$ tens of \emph{luminous} satellite galaxies have been observed \citep{Moore++1999, Klypin++99, Strigari++07, Weinberg++08, Drlica-Wagner++15}. Extrapolations based on the depth and area of the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES) forecast that the Milky Way may host as many as $\sim$100 ultra-faint ($L<10^3 L_\odot$) satellite galaxies \citep[see, e.g.][]{Hargis++14, Drlica-Wagner++15}. However, measurements of the halo mass function at these low luminosities becomes extremely difficult. In particular, the stars of low luminosity galaxies occupy only the inner $\sim100$ pc of their dark matter halo. This is a small fraction of the total halo virial radius, which, in absence of the effects of tidal stripping extends out to several kpc even for a very low mass $10^{7}M_\odot$ halo. One consequence of this is that satellite galaxies inhabit halos with kinematically consistent masses over 5 orders of magnitude in luminosity \citep{Strigari++08}.
Simulations with varying implementations of semi-analytic and/or numerical baryonic feedback have demonstrated that it is possible to suppress star formation in dark matter subhalos sufficiently to match the observed Milky Way satellite luminosity function with an underlying CDM halo mass function at low redshift \citep[e.g.,][]{Thoul++1996,Gnedin++00,Kaufmann++08, Maccio++11, Springel++10,Guo++11, Zolotov++12, Brooks++13, Starkenburg++13,Weinberg++08, Menci++14, Lu++14, Wetzel++16}. However, models which can match the luminosity function of satellites at redshift zero around the Milky Way are not always successful at reproducing the luminosity function around higher mass hosts, or at higher redshifts \citep[e.g.][]{Nierenberg++13, Nierenberg++16}.
Strong gravitational lensing provides a powerful test of CDM as it enables a measurement of the subhalo mass function without requiring stars or gas to detect subhalos. In a strong gravitational lens, a background source is multiply imaged with the image positions and magnifications depending on the first and second derivatives of the gravitational potential, respectively. The image magnifications are particularly sensitive to low-mass perturbations, with a lower sensitivity limit determined by the source size. If the source is of the scale of $\mu$as (such as a quasar accretion disk), then the image magnifications will be significantly affected by stars in the plane of the lens galaxy (a.k.a microlensing). In contrast, milliarcsecond scale sources are not significantly lensed by stars, but are sensitive to the presence of perturbing subhalos which have characteristic Einstein radii of $\sim$mas, corresponding to typical masses $10^4<M/M_\odot<10^9 $ given typical lens configurations.
Traditionally, radio loud quasar sources have been used to detect subhalos, as they are extended enough (tens to hundreds of parsecs, e.g. \citet{Jackson++15}) to avoid microlensing, and also are not affected by differential dust extinction in the plane of the lens galaxy. \citet{Dalal++02} used the lensed magnification of 6 radio loud quasar lenses and PG1115+08 \citep{Weyman++80} to estimate the average fraction of mass in substructure relative to the mass in the smooth halo component ($f_{sub}$) around these lenses, finding this fraction to be broadly consistent with predictions from CDM although with large uncertainties. This was an important proof of method. Progress requires a larger sample of gravitational lenses which enables a measurement of not only the fraction of mass in substructure but also the slope of the subhalo mass function. Furthermore, \citet{Xu++15}, \citet{Hsueh++16}, and \citet{Gilman++16} have demonstrated that systematic uncertainties may occur in flux-ratio measurements if the deflector is not accurately modelled due to insufficiently deep optical imaging, as may be the case for studies which rely entirely on radio imaging.
There are several paths forward for increasing the sample of lenses which can be used to measure the subhalo mass function. Gravitational imaging, for instance, can be used to detect subhalos perturbing the lensed positions of background galaxies. This method currently has a limiting mass sensitivity of $\sim 10^8 M_\odot$ \citep{Vegetti++12,Vegetti++14, Hezaveh++16, Birrer++17}. The sensitivity limit is determined in part by imaging spatial resolution, which enables the measurement of the small astrometric perturbations to the strongly lensed background source caused by substructure. The next generation of telescopes and adaptive optics will lower this limit for background galaxy sources, and deep VLBI imaging of extended radio jets will potentially enable the detection of masses as low as $\sim 10^6 M_\odot$ with gravitational imaging.
Observations of quasar lenses at longer wavelengths can provide a microlensing-free probe of substructure. Redward of $\sim 4 \mu$m rest-frame, the quasar continuum emission is expected to be dominated by the dusty torus, which is sufficiently large to be unlensed by stars \citep[e.g.,][]{Sluse++12}. For a typical source redshift, this implies that mid-IR imaging at wavelengths greater than $10\mu$m can probe dark matter substructure. This method has been applied successfully to several systems with larger image separations \citep[e.g.,][]{Macleod++09,Minezaki++09, Chiba++05}. JWST can provide the spatial resolution required to extend mid-IR flux ratio measurements to systems with smaller image separations. \citet{Jackson++15} demonstrated that deep radio observations of radio-quiet lensed quasars can also successfully be used as a microlensing free probe of substructure, albeit with larger flux uncertainties than radio-loud systems ($\sim 8-10\%$ compared with $\sim 3-5\%$ for radio loud systems). They estimate that this method can be applied to approximately half of optically selected quasar lenses.
Strongly-lensed quasar \emph{narrow-line} emission provides an alternate probe of substructure, with comparable precision to radio-loud lensing studies. This method, originally proposed by \citet{Moustakas++03}, is extremely promising as it enables the measurement of substructure with current observational facilities in virtually all of the tens of optically selected quadruple quasar lenses predicted to be found in DES \citep[][{\tt http://strides.astro.ucla.edu/}]{Ostrovski++16, Agnello++15} and other wide field imaging surveys, and the hundreds forecast to be found in LSST \citep{Oguri++10}. \citet{Sugai++07} demonstrated this method for the gravitational lens RXS 1131-1231 using seeing-limited observations with the Subaru integral field spectrograph. This lens has an unusually large separation of $>1$ arcsecond between each of the images.
Higher resolution imaging is necessary for the majority of quad lens systems, which often have at least one pair of images separated only by a few tenths of an arcsecond. Adaptive optics can provide the necessary spatial resolution to isolate the lensed images. \citet[][hereafter N14]{Nierenberg++14} used the integral field spectrograph, OSIRIS \citep{Larkin++06}, at Keck with adaptive optics to measure spatially resolved narrow-line flux ratios in the gravitational lens B1422+231 \citep{Patnaik++92}. Adaptive optics is an effective tool, however it can only be applied to those systems in which the narrow-line emission of interest falls in a suitable wavelength range for adaptive optics correction, for instance either $H$ or $K$ band in the case of Keck OSIRIS. Furthermore, adaptive-optics requires the presence of a nearby, bright, tip-tilt star, although often the lensed quasar itself is bright enough for this purpose. Space-based spatially resolved spectroscopy provides an alternative for those systems which fall outside of these wavelength bands, or are at a declination unsuitable to Keck OSIRIS spectroscopy.
In this work we demonstrate that the Hubble Space Telescope infrared grism on the Wide Field Camera 3 (WFC3) can be used to measure spatially-resolved narrow-line image fluxes, with comparable sensitivity to substructure to ground-based results from Keck. We present an analysis of WFC3 grism observations of HE0435-1223 \citep[hereafter HE0435,][]{Wisotzki++02} to demonstrate our reduction mechanism. The deflector is an early-type galaxy at redshift 0.4546 \citep{Morgan++05} and the source is a quasar at redshift 1.693 \citep{Sluse++12}. This system has been extensively studied since its discovery; it has been monitored over a decade, and its relatively long time delay and significant intrinsic variability have made it a powerful probe of the group density profile and the Hubble constant \citep{Wong++16, Bonvin++16, Courbin++11, Kochanek++06}. Thanks to this attention, there are numerous multi-band and spectroscopic measurements available for comparison and significant effort has gone into measuring the properties of the environment of the deflector \citep[e.g.,][]{Momcheva++06, Momcheva++15, Wong++11,Sluse++16, Wilson++16}, which is important for comparing detected subhalo properties with predictions from CDM.
In Section 2 we describe the observations and initial data reduction for this system. In Section 3 we describe the spectral extraction pipeline developed to measure narrow-line fluxes. In Section 4 we report measured quasar spectral features and integrated emission fluxes, and compare the measured fluxes to results from broad band and radio studies. In Section 5 we perform a simple gravitational lens inference to test for the presence of substructure. In Section 6 we test for the effects of resolved narrow-line emission on our results. In Section 7 we discuss the constraint from this system. In Section 8 we provide a brief summary of the main conclusions.
We assume a flat $\Lambda$CDM cosmology with
$h=0.7$ and $\Omega_{\rm m}=0.3$. All magnitudes are given in the AB
system \citep{Oke++1974}.
\section{Observations and Initial Reduction}
\label{sec:data}
We observed the gravitationally lensed quasar HE0435 as a part of HST-GO-13732 (P. I. Nierenberg), a grism survey of narrow-line emission in six quasar lenses. The target was observed on August 30, 2015 for 2062s with the G141 Grism, and for 400s with F140W direct imaging. The observation was taken at a dispersion angle of 147 degrees East of North so that the dispersed quasar images would be maximally separated from each other along the direction perpendicular to the dispersion axis. In order to recover sub-pixel information, we split the observations into a four point dither pattern with half-integer sub-pixel offsets following the procedure of \citet{Brammer++12} \citep[see also][]{Schmidt++14,Momcheva++16}.
For each dither position, we took a 100s direct exposure with F140W, immediately followed by a 515s G141 exposure. The F140W direct images at each dither position were used to obtain accurate wavelength solutions for each G141 exposure \citep{Brammer++12}.
Raw F140W and G141 exposures were individually processed with {\tt AstroDrizzle} \citep{Gonzaga++12} in order to reject cosmic rays, remove geometric distortion and perform flat-field subtraction \citep{Koekemoer++11, Brammer++12}. The F140W exposures were drizzled onto a 0\farcs06 pixel scale (approximately half the native pixel size). The upper left panel of Figure 1 shows the drizzled F140W image of the gravitational lens, and nearby spiral galaxy G1.
The G141 exposures were interlaced and combined onto a 0\farcs06 pixel scale, corresponding to an observed wavelength resolution of $\sim 22$~\AA~ per pixel following \citet{Brammer++12}, \citet{Schmidt++14} and \citet{Momcheva++16}. Unlike drizzling, interlacing does not introduce correlated pixel flux errors.
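As an illustration of why interlacing avoids correlated noise, the combination of four half-pixel-offset exposures onto a grid with twice the sampling can be sketched as follows. This is a schematic of the idea only (the function name and offset convention are ours), not the {\tt 3D-HST} implementation:

```python
import numpy as np

def interlace_2x2(exposures):
    """Combine four dithered exposures (half-pixel offsets) onto a grid
    with twice the sampling.  `exposures` maps (dy, dx) sub-pixel offsets
    in {0, 1} (units of half a native pixel) to 2-D arrays of identical
    shape.  Hypothetical helper, not the 3D-HST pipeline code."""
    ny, nx = next(iter(exposures.values())).shape
    out = np.zeros((2 * ny, 2 * nx))
    for (dy, dx), img in exposures.items():
        # Each input pixel lands on its own output pixel: no resampling,
        # hence no correlated pixel-to-pixel flux errors (unlike drizzling).
        out[dy::2, dx::2] = img
    return out
```

Because every output pixel receives flux from exactly one input pixel, neighbouring output pixels have statistically independent errors.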
Figure 1 shows the final interlaced grism data with arrows indicating the 5007 and 4959~\AA~[OIII] doublet which is partially blended given the grism resolution.
\section{Spectral Extraction}
Our goal is to measure the narrow-line emission flux in each lensed quasar image, taking into account blending between distinct spectral components after they are dispersed by the grism. The lower left panel of Figure 1 highlights the blending by showing how the light from the ring, quasar images and galaxies combine in a model grism image. In order to rigorously account for the overlapping spectra in the grism image, we employ a forward modelling approach. We discuss this method in detail in the following subsections; in brief, we generate a model direct image and use the {\tt 3D-HST} grism simulation code \citep{Brammer++12} to iteratively map proposed component spectra into a simulated 2D grism image, which is then compared with the original interlaced grism image to compute a $\chi^2$ goodness of fit. In Section 3.1 we discuss how the model direct image is generated, in 3.2 we discuss the 1D models we use for the spectral components, and in 3.3 we describe the statistical inference.
\subsection{Direct image model}
The grism image is effectively a convolution of an object spectrum with its direct image. Thus, given a model for the direct image, it is possible to generate a predicted grism spectrum. We model the direct image as having seven distinct spatial and spectral components: four separate quasar images, the main deflector, the quasar host galaxy which is lensed into a ring, and the nearby galaxy G1. Here we discuss how the direct image model is generated for each of the components. These direct model images are then combined with model 1D spectra, as discussed in Section 3.2, in order to generate model grism images.
We model the four quasar images as point sources, using a nearby star to model the point spread function (PSF). We optimize the point source positions and fluxes using {\tt galfit} \citep{Peng++02, Peng++10}. A possible concern for using a drizzled star image as the model for the point source is that the true PSF is not accurately captured at the exact location of the lensed images. Furthermore, the FWHM of the grism PSF varies slightly with wavelength, which the {\tt 3D-HST} pipeline does not account for in the forward modelling process. We have checked that the exact PSF model does not impact our inference on the [OIII] flux ratios by running the entire modelling process described in the next two subsections with a total of five different PSF models: 1) A median combination of stars in the F140W FOV, 2) and 3) The median star blurred by 10 and 15\%, 4) and 5) two different nearby stars. While the choice of PSF model affected the overall fit to the 2D grism image, we found that it had no impact on our inference of the relative [OIII] fluxes. The quasar image positions are listed in Table A1, with uncertainties given by the variation in best fit {\tt galfit} positions for the different PSF models.
In order to disentangle the lens galaxy light from the prominent ring and bright quasar images, we start with the empirical model of the deflector by \citet{Wong++16} derived from very deep (9337 s), F160W imaging. This model is generated from a superposition of Chameleon profiles \citep{Dutton++11}, and is based on a simultaneous fitting of the quasar point images, a model for the lensed background quasar host galaxy, and the lens. To generate a model for the F140W light profile, we start by fitting two S\'ersic profiles to the empirical F160W model. Next, we hold all of the parameters for the S\'ersic profiles obtained in the previous step fixed except for the total flux, and fit the F140W direct image as a combination of the lens and quasar images. We subtract the best fit galaxy and QSO models from the direct F140W image and are left with a residual image composed primarily of the ring. In the third step, we subtract the residual ring image from the original F140W image, and re-fit the ring-subtracted image with the galaxy and QSO models, this time allowing all of the S\'ersic parameters for the galaxy model to vary freely. The final model for the galaxy is taken from the third step. We have tested several different iterations of this process, including allowing for less flexibility in the galaxy model to verify that the inferred narrow-line fluxes and image positions are not sensitive to the exact galaxy model, although the overall fit to the 2D grism image varies.
The ring model is generated by subtracting the best fit lens galaxy and quasar models from the direct image. We mask a small region near the centre of each QSO image where the PSF subtraction is noisy. We have confirmed that the inferred [OIII] fluxes are not sensitive to the exact size of the masked region. Finally for G1, we simply use a small cutout of the direct image, which is possible because it is isolated in the direct image.
\begin{figure*}
\centering
\includegraphics[scale=0.6, trim=0 0 0 0, clip=true]{forwardModelDemo_4_labelled_cropped.pdf}
\caption{Demonstration of the forward modelling method used to infer spectral parameters. Note that the image contrasts have been altered between images to highlight different features.
{\bf Panel i)} Drizzled F140W image, arrow indicates North. {\bf Panel ii)} Interlaced G141 grism image, with light dispersed along the x-axis of the F140W image. QSO spectra (A-D) are labelled. They overlap with spectra from the ring, the main deflector (G) and the spiral galaxy (G1). Blue arrows indicate the location of narrow [OIII] 4959 and 5007 \AA~emission which are partially blended at this resolution. {\bf Column iii)} MCMC proposed 1D spectra for four of the seven components labelled in panel i. Each of the QSO images A-D has a separate model spectrum (shown in Figure 2); only spectrum A is shown here.
{\bf Column iv)} Model direct images for each separate spectral component, described in Section 3.1. The central QSO pixels are masked in the ring model to account for noisy PSF subtraction in this region.
{\bf Column v)} Model 2D grism images for each spectral component generated from convolving the model spectra in column iii with the model direct image in column iv.
{\bf Panels vi, vii)} Final, combined model direct image and model grism image, generated from the sum of columns iv and v respectively (and the other three QSO images not shown). Colours are the same as in columns iii, iv and v.
The goodness of fit is calculated by the $\chi^2$ difference between true and model 2D G141 images.}
\label{fig:forwardmodel}
\end{figure*}
\subsection{1D Spectral Models}
In this section we describe the analytic models we use for the 1D spectra. Although our grism data extend over a significantly larger wavelength region, we confine our model and comparison with the data to a small wavelength region around [OIII]. Figure 2 shows the fitted regions for each QSO image. The extent of the region is approximately rest frame 4500--5200~\AA, but it varies between the lensed QSO images and is selected to achieve two goals. First, to provide sufficient spectral coverage to obtain a good constraint on the broad Fe, H$\beta$ and continuum features which overlap with the [OIII] emission (Figure 2). Second, the modelled region is extended where there is a possibility of an emission feature blending with the [OIII] emission of a neighbouring 2D spectrum. An example of the latter case is found in the redward extension of the fitted region for QSO spectrum C, in order to include the possible broad Fe flux contribution from image C to the image B narrow line emission (Figure 1).
Although gravitational lensing is not wavelength-dependent, the spectra of the four QSO images may not necessarily be related by a simple multiplicative magnification factor owing to the intrinsic variability of the QSO (which can lead to different image spectra owing to the varying image arrival times), and to the differential effect of stellar microlensing as a function of intrinsic source size. Thus we construct a model for the four QSO spectra which enables variations due to these effects. The QSO spectrum in the wavelength range of interest is composed of broad Fe and H$\beta$ emission, continuum emission and narrow [OIII] and H$\beta$ emission.
We model the broad H$\beta$ emission as a Gaussian with an intrinsic redshift offset from the [OIII] emission left as a free parameter. Although other studies have found that the H$\beta$ line profile can have a complicated structure, the low resolution of the grism data does not warrant a higher order model; Figure 2 demonstrates that a Gaussian is sufficient to fit the line profile in this case. The broad Fe emission is modelled using IZw1 templates which we interpolate in velocity space following \citet{Bennert++11} and \citet{Woo++06}. The broad Fe emission velocity is independent of the broad emission width in our model. The model allows for a redshift offset between the broad Fe emission and the [OIII] emission.
We model the continuum emission as a straight line rather than a power-law, given the small fractional size of the wavelength region of interest. The slope and amplitude of the line are left as free parameters.
The broad-line and continuum emission both arise from regions which can be affected by microlensing, which we discuss in more detail in Section 4. To account for this possibility we allow the width and amplitude of the H$\beta$ emission and the slope and amplitude of the continuum emission to vary independently between the model spectra. We did not find significant evidence for variations in the broad Fe velocities between images, and so kept them fixed for our final analysis, however we allowed the Fe amplitudes to vary independently from the H$\beta$ amplitudes.
Unlike the broad and continuum emission, narrow [OIII] and H$\beta$ emission come from a sufficiently extended source (greater than tens of parsecs) to not be affected by either stellar microlensing or intrinsic variability \citep{Moustakas++03, Muller-Sanchez++11, Bennert++06a, Bennert++06b}. Owing to this, we assume that both the line widths and the relative amplitudes of [OIII] and narrow H$\beta$ should be constant between the lensed images. We model the [OIII] doublet and H$\beta$ narrow-lines as Gaussians, and assume that they have the same redshift, which is valid given the spectral resolution of the grism. The ratio of the [OIII] doublet 4959 and 5007 amplitudes is fixed to the quantum-mechanically predicted value of 1/3.
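As an illustration, the continuum, broad H$\beta$ and narrow-line components of this 1D model can be sketched as follows. The parameterisation and function names are ours, and the broad Fe template component is omitted for brevity:

```python
import numpy as np

def gauss(lam, mu, sigma, amp):
    """Gaussian emission line profile (wavelengths in Angstroms)."""
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def qso_model(lam, z, a0, a1, A_hb_broad, sig_broad, dz_broad,
              A_oiii, sig_narrow, A_hb_narrow):
    """Toy version of the Section 3.2 spectral model (illustrative only;
    the broad Fe template is not included here)."""
    model = a0 + a1 * lam                      # linear continuum
    # Broad H-beta: Gaussian with its own redshift offset dz_broad
    model += gauss(lam, 4861.3 * (1 + z + dz_broad), sig_broad, A_hb_broad)
    # Narrow lines share one redshift and one width; the 4959/5007
    # amplitude ratio is fixed to the predicted value of 1/3
    for mu0, amp in [(5006.8, A_oiii), (4958.9, A_oiii / 3.0),
                     (4861.3, A_hb_narrow)]:
        model += gauss(lam, mu0 * (1 + z), sig_narrow, amp)
    return model
```

In the actual inference, the broad-line and continuum parameters vary independently between the four images, while the narrow-line widths and relative amplitudes are shared.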
The 1D models for the deflector, ring and G1 spectra are modelled as straight lines over the short wavelength region of interest, with amplitudes and slopes as free parameters. We do not find evidence requiring the inclusion of emission or absorption features in any of these spectra relative to the measurement uncertainties and given the brightness of the QSO spectra (see e.g. Figure 2).
We assume that the image fluxes are not affected by differential dust extinction. In the rest frame of the lens, the [OIII] emission lines lie at roughly $\sim9300$~\AA. At this wavelength, total dust extinction in lens galaxies, and early-type galaxies in general, is typically of order only a few hundredths of a magnitude \citep[e.g.][]{Fal++99, Ferrari++99}, which is well within our overall flux measurement uncertainty. This assumption is further supported by the similarity of the broad-band optical colours of the images \citep{Wisotzki++03}. The images also have mutually consistent CIV (lens rest frame $\sim 2790$ \AA) and H$\beta$ (lens rest frame $\sim 9300$~\AA) broad-line flux ratios.
\subsection{Inference of QSO spectral parameters}
We infer the probability distribution of the parameters of the 1D spectral models using a Bayesian forward modelling approach with the {\tt emcee} Markov Chain Monte Carlo software package \citep{Foreman-Mackey++13}. For each step, the MCMC algorithm proposes parameters for the 1D spectra of all seven distinct spectral components (four QSO images, the main galaxy, the lens ring and G1).
We then simulate dispersed images of each separate component and add them to generate a full model 2D grism image. Finally, the $\chi^2$ of the fit is computed relative to the original 2D interlaced image. Figure 1 illustrates how the model 2D direct image components are dispersed into the model 2D grism image for each MCMC step.
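Schematically, the inference amounts to the following. Here a simple Metropolis sampler and a generic `disperse` function stand in for {\tt emcee} and the {\tt 3D-HST} grism simulation step; both stand-ins are ours and are for illustration only:

```python
import numpy as np

def log_like(params, disperse, data, sigma):
    """Chi^2-based log-likelihood: compare a forward-modelled 2-D grism
    image with the interlaced data.  `disperse` maps proposed spectral
    parameters to a model 2-D grism image."""
    model = disperse(params)
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

def metropolis(log_like_fn, p0, step, nsteps, rng):
    """Minimal Metropolis sampler, standing in for emcee."""
    p = np.asarray(p0, dtype=float)
    lp = log_like_fn(p)
    chain = np.empty((nsteps, p.size))
    for i in range(nsteps):
        q = p + step * rng.standard_normal(p.size)
        lq = log_like_fn(q)
        if np.log(rng.random()) < lq - lp:   # accept/reject
            p, lp = q, lq
        chain[i] = p
    return chain
```

Each MCMC step thus pays the cost of one full 2D grism simulation, which is why an efficient dispersion model is essential.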
\begin{figure*}
\centering
\includegraphics[scale=0.55]{1dspecs_lineLabelled_cropped}
\caption{Lensed quasar spectra extracted along the x-axis of the 2D grism image (Figure 1) via PSF weighted averaging along the y-axis. Absolute fluxes are arbitrary. 1D Spectra include contamination from neighbouring dispersed QSO images, lens galaxy light, the lens ring, and the nearby spiral galaxy G1 as illustrated in Figure 1. The residual is derived by subtracting the 2D grism model from the 2D grism data, and then performing the PSF weighted y-axis averaging; thus it is not a simple subtraction of the blue line from the black points. The residual has been offset from zero by the amount indicated by the horizontal lines for ease of visualization. The modelled region varies slightly for each image depending on its position in the 2D image, as discussed in Section 3.2. }
\label{fig:1dspec}
\end{figure*}
\section{Spectral Forward Modelling Results}
Figure 2 shows the 1D model, data and residual `traces' for the four lensed QSO images. These traces are obtained by integrating the flux along the y axis in the 2D image, weighted by the relative flux of the direct F140W model PSF along that axis. Jumps in flux are due to small misalignments between the dispersion axis and the detector axis. This comparison shows that the input model provides an excellent fit to the observed spectra.
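The PSF-weighted collapse of a 2D spectrum to a 1D trace can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def psf_weighted_trace(grism2d, psf_profile):
    """Collapse a 2-D grism cutout to a 1-D trace by averaging along the
    spatial (y) axis, weighting each row by the direct-image PSF profile."""
    w = psf_profile / psf_profile.sum()          # normalised weights
    return (grism2d * w[:, None]).sum(axis=0)    # weighted sum over rows
```

The same weighting is applied to the data, model and residual images, so the plotted residual is the weighted collapse of the 2D difference image rather than a simple subtraction of the 1D curves.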
From the spectral modelling we obtain flux ratios between the broad H$\beta$ fluxes and the [OIII] fluxes from the image pairs A/C, B/C, D/C. Given that the intrinsic quasar luminosity is not known, gravitational lensing analyses rely on ratios of image fluxes rather than their absolute values. In Figure 3, we compare these flux ratios with measurements from other studies across a range of filters and for fixed filters at multiple dates. These measurements are chosen to represent how the flux ratios vary with wavelength and time, and are only a small subset of the many measurements of this system obtained for time variability studies \citep[e.g.][]{Bonvin++16, Courbin++11, Kochanek++06}. Table A2 contains references and observing dates for all flux ratios plotted in Figure 3.
The narrow [OIII] flux ratios are strikingly different from the optical to near-IR flux ratios, which are subject to contamination by microlensing and intrinsic QSO time variability.
HE0435 has been monitored for 15 years \citep{Bonvin++16, Courbin++11, Kochanek++06}, and during that time has shown highly variable broad band flux ratios due to stellar microlensing and intrinsic variability. The intrinsic variability particularly affects images B and D which have time delays of over a week relative to images A and C. Figure 3 highlights several repeat measurements of the system which show significant variability.
Based on simulations of QSO accretion disks and dusty tori, the accretion disk makes the dominant contribution to the QSO emission blueward of rest-frame $\sim$4$\mu$m (observed $\sim$10$\mu$m) \citep{Sluse++13}. From chromatic microlensing studies of this system, the quasar continuum emission has a half-light radius of $\sim10^{16.3\pm0.3}$ cm (or $\sim$0.006 pc) at a rest-frame wavelength of 8000~\AA. \citet{Sluse++12} estimate the MgII broad line region size to be $\sim10^{18\pm 1}$ cm (or $\sim$0.3 pc). These sizes correspond to $0.5-5\mu$as at the lens redshift, and are thus affected by stellar microlensing. \citet{Bonvin++16} have inferred the approximate amplitude of the observed $R$ band (rest frame $\sim 2500 $\AA) stellar microlensing as a function of time for each of the images since 2003. Figure 4 shows an estimate from \citet{Bonvin++16} for the microlensing effect on the $R$ band flux ratios as a function of time assuming a `true' flux ratio value indicated by the straight lines.
The amplitude of microlensing depends on the source size; thus bluer filters are more strongly affected by microlensing, while redder continuum measurements are less affected. \citet{Blackburne++14} have performed a detailed study of differential microlensing as a function of wavelength for this system and their data are included in Figure 3, and in Table A2.
The broad line emission flux ratios of both H$\beta$ from this study and CIII] and CIV from \citet{Wisotzki++03} are closer to the narrow-line emission flux ratios, which is consistent with microlensing being a function of emission region size.
We can test for the effects of microlensing on our data by comparing the relative amplitudes of emission features in our inferred spectra. In Figure 5, we plot the marginalised models for the lensed image spectra from our analysis, normalised to the peak of the [OIII] flux at 5007 \AA~ in order to highlight how the emission features vary
between the lensed images. Image A shows significant morphological differences, with continuum and broad H$\beta$ fluxes which are much higher relative to the [OIII] flux than the other three images. This indicates that there is significant source-size dependent lensing. This finding is consistent with the inferred $R$ band microlensing of image A as observed by \citet{Bonvin++16} and shown in Figure 4.
The narrow-line [OIII] flux ratios are consistent with 5 GHz radio measurements from \citet{Jackson++15}, with A/C, B/C and D/C differing at 0.25, 1.8 and 2.2 $\sigma$ respectively. This is expected, as both emission regions are extended enough to avoid all microlensing contamination.
Although the results do not differ significantly, we note that \citet{Jackson++15} found that their radio emission was somewhat resolved, with an intrinsic source size of $\sigma\sim 288 $ pc, assuming the source had a Gaussian flux distribution. This affects the flux ratios predicted from gravitational lensing relative to a point source for a fixed deflector mass model. We discuss this further in Section 6, where we also place limits on the size of the narrow-emission region in our data and we examine the effects of a resolved narrow emission line region on our results.
\begin{figure*}
\centering
\includegraphics[scale = 0.7]{fluxRatios_6_cropped.pdf}
\caption{Flux ratio measurements for HE0435-1223 selected to represent variations with wavelength and time. References along with measurement dates are listed in Table A2. Numbers correspond to dates plotted in Figure 4. Squares, circles and diamonds indicate broadband continuum flux measurements, which are subject to time-delay induced variability as well as microlensing blueward of $\sim4\mu$m rest frame. Stars and triangles represent [OIII] and broad-line flux ratios. Measurements in the same filter, but from different years are slightly offset from each other for clarity, with the later measurement plotted with an open symbol. [OIII] values have been shifted redward to avoid overlap with broad H$\beta$ results. The B/C [OIII] flux ratio has been artificially shifted redward so it does not lie on top of the A/C value. Dashed, solid and dash-dot lines represent the best smooth model prediction for the A/C, B/C and D/C flux ratios respectively, given the image positions and narrow-line fluxes measured in this work. Top labels list observed bands, where F14 and F16 are abbreviations for the HST filters F140W and F160W respectively. }
\label{fig:fluxratios}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale = 0.45]{microlensingLabelled_cropped.pdf}
\caption{An estimate of fluctuations induced by microlensing in the $R$ band flux ratios as a function of time for HE0435, by \citet{Bonvin++16}. Vertical bars with numbers correspond to approximate dates for the measurements plotted in Figure 3 and listed in Table A2. Measurements within 2 months of each other are combined to the same time point. Horizontal lines indicate the model `true' flux ratios at the start of the monitoring campaign. MHJD = HJD-2450000.}
\label{fig:microlensing}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.45]{CompareQSOfluxes.pdf}
\caption{Posterior model spectra for each lensed quasar image, normalized to the peak of the narrow-line flux at 5007 \AA~ to highlight variations in emission regions relative to the narrow-line. Image A shows the most significant difference, with continuum and broad line fluxes higher relative to the [OIII] flux than in the other three images.}
\label{fig:qsospec}
\end{figure}
\section{Gravitational lens modelling}
The lensed image positions and [OIII] flux ratios are sensitive to the mass distribution of the deflector. As discussed in the Introduction, the image fluxes are particularly sensitive to small scale perturbations caused by dark matter subhalos. In this section we perform a gravitational lensing analysis of the system using the image positions and [OIII] fluxes.
We do not include the time delays as a constraint, given that they are minimally sensitive to perturbers near the lensed images, relative to the image fluxes and positions, and given the time delay measurement uncertainties for this system \citep{Keeton++09a, Keeton++09b}. Furthermore, unlike image positions and fluxes, which are most sensitive to the local mass distribution and can thus be well matched without including G1 explicitly \citep[e.g.][]{Sluse++12}, the time delays are sensitive to the larger scale environment and using them as a constraint would require including the complex multi-plane lensing effect of G1 which is at a higher redshift than the main deflector \citep{Wong++16, Bonvin++16, Jackson++15, Sluse++16}. The macromodel parameters including the external shear term are left free to vary, and thus absorb the large-scale contributions from G1.
In Subsection 5.1 we discuss the optimum smooth mass model fit to the data. In Subsection 5.2 we place limits on the presence of substructure near the lensed images. In Subsection 5.3 we discuss the effects of a finite narrow-line emission region.
\subsection{Smooth Model}
We start with a simple, smooth mass distribution model of the system. We model the main deflector as a singular isothermal ($\rho(r)\propto r^{-2}$) ellipsoid (SIE) which has been shown to provide an excellent match to the combined stellar and dark matter mass distributions of elliptical galaxies well beyond the Einstein radius \citep[e.g.][]{Rusin++03, Rusin++05, Gavazzi++07, Treu++10, Gilman++16}. The SIE has five free parameters; the centroid, the ellipticity and position angle, and the Einstein radius. The smooth model also includes parameters describing the magnitude and direction of external shear which can be generated by the group environment of the galaxy.
We optimize these model parameters relative to the observed lensed image positions and [OIII] fluxes using {\tt gravlens} \citep{Keeton++01a, Keeton++01b}, and find an overall best fit $\chi^2$ of 1.7 for one degree of freedom. Table 1 lists the mean and one sigma uncertainties for the lens model parameters. Figure 3 shows the best-fit model fluxes as straight lines. The $\chi^2$ of 1.7 for one degree of freedom indicates that the data has a roughly $\sim 20\%$ chance of being drawn from the best fit model, thus we do not find significant evidence indicating that a more complex model with a significant subhalo contribution is necessary to fit the data. Our inferred lens model parameters are comparable to other values in the literature which are inferred from microlensing-free data. In particular, our inferred Einstein radius and external shear parameters are consistent with the single lens model parameters from \citet{Sluse++12}. As expected, our inferred spherical equivalent Einstein radius of 1\farcs200$\pm$0\farcs003 for the single lens model is somewhat higher than other values in the literature which explicitly include G1 and find values for the Einstein radius of the main deflector ranging from 1\farcs07$\pm$0\farcs02 \citep{Jackson++15} to 1\farcs182$\pm$ 0\farcs002 arcseconds \citep{Wong++16}. We also infer a somewhat higher value for the external shear than models which include G1 in addition to an external shear; $\gamma_{ext}=$0.063$\pm$0.007 compared with 0.039$\pm0.004$ and 0.030$\pm0.003$ for \citet{Jackson++15} and \citet{Wong++16} respectively.
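As a sanity check on the quoted probability, the chi-squared survival function for one degree of freedom can be evaluated directly (a sketch; `chi2_sf_1dof` is our helper, not part of the analysis code):

```python
import math

def chi2_sf_1dof(x):
    # P(X > x) for a chi-squared r.v. with 1 degree of freedom. Such a
    # variable is the square of a standard normal, so the survival
    # function reduces to erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(x / 2.0))

print(f"{chi2_sf_1dof(1.7):.3f}")  # roughly 0.19, i.e. the ~20% quoted above
```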
This result differs from that of \citet{Fadely++12}, who found that the $L$ band image fluxes and positions could not be fit with a smooth model, although they did not report their best fit lens model parameters. They note that the observed $L$ band corresponds to rest frame 14000 \AA~at the redshift of the quasar, which should have a significant flux contribution from the dusty torus in addition to the accretion disk continuum emission \citep{Sluse++13, Wittkowski++04, Honig++08}. In order to test for possible microlensing of the continuum component of the $L$ band emission, they analysed two years of monitoring data from \citet{Kochanek++06}. Unfortunately, over this time scale there was no significant evidence for microlensing induced variability in image A. Longer baseline data spanning 15 years \citep{Bonvin++16} reveal significant microlensing of image A, as shown in Figure 4, which likely affected the $L$ band flux ratios.
In the following subsection we place limits on the presence of perturbing subhalos near the lensed images.
\begin{table}
\centering
\scriptsize\begin{tabular}{lll}
\hline
Parameter & Value &Description \\
\hline
$\theta_E$ & 1\farcs200$\pm$0\farcs003 & Spherical Einstein radius \\
q & $0.91\pm 0.03$ & b/a \\
PA & $-8\pm 5$ & Degrees E of N \\
$\gamma_{ext}$ & $0.063\pm 0.007$ & External shear amplitude \\
$\theta_\gamma$ & $-18 \pm 2$ & Direction of external shear (Deg. E of N) \\
\hline \hline
\end{tabular}
\caption{Gravitational lens model parameters for the main deflector and external shear inferred from image positions and [OIII] fluxes which are given in Tables A1 and A2 respectively.}
\end{table}
\subsection{Limits on the presence of substructure}
Each of the image positions and fluxes provides a local constraint on the presence of small scale structure. In this subsection we test the limits on the presence of a single perturbing subhalo given our [OIII] flux and position measurements. We test the measurement sensitivity to two different perturber masses of $M_{600} \sim10^8 M_\odot $ and $10^7 M_\odot$ where $M_{600}$ is the integrated mass within 600 pc of the centre of the perturber. These masses are chosen to be above and below the limit where the `Missing Satellite Problem' is observed in the Milky Way \citep[e.g.][]{Strigari++08}.
As demonstrated in \citet{Nierenberg++14}, the perturber mass distribution can significantly affect the predicted lensing signal at fixed $M_{600}$. This is because shallower mass profiles must have a higher overall normalisation than steeper mass profiles in order to achieve the same integrated mass $M_{600}$, which in turn gives the shallower profile a longer range impact on the observed image fluxes and positions. Here we demonstrate the lensing effect for two mass profiles: a singular isothermal sphere (SIS), which has $\rho(r)\propto r^{-2}$, and a Navarro, Frenk and White \citep[NFW;][]{NFW++1996} halo, which has a shallow interior profile $\rho(r)\propto r^{-1}$ that transitions to a steeper $\rho(r)\propto r^{-3}$ outside of a scale radius. We obtain the scale radius from the mass-concentration relation predicted by \citet{Maccio++08} assuming a WMAP5 cosmology \citep{Dunkley++09}.
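The longer-range effect of the shallower profile can be illustrated by normalizing both profiles to the same $M_{600}$ and comparing their enclosed masses at larger radii. The sketch below does this in Python; the scale radius is an arbitrary illustrative choice, not the value from the mass-concentration relation used in the analysis:

```python
import math

def nfw_shape(r, rs):
    # NFW enclosed-mass shape function: proportional to ln(1 + x) - x / (1 + x)
    x = r / rs
    return math.log(1.0 + x) - x / (1.0 + x)

M600 = 1e8       # Msun: integrated mass within 600 pc, held fixed
RS = 6000.0      # pc: illustrative NFW scale radius (assumption)

def m_sis(r_pc):
    # SIS enclosed mass grows linearly with radius; normalized so m(600 pc) = M600
    return M600 * r_pc / 600.0

def m_nfw(r_pc):
    return M600 * nfw_shape(r_pc, RS) / nfw_shape(600.0, RS)

# With the same interior mass, the NFW halo carries several times more mass at
# a few kpc, which is why it perturbs the images over a longer range.
print(m_nfw(6000.0) / m_sis(6000.0))
```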
For each perturber mass, and for each mass profile, we iteratively place the perturber at a fixed position, re-optimize the smooth model parameters, and compare the new best fit $\chi^2$ with the original $\chi^2$ in the absence of a perturbation. We choose the grid spacing qualitatively to ensure that relevant angular dependences are captured in the $\chi^2$ distribution as a function of position. We find that variations are well captured by a spacing of 0\farcs1 for the $M_{600} \sim10^8 M_\odot$ perturber and 0\farcs01 for the $\sim10^7 M_\odot$ perturber.
Figure 6 shows the projected two and three sigma exclusion regions ($p<5$\% and $0.3$\%) for a singular isothermal sphere perturber with Einstein radius of 0\farcs01 and 0\farcs001 respectively. Assuming the perturber is in the plane of the lens galaxy, these Einstein radii correspond to integrated masses of $\sim10^{8.2} $ and $10^{7.2} M_\odot$ respectively within 600 pc of the centre of the perturber, making them comparable to the Milky Way satellites Fornax and Sagittarius \citep{Strigari++08}, albeit with steeper mass profiles. Based on the average minimum radius at which including a perturber results in a model probability which is lower than $0.3\%$, the average exclusion radius is $\sim$0\farcs4 (0\farcs1), 0\farcs3 (0\farcs08), 0\farcs4 (0\farcs09) and 0\farcs3 (0\farcs06) for images A, B, C and D for the 0\farcs01 (0\farcs001) Einstein radius perturber.
Figure 7 shows the projected two and three sigma exclusion regions for an NFW perturber with scale radius of 1\farcs0 and 0\farcs1, corresponding to integrated masses of $\sim10^8 $ and $10^{7.2} M_\odot$ within the central 600 pc of the perturber. Again requiring that positions yielding a best fit model probability lower than $0.3\%$ be excluded, the average exclusion radii are 1\farcs2 (0\farcs1), 0\farcs3 (0\farcs08), 1\farcs1 (0\farcs09) and 0\farcs8 (0\farcs06) for images A, B, C and D, respectively, for the 1\farcs0 (0\farcs1) perturber. As expected, the NFW perturber must be further from the lensed images than an SIS perturber with the same mass to avoid significantly perturbing their fluxes and positions relative to the best fitting smooth mass model.
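The conversion from angular exclusion radii to projected physical scales follows from the small-angle relation with the angular diameter distance to the deflector. A sketch, where the distance value is an assumed round number for a lens at $z\approx0.45$:

```python
import math

ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)  # radians per arcsecond
D_A_KPC = 1.19e6  # assumed angular diameter distance to the lens, in kpc

def arcsec_to_kpc(theta_arcsec):
    # Small-angle conversion of an angular radius to a projected physical radius
    return theta_arcsec * ARCSEC_IN_RAD * D_A_KPC

# A ~0.4 arcsec exclusion radius corresponds to a projected ~2 kpc at the lens.
print(f"{arcsec_to_kpc(0.4):.2f} kpc")
```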
\begin{figure*}
\centering
\includegraphics[scale=0.33,trim=0 0 0 0,clip=true]{SISLim_kpcAsecs.pdf}
\includegraphics[scale=0.33,trim=0 0 0 0,clip=true]{SISLim_zoom_kpcAsecs.pdf}
\caption{Projected exclusion regions for a singular isothermal sphere perturber with fixed mass. Light grey and dark grey positions are ruled out with greater than $95\%$ and $99.7\%$ confidence respectively, based on the $\chi^2$ probability of the best fit gravitational lens model to the image positions and [OIII] fluxes after adding a perturber with $M_{600} = 10^{8.2} M_\odot $ (left panel) and $10^{7.2} M_\odot$ (right panel).
The left panel shows the entire lens system with the green square indicating the lens centroid, and the orange circles representing the quasar images. The orange box in the left panel represents the size of the zoomed regions shown in the right panel. The average projected radial limits are $\sim$0\farcs4 (0\farcs1), 0\farcs3 (0\farcs08), 0\farcs4 (0\farcs09) and 0\farcs3 (0\farcs06) for images A, B, C and D, respectively, for the $10^{8.2}$ ($10^{7.2} ) M_\odot$ perturber. These exclusion regions correspond to cylinders with radii of $\sim 2 (0.5)$ kpc around each lensed image, projected along the entire host halo. }
\label{fig:sislim}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.33,trim=0 0 0 0,clip=true]{NFWLim_kpcAsecs.pdf}
\includegraphics[scale=0.33,trim=0 0 0 0,clip=true]{NFWLim_zoom_kpcAsecs.pdf}
\caption{Exclusion regions for an NFW perturbing subhalo with $M_{600} = 10^{8} M_\odot $ and $10^{7.2} M_\odot$, corresponding to NFW scale radii of 1\farcs0 and 0\farcs1 respectively, determined in the same way as in Figure 6. The average radial limits are 1\farcs2 (0\farcs1), 0\farcs3 (0\farcs08), 1\farcs1 (0\farcs09), and 0\farcs8 (0\farcs06) for images A, B, C and D, respectively, for a $10^{8}$ ($10^{7.2}) M_\odot$ perturber. These angular scales correspond to an average projected exclusion region of $\sim6$ (0.6) kpc at the redshift of the lens.}
\label{fig:nfwlim}
\end{figure*}
\section{Finite Source Effects}
Until this point we have assumed that the narrow-line emission in HE0435 is unresolved at HST resolution, even after being strongly lensed and thus magnified. In this section, we explore the effect of a resolved emission region on our results. \citet{Jackson++15} found that the 5 GHz radio emission fluxes for HE0435 were best fit with a smooth deflector mass distribution and a resolved source with an intrinsic size of $\sigma\sim 34$ mas (288 pc) for a Gaussian profile. This model significantly improved the fit to their data relative to an unresolved emission model. Given this, it is important that we test whether the narrow-line emission is resolved in our data, and what the effects would be of incorrectly assuming that it is unresolved.
The narrow-line region is extended and in some cases has been observed to extend out several hundreds of parsecs \citep[e.g., ][]{ Bennert++06b}. The narrow-line flux is not uniform, and can be dominated by the central tens of parsecs even for very luminous quasars \citep{Muller-Sanchez++11}. \citet{Sluse++07} demonstrated that the [OIII] emission in the gravitational lens RXS J1131$-$1231 is resolved in their data, and thus inferred a minimum size of $\sim 150$ pc. \citet{Nierenberg++14} found that the narrow [OIII] emission for B1422+231 \citep{Patnaik++92} was marginally resolved with a dispersion of $\sim 10$ mas (50 pc).
We perform two tests here. First, we simulate narrow-line emission for three different source sizes and then redo the forward modelling inference using these simulated extended narrow-line images in place of the original point source model for the direct image
to test whether this leads to an improved fit to the observed spectra.
Secondly, we simulate gravitational lenses with three different narrow-line source sizes, and model them under the (in this case, incorrect) assumption that the narrow-emission is unresolved to test for the effect on the inferred flux ratios.
To test the model fit with a resolved narrow-line emission, we repeat the forward modelling inference described in Section 3 with an additional model component. As before, the QSO broad and continuum emission regions are modelled as being emitted by a point source. In order to account for a resolved narrow-line source, we generate a new model direct image for the narrow-line emission only. This extended emission model is generated assuming that the intrinsic source is a Gaussian with width $\sigma_{\rm{NL}}$. We use the best fit gravitational lens model inferred from the continuum image positions and [OIII] flux ratios in order to generate lensed extended emission models for the direct image. We generate models for $\sigma_{\rm{NL}}$ of 50 pc, 100 pc, and 288 pc. The latter source size is the best fit size for the 5 GHz radio emission for this system found by \citet{Jackson++15}.
We then re-estimate the best fit 1D spectral parameters following the steps in Section 3 with two adjustments. First we use the new simulated extended source model as the direct image model for the narrow-line emission. The other QSO spectral components are modelled as being emitted by point sources as before. Secondly, because the simulated direct image for the [OIII] fluxes by definition fixes the relative image fluxes, the spectral model has only a single parameter for the overall normalization of the narrow-line flux. Thus there are three fewer model parameters than the point source model in which the [OIII] fluxes vary independently.
The left panels of Figure 8 show the best fit simulated grism images for the narrow-line components only of the three extended models and the point source model for comparison. Note that the narrow H-$\beta$ emission is also modelled as being extended, as we assume it is emitted from the same region as the [OIII] emission. While the 50 and 100 pc models differ only marginally from the point source model, the 288 pc source size is clearly extended at HST resolution. The [OIII] doublets for images B and D in particular are nearly completely blended.
In the right panels of Figure 8 we compare the best fit model 1D trace to the data (analogous to Figure 2) for each of the source sizes. The $\chi^2$ comparison between the data and the simulated grism image grows progressively worse as the source size increases, with best fit $\chi^2$ values of 13228, 13368 and 13990 for 3909 DOF for the 50, 100 and 288 pc narrow-line regions, compared with 13112 for 3906 DOF for the point source model\footnote{The point source model has three extra degrees of freedom as the narrow-line fluxes are allowed to vary independently}. Increasing the narrow-emission source size results in a decreasing best fit peak narrow-line flux, despite the fact that the intrinsic narrow-line emission width is a free model parameter. This is due to the significantly extended emission which cannot be well fit in 2D.
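The fit statistics above can be condensed into reduced chi-squared values; a quick illustrative summary using only the numbers quoted in the text:

```python
# Best fit chi^2 and degrees of freedom quoted in the text
fits = {
    "point source": (13112, 3906),
    "50 pc":        (13228, 3909),
    "100 pc":       (13368, 3909),
    "288 pc":       (13990, 3909),
}

# The reduced chi^2 grows monotonically with source size, so the point
# source model is preferred.
for name, (chi2, dof) in fits.items():
    print(f"{name:>12}: reduced chi^2 = {chi2 / dof:.3f}")
```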
\begin{figure*}
\centering
\includegraphics[scale=0.21,trim=0 0 0 0,clip=true]{simGrism_compareModel_0pc.pdf}
\hspace{5mm}
\includegraphics[scale=0.21,trim = 0 0 0 0 clip=true]{simGrism_compareModel_50pc.pdf}
\\
\includegraphics[scale=0.21,trim=0 0 0 0,clip=true]{simGrism_compareModel_100pc.pdf}
\hspace{5mm}
\includegraphics[scale=0.21,trim=0 0 0 0,clip=true]{simGrism_compareModel_288pc.pdf}
\caption{Effects of resolved narrow-line emission on model fit. We repeat the analysis of Section 4, now using a simulated extended narrow-emission region with a Gaussian flux distribution with dispersion $\sigma_{\rm{NL}}=$ 50, 100 and 288 pc. In the large left panels we show the best fit grism model for the extended narrow-line emission with a fixed finite size, after re-optimizing the 1D spectral parameters. While the full spectral model is used in the inference, we show only the narrow-line emission in the left panels for clarity. In the adjacent right hand panels we show a comparison of the model 1D trace spectra with the data. As in Figure 2, the black points, blue line and light grey points correspond to the data, model and residual respectively. The point source model and comparison between model and data are shown in the upper left corner for comparison. The best fit $\chi^2$ values calculated from the difference between the 2D model grism and true grism images are 13228, 13368 and 13990 for 3909 DOF for the 50, 100 and 288 pc narrow-line regions, and 13112 for 3906 DOF for the point source model. The point source model has three extra degrees of freedom as the narrow-line fluxes are allowed to vary independently for each image.}
\label{fig:simgrism}
\end{figure*}
We can also test what the effect would be on the inferred flux ratios, if we use a point source model for the narrow-line flux when the flux is actually resolved. For this we considered only the 50 pc and 100 pc models as the 288 pc model provides a markedly worse fit to the data. We simulated a mock spectrum using the resolved narrow-line models for the narrow-line component, and then inferred the image fluxes under the assumption used elsewhere in this work, that the narrow-emission is unresolved in our data. The resulting flux ratios were: A/C $= 0.97\pm 0.08$, B/C $= 0.99\pm$0.07 and D/C $=0.65\pm0.05$ for the 50 pc source, and
A/C $= 1.0\pm 0.1$, B/C $= 1.0\pm$0.1 and D/C $=0.65\pm0.08$ for the 100 pc source. Both results are consistent with the input fluxes used to generate the resolved narrow-line source mock lenses, indicating that our results would not be biased by a marginally resolved narrow-line region.
\section{Discussion}
We have demonstrated that the WFC3 IR grism provides sufficient spatial and spectral resolution to measure lensed narrow-line emission with precision comparable to continuum emission studies, while avoiding the effects of microlensing, variability and differential dust extinction. We measure the [OIII] flux ratios to be significantly offset from optical/near-IR continuum measurements of this system, particularly for image A. This is consistent with the results from long term monitoring of this system, which indicate that image A is significantly affected by microlensing, which systematically affects smaller emission regions. The [OIII] flux ratios are consistent with the 5 GHz radio measurements from \citet{Jackson++15}, which are also extended and thus not affected by microlensing.
In order to fit the measured F140W image positions and [OIII] fluxes, we rule out the presence of a perturbing NFW subhalo with $M_{600}\sim 10^8$ $(10^7)\,M_\odot$ projected within roughly 1\farcs0 (0\farcs1) of the lensed images. At the redshift of the lens these angular sizes correspond to approximately 6 and 0.6 kpc respectively. It is informative to compare these limits with an approximate prediction of the number of subhalos at these radii based on CDM models.
We perform a very basic estimate assuming that all possible perturbers are within the virial radius of the HE0435 group and neglecting line-of-sight associations. From \citet{Sluse++16}, the HE0435 group has a virial mass of $\log[M_{200}/M_\odot] = 13.7\pm0.4$, where $M_{200}$ is the mass within the region of the halo which has a mean density 200 times the critical density.
We estimate the number of subhalos based on the results of \citet{Han++16}, scaled to the virial mass of HE0435, assuming that approximately half of subhalos are destroyed through tidal interactions and merging following the recommendation of \citet{Han++16}, and that an additional $\sim 30$\% are destroyed by tidal interactions with a central baryonic potential \citep[e.g.][]{Garrison-Kimmel++17}.
We assume that the subhalo mass profiles are NFW with the mass-concentration relation given by \citet{Maccio++08} scaled to the redshift of the group following the relation by \citet{Prada++12} and neglecting scatter. We fix $M_{600}$ for subhalos at infall, where $M_{600}$ is the mass within the interior 600 pc of the subhalo. We make a simplifying assumption that $M_{600}$ is not affected by tidal stripping after infall. This yields an estimated number of $\sim 250$ surviving subhalos in the group with masses greater than $M_{600}>10^8 M_\odot$ and $\sim 19000$ subhalos with $M_{600}>10^7 M_\odot$ within $R_{200} = 540 $kpc.
We examined two different spatial distributions for the subhalos. First, that the subhalo number density follows the mass distribution of the host halo everywhere, as predicted by pure CDM simulations in the absence of tidal stripping. Second, that the subhalo number density follows the mass distribution of the host halo except within the three-dimensional scale radius, where we assume all subhalos are destroyed. This mimics an extreme version of the impact of the disk seen by \citet{Garrison-Kimmel++17}. Both spatial distributions are normalized to have the same number of subhalos.
These two subhalo spatial distributions are chosen to bracket limits of the possible effect of the baryonic potential on our predicted detection rate.
In the first case, where the subhalo number density simply follows the NFW profile of the lens halo, we expect approximately 0.8 subhalos with $M_{600}>10^8 M_\odot$ to be found within each $\sim 6$ kpc projected exclusion region.
The mass function and size of the sensitivity region scales so that we expect to find $\sim1$ $M_{600}>10^7 M_\odot$ subhalo within each of the four smaller $\sim0.6$ kpc projected regions around the images where we are sensitive to these lower masses.
In the second case in which all subhalos are removed within the NFW scale radius of the host, we expect to detect approximately 0.08 (0.1) subhalos with $M_{600}>10^8 M_\odot$ ($10^7 M_\odot$) per exclusion region.
We also explored the effect of a WDM mass function, using the result from \citet{Schneider++13} to estimate the shape of the subhalo mass function at infall given a 3 keV/c$^2$ thermal relic dark matter particle, which is consistent with Ly$\alpha$ forest measurements \citep[e.g.][]{Viel++09}. In this case, almost no subhalos with masses $M_{600}<10^8 M_\odot$ survive. Unlike the central baryonic potential, which is predicted to have a largely mass-independent effect on the subhalo population, WDM selectively destroys low mass subhalos.
Promisingly, subhalos with $M_{600}>10^8 M_\odot$ are all expected to contain a significant number of stars in order to match comparisons between the Milky Way satellite luminosity function and CDM predictions \citep[e.g.][]{Strigari++08}.
This means that CDM can be tested by comparing the rate of detections of substructure in narrow-line lenses with the rate predicted by luminous satellite studies, or measured by gravitational imaging studies which are sensitive to $M_{600}>10^8 M_\odot$ subhalos, after accounting for variations in host halo mass and sensitivity. This test can be performed in a sample of $\sim20$ lenses, in which we would expect to detect $\sim10$ $M_{600}>10^7 M_\odot$ subhalos in the case of a CDM subhalo mass function, even in the case in which all subhalos are destroyed within the NFW scale radius.
We emphasise however, that a true comparison requires a CDM model which incorporates effects such as tidal stripping, as well as a marginalization over possible halo orientations, masses and formation histories \citep[e.g.][]{Jiang++16}, and a range of baryonic physics implementations \citep[e.g.][]{Chua++16, Despali++16}. We leave such an analysis to a future paper, in which we jointly infer the properties of the subhalo mass function given our sample of narrow-line gravitational lenses measured with OSIRIS and the WFC3 grism.
\section{Summary}
\begin{enumerate}
\item We present a forward modelling method which uses the {\tt 3D-HST} pipeline \citep{Brammer++12} to measure spectra in the presence of significant spatial blending for G141 grism data. We apply this method to infer the lensed narrow, broad and continuum fluxes of the images in HE0435-1223.
\item The narrow [OIII] flux ratios for HE0435 are consistent with radio measurements from \citet{Jackson++15}, and are significantly different from other emission measures which are subject to contamination by microlensing and intrinsic QSO time variability.
\item We find that the [OIII] fluxes and image positions are well modelled with a simple gravitational lens model consisting of a singular isothermal ellipsoid for the main galaxy in the presence of external shear.
\item Our data strongly disfavours a perturber with mass greater than $M_{600}=10^{8.2} (10^{7.2}) M_\odot$ within $\sim1$ (0.1) arcsecond of the lensed images, where $M_{600}$ is the projected perturber mass within its central 600 pc (best fit model probability $<0.3$\%).
\item This demonstration that WFC3 grism measurement of narrow-line lensed quasars can be used to detect low-mass $M_{600}\sim10^{7}M_\odot$ subhalos is extremely promising for future constraints of dark matter given the large number of quadruply imaged quasar lenses to be discovered in optical surveys such as DES and LSST, and with the follow-up which will be enabled by JWST.
\end{enumerate}
\section*{Acknowledgments}
Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \#13732.
Support for program \#13732 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
T.T. thanks the Packard Foundation for generous support through a Packard Research Fellowship. A.M.N thanks the Center of Cosmology and AstroParticle Physics for support via a CCAPP postdoctoral fellowship. C.S.K. is supported by NSF grant AST-1515876.
We thank K. Wong for providing model constraints for the deflector and inferred lensed image fluxes based on F160W data. We thank V. Bonvin and F. Courbin for providing microlensing light curve constraints.
\bibliographystyle{apj_2}
The evolution of the latest generations of mobile communication systems has been mainly driven by setting highly ambitious, often hard to achieve, goals. Although this maximalistic approach may trigger technological advances, it often comes with inflated requirements in terms of resources to meaningfully scale.
Wireless networks are currently evolving to cater to emerging cyber-physical and real-time interactive systems, such as swarm robotics, self-driving cars, and smart Internet of Things. A fundamental shift in thinking is necessary to satisfy the pressing requirements for timely multimodal communication, autonomous decision-making, and efficient distributed processing. Goal-oriented semantic communication is a new paradigm that aims at redefining importance, timing, and effectiveness in future networked intelligent systems \cite{popovski2020semantic,kountouris2021semantics,Qin22arxiv,tolga21SP}. Leveraging a minimalist design approach, it has the potential of significantly improving network resource usage, energy consumption, and computational efficiency. Various attempts in this direction have been made in the past \cite{Carnap,Juba,bao2011,SemanticGame} without though leading to an elegant and insightful theory with immediate practical applications.
In this context, \emph{semantics of information} is a recently emerged measure of the significance and the usefulness of messages with respect to the goal of data exchange. This composite performance metric appears to be instrumental in enabling effective communication of concise information that is both timely and valuable for achieving end users’ requirements.
Age of information (AoI) performance metrics \cite{NowAoI,yates2021age}, which describe information freshness in networks, and value of information (VoI) \cite{VoI_USSR,VoI}, which quantifies the information utility or gain in decision making, can be viewed as simple, quantitative surrogates for information semantics.
In this paper, we consider a communication system in which a transmitter receives status updates generated from a known discrete distribution with finite support and seeks to communicate them to a remote receiver. The updates generated by the information source may correspond to observations or measurements of a random phenomenon.
The transmitter performs semantics-aware filtering and sends to the receiver only the most relevant randomly arriving source symbols in a timely fashion over an error-free channel.
We consider a simple coding scheme focusing on less frequent events, i.e., the transmitter encodes only a fraction of the least frequent realizations, treating the remaining ones as not informative or irrelevant, thus providing more information about events that happen less often. Additionally, the semantics of information is captured through a timeliness metric for the received updates, which is a nonlinear function of age of information. Our objective is to design a coding scheme that optimizes the weighted sum of a semantics-aware utility function and a quadratic cost term on the average codeword length.
This work falls within the realm of the timely source coding problem \cite{TSC1,TSC2,TSC3}. These works study the design of lossless source codes and block codes that minimize the average age in status update systems under different queuing theoretic considerations. The work most closely related to ours is \cite{bastopcu2020optimal}, which considers a selective encoding mechanism at the transmitter for timely updates. The optimal real codeword lengths that minimize the average age at the receiver are derived therein. Our paper extends previous results in several ways. We introduce semantics-aware metrics, which quantify update packet importance and timeliness of information at the receiver. The latter is a nonlinear function of age, and we derive the average timeliness expression for three indicative cases. Furthermore, we add a quasiarithmetic penalty term related to the average codeword length \cite{Campbell,Larmore}. We derive the optimal real codeword lengths that maximize a semantics-aware utility function and minimize a quadratic average length cost, highlighting the performance gains of semantics-aware filtering and source coding.
\section{System Model}\label{Section2}
We consider a communication system in which an information source $X$ generates status updates in the form of packets and forwards them to a transmitter in order to send them to a remote monitor (cf. Figure~\ref{Fig1}). The source generates discrete symbols from a finite set $\mathcal{X} = \{x_i ~ |~ i\!\in\!\mathcal{I}_n\}$, $\mathcal{I}_n = \{1,2,...,n\}$, each having a probability of realization $\tilde{p}_i = P_X(x_i)$, where $P_X(\cdot)$ is a known probability mass function (pmf). Without loss of generality, we assume $\tilde{p}_i \geq \tilde{p}_j, \forall i \leq j$. Furthermore, we assume that the sequence of observations is independent and identically distributed and that packets are generated according to a Poisson process with rate $\lambda$.
Semantic filtering is performed, where only the $k$ least probable realizations are selected for transmission, while update packets from the remaining $n - k$ realizations are discarded. The set of selected update packets' indices (admitted packets) is denoted $\mathcal{I}_k \subseteq \mathcal{I}_n$. A first metric of semantic value associates importance with probability of occurrence of less frequent or atypical events. The less frequent an event (or the less probable a realization) is, the more important it is for the remote monitor.
The transmitter then encodes an admitted packet from the $i$-th realization using a prefix-free code based on the truncated distribution with conditional probabilities $p_i = \tilde{p}_i/q_k$, $\forall i\in\mathcal{I}_k$ (and zero otherwise), where $q_k = \sum_{i\in \mathcal{I}_k} \tilde{p}_i$.
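The filtering and renormalization step can be sketched as follows (function and variable names are ours, for illustration only):

```python
def semantic_filter(pmf, k):
    """Keep only the k least probable symbols of a pmf that is sorted in
    non-increasing order, and renormalize: p_i = p~_i / q_k."""
    tail = pmf[-k:]          # the k least probable (most informative) symbols
    q_k = sum(tail)          # probability mass q_k of admitted packets
    return [p / q_k for p in tail], q_k

# Example: of four symbols, admit the two rarest and renormalize their pmf
cond, q = semantic_filter([0.4, 0.3, 0.2, 0.1], k=2)
print(cond, q)
```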
The transmitter node is bufferless, hence a newly admitted packet is blocked when the channel is busy. Assuming an error-free channel, if an admitted packet arrives at the transmitter while the channel is idle, it is correctly delivered to the receiver; such a packet is termed a successive packet.
After successfully delivering the previous packet to the receiver, the transmitter waits for the next admitted packet arrival.
We define $t_i$ as the time instant at which the $i$-th successive packet is received. Hence, the update interval between the $(i\!-\!1)$-th successively received packet and the next one is modeled as a random variable (r.v.) $Y_i = t_i - t_{i-1}$. This interval consists of the service time $S_i$ and the waiting time $W_i$. $W_i$ is the time between admitted status updates that are transmitted; thus, the waiting time can be written as $W_i=\sum_{j=1}^{N} Z_j$, where $N$ is an r.v. denoting the number of admitted arrivals generated before finding the channel idle. $Z_j$ is the time between two admitted arrivals and is exponentially distributed with mean $\gamma = \frac{1}{\lambda q_k}$, since the admitted arrivals are generated according to a Poisson process with rate $\lambda q_k$.
The transmission time is proportional to the codeword length, thus, the service time (transmission time) of realization $x_i$ is $S_i = \ell_i$ time units, where $\ell_i$ is the length of the codeword assigned to $x_i$. The average transmission time is $\mathbb{E}[S]=\sum_{i=1}^{k} p_i \ell_i$.
\begin{figure}[t!]
\centering
\pstool[scale=0.4]{figures/SystemModelNew.eps}{
\psfrag{Source}{\hspace{-0.12cm}\scriptsize Source}
\psfrag{Semantic}{\hspace{-0.15cm}\scriptsize Semantic}
\psfrag{Filtering}{\hspace{-0.106cm}\scriptsize filtering}
\psfrag{Buffer}{\hspace{-0.15cm}\scriptsize Buffer}
\psfrag{Selective}{\hspace{-0.06cm}\scriptsize Packet}
\psfrag{Encoding}{\hspace{-0.106cm}\scriptsize encoding}
\psfrag{FilteringP}{\hspace{-0.1cm}\scriptsize Filtering}
\psfrag{olicy}{\hspace{-0.145cm}\scriptsize policy}
\psfrag{ChanlStatusF}{\hspace{+0.1cm}\scriptsize Channel}
\psfrag{eed}{\hspace{-0.25cm}\scriptsize condition}
\psfrag{AccessN}{\hspace{-0.07cm}\scriptsize Access}
\psfrag{etwork}{\hspace{-0.172cm}\scriptsize channel}
\psfrag{Monitor}{\hspace{-0.16cm}\scriptsize Monitor}
\psfrag{Transmitter}{\hspace{-0.0cm}\scriptsize Transmitter}
}
\vspace{-0.1cm}
\caption{System model of semantics-aware transmission.}
\label{Fig1}
\end{figure}
\section{Problem Statement}
\subsection{Key Metrics of Interest}
We introduce a semantics of information (SoI) metric that measures the importance and usefulness of information at the receiver's side. SoI is generally a composite function $\mathcal{S}(t) = \nu(\psi(\mathcal{I}))$, where $\psi:\mathbb{R}^m \to \mathbb{R}^z, m\geq z$ is a (nonlinear) function of $m \in \mathbb{Z}^+$ information attributes $\mathcal{I} \in \mathbb{R}^m$, and $\nu: \mathbb{R}^z \to \mathbb{R}$ is a context-dependent, cost-aware function \cite{kountouris2021semantics,pappas2021goal}. In this paper, we consider \textit{timeliness}, a contextual attribute defined as a non-increasing utility function $f:\mathbb{R}_0^+ \to \mathbb{R}$ of information freshness, i.e., $\mathcal{S}(t) = f(\Delta(t))$. $\Delta(t) = t - u(t)$ is the instantaneous AoI at the receiver, i.e., the difference of the current time instant and the timestamp $u(t)$ of the most recently received update. $\mathcal{S}(t)$ is a time varying stochastic process and the average SoI in stationary and ergodic systems for an observation interval $(0,T)$ is defined as $\bar{\mathcal{S}} = \displaystyle \underset{T\rightarrow\infty}{\lim} \dfrac{1}{T} \!\int_{0}^{T} f(\Delta(t)) {\rm d}t$.
\vspace{-2mm}
\subsection{Problem Formulation}
Our objective is to find the codeword lengths $\ell_i$ that optimize a weighted sum of the average SoI and the average length for a cost function $\phi(\ell):\mathbb{R}_0^+ \to \mathbb{R}_0^+$, i.e., $\sum_{i\in \mathcal{I}_k}p_i \phi(\ell_i)$.
Maximizing the average SoI is equivalent to minimizing the average cost (penalty) of lateness
\begin{eqnarray}\label{Eq2}
L(\Delta) = \underset{T\rightarrow\infty}{\lim} \dfrac{1}{T} \!\int_{0}^{T} g(\Delta(t)) {\rm d}t
\end{eqnarray}
where $g:\mathbb{R}_0^+ \to \mathbb{R}$ is a non-decreasing function \cite{yates2021age}. Converting the maximization problem into a minimization one is mainly done for analytical convenience.
The average codeword length term, also known as quasiarithmetic penalty, is related to Campbell's coding problem \cite{baer2006source}.
The optimization problem is constrained by the integer constraint $\ell_i \in \mathbb{Z}^+$ and the Kraft-McMillan inequality \cite{cover1999elements} for the existence of a uniquely decodable code for a given set of codeword lengths.
Thus, we formulate the problem as
\begin{equation}\label{Eq1}
\begin{aligned}
\mathcal{P}_1\!:\,
&\underset{{\{\ell_i\}}}{\text{min}}~~ L(\Delta)
+ w \sum_{i\in \mathcal{I}_k} p_i \phi(\ell_i)
\\
&\text{s.t.} ~ \sum_{i\in \mathcal{I}_k} 2^{-\ell_i} \leq 1, \\
&~~~~\, \ell_i \in \mathbb{Z}^+
\end{aligned}
\end{equation}
where $w>0$ denotes a weight parameter.
We employ a quadratic cost function for the codeword length under a binary alphabet, $\phi(x) = \alpha x + \beta x^2$, ${\alpha}, {\beta} \geq 0$ \cite{Larmore}. Since $\phi$ is convex, longer (shorter) codewords are penalized more (less) harshly than in the linear case (e.g., Huffman coding) \cite{baer2006source}.
First, we relax the integer constraint in $\mathcal{P}_1$ and allow non-negative real valued codeword lengths. Note that for any set of real-valued lengths $\ell_i$, we can obtain integer-valued lengths by using the rounded-off values $\lceil \ell_i \rceil$.
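The rounding step preserves decodability: since $2^{-\lceil \ell_i \rceil} \leq 2^{-\ell_i}$, any real-valued lengths satisfying the Kraft-McMillan inequality still satisfy it after taking ceilings. A quick numerical check (with illustrative lengths, not an output of the optimization):

```python
import math

# Real-valued lengths satisfying the Kraft-McMillan inequality
# (illustrative values only).
real_lengths = [1.2, 2.7, 2.9, 3.5]
kraft_real = sum(2.0 ** -l for l in real_lengths)

# Rounding up can only decrease each term 2^{-l}, so the
# inequality is preserved for the integer-valued code.
int_lengths = [math.ceil(l) for l in real_lengths]
kraft_int = sum(2.0 ** -l for l in int_lengths)

assert kraft_real <= 1.0
assert kraft_int <= kraft_real
```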
Second, in order to explicitly calculate \eqref{Eq2}, we need to specify $g(\Delta(t))$. Three different instances of the penalty function are considered in this work. For exposition, in Figure \ref{Fig2} we show a sample path for the case $g(\Delta(t))={\rm exp}(\Delta(t))$. The calculation of \eqref{Eq2} is reduced to calculating the areas $Q_i$ in Figure \ref{Fig2} and then taking the average as follows
\begin{eqnarray}
L(\Delta)=\underset{T\rightarrow\infty}{\lim} \dfrac{1}{T} \bigg\{\sum_{i=1}^{\mathcal{N}(T)} Q_i + {Q}_\infty \bigg\}\!
= \eta \mathbb{E}[Q]
\end{eqnarray}
where $\eta = \underset{T\rightarrow\infty}{\lim}\frac{\mathcal{N}(T)\!-\!1}{T}$ is the steady-state time average arrival rate and $\mathcal{N}(T)$ is the number of admitted packets by time $T$. A more detailed and general analysis can be found in \cite{kosta2020cost}.
Merging $\eta$ with $w$ as both being positive constants, we have
\begin{equation}\label{Eq6}
\begin{aligned}
\mathcal{P}_2\!:\,
&\underset{{\{\ell_i\}}}{\text{min}}~~ \underbrace{\mathbb{E}[Q] + w \sum_{i\in \mathcal{I}_k} p_i (\alpha \ell_i \!+\! \beta \ell_i^2)}_\text{$\triangleq \mathcal{J}_{{\rm SoI}}$}\\
&\text{s.t.} ~ \sum_{i\in \mathcal{I}_k} 2^{-\ell_i} \leq 1, \\
&~~~~\, \ell_i \in \mathbb{R}^+.
\end{aligned}
\end{equation}
\section{Semantics-aware Source Encoding Design}\label{Section3}
In this section, we determine the semantics-aware optimal real codeword lengths for three different instances of penalty function $g(\cdot)$, namely
\begin{align}\label{Eq3}
g(\Delta(t))=
\begin{cases}
\operatorname{exp}(\rho\Delta(t))~~~\text{EDT case}\\
\ln(\rho\Delta(t))~~~~~\text{LDT case}\\
\rho(\Delta(t))^{\kappa}~~~~~~\text{PDT case}
\end{cases}
\end{align}
where $\rho\geq0$ denotes a constant coefficient and $\kappa \in \mathbb{Z}^+$.
The above cases correspond to an \emph{exponentially} (EDT), \emph{logarithmically} (LDT), and \emph{polynomially} (PDT) \emph{decreasing timeliness} scenario, respectively.
\subsection{Optimal Codeword Design}
\begin{figure}[t!]
\centering
\pstool[scale=0.53]{figures/UtilityForms.eps}{
\psfrag{U(Delta)}{\hspace{-0.32cm}\scriptsize $\operatorname{exp}(\Delta(t))$}
\psfrag{t}{\hspace{0.05cm}\scriptsize $t$}
\psfrag{0}{\hspace{0.0cm}\scriptsize $0$}
\psfrag{t1}{\hspace{0.0cm}\scriptsize $t_{1}$}
\psfrag{tn}{\hspace{0.0cm}\scriptsize $t_{n}$}
\psfrag{ti-1}{\hspace{0.0cm}\scriptsize $t_{i-2}$}
\psfrag{ti}{\hspace{-0.07cm}\scriptsize $t_{i-1}$}
\psfrag{ti+1}{\hspace{0.05cm}\scriptsize $t_{i}$}
\psfrag{Si}{\hspace{-0.05cm}\scriptsize $S_{i}$}
\psfrag{Wi}{\hspace{-0.05cm}\scriptsize $W_{i}$}
\psfrag{Yi+1}{\hspace{-0.1cm}\scriptsize $Y_{i-1}$}
\psfrag{Q1}{\hspace{-0.06cm}\scriptsize $Q_{1}$}
\psfrag{Qi}{\hspace{-0.2cm}\scriptsize $Q_{i-1}$}
\psfrag{Qi+1}{\hspace{0.05cm}\scriptsize $Q_{i}$}
\psfrag{Qn}{\hspace{-0.1cm}\scriptsize $Q_\infty$}
\psfrag{Tau}{\hspace{0.02cm}\scriptsize $T$}
\psfrag{umax}{\hspace{-0.06cm}\scriptsize $u_\text{max}$}
\psfrag{Smax}{\hspace{-0.02cm}\scriptsize $S_\text{min}$}
}
\vspace{-0.1cm}
\caption{Sample evolution of the exponential penalty function (\ref{Eq3}) over time for $\rho=1$.}
\label{Fig2}
\end{figure}
\subsubsection{EDT Case}
For this case, the area $Q_{i-1}$ for $i\geq3$ yields
\begin{eqnarray}\label{Eq7}\nonumber
Q_{i-1} \!\!\!\!&=&\!\!\!\! \int_{t_{i-2}}^{t_{i-1}+S_i} e^{\rho(t\!-\!t_{i-2})}{\rm d}t - \int_{t_{i-1}}^{t_{i-1}+S_i} e^{\rho (t-t_{i-1})}{\rm d}t\\\nonumber
\!\!\!\!&\stackrel{(a)}{\approx}&\!\!\!\! \dfrac{\rho}{2} Y_{i-1}^2 + \rho S_i Y_{i-1} + Y_{i-1}
\end{eqnarray}
where $(a)$ follows from the second-order Taylor expansion of the exponential function.
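Since the exact area is available in closed form, $Q_{i-1} = \frac{1}{\rho}\big(e^{\rho(Y_{i-1}+S_i)} - e^{\rho S_i}\big)$, the accuracy of the second-order approximation can be checked numerically (the values of $\rho$, $Y_{i-1}$, and $S_i$ below are illustrative):

```python
import math

# Exact area under the exponential penalty vs. the second-order
# Taylor approximation in step (a); rho, Y, S are illustrative.
rho, Y, S = 0.1, 1.0, 0.5

exact = (math.exp(rho * (Y + S)) - math.exp(rho * S)) / rho
approx = (rho / 2.0) * Y**2 + rho * S * Y + Y

rel_err = abs(exact - approx) / exact
```

For small $\rho Y_{i-1}$ the relative error is well below one percent, which motivates the use of the truncated expansion in the derivations that follow.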
Then, we calculate $\mathbb{E}[Q]$ as follows
\begin{align} \label{Eq8}
&\mathbb{E}[Q] \approx \dfrac{\rho}{2} \mathbb{E}[Y^2] + \rho \mathbb{E}[S] \mathbb{E}[Y] + \mathbb{E}[Y] \nonumber \\
&~~~\overset{(b)}{=} \dfrac{\rho}{2} \mathbb{E}[L^2] + \rho (\mathbb{E}[L])^2 + (1\!+\!2\rho\gamma) \mathbb{E}[L] + \rho\gamma^2 + \gamma.
\end{align}
To reach $(b)$, we have $\mathbb{E}[Y] = \mathbb{E}[L] + \gamma$, $\mathbb{E}[Y^2] = \mathbb{E}[L^2] + 2\gamma \mathbb{E}[L] + 2\gamma^2$, where $\gamma = (\lambda q_k)^{-1}$ \cite{bastopcu2020optimal}. Also, $\mathbb{E}[S] = \mathbb{E}[L]$ and $\mathbb{E}[S^2] = \mathbb{E}[L^2]$, with $\mathbb{E}[L] = \sum_{i\in \mathcal{I}_k} p_i \ell_i$, and $\mathbb{E}[L^2] = \sum_{i\in \mathcal{I}_k} p_i \ell_i^2$ being the first and second moments of the codeword lengths, respectively.
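The identities in step $(b)$ follow because $Y = S + W$, where $W$ is exponentially distributed with mean $\gamma$ (by memorylessness of the Poisson admitted arrivals) and independent of $S$. They can be verified by simulation; the following is a rough sketch in which the truncated pmf, the lengths, and $\gamma$ are illustrative values rather than outputs of the optimization.

```python
import random

random.seed(0)

# Illustrative truncated pmf and codeword lengths (not optimized).
p = [0.5, 0.3, 0.2]
lengths = [1.0, 2.0, 3.0]
gamma = 0.8                      # mean waiting time 1/(lambda*q_k)

EL  = sum(pi * li for pi, li in zip(p, lengths))        # E[L]
EL2 = sum(pi * li * li for pi, li in zip(p, lengths))   # E[L^2]

# Monte Carlo: Y = S + W, with S drawn from the codeword lengths
# and W ~ Exp(mean gamma), independent of S.
N = 200_000
samples = random.choices(lengths, weights=p, k=N)
ys = [s + random.expovariate(1.0 / gamma) for s in samples]

EY_hat  = sum(ys) / N
EY2_hat = sum(y * y for y in ys) / N

# Analytical moments: E[Y] = E[L] + gamma,
# E[Y^2] = E[L^2] + 2*gamma*E[L] + 2*gamma^2.
EY  = EL + gamma
EY2 = EL2 + 2 * gamma * EL + 2 * gamma ** 2
```

The empirical moments match the closed-form expressions up to Monte Carlo noise.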
Putting (\ref{Eq8}) into $\mathcal{P}_2$, we solve the following problem.
\begin{equation}\label{Eq9}
\begin{aligned}
{\mathcal{P}}_3\!:\,
&\underset{{\{\ell_i \in \mathbb{R}^+\}}}{\text{min}}\, \Big\{(\dfrac{\rho}{2} \!+\! w\beta) \mathbb{E}[L^2] + \rho (\mathbb{E}[L])^2 \\
&~~~~~~~~~~ + (1\!+\!2\rho\gamma \!+\! w\alpha)\mathbb{E}[L] + \rho\gamma^2 + \gamma \Big\} \\
&~~~\text{s.t.} ~ \sum_{i\in \mathcal{I}_k} 2^{-\ell_i} \leq 1.
\end{aligned}
\end{equation}
\begin{proposition}
The unique solution of problem $\mathcal{P}_3$ (EDT case) for $\ell_i$, $\forall i\in \mathcal{I}_k$, is given as
\begin{align}\label{Eq15}
\ell_i = -\ln_2\!\left( \dfrac{\mathcal{C}_1 p_i}{\mu (\ln(2))^2} W_0\!\!\left(\! \dfrac{\mu (\ln(2))^2}{\mathcal{C}_1 p_i} 2^{\frac{\mathcal{C}_2}{\mathcal{C}_1}}\!\right) \!\right)\!
\end{align}
where $\mu\geq0$ is the Lagrange multiplier, $\mathcal{C}_1 = \rho + 2w\beta$,
\begin{align}
\mathcal{C}_2 = \frac{2\rho\mu \ln(2) + \mathcal{C}_1(1\!+\!2\rho\gamma \!+\! w\alpha)}{\mathcal{C}_1 + 2\rho},
\end{align}
and $W_0(.)$ is the principal branch of Lambert $W$ function.
\end{proposition}
\begin{IEEEproof}
We define the Lagrange function
\begin{align}\label{Eq10}\nonumber
\mathcal{L}(\ell_i;\mu) &= (\dfrac{\rho}{2} \!+\!w\beta) \sum_{i\in \mathcal{I}_k} p_i \ell_i^2 + \rho \bigg(\!\sum_{i\in \mathcal{I}_k} p_i \ell_i\!\bigg)^{\!\!2} \\ \nonumber
&~~~+(1\!+\!2\rho\gamma \!+\! w\alpha)\bigg(\!\sum_{i\in \mathcal{I}_k} p_i \ell_i\!\bigg) + \rho\gamma^2 \\
&~~~ + \gamma + \mu \bigg(\!\sum_{i\in \mathcal{I}_k} 2^{-\ell_i}\!-\!1\!\bigg)
\end{align}
where $\mu\geq0$ denotes the Lagrange multiplier. Now, we write the Karush-Kuhn-Tucker (KKT) condition as follows
\begin{align}\label{Eq11}\nonumber
&\!\frac{\partial \mathcal{L}(\ell_i;\mu)}{\partial \ell_i} = (\rho \!+\! 2w\beta) p_i \ell_i +2\rho p_i \bigg(\!\sum_{i\in \mathcal{I}_k} p_i \ell_i\!\bigg) \\
&~~~+ (1\!+\!2\rho\gamma \!+\! w\alpha)p_i - \mu \ln(2) 2^{-\ell_i}=0,~ \forall i\in\mathcal{I}_k.
\end{align}
The complementary slackness condition is
\begin{align}\label{Eq12}
\mu \bigg(\!\sum_{i\in \mathcal{I}_k} 2^{-\ell_i}\!-\!1\!\bigg)\!=0.
\end{align}
Two cases can satisfy (\ref{Eq12}): (i) $\mu=0$, hence $\sum_{i\in \mathcal{I}_k} 2^{-\ell_i} < 1$, or (ii) $\mu \neq 0$, hence $\sum_{i\in \mathcal{I}_k} 2^{-\ell_i}=1$. Case (i) results in $\ell_i = \mathbb{E}[L] = - \big(\! \frac{1+2\rho\gamma + w\alpha}{3\rho + 2w\beta} \!\big) < 0$ from (\ref{Eq11}), which is not feasible. Hence, case (ii) must hold, and the moments of the codeword lengths are obtained as
\begin{subequations}
\begin{align}\label{Eq13a}
\mathbb{E}[L] &= \bigg(\!\frac{\mu \ln(2) -(1\!+\!2\rho\gamma\!+\! w\alpha)}{\mathcal{C}_1 + 2\rho}\!\bigg), \\ \label{Eq13b}
\mathbb{E}[L^2] &= \bigg(\!\frac{\mu \ln(2) -(1\!+\!2\rho\gamma\!+\! w\alpha)}{\mathcal{C}_1 + 2\rho}\!\bigg)^{\!\!2}
\end{align}
\end{subequations}
where $\mathcal{C}_1 = \rho + 2w\beta$. Dividing (\ref{Eq11}) by $p_i$ and after some algebraic manipulations, we reach the following equation
\begin{align}\label{Eq14}
\dfrac{\mu (\ln(2))^2}{\mathcal{C}_1 p_i}2^{- \ell_i} \operatorname{exp}\!\left(\!\dfrac{\mu (\ln(2))^2}{\mathcal{C}_1 p_i} 2^{-\ell_i}\!\right) = \dfrac{\mu (\ln(2))^2}{\mathcal{C}_1 p_i} 2^{\frac{\mathcal{C}_2}{\mathcal{C}_1}}
\end{align}
where $\mathcal{C}_2 = \frac{2\rho\mu \ln(2) + \mathcal{C}_1(1\!+\!2\rho\gamma \!+\! w\alpha)}{\mathcal{C}_1 + 2\rho}$.
Equation (\ref{Eq14}) has the form $x \operatorname{exp}(x) = y$, whose solution is $x = W_m(y)$, where $m=0$ if $y\geq0$ and $m=-1$ if $-e^{-1}\leq y<0$.
\end{IEEEproof}
In order to find the optimal codeword lengths, we start from a value of $\mu$ that satisfies $\sum_{i\in \mathcal{I}_k} 2^{-\ell_i}=1$; its value is then updated iteratively using (\ref{Eq13a}) and (\ref{Eq15}).
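One possible implementation of this search (a sketch under illustrative parameters, not the authors' code) performs a bisection on $\mu$ so that the Kraft sum equals one, evaluating the lengths in (\ref{Eq15}) with a small Newton iteration for the principal Lambert branch $W_0$:

```python
import math

LN2 = math.log(2.0)

def lambert_w0(y, iters=50):
    # Newton iteration for the principal branch W0(y), y >= 0.
    w = math.log(1.0 + y)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - y) / (ew * (w + 1.0))
    return w

def lengths_edt(mu, p, rho, wgt, alpha, beta, gamma):
    # Codeword lengths from (15) for the EDT case.
    C1 = rho + 2.0 * wgt * beta
    C2 = (2.0 * rho * mu * LN2
          + C1 * (1.0 + 2.0 * rho * gamma + wgt * alpha)) / (C1 + 2.0 * rho)
    ells = []
    for pi in p:
        a = mu * LN2 ** 2 / (C1 * pi)
        ells.append(-math.log2(lambert_w0(a * 2.0 ** (C2 / C1)) / a))
    return ells

def solve_mu(p, rho, wgt, alpha, beta, gamma):
    # Bisection on mu so that the Kraft sum equals 1 (case (ii)).
    kraft = lambda mu: sum(2.0 ** -l for l in
                           lengths_edt(mu, p, rho, wgt, alpha, beta, gamma))
    lo, hi = 1e-6, 1.0
    while kraft(hi) > 1.0:          # expand until the sum drops below 1
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kraft(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

p = [0.5, 0.3, 0.2]                 # illustrative truncated pmf
mu = solve_mu(p, rho=0.5, wgt=1.0, alpha=1.0, beta=1.0, gamma=1.0)
ells = lengths_edt(mu, p, 0.5, 1.0, 1.0, 1.0, 1.0)
```

At the solution, the Kraft inequality holds with equality and the more probable realizations receive shorter codewords, as expected.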
\subsubsection{LDT Case}
In this case, the area $Q_{i-1}$ for $i\geq3$ yields
\begin{eqnarray*}
Q_{i-1} &=&\int_{t_{i-2}}^{t_{i-1}+S_i} \ln(\rho(t\!-\!t_{i-2})){\rm d}t\\
&-& \int_{t_{i-1}}^{t_{i-1}+S_i} \ln({\rho (t-t_{i-1})}){\rm d}t \\ &\approx& \rho Y_{i-1}^2 + 2\rho S_iY_{i-1} - 2Y_{i-1},
\end{eqnarray*}
which results in
\begin{align}\label{Eq17}
\mathbb{E}[Q]
= \rho \mathbb{E}[L^2] + 2\rho (\mathbb{E}[L])^2 + 2(2\rho\gamma\!-\!1) \mathbb{E}[L] + 2\rho\gamma^2 - 2\gamma.
\end{align}
Inserting (\ref{Eq17}) into $\mathcal{P}_2$, we obtain the following problem.
\begin{equation}\label{Eq18}
\begin{aligned}
{\mathcal{P}}_4\!:\,
&\underset{{\{\ell_i \in \mathbb{R}^+\}}}{\text{min}}\, \Big\{ (\rho \!+\! w\beta) \mathbb{E}[L^2] + 2\rho (\mathbb{E}[L])^2 \\
&~~~~~~~~~~~~ + 2(2\rho\gamma\!-\!1\!+\!\dfrac{w\alpha}{2}) \mathbb{E}[L] + 2\rho\gamma^2 - 2\gamma \Big\} \\
&~~\text{s.t.} ~ \sum_{i\in \mathcal{I}_k} 2^{-\ell_i} \leq 1.
\end{aligned}
\end{equation}
Following the same procedure as in (\ref{Eq10})--(\ref{Eq12}) and (\ref{Eq14}), the unique solution for $\ell_i$ for fixed $k$ is
\begin{align*}
\ell_i = -\ln_2\!\left( \dfrac{\mathcal{C}_3 p_i}{\mu^{\prime} (\ln(2))^2} W_{0}\!\!\left(\!\dfrac{\mu^{\prime} (\ln(2))^2}{\mathcal{C}_3 p_i} 2^{\frac{\mathcal{C}_4}{\mathcal{C}_3}}\!\right) \!\right)\!
\end{align*}
where $\mu^{\prime}>0$, $\mathcal{C}_3 = 2\rho + 2w\beta$, and
\begin{align*}
\mathcal{C}_4 = \frac{4\rho\mu^\prime \ln(2) + 2\mathcal{C}_3(2\rho\gamma \!-\!1\!+\! \frac{w\alpha}{2})}{\mathcal{C}_3 + 4\rho}.
\end{align*}
Besides, the moments of the codeword lengths are given by
\begin{subequations}
\begin{align*}
\mathbb{E}[L] &= \bigg(\!\frac{\mu^{\prime} \ln(2) -2(2\rho\gamma \!-\!1\!+\! \frac{w\alpha}{2})}{\mathcal{C}_3 + 4\rho}\!\bigg), \\
\mathbb{E}[L^2] &= \bigg(\!\frac{\mu^{\prime} \ln(2) -2(2\rho\gamma \!-\!1\!+\! \frac{w\alpha}{2})}{\mathcal{C}_3 + 4\rho}\!\bigg)^{\!\!2}.
\end{align*}
\end{subequations}
\subsubsection{PDT Case}
For this case (considering $\kappa = 1$), we obtain $Q_{i-1} = \frac{\rho}{2} Y_{i-1}^2 + \rho S_i Y_{i-1}$, whose expected value is given by
\begin{align}\label{Eq21}
\mathbb{E}[Q]
= \dfrac{\rho}{2} \mathbb{E}[L^2] + \rho (\mathbb{E}[L])^2 + 2\rho\gamma \mathbb{E}[L] + \rho\gamma^2.
\end{align}
The resulting cost minimization problem (inserting (\ref{Eq21}) into $\mathcal{P}_2$) is then
\begin{equation}\label{Eq22}
\begin{aligned}
{\mathcal{P}}_5\!:\,
&\underset{{\{\ell_i \in \mathbb{R}^+\}}}{\text{min}}\, \Big\{ (\dfrac{\rho}{2} \!+\! w\beta) \mathbb{E}[L^2] + \rho (\mathbb{E}[L])^2 \\
&~~~~~~~~~~~~ + (2\rho\gamma\!+\!{w\alpha}) \mathbb{E}[L] + \rho\gamma^2 \Big\} \\
&~~\text{s.t.} ~ \sum_{i\in \mathcal{I}_k} 2^{-\ell_i} \leq 1.
\end{aligned}
\end{equation}
Similarly to the other cases, the unique solution is
\begin{align*}
\ell_i = -\ln_2\!\left( \dfrac{\mathcal{C}_1 p_i}{\mu^{\prime\prime} (\ln(2))^2} W_{0}\!\!\left(\!\dfrac{\mu^{\prime\prime} (\ln(2))^2}{\mathcal{C}_1 p_i} 2^{\frac{\mathcal{C}_5}{\mathcal{C}_1}}\!\right) \!\right)\!
\end{align*}
where $\mu^{\prime\prime}>0$ and
\begin{align*}
\mathcal{C}_5 = \frac{2\rho\mu^{\prime\prime} \ln(2) + \mathcal{C}_1(2\rho\gamma \!+\! w\alpha)}{\mathcal{C}_1 + 2\rho}.
\end{align*}
The corresponding moments of codeword lengths are
\begin{subequations}
\begin{align*}
\mathbb{E}[L] &= \bigg(\!\frac{\mu^{\prime\prime} \ln(2) -(2\rho\gamma\!+\! {w\alpha})}{\mathcal{C}_1 + 2\rho}\!\bigg), \\
\mathbb{E}[L^2] &= \bigg(\!\frac{\mu^{\prime\prime} \ln(2) -(2\rho\gamma\!+\! {w\alpha})}{\mathcal{C}_1 + 2\rho}\!\bigg)^{\!\!2}.
\end{align*}
\end{subequations}
\section{Numerical Results}\label{Section4}
In this section, we present numerical results in order to find SoI-optimal codeword lengths and to assess the performance gains of semantics-aware filtering and source coding. Unless otherwise stated, we use a Zipf($n,s$) distribution with pmf
\begin{eqnarray}
P_X(x)= \frac{1/x^s}{\sum_{j=1}^n 1/j^s},
\end{eqnarray}
with $n = \lvert \mathcal{X} \rvert = 100$ and the exponent $s = 0.4$. The parameter $s$ of the Zipf distribution allows us to vary from a uniform distribution ($s=0$) to a “peaky distribution”. We set $\rho=0.5$, ${\alpha} = {\beta} = 1$ and $T = 10\,[\text{sec}]$. For each scenario, the weight $w$ in the objective function is set in a way that the value range of average SoI and coding cost penalty terms becomes comparable.
\begin{figure}[t!]
\centering
\subfloat[]{
\pstool[scale=0.55]{figures/FigA_SoI_vs_K.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{KofNsamples}{\hspace{-1.21cm}\footnotesize Number of selected packets ($k$)}
\psfrag{lambda=0.5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!0.5$}
\psfrag{lambda=1}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!1$}
\psfrag{lambda=5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!5$}
\psfrag{lambda=10}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!10$}
\psfrag{lambda=20}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!20$}
\psfrag{Optimalk}{\hspace{-0.17cm}\footnotesize Optimal $k$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{10}{\hspace{-0.05cm}\scriptsize $10$}
\psfrag{20}{\hspace{-0.05cm}\scriptsize $20$}
\psfrag{30}{\hspace{-0.05cm}\scriptsize $30$}
\psfrag{40}{\hspace{-0.05cm}\scriptsize $40$}
\psfrag{50}{\hspace{-0.05cm}\scriptsize $50$}
\psfrag{60}{\hspace{-0.05cm}\scriptsize $60$}
\psfrag{70}{\hspace{-0.05cm}\scriptsize $70$}
\psfrag{80}{\hspace{-0.05cm}\scriptsize $80$}
\psfrag{90}{\hspace{-0.05cm}\scriptsize $90$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{L01}{\hspace{-0.04cm}\scriptsize $10^1$}
\psfrag{L02}{\hspace{-0.04cm}\scriptsize $10^2$}
\psfrag{L03}{\hspace{-0.04cm}\scriptsize $10^3$}
\psfrag{L04}{\hspace{-0.04cm}\scriptsize $10^4$}
\psfrag{L05}{\hspace{-0.04cm}\scriptsize $10^5$}
}}
\hfil
\vspace{-0.05cm}
\subfloat[]{
\pstool[scale=0.55]{figures/FigB_SoI_vs_K.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{KofNsamples}{\hspace{-1.21cm}\footnotesize Number of selected packets ($k$)}
\psfrag{lambda=0.5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!0.5$}
\psfrag{lambda=1}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!1$}
\psfrag{lambda=5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!5$}
\psfrag{lambda=10}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!10$}
\psfrag{lambda=20}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!20$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{10}{\hspace{-0.05cm}\scriptsize $10$}
\psfrag{20}{\hspace{-0.05cm}\scriptsize $20$}
\psfrag{30}{\hspace{-0.05cm}\scriptsize $30$}
\psfrag{40}{\hspace{-0.05cm}\scriptsize $40$}
\psfrag{50}{\hspace{-0.05cm}\scriptsize $50$}
\psfrag{60}{\hspace{-0.05cm}\scriptsize $60$}
\psfrag{70}{\hspace{-0.05cm}\scriptsize $70$}
\psfrag{80}{\hspace{-0.05cm}\scriptsize $80$}
\psfrag{90}{\hspace{-0.05cm}\scriptsize $90$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{L01}{\hspace{-0.04cm}\scriptsize $10^1$}
\psfrag{L02}{\hspace{-0.04cm}\scriptsize $10^2$}
\psfrag{L03}{\hspace{-0.04cm}\scriptsize $10^3$}
\psfrag{L04}{\hspace{-0.04cm}\scriptsize $10^4$}
\psfrag{L05}{\hspace{-0.04cm}\scriptsize $10^5$}
}}
\hfil
\vspace{-0.05cm}
\subfloat[]{
\pstool[scale=0.55]{figures/FigC_SoI_vs_K.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{KofNsamples}{\hspace{-1.21cm}\footnotesize Number of selected packets ($k$)}
\psfrag{lambda=0.5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!0.5$}
\psfrag{lambda=1}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!1$}
\psfrag{lambda=5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!5$}
\psfrag{lambda=10}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!10$}
\psfrag{lambda=20}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!20$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{10}{\hspace{-0.05cm}\scriptsize $10$}
\psfrag{20}{\hspace{-0.05cm}\scriptsize $20$}
\psfrag{30}{\hspace{-0.05cm}\scriptsize $30$}
\psfrag{40}{\hspace{-0.05cm}\scriptsize $40$}
\psfrag{50}{\hspace{-0.05cm}\scriptsize $50$}
\psfrag{60}{\hspace{-0.05cm}\scriptsize $60$}
\psfrag{70}{\hspace{-0.05cm}\scriptsize $70$}
\psfrag{80}{\hspace{-0.05cm}\scriptsize $80$}
\psfrag{90}{\hspace{-0.05cm}\scriptsize $90$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{L01}{\hspace{-0.04cm}\scriptsize $10^1$}
\psfrag{L02}{\hspace{-0.04cm}\scriptsize $10^2$}
\psfrag{L03}{\hspace{-0.04cm}\scriptsize $10^3$}
\psfrag{L04}{\hspace{-0.04cm}\scriptsize $10^4$}
\psfrag{L05}{\hspace{-0.04cm}\scriptsize $10^5$}
}}
\caption{The objective function $\mathcal{J}_{{\rm SoI}}$ versus the number of selected packets $k$ for the (a) EDT, (b) LDT, and (c) PDT scenarios with Zipf(100,0.4) distribution.}
\label{Fig3}
\end{figure}
Figures~\ref{Fig3}\,(a), \ref{Fig3}\,(b), and \ref{Fig3}\,(c) show the value of the objective function $\mathcal{J}_{{\rm SoI}}$ (i.e., the cost of lateness plus the coding penalty term) versus the number of selected realizations $k$ for the EDT, LDT, and PDT cases, respectively. Evidently, increasing the arrival rate reduces both $\mathcal{J}_{{\rm SoI}}$ and the optimal $k$.
For infrequent update arrivals, the transmitter does not filter out most updates (the optimal $k$ is well above $1$), whereas no filtering ($k \to n$) results in performance degradation due to longer transmission times for infrequent realizations. Among the three sample cases, the PDT (LDT) scenario offers the lowest (highest) value of $\mathcal{J}_{{\rm SoI}}$. Compared with the linear age scenario $g(\Delta(t)) \!=\! \rho\Delta(t)$, for which the optimal $k$ is $19.3$, $13.2$, $9.8$, $7$, and $5.3$ for the arrival rates of $0.5$, $1$, $5$, $10$, and $20$, respectively, an exponential penalty (nonlinear age) results in lower values of the optimal $k$ (cf. Figure~\ref{Fig3}\,(a)).
\begin{figure}[t!]
\centering
\pstool[scale=0.55]{figures/FigA_SoI_vs_K_Uniform.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{KofNsamples}{\hspace{-1.21cm}\footnotesize Number of selected packets ($k$)}
\psfrag{lambda=0.5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!0.5$}
\psfrag{lambda=1}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!1$}
\psfrag{lambda=5}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!5$}
\psfrag{lambda=10}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!10$}
\psfrag{lambda=20}{\hspace{-0.0cm}\scriptsize $\lambda\!=\!20$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{10}{\hspace{-0.05cm}\scriptsize $10$}
\psfrag{20}{\hspace{-0.05cm}\scriptsize $20$}
\psfrag{30}{\hspace{-0.05cm}\scriptsize $30$}
\psfrag{40}{\hspace{-0.05cm}\scriptsize $40$}
\psfrag{50}{\hspace{-0.05cm}\scriptsize $50$}
\psfrag{60}{\hspace{-0.05cm}\scriptsize $60$}
\psfrag{70}{\hspace{-0.05cm}\scriptsize $70$}
\psfrag{80}{\hspace{-0.05cm}\scriptsize $80$}
\psfrag{90}{\hspace{-0.05cm}\scriptsize $90$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{L01}{\hspace{-0.04cm}\scriptsize $10^1$}
\psfrag{L02}{\hspace{-0.04cm}\scriptsize $10^2$}
\psfrag{L03}{\hspace{-0.04cm}\scriptsize $10^3$}
\psfrag{L04}{\hspace{-0.04cm}\scriptsize $10^4$}
\psfrag{L05}{\hspace{-0.04cm}\scriptsize $10^5$}
}
\caption{The objective function $\mathcal{J}_{{\rm SoI}}$ versus $k$ for uniform probability distributions under the EDT scenario and $n=100$.}
\label{Fig3new}
\end{figure}
To investigate the effect of the pmf and the source characteristics on the performance, in Figure~\ref{Fig3new} we plot the objective function $\mathcal{J}_{{\rm SoI}}$ versus $k$ for the EDT case under a uniform distribution. Despite the similar shape, the optimal $k$ is slightly smaller than under the Zipf pmf. The reason is that the critical point of the objective function versus $k$, hence $q_k$, is proportional to the input rate. Thus, for each input rate, there is an optimal $q_k$ yielding an optimal value of $k$. For instance, for $\lambda=10$, the Zipf and the uniform distributions result in an optimal $k=5$ and $k=3$, respectively.
\begin{figure}[t!]
\centering
\subfloat[]{
\pstool[scale=0.55]{figures/FigD_SoI_vs_Lambda.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{Lambda}{\hspace{-0.1cm}\footnotesize Rate ($\lambda$)}
\psfrag{K=N/10AAA1}{\hspace{-0.0cm}\scriptsize $k\!=\!n/10$}
\psfrag{K=N/4}{\hspace{-0.0cm}\scriptsize $k\!=\!n/4$}
\psfrag{K=N/2}{\hspace{-0.0cm}\scriptsize $k\!=\!n/2$}
\psfrag{K=N}{\hspace{-0.0cm}\scriptsize $k\!=\!n$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{1}{\hspace{-0.02cm}\scriptsize $1$}
\psfrag{2}{\hspace{-0.01cm}\scriptsize $2$}
\psfrag{4}{\hspace{-0.01cm}\scriptsize $4$}
\psfrag{6}{\hspace{-0.01cm}\scriptsize $6$}
\psfrag{8}{\hspace{-0.01cm}\scriptsize $8$}
\psfrag{10}{\hspace{-0.01cm}\scriptsize $10$}
\psfrag{12}{\hspace{-0.01cm}\scriptsize $12$}
\psfrag{14}{\hspace{-0.01cm}\scriptsize $14$}
\psfrag{16}{\hspace{-0.01cm}\scriptsize $16$}
\psfrag{18}{\hspace{-0.01cm}\scriptsize $18$}
\psfrag{20}{\hspace{-0.01cm}\scriptsize $20$}
\psfrag{60}{\hspace{-0.06cm}\scriptsize $60$}
\psfrag{80}{\hspace{-0.06cm}\scriptsize $80$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{120}{\hspace{-0.06cm}\scriptsize $120$}
\psfrag{140}{\hspace{-0.06cm}\scriptsize $140$}
\psfrag{160}{\hspace{-0.06cm}\scriptsize $160$}
\psfrag{180}{\hspace{-0.06cm}\scriptsize $180$}
\psfrag{200}{\hspace{-0.06cm}\scriptsize $200$}
}}
\hfil
\vspace{-0.05cm}
\subfloat[]{
\pstool[scale=0.55]{figures/FigE_SoI_vs_Lambda.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{Lambda}{\hspace{-0.1cm}\footnotesize Rate ($\lambda$)}
\psfrag{K=N/10AAA1}{\hspace{-0.0cm}\scriptsize $k\!=\!n/10$}
\psfrag{K=N/4}{\hspace{-0.0cm}\scriptsize $k\!=\!n/4$}
\psfrag{K=N/2}{\hspace{-0.0cm}\scriptsize $k\!=\!n/2$}
\psfrag{K=N}{\hspace{-0.0cm}\scriptsize $k\!=\!n$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{1}{\hspace{-0.02cm}\scriptsize $1$}
\psfrag{2}{\hspace{-0.01cm}\scriptsize $2$}
\psfrag{4}{\hspace{-0.01cm}\scriptsize $4$}
\psfrag{6}{\hspace{-0.01cm}\scriptsize $6$}
\psfrag{8}{\hspace{-0.01cm}\scriptsize $8$}
\psfrag{10}{\hspace{-0.01cm}\scriptsize $10$}
\psfrag{12}{\hspace{-0.01cm}\scriptsize $12$}
\psfrag{14}{\hspace{-0.01cm}\scriptsize $14$}
\psfrag{16}{\hspace{-0.01cm}\scriptsize $16$}
\psfrag{18}{\hspace{-0.01cm}\scriptsize $18$}
\psfrag{20}{\hspace{-0.01cm}\scriptsize $20$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{120}{\hspace{-0.06cm}\scriptsize $120$}
\psfrag{140}{\hspace{-0.06cm}\scriptsize $140$}
\psfrag{160}{\hspace{-0.06cm}\scriptsize $160$}
\psfrag{180}{\hspace{-0.06cm}\scriptsize $180$}
\psfrag{200}{\hspace{-0.06cm}\scriptsize $200$}
}}
\hfil
\vspace{-0.05cm}
\subfloat[]{
\pstool[scale=0.55]{figures/FigF_SoI_vs_Lambda.eps}{
\psfrag{NormalizedSoI}{\hspace{0.35cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{Lambda}{\hspace{-0.1cm}\footnotesize Rate ($\lambda$)}
\psfrag{K=N/10AAA1}{\hspace{-0.0cm}\scriptsize $k\!=\!n/10$}
\psfrag{K=N/4}{\hspace{-0.0cm}\scriptsize $k\!=\!n/4$}
\psfrag{K=N/2}{\hspace{-0.0cm}\scriptsize $k\!=\!n/2$}
\psfrag{K=N}{\hspace{-0.0cm}\scriptsize $k\!=\!n$}
\psfrag{0}{\hspace{-0.01cm}\scriptsize $0$}
\psfrag{0.5}{\hspace{-0.06cm}\scriptsize $0.5$}
\psfrag{0.6}{\hspace{-0.06cm}\scriptsize $0.6$}
\psfrag{0.7}{\hspace{-0.06cm}\scriptsize $0.7$}
\psfrag{0.8}{\hspace{-0.06cm}\scriptsize $0.8$}
\psfrag{0.9}{\hspace{-0.06cm}\scriptsize $0.9$}
\psfrag{1}{\hspace{-0.02cm}\scriptsize $1$}
\psfrag{2}{\hspace{-0.01cm}\scriptsize $2$}
\psfrag{4}{\hspace{-0.01cm}\scriptsize $4$}
\psfrag{6}{\hspace{-0.01cm}\scriptsize $6$}
\psfrag{8}{\hspace{-0.01cm}\scriptsize $8$}
\psfrag{10}{\hspace{-0.01cm}\scriptsize $10$}
\psfrag{12}{\hspace{-0.01cm}\scriptsize $12$}
\psfrag{14}{\hspace{-0.01cm}\scriptsize $14$}
\psfrag{16}{\hspace{-0.01cm}\scriptsize $16$}
\psfrag{18}{\hspace{-0.01cm}\scriptsize $18$}
\psfrag{20}{\hspace{-0.01cm}\scriptsize $20$}
\psfrag{50}{\hspace{-0.06cm}\scriptsize $50$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{150}{\hspace{-0.06cm}\scriptsize $150$}
\psfrag{200}{\hspace{-0.06cm}\scriptsize $200$}
}}
\caption{The objective function $\mathcal{J}_{{\rm SoI}}$ versus rate $\lambda$ for the (a) EDT, (b) LDT, and (c) PDT scenarios and Zipf(100,0.4) pmf.}
\label{Fig4}
\end{figure}
Figure~\ref{Fig4} depicts the objective function versus the rate parameter $\lambda$ for different values of $k$. Increasing the input rate decreases $\mathcal{J}_{\rm SoI}$; however, this decrease diminishes and saturates at higher rate values. Furthermore, as the number of selected packets increases, lower input rates are required to reduce the penalty terms. For instance, in the EDT scenario, the lowest attained $\mathcal{J}_{\rm SoI}$ value is $60$, $57$, $71$, and $87$ for $k=\frac{n}{10}$, $\frac{n}{4}$, $\frac{n}{2}$, and $n$, respectively. For large $k$, the objective function attains high values for all input rates. Based on the analytical expressions derived throughout the paper, in the EDT case we find the globally optimal values $\lambda^* = 19.34$, $16.71$, $10.12$, and $5.83$ for $k=\frac{n}{10}$, $\frac{n}{4}$, $\frac{n}{2}$, and $n$, respectively.
\begin{figure}[t!]
\centering
\pstool[scale=0.55]{figures/FigG_SoI_vs_K_AB_main.eps}{
\psfrag{NormalizedSoI}{\hspace{0.3cm}\footnotesize $\mathcal{J}_{{\rm SoI}}$}
\psfrag{KofNsamples}{\hspace{-1.35cm}\footnotesize \rotatebox{15.55}{Number of selected packets ($k$)}}
\psfrag{AlphaBeta}{\hspace{0cm}\footnotesize ${\alpha}={\beta}$}
\psfrag{K=N/10AAA1}{\hspace{-0.0cm}\scriptsize $k\!=\!n/10$}
\psfrag{K=N/4}{\hspace{-0.0cm}\scriptsize $k\!=\!n/4$}
\psfrag{K=N/2}{\hspace{-0.0cm}\scriptsize $k\!=\!n/2$}
\psfrag{K=N}{\hspace{-0.0cm}\scriptsize $k\!=\!n$}
\psfrag{0}{\hspace{-0.02cm}\scriptsize $0$}
\psfrag{2}{\hspace{-0.02cm}\scriptsize $2$}
\psfrag{4}{\hspace{-0.02cm}\scriptsize $4$}
\psfrag{6}{\hspace{-0.02cm}\scriptsize $6$}
\psfrag{8}{\hspace{-0.02cm}\scriptsize $8$}
\psfrag{10}{\hspace{-0.02cm}\scriptsize $10$}
\psfrag{20}{\hspace{-0.01cm}\scriptsize $20$}
\psfrag{40}{\hspace{-0.01cm}\scriptsize $40$}
\psfrag{60}{\hspace{-0.01cm}\scriptsize $60$}
\psfrag{80}{\hspace{-0.01cm}\scriptsize $80$}
\psfrag{100}{\hspace{-0.06cm}\scriptsize $100$}
\psfrag{L01}{\hspace{-0.06cm}\scriptsize $10^1$}
\psfrag{L02}{\hspace{-0.06cm}\scriptsize $10^2$}
\psfrag{10x3}{\hspace{0.15cm}\scriptsize $\times 10^3$}
}
\vspace{-0.5cm}
\caption{The interplay among $\mathcal{J}_{{\rm SoI}}$, selected packets $k$ and codeword length cost parameters $\alpha, \beta$ in the EDT scenario with $n=100$ and $\lambda=1$.}
\label{Fig5}
\end{figure}
Figure~\ref{Fig5} plots the objective function $\mathcal{J}_{{\rm SoI}}$ versus the number of selected packets $k$ and the values of the cost parameters (i.e., ${\alpha}$, ${\beta}$) under the EDT scenario. Herein, the cost parameters are assumed to have equal values and $\lambda=1$. As expected, the optimal values of the coding cost parameters depend on the number of selected packets. For small $k$, the objective function continuously increases as the cost parameters increase. For large $k$, however, an increase of the cost parameters causes the objective function to first increase and then decrease. The interplay between the two terms of the objective function (timeliness penalty and coding cost) and the number of selected packets $k$ is summarized in Table~\ref{Tab1}, which shows the optimal values of $k$, $\alpha$, and $\beta$ for different input rates under the EDT scenario. We observe that as the input rate increases, the optimal $k$ decreases while the optimal values of the cost parameters increase. When the input rate is high, one has to assign larger penalties to the codeword length to ensure that the most important data are transmitted and allocated larger codewords.
\noindent
\renewcommand{\arraystretch}{1}
\begin{table}[t!]
\begin{center}
\caption{Optimal parameters under the EDT scenario.}\label{Tab1}
\begin{tabular}{ | c | c | c || c | c | c |}
\hline
\small \!\!$\lambda$\!\! & \small \!$k$\! & \small \!$\alpha=\beta$\! & \small \!\!$\lambda$\!\! & \small \!$k$\! & \small \!$\alpha=\beta$\!\\
\hline
\hline
\small $0.5$ & \footnotesize $20$ & \footnotesize $1.26$ & \small $10$ & \footnotesize $5$ & \footnotesize $2.5$\\
\hline
\small $1$ & \footnotesize $18$ & \footnotesize $1.58$ & \small $20$ & \footnotesize $2$ & \footnotesize $12.59$\\
\hline
\small $5$ & \footnotesize $10$ & \footnotesize $1.99$\\
\cline{1-3}
\end{tabular}
\medskip
\end{center}
\end{table}
\section{Conclusion}\label{Section5}
We studied the problem of timely source coding in status update systems, where the transmitter selects the packets generated by an information source based on their importance prior to sending them to a remote receiver. Introducing a semantics-aware metric that quantifies information timeliness, we determined the real codeword lengths that optimize a weighted sum of timeliness and a quadratic coding cost penalty. The main takeaway is that semantic filtering and source coding can significantly reduce the number of packets that have to be communicated while providing timely updates.
\bibliographystyle{IEEEtran}
Defining a gravitational action on a non-compact space is a long-standing
problem, since the action typically diverges. Although several prescriptions
based on the concept of a reference space have been suggested so far
\cite{gibbons}\cite{brown}\cite{hawking}, they are flawed: even though
the divergences can be eliminated by choosing an appropriate reference
space, it is impossible to embed a boundary with an arbitrary geometry.
Another drawback of the reference-space method is that different reference
spaces are needed for different boundary geometries, so that one cannot
define relative energies in a consistent manner.
Recently, a prominent prescription has been suggested \cite{bal} in the
context of the AdS/CFT correspondence \cite{mal}\cite{witten}\cite{gubser},
which can be understood as a realization of the holographic principle
\cite{thooft}\cite{susskind2}. According to the correspondence, the UV
divergences of the quantum field theory living on the boundary of an AdS
space are derived from the IR divergences of the bulk theory (the UV-IR
connection \cite{susskind}). The bulk action can therefore be
regularized by adding local counterterms \cite{witten}\cite{henn}.
On asymptotically AdS spaces, this approach gives an elegant expression for
the counterterm action in the form of an expansion in the AdS radius $\ell$
\cite{bal}\cite{hyun}\cite{emparan}\cite{kra}
\begin{eqnarray}
\label{cotactads}
{\tilde S} &=& -\frac{1}{8\pi G}\int_{\partial X}d^dx\sqrt{-g_0} \left\{
\frac{d-1}{\ell} + \frac{\ell}{2(d-2)}R \right.
\nonumber \\
&& \left. + \frac{\ell^3}{2(d-2)^2(d-4)}\left(
R_{ab}R^{ab} - \frac{d}{4(d-1)}R^2 \right) +
\cdots \right\}.
\end{eqnarray}
For an even dimensional boundary, however, one encounters
a logarithmically divergent term in evaluating the bulk action functional.
If the counterterm action is taken to include this log term in order to
obtain a finite action, problematic results arise in calculating
the boundary stress-energy tensor \cite{emparan}. Even though the logarithmic
divergence obstructs a finite regularized action, it provides
a remarkable consistency check of the AdS/CFT correspondence
\cite{witten}\cite{henn}. In other words, because the conformal anomaly of a
$d$-dimensional conformal field theory coupled to background gravity
comes from logarithmic UV divergences \cite{birrell}, evaluation of the
conformal anomaly in this scheme becomes a nontrivial check of the AdS/CFT
correspondence. (For the holographic conformal anomaly of dilaton coupled
conformal field theory, see Ref.\cite{odintsov}.)
The counterterm action of Eq.(\ref{cotactads}) has also been constructed
from the Gauss-Codazzi equations through an iterative process \cite{kra}.
In Ref.\cite{kra}, counterterm actions for asymptotically flat (AF)
spaces were also investigated. However, the procedure adapted for
AdS spaces cannot be simply carried over to the AF case because of the
mathematical difficulty caused by the non-linearity of the Gauss-Codazzi
equations. Taking an alternative approach, the authors obtained a counterterm
action for AF spaces with $S^{d-n}\times \mbox{\rm$\mbox{I}\!\mbox{R}$}^n$ boundary geometries
\begin{equation}
\label{cotactaf}
\tilde{S}= -\frac{1}{8\pi G} \int_{\partial X} d^d x
\sqrt{-g_0} \sqrt{\frac{R^3}{R^2 - R_{ab}R^{ab}}}.
\end{equation}
Very recently, a different prescription to construct the counterterm
action has been suggested in Ref.\cite{solo}. In that prescription, a
length-dimensional parameter analogous to the radius of AdS space is defined,
so that the counterterm actions for asymptotically flat and AdS spaces
are consistently constructed as expansions in the new length parameter.
In this paper, we introduce another method to construct the counterterm
actions in (\ref{cotactads}) and (\ref{cotactaf}). In this construction,
we adopt the ADM formalism and show that the counterterm action can be
written intrinsically, in terms of the intrinsic boundary geometry.
Using our new expression for the counterterm action, we obtain a general form
of the counterterm action valid for any $d$-dimensional spherical
boundary. In this description, we also derive the holographic conformal
anomaly in {\it arbitrary} dimensions. It is also shown that the counterterm
action for AF spaces can be obtained from the AdS description simply by
taking the limit $\ell \rightarrow \infty$.
On the other hand, a counterterm action for an AF space with nontrivial
boundary geometry is examined. Our example is the $D$-dimensional
generalization of the Kerr metric \cite{myers} with the mass parameter
set to zero. It is the metric of an AF space in spheroidal coordinates. This
example was considered in Ref.\cite{kra}, where the authors showed
that for $d > 6$ the counterterm action in (\ref{cotactaf}), based on a
round sphere boundary, does not eliminate all divergent terms.
In Ref.\cite{solo}, it was shown that the leading divergent terms
due to the deviation from a round sphere can be canceled by introducing
an additional counterterm action whose form is similar to the
squared boundary curvature counterterms in Eq.(\ref{cotactads}).
Here, we shall show that the additional counterterms can be conjectured in
a somewhat interesting scheme as well. Our derivation of the counterterm
action does not use the full Einstein equations; instead, only the
normal-normal projection equation, obtained by projecting the Einstein
equations onto the boundary in normal directions, is used. For a simple
boundary geometry (round sphere), the other equations, the
tangential-tangential and tangential-normal projections, are not crucial,
because they become trivial or dummy in the procedure. For a nontrivial
boundary geometry, however, the full Einstein equations must be used. Our
observation is that, taking only the normal-normal projection equation,
we obtain additional divergent terms in the boundary action value (BAV),
which are logarithmic. Thus, the conformal invariance of the regularized
action (RA) would be broken by a conformal anomaly. However,
we shall show that for the example of $d=4$ the anomaly is proportional
to $\Box R$. That is, additional counterterms proportional to the squared
boundary curvature can be added to the counterterm action in (\ref{cotactaf}),
and the conformal invariance is recovered. We also briefly discuss
these aspects in comparison with the AdS descriptions.
Our paper is organized as follows. In Sect.2, the counterterm
action is constructed in the ADM formulation.
Examples of asymptotic AdS spaces are considered in Sect.3, where
a counterterm action for asymptotic AdS space with $S^d$ boundary is
constructed; in this example, the conformal anomaly in arbitrary dimensions
is obtained. In Sect.4, the relationship between the counterterm actions for
asymptotic AdS and flat spaces is discussed. For an AF space in spheroidal
coordinates, the divergent terms are evaluated, and additional counterterms
that eliminate the divergent terms due to the deviation from a round sphere
are conjectured from the observation of a logarithmic divergence. We also
briefly discuss the similarities between these aspects and the AdS
descriptions in a holographic sense. Discussions and a summary are contained
in Sect.5.
\section{Holographic Counterterm Actions}
The $(d+1)$-dimensional gravitational action with cosmological
constant $ \Lambda =- d(d-1)/(2\ell^2)$ is given by
\begin{equation}
\label{action}
S = \frac{1}{16 \pi G} \int_{X} d^{d+1} x \sqrt{- \hat{G}}
\left( \hat{R} + \frac{d(d-1)}{\ell^2} \right)
-\frac{1}{8 \pi G} \int_{\partial X}d^d x \sqrt{-g} \Theta ,
\end{equation}
where $g_{ab}$ is the boundary metric and $\Theta $ is the trace of the
extrinsic curvature of the $d$-dimensional timelike boundary $\partial X$,
defined by $\Theta_{ab} = - g_a^\mu \nabla_\mu n_b $. Here $\nabla $
denotes the covariant derivative on the $(d+1)$-dimensional manifold
$X$ and $n^\mu $ is the outward unit normal to the boundary $\partial X$.
The boundary term in Eq.(\ref{action}), the so-called Gibbons-Hawking
term, is required for a well-defined variational principle.
Our purpose is to add another proper surface integral to the
action in (\ref{action}), so that the action becomes finite in the
limit where the boundary is taken to infinity. According to the counterterm
subtraction approach, the additional surface integral
must be written in terms of the intrinsic boundary geometry.
For this procedure, we take the ADM formulation as a guideline for the
construction of the counterterm action. As will be seen in the following,
the ADM formulation guarantees that the counterterm action is written
in terms of the intrinsic boundary geometry.
To rewrite the action (\ref{action}) in a canonical form,
we first take a metric given by
\begin{equation}
\label{metric}
\hat{G}_{\mu\nu}dx^{\mu} dx^{\nu} = N^2d\rho^2 + g_{ab}dx^a dx^b,
\end{equation}
where $N^2 =N^2(\rho )$ and $g=g(\rho, x^a)$. In this coordinate
system, the unit normal to the boundary is given by
$n_\mu = N \delta^\rho_\mu $. Then, following the standard ADM procedure,
the canonical form of the action (\ref{action}) becomes
\begin{equation}
\label{canaction}
S =\int_{X} d^{d+1}x (\pi^{ab}g^{\prime}_{ab} - N{\cal H}_\rho)
\equiv \int_X d^{d+1}x {\cal L},
\end{equation}
where $\pi^{ab}= \delta {\cal L}/\delta g_{ab}^{\prime}$
is the momentum density conjugate to $g_{ab}$ and the prime
denotes the derivative with respect to $\rho$. The `Hamiltonian' density
${\cal H}_\rho$ is given by
\begin{equation}
\label{hamil}
{\cal H}_\rho = \frac{16 \pi G}{\sqrt{-g}}
\left( \frac{\pi^2}{d-1} - \pi_{ab}\pi^{ab} \right)
- \frac{\sqrt{-g}}{16 \pi G} \left( R + \frac{d(d-1)}{\ell^2} \right),
\end{equation}
where $R$ is the $d$-dimensional scalar curvature of the boundary.
The equation ${\cal H}_\rho = 0$ generates reparametrizations of the space
coordinate $\rho$. In fact, this equation is one of the Gauss-Codazzi
equations, the one obtained by projecting the Einstein equations onto the
boundary in normal directions.
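In terms of the extrinsic curvature, writing the momentum density (up to an
overall sign convention) as $\pi^{ab} = \frac{\sqrt{-g}}{16\pi G}
(\Theta g^{ab} - \Theta^{ab})$, so that $\pi = \frac{\sqrt{-g}}{16\pi G}
(d-1)\Theta$, a short algebraic check shows that the constraint is
equivalent to
\begin{equation}
{\cal H}_\rho = 0 \quad \Longleftrightarrow \quad
\Theta^2 - \Theta_{ab}\Theta^{ab} = R + \frac{d(d-1)}{\ell^2},
\end{equation}
which is indeed the normal-normal projection of the Einstein equations with
$\Lambda = -d(d-1)/(2\ell^2)$.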
Using the constraint equation ${\cal H}_\rho = 0$, the BAV
evaluated from the action (\ref{action}) on the boundary
$\rho =\rho_0$ takes the simple form
\begin{eqnarray}
\label{beact}
S_{cl}&=& \int_{\partial X} d^d x \left\{
\frac{1}{8\pi G}\int^{\rho_0} d\rho
N\sqrt{-g} \left( R + \frac{d(d-1)}{\ell^2} \right) \right\}
\nonumber \\
&\equiv & \int_{\partial X} d^d x A(x^a; \rho_0).
\end{eqnarray}
So, according to the counterterm subtraction approach, the regularized
action $S_{RA}$ is defined by
\begin{equation}
\label{effact}
S_{RA} \equiv S - \tilde{S},
\end{equation}
where the counterterm action $\tilde{S}$ is given by
\begin{equation}
\label{count}
\tilde{S} = - \int_{\partial X} d^d x Div \left(A(x^a;\rho_0)\right)
\equiv - \int_{\partial X} d^d x \sqrt{-g_0}\bar{A}^{div}(x^a;\rho_0),
\end{equation}
where $Div$ means picking out the divergent terms after the
$\rho$-integration and $g_0$ is the induced metric on the boundary.
That the counterterm action be a coordinate-invariant functional of the
intrinsic boundary geometry is an important requirement of the counterterm
subtraction method: it is the only way to eliminate the divergence of a
gravitational action without disturbing the equations of motion or the
symmetries \cite{kra}. In Eq.(\ref{count}), the counterterm action (the
functional $A(x^a;\rho_0)$) is explicitly given in terms of the intrinsic
boundary geometry. (The `lapse' function $N$ can be absorbed into the space
coordinate $\rho$ by a coordinate redefinition.) In fact, the divergent terms
in the BAV originate from the Gibbons-Hawking term, which is the surface
integral of the extrinsic curvature, as well as from the bulk part. In the
procedure, the extrinsic curvature term is canceled by a term extracted from
the bulk part, and the divergent structure of the BAV in (\ref{beact}) is
determined by the terms originating from the bulk part, which are expressed
in terms of the intrinsic boundary geometry.
On the other hand, it must also be noted that in the above procedure we have
not used the full Einstein equations in obtaining the counterterm action;
only the constraint equation, ${\cal H}_\rho = 0$, has been used.
In the following, we shall show that for a simple boundary
geometry (round sphere), the other Gauss-Codazzi equations become trivial.
Moreover, for a nontrivial boundary geometry, this scheme leads
us to a somewhat interesting observation, presented in Section 4.
\section{AdS Space and Holographic Anomaly}
The counterterm action for asymptotic AdS spaces in (\ref{cotactads}) is
useful for various boundary geometries. However, the evaluation of counterterm
actions for higher-dimensional boundaries is not manageable because of its
mathematical difficulty. Using the expression for the counterterm action
given in (\ref{count}), we consider a simple but important example, Euclidean
AdS space with $S^d$ boundary, and obtain a general form of the counterterm
action valid for any $d$-dimensional boundary.
The Euclidean AdS space with $S^d$ boundary is described by the line
element
\begin{equation}
\label{eclads}
\hat{G}_{\mu\nu}dx^\mu dx^\nu = \left( 1 + \frac{r^2}{\ell^2} \right)^{-1}dr^2
+ r^2 d\Omega^2_d.
\end{equation}
The functional $A$ in Eq.(\ref{beact}) for the metric (\ref{eclads}) becomes
\begin{eqnarray}
\label{aftn1}
A(x^a;r_0) &=& \frac{1}{8\pi G} \int^{r_0} dr \sqrt{\gamma_d}
r^d \left(1+ \frac{r^2}{\ell^2} \right)^{-1/2} \left(R +
\frac{d(d-1)}{\ell^2} \right)
\nonumber \\
&=& - \frac{(d(d-1))^{(d+2)/2}}{16\pi G \ell} \sqrt{\gamma_d}
\int^{R_0} dR R^{-(d+2)/2}
\left(1 + \frac{\ell^2 R}{d(d-1)} \right)^{1/2},
\end{eqnarray}
where $R_0$ denotes the scalar curvature on the boundary and $\gamma_d $
is the metric of the $d$-dimensional unit sphere. In the second line
of Eq.(\ref{aftn1}), $d(d-1)/r^2 =R$ was used.
After some algebraic calculation, we obtain
\begin{eqnarray}
\label{aftn1even}
A(x^a;r_0) &=& - \frac{(d(d-1))^{(d+2)/2}}{16\pi G \ell} \sqrt{\gamma_d}
\left(
\frac{2}{d(d-1)} \sqrt{1+ \frac{\ell^2 R}{d(d-1)}} \right.
\\
&\times & \left. \left\{
- \frac{d-1}{R^{d/2}} + \sum^{(d-2)/2}_{k=1} \left[
\left(- \frac{\ell^2}{d(d-1)} \right)^k \prod^k_{m=1} \left(
\frac{d-2m+1}{d-2m} \right) R^{-(d-2k)/2} \right] \right\} \right.
\nonumber \\
&-&\left. \frac{1}{d} \left( - \frac{\ell^2}{d(d-1)} \right)^{d/2}
\prod^{(d-2)/2}_{k=1} \left(\frac{d-2k-1}{d-2k} \right)
\ln{\frac{\sqrt{1+\ell^2 R/(d(d-1))} -1}{\sqrt{1+ \ell^2 R/(d(d-1))} +1}}
\right)
\nonumber
\end{eqnarray}
for even $d$, and
\begin{eqnarray}
\label{aftn1odd}
A(x^a;r_0) &=& \frac{d(d(d-1))^{d/2}}{8\pi G \ell} \sqrt{\gamma_d}
\left(1+ \frac{\ell^2 R}{d(d-1)}\right)^{3/2}
\nonumber \\
&& \times \left(- \frac{\ell^2}{d(d-1)} \right)^{(d-5)/2}
\sum^{(d-3)/2}_{k=0} \prod^k_{m=0} \left(
\frac{d-2m-1}{d-2m} \right)
\end{eqnarray}
for odd $d$. (After Eq.(\ref{aftn1even}), we drop the subscript `0' of
the scalar curvature for simplicity.) Then the counterterm action for
AdS spaces with $S^d$ boundary in {\it arbitrary} dimensions is given
by a polynomial in the boundary scalar curvature $R$ as follows:
\begin{eqnarray}
\label{ecladscnt}
\tilde{S} &=& \frac{1}{8\pi G}\int_{\partial X}d^dx\sqrt{g_0} \left(
\frac{d-1}{\ell} + \frac{\ell}{2(d-2)}R - \frac{\ell^3}{8d(d-1)(d-4)}R^2
\right.
\nonumber \\
&&\left. + \frac{\ell^5}{16(d(d-1))^2(d-6)}R^3 + \cdots \right),
\end{eqnarray}
where the terms in the parenthesis of Eq.(\ref{ecladscnt}) are terminated by
\begin{equation}
\label{terme}
\frac{1}{2}(-1)^{(d+2)/2}
\prod^{(d+2)/2}_{k=1} \left(\frac{2k-3}{2k}\right)
\frac{\ell^{d+1}}{(d(d-1))^{d/2}}R^{(d+2)/2},
\end{equation}
for even $d$, and
\begin{equation}
\label{termo}
(-1)^{(d+1)/2}
\prod^{(d-1)/2}_{k=1}\left( \frac{2k-3}{2k}\right)
\frac{\ell^{d-2}}{(d(d-1))^{(d-3)/2}}R^{(d-1)/2},
\end{equation}
for odd $d$. Using the relation $R_{ab}R^{ab} =R^2/d $,
it can be shown that the counterterm action in (\ref{ecladscnt}) is equivalent
to Eq.(\ref{cotactads}). That is, for AdS spaces with $S^d$ boundary,
the counterterm action in (\ref{cotactads}) can be written as a polynomial in
the boundary scalar curvature $R$, terminated by the terms given in
(\ref{terme}) or (\ref{termo}).
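This equivalence can be made explicit order by order. As an independent check
(ours, using the computer algebra system sympy; not part of the derivation),
assuming only $R_{ab}R^{ab}=R^2/d$ on $S^d$, the quadratic-curvature term of
Eq.(\ref{cotactads}) reduces exactly to the $R^2$ term of Eq.(\ref{ecladscnt}):

```python
import sympy as sp

d, ell, R = sp.symbols('d ell R', positive=True)

# Quadratic-curvature term of Eq. (cotactads), with R_ab R^ab = R^2/d on S^d
term_ads = ell**3/(2*(d - 2)**2*(d - 4))*(R**2/d - d*R**2/(4*(d - 1)))

# Corresponding R^2 term of the polynomial form, Eq. (ecladscnt)
term_poly = -ell**3*R**2/(8*d*(d - 1)*(d - 4))

assert sp.simplify(term_ads - term_poly) == 0
```

The cancellation uses $4(d-1) - d^2 = -(d-2)^2$, which removes the explicit
$(d-2)^2$ factor.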
On the other hand, for an even-dimensional boundary, the counterterm action
in (\ref{ecladscnt}) fails to eliminate all divergent terms appearing
in the BAV. Instead, the RA contains the logarithmically divergent term
\begin{equation}
\label{logterm1}
\frac{1}{16 \pi G} \int d^dx \sqrt{g_0}(-1)^{d/2}
\prod^{d/2}_{k=1} \left(\frac{2k-3}{2k}\right)
\frac{\ell^{d-1}}{(d(d-1))^{(d-2)/2}}R^{d/2} \ln{R}.
\end{equation}
This has already been understood in the context of the AdS/CFT correspondence
\cite{witten}\cite{henn}: the regularization of the BAV by introducing local
counterterms may break conformal invariance, and the RA is left with a
logarithmically divergent term. According to this prescription, considering
a scale transformation $\delta r = r \delta \epsilon$ for an infinitesimal
constant parameter $\delta \epsilon$, the holographic conformal anomaly
${\cal A}$ of the dual CFT coupled to background gravity with $S^d$ boundary
is given by
\begin{equation}
\label{anomaly1}
{\cal A} = -\frac{1}{8\pi G}(-1)^{d/2}
\prod^{d/2}_{k=1} \left(\frac{2k-3}{2k}\right)
\frac{\ell^{d-1}}{(d(d-1))^{(d-2)/2}}R^{d/2}.
\end{equation}
The conformal anomaly in arbitrary dimensions has been given in a geometric
description \cite{deser}. Restricted to a CFT on the background $S^d$
geometry, Eq.(\ref{anomaly1}) is thus an alternative expression for the
conformal anomaly in arbitrary dimensions.
For an $S^2$ boundary, Eq.(\ref{anomaly1}) recovers the well-known
result
\begin{equation}
\label{anod2}
{\cal A}_{d=2} = -\frac{\ell}{16 \pi G}R.
\end{equation}
Comparing with the $(1+1)$-dimensional anomaly on a surface of radius $\ell$,
$-1/(8\pi G \ell) = -c/(12\pi \ell^2)$, the central
charge becomes $c = 3\ell/(2G)$.
From Eq.(\ref{anomaly1}) for $d=4$, we find that the conformal anomaly
agrees with that of Ref.\cite{henn}:
\begin{equation}
\label{anod4}
{\cal A}_{d=4} = \frac{\ell^3}{768 \pi G}R^2 = \frac{\ell^3}{8 \pi G}
\left(-\frac{1}{8}R_{ab}R^{ab}+ \frac{1}{24}R^2 \right).
\end{equation}
The conformal anomaly of ${\cal N}=4$ super
Yang-Mills theory on $S^4$ is $3N^2/(8\pi^2 \ell^4)$.
Comparing with the anomaly on this boundary from Eq.(\ref{anod4}),
$3/(16\pi G \ell)$, we obtain the expected result
\begin{equation}
\label{rankn}
N^2= \frac{\pi \ell^3}{2 G},
\end{equation}
where $N$ is the rank of the gauge group of the dual ${\cal N}=4$
supersymmetric $d=4$ $SU(N)$ YM theory.
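The matching is elementary arithmetic; a quick sympy sketch (our own check,
writing $N^2$ as the symbol \texttt{N2}) equates the field-theory anomaly
$3N^2/(8\pi^2\ell^4)$ with the gravitational value $3/(16\pi G\ell)$:

```python
import sympy as sp

ell, G = sp.symbols('ell G', positive=True)
N2 = sp.Symbol('N2', positive=True)  # stands for N^2

# Equate the CFT anomaly 3 N^2 / (8 pi^2 l^4) with the bulk value 3/(16 pi G l)
sol = sp.solve(sp.Eq(3*N2/(8*sp.pi**2*ell**4), 3/(16*sp.pi*G*ell)), N2)[0]

assert sp.simplify(sol - sp.pi*ell**3/(2*G)) == 0
```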
Finally, it can be seen that for a six-dimensional boundary, the anomaly
in (\ref{anomaly1}) is equivalent to that given in \cite{henn}:
\begin{eqnarray}
\label{anod6}
{\cal A}_{d=6}&=& - \frac{\ell^5}{115200 \pi G} R^3
\nonumber \\
&=& -\frac{1}{16 \pi G} \left(\frac{\ell^5}{64}\right) \left(
\frac{1}{2} R R_{ab} R^{ab}
- \frac{3}{15} R^3
- R^{ab} R_{acbd} R^{cd} \right. \nonumber \\
&& \left.
+ \frac{1}{5} R^{ab} D_a D_b R
- \frac{1}{2} R^{ab} \Box R_{ab}
+ \frac{1}{20} R \Box R \right).
\end{eqnarray}
In fact, since we are concerned with the $S^6$ boundary, the derivative terms
in the third line vanish. On the other hand, Eq.(\ref{anod6}) can be
verified by considering the central charge of $N$ coincident M5-branes in
the large $N$ limit, which has been shown to be proportional
to $N^3$ \cite{gubser2}. The anomaly on an $S^6$ boundary of radius
$\ell$, $15/(64 \pi G \ell)$, is then proportional to $N^3/(\pi^4 \ell^6)$,
and we find \cite{mal}
\begin{equation}
\label{centn}
N^3 \sim \frac{\pi^3 \ell^5}{G}.
\end{equation}
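As a cross-check (ours, not part of the derivation), the closed-form anomaly
(\ref{anomaly1}) reproduces all three special cases above; the following
sympy sketch confirms the coefficients of Eqs.(\ref{anod2}), (\ref{anod4}),
and (\ref{anod6}) for $d=2,4,6$:

```python
import sympy as sp

ell, R, G = sp.symbols('ell R G', positive=True)

def anomaly(d):
    """Holographic conformal anomaly of Eq. (anomaly1) on an S^d boundary."""
    prod = sp.S(1)
    for k in range(1, d//2 + 1):
        prod *= sp.Rational(2*k - 3, 2*k)
    return (-(-1)**(d//2)*prod*ell**(d - 1)*R**sp.Rational(d, 2)
            / (8*sp.pi*G*(d*(d - 1))**sp.Rational(d - 2, 2)))

# d = 2, 4, 6 reproduce the known results quoted in the text
assert sp.simplify(anomaly(2) + ell*R/(16*sp.pi*G)) == 0
assert sp.simplify(anomaly(4) - ell**3*R**2/(768*sp.pi*G)) == 0
assert sp.simplify(anomaly(6) + ell**5*R**3/(115200*sp.pi*G)) == 0
```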
Before ending this section, it is useful for the next section
to consider another Euclidean AdS space with a
different boundary geometry, $S^{d-1} \times S^1$,
\begin{equation}
\label{schads}
ds^2 =
\left( 1+ \frac{r^2}{\ell^2} \right) d\tau^2
+ \left( 1+ \frac{r^2}{\ell^2} \right)^{-1}dr^2
+ r^2 d\Omega_{d-1}^2~.
\end{equation}
For the $S^{d-1} \times S^1$ boundary, the functional $A$ in (\ref{beact})
becomes
\begin{equation}
\label{aftn2}
A(x^a;r_0) = \frac{\sqrt{\gamma_{d-1}}}{8\pi G}
\left(\frac{d-1}{\ell^2} \right)
\left( \frac{(d-1)(d-2)}{R} \right)^{d/2} \left(
1+ \frac{\ell^2 R}{(d-1)(d-2)} \right).
\end{equation}
Since all the terms in the expansion of Eq.(\ref{aftn2}) are divergent for
$d>2$, the counterterm action is just the negative of $S_{cl}$ in
(\ref{beact}) \cite{kra}
\begin{equation}
\label{schadscnt}
\tilde{S}= \frac{1}{8\pi G}\int_{\partial X} d^d x \sqrt{g_0}
\left(\frac{d-1}{\ell}
\right)\left(1+ \frac{\ell^2 R}{(d-1)(d-2)} \right)^{1/2}.
\end{equation}
It can be shown that, using $R_{ab}R^{ab} = R^2/(d-1)$ and expanding in
$\ell$, the counterterm action in (\ref{schadscnt}) is equivalent to
Eq.(\ref{cotactads}). However, it must be noted that while the counterterm
action for the $S^d$ boundary is given by a finite sum of the series in
(\ref{cotactads}), for the $S^{d-1} \times S^1$ boundary it is given by an
infinite sum. As mentioned in Ref.\cite{kra}, in this process the divergent
factors $1/(d-4)$, $1/(d-6)$, $\cdots$ in (\ref{cotactads}) are canceled.
Thus, while the conformal invariance of the RA for the $S^d$ boundary is
broken by the anomaly in (\ref{anomaly1}), for the $S^{d-1} \times S^1$
boundary the RA remains conformally invariant.
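The cancellation of the $1/(d-4)$ factor can be made explicit at the first
nontrivial order. A sympy sketch (our own check, under the assumption
$R_{ab}R^{ab}=R^2/(d-1)$ on $S^{d-1}\times S^1$) expands the square root in
(\ref{schadscnt}) and compares its $R^2$ coefficient with the corresponding
term of (\ref{cotactads}):

```python
import sympy as sp

d, ell, R = sp.symbols('d ell R', positive=True)

# Counterterm density of Eq. (schadscnt) for the S^{d-1} x S^1 boundary
density = (d - 1)/ell*sp.sqrt(1 + ell**2*R/((d - 1)*(d - 2)))

# Expand in powers of the boundary curvature R and read off the R^2 term
expansion = sp.series(density, R, 0, 3).removeO()
coeff_R2 = sp.simplify(expansion.coeff(R, 2))

# Same order of Eq. (cotactads) with R_ab R^ab = R^2/(d-1):
# the apparent 1/(d-4) pole cancels against (4-d) in the numerator
coeff_ads = sp.simplify(ell**3/(2*(d - 2)**2*(d - 4))
                        *(sp.S(1)/(d - 1) - d/(4*(d - 1))))

assert sp.simplify(coeff_R2 - coeff_ads) == 0
assert sp.simplify(coeff_R2 + ell**3/(8*(d - 1)*(d - 2)**2)) == 0
```

Both coefficients equal $-\ell^3/(8(d-1)(d-2)^2)$, with no $1/(d-4)$ factor
left, as stated above.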
\section{AF Space and Holographic Anomaly}
Now consider counterterm actions for asymptotically flat spaces.
In Ref.\cite{kra}, it was shown that the counterterm action for AF
spaces cannot be obtained by taking the limit $\ell \rightarrow \infty$
in the procedure adapted for AdS spaces, the iterative process using the
Gauss-Codazzi equations, because of its mathematical difficulty. In our
procedure, however, they are obtained simply by taking the
$\ell \rightarrow \infty$ limit of the functionals $A$:
\begin{equation}
\label{afcnt1}
\tilde{S}
= -\frac{1}{8\pi G} \int_{\partial X} d^d x
\sqrt{-g_0} \sqrt{\frac{dR}{d-1}}
\end{equation}
in (\ref{aftn1even}) and (\ref{aftn1odd}), and
\begin{equation}
\label{afcnt2}
\tilde{S}
= -\frac{1}{8\pi G} \int_{\partial X} d^d x
\sqrt{-g_0} \sqrt{\frac{d-1}{d-2}R}
\end{equation}
in (\ref{aftn2}). In Eqs.(\ref{afcnt1}) and (\ref{afcnt2}), the
counterterm actions are written in the Lorentzian signature.
As given in Ref.\cite{kra}, the counterterm actions in (\ref{afcnt1}) and
(\ref{afcnt2}) can be written as
\begin{equation}
\label{afcntgen}
\tilde{S}= -\frac{1}{8\pi G} \int_{\partial X} d^d x
\sqrt{-g_0} \sqrt{\frac{R^3}{R^2 - R_{ab}R^{ab}}}.
\end{equation}
In fact, the expression for the counterterm action for AF spaces in
(\ref{afcntgen}) is more general than those in (\ref{afcnt1}) and
(\ref{afcnt2}), because it is valid for AF spaces with $S^{d-n}
\times \mbox{\rm$\mbox{I}\!\mbox{R}$}^n$ boundary geometries described by the metric
\begin{equation}
\label{afmetric}
\hat{G}_{\mu\nu}dx^\mu dx^\nu = (-dt^2 + dx^2_1+ \cdots + dx^2_{n-1})
+ dr^2 + r^2d\Omega^2_{d-n}.
\end{equation}
For AF spaces described by the metric (\ref{afmetric}), the functional $A$
becomes
\begin{equation}
\label{aftnaf}
A(x^a; \rho_0) = \frac{1}{8\pi G}\int^{\rho_0} d\rho N\sqrt{-g} R,
\end{equation}
and then the counterterm action is
\begin{eqnarray}
\label{afcnt3}
\tilde{S} &=& -\frac{1}{8\pi G}\int_{\partial X} d^d x \sqrt{-g_0}
\sqrt{\frac{d-n}{d-n-1}R}
\nonumber \\
&=& -\frac{1}{8\pi G} \int_{\partial X} d^d x
\sqrt{-g_0} \sqrt{\frac{R^3}{R^2 - R_{ab}R^{ab}}}.
\end{eqnarray}
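This reduction is easy to verify. For the metric (\ref{afmetric}) only the
spherical factor carries curvature, so $R=(d-n)(d-n-1)/r^2$ and
$R_{ab}R^{ab}=R^2/(d-n)$; a sympy sketch (our own check, writing
$m \equiv d-n$ as a positive symbol) confirms that the general density
(\ref{afcntgen}) collapses to the first line of (\ref{afcnt3}), which is
just the flat-space extrinsic curvature trace $m/r$:

```python
import sympy as sp

# m = d - n is the dimension of the spherical boundary factor S^{d-n}
m, r = sp.symbols('m r', positive=True)

R = m*(m - 1)/r**2                         # only S^{d-n} carries curvature
ricci_sq = R**2/m                          # R_ab R^ab on S^{d-n} x R^n
general = sp.sqrt(R**3/(R**2 - ricci_sq))  # density of Eq. (afcntgen)
special = sp.sqrt(m*R/(m - 1))             # first line of Eq. (afcnt3)

assert sp.simplify(general**2 - special**2) == 0
assert sp.simplify(general - m/r) == 0     # the flat-space trace Theta = m/r
```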
Up to now, we have considered counterterm actions for manifolds with a simple
boundary geometry (round sphere). We now examine the divergences of the BAV
caused by the deviation of the boundary from a round sphere. In
Ref.\cite{kra}, the AF space in spheroidal coordinates given by
\begin{equation}
\label{kerrmz}
\hat{G}_{\mu\nu} dx^\mu dx^\nu= -dt^2
+ {\rho^2 \over r^2 + a^2} dr^2 + \rho^2 d\theta^2
+ \sin^2{\theta} (r^2 +a^2) d\phi^2
+ r^2 \cos^2{\theta} d\Omega_{d-3}^2
\end{equation}
was investigated. This space is obtained by setting the mass to zero in the
higher-dimensional Kerr metric \cite{myers}
\begin{eqnarray}
\label{kerrbh}
\hat{G}_{\mu\nu}dx^\mu dx^\nu &=&
-\frac{\Delta}{\rho^2}\left(dt - a\sin^2{\theta} d\phi \right)^2
+\frac{\sin^2{\theta}}{\rho^2}\left(adt - (r^2 + a^2)d\phi \right)^2
\nonumber \\
&& +\frac{\rho^2}{\Delta}dr^2 + \rho^2 d\theta^2
+ r^2 \cos^2{\theta} d\Omega_{d-3}^2,
\end{eqnarray}
where $\rho^2 =r^2 + a^2 \cos^2{\theta}$, $\Delta = r^2 -2mGr^{4-d} +a^2$, and
$m$ and $a$ are the black hole mass and the angular momentum per
unit mass, respectively. It is important to note that the metric in
(\ref{kerrmz}) does not describe the asymptotic spacetime of the Kerr black
hole in (\ref{kerrbh}), because in that process one encounters a naked
singularity. It is just the flat spacetime metric of $n=1$ in
Eq.(\ref{afmetric}), written in spheroidal coordinates.
The functional $A(x^a;r_0)$ for the metric (\ref{kerrmz}) becomes
\begin{eqnarray}
\label{aftnkerrmz}
A(x^a;r_0) &=& \frac{\sqrt{\gamma_{d-3}}}{8 \pi G}
\int^{r_0}dr r^{d-3} \left[
\frac{2a^2\left(
(d-3)\sin^2{\theta} - \cos^2{\theta} \right)}{
r^2 + a^2\cos^2{\theta}}
\right.
\nonumber \\
&&\left. +(d-1)(d-2) + (d-3)(d-4)\frac{a^2}{r^2}
\right]\sin{\theta}\cos^{d-3}{\theta}
\nonumber \\
&=& \frac{\sqrt{\gamma_{d-3}}}{8 \pi G}r_0^{d-2} \left[ (d-1) +\left(
d-3 + \frac{2((d-3)\sin^2{\theta} - \cos^2{\theta})}{d-4} \right)
\frac{a^2}{r_0^2} \right.
\nonumber \\
&&\left.-\frac{2\cos^2{\theta}((d-3)\sin^2{\theta} - \cos^2{\theta})}{d-6}
\frac{a^4}{r_0^4} + \cdots\right]\sin{\theta}
\cos^{d-3}{\theta},
\end{eqnarray}
where the divergent terms in the bracket are terminated by
\begin{equation}
\label{termtereven}
2a^2\left(
(d-3)\sin^2{\theta} - \cos^2{\theta} \right)
(- a^2\cos^2{\theta})^{(d-4)/2}r_0^{-(d-2)} \ln{r_0}
\end{equation}
in even $d$, and
\begin{equation}
\label{termterodd}
2a^2\left(
(d-3)\sin^2{\theta} - \cos^2{\theta} \right)
(- a^2\cos^2{\theta})^{(d-5)/2} r_0^{-(d-3)}
\end{equation}
in odd $d$, respectively. In the above calculation, the $d$-dimensional scalar
curvature $R$ is
\begin{eqnarray}
\label{curkerr}
R &=& \frac{2a^2\left((d-3)\sin^2{\theta} - \cos^2{\theta}\right)}{
(r^2 + a^2\cos^2{\theta})^2} + \frac{(d-3)(d-4)}{r^2\cos^2{\theta}}
\nonumber \\
&&+\frac{\left(2(2d-5) -(d-3)(d-4)\tan^2{\theta}\right)}{
r^2 + a^2\cos^2{\theta}}.
\end{eqnarray}
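As a consistency check of Eq.(\ref{curkerr}) (our own, not part of the
original calculation), at $a=0$ the boundary reduces to
$\mbox{\rm$\mbox{I}\!\mbox{R}$}\times S^{d-1}$ of radius $r$, whose scalar
curvature must be $(d-1)(d-2)/r^2$; a sympy sketch verifies this:

```python
import sympy as sp

d, r, th = sp.symbols('d r theta', positive=True)
a = sp.symbols('a', positive=True)

# Boundary scalar curvature of Eq. (curkerr)
R = (2*a**2*((d - 3)*sp.sin(th)**2 - sp.cos(th)**2)
     /(r**2 + a**2*sp.cos(th)**2)**2
     + (d - 3)*(d - 4)/(r**2*sp.cos(th)**2)
     + (2*(2*d - 5) - (d - 3)*(d - 4)*sp.tan(th)**2)
     /(r**2 + a**2*sp.cos(th)**2))

# At a = 0 the boundary is R x S^{d-1} of radius r: R = (d-1)(d-2)/r^2
assert sp.simplify(R.subs(a, 0) - (d - 1)*(d - 2)/r**2) == 0
```

The check uses $1/\cos^2\theta - \tan^2\theta = 1$, after which the
$\theta$-dependence drops out, as it must for a round sphere.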
The appearance of the logarithmic divergence in Eq.(\ref{termtereven}) looks
strange. However, this is an artifact of the fact that
we have not yet used the remaining Einstein equations, namely the
normal-tangential and tangential-tangential projections.
In fact, rewriting the terms in the bracket of Eq.(\ref{aftnkerrmz}) as
\begin{eqnarray}
\label{diffterm}
&&(d-1) + \left(d-2-\cos^2{\theta} + \frac{d\sin^2{\theta} - 2}{d-4} \right)
\frac{a^2}{r_0^2}
\nonumber \\
&&+\left( (\cos^4{\theta} -\cos^2{\theta})
+ \frac{\cos^2{\theta}
(\cos^2{\theta} - d\sin^2{\theta})}{d-6} \right)\frac{a^4}{r_0^4}
+ \cdots,
\end{eqnarray}
and comparing with the calculation of Ref.\cite{kra}, one finds that some
additional terms appear, $r_0^{-2}/(d-4),~r_0^{-4}/(d-6),\cdots$, which may
cause the logarithmically divergent term of (\ref{termtereven}). After
substituting the remaining equations into the action, these additional terms
and the logarithmic divergence would vanish.
However, this logarithmic divergence seems to tell us something more.
Following the procedure taken in the AdS descriptions,
the logarithmic divergence breaks the conformal invariance of the
RA through a conformal anomaly
\begin{equation}
\label{anomaly2}
{\cal A}^{flat} = \frac{r_0^{d-1}}{4 \pi G}\left[ \frac{
a^2(-a^2 \cos^2{\theta})^{(d-4)/2}\left(
(d-3)\sin^2{\theta} - \cos^2{\theta} \right)}{
r_0^{2(d-2)}\sqrt{(r^2_0 + a^2\cos^2{\theta})
(r^2_0 + a^2)}} \right].
\end{equation}
For a concrete argument, consider the 4-dimensional boundary, whose conformal
anomaly ${\cal A}_{d=4}^{flat}$ is
\begin{equation}
\label{afanod4}
{\cal A}_{d=4}^{flat}= \frac{r_0^3}{4 \pi G}(\sin^2{\theta} - \cos^2{\theta})
\left(\frac{
a^2}{r_0^6}\right)\left(1+ \frac{a^2 \cos^2{\theta}}{r_0^2}\right)^{-1/2}
\left(1+ \frac{a^2}{r_0^2} \right)^{-1/2}.
\end{equation}
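Expanding the square roots for small $a^2/r_0^2$ (an intermediate step we make explicit here) gives
\begin{displaymath}
{\cal A}_{d=4}^{flat}= \frac{a^2}{4 \pi G \, r_0^3}\left(\sin^2{\theta} - \cos^2{\theta}\right)
\left(1 - \frac{(1+\cos^2{\theta})}{2}\frac{a^2}{r_0^2} + \cdots \right),
\end{displaymath}
so that only the prefactor survives at leading order.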
We then find that, up to leading order, the anomaly is proportional to $\Box R$:
\begin{equation}
\label{afanod4sim}
{\cal A}_{d=4}^{flat}\sim\frac{r_0^3}{4 \pi G} \left(- \frac{1}{40} \Box R
+ {\cal O}\left(\frac{a^4}{r_0^8} \right) \right).
\end{equation}
As is well known in the dual field theory on a curved boundary, the
4-dimensional anomaly has an ambiguity: a $\Box R$ term can be added to
the anomaly with an undetermined coefficient. This corresponds to the choice
of different schemes for regularizing the field theory. In our case,
this corresponds to different choices for the counterterm action.
So, we can add additional counterterms given by
\begin{equation}
\label{addcotact1}
\Delta \tilde{S} \sim r^3_0 \int_{\partial X} d^4 x \sqrt{-g_0}
\left( a E + b C_{abcd}C^{abcd} + c R^2 \right),
\end{equation}
where $E$ is the Euler invariant $E = R_{abcd}R^{abcd} - 4 R_{ab}R^{ab}
+ R^2$ and $C^{abcd}$ is the Weyl tensor. Taking the
coefficients as $b= -a$ and $c=0$, the additional counterterms become
\begin{equation}
\label{addcotact2}
\Delta \tilde{S} \sim -2a r^3_0 \int_{\partial X} d^4 x \sqrt{-g_0}
\left( R_{ab}R^{ab} - \frac{1}{3}R^2 \right).
\end{equation}
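This choice of coefficients relies on the four-dimensional identity
$C_{abcd}C^{abcd} = R_{abcd}R^{abcd} - 2 R_{ab}R^{ab} + \frac{1}{3}R^2$, which gives
\begin{displaymath}
a\left(E - C_{abcd}C^{abcd}\right) = a \left(- 2 R_{ab}R^{ab} + \frac{2}{3}R^2 \right)
= -2a\left( R_{ab}R^{ab} - \frac{1}{3}R^2 \right).
\end{displaymath}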
The additional counterterm action in (\ref{addcotact2}) is precisely the one
observed by Solodukhin in Ref.\cite{solo}; those counterterms cancel
the divergent terms caused by the deviation of the boundary from a
round sphere. It should also be noted that, although the counterterm
actions in Eqs.(\ref{afcnt2}) and (\ref{afcntgen}) based on a round
sphere boundary exactly cancel the divergent terms of the gravitational
action for $d<6$, the additional counterterm action in (\ref{addcotact2})
is not proportional to the cubed boundary curvature $r^3_0 \int_{\partial X}
R^3$ but to $r^3_0 \int_{\partial X} R^2$. This is because the first
correction (i.e., a small deviation from the round sphere) identically vanishes
\cite{solo}, i.e., the leading terms of $R_{ab}R^{ab}$ and $R^2$ cancel
each other in the brace of Eq.(\ref{addcotact2}).
\section{Summary and Discussions}
The counterterm subtraction method to define a finite gravitational action
on non-compact spacetime has been investigated.
It has been shown that, using the ADM formalism, the counterterm action
can be explicitly written in terms of the intrinsic boundary geometry.
On the other hand, using the form of the counterterm action, we have obtained
an expression of the counterterm action available for {\it arbitrary} dimensional
AdS spaces with $S^d$ boundary geometry. Moreover, from this expression
the arbitrary dimensional conformal anomaly has been derived. Our additional
observation is that the counterterm action for AF spaces can be obtained
by taking the limit $\ell \rightarrow \infty$ in the procedure adapted for
AdS spaces.
Another interesting observation in the description of the counterterm action
developed in this paper has been given in the example of an AF space with
nontrivial boundary geometry. It has been shown that the additional
counterterms eliminating the (leading) divergent terms due to the deviation of
the boundary from a round sphere, which were suggested by Solodukhin \cite{solo},
can be inferred from the appearance of a logarithmic divergence in the BAV and
from the corresponding anomaly proportional to $\Box R$. This seems
to be due to a deceptive procedure, namely skipping the tangential-tangential
and tangential-normal projections of Einstein's equations in the calculation of
the BAV. In fact, simply using the full Einstein's equations, we would obtain
a BAV without the logarithmically divergent term. However, it appears that
there is something more in the holographic sense. For the 5-dimensional Kerr-AdS
spacetime with the boundary of a 4-dimensional rotating Einstein universe,
the trace of the stress tensor does not vanish, but the evaluated BAV does not
contain a corresponding logarithmic divergence \cite{adel}. As mentioned in
Ref.\cite{adel}, the corresponding logarithmic divergence of the BAV
should not be present for a spacetime which can be written locally
as a product. It has also been observed that the conformal anomaly, which
corresponds to the CFT anomaly, is also proportional to the $\Box R$
term. In addition, it can be shown \cite{ho} that taking the limit
$\ell \rightarrow \infty$ of the Kerr-AdS description, the conformal anomaly
becomes Eq.(\ref{afanod4sim}). These holographic similarities of the
Kerr spacetime with the Kerr-AdS description will be studied in detail
in Ref.\cite{ho}. It is also expected that this study will help
address the problem of constructing the flat-space S-matrix,
which has been studied in the large radius limit of the AdS/CFT
correspondence and suffers from a non-local holographic mapping
\cite{susskind3}\cite{giddings}.
\vspace{0.2in} {\bf Acknowledgments:} I thank V. Frolov, A. Zelnikov, and
Y. Gusev for helpful discussions. This work was supported
in part by Korea Science and Engineering Foundation and in part by
National Science and Engineering Research Council of Canada.
\section{Introduction}
\label{par2}
This paper focuses on the solution of root-finding problems in several variables where the system is composed of algebraic second-degree equations. Problems of this kind are of interest in many application areas, including queuing problems, neutron transport theory, and linear quadratic differential games (see e.g. \cite{poloni2013quadratic} and references therein).
Our work is motivated by the study of non--negative steady states of biological interaction networks which frequently arise in systems biology \cite{feinberg1995,conradi2005,gabor2015}.
In particular, the problem of determining the steady states of complex Chemical Reaction Networks (CRNs) in healthy and cancer cells is considered \cite{Jordan,Sever}.
Indeed, by applying the law of mass action, the kinetics of the concentration of the proteins involved in the network can be modelled by a large first order polynomial system of Ordinary Differential Equations (ODEs).
Finally, when no exogenous factors are considered, the equation system is quadratic and autonomous \cite{Feinberg,Yu_Craciun_2018,Chellaboina}.
From an abstract point of view, the steady states of large systems of quadratic equations are far from being fully characterized, as a general theory exists only up to the two-dimensional case \cite{Reyn2007PhasePO}.
In the case of an ODE representing a CRN, unknown concentrations cannot assume negative values, and thus the asymptotic steady states must fulfill the non--negative constrained algebraic system of equations obtained by setting the time derivatives of the ODE system equal to zero.
The number of involved unknown protein concentrations may scale up to several hundreds and an efficient and accurate algorithm to solve the non--negative steady state problem is at the basis of tuning the kinetic parameters of the ODE system starting from experimental data, thus enabling the study of cell cancer behaviour in real applications.
From a computational point of view, equilibria can be found either directly, by taking the limit of the flux of the ODEs, or by imposing the vanishing of the derivatives and solving the corresponding root-finding problem.
The direct approach is computationally expensive, especially when the orbits of the dynamical system bend around the equilibrium point, as the time needed to run across the orbit may become arbitrarily large \cite{DORMAND198019}.
On the other hand, the main drawback of the second strategy is that these systems do not usually show any mathematical property which can ensure convergence of the root-finding algorithm.
Indeed, pertinent good mathematical properties, such as matrix positive definiteness, depend on the form of the considered biological network and are not ensured in the general case.
The typical structure of the ODE system associated with a chemical reaction network (based on the mass action law) not only prevents us from exploiting recent methods to find the steady state solutions by solving vector quadratic equations \cite{poloni2013quadratic}, but it also makes it difficult to use classical methods, such as the Newton's or gradient descent methods.
Indeed, as the non--negative steady state normally belongs to the frontier of the positive cone and therefore has many components equal to zero, classical non--negative projected Newton-type methods are unstable, since the Jacobian matrix, computed in a neighborhood of the solution, is strongly sparse and non-invertible.
Moreover, classical projected gradient methods are known to be stable but slowly convergent, especially when coupled with a non--negative projection.
In this work we propose to overcome these limitations, by introducing a root-finding strategy based on combining steps of Newton's method and steps of gradient descent.
While the Newton's method is applied to the algebraic equation system, the gradient descent is applied by scalarizing the system, i.e. minimizing the norm of the l.h.s. of the equation system \cite{Khanh1993OptimalityCV}.
To make Newton's method more stable, instead of the standard orthogonal projection we use a non--linear projection operator onto the non--negative orthant, essentially an idempotent operator providing small positive entries rather than zero components.
This improves the condition number of the Jacobian matrix, preventing the Newton step from becoming unstable and hence making regularization unnecessary.
We therefore combine the (non--linearly projected) Newton's method with a gradient method that iteratively refines the starting point of the former until convergence to a non--negative stationary point is reached.
We prove the convergence of this combined technique provided that a proper backtracking rule on the gradient method is considered.
Moreover, we test the efficiency of the proposed technique in the case of simulated CRN data, showing that, compared to standard ODE solvers, this method computes the steady states achieving greater accuracy in less time.
The MATLAB\textsuperscript{\textregistered} codes implementing the proposed approach are freely available at the GitHub repository \url{https://github.com/theMIDAgroup/CRC_CRN.git}.
The rest of the paper is organized as follows.
In Sect. \ref{sec:math_pb} we introduce the mathematical formulation of the problem and we describe the proposed algorithm whose converge properties are studied in Sect. \ref{sec:NLPC}.
In Sect. \ref{sec:NLPC_for_CRN} we consider the problem of finding the asymptotically steady states of a CRN and we reformulate it as a non--negative constrained root--finding problem.
In Sect. \ref{sec:results} we show the results obtained by applying NLPC to a CRN designed for modeling cell signaling in colorectal cells and the most common mutations occurring in colorectal cancer.
Finally our conclusions are offered in Sect. \ref{sec:conclusions}.
\section{Mathematical formulation}
\label{sec:math_pb}
We consider the box--constrained set of nonlinear equations
\begin{equation}\label{eq:box_eqs}
\begin{cases}
\mbf{f}(\mbf{x}) = \mbf{0} \\
\mbf{x} \in \Omega
\end{cases}
\end{equation}
where $\Omega = \varprod_{i=1}^n \Omega_i \subseteq \mathbb{R}^n$ is the Cartesian product of $n$ closed intervals $\Omega_i \subseteq \mathbb{R}$, and $\mbf{f}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a continuously differentiable function on $\Omega$. In the considered problem of finding the non--negative steady states of quadratic autonomous ODEs systems, $\mbf{f}$ is composed by second--degree polynomials and $\Omega$ is the positive convex cone.
Several numerical approaches have been proposed to solve (\ref{eq:box_eqs}). Among these, a classical fast approach is the projected Newton's method \cite{Bertsekas1982,bertsekas1997} where the projector on the closed convex set $\Omega$, $P:\mathbb{R}^n \rightarrow \Omega$ such that for all $\mbf{z} \in \mathbb{R}^n$
\begin{equation}\label{eq:classical_P}
P(\mbf{z}) = \underset{\mbf{y} \in \Omega}{\mathrm{argmin}} \norm{ \mbf{y} - \mbf{z}} \, ,
\end{equation}
is applied at each iteration of a Newton's scheme so that the final solution satisfies the box constraints in (\ref{eq:box_eqs}).
However, in the general case the convergence properties of the projected Newton's method strongly depend on the initial point, as no global convergence is guaranteed \cite{nesterov2006}.
Additionally, the standard orthogonal projector $P$ tends to provide iterative estimates on the boundary of $\Omega$ (e.g., when $\Omega$ is the positive cone, $P$ sets to zero all the negative components) and therefore it may compromise the stability of the Newton's method, as the Jacobian of $\mbf{f}$ can be singular when computed at these boundary estimates.
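For a box constraint, the classical projector (\ref{eq:classical_P}) acts componentwise as a clipping; the following NumPy sketch (our illustration, not part of the paper's MATLAB code) shows how it pushes every infeasible component exactly onto the boundary:

```python
import numpy as np

def box_project(z, lower, upper):
    """Classical orthogonal projection onto the box [lower, upper]:
    each component of z is clipped to its interval."""
    return np.clip(z, lower, upper)

# On the non-negative orthant every negative component is mapped
# exactly onto the boundary value 0, which is the source of the
# instability discussed above.
z = np.array([0.3, -0.2, 1.5, -4.0])
print(box_project(z, 0.0, np.inf))
```

Note how two of the four components land exactly on the boundary of the orthant.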
An alternative approach to Newton's method consists in using the projected gradient descent method \cite{goldstein1964,levitin1966} for solving the optimization problem
\begin{equation}\label{eq:eqs_min}
\mbf{x} = \underset{\mbf{x} \in \Omega}{\mathrm{arg min}} \, \Theta(\mbf{x}) \, ,
\end{equation}
where
\begin{equation}\label{eq:def_theta}
\Theta(\mbf{x}) = \frac{1}{2} \norm{\mbf{f}(\mbf{x})}^2 \, .
\end{equation}
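In practice the gradient of $\Theta$ has the closed form $\nabla\Theta(\mbf{x}) = \mathbf{J}_{\mbf{f}}(\mbf{x})^\top \mbf{f}(\mbf{x})$; the following sketch verifies this on a toy quadratic system of our own choosing (unrelated to the CRNs considered later) via central finite differences:

```python
import numpy as np

# Toy quadratic system and its Jacobian (our example).
f   = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0]*x[1] - 0.25])
jac = lambda x: np.array([[2.0*x[0], 1.0], [x[1], x[0]]])

x = np.array([0.3, 0.7])
g = jac(x).T @ f(x)              # analytic gradient of Theta = 0.5*||f||^2

# Central finite-difference check of the gradient.
theta = lambda x: 0.5 * f(x) @ f(x)
eps = 1e-6
g_fd = np.array([(theta(x + eps*e) - theta(x - eps*e)) / (2*eps)
                 for e in np.eye(2)])
print(np.allclose(g, g_fd, atol=1e-6))   # True
```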
As opposed to Newton's method, many convergence results can be proved for projected gradient methods, see e.g. \cite{bertsekas1997,wang2000} and references therein. On the other hand, the projected gradient method only has a sub--linear convergence rate and is thus slower than Newton's algorithm even when properly designed strategies for selecting the stepsize are used \cite{barzilai1988,crisci2019,dai2005,serafini2005}.
Motivated by this consideration, some recent works have proposed to combine the two approaches \cite{shi1996,han2003,chen2017,di2021}. Along these lines, we present the Non--Linearly Projected Combined (NLPC) method, summarized in Algorithm \ref{algo:gp_newton}. There are two main ideas behind NLPC. First of all, we replace the classical projector with a novel operator $\mathcal{P}$, introduced in the following definition, which ensures that the constraint $\mbf{x} \in \Omega$ is respected while lowering the probability that the points defined at each iteration reach the boundary of $\Omega$.
\begin{definition}\label{def:our_projector}
Given $\Omega = \varprod_{i=1}^n \Omega_i$, $\Omega_i \subseteq \mathbb{R}$ convex for all $i \in \{1, \dots, n\}$, and given $\mbf{x} = \left(x_1, \dots, x_n \right)^\top \in \Omega$, we define the operator $\projP{\, \cdot}{\mbf{x}}: \mathbb{R}^n \rightarrow \Omega$, so that, for all $\mbf{z} \in \mathbb{R}^n$, $\projP{\mbf{z}}{\mbf{x}} = \left(\projPi{1}{z_1}{x_1}, \dots, \projPi{n}{z_n}{x_n} \right)^\top$ where
\begin{equation*}
\projPi{i}{v}{w} =
\begin{cases}
v \quad \text{if} \quad v \in \Omega_i\\
w \quad \text{if} \quad v \not\in \Omega_i
\end{cases}
\end{equation*}
\label{defP}
with $v\in\mathbb R$ and $w\in\Omega_i \subset \mathbb R$.
\end{definition}
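On a box, the operator $\mathcal{P}$ of Definition \ref{def:our_projector} admits an equally short sketch (again our illustration; names are not from the reference MATLAB implementation):

```python
import numpy as np

def nlp_project(z, x, lower, upper):
    """Non-linear operator P(z; x) of Definition 1 (sketch): each
    component z_i is kept if it lies in [lower_i, upper_i]; otherwise
    it is replaced by the current feasible component x_i."""
    inside = (z >= lower) & (z <= upper)
    return np.where(inside, z, x)

x = np.array([0.5, 0.1, 2.0])            # current feasible point
z = np.array([0.7, -0.3, 5.0])           # tentative update
print(nlp_project(z, x, 0.0, 4.0))       # infeasible entries revert to x
```

Unlike the classical clipping, no component is forced onto the boundary: infeasible updates fall back to the (strictly feasible) current value.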
The second idea behind the NLPC method was inspired by \cite{chen2017} and consists in trying at each iteration a fixed number of step lengths $\alpha^j,\, j \in \{ 0, \dots, J\}$, along the Newton's direction $\mbf{d}_k$, where $\mbf{d}_k$ is defined as the solution of the set of equations $\mathbf{J}_{\mbf{f}}(\mbf{x}_k) \mbf{d}_k = -\mbf{f}(\mbf{x}_k)$, $\mathbf{J}_{\mbf{f}}(\mbf{x}_k)$ being the Jacobian matrix of $\mbf{f}$ evaluated at $\mbf{x}_k$. If none of the tested stepsizes satisfies the Armijo rule
\begin{equation}\label{eq:AR_newton}
\|\mbf{f}(\projP{\mbf{x}_k + \alpha^{j} \mbf{d}_k}{\mbf{x}_k} )\| \leq \sqrt{1- \alpha^{j} \sigma_N} \ \|\mbf{f}(\mbf{x}_k)\| \, ,
\end{equation}
we then move along the gradient descent direction with a stepsize chosen so as to satisfy two conditions that, as we shall prove in Theorem \ref{th:conv_analysis}, guarantee a convergence result for NLPC algorithm.
\begin{algorithm}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\SetKwInOut{AND}{and}
\SetKw{And}{\hspace{\algoskipindent}\itshape and\;}
\SetKwBlock{Condition}{}{}
\DontPrintSemicolon
\Input{$\mbf{x}_0 \in \Omega$; $\tau \in (0, +\infty)$; $\alpha $, $\sigma_N$, $\sigma_G$, $\rho \in (0, 1)$; $J \in \mathbb{N} \setminus \{0\}$}
$ FLAG \leftarrow 0$; $ k \leftarrow 0$\;
\While{$\|\mbf{f}(\mbf{x}_k)\| > \tau$}{
\eIf{FLAG=0}{solve $\mathbf{J}_{\mbf{f}}(\mbf{x}_k) \mbf{d}_k = -\mbf{f}(\mbf{x}_k) $\;
$j \leftarrow 0$\;
\While{$j\ \leq\ J$ }{
$\mbf{x}_{k+1} = \projP{\mbf{x}_k + \alpha^{j} \mbf{d}_k}{\mbf{x}_k}$ \;
\eIf{$\|\mbf{f}(\mbf{x}_{k+1} )\| \leq \sqrt{1- \alpha^{j} \sigma_N} \ \|\mbf{f}(\mbf{x}_k)\|$}{
$j \leftarrow J + 1$; $FLAG \leftarrow 0$; $k \leftarrow k + 1$}{
$j \leftarrow j+1$; $ FLAG \leftarrow 1$\;}
}
}{
$\mbf{d}_k = - \nabla \Theta(\mbf{x}_k)$ \
$j \leftarrow 0$\;
\While{$FLAG = 1$}{
$\mbf{x}_{k+1} = \projP{\mbf{x}_k + \alpha^j \mbf{d}_k}{\mbf{x}_k} $ \;
\eIf{\Condition{\mbox{$\Theta(\mbf{x}_{k+1}) \leq \Theta(\mbf{x}_k) + \sigma_G \nabla \Theta(\mbf{x}_k)^T (\mbf{x}_{k+1} - \mbf{x}_k )$} \;
\And
\mbox{$\sqrt{\textstyle\sum\limits_{i \in \mathcal{M}_{\alpha^j}(\mbf{x}_k)} (P_i(x_{k,i} + d_{k,i}) - x_{k,i})^2 } \geqslant \rho \sqrt{\textstyle\sum\limits_{i \in \mathcal{N}_{\alpha^j}(\mbf{x}_k)} (P_i(x_{k,i} + d_{k,i}) - x_{k,i})^2 }$} \;}}{
$FLAG \leftarrow 0$; $k \leftarrow k + 1$ \;
}{
$j \leftarrow j+1$}
}
}
}
\caption{The NLPC algorithm}
\label{algo:gp_newton}
\end{algorithm}
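To fix ideas, the following Python sketch transcribes Algorithm \ref{algo:gp_newton} for the non--negative orthant. It is a simplified illustration only: the second stepsize condition of the gradient step is omitted for brevity, the toy system below is ours (not a CRN), and the full reference implementation is the MATLAB code cited in the Introduction.

```python
import numpy as np

def nlpc(f, jac, x0, tol=1e-8, alpha=0.5, sigma_N=1e-4, sigma_G=1e-4,
         J=20, max_iter=500):
    """Sketch of NLPC on the non-negative orthant: projected Newton
    steps (with the non-linear operator P(.; x_k)) and, when the
    Newton Armijo test fails, a projected gradient step on
    Theta(x) = 0.5*||f(x)||^2."""
    proj = lambda z, x: np.where(z >= 0, z, x)   # P(z; x) on the orthant
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        nfx = np.linalg.norm(fx)
        if nfx <= tol:
            break
        # --- Newton direction with backtracking on the residual norm
        d = np.linalg.solve(jac(x), -fx)
        accepted = False
        for j in range(J + 1):
            xn = proj(x + alpha**j * d, x)
            if np.linalg.norm(f(xn)) <= np.sqrt(1.0 - alpha**j * sigma_N) * nfx:
                x, accepted = xn, True
                break
        if accepted:
            continue
        # --- fallback: Armijo rule along the projection arc for
        #     grad Theta(x) = J_f(x)^T f(x)
        g = jac(x).T @ fx
        theta = 0.5 * nfx**2
        for j in range(60):
            xn = proj(x - alpha**j * g, x)
            if 0.5 * f(xn) @ f(xn) <= theta + sigma_G * g @ (xn - x):
                x = xn
                break
    return x

# toy quadratic system with non-negative roots (our example)
f = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
sol = nlpc(f, jac, np.array([0.2, 0.7]), sigma_N=0.9)
print(sol, np.linalg.norm(f(sol)))
```

Here $\sigma_N$ is chosen large so that a nearly stalled Newton trial is rejected in favor of a shorter, more productive step.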
\section{Convergence properties of the NLPC method}\label{sec:NLPC}
We now present a convergence analysis of the NLPC algorithm, after describing the main tools exploited in the algorithm and the main properties of $\mathcal{P}$.
\begin{definition}\label{def:set_B_M_N}
Given $\mbf{x} \in \Omega$, $\mbf{d} \in \mathbb{R}^n \setminus \{\mbf{0}\} $ and $\alpha > 0$, we define
\begin{equation}\label{def:b_xd}
\mathcal{B}(\mbf{x}, \mbf{d}) := \left\{ i \in \{1, \dots, n\}\ s.t.\ x_i + \alpha d_i \notin \Omega_i \ \forall \alpha>0 \right\}
\end{equation}
\begin{equation}\label{def:r_xd}
\mathcal{M}_{\alpha}(\mbf{x}, \mbf{d}) := \left\{ i \in \{1, \dots, n\}\ s.t.\ x_i + \alpha d_i \in \Omega_i \right\}
\end{equation}
\begin{equation}\label{def:c_xd}
\mathcal{N}_{\alpha}(\mbf{x}, \mbf{d}) := \{1, \dots, n\}\setminus \left( \mathcal{B}(\mbf{x}, \mbf{d}) \cup \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d}) \right)
\end{equation}
For the ease of notation, when $\mbf{d} = - \nabla\Theta(\mbf{x})$, the set defined in (\ref{def:b_xd}), (\ref{def:r_xd}) and (\ref{def:c_xd}) will be simply denoted as $\mathcal{B}(\mbf{x})$, $\mathcal{M}_{\alpha}(\mbf{x})$, and $\mathcal{N}_{\alpha}(\mbf{x})$, respectively.
\end{definition}
\begin{figure}[H]
\begin{center}
\includegraphics[width=12cm]{sets_1D.pdf}
\end{center}
\caption{(a) Example where $i \in \mathcal{B}(\mbf{x}, \mbf{d})$. (b) Example where $i \in \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})$ and $i \in \mathcal{N}_{2 \alpha}(\mbf{x}, \mbf{d})$. }\label{fig:sets_1D}
\end{figure}
\begin{remark}
It can be easily shown that, for all $\alpha > 0$,
\begin{equation}
\mathcal{B}(\mbf{x}, \mbf{d})\ \cup \ \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d}) \ \cup \ \mathcal{N}_{\alpha}(\mbf{x}, \mbf{d}) \ = \ \left\{1, \dots, n \right\} ,
\label{sets_BMN}
\end{equation}
and the three sets are pairwise disjoint. More in detail, as illustratively depicted in Figure \ref{fig:sets_1D}(a), the set $\mathcal{B}(\mbf{x}, \mbf{d})$ contains all the coordinates $i$ that prevent $\mbf{d}$ from being a feasible direction as moving along the corresponding component $d_i$ violates the constraint in (\ref{eq:box_eqs}). As an example, when $\Omega = \varprod_{i=1}^n [\ell_i, u_i]$, $\ell_i < u_i$,
\begin{displaymath}
\mathcal{B}(\mbf{x}, \mbf{d}) = \left\{ i \in \{1, \dots, n\}\ s.t. \ (x_i = \ell_i \land d_i < 0) \lor (x_i = u_i \land d_i > 0)\right\} \ .
\end{displaymath}
\end{remark}
Instead, for a fixed stepsize $\alpha>0$, $\mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})$ contains all the components $i$ for which $x_i + \alpha d_i$ still satisfies the constraint of the problem, while $\mathcal{N}_{\alpha}(\mbf{x}, \mbf{d})$ collects the components for which the stepsize $\alpha$ is too large, but a feasible vector may be found by lowering it, see Figure \ref{fig:sets_1D}(b). \\
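For a box constraint the three sets can be computed explicitly; the sketch below (our illustration) also shows how shrinking $\alpha$ moves components from $\mathcal{N}_{\alpha}$ to $\mathcal{M}_{\alpha}$:

```python
import numpy as np

def index_sets(x, d, alpha, lower, upper):
    """Index sets of Definition 2 for the box [lower, upper] (sketch):
    B blocks d for every stepsize, M is feasible for this alpha,
    N is the rest (feasible for some smaller stepsize)."""
    n = len(x)
    B = {i for i in range(n)
         if (x[i] == lower[i] and d[i] < 0) or (x[i] == upper[i] and d[i] > 0)}
    M = {i for i in range(n)
         if lower[i] <= x[i] + alpha * d[i] <= upper[i]}
    N = set(range(n)) - B - M
    return B, M, N

x = np.array([0.0, 0.5, 1.9])
d = np.array([-1.0, 1.0, 1.0])
lo, up = np.zeros(3), np.full(3, 2.0)
print(index_sets(x, d, 1.0, lo, up))   # ({0}, {1}, {2})
```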
\begin{proposition}\label{prop:proprieties_P}
Given $\mbf{x} \in \Omega$ and $\mbf{d} \in \mathbb{R}^n$, it holds
\begin{description}
\item[(a)] For all $\alpha > 0$
\begin{equation}\label{eq:prop_prod}
\left(\mbf{x} - \projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}} \right)^T \left(\mbf{x} + \alpha \mbf{d} - \projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}} \right) = 0
\end{equation}
and
\begin{equation}\label{eq:prop_norm}
\norm{\projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}} - \mbf{x} } = \alpha \sqrt{\sum_{i \in \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})}{d_i^2}} \ .
\end{equation}
\item[(b)] $\mathbf{g} : (0, \infty) \rightarrow \Omega$ s.t. $\mathbf{g}(\alpha) = \projP{\mbf{x} + \alpha \mbf{d}}{\mbf{x}}$ is continuous in 0.
\item[(c)] $\varphi : (0, \infty) \rightarrow \mathbb{R}$ s.t. $\varphi(\alpha) = \frac{\norm{\projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}}- \mbf{x}}}{\alpha}$ is monotonically nonincreasing. \label{phi_decreases}
\end{description}
\end{proposition}
\begin{proof}
(a) Equations (\ref{eq:prop_prod}) and (\ref{eq:prop_norm}) follow from Definition \ref{def:our_projector} which implies
\begin{displaymath}
\begin{split}
( \mbf{x} - & \projP{\mbf{x} + \alpha \mbf{d}}{\mbf{x}} )^T \left(\mbf{x} + \alpha \mbf{d} - \projP{\mbf{x} + \alpha \mbf{d}}{\mbf{x}} \right) = \\
& \sum_{i=1}^n {\left( x_i - \projPi{i}{x_i + \alpha d_i}{x_i} \right) \left(x_i + \alpha d_i - \projPi{i}{x_i + \alpha d_i}{x_i}) \right)} = 0 \,
\end{split}
\end{displaymath}
and
\begin{displaymath}
\begin{split}
\norm{\projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}} - \mbf{x} } & = \sqrt{\sum_{i=1}^n (\projPi{i}{x_i + \alpha_i d_i}{x_i}-x_i)^2} = \alpha \sqrt{\sum_{i\in\mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})} d_i^2} \, .
\end{split}
\end{displaymath}
(b) The result directly follows from equation (\ref{eq:prop_norm}). Indeed
\begin{displaymath}
\norm{\mathbf{g}(\alpha) - \mathbf{g}(0)} = \norm{\projP{\mbf{x}+\alpha\mbf{d}}{\mbf{x}} - \mbf{x} } = \alpha \sqrt{\sum_{i \in \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})}{d_i^2}} \leq \alpha \ \norm{\mbf{d}} \xrightarrow[\alpha \to 0^+]{} 0 \, .
\end{displaymath}
(c) We observe that equation (\ref{eq:prop_norm}) implies $\varphi(\alpha) = \sqrt{\sum_{i\in \mathcal{M}_{\alpha}(\mbf{x}, \mbf{d})} d_i^2}$.
Since $\Omega_i$ is a convex set, given $0 \leq \alpha_1 \leq \alpha_2$ it holds $\mathcal{M}_{\alpha_2}(\mbf{x}, \mbf{d}) \subseteq \mathcal{M}_{\alpha_1}(\mbf{x}, \mbf{d})$, and thus
\begin{displaymath}
\begin{split}
\varphi(\alpha_1) - \varphi(\alpha_2) & = \sqrt{\sum_{i\in \mathcal{M}_{\alpha_1}(\mbf{x}, \mbf{d})} d_i^2} - \sqrt{\sum_{i\in \mathcal{M}_{\alpha_2}(\mbf{x}, \mbf{d})} d_i^2} \geq 0
\end{split}
\end{displaymath}
\qed
\end{proof}
As we shall see in the next theorems, the results shown in Proposition \ref{prop:proprieties_P} allow us to prove convergence properties of the proposed NLPC algorithm similar to those holding when the classical projector on the closed set $\Omega$ is employed instead of the operator $\mathcal{P}$ \cite{bertsekas1997,chen2017}.
\begin{theorem}\label{th:stat_point}
Given $\Theta : \mathbb{R}^n \rightarrow \mathbb{R}$ a continuously differentiable function on $\Omega$ and $\mbf{x} \in \Omega$, then $\mbf{x}$ is a stationary point of $\Theta$ in $\Omega$ iff
\begin{equation}\label{eq:proj_and_stat}
\projP{\mbf{x} - \alpha \nabla\Theta(\mbf{x})}{\mbf{x}} = \mbf{x} \quad \forall \, \alpha > 0 \, .
\end{equation}
\label{th:stationarity}
\end{theorem}
\begin{proof}
Let us consider the projector $P$ on the closed convex set $\Omega$, defined in Eq. (\ref{eq:classical_P}).
The following properties hold: (i) $P(\mbf{z}) = (P_1(z_1), \dots, P_n(z_n))$, being
\begin{displaymath}
P_i(z_i) = \left\{
\begin{array}{cl}
\ell_i & \quad \textrm{if} \quad z_i \leq \ell_i \\
z_i & \quad \textrm{if} \quad \ell_i < z_i < u_i \\
u_i & \quad \textrm{if} \quad z_i \geq u_i
\end{array}\right.
\end{displaymath}
where we denoted $\Omega_i = [\ell_i, u_i]$ with $\ell_i,\, u_i \in \mathbb{R} \cup \left\{\pm \infty \right\}$; and (ii) $\mbf{x}$ is a stationary point iff $P(\mbf{x} - \alpha \nabla\Theta(\mbf{x})) = \mbf{x}$ $\forall \, \alpha > 0$ \cite{bertsekas1997}.
We now assume that condition (\ref{eq:proj_and_stat}) holds and thus, $\forall i \in \{1, \dots, n\}$,
\begin{displaymath}
\projPi{i}{x_i - \alpha \partial_i\Theta(\mbf{x})}{x_i} = x_i \quad \forall \alpha > 0 \, .
\end{displaymath}
For each $i \in \{1, \dots, n\}$, we then have only three possibilities:
\begin{itemize}
\item $x_i \in (\ell_i, u_i)$ and $\partial_i\Theta(\mbf{x}) = 0$. Then $P_i(x_i - \alpha \partial_i\Theta(\mbf{x})) = P(x_i) = x_i \ \forall \alpha > 0$;
\item $x_i = \ell_i$ and $\partial_i\Theta(\mbf{x}) \geq 0$. In this case,
$P_i(x_i - \alpha \partial_i\Theta(\mbf{x})) = \ell_i = x_i \ \forall \alpha > 0$
\item $x_i = u_i$ and $\partial_i\Theta(\mbf{x}) \leq 0$. In this case, $P_i(x_i - \alpha \partial_i\Theta(\mbf{x})) = u_i = x_i \ \forall \alpha > 0$
\end{itemize}
In all three cases we obtained $P_i(x_i - \alpha \partial_i\Theta(\mbf{x})) = x_i$ $\forall \alpha > 0$. This implies that $\mbf{x} \in \Omega$ is a stationary point on $\Omega$.
Conversely, consider a stationary point $\mbf{x} \in \Omega$ and assume there exists $\alpha > 0$ such that $\mathcal{P}(\mbf{x} - \alpha \nabla\Theta(\mbf{x}); \mbf{x}) \neq \mbf{x}$. From Proposition \ref{prop:proprieties_P} (a) it follows
\begin{displaymath}
\begin{split}
0 & = \left(\mbf{x} - \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}} \right)^T \left(\mbf{x} -\alpha\nabla\Theta(\mbf{x}) - \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}} \right) \\
& = \norm{\mbf{x} - \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}}}^2 - \alpha \nabla\Theta(\mbf{x})^T \left(\mbf{x} - \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}} \right)
\end{split}
\end{displaymath}
and thus
\begin{equation}
\nabla\Theta(\mbf{x})^T \left( \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}} - \mbf{x}\right) = - \frac{\norm{\mbf{x} - \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}}}^2}{\alpha} < 0 \, .
\label{eq9}
\end{equation}
Equation (\ref{eq9}) contradicts the assumption of $\mbf{x}$ being a stationary point, that would imply $\nabla\Theta(\mbf{x})^T(\mbf{z}-\mbf{x}) \geq 0$ $\forall \, \mbf{z} \in \Omega$ \cite{bertsekas1997}.\\
\qed
\end{proof}
\begin{theorem}\label{th:descent_dir}
Given $\Theta : \mathbb{R}^n \rightarrow \mathbb{R}$ a continuously differentiable function on $\Omega$ and $\mbf{x} \in \Omega$ that is not a stationary point of $\Theta$, then there exists $\alpha^*>0$ such that for all $\alpha \in (0,\alpha^*]$ the vector $\left( \projP{\mbf{x} - \alpha \nabla \Theta(\mbf{x})}{\mbf{x}} - \mbf{x} \right)$ is a descent direction for $\Theta$.
\end{theorem}
\begin{proof}
Since $\mbf{x}$ is not a stationary point, according to Theorem \ref{th:stat_point} there exists $\alpha^* > 0$ such that $\mathcal{P}(\mbf{x} - \alpha^* \nabla\Theta(\mbf{x}); \mbf{x}) \neq \mbf{x}$, and thus $\mathcal{P}(\mbf{x} - \alpha \nabla\Theta(\mbf{x}); \mbf{x}) \neq \mbf{x}$ for all $\alpha \in (0, \alpha^*]$ because $\Omega_i$ is a convex set for all $i \in \{1, \dots, n\}$. Therefore, from (\ref{eq9}) it follows that $\nabla\Theta(\mbf{x})^T \left( \projP{\mbf{x}-\alpha\nabla\Theta(\mbf{x})}{\mbf{x}} - \mbf{x}\right) < 0$.
\qed
\end{proof}
\begin{theorem}\label{th:GC}
Given $\Theta : \mathbb{R}^n \rightarrow \mathbb{R}$ a continuously differentiable function on $\Omega$, $\mbf{x} \in \Omega$ and $\sigma_G \in (0, 1)$, there exists $\overline{\alpha}>0$ such that for all $\alpha \in (0, \overline{\alpha}]$
\begin{equation}\label{AR_gradient}
\Theta(\projP{\mbf{x}-\alpha \nabla \Theta(\mbf{x})}{\mbf{x}}) \leqslant \Theta(\mbf{x}) + \sigma_G \nabla \Theta(\mbf{x})^T \left(\projP{\mbf{x} - \alpha \nabla \Theta(\mbf{x})}{\mbf{x}} - \mbf{x} \right) \, .
\end{equation}
\end{theorem}
\begin{proof}
If $\projP{\mbf{x}-\alpha \nabla \Theta(\mbf{x})}{\mbf{x}} = \mbf{x}$ for all $\alpha>0$, then the thesis holds for any $\overline{\alpha} > 0$. Therefore we can assume there exists $\widetilde{\alpha} \in (0, 1)$ such that $\projP{\mbf{x}-\alpha \nabla \Theta(\mbf{x})}{\mbf{x}} \neq \mbf{x}$ for all $\alpha \in (0, \widetilde{\alpha}]$. In the following we shall denote $\mbf{x}(\alpha) := \projP{\mbf{x}-\alpha \nabla \Theta(\mbf{x})}{\mbf{x}}$. \\ By the mean value theorem, there exists $\boldsymbol{\xi}_{\alpha}$ on the segment between $\mbf{x}$ and $\mbf{x}(\alpha)$ such that
\begin{displaymath}
\begin{split}
\Theta(\mbf{x}(\alpha)) - \Theta(\mbf{x}) & =
\nabla\Theta(\boldsymbol{\xi}_{\alpha})^T \left(\mbf{x}(\alpha) - \mbf{x}\right) \\
& = \sigma_G \nabla\Theta(\mbf{x})^T \left( \mbf{x}(\alpha) -\mbf{x} \right) - (\sigma_G -1) \nabla\Theta(\mbf{x})^T \left(\mbf{x}(\alpha) - \mbf{x} \right) + \\
& \qquad + \left(\nabla\Theta(\boldsymbol{\xi}_{\alpha}) - \nabla\Theta(\mbf{x})\right)^T \left(\mbf{x}(\alpha) - \mbf{x} \right) \, .
\end{split}
\end{displaymath}
and thus the inequality (\ref{AR_gradient}) can be rewritten as
\begin{displaymath}
\left(\nabla\Theta(\boldsymbol{\xi}_{\alpha}) - \nabla\Theta(\mbf{x})\right)^T \left(\mbf{x}(\alpha) - \mbf{x} \right) \leqslant (\sigma_G -1) \nabla\Theta(\mbf{x})^T \left(\mbf{x}(\alpha) - \mbf{x} \right)
\end{displaymath}
Since $\sigma_G<1$, from Proposition \ref{prop:proprieties_P} (c) it follows that
\begin{displaymath}
(\sigma_G - 1) \nabla\Theta(\mbf{x})^T \left(\mbf{x}(\alpha) - \mbf{x}\right) = (1-\sigma_G) \frac{\norm{\mbf{x}(\alpha) - \mbf{x}}^2}{\alpha}
\geqslant (1-\sigma_G) \frac{\norm{\mbf{x}(\widetilde{\alpha}) - \mbf{x}}}{\widetilde{\alpha}}\ \norm{\mbf{x}(\alpha) - \mbf{x}} > 0 \, .
\end{displaymath}
The theorem is proved if we show that there exists $\overline{\alpha} \in (0, \widetilde{\alpha}]$ such that for all $\alpha \in (0, \overline{\alpha}]$
\begin{displaymath}
\begin{split}
\left(\nabla\Theta(\boldsymbol{\xi}_{\alpha}) - \nabla\Theta(\mbf{x})\right)^T \left(\mbf{x}(\alpha) - \mbf{x} \right) \leqslant (1-\sigma_G) \frac{\norm{\mbf{x}(\widetilde{\alpha}) - \mbf{x}}}{\widetilde{\alpha}}\ \norm{\mbf{x}(\alpha) - \mbf{x}} \, .
\end{split}
\end{displaymath}
This follows from the fact that
\begin{displaymath}
\lim_{\alpha \rightarrow 0} \left| \left(\nabla\Theta(\boldsymbol{\xi}_{\alpha}) - \nabla\Theta(\mbf{x})\right)^T \frac{\left(\mbf{x} - \mbf{x}(\alpha) \right)}{\norm{\mbf{x} - \mbf{x}(\alpha)}}\right| \leqslant \lim_{\alpha \rightarrow 0} \norm{\nabla\Theta(\boldsymbol{\xi}_{\alpha}) - \nabla\Theta(\mbf{x})} = 0
\end{displaymath}
where the last equality is a consequence of Proposition \ref{prop:proprieties_P} (b) and of the regularity assumptions on $\Theta$. \qed
\end{proof}
\begin{theorem}\label{thm:exist_alpha}
Given $\Theta : \mathbb{R}^n \rightarrow \mathbb{R}$, a continuously differentiable function on $\Omega$, $\mbf{x} \in \Omega$, and $\rho \in (0, 1]$, there exists $\overline{\alpha}>0$ such that, for all $\alpha \in (0, \overline{\alpha}]$, $\mathcal{N}_{\alpha} (\mbf{x})= \emptyset$ and thus
\begin{equation}\label{eq:new_cond}
\sqrt{ \sum_{i \in \mathcal{M}_{\alpha}(\mbf{x})} (P_i(x_i - \partial_i \Theta(\mbf{x})) - x_i)^2 }\geqslant\ \rho \sqrt{ \sum_{i \in \mathcal{N}_{\alpha}(\mbf{x})} (P_i(x_i - \partial_i \Theta(\mbf{x})) - x_i)^2 } \, .
\end{equation}
\end{theorem}
\begin{proof}
For all $i \in \left\{ 1, \dots, n \right\} \smallsetminus \mathcal{B}(\mbf{x})$ we only have three possibilities:
\begin{itemize}
\item $x_i \in \mathring{\Omega_i}$, where $\mathring{\Omega_i}$ denotes the interior of $\Omega_i$. Then, since $\mathring{\Omega}_i$ is an open set, there exists $\overline{\alpha}_i > 0$ such that $x_i- \alpha \partial_i \Theta (\mbf{x}) \in \mathring{\Omega}_i \subseteq \Omega_i$ for all $\alpha \leq \overline{\alpha}_i$.
\item $x_i = \ell_i$ and $\partial_i \Theta (\mbf{x}) <0 $. Then $x_i- \alpha \partial_i \Theta (\mbf{x}) \in \Omega_i$ $\forall \alpha \leq \overline{\alpha}_i := -\frac{u_i - \ell_i}{\partial_i \Theta (\mbf{x})}$.
\item $x_i = u_i$ and $\partial_i \Theta (\mbf{x}) >0 $. Then $x_i- \alpha \partial_i \Theta (\mbf{x}) \in \Omega_i$ $\forall \alpha \leq \overline{\alpha}_i := \frac{u_i - \ell_i}{\partial_i \Theta (\mbf{x})}$.
\end{itemize}
Therefore, for all $i \in \left\{ 1, \dots, n \right\} \smallsetminus \mathcal{B}(\mbf{x})$ there exists $\overline{\alpha}_i> 0$ such that $i \in \mathcal{M}_{\alpha}(\mbf{x})$ $\forall \alpha \in (0, \overline{\alpha}_i]$. By choosing $ \overline{\alpha} =\min\limits_{i \in \{1, \dots, n\} \smallsetminus \mathcal{B}(\mbf{x})} \overline{\alpha}_i$, it follows that, for all $\alpha \leq \overline{\alpha}$, $\mathcal{N}_{\alpha} (\mbf{x})= \emptyset$
and thus
\begin{equation}
\sqrt{ \sum_{i \in \mathcal{M}_{\alpha}(\mbf{x})} (P_i(x_i - \partial_i \Theta(\mbf{x})) - x_i)^2 } \geqslant 0 =\ \rho \sqrt{\sum_{i \in \mathcal{N}_{\alpha}(\mbf{x})} (P_i(x_i - \partial_i \Theta(\mbf{x})) - x_i)^2 } \, .
\end{equation}
Hence the theorem is proved. \qed
\end{proof}
\begin{remark}
The previous theorems hold in particular if $\Theta$ is defined as in equation (\ref{eq:def_theta}). Specifically, inequality (\ref{AR_gradient}) is the classical Armijo rule along the projection arc where we employed the operator introduced in Definition \ref{defP}. Inequality (\ref{eq:new_cond}) is an additional condition that prevents NLPC from choosing a stepsize that is too large, which would result in an actual update of only a few components. An illustrative example can be seen in Figure \ref{fig:new_cond}.
Theorem \ref{th:GC} and Theorem \ref{thm:exist_alpha} together guarantee that the stepsize within the gradient descent step of the NLPC algorithm is well defined.
\end{remark}
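To make the interplay between the Armijo rule (\ref{AR_gradient}) and the safeguard condition (\ref{eq:new_cond}) concrete, the following Python fragment sketches the backtracking test. It is an illustrative rendering, not the authors' MATLAB implementation: the index sets $\mathcal{B}(\mbf{x})$, $\mathcal{M}_{\alpha}(\mbf{x})$, $\mathcal{N}_{\alpha}(\mbf{x})$ and the operator $\mathcal{P}$ are expressed as plain loops, and all function and variable names are hypothetical.

```python
import numpy as np

def split_indices(x, g, alpha, l, u):
    """Classify components: a possible rendering of the blocked set B (at a
    bound with the gradient pushing outward), the movable set M (the full
    step stays inside the box) and the clipped set N (the full step would
    leave the box)."""
    B, M, N = [], [], []
    for i in range(len(x)):
        if (x[i] <= l[i] and g[i] > 0) or (x[i] >= u[i] and g[i] < 0):
            B.append(i)
        elif l[i] <= x[i] - alpha * g[i] <= u[i]:
            M.append(i)
        else:
            N.append(i)
    return B, M, N

def proj_step(x, g, alpha, l, u):
    """Operator P(x - alpha*g; x): update only the movable components."""
    _, M, _ = split_indices(x, g, alpha, l, u)
    x_new = x.copy()
    for i in M:
        x_new[i] = x[i] - alpha * g[i]
    return x_new

def backtrack(theta, grad, x, l, u, alpha0=0.79, sigma_G=1e-4, rho=1e-2, jmax=40):
    """Try alpha0**j for j = 1, 2, ...: accept the first stepsize meeting both
    the Armijo rule along the projection arc and the M/N norm-ratio condition;
    if none is accepted, keep the last tested value."""
    g = grad(x)
    for j in range(1, jmax + 1):
        a = alpha0 ** j
        _, M, N = split_indices(x, g, a, l, u)
        x_new = proj_step(x, g, a, l, u)
        armijo = theta(x_new) - theta(x) <= sigma_G * g @ (x_new - x)
        # P_i(x_i - d_i Theta(x)) - x_i with unit stepsize, restricted to M and N
        full = np.clip(x - g, l, u) - x
        ratio = np.linalg.norm(full[M]) >= rho * np.linalg.norm(full[N])
        if armijo and ratio:
            return a, x_new
    return a, x_new
```

Upon acceptance only the components in $\mathcal{M}_{\alpha}(\mbf{x})$ have moved, which is exactly the behaviour the ratio condition monitors.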
\begin{figure}[H]
\begin{center}
\includegraphics[width=12cm]{new_cond.pdf}
\end{center}
\caption{Illustration of the benefit of the additional condition (\ref{eq:new_cond}). In (a) only the first component of $\mbf{x}$ is updated as $1 \in \mathcal{M}_{\alpha}(\mbf{x})$ and $2 \in \mathcal{N}_{\alpha}(\mbf{x})$. In this scenario NLPC may get stuck at a point which is not stationary, because the chosen stepsize is too large and the second component is never updated. As shown in (b), inequality (\ref{eq:new_cond}) prevents this issue by promoting the choice of a smaller stepsize so that a higher number of components is updated. Here, $\widehat{\mbf{x}} = \projP{\mbf{x}-\alpha \nabla \Theta(\mbf{x})}{\mbf{x}}$, $\mbf{x}_{min}$ is a stationary point of $\Theta$, and $\rho=1$. }\label{fig:new_cond}
\end{figure}
Henceforth, $\left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}} \subseteq \Omega$ and $\left\{ \alpha^{j_k} \right\}_{k \in \mathbb{N}}$ shall denote a sequence of points generated with the NLPC algorithm described in Algorithm \ref{algo:gp_newton}, and the corresponding stepsizes, respectively. In particular, $\alpha \in (0, 1)$, while $j_k$ is a suitable exponent whose value belongs to a different range depending on whether Newton's method or the gradient descent approach has been used at the $k$-th iteration.
\begin{lemma}\label{red:dis_conds}
Let $\left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}}$ be a sequence generated with the NLPC algorithm. For each $k \in \mathbb{N}$
\begin{description}
\item[(a)] if $\mbf{x}_{k+1}$ has been obtained with a projected gradient descent step, then
$\Theta(\mbf{x}_{k+1}) \leq \Theta(\mbf{x}_{k})$
\item[(b)] if $\mbf{x}_{k+1}$ has been obtained with a projected Newton's step, then $$\Theta(\mbf{x}_{k+1}) \leq \left( 1-\alpha^J \sigma_N \right)^{n_{k}+1} \Theta(\mbf{x}_0) \, ,$$
where $n_{k}$ is the number of projected Newton's steps performed up to iteration $k$.
\end{description}
\end{lemma}
\begin{proof}
(a) By the Armijo rule along the projected arc in equation (\ref{AR_gradient})
\begin{displaymath}
\Theta(\mbf{x}_{k+1}) \leq \Theta(\mbf{x}_{k}) + \sigma_G \nabla \Theta(\mbf{x}_{k})^T (\mbf{x}_{k+1} - \mbf{x}_{k}) \leq \Theta(\mbf{x}_{k})
\end{displaymath}
where the last inequality follows from Theorem \ref{th:descent_dir}. This proves the claim.\\
(b) If $\mbf{x}_{k+1}$ is defined with the projected Newton's method, then there exists $j \in \{ 0, \dots, J \}$ such that
\begin{equation}\label{eq:dis_newton}
\Theta(\mbf{x}_{k+1}) \leq (1-\alpha^j \sigma_N)\ \Theta(\mbf{x}_{k}) \leq (1-\alpha^J \sigma_N)\ \Theta(\mbf{x}_{k}) \, ,
\end{equation}
where in the last inequality we exploited the fact that $\alpha < 1$. The claim follows by iteratively applying (\ref{eq:dis_newton}) for each projected Newton's step, and the result of point (a) for each projected gradient descent step.\\
\qed
\end{proof}
\begin{theorem}\label{th:conv_analysis}
Let $\left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}}$ be a sequence generated with the NLPC algorithm, and let $\mbf{x}^*$ be an accumulation point of $\left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}}$; then $\mbf{x}^*$ is a stationary point of $\Theta$ in $\Omega$. Additionally if the projected Newton's method has been used for infinitely many $k$, then $\mbf{x}^*$ is a solution of (\ref{eq:box_eqs}).
\end{theorem}
\begin{proof}
Since $\mbf{x}^*$ is an accumulation point of $\left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}}$, there exists a subsequence $\left\{ \mbf{x}_{k} \right\}_{k\in K} \subseteq \left\{ \mbf{x}_k \right\}_{k \in \mathbb{N}} $, $K \subseteq \mathbb{N}$, such that $\lim\limits_{k(\in K) \to \infty} \mbf{x}_{k} = \mbf{x}^*$ and thus $\lim\limits_{k(\in K) \to \infty} \Theta(\mbf{x}_{k}) = \Theta(\mbf{x}^*)$.\\
For all $k \in K$, from Lemma \ref{red:dis_conds} it follows that
\[
\Theta(\mbf{x}_{k}) \leq \left(
1-\alpha^J \sigma_N \right)^{n_{k}} \Theta(\mbf{x}_0) \, .
\]
If the projected Newton's method has been used for infinitely many $k$, then $\lim\limits_{k(\in K) \to \infty} n_{k} = + \infty$; therefore
\[
\Theta(\mbf{x}^*) = \lim\limits_{k(\in K) \to \infty} \Theta(\mbf{x}_{k}) \leq \lim\limits_{k(\in K) \to \infty} \left( 1-\alpha^J \sigma_N \right)^{n_{k}} \Theta(\mbf{x}_0) = 0 \, .
\]
Hence $\Theta(\mbf{x}^*) = 0$, that is $\mbf{x}^*$ solves (\ref{eq:box_eqs}) and is a stationary point of $\Theta$.
Instead, if the projected gradient direction has been used for all but finitely many iterations, then there exists $ \overline{k} \in \mathbb{N}$ such that $\mbf{x}_{k+1}$ has been obtained through a gradient descent step $\forall\ k \geqslant \overline{k}$. From Lemma \ref{red:dis_conds} it follows that $0 \leqslant \Theta(\mbf{x}_{k+1}) \leqslant \Theta(\mbf{x}_{k}) \ \forall \ k \geqslant \overline{k}$, that is, $\{\Theta(\mbf{x}_{k})\}_{k \geqslant \overline{k}}$ is non-increasing and bounded below by zero. Hence it converges and
$$\lim\limits_{k \to \infty} \left( \Theta(\mbf{x}_{k+1}) - \Theta(\mbf{x}_{k}) \right) = 0 \, .$$
Recalling that $\{ \alpha^{j_k} \}_{k \in \mathbb{N}}$ denotes the sequence of stepsizes used within NLPC, from (\ref{AR_gradient}), (\ref{eq9}), and (\ref{eq:prop_norm}) it holds that
\begin{displaymath}
\begin{split}
\Theta(\mbf{x}_{k+1}) - \Theta(\mbf{x}_{k}) & \leqslant \sigma_G \nabla \Theta(\mbf{x}_k)^T \left(\projP{\mbf{x}_k - \alpha^{j_k} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \\
& = - \sigma_G \frac{\norm{ \projP{\mbf{x}_k-\alpha^{j_k}\nabla\Theta(\mbf{x}_k)}{\mbf{x}_k}-\mbf{x}_k}^2}{\alpha^{j_k}} \\
& = - \sigma_G\ \alpha^{j_k} \sum_{i \in \mathcal{M}_{\alpha^{j_k}}(\mbf{x}_k)} (\partial_i \Theta(\mbf{x}_k))^2 \leq 0
\end{split}
\end{displaymath}
and thus
\begin{equation}\label{eq:dim_limit}
\lim\limits_{k (\in K) \to \infty}
\alpha^{j_k} \sum_{i \in \mathcal{M}_{\alpha^{j_k}}(\mbf{x}_k)} (\partial_i \Theta(\mbf{x}_k))^2 = 0 \, .
\end{equation}
Two cases exist: $\liminf\limits_{k(\in K) \to \infty} \alpha^{j_k} > 0$ (\textit{case 1}) and $\liminf\limits_{k(\in K) \to \infty} \alpha^{j_k} = 0$ (\textit{case 2}).
If \textit{case 1} holds, then equation (\ref{eq:dim_limit}) implies
\begin{displaymath}
\begin{split}
0 & =
\lim\limits_{k(\in K) \to \infty} \sum_{i \in \mathcal{M}_{\alpha^{j_k}}(\mbf{x}_{k})} (\partial_i \Theta(\mbf{x}_{k}))^2 \\
& \geq \lim\limits_{k(\in K) \to \infty} \sum_{i \in \mathcal{M}_{\alpha^{j_k}}(\mbf{x}_{k})} (P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i})^2 \\
& \geq \rho \lim\limits_{k(\in K) \to \infty} \sum_{i \in \mathcal{N}_{\alpha^{j_k}}(\mbf{x}_{k})} (P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i})^2 \, ,
\end{split}
\end{displaymath}
where the last inequality comes from the constraint described by (\ref{eq:new_cond}). Hence, in particular
\begin{displaymath}
\begin{split}
\lim\limits_{k(\in K) \to \infty} & \sum_{i \in \mathcal{M}_{\alpha^{j_k}}(\mbf{x}_{k})} (P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i})^2 = \\
& \lim\limits_{k(\in K) \to \infty} \sum_{i \in \mathcal{N}_{\alpha^{j_k}}(\mbf{x}_{k})} (P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i})^2 = 0 \, .
\end{split}
\end{displaymath}
Since additionally $(P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i}) = 0$ for all $i \in \mathcal{B}(\mbf{x}_{k})$, from equation (\ref{sets_BMN}), from the continuity of $\nabla \Theta$ and of the classical projector, and since $\mbf{x}^*$ is the limit point of $\{\mbf{x}_{k}\}_{k \in K}$, it follows that
\begin{displaymath}
||P(\mbf{x}^* - \nabla \Theta(\mbf{x}^*)) - \mbf{x}^*||^2 = \lim\limits_{k(\in K) \to \infty} \sum_{i=1}^n (P_i(x_{{k},i} - \partial_i \Theta(\mbf{x}_{k})) - x_{{k},i})^2 = 0 \, .
\end{displaymath}
Hence $\mbf{x}^*$ is a stationary point.
On the other hand, \textit{case 2} implies that there exists an infinite set $K' \subset K$ such that
$\lim\limits_{k(\in K') \to \infty} \alpha^{j_k} = 0$, and thus $\lim\limits_{k(\in K') \to \infty} \alpha^{j_k-1} = 0$. Therefore, by defining $J = \left\{ i \in \{1, \dots, n\}\ s.t.\ i \notin \mathcal{B}(\mbf{x}^*)\ \land \ |\partial_i \Theta(\mbf{x}^*)| > 0 \right\}$, the following holds.
\begin{description}
\item[(i)] $\mbf{x}^*$ is a stationary point iff $J = \emptyset$. \\
More specifically, $|P_i(x^*_{i} - \partial_i \Theta(\mbf{x}^*)) - x^*_i| = 0$ iff $i \notin J$.
\item[(ii)] There exists $\overline{k}$ such that $\forall\ k \in K',\ k \geq \overline{k}$, $J \subseteq \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_k)$. \\ Indeed, consider $i \in J$. Since in particular $i \notin \mathcal{B}(\mbf{x}^*)$ and $\alpha<1$, from Theorem \ref{thm:exist_alpha} it follows that there exists $\overline{j} \in \mathbb{N}$ such that $\mathcal{N}_{\alpha^j}(\mbf{x}^*) = \emptyset$ and $i \in \mathcal{M}_{\alpha^j}(\mbf{x}^*)$ $\forall j \geqslant \overline{j}$.
It can be easily shown that, since $\mbf{x}^*$ is the limit point of $\left\{ \mbf{x}_{k} \right\}_{k\in K}$, this implies that there exists $k_i$ such that $\forall\ k \in K'$ with $\ k \geq k_i$, $i \in \mathcal{M}_{\alpha^{j}}(\mbf{x}_k)$ $\forall j > \overline{j}$. Additionally, since $\lim\limits_{k(\in K') \to \infty} \alpha^{j_k-1} = 0$, there exists $k'_i \geq k_i $ such that $\forall\ k \in K',\ k \geq k'_i$, $\alpha^{j_k-1} < \alpha^{\overline{j}}$. The claim then follows by taking $\overline{k} = \max\limits_{i \in \{1, \dots, n\}}\{ k'_i\}$.
\end{description}
To prove that $\mbf{x}^*$ is a stationary point we proceed by contradiction, assuming that there exists $i \in J$. Then, by the results in (i) and (ii), it follows that
\begin{displaymath}
\begin{split}
\lim\limits_{k(\in K') \to \infty} & \sqrt {\sum_{i \in \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_{k})} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \\
& \geq \lim\limits_{k(\in K') \to \infty} \sqrt{ \sum_{i \in J} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \\
& = \sqrt{ \sum_{i \in J} |P_i(x^*_{i}-\partial_i \Theta(\mbf{x}^*))-x^*_{i}|^2 } > 0 \, ,
\end{split}
\end{displaymath}
while, denoting $J^C = \left\{1, \dots, n \right\} \setminus J$,
\begin{displaymath}
\begin{split}
\lim\limits_{k(\in K') \to \infty} & \rho \sqrt{ \sum_{i \in \mathcal{N}_{\alpha^{j_k-1}}(\mbf{x}_{k})} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \\
& \leq \rho \lim\limits_{k(\in K') \to \infty} \sqrt{ \sum_{i \in J^C} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \\
& = \rho \sqrt{ \sum_{i \in J^C} |P_i(x^*_{i}-\partial_i \Theta(\mbf{x}^*))-x^*_{i}|^2 } = 0 \, .
\end{split}
\end{displaymath}
Therefore, for sufficiently large $k \in K'$
\begin{displaymath}
\sqrt{ \sum_{i \in \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_{k})} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \geq\ \rho \sqrt{ \sum_{i \in \mathcal{N}_{\alpha^{j_k-1}}(\mbf{x}_{k})} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 }
\end{displaymath}
i.e. condition (\ref{eq:new_cond}) is satisfied by the stepsize $\alpha^{j_k-1}$, which is the last stepsize tried by NLPC before the chosen one. As a consequence, such a stepsize cannot satisfy condition (\ref{AR_gradient}), i.e.
\begin{displaymath}
\begin{split}
\Theta( \projP{\mbf{x}_k & -\alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k}) - \Theta(\mbf{x}_k)\\
& > \sigma_G \nabla \Theta(\mbf{x}_k)^T \left(\projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \, .
\end{split}
\end{displaymath}
By the mean value theorem, there exists $\tau \in (0, 1)$ such that, setting $\boldsymbol{\xi}_k =\tau \mbf{x}_{k} + (1-\tau) \projP{\mbf{x}_{k} - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_{k})}{\mbf{x}_{k}}$,
\begin{displaymath}
\begin{split}
\Theta( \projP{\mbf{x}_k & -\alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k}) - \Theta(\mbf{x}_k) \\
& = \nabla \Theta(\boldsymbol{\xi}_k)^T \left(\projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \\
& = \left(\nabla \Theta(\boldsymbol{\xi}_k) - \nabla \Theta(\mbf{x}_k)\right)^T \left(\projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \\
& + \nabla \Theta(\mbf{x}_k)^T \left(\projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \, .
\end{split}
\end{displaymath}
Together with the previous result this implies
\begin{displaymath}
\begin{split}
(1-\sigma_G) & \nabla \Theta(\mbf{x}_k)^T \left(\mbf{x}_k - \projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} \right) \\
& < \left(\nabla \Theta(\boldsymbol{\xi}_k) - \nabla \Theta(\mbf{x}_k)\right)^T \left(\projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k \right) \\
& \leq \norm{\nabla \Theta(\boldsymbol{\xi}_k) - \nabla \Theta(\mbf{x}_k)} \cdot \norm{ \projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k} \, ,
\end{split}
\end{displaymath}
hence
\begin{displaymath}
\begin{split}
\frac{1}{1-\sigma_G} \norm{\nabla \Theta(\boldsymbol{\xi}_k) - \nabla \Theta(\mbf{x}_k)}
& > \frac{\nabla \Theta(\mbf{x}_k)^T \left(\mbf{x}_k - \projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} \right) }{\norm{ \projP{\mbf{x}_k - \alpha^{j_k-1} \nabla \Theta(\mbf{x}_k)}{\mbf{x}_k} - \mbf{x}_k}}\\
& = \sqrt{\sum_{i \in \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_{k})} (\partial_i \Theta (\mbf{x}_{k}))^2 } \, ,
\end{split}
\end{displaymath}
where the last equality comes from equations (\ref{eq9}) and (\ref{eq:prop_norm}).
Therefore, from the properties of the classical projector and from the result previously shown in (ii), it follows
\begin{displaymath}
\begin{split}
0 & = \lim\limits_{k(\in K') \to \infty} \sqrt{\sum_{i \in \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_{k})} (\partial_i \Theta (\mbf{x}_{k}))^2 } \\
& \geq \lim\limits_{k(\in K') \to \infty} \sqrt{\sum_{i \in \mathcal{M}_{\alpha^{j_k-1}}(\mbf{x}_{k})} |P_i(x_{k,i}-\partial_i \Theta(\mbf{x}_{k}))-x_{k,i}|^2 } \\
& \geq \sqrt{\sum_{i \in J} |P_i(x^*_{i}-\partial_i \Theta(\mbf{x}^*))-x^*_{i}|^2} \, ,
\end{split}
\end{displaymath}
which is possible only if $J=\emptyset$ and thus contradicts our hypothesis.
\qed
\end{proof}
As a final remark, we observe that all the results proven in this section can be easily extended to the case where the gradient direction is normalized \cite{Wattetal2020}.
\section{Application to chemical reaction networks}\label{sec:NLPC_for_CRN}
Consider a chemical reaction network (CRN) composed of $r$ chemical reactions involving $n$ well-mixed proteins. Specifically, in this work we will focus on the CRN devised for modeling cell signaling during the G1-S transition phase in colorectal cells described in \cite{Tortolina2015,sommariva2021_scirep} and henceforth denoted as CR-CRN. In this case $n=419$ and $r=851$.
By assuming that the law of mass action holds \cite{Yu_Craciun_2018,sommariva2021_JMB}, the dynamics of the CRN gives rise to a set of $n$ ordinary differential equations (ODEs)
\begin{equation}
\dot{\mbf{x}} = \mbf{S} \mbf{v}(\mbf{x}, \mbf{k})\label{eq:ODEs_CRN}
\end{equation}
where the state vector $\mbf{x} \in \mathbb{R}^n_+$ contains the protein molecular concentrations (nM); the superposed dot denotes the time derivative; $\mbf{S}$ is the constant stoichiometric matrix of size $n \times r$; $\mbf{k} \in \mathbb{R}_+^r$ are the rate constants of the reactions; and $\mbf{v}(\mbf{x}, \mbf{k}) \in \mathbb{R}^r_+$ is the time-variant vector of the reaction fluxes. Specifically, from the law of mass action it follows \cite{otero2017}
\begin{equation}\label{eq:def_v}
\mbf{v}(\mbf{x}, \mbf{k}) = \textrm{diag}(\mbf{k}) \mbf{z}(\mbf{x})
\end{equation}
where the elements of $\mbf{z}(\mbf{x})$ are monomials of the form $z_j(\mbf{x}) = \prod_{i=1}^n x_i^{p_{ij}}$, $\forall j = 1, \dots, r$. In the CR-CRN, $p_{ij} \in \left\{0, 1, 2 \right\}$, because all the reactions involve up to two reactants.
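For illustration, the mass-action fluxes of Eq. (\ref{eq:def_v}) can be evaluated in a few lines of NumPy, assuming the exponents $p_{ij}$ are stored as an $n \times r$ integer matrix. This is a hypothetical sketch with illustrative names, not the code from the repository cited below.

```python
import numpy as np

def mass_action_fluxes(x, k, P):
    """Reaction fluxes v(x, k) = diag(k) z(x) under mass-action kinetics,
    where z_j(x) = prod_i x_i^{P[i, j]} and P is the n x r matrix of
    reactant exponents (entries in {0, 1, 2} for the CR-CRN)."""
    # Evaluate all r monomials at once via broadcasting, then scale by k
    z = np.prod(x[:, None] ** P, axis=0)
    return k * z
```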
Given a solution $\mbf{x}(t)$ of system (\ref{eq:ODEs_CRN}), a semi-positive conservation vector is a constant vector $\boldsymbol{\gamma} \in \mathbb{N}^n \setminus \{\mathbf{0}\}$ for which there exists $c \in \mathbb{R}_+$ such that $\boldsymbol{\gamma}^T \mathbf{x}(t) = c$ $\forall \ t$ \cite{sommariva2021_JMB,shinar2009}. Conservation vectors can be determined by studying the kernel of $\mathbf{S}^T$ \cite{schuster1991}. In the remainder of the paper we shall assume that the considered CRN satisfies the following properties in terms of its conservation vectors.
\begin{description}
\item{(i)} The CRN is \textit{weakly elemented} \cite{sommariva2021_JMB}, i.e. there exists a set of independent generators $\left\{ \boldsymbol{\gamma}_1, \dots, \boldsymbol{\gamma}_p \right\} \subset \mathbb{N}^n \setminus \{\mbf{0}\}$ of the semi-positive conservation vectors such that $p = n - \text{rank}(\mathbf{S})$ and, up to a change of the proteins order,
\begin{equation}
\mathbf{N} :=
\begin{bmatrix}
\boldsymbol{\gamma}_1^T \\
\vdots \\
\boldsymbol{\gamma}_{p}^T \end{bmatrix}
= \left[\mathbf{I}_p, \mathbf{N}_2 \right] \ ,
\end{equation}
where $\mathbf{I}_p$ is the identity matrix of size $p \times p$.
\item{(ii)} The CRN satisfies the \textit{global stability condition} \cite{sommariva2021_JMB}, i.e. for each $\mbf{c} \in \mathbb{R}^p_+$ there exists a unique asymptotically stable state on the stoichiometric compatibility class (SCC) $\left\{\mbf{x} \in \mathbb{R}^n_+\ \text{s.t.}\ \mbf{N}\mbf{x} = \mbf{c} \right\}$. For a fixed SCC, the corresponding asymptotically stable state $\mbf{x}_e \in \mathbb{R}^n_+$ solves the system
\begin{equation}\label{eq:rec_system}
\begin{cases}
\mbf{S} \mbf{v}(\mbf{x}, \mbf{k}) = 0 \\
\mbf{N} \mbf{x} - \mbf{c} = 0.
\end{cases}
\end{equation}
\end{description}
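In practice, a real basis of $\ker(\mathbf{S}^T)$ can be computed numerically, as sketched below in Python. Note that this only yields real-valued conservation vectors; extracting the \textit{semi-positive integer} generators $\boldsymbol{\gamma}_1, \dots, \boldsymbol{\gamma}_p$ required by property (i) needs additional combinatorial processing \cite{schuster1991} that is not shown here, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import null_space

def conservation_matrix(S):
    """Return a real basis of ker(S^T), one vector per row: each row gamma
    satisfies gamma^T S = 0, hence gamma^T x(t) is constant along every
    trajectory of x' = S v(x, k)."""
    return null_space(S.T).T
```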
\begin{lemma}\label{lemma:squared_system}
For a weakly elemented CRN satisfying the global stability condition, the system in (\ref{eq:rec_system}) is equivalent to the square system
\begin{equation}\label{eq:square_system}
\begin{cases}
\mbf{S}_2 \mbf{v}(\mbf{x}, \mbf{k}) = 0 \\
\mbf{N} \mbf{x} - \mbf{c} = 0
\end{cases}
\end{equation}
where $\mbf{S}_2$ is a matrix of size $(n-p) \times r$ defined by the last $n-p$ rows of $\mbf{S}$.
\end{lemma}
\begin{proof}
A solution of (\ref{eq:rec_system}) clearly also solves (\ref{eq:square_system}). On the other hand, let $\mbf{x}_e$ be a solution of (\ref{eq:square_system}). The lemma is proved by showing that
\begin{displaymath}
\mbf{S}_1 \mbf{v}(\mbf{x}_e, \mbf{k}) = 0 \ ,
\end{displaymath}
where $\mbf{S}_1$ is the matrix of size $p \times r$ defined by the first $p$ rows of $\mbf{S}$. To this end we observe that, since any conservation vector belongs to the kernel of $\mathbf{S}^T$, it holds that
\begin{displaymath}
\mbf{0} = \mbf{N} \mbf{S} = \mbf{S}_1 + \mbf{N_2} \mbf{S_2} \ .
\end{displaymath}
Therefore
\begin{displaymath}
\mbf{S}_1 \mbf{v}(\mbf{x}_e, \mbf{k}) = - \mbf{N_2} \mbf{S_2} \mbf{v}(\mbf{x}_e, \mbf{k}) = 0\ .
\end{displaymath}
\qed
\end{proof}
According to Lemma \ref{lemma:squared_system}, in a weakly elemented CRN satisfying the global stability condition, the equilibrium point on a fixed SCC can be computed by solving a box--constrained system as in equation (\ref{eq:box_eqs}), with $\Omega = \mathbb{R}_+^n$ and
\begin{equation}\label{eq:f_CRN}
\mbf{f}(\mbf{x}) = \left[
\begin{array}{c}
\mbf{S}_2 \mbf{v}(\mbf{x}, \mbf{k}) \\
\mbf{N} \mbf{x} - \mbf{c}
\end{array}\right] \, .
\end{equation}
\begin{lemma} Consider the function $\mbf{f}:\mathbb{R}^n \rightarrow \mathbb{R}^n$ defined as in equation (\ref{eq:f_CRN}). $\mbf{f}$ is continuously differentiable on $\mathbb{R}^n_+$ and
\begin{equation}\label{eq:J_f_CRN}
\mbf{J}_{\mbf{f}}(\mbf{x}) = \left[
\begin{array}{c}
\mbf{S}_2 \textrm{diag}(\mbf{k}) \mbf{J}_\mbf{z}(\mbf{x}) \\
\mbf{N}
\end{array}\right],
\end{equation}
where $[J_{\mbf{z}}(\mbf{x})]_{ji} = p_{ij} x_i^{p_{ij}-1} \prod_{\ell=1, \ell \neq i}^n x_\ell^{p_{\ell j}}$, $\forall i \in \{ 1, \dots, n\}$ and $j = 1, \dots, r$.
\end{lemma}
\begin{proof}
The claim follows from the definition of the reaction fluxes in
Eq. (\ref{eq:def_v}). \qed
\end{proof}
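The assembly of $\mbf{f}(\mbf{x})$ and $\mbf{J}_{\mbf{f}}(\mbf{x})$ from Eqs. (\ref{eq:f_CRN}) and (\ref{eq:J_f_CRN}) can be sketched as follows. This is an illustrative NumPy rendering under the same assumption as above (the exponents $p_{ij}$ stored as an $n \times r$ matrix); names are hypothetical and this is not the repository code.

```python
import numpy as np

def f_and_jacobian(x, k, P, S2, N, c):
    """Assemble f(x) = [S2 v(x,k); N x - c] and its Jacobian
    J_f(x) = [S2 diag(k) J_z(x); N] for the box-constrained system."""
    n, r = P.shape
    z = np.prod(x[:, None] ** P, axis=0)           # monomials z_j(x)
    v = k * z                                      # mass-action fluxes
    f = np.concatenate([S2 @ v, N @ x - c])
    # [J_z]_{j,i} = p_ij * x_i^{p_ij - 1} * prod_{l != i} x_l^{p_lj}
    Jz = np.empty((r, n))
    for i in range(n):
        xi = x.copy()
        xi[i] = 1.0
        others = np.prod(xi[:, None] ** P, axis=0)  # monomial without x_i
        Jz[:, i] = P[i, :] * x[i] ** np.maximum(P[i, :] - 1, 0) * others
    Jf = np.vstack([S2 @ (k[:, None] * Jz), N])
    return f, Jf
```

The pair `(f, Jf)` is exactly what a projected Newton's step needs at each iterate.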
\section{Numerical results on the CR-CRN}\label{sec:results}
\subsection{General consideration}\label{sec:implementation}
To show the advantages of using NLPC for computing the asymptotically stable states of a CRN, we applied it to the CR-CRN. The parameters describing the network in a physiological state have been extensively described in previous works \cite{Tortolina2015,sommariva2021_scirep,sommariva2021_JMB} and can be downloaded from the GitHub repository \url{https://github.com/theMIDAgroup/CRC_CRN.git} as a MATLAB\textsuperscript{\textregistered} structure. This includes the list of proteins and reactions involved in the network, as well as the values of the rate constants $\mathbf{k}$ and of the total conserved moieties $\mathbf{c}$. The corresponding stoichiometric matrix $\mathbf{S}$ and reaction fluxes $\mathbf{v}(\mathbf{x}, \mathbf{k})$ can be derived as described in the previous section. The aforementioned repository also contains the MATLAB\textsuperscript{\textregistered} codes implementing the NLPC algorithm and the analysis shown in this paper.
We exploited the model introduced by Sommariva and colleagues \cite{sommariva2021_scirep,sommariva2021_JMB} to test the proposed approach under different biologically-plausible conditions. Specifically, we modified the values of the parameters $\mathbf{k}$ and $\mathbf{c}$ as described in \cite{sommariva2021_scirep} to simulate the effect of some of the mutations that most commonly arise in colorectal cancer. A total of 9 different mutations was considered (loss of function of APC, AKT, SMAD4, PTEN, p53, and gain of function of k-Ras, Raf, PI3K, Betacatenin), which give rise to as many different mutated networks.
From a practical point of view, if not otherwise specified, the parameters required as input by Algorithm \ref{algo:gp_newton} were set as follows. The threshold within the stopping criterion was $\tau=10^{-12}$, while $\sigma_N = \sigma_G = 10^{-4}$ and $\rho = 10^{-2}$. The initial stepsize was $\alpha=0.79$ and a maximum of $J=20$ stepsizes was tested within each iteration of the Newton's method. NLPC was initialized with a point $\mathbf{x}_0$ randomly drawn from the SCC $\left\{\mbf{x} \in \mathbb{R}^n_+\ \text{s.t.}\ \mbf{N}\mbf{x} = \mbf{c} \right\}$ by exploiting the procedure presented in \cite{sommariva2021_JMB}. Additionally, we only retained points such that the condition number of the Jacobian matrix $\mbf{J}_{\mbf{f}}(\mbf{x}_0)$ was lower than $10^{17}$. To avoid the algorithm getting stuck in a stationary point that is not a zero of $\mathbf{f}$, we also set a maximum number of allowed iterations: if the stopping criterion was not reached after 250 iterations, then a new initial point $\mbf{x}_0$ was drawn from the SCC and NLPC was restarted.
Finally, to speed up the performance of NLPC, within the gradient descent method we normalized the gradient direction \cite{Wattetal2020} and we fixed a maximum number of tested stepsizes also for this approach: if conditions (\ref{AR_gradient}) and (\ref{eq:new_cond}) were not met after 40 possible values of the step length, we chose the last tested value and performed again a gradient descent step at the following NLPC iteration.
\subsection{Comparison with a classical dynamic approach}
\label{par:dyn}
A classical approach \cite{sommariva2021_JMB,sommariva2021_scirep} for computing the stationary state of system (\ref{eq:ODEs_CRN}) on a given SCC consists in simulating the whole concentration dynamics $\mbf{x}(t)$ by solving the Cauchy problem
\begin{equation}\label{eq:chauchy_pb}
\begin{cases}
\dot{\mbf{x}} = \mbf{S} \mbf{v}(\mbf{x}, \mbf{k}) \\
\mbf{x}(0) = \mbf{x}_0
\end{cases} \, ,
\end{equation}
where $\mbf{x}_0$ is a point on the SCC, and then computing the asymptotic value
\begin{equation}\label{eq:dyn_sol}
\mbf{x}_{dyn} = \lim_{t \to +\infty} \mbf{x}(t)\ .
\end{equation}
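In practice, the limit in (\ref{eq:dyn_sol}) is approximated by integrating the ODE system up to a large final time with a stiff solver. The experiments below use MATLAB's \texttt{ode15s}; the following Python sketch performs the analogous computation with SciPy's BDF solver (function name and tolerances are illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

def dynamic_equilibrium(S, v, x0, t_end=2.5e7):
    """Approximate the asymptotically stable state by integrating
    x' = S v(x) with a stiff solver (BDF, in the spirit of ode15s)
    and returning the state at the final time-point."""
    rhs = lambda t, x: S @ v(x)
    sol = solve_ivp(rhs, (0.0, t_end), x0, method="BDF", rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]
```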
In this section we compare the results obtained through this approach with those obtained with the NLPC algorithm.
To this end, we started from the CR-CRN and we built 10 different experiments, by varying the values of the kinetic parameters $\mbf{k}$ and of the total conserved moieties $\mbf{c}$ that define the SCC, so as to mimic a colorectal cell either healthy or affected by one of the 9 mutations listed in Subsect. \ref{sec:implementation}. For each experiment, we sampled 50 initial points $\mbf{x}_0^{(j)}$ on the corresponding SCC. For each initial point, i.e. for $j = 1, \dots, 50$, we computed the solution $\mbf{x}_{nlpc}^{(j)}$ provided by the NLPC algorithm and we compared it with the asymptotically stable state $\mbf{x}_{dyn}^{(j)}$ computed through the dynamic approach just described. Specifically, as in \cite{sommariva2021_JMB}, we used the MATLAB\textsuperscript{\textregistered} tool \texttt{ode15s} \cite{Shampine} to integrate the ODEs system in (\ref{eq:chauchy_pb}) on the interval $[0, 2.5 \cdot 10^7]$ and we defined $\mbf{x}_{dyn}^{(j)}$ as the value of the computed solution at the last time-point of the interval.
As shown in Fig. \ref{fig:time} and \ref{fig:time_precision}, NLPC outperforms the dynamic approach in terms of both accuracy of the obtained results and computational cost. Indeed, Fig. \ref{fig:time} shows that in all the 10 considered experiments, the elapsed time for the NLPC algorithm, averaged across 50 runs obtained by varying the initial points, ranges from about $5$ sec (mutated network with gain of function of k-Ras) to $33$ sec (mutated network with loss of function of PTEN). On the contrary, the results of the dynamic approach show a higher variability across the different CRNs and the averaged elapsed time scales up to about $8$ min in the network incorporating a gain of function mutation of PI3K. It is worth noticing that, for each of the 10 experiments, a few runs of NLPC required a higher elapsed time (higher than the third quartile of the corresponding distributions). These runs needed a large number of restarts of the NLPC algorithm due to the fact that the maximum number of 250 iterations was reached without meeting the stopping criterion on the norm of $\mbf{f}$, probably because the gradient method tended to stationary points that were not roots of $\mbf{f}$. Future work will be devoted to refining the stopping criterion so that, when needed, NLPC is restarted before reaching 250 iterations.
Since we are looking for the roots of $\mbf{f}$, the accuracy of the obtained results was evaluated by computing the $\ell_2$-norm of $\mbf{f}$ at the solutions provided by the two algorithms, namely $\mbf{x}_{nlpc}^{(j)}$ and $\mbf{x}_{dyn}^{(j)}$, $j \in \{1, \dots, 50\}$. As shown in Fig. \ref{fig:time_precision}, for all 10 considered experiments the norm of $\mbf{f}$ at the NLPC solutions, $\mbf{x}_{nlpc}^{(j)}$, was always below $10^{-12}$ as imposed by the stopping criterion of the algorithm. Instead, the value of $||\mbf{f}(\mbf{x}_{dyn}^{(j)})||$ ranged between $10^{-2}$ and $10^{1}$, regardless of the time employed to compute the solution $\mbf{x}_{dyn}^{(j)}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm]{bp_time_grad2.png}
\end{center}
\caption{Elapsed time for the NLPC algorithm to converge compared to the time required to compute the equilibrium point by solving the dynamical system in (\ref{eq:chauchy_pb}). Boxplots summarize the values obtained across 50 different runs for 10 distinct networks mimicking either a physiological state (phys) or a mutation affecting the protein shown in the axis labels.}\label{fig:time}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm]{scatter_nlpc_dyn2.png}
\end{center}
\caption{Accuracy as a function of the elapsed time for the NLPC algorithm (left) and the dynamic approach (right). Accuracy is quantified as the norm of $\mbf{f}$ evaluated in the results provided by the two algorithms, $\mbf{x}_{nlpc}$ and $\mbf{x}_{dyn}$, respectively. In each panel, 50 different results are shown for each of the considered CRNs that mimic mutation of k-Ras, Raf and PTEN (orange diamonds), physiological state and mutation of Betacatenin, APC, AKT, SMAD4, PTEN, p53 (yellow crosses), and mutation of PI3K (purple dots). This color code has been chosen so as to cluster together results for which the times required for computing $\mbf{x}_{dyn}$ were similar, as depicted in Fig. \ref{fig:time}.
Notice the different scale on the y-axis.}\label{fig:time_precision}
\end{figure}
\subsection{Benefits of the operator $\mathcal{P}$ over the classical projector}
The goal of this section is to quantify the benefit of using the operator $\mathcal{P}$ instead of the classical projector $P$ on the closed convex set $\Omega$ defined in Eq. (\ref{eq:classical_P}). To this end, for each of the 10 experiments defined in the previous section, and for each of the 50 initial points $\mathbf{x}_0^{(j)}$, $j \in \{1, \dots, 50\}$, drawn on the corresponding SCCs, we computed the solution of NLPC by replacing in Algorithm \ref{algo:gp_newton} the proposed operator $\mathcal{P}$ with the classical projector $P$. We denote the corresponding solution by $\mbf{x}_{ort}^{(j)}$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{bp_restarts_ort.png}
\end{center}
\caption{Number of restarts required by NLPC in order to satisfy the stopping criterion within the fixed maximum number of iterations. The boxplots describe the values obtained across 50 different runs for 10 distinct networks when, within NLPC, we employed the proposed non linear projector $\mathcal{P}$ (red) or the classical orthogonal projector $P$ (green).}\label{fig:attempts}
\end{figure}
As shown in Figure \ref{fig:attempts}, when combined with the classical projector, the NLPC algorithm requires a higher number of restarts, and thus a longer elapsed time, than when the proposed operator is used. Specifically, the ratio between the number of restarts required with the projector $P$ and the one required with the operator $\mathcal{P}$, averaged over all the 10 considered experiments and all the sampled initial points, is around 4.87.
The poor performance of the projector $P$ is caused by the fact that at any given iteration $k$ all the negative components of the newly proposed point $\mbf{x}_{k+1}$ are set equal to zero. As a consequence, the percentage of proteins estimated as having a null concentration increases sharply, and this results in a high condition number of the corresponding Jacobian matrix $\textbf{J}_{\textbf{f}}$ defined as in \eqref{eq:J_f_CRN}.
In turn, the ill-conditioning of $\textbf{J}_{\textbf{f}}$ compromises the stability of Newton's method and thus NLPC algorithm tends to spend most of the allowed iterations by performing gradient descent steps.
As shown in Table \ref{tab:proj_ort}, the use of the operator $\mathcal{P}$ helps prevent this issue.
\begin{table}[ht]
\centering
\caption{Average and standard deviation over 50 initial points of the maximum percentage of null components (first row) and the maximum condition number of the Jacobian matrix $\mbf{J}_{\mbf{f}}$ (second row) reached across the iterations performed by NLPC. Results obtained by using the novel nonlinear projector $\mathcal{P}$ (first column) and the classical orthogonal projector $P$ (second column) are compared. Since results across the 10 considered experiments were similar, only those concerning the original physiological CR-CRN are shown. }\label{tab:proj_ort}
\begin{tabular}{c|*{2}{c|}} \cline{2-3} & \textbf{Novel proj.} $\mathcal{P}$ & \textbf{Orth. proj. $P$} \\
\hline
\multicolumn{1}{|c|}{\textbf{Num. null components (\%)}} &0.59 $\pm$ 0.14& 36.99 $\pm$ 8.42 \\\hline
\multicolumn{1}{|c|}{\textbf{Cond. Number $\mbf{J}_{\mbf{f}}$ (log. scale)}} & 14 $\pm$ 2& 17 $\pm$ 3\\
\hline
\end{tabular}
\end{table}
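The contrast between the two behaviours can be illustrated with a toy computation. In the sketch below, all numbers are synthetic and unrelated to the CR-CRN model, and the damped step shown is only one illustrative positivity-preserving alternative, not the actual definition of the operator $\mathcal{P}$: clamping a Newton-type proposal onto the nonnegative orthant produces many exact zeros, while shrinking the step so that the iterate stays strictly positive produces none.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=100)    # strictly positive current iterate
step = rng.normal(0.0, 0.5, size=100)  # a Newton-type update direction

# classical orthogonal projection onto the nonnegative orthant:
# every component of x - step that is negative becomes exactly zero
x_orth = np.maximum(x - step, 0.0)
null_frac = np.mean(x_orth == 0.0)

# illustrative positivity-preserving alternative: shrink the whole step
# just enough that every component remains strictly positive
alpha = 1.0
blocked = step > x  # components the full step would drive nonpositive
if blocked.any():
    alpha = 0.99 * np.min(x[blocked] / step[blocked])
x_pos = x - alpha * step
```

With these draws a sizeable fraction of components is clamped to exactly zero by the orthogonal projection, which is precisely the behaviour that inflates the condition number of $\mathbf{J}_{\mathbf{f}}$, whereas the damped update keeps every concentration strictly positive.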
\section{Conclusions}\label{sec:conclusions}
In this paper, an iterative algorithm for solving box-constrained root-finding problems has been presented. It combines Newton's and gradient descent methods and exploits the operator $\mathcal{P}$ in Def. \eqref{defP} to enforce the required constraints at each iteration (and to prevent the numerical instability issues that would occur if the projector $P$ were applied).
Together with a suitable backtracking rule, we proved that the method converges to a stationary point of the objective function in Eq. \eqref{eq:def_theta}. Despite outperforming the dynamic approach in both accuracy and speed, in the CRN framework the NLPC algorithm provides less information than simulating the whole concentration dynamics.
However, in many contexts, such as tuning kinetic parameters starting from experimental data or the applications described in \cite{sommariva2021_scirep,sommariva2021_JMB}, understanding the whole dynamics is not required, and only the equilibrium points of the system are of interest.
Finally, it would be interesting to define and implement a stopping criterion for the case in which the algorithm converges to stationary points that do not coincide with roots of $\mbf{f}$. This is left for future work.
\begin{acknowledgements}
S.B. was granted a Ph.D. scholarship by Roche S.p.A., Italy. F.B. and S.S. have been partially supported by Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM).\\
\noindent
\textbf{Data Availability}
The datasets and codes generated and analysed during the current study are available in the GitHub repository, \url{https://github.com/theMIDAgroup/CRC_CRN.git}
\end{acknowledgements}
\bibliographystyle{spmpsci_unsrt}
\subsection{Layer-Wise Adaptive Model Aggregation}
\textbf{Adaptive Model Aggregation Algorithm} -- We propose FedLAMA, a layer-wise adaptive model aggregation scheme.
Algorithm \ref{alg:fedlama} shows the FedLAMA algorithm.
There are two input parameters: $\tau'$ is the base aggregation interval and $\phi$ is the interval increase factor.
First, the parameters at layer $l$ are synchronized across the clients after every $\tau_l$ iterations (line $6$).
Then, the proposed metric $d_l$ is calculated using the synchronized parameters $\mathbf{u}_l$ (line $7$).
At the end of every $\phi \tau'$ iterations, FedLAMA adjusts the model aggregation interval at all the $L$ layers (line $9$).
Algorithm \ref{alg:lama} finds the layers that can be aggregated less frequently with a minimal impact on the total model discrepancy.
First, the layer-wise degree of model discrepancy is estimated as follows.
\begin{align}
\delta_l = \frac{\sum_{i=1}^{l} \hat{d}_i * \textrm{dim}(\mathbf{u}_i)}{\sum_{i=1}^{L} \hat{d}_i * \textrm{dim}(\mathbf{u}_i)},
\end{align}
where $\hat{d}_i$ is the $i^{th}$ smallest element in the sorted list of the proposed metric $d$.
Given $l$ layers with the smallest $d_l$ values, $\delta_l$ quantifies their contribution to the total model discrepancy.
Second, the communication cost impact is estimated as follows.
\begin{align}
\lambda_l = \frac{\sum_{i=1}^{l} \textrm{dim}({\mathbf{u}_i})}{\sum_{i=1}^{L} \textrm{dim}({\mathbf{u}_i})}
\end{align}
$\lambda_l$ is the ratio of the parameters at the $l$ layers with the smallest $d_l$ values.
Thus, $1 - \lambda_l$ is the fraction of parameters that will be synchronized more frequently than the others.
As $l$ increases, $\delta_l$ increases while $1 - \lambda_l$ decreases monotonically.
Algorithm \ref{alg:lama} loops over the $L$ layers to find the $l$ value that makes $\delta_l$ and $1 - \lambda_l$ similar.
In this way, it finds an aggregation interval setting that slightly sacrifices the model discrepancy while remarkably reducing the communication cost.
\begin{algorithm}[t]
\SetKwInOut{Input}{Input}
\SetKwRepeat{Do}{do}{while}
\Input{$\mathbf{d}$: the observed model discrepancy at all $L$ layers, $\tau'$: the base aggregation interval, $\phi$: the interval increasing factor.}
Sorted model discrepancy: $\hat{\mathbf{d}} \leftarrow $ sort ($\mathbf{d}$)\;
Sorted index of the layers: $\hat{\mathbf{i}} \leftarrow $ argsort ($\mathbf{d}$)\;
Total model size: $\lambda \leftarrow \sum_{l=1}^{L}$ dim($\mathbf{u}_l$)\;
Total model discrepancy: $\delta \leftarrow \sum_{l=1}^{L} d_l * \textrm{dim}(\mathbf{u}_l)$\;
\For{$l = 1$ to $L$}{
$\delta_l \leftarrow (\sum_{i=1}^{l} \hat{d}_i * \textrm{dim}(\mathbf{u}_i)) / \delta$\;
$\lambda_l \leftarrow (\sum_{i=1}^{l} \textrm{dim}(\mathbf{u}_i)) / \lambda$\;
Find the layer index: $i \leftarrow \hat{i}_l$ \;
\If{$\delta_l < \lambda_l$}{
$\tau_i \leftarrow \phi \tau'$\;
}
\Else{
$\tau_i \leftarrow \tau'$\;
}
}
\textbf{Output:} {$\mathbf{\tau}$}: the adjusted aggregation intervals at all $L$ layers.\;
\caption{
Layer-wise Adaptive Interval Adjustment.
}
\label{alg:lama}
\end{algorithm}
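The adjustment loop of Algorithm \ref{alg:lama} can be transcribed into a few lines of Python (a direct sketch of the pseudocode; the layer discrepancies and sizes passed to it are made-up inputs):

```python
import numpy as np

def adjust_intervals(d, dims, tau_base, phi):
    """Layer-wise adaptive interval adjustment, following Algorithm 2.

    d[l]    -- observed model discrepancy of layer l
    dims[l] -- number of parameters of layer l
    Returns the per-layer aggregation intervals tau."""
    d, dims = np.asarray(d, float), np.asarray(dims, float)
    order = np.argsort(d)                    # layer indices, smallest d first
    cum_disc = np.cumsum(d[order] * dims[order])
    cum_dim = np.cumsum(dims[order])
    total_disc, total_dim = cum_disc[-1], cum_dim[-1]
    tau = np.empty(len(d), dtype=int)
    for l, i in enumerate(order):
        delta_l = cum_disc[l] / total_disc   # cumulative discrepancy share
        lambda_l = cum_dim[l] / total_dim    # cumulative parameter share
        # layers whose discrepancy share stays below their size share
        # are synchronized less frequently
        tau[i] = phi * tau_base if delta_l < lambda_l else tau_base
    return tau
```

For instance, with two equally sized layers whose discrepancies are $0.1$ and $0.9$, `adjust_intervals([0.1, 0.9], [100, 100], tau_base=2, phi=4)` returns `[8, 2]`: only the low-discrepancy layer receives the enlarged interval.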
Figure \ref{fig:cross} shows the $\delta_l$ and $1 - \lambda_l$ curves collected from a) CIFAR-10 (ResNet20) training and b) CIFAR-100 (Wide-ResNet28-10) training.
The x-axis is the number of layers to increase the aggregation interval and the y-axis is the $\delta_l$ and $1 - \lambda_l$ values.
The cross point of the two curves is much lower than $0.5$ on the y-axis in both charts.
For instance, in Figure \ref{fig:cross}.a), the two curves cross when the $x$ value is $9$, and the corresponding $y$ value is near $0.2$.
That is, when the aggregation interval is increased at those $9$ layers, $20\%$ of the total model discrepancy increases by a factor of $\phi$ while $80\%$ of the total communication cost decreases by the same factor.
Note that the cross points are below $0.5$ because $\delta_l$ and $1 - \lambda_l$ are calculated using the $d_l$ values sorted in increasing order.
It is worth noting that FedLAMA can be easily extended to improve the convergence rate at the cost of having minor extra communications.
In this work, we do not consider finding such interval settings because it can increase the latency cost, which is not desired in Federated Learning.
However, in the environments where the latency cost can be ignored, such as high-performance computing platforms, FedLAMA can accelerate the convergence by adjusting the intervals based on the cross point of $1 - \delta_l$ and $\lambda_l$ calculated using the list of $d_l$ values sorted in a decreasing order.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{image/fig_cross}
\caption{
The comparison between the model discrepancy increase factor $\delta_l$ and the communication cost decrease factor $1 - \lambda_l$ for a) CIFAR-10 and b) CIFAR-100 training.
}
\label{fig:cross}
\end{figure}
\textbf{Impact of Aggregation Interval Increasing Factor $\phi$} --
In Federated Learning, the communication latency cost is usually not negligible, and the total number of communications strongly affects the scalability.
When increasing the aggregation interval, Algorithm \ref{alg:lama} multiplies a pre-defined small constant $\phi$ to the fixed base interval $\tau'$ (line 10).
This approach ensures that the communication latency cost is not increased while the network bandwidth consumption is reduced by a factor of $\phi$.
FedAvg can be considered a special case of FedLAMA in which $\phi$ is set to $1$.
When $\phi > 1$, FedLAMA synchronizes a subset of layers less frequently, which reduces their communication cost.
Since the enlarged interval is always the fixed multiple $\phi \tau'$ of the base interval, it is guaranteed that the whole set of model parameters is fully synchronized after every $\phi \tau'$ iterations.
Because of the layers with the base aggregation interval $\tau'$, the total model discrepancy of FedLAMA after $\phi \tau'$ iterations is always smaller than that of FedAvg with an aggregation interval of $\phi \tau'$.
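A small calculation makes this trade-off concrete. The numbers below are illustrative: we assume a uniform per-parameter communication cost and take the fraction of parameters that keep the base interval to be $0.2$, roughly the value suggested by the cross point in Figure \ref{fig:cross}.a):

```python
tau_base, phi = 2, 4
period = phi * tau_base                 # full-synchronization period

# iterations (1-based) at which each interval triggers a synchronization
sync_fast = [k for k in range(1, period + 1) if k % tau_base == 0]
sync_slow = [k for k in range(1, period + 1) if k % (phi * tau_base) == 0]

# slow layers only ever synchronize together with the fast ones, so the
# whole model is fully synchronized every phi * tau' iterations
assert set(sync_slow) <= set(sync_fast)

# bandwidth over one period, relative to FedAvg with interval tau':
# a fraction frac_fast of parameters keeps interval tau', the rest
# is synchronized only once per period
frac_fast = 0.2
relative_traffic = frac_fast + (1.0 - frac_fast) / phi   # 0.4 here
```

With these assumed values, the enlarged-interval layers are synchronized exactly once per period, the schedules coincide at multiples of $\phi\tau'$, and the total traffic drops to $40\%$ of the FedAvg baseline.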
\section{Convergence Analysis} \label{sec:analysis}
\subsection {Preliminaries}
\textbf{Notations} --
All vectors in this paper are column vectors.
$\mathbf{x} \in \mathbb{R}^d$ denotes the parameters of one local model and $m$ is the number of clients.
The stochastic gradient computed from a single training data point $\boldsymbol{\xi}$ is denoted by $g(\mathbf{x},\boldsymbol{\xi})$.
For convenience, we use $g(\mathbf{x})$ instead.
The full batch gradient is denoted by $\nabla F(\mathbf{x})$.
We use $\|\cdot\|$ and $\| \cdot \|_{op}$ to denote $l2$ norm and matrix operator norm, respectively.
\textbf{Assumptions} --
We analyze the convergence rate of FedLAMA under the following assumptions.
\begin{enumerate}
\item {(Smoothness). Each local objective function is L-smooth, that is, $\| \nabla F_i (\mathbf{x}) - \nabla F_i (\mathbf{y}) \| \leq L \| \mathbf{x} - \mathbf{y} \|, \forall i \in \{ 1, \cdots, m \}$.}
\item {(Unbiased Gradient). The stochastic gradient at each client is an unbiased estimator of the local full-batch gradient: $\mathop{\mathbb{E}}_{\xi}[ g_i(\mathbf{x}, \xi)] = \nabla F_i(\mathbf{x})$.}
\item {(Bounded Variance). The stochastic gradient at each client has bounded variance: $\mathop{\mathbb{E}}_\xi[\| g_i(\mathbf{x}, \xi) - \nabla F_i (\mathbf{x}) \|^2 \leq \sigma^2], \forall i \in \{ 1, \cdots, m \}, \sigma^2 \geq 0$.}
\item {(Bounded Dissimilarity). For any sets of weights $\{p_i \geq 0\}_{i=1}^{m}, \sum_{i=1}^{m} p_i = 1$, there exist constants $\beta^2 \geq 1$ and $\kappa^2 \geq 0$ such that $\sum_{i=1}^{m} p_i \| \nabla F_i(\mathbf{x}) \|^2 \leq \beta^2 \| \sum_{i=1}^{m} p_i \nabla F_i (\mathbf{x}) \|^2 + \kappa^2$. If local objective functions are identical to each other, $\beta^2 = 1$ and $\kappa^2 = 0$.}
\end{enumerate}
\subsection {Analysis}
We begin with showing two key lemmas.
All the proofs can be found in Appendix.
\begin{lemma}
\label{lemma:framework}
(Framework) Under Assumption $1 \sim 3$, if the learning rate satisfies $\eta \leq \frac{1}{2L}$, FedLAMA ensures
\begin{equation}
\begin{split}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] &\leq \frac{2}{\eta K} \mathop{\mathbb{E}}\left[F(\mathbf{u}_1) - F(\mathbf{u}_{*}) \right] + 2 \eta L \sigma^2 \sum_{i=1}^{m} (p_i)^2 \\
& \quad + \frac{L^2}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right].\\
\end{split}
\end{equation}
\end{lemma}
\begin{lemma}
\label{lemma:discrepancy}
(Model Discrepancy) Under Assumption $1 \sim 4$, FedLAMA ensures
\begin{equation}
\begin{split}
\frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right] & \leq \frac{2 \eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \kappa^2}{L^2 (1 - A)} \\
& \quad + \frac{A \beta^2}{K L^2 (1 - A)} \sum_{k=1}^{K} \mathop{\mathbb{E}} \left[ \left\| \nabla F (\mathbf{u}_k) \right\|^2 \right], \\
\end{split}
\end{equation}
where $A = 4 \eta^2 (\tau_{max} - 1)^2 L^2$ and $\tau_{max}$ is the largest averaging interval across all the layers.
\end{lemma}
Based on Lemma \ref{lemma:framework} and \ref{lemma:discrepancy}, we analyze the convergence rate of FedLAMA as follows.
\begin{theorem}
\label{theorem:fedlama}
Suppose all $m$ local models are initialized to the same point $\mathbf{u}_1$. Under Assumption $1 \sim 4$, if FedLAMA runs for $K$ iterations and the learning rate satisfies $\eta \leq \min \left\{\frac{1}{2L}, \frac{1}{L \sqrt{2\tau_{max} (\tau_{max} - 1) (2\beta^2 + 1)}} \right\}$, the average-squared gradient norm of $\mathbf{u}_k$ is bounded as follows
\begin{align}
\mathop{\mathbb{E}}\left[\frac{1}{K}\sum_{k=1}^{K} \| \nabla F(\mathbf{u}_k) \|^2\right] & \leq \frac{4}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 4\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \label{eq:theorem} \\
& \quad\quad + 3 \eta^2 (\tau_{max} - 1) L^2 \sigma^2 + 6 \eta^2 \tau_{max} (\tau_{max} - 1) L^2 \kappa^2 \nonumber,
\end{align}
where $\mathbf{u}_{*}$ indicates a local minimum and $\tau_{max}$ is the largest averaging interval across all the layers.
\end{theorem}
\textbf{Remark 1.} (Linear Speedup) With a sufficiently small diminishing learning rate and a large number of training iterations, FedLAMA achieves linear speedup.
If the learning rate is $\eta = \frac{\sqrt{m}}{\sqrt{K}}$, we have
\begin{align}
\mathop{\mathbb{E}}\left[\frac{1}{K}\sum_{k=1}^{K} \| \nabla F(\mathbf{u}_k) \|^2\right] & \leq \mathcal{O}\left(\frac{1}{\sqrt{mK}}\right) + \mathcal{O}\left(\frac{\sqrt{m}}{\sqrt{K}}\right) + \mathcal{O}\left(\frac{m}{K}\right) \label{eq:complexity}
\end{align}
If $K > m^3$, the first term on the right-hand side becomes dominant and it achieves linear speedup.
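The threshold $K > m^3$ can be checked directly: it is exactly the condition under which the $\mathcal{O}(m/K)$ term of (\ref{eq:complexity}) falls below the $\mathcal{O}(1/\sqrt{mK})$ term, since $m/K < 1/\sqrt{mK} \iff K > m^3$. A quick numerical check, with unit constants assumed for all terms:

```python
import math

def terms(m, K):
    """The three complexity terms, with all hidden constants set to one."""
    return 1 / math.sqrt(m * K), math.sqrt(m) / math.sqrt(K), m / K

m = 8
t1_small, _, t3_small = terms(m, m ** 3 // 2)   # K below m^3
t1_large, _, t3_large = terms(m, 2 * m ** 3)    # K above m^3
assert t3_small > t1_small   # O(m/K) still exceeds O(1/sqrt(mK))
assert t3_large < t1_large   # beyond K = m^3 it falls below
```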
\textbf{Remark 2.} (Impact of Interval Increase Factor $\phi$) The worst-case model discrepancy depends on the largest averaging interval across all the layers, $\tau_{max} = \phi \tau'$.
The larger the interval increase factor $\phi$, the larger the model discrepancy terms in (\ref{eq:theorem}).
In the meantime, as $\phi$ increases, the communication frequency at the selected layers is proportionally reduced.
So, $\phi$ should be appropriately tuned to effectively reduce the communication cost while not much increasing the model discrepancy.
\subsection{Convergence Analysis}
Herein, we provide the proofs of the lemmas and theorem shown in Section \ref{sec:analysis}.
\subsubsection {Preliminaries}
FedLAMA periodically chooses a few layers that will be less frequently synchronized.
We call these layers Least Critical Layers (LCL) for short.
\textbf{Notations} --
All vectors in this paper are column vectors.
$\mathbf{x} \in \mathbb{R}^d$ denotes the parameters of one local model and $m$ is the number of workers.
The stochastic gradient computed from a single training data point $\boldsymbol{\xi}$ is denoted by $g(\mathbf{x},\boldsymbol{\xi})$.
For convenience, we use $g(\mathbf{x})$ instead.
The full batch gradient is denoted by $\nabla F(\mathbf{x})$.
We use $\|\cdot\|$ and $\| \cdot \|_{op}$ to denote $l2$ norm and matrix operator norm, respectively.
\textbf{Objective Function} --
In this paper, we consider federated optimization problems as follows.
\begin{align}
\underset{\mathbf{x} \in \mathbb{R}^d}{\min}\left[F(\mathbf{x}) := \sum_{i=1}^{m} p_i F_i(\mathbf{x}) \right] \label{eq:objective_appendix},
\end{align}
where $p_i = n_i / n$ is the ratio of local data to the total dataset, and $F_i(\mathbf{x}) = \frac{1}{n_i} \sum_{\xi \in \mathcal{D}_i} f_i(\mathbf{x}, \xi)$
is the local objective function of client $i$.
$n$ is the global dataset size and $n_i$ is the local dataset size.
Note that, by definition, $\sum_{i=1}^{m} p_i = 1$.
\textbf{Averaging Matrix} --
We define a time-varying averaging matrix $\mathbf{W}_k \in \mathbb{R}^{dm \times dm}$ as follows.
\begin{equation}
\label{eq:w}
\mathbf{W}_k =
\begin{cases}
\mathbf{P}, \hfill \hspace{0.5cm} \textrm{ if } k \textrm{ mod } \tau_{min} \textrm{ is } 0 \\
\mathbf{I}, \hfill \textrm{ otherwise }
\end{cases}
\end{equation}
$\mathbf{P}$ is also a time-varying averaging matrix and $\mathbf{I}$ is an identity matrix.
$\mathbf{P}$ consists of $m \times m$ blocks each of which is a diagonal matrix of size $d \times d$.
\begin{equation}
\label{eq:p}
\mathbf{P} =
\begin{cases}
\mathbf{P}^1, \textrm{ for } m \textrm{ diagonal blocks} \\
\mathbf{P}^0, \textrm{ for all the other blocks}
\end{cases}
\end{equation}
where $\mathbf{P}^1$ is a $d \times d$ diagonal matrix that has $1$ for the diagonal elements that correspond to the LCL parameters and $1 / m$ for all the other diagonal elements.
Likewise, $\mathbf{P}^0$ is a $d \times d$ diagonal matrix that has $0$ for the diagonal elements that correspond to the LCL parameters and $1 / m$ for all the other diagonal elements.
Note that, regardless of which layers are chosen as LCL, $\mathbf{P}$ is a symmetric matrix of size $dm \times dm$.
The averaging matrix $\mathbf{P}$ has the following properties:
\begin{enumerate}
\item {$\mathbf{P}\mathbf{1}_{dm} = \mathbf{1}_{dm}$.}
\item {The product of any different two $\mathbf{P}$ matrices consists only of diagonal block matrices because all the blocks in any $\mathbf{P}$ are diagonal.}
\item {The product of any different two $\mathbf{P}$ matrices is symmetric since all possible $\mathbf{P}$ matrices are symmetric and the diagonal blocks within each $\mathbf{P}$ are all the same.}
\item {The product of any $\mathbf{P}_i$ and $\mathbf{P}_j$ contains all $\frac{1}{m}$ elements in both $\mathbf{P}_i$ and $\mathbf{P}_j$.}
\end{enumerate}
We also define a full-averaging matrix $\mathbf{J}$ as follows.
\begin{equation}
\begin{split}
& \mathbf{J} = \frac{1}{m}\mathbf{1}_{m}\mathbf{1}_{m}^{\top} \otimes \mathbf{I}_{d \times d},\\
\end{split}
\end{equation}
where $\otimes$ indicates Kronecker product, $\mathbf{1}_{m}\in\mathbb{R}^{m}$ is a vector of $1$s, and $\mathbf{I}_{d \times d}\in\mathbb{R}^{d\times d}$ is an identity matrix.
Note that, due to the first property of $\mathbf{P}$, the product of $\mathbf{J}$ and any $\mathbf{P}$ is always $\mathbf{J}$.
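These properties can be verified numerically on a small instance (here $m = 2$ clients and $d = 3$ parameters, with the first parameter assumed to belong to an LCL; the construction of $\mathbf{P}$ and $\mathbf{J}$ follows the block definitions above directly):

```python
import numpy as np

m, d = 2, 3
lcl = np.array([True, False, False])        # which parameters are LCL

# diagonal blocks P^1 and P^0 of the averaging matrix P
P1 = np.diag(np.where(lcl, 1.0, 1.0 / m))
P0 = np.diag(np.where(lcl, 0.0, 1.0 / m))
P = np.block([[P1 if i == j else P0 for j in range(m)] for i in range(m)])

# full-averaging matrix J = (1/m) 11^T kron I_d
J = np.kron(np.ones((m, m)) / m, np.eye(d))

assert np.allclose(P @ np.ones(d * m), np.ones(d * m))   # property 1
assert np.allclose(P, P.T)                               # P is symmetric
assert np.allclose(J @ P, J)                             # J P = J
```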
\textbf{Vectorization} --
We define a vectorized form of $m$ local model parameters $\mathbf{x}_k \in \mathbb{R}^{dm}$ and its gradients $\mathbf{g}_k \in \mathbb{R}^{dm}$ as follows
\begin{equation}
\label{eq:vectorized}
\begin{split}
& \mathbf{x}_k = vec\left\{ \mathbf{x}_k^1, \mathbf{x}_k^2, \cdots, \mathbf{x}_k^m \right\} \\
& \mathbf{g}_k = vec\left\{ g_1(\mathbf{x}_k^1), g_2(\mathbf{x}_k^2), \cdots, g_m(\mathbf{x}_k^m) \right\} \\
& \mathbf{f}_k = vec\left\{ \nabla F_1(\mathbf{x}_k^1), \nabla F_2(\mathbf{x}_k^2), \cdots, \nabla F_m(\mathbf{x}_k^m) \right\}.
\end{split}
\end{equation}
Using the above notations, the weighted average of the distance between the averaged model $\mathbf{u}_k$ and the local models $\mathbf{x}_{k}^{i}$ can be written as follows.
\begin{equation}
\sum_{i=1}^{m} p_i \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 = \left\| \mathbf{J} \mathbf{x}_{k} - \mathbf{x}_{k} \right\|^2 = \left\| (\mathbf{J} - \mathbf{I})\mathbf{x}_{k} \right\|^2
\end{equation}
We also define the following additional vectorized forms of the weighted model parameters and gradients for convenience.
\begin{equation}
\label{eq:weighted}
\begin{split}
\mathbf{\hat{x}}_k &= \text{vec} \left\{\frac{\mathbf{x}_{k}^{1}}{\sqrt{p_1}}, \frac{\mathbf{x}_{k}^{2}}{\sqrt{p_2}}, \cdots, \frac{\mathbf{x}_{k}^{m}}{\sqrt{p_m}} \right\} \\
\mathbf{\hat{g}}_k &= \text{vec} \left\{ \frac{g_{1}(\mathbf{x}_{k}^{1})}{\sqrt{p_1}}, \frac{g_{2}(\mathbf{x}_{k}^{2})}{\sqrt{p_2}}, \cdots, \frac{g_{m}(\mathbf{x}_{k}^{m})}{\sqrt{p_m}} \right\} \\
\mathbf{\hat{f}}_k &= \text{vec} \left\{ \frac{\nabla F_{1}(\mathbf{x}_{k}^{1})}{\sqrt{p_1}}, \frac{\nabla F_{2}(\mathbf{x}_{k}^{2})}{\sqrt{p_2}}, \cdots, \frac{\nabla F_{m}(\mathbf{x}_{k}^{m})}{\sqrt{p_m}} \right\}\\
\end{split}
\end{equation}
\textbf{Assumptions} --
We analyze the convergence rate of FedLAMA under the following assumptions.
\begin{enumerate}
\item {(Smoothness). Each local objective function is L-smooth, that is, $\| \nabla F_i (\mathbf{x}) - \nabla F_i (\mathbf{y}) \| \leq L \| \mathbf{x} - \mathbf{y} \|, \forall i \in \{ 1, \cdots, m \}$.}
\item {(Unbiased Gradient). The stochastic gradient at each client is an unbiased estimator of the local full-batch gradient: $\mathop{\mathbb{E}}_{\xi}[ g_i(\mathbf{x}, \xi)] = \nabla F_i(\mathbf{x})$.}
\item {(Bounded Variance). The stochastic gradient at each client has bounded variance: $\mathop{\mathbb{E}}_\xi[\| g_i(\mathbf{x}, \xi) - \nabla F_i (\mathbf{x}) \|^2 \leq \sigma^2], \forall i \in \{ 1, \cdots, m \}, \sigma^2 \geq 0$.}
\item {(Bounded Dissimilarity). For any sets of weights $\{p_i \geq 0\}_{i=1}^{m}, \sum_{i=1}^{m} p_i = 1$, there exist constants $\beta^2 \geq 1$ and $\kappa^2 \geq 0$ such that $\sum_{i=1}^{m} p_i \| \nabla F_i(\mathbf{x}) \|^2 \leq \beta^2 \| \sum_{i=1}^{m} p_i \nabla F_i (\mathbf{x}) \|^2 + \kappa^2$. If local objective functions are identical to each other, $\beta^2 = 1$ and $\kappa^2 = 0$.}
\end{enumerate}
\subsubsection{Proofs}
\textbf{Theorem 4.1.} \textit{Suppose all $m$ local models are initialized to the same point $\mathbf{u}_1$. Under Assumption $1 \sim 4$, if FedLAMA runs for $K$ iterations and the learning rate satisfies $\eta \leq \min \left\{\frac{1}{2L}, \frac{1}{L \sqrt{2\tau_{max} (\tau_{max} - 1) (2\beta^2 + 1)}} \right\}$, the average-squared gradient norm of $\mathbf{u}_k$ is bounded as follows}
\begin{align}
\mathop{\mathbb{E}}\left[\frac{1}{K}\sum_{k=1}^{K} \| \nabla F(\mathbf{u}_k) \|^2\right] & \leq \frac{4}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 4\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + 3 \eta^2 (\tau_{max} - 1) L^2 \sigma^2 + 6 \eta^2 \tau_{max} (\tau_{max} - 1) L^2 \kappa^2,
\end{align}
where $\mathbf{u}_{*}$ indicates a local minimum.
\begin{proof}
Based on Lemma \ref{lemma:framework} and \ref{lemma:discrepancy}, we have
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] & \leq \frac{2}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 2\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
&\quad + L^2 \left( \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \beta^2}{K L^2 (1 - A)} \sum_{k=1}^{K} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 \right] + \frac{A \kappa^2}{L^2 (1 - A)} \right). \nonumber
\end{align}
After re-writing the left-hand side and a minor rearrangement, we have
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] & \leq \frac{2}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 2\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + \frac{1}{K} \sum_{k = 1}^{K} \frac{A \beta^2}{1 - A} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 \right] \nonumber \\
& \quad\quad + L^2 \left( \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \kappa^2}{L^2 (1 - A)} \right). \nonumber
\end{align}
By moving the third term on the right-hand side to the left-hand side, we have
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \left(1 - \frac{A \beta^2}{1 - A} \right) \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] & \leq \frac{2}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 2\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + L^2 \left( \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \kappa^2}{L^2 (1 - A)} \right). \label{eq:niid_almost}
\end{align}
If $A \leq \frac{1}{2 \beta^2 + 1}$, then $\frac{A \beta^2}{1 - A} \leq \frac{1}{2}$.
Therefore, (\ref{eq:niid_almost}) can be simplified as follows.
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] & \leq \frac{4}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 4\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + 2 L^2 \left( \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} \right) + 2 \frac{A \kappa^2}{1 - A}. \nonumber
\end{align}
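The simplification above uses the implication $A \leq \frac{1}{2 \beta^2 + 1} \Rightarrow \frac{A \beta^2}{1 - A} \leq \frac{1}{2}$, which holds because $1 - A \geq \frac{2\beta^2}{2\beta^2 + 1}$. A brute-force numerical sweep over the admissible range confirms the bound:

```python
import numpy as np

# Assumption 4 requires beta^2 >= 1; sweep beta^2 and all admissible A
for beta2 in np.linspace(1.0, 50.0, 200):
    A_max = 1.0 / (2.0 * beta2 + 1.0)
    for A in np.linspace(0.0, A_max, 100):
        assert A * beta2 / (1.0 - A) <= 0.5 + 1e-12
```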
The learning rate condition $A \leq \frac{1}{2 \beta^2 + 1}$ also ensures that $\frac{1}{1 - A} \leq 1 + \frac{1}{2 \beta^2}$.
Based on Assumption 4, $\beta^2 \geq 1$, so $\frac{1}{2 \beta^2} \leq \frac{1}{2}$ and thus $\frac{1}{1 - A} \leq \frac{3}{2}$.
Therefore, we have
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] & \leq \frac{4}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 4\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + 3 L^2 \eta^2 (\tau_{max} - 1) \sigma^2 + 6 \eta^2 \tau_{max} (\tau_{max} - 1) L^2 \kappa^2 \nonumber \\
& = \frac{4}{\eta K}\left( \mathop{\mathbb{E}}\left[ F(\mathbf{u}_{1}) - F(\mathbf{u}_{*}) \right] \right) + 4\eta \sum_{i=1}^{m} p_i^2 L \sigma^2 \nonumber \\
& \quad\quad + 3 \eta^2 (\tau_{max} - 1) L^2 \sigma^2 + 6 \eta^2 \tau_{max} (\tau_{max} - 1) L^2 \kappa^2. \nonumber
\end{align}
We complete the proof.
\end{proof}
Here we provide the proofs of the two key lemmas.
\textbf{Lemma 4.1.} \textit{(Framework) Under Assumption $1 \sim 3$, if the learning rate satisfies $\eta \leq \frac{1}{2L}$, FedLAMA ensures}
\begin{equation}
\begin{split}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] &\leq \frac{2}{\eta K} \mathop{\mathbb{E}}\left[F(\mathbf{u}_1) - F(\mathbf{u}_{*}) \right] + 2 \eta L \sigma^2 \sum_{i=1}^{m} (p_i)^2 \\
& \quad + \frac{L^2}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right].\\
\end{split}
\end{equation}
\begin{proof}
Based on Assumption 1, we have
\begin{align}
\mathop{\mathbb{E}}\left[ F(\mathbf{u}_{k+1}) - F(\mathbf{u}_{k}) \right] &\leq - \eta \underset{T_1}{\underbrace{ \mathop{\mathbb{E}} \left[ \langle \nabla F(\mathbf{u}_{k}), \sum_{i=1}^{m} p_i g_i(\mathbf{x}_{k}^{i}) \rangle \right] }} + \frac{\eta^2 L}{2} \underset{T_2}{\underbrace{ \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i g_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] }} \label{eq:framework}
\end{align}
First, $T_1$ can be rewritten as follows.
\begin{align}
T_1 & = \mathop{\mathbb{E}} \left[ \langle \nabla F(\mathbf{u}_{k}), \sum_{i=1}^{m} p_i \left( g_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{x}_{k}^{i}) \right) \rangle \right] + \mathop{\mathbb{E}} \left[ \langle \nabla F(\mathbf{u}_{k}), \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \rangle \right] \nonumber \\
& = \mathop{\mathbb{E}} \left[ \langle \nabla F(\mathbf{u}_{k}), \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \rangle \right] \nonumber \\
& = \frac{1}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 + \frac{1}{2} \mathop{\mathbb{E}} \left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] - \frac{1}{2} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) - \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right], \label{eq:t1}
\end{align}
where the last equality holds based on a basic equality: $2\mathbf{a}^{\top}\mathbf{b} = \| \mathbf{a} \|^2 + \| \mathbf{b} \|^2 - \| \mathbf{a} - \mathbf{b} \|^2$ .
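The basic equality used in this last step can be sanity-checked numerically for arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=5), rng.normal(size=5)

# 2 a^T b = ||a||^2 + ||b||^2 - ||a - b||^2
lhs = 2.0 * a @ b
rhs = a @ a + b @ b - (a - b) @ (a - b)
assert np.isclose(lhs, rhs)
```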
Then, $T_2$ can be bounded as follows.
\begin{align}
T_2 & = \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \left( g_i(\mathbf{x}_{k}^{i}) - \mathop{\mathbb{E}} \left[ g_i(\mathbf{x}_{k}^{i}) \right] \right) + \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ g_i(\mathbf{x}_{k}^{i}) \right] \right\|^2 \right] \nonumber \\
& = \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \left( g_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{x}_{k}^{i}) \right) + \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
& \leq 2 \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \left( g_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{x}_{k}^{i}) \right) \right\|^2 \right] + 2 \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
& = 2 \sum_{i=1}^{m} p_i^2 \mathop{\mathbb{E}}\left[ \left\| g_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] + 2 \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
& \leq 2 \sigma^2 \sum_{i=1}^{m} p_i^2 + 2 \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right], \label{eq:t2}
\end{align}
where the last equality holds because $g_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{x}_{k}^{i})$ has $0$ mean and is independent across $i$, and the last inequality follows from Assumption 3.
By plugging in (\ref{eq:t1}) and (\ref{eq:t2}) into (\ref{eq:framework}), we have the following.
\begin{align}
\mathop{\mathbb{E}}\left[ F(\mathbf{u}_{k+1}) - F(\mathbf{u}_{k}) \right] &\leq - \frac{\eta}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 - \frac{\eta}{2} \mathop{\mathbb{E}} \left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
& \quad + \frac{\eta}{2} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) - \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] + \eta^2 L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber \\
& \quad + \eta^2 L \mathop{\mathbb{E}}\left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
&= - \frac{\eta}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 - \frac{\eta}{2}(1 - 2 \eta L) \mathop{\mathbb{E}} \left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber \\
& \quad + \frac{\eta}{2} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) - \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] + \eta^2 L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber
\end{align}
If $\eta \leq \frac{1}{2L}$, it follows
\begin{align}
\frac{\mathop{\mathbb{E}}\left[ F(\mathbf{u}_{k+1}) - F(\mathbf{u}_{k}) \right]}{\eta} & \leq - \frac{1}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 + \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber\\
& \quad + \frac{1}{2} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) - \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber\\
& \leq - \frac{1}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 + \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 \label{eq:Jensen} \\
& \quad + \frac{1}{2} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) - \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \nonumber\\
& \leq - \frac{1}{2} \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 + \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 + \frac{L^2}{2} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber,
\end{align}
where (\ref{eq:Jensen}) holds based on the convexity of $l2$ norm and Jensen's inequality.
By taking expectation and averaging across $K$ iterations, we have
\begin{align}
\frac{1}{K} \sum_{k=1}^{K} \frac{\mathop{\mathbb{E}}\left[ F(\mathbf{u}_{k+1}) - F(\mathbf{u}_{k}) \right]}{\eta} & \leq -\frac{1}{2K} \sum_{k=1}^{K} \left\| \nabla F(\mathbf{u}_k) \right\|^2 + \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber \\
& \quad \quad + \frac{L^2}{2K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right]. \nonumber
\end{align}
After a minor rearrangement, we have a telescoping sum as follows.
\begin{equation}
\begin{split}
\frac{1}{K} \sum_{k=1}^{K} \mathop{\mathbb{E}}\left[ \left\| \nabla F(\mathbf{u}_k) \right\|^2 \right] &\leq \frac{2}{\eta K} \mathop{\mathbb{E}}\left[F(\mathbf{u}_1) - F(\mathbf{u}_{K+1}) \right] + 2 \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber \\
& \quad + \frac{L^2}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
&\leq \frac{2}{\eta K} \mathop{\mathbb{E}}\left[F(\mathbf{u}_1) - F(\mathbf{u}_{*}) \right] + 2 \eta L \sigma^2 \sum_{i=1}^{m} p_i^2 \nonumber \\
& \quad + \frac{L^2}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right], \nonumber
\end{split}
\end{equation}
where $\mathbf{u}_{*}$ denotes a global minimizer of $F$.
Here, we complete the proof.
\end{proof}
\textbf{Lemma 4.2.} \textit{(Model Discrepancy) Under Assumptions 1--4, FedLAMA ensures}
\begin{equation}
\begin{split}
\frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 \right] & \leq \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \kappa^2}{L^2 (1 - A)} \\
& \quad + \frac{A \beta^2}{K L^2 (1 - A)} \sum_{k=1}^{K} \mathop{\mathbb{E}} \left[ \left\| \nabla F (\mathbf{u}_k) \right\|^2 \right], \\
\end{split}
\end{equation}
\textit{where $A = 2 \eta^2 \tau_{max} (\tau_{max} - 1) L^2$ and $\tau_{max}$ is the largest averaging interval across all the layers.}
\begin{proof}
We begin by rewriting the weighted average of the squared distance using the vectorized form of the local models as follows.
\begin{align}
\sum_{i=1}^{m} p_i \left\| \mathbf{u}_k - \mathbf{x}_{k}^{i} \right\|^2 = \left\| \mathbf{J} \mathbf{\hat{x}}_{k} - \mathbf{\hat{x}}_{k} \right\|^2 = \left\| (\mathbf{J} - \mathbf{I})\mathbf{\hat{x}}_{k} \right\|^2.
\end{align}
Then, according to the parameter update rule, we have
\begin{align}
(\mathbf{J} - \mathbf{I})\mathbf{\hat{x}}_{k} & = (\mathbf{J} - \mathbf{I})\mathbf{W}_{k-1}(\mathbf{\hat{x}}_{k-1} - \eta \mathbf{\hat{g}}_{k-1})\\
& = (\mathbf{J} - \mathbf{I}) \mathbf{W}_{k-1} \mathbf{\hat{x}}_{k-1} - (\mathbf{J} - \mathbf{W}_{k-1}) \eta \mathbf{\hat{g}}_{k-1},
\end{align}
where the second equality holds because $\mathbf{J} \mathbf{W} = \mathbf{J}$ and $\mathbf{I} \mathbf{W} = \mathbf{W}$.
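The identities $\mathbf{J}\mathbf{W} = \mathbf{J}$ and $\mathbf{I}\mathbf{W} = \mathbf{W}$ can be checked on a tiny hand-built instance. The Python sketch below (an illustration with $m=2$ workers and $d=2$ layers, not the paper's actual matrices) builds $\mathbf{J}$ as the full per-layer average and $\mathbf{W}$ as a partial average that touches only layer $0$:

```python
# Toy setup: m = 2 workers, d = 2 layers, coordinate index = w*d + l.
# J averages every layer across workers; W averages only layer 0
# (a layer-wise partial average), leaving layer 1 untouched.
m, d = 2, 2
n = m * d

def entry_J(w, l, w2, l2):
    return 1.0 / m if l == l2 else 0.0

def entry_W(w, l, w2, l2):
    if l == l2 == 0:                     # layer 0: averaged across workers
        return 1.0 / m
    if l == l2 == 1:                     # layer 1: identity block
        return 1.0 if w == w2 else 0.0
    return 0.0

def build(entry):
    return [[entry(i // d, i % d, j // d, j % d) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(n) for j in range(n))

J = build(entry_J)
W = build(entry_W)
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

assert close(matmul(J, W), J)   # J W = J
assert close(matmul(I, W), W)   # I W = W
```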
Then, expanding the expression of $\mathbf{\hat{x}}_{k-1}$, we have
\begin{align}
(\mathbf{J} - \mathbf{I}) \mathbf{\hat{x}}_{k} & = (\mathbf{J} - \mathbf{I}) \mathbf{W}_{k-1} (\mathbf{W}_{k-2} (\mathbf{\hat{x}}_{k-2} - \eta \mathbf{\hat{g}}_{k-2})) - (\mathbf{J} - \mathbf{W}_{k-1}) \eta \mathbf{\hat{g}}_{k-1} \nonumber \\
& = (\mathbf{J} - \mathbf{I}) \mathbf{W}_{k-1} \mathbf{W}_{k-2} \mathbf{\hat{x}}_{k-2} - (\mathbf{J} - \mathbf{W}_{k-1} \mathbf{W}_{k-2}) \eta \mathbf{\hat{g}}_{k-2} - (\mathbf{J} - \mathbf{W}_{k-1}) \eta \mathbf{\hat{g}}_{k-1}. \nonumber
\end{align}
Repeating the same procedure for $\mathbf{\hat{x}}_{k-2}$, $\mathbf{\hat{x}}_{k-3}$, $\cdots$, $\mathbf{\hat{x}}_{2}$, we have
\begin{align}
(\mathbf{J} - \mathbf{I}) \mathbf{\hat{x}}_{k} & = (\mathbf{J} - \mathbf{I}) \prod_{s=1}^{k-1} \mathbf{W}_{s} \mathbf{\hat{x}}_{1} - \eta \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1}\mathbf{W}_{l}) \mathbf{\hat{g}}_{s} \nonumber \\
& = - \eta \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) \mathbf{\hat{g}}_{s}, \label{eq:niid_distance}
\end{align}
where (\ref{eq:niid_distance}) holds because $\mathbf{x}_{1}^i$ is the same across all the workers and thus $(\mathbf{J} - \mathbf{I})\mathbf{\hat{x}}_{1} = 0$.
Based on (\ref{eq:niid_distance}), we have
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad = \frac{1}{K} \sum_{k = 1}^{K} \left( \mathop{\mathbb{E}} \left[ \left\| (\mathbf{J} - \mathbf{I}) \mathbf{\hat{x}}_{k} \right\|^2 \right] \right) \nonumber \\
& \quad\quad = \frac{1}{K} \sum_{k = 1}^{K} \left( \eta^2 \mathop{\mathbb{E}} \left[ \left\| \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) \mathbf{\hat{g}}_{s} \right\|^2 \right] \right) \nonumber \\
& \quad\quad = \frac{1}{K} \sum_{k = 1}^{K} \left( \eta^2 \mathop{\mathbb{E}} \left[ \left\| \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) ( \mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s} ) + \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) \mathbf{\hat{f}}_{s} \right\|^2 \right] \right) \nonumber \\
& \quad\quad \leq \frac{2 \eta^2}{K} \left( \underset{T_3}{\underbrace{ \sum_{k = 1}^{K} \mathop{\mathbb{E}} \left[ \left\| \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s} ) \right\|^2 \right] }} + \underset{T_4}{\underbrace{ \sum_{k = 1}^{K} \mathop{\mathbb{E}} \left[ \left\| \sum_{s=1}^{k-1} ( \mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l} ) \mathbf{\hat{f}}_{s} \right\|^2 \right] }} \right) \label{eq:niid_t3t4}
\end{align}
where (\ref{eq:niid_t3t4}) holds by the convexity of the $\ell_2$ norm and Jensen's inequality.
Now, we focus on bounding $T_3$ and $T_4$ separately.
\textbf{Bounding $T_3$}
\begin{align}
& \sum_{k = 1}^{K} \mathop{\mathbb{E}} \left[ \left\| \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l})(\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \right] \nonumber \\
& \quad\quad = \sum_{k = 1}^{K} \sum_{s = 1}^{k - 1} \mathop{\mathbb{E}} \left[ \left\| (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l})(\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \right] \label{eq:niid_t3_jensen} \\
& \quad\quad \leq \sum_{k = 1}^{K} \sum_{s = 1}^{k - 1} \mathop{\mathbb{E}} \left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) \right\|_{op}^2 \right], \label{eq:niid_t3_op}
\end{align}
where (\ref{eq:niid_t3_jensen}) holds because $\hat{\mathbf{g}}_s - \hat{\mathbf{f}}_s$ has zero mean and is independent across $s$, and (\ref{eq:niid_t3_op}) holds based on Lemma \ref{lemma:operator}.
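The step (\ref{eq:niid_t3_jensen}) relies on the cross terms vanishing for independent zero-mean noise: $\mathop{\mathbb{E}}[\|\sum_s \xi_s\|^2] = \sum_s \mathop{\mathbb{E}}[\|\xi_s\|^2]$. A small Monte Carlo illustration (toy uniform noise, not the actual gradient noise):

```python
import random

random.seed(1)
n_terms, dim, trials = 4, 3, 50000

def sample_xi():
    # zero-mean toy noise: uniform on [-1, 1] per coordinate, Var = 1/3
    return [random.uniform(-1.0, 1.0) for _ in range(dim)]

acc = 0.0
for _ in range(trials):
    xis = [sample_xi() for _ in range(n_terms)]
    total = [sum(x[j] for x in xis) for j in range(dim)]
    acc += sum(t * t for t in total)

lhs = acc / trials              # empirical E||sum_s xi_s||^2
rhs = n_terms * dim / 3.0       # sum_s E||xi_s||^2 = n_terms * dim * Var
assert abs(lhs - rhs) < 0.1, (lhs, rhs)
```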
Without loss of generality, we replace $k$ with $a\tau_{max} + b$, where $a$ is the communication round index and $b$ is the iteration index within each communication round.
Then, we have
\begin{align}
& \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}} \left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{k-1} \mathbf{W}_{l}) \right\|_{op}^2 \right] \nonumber\\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = 1}^{a\tau_{max}} \mathop{\mathbb{E}}\left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1} \mathbf{W}_{l}) \right\|_{op}^2\right] \nonumber \\
& \quad\quad\quad\quad + \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1} \mathbf{W}_{l}) \right\|_{op}^2\right] \nonumber\\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1} \mathbf{W}_{l}) \right\|_{op}^2\right] \label{eq:niid_t3_zeroout} \\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[ \left\| (\mathbf{\hat{g}}_{s} - \mathbf{\hat{f}}_{s}) \right\|^2 \right] \label{eq:niid_t3_opout} \\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}}\left[ \left\| (g_{i}(\mathbf{x}_{s}^{i}) - \nabla F_i (\mathbf{x}_{s}^{i})) \right\|^2 \right] \nonumber\\
& \quad\quad \leq \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \sum_{i = 1}^{m} p_i \sigma^2 \label{eq:niid_t3_sigma} \\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} (b - 1) \sigma^2 \nonumber\\
& \quad\quad = \sum_{a = 0}^{K / \tau_{max} - 1} \frac{\tau_{max} (\tau_{max} - 1)}{2} \sigma^2 \nonumber\\
& \quad\quad \leq K\frac{(\tau_{max} - 1)}{2} \sigma^2, \label{eq:t3_bound}
\end{align}
where (\ref{eq:niid_t3_zeroout}) holds because $\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1} \mathbf{W}_{l}$ becomes $0$ when $s \leq a\tau_{max}$.
Recall that FedLAMA guarantees that all the parameters are synchronized at least once within every $\tau_{max}$ iterations.
(\ref{eq:niid_t3_opout}) holds based on Lemma \ref{lemma:zeroout}.
(\ref{eq:niid_t3_sigma}) holds based on Assumption 3.
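The last two equalities leading to (\ref{eq:t3_bound}) only use the arithmetic series $\sum_{b=1}^{\tau_{max}}(b-1) = \tau_{max}(\tau_{max}-1)/2$ together with the $K/\tau_{max}$ communication rounds. A quick check (the values of $K$ and $\tau_{max}$ below are arbitrary):

```python
# sum_{b=1}^{tau}(b - 1) = tau(tau - 1)/2 for any interval length tau
for tau in range(1, 50):
    assert sum(b - 1 for b in range(1, tau + 1)) == tau * (tau - 1) // 2

# summing over the K/tau communication rounds gives K(tau - 1)/2
K, tau = 120, 6
per_round = tau * (tau - 1) // 2
assert (K // tau) * per_round == K * (tau - 1) // 2
```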
\textbf{Bounding $T_4$}
\begin{align}
& \sum_{k = 1}^{K} \mathop{\mathbb{E}}\left[\left\| \sum_{s=1}^{k-1} (\mathbf{J} - \prod_{l=s}^{k-1}\mathbf{W}_{l}) \mathbf{\hat{f}}_{s} \right\|^2\right] \nonumber\\
& = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \mathop{\mathbb{E}}\left[\left\| \sum_{s=1}^{a \tau_{max} + b - 1} (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1}\mathbf{W}_{l}) \mathbf{\hat{f}}_{s} \right\|^2\right] \nonumber\\
& = \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \mathop{\mathbb{E}}\left[\left\| \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1}\mathbf{W}_{l}) \mathbf{\hat{f}}_{s} \right\|^2\right] \label{eq:t4_zeroout}\\
& \leq \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \left( (b - 1) \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[\left\| (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1} \mathbf{W}_{l}) \mathbf{\hat{f}}_{s} \right\|^2\right] \right) \label{eq:t4_jensen}\\
& \leq \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \left( (b - 1) \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[\left\| \mathbf{\hat{f}}_{s} \right\|^2 \left\| (\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1}\mathbf{W}_{l}) \right\|_{op}^2 \right] \right) \label{eq:t4_op}\\
& \leq \sum_{a = 0}^{K / \tau_{max} - 1} \sum_{b = 1}^{\tau_{max}} \left( (b - 1) \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + b - 1} \mathop{\mathbb{E}}\left[\left\| \mathbf{\hat{f}}_{s} \right\|^2 \right] \right) \label{eq:t4_opout} \\
& \leq \frac{\tau_{max} (\tau_{max} - 1)}{2} \sum_{a = 0}^{K / \tau_{max} - 1} \left( \sum_{s = a\tau_{max} + 1}^{a\tau_{max} + \tau_{max} - 1} \mathop{\mathbb{E}}\left[\left\| \mathbf{\hat{f}}_{s} \right\|^2 \right] \right) \nonumber \\
& \leq \frac{\tau_{max} (\tau_{max} - 1)}{2} \sum_{k = 1}^{K} \mathop{\mathbb{E}}\left[\left\| \mathbf{\hat{f}}_{k} \right\|^2 \right] \nonumber \\
& = \frac{\tau_{max}(\tau_{max} - 1)}{2} \sum_{k=1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_k^i) \right\|^2 \right] , \label{eq:t4_bound}
\end{align}
where (\ref{eq:t4_zeroout}) holds because $\mathbf{J} - \prod_{l=s}^{a\tau_{max} + b - 1}\mathbf{W}_{l}$ becomes $0$ when $s \leq a\tau_{max}$.
(\ref{eq:t4_jensen}) holds by the convexity of the $\ell_2$ norm and Jensen's inequality.
(\ref{eq:t4_op}) holds based on Lemma \ref{lemma:operator}.
(\ref{eq:t4_opout}) holds based on Lemma \ref{lemma:zeroout}.
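The step (\ref{eq:t4_jensen}) uses $\|\sum_{s=1}^{n} \mathbf{a}_s\|^2 \le n \sum_{s=1}^{n} \|\mathbf{a}_s\|^2$ with $n = b - 1$ terms. A small randomized check (illustrative vectors only):

```python
import random

random.seed(2)

def sq_norm(v):
    return sum(x * x for x in v)

for _ in range(2000):
    n = random.randint(1, 8)            # plays the role of b - 1
    vecs = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(n)]
    total = [sum(v[j] for v in vecs) for j in range(4)]
    assert sq_norm(total) <= n * sum(sq_norm(v) for v in vecs) + 1e-9
```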
\textbf{Final Result}
By plugging in (\ref{eq:t3_bound}) and (\ref{eq:t4_bound}) into (\ref{eq:niid_t3t4}), we have
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad \leq \frac{2 \eta^2}{K} \left( K \frac{(\tau_{max} - 1)}{2} \sigma^2 + \frac{\tau_{max} (\tau_{max} - 1)}{2} \left( \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \right) \right) \nonumber \\
& \quad\quad = \eta^2 (\tau_{max} - 1) \sigma^2 + \frac{\eta^2 \tau_{max} (\tau_{max} - 1)}{K} \left( \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] \right) \label{eq:niid_before}
\end{align}
The local gradient term on the right-hand side of (\ref{eq:niid_before}) can be bounded using the following inequality.
\begin{align}
\mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_{k}^{i}) \right\|^2 \right] & = \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{u}_{k}) + \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] \nonumber \\
& \leq 2 \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{x}_{k}^{i}) - \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] + 2 \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] \label{eq:niid_localgrad_jensen} \\
& \leq 2 L^2 \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] + 2 \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right], \label{eq:niid_localgrad}
\end{align}
where (\ref{eq:niid_localgrad_jensen}) holds by the convexity of the $\ell_2$ norm and Jensen's inequality.
Plugging in (\ref{eq:niid_localgrad}) into (\ref{eq:niid_before}), we have
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad \leq \eta^2 (\tau_{max} - 1) \sigma^2 + \frac{2 \eta^2 \tau_{max} (\tau_{max} - 1) L^2}{K} \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad\quad\quad + \frac{2 \eta^2 \tau_{max} (\tau_{max} - 1)}{K} \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right]
\end{align}
After a minor rearrangement, we have
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad \leq \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - 2\eta^2 \tau_{max} (\tau_{max} - 1) L^2} + \frac{2 \eta^2 \tau_{max} (\tau_{max} - 1)}{K (1 - 2\eta^2 \tau_{max} (\tau_{max} - 1) L^2)} \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] \label{eq:niid_rearrange}
\end{align}
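The rearrangement simply moves the discrepancy term to the left-hand side: from $X \le c_1 + A X + c_2 G$ with $A < 1$ one obtains $X \le c_1/(1-A) + c_2 G/(1-A)$. A numeric illustration with arbitrary constants:

```python
def rearranged_bound(c1, c2, G, A):
    """Explicit bound X <= c1/(1-A) + c2*G/(1-A), from X <= c1 + A*X + c2*G."""
    assert 0.0 <= A < 1.0
    return c1 / (1.0 - A) + c2 * G / (1.0 - A)

# the tightest X satisfying the implicit inequality meets the explicit bound
c1, c2, G, A = 0.3, 0.05, 2.0, 0.4
X = (c1 + c2 * G) / (1.0 - A)           # solves X = c1 + A*X + c2*G
assert abs(X - (c1 + A * X + c2 * G)) < 1e-9
assert X <= rearranged_bound(c1, c2, G, A) + 1e-9
```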
Let us define $A = 2\eta^2 \tau_{max} (\tau_{max} - 1) L^2$. Then (\ref{eq:niid_rearrange}) is simplified as follows.
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad \leq \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A}{K L^2 (1 - A)} \sum_{k=1}^{K} \sum_{i = 1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] \nonumber
\end{align}
Based on Assumption 4, we have
\begin{align}
& \frac{1}{K} \sum_{k = 1}^{K} \sum_{i=1}^{m} p_i \mathop{\mathbb{E}} \left[ \left\| \mathbf{u}_{k} - \mathbf{x}_{k}^{i} \right\|^2 \right] \nonumber \\
& \quad\quad \leq \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \beta^2}{K L^2 (1 - A)} \sum_{k=1}^{K} \mathop{\mathbb{E}} \left[ \left\| \sum_{i=1}^{m} p_i \nabla F_i(\mathbf{u}_{k}) \right\|^2 \right] + \frac{A \kappa^2}{L^2 (1 - A)} \nonumber \\
& \quad\quad = \frac{\eta^2 (\tau_{max} - 1) \sigma^2}{1 - A} + \frac{A \beta^2}{K L^2 (1 - A)} \sum_{k=1}^{K} \mathop{\mathbb{E}} \left[ \left\| \nabla F(\mathbf{u}_{k}) \right\|^2 \right] + \frac{A \kappa^2}{L^2 (1 - A)}, \label{eq:final}
\end{align}
where (\ref{eq:final}) holds based on the definition of the objective function (\ref{eq:objective_appendix}).
Here, we complete the proof.
\end{proof}
\subsubsection {Proof of Other Lemmas}
\begin{lemma}
\label{lemma:operator}
Consider a real matrix $\mathbf{A} \in \mathbb{R}^{md_j \times md_j}$ and a real vector $\mathbf{b} \in \mathbb{R}^{md_j}$. If $\mathbf{A}$ is symmetric and $\mathbf{b} \neq \mathbf{0}_{md_j}$, we have
\begin{equation}
\| \mathbf{Ab} \| \leq \| \mathbf{A} \|_{op} \| \mathbf{b} \|
\end{equation}
\end{lemma}
\begin{proof}
\begin{align}
\| \mathbf{Ab} \|^2 & = \frac{\| \mathbf{Ab} \|^2}{\|\mathbf{b} \|^2} \|\mathbf{b}\|^2 \nonumber \\
& \leq \|\mathbf{A}\|_{op}^{2}\|\mathbf{b}\|^2, \label{eq:op}
\end{align}
where (\ref{eq:op}) holds based on the definition of operator norm.
\end{proof}
\begin{lemma}
\label{lemma:zeroout}
Suppose that $\mathbf{P}_{1}, \ldots, \mathbf{P}_{t}$ are $md \times md$ averaging matrices and $t < \tau_{max}$. Then
\begin{equation}
\| \mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_{i} \|_{op}^2 = 1.
\end{equation}
\end{lemma}
\begin{proof}
By properties 2 and 3 of the averaging matrix $\mathbf{P}$, $\prod_{i=1}^{t}\mathbf{P}_i$ is symmetric for all $t < \tau_{max}$.
Since $\mathbf{J}$ is symmetric, $\prod_{i=1}^{t}\mathbf{P}_i - \mathbf{J}$ is also symmetric and it can be decomposed as $Q\Lambda_t Q^\top$, where $Q$ is an orthogonal matrix and $\Lambda_t$ is a diagonal matrix of eigenvalues.
By the definition of $\mathbf{P}_i$ and property 4, the columns of $\prod_{i=1}^{t}\mathbf{P}_i$ can be grouped into two subsets.
The first group consists of the columns that are the same as the corresponding columns of $\mathbf{J}$.
Each of these columns contains $m$ elements equal to $\frac{1}{m}$.
The second group consists of the columns that do not contain $\frac{1}{m}$.
Each of these columns has exactly one element equal to $1$, and all the other elements are $0$.
Thus, in $\mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_i$, the first group columns are all zeroed out while the second group columns remain.
Because every block in both $\mathbf{J}$ and $\prod_{i=1}^{t}\mathbf{P}_i$ is diagonal, all the second group columns in $\mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_i$ are orthogonal and their eigenvalues are either $1$ or $-1$.
Since $t < \tau_{max}$, $\mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_i$ has at least $\frac{md}{\tau_{max}}$ second group columns.
By the definition of the matrix operator norm, $\| \mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_i \|_{op}^2 = \max\{| \lambda(\mathbf{J} - \prod_{i=1}^{t}\mathbf{P}_i) |\} = 1$.
\end{proof}
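Lemma \ref{lemma:zeroout} can be illustrated on a tiny instance. The Python sketch below (a hand-built example with $m=2$ workers and $d=2$ layers, not the general proof) constructs $\mathbf{J}$ and a partial averaging matrix $\mathbf{P}$ that averages only layer $0$, then checks that $\mathbf{J} - \mathbf{P}$ never stretches a vector while attaining norm $1$ on a layer-$1$ disagreement direction:

```python
import random

random.seed(3)
m, d = 2, 2
n = m * d   # coordinate index = w*d + l

def build(entry):
    return [[entry(i // d, i % d, j // d, j % d) for j in range(n)]
            for i in range(n)]

# J averages both layers across workers; P averages layer 0 only
J = build(lambda w, l, w2, l2: 1.0 / m if l == l2 else 0.0)
P = build(lambda w, l, w2, l2: 1.0 / m if l == l2 == 0
          else (1.0 if (l == l2 == 1 and w == w2) else 0.0))
D = [[J[i][j] - P[i][j] for j in range(n)] for i in range(n)]

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def norm(x):
    return sum(t * t for t in x) ** 0.5

# no vector is stretched: ||(J - P) x|| <= ||x||
for _ in range(2000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert norm(apply(D, x)) <= norm(x) + 1e-12

# the bound is attained on a layer-1 disagreement direction
x_star = [0.0] * n
x_star[0 * d + 1] = 1.0    # worker 0, layer 1
x_star[1 * d + 1] = -1.0   # worker 1, layer 1
assert abs(norm(apply(D, x_star)) - norm(x_star)) < 1e-12
```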
\clearpage
\subsection {Additional Experimental Results}
In this section, we provide extra experimental results with extensive hyper-parameter settings.
We commonly use $128$ clients and a local batch size of $32$ in all the experiments.
The gradual learning rate warmup (\cite{goyal2017accurate}) is also applied to the first 10 epochs in all the experiments.
\textbf{CIFAR-10} --
Tables \ref{tab:app-cifar10-iid} and \ref{tab:app-cifar10-niid} show the CIFAR-10 classification performance of FedLAMA across different $\phi$ settings.
As expected, the accuracy decreases as $\phi$ increases.
The IID and non-IID data settings show the same trend.
Depending on the available network bandwidth, $\phi$ can be tuned to an appropriate value.
When $\phi=2$, the accuracy is almost the same as or even slightly higher than FedAvg accuracy.
If the network bandwidth is limited, one can increase $\phi$ and slightly increase the epoch budget to achieve good accuracy.
Table \ref{tab:app-cifar10-niid-fedavg} shows the CIFAR-10 accuracy across different $\tau'$ settings.
We see that the accuracy drops significantly as $\tau'$ increases.
\textbf{CIFAR-100} --
Tables \ref{tab:app-cifar100-iid} and \ref{tab:app-cifar100-niid} show the CIFAR-100 classification performance of FedLAMA across different $\phi$ settings.
FedLAMA achieves a comparable accuracy to FedAvg with a short aggregation interval, even when the degree of data heterogeneity is extremely high ($25\%$ device sampling and a Dirichlet coefficient of $0.1$).
Table \ref{tab:app-cifar100-niid-fedavg} shows the FedAvg accuracy with different $\tau'$ settings.
Under the strongly heterogeneous data distributions, FedAvg with a large aggregation interval ($\tau' \geq 12$) does not achieve a reasonable accuracy.
\textbf{FEMNIST} --
Table \ref{tab:app-femnist} shows the FEMNIST classification performance of FedLAMA across different $\phi$ settings.
FedLAMA achieves a similar accuracy to the baseline (FedAvg with $\tau'=10$) even when using a large interval increase factor $\phi \geq 4$.
These results demonstrate the effectiveness of the proposed layer-wise adaptive model aggregation method on problems with heterogeneous data distributions.
\begin{table}[h!]
\scriptsize
\centering
\caption{
(IID data) CIFAR-10 classification results of FedLAMA with different $\phi$ settings.
}
\begin{tabular}{|c||c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & Averaging interval: $\tau'$ & Interval increase factor: $\phi$ & Validation acc. \\ \hline \hline
\multirow{4}{*}{128} & \multirow{4}{*}{32} & 0.8 & \multirow{4}{*}{6} & 1 (FedAvg) & $88.37 \pm 0.1 \%$\\ \cline{3-3}\cline{5-6}
& & \multirow{3}{*}{$0.5$} & & 2 & $88.41 \pm 0.04 \%$\\ \cline{5-6}
& & & & 4 & $86.33 \pm 0.2\%$ \\ \cline{5-6}
& & & & 8 & $85.08 \pm 0.04\%$ \\ \hline
\end{tabular}
\label{tab:app-cifar10-iid}
\end{table}
\begin{table}[h!]
\scriptsize
\centering
\caption{
(Non-IID data) CIFAR-10 classification results of FedLAMA with different $\phi$ settings.
}
\begin{tabular}{|c||c|c|c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & $\tau'$ & Active ratio & Dirichlet coeff. & $\phi$ & Validation acc. \\ \hline \hline
\multirow{27}{*}{128} & \multirow{27}{*}{32} & \multirow{18}{*}{$0.8$} & \multirow{27}{*}{6} & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $90.79 \pm 0.1\%$\\ \cline{7-8}
& & & & & & 2 & $89.01 \pm 0.04\%$\\ \cline{7-8}
& & & & & & 4 & $87.84 \pm 0.01\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $90.53 \pm 0.18\%$\\ \cline{7-8}
& & & & & & 2 & $89.21 \pm 0.2 \%$\\ \cline{7-8}
& & & & & & 4 & $86.68 \pm 0.12 \%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $89.52 \pm 0.11\%$\\ \cline{7-8}
& & & & & & 2 & $89.00 \pm 0.1 \%$\\ \cline{7-8}
& & & & & & 4 & $84.82 \pm 0.08 \%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $90.34 \pm 0.12\%$\\ \cline{7-8}
& & & & & & 2 & $89.56 \pm 0.13\%$\\ \cline{7-8}
& & & & & & 4 & $87.48 \pm 0.21\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $89.86 \pm 0.13 \%$\\ \cline{7-8}
& & & & & & 2 & $88.44 \pm 0.15\%$\\ \cline{7-8}
& & & & & & 4 & $87.29 \pm 0.18\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $87.83 \pm 0.2\%$\\ \cline{7-8}
& & & & & & 2 & $86.18 \pm 0.17\%$\\ \cline{7-8}
& & & & & & 4 & $85.92 \pm 0.21 \%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{6}{*}{$0.6$}& & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $88.97 \pm 0.03\%$\\ \cline{7-8}
& & & & & & 2 & $87.89 \pm 0.2\%$\\ \cline{7-8}
& & & & & & 4 & $86.61 \pm 0.1\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $87.59 \pm 0.05\%$\\ \cline{7-8}
& & & & & & 2 & $87.12 \pm 0.08\%$\\ \cline{7-8}
& & & & & & 4 & $86.07 \pm 0.02\%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{3}{*}{$0.3$} & & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $84.02 \pm 0.04\%$\\ \cline{7-8}
& & & & & & 2 & $83.55 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 4 & $82.78 \pm 0.03\%$ \\ \hline
\end{tabular}
\label{tab:app-cifar10-niid}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption{
(Non-IID data) CIFAR-10 classification results of FedAvg with different $\tau'$ settings.
}
\begin{tabular}{|c||c|c|c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & $\tau'$ & Active ratio & Dirichlet coeff. & $\phi$ & Validation acc. \\ \hline \hline
\multirow{3}{*}{128} & \multirow{3}{*}{32} & \multirow{3}{*}{$0.8$} & $6$ & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $89.52 \pm 0.11\%$\\ \cline{4-4}\cline{7-8}
& & & $12$ & & & 1 (FedAvg) & $87.29 \pm 0.05\%$\\ \cline{4-4}\cline{7-8}
& & & $24$ & & & 1 (FedAvg) & $84.82 \pm 0.1\%$\\ \hline
\multirow{3}{*}{128} & \multirow{3}{*}{32} & \multirow{3}{*}{$0.3$} & $6$ & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $84.02 \pm 0.1\%$\\ \cline{4-4}\cline{7-8}
& & & $12$ & & & 1 (FedAvg) & $82.48 \pm 0.2\%$\\ \cline{4-4}\cline{7-8}
& & & $24$ & & & 1 (FedAvg) & $76.72 \pm 0.1\%$\\ \hline
\end{tabular}
\label{tab:app-cifar10-niid-fedavg}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption{
(IID data) CIFAR-100 classification results of FedLAMA with different $\phi$ settings.
}
\begin{tabular}{|c||c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & Averaging interval: $\tau'$ & Interval increase factor: $\phi$ & Validation acc. \\ \hline \hline
\multirow{4}{*}{128} & \multirow{4}{*}{32} & \multirow{4}{*}{$0.6$} & \multirow{4}{*}{6} & 1 (FedAvg) & $76.50 \pm 0.02 \%$\\ \cline{5-6}
& & & & 2 & $75.99 \pm 0.03 \%$\\ \cline{5-6}
& & & & 4 & $76.17 \pm 0.2 \%$ \\ \cline{5-6}
& & & & 8 & $76.15 \pm 0.2 \%$ \\ \hline
\end{tabular}
\label{tab:app-cifar100-iid}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption{
(Non-IID data) CIFAR-100 classification results of FedLAMA with different $\phi$ settings.
}
\begin{tabular}{|c||c|c|c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & $\tau'$ & Active ratio & Dirichlet coeff. & $\phi$ & Validation acc. \\ \hline \hline
\multirow{27}{*}{128} & \multirow{27}{*}{32} & \multirow{6}{*}{$0.4$} & \multirow{27}{*}{6} & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $80.34 \pm 0.01\%$\\ \cline{7-8}
& & & & & & 2 & $78.92 \pm 0.01\%$\\ \cline{7-8}
& & & & & & 4 & $77.16 \pm 0.05\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $80.19 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 2 & $78.88 \pm 0.1\%$\\ \cline{7-8}
& & & & & & 4 & $78.03 \pm 0.08\%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{3}{*}{$0.2$} & & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $79.78 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 2 & $79.07 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 4 & $79.32 \pm 0.01\%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{6}{*}{$0.4$} & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $79.94 \pm 0.1\%$\\ \cline{7-8}
& & & & & & 2 & $78.98 \pm 0.01\%$\\ \cline{7-8}
& & & & & & 4 & $77.50 \pm 0.02\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $79.95 \pm 0.05\%$\\ \cline{7-8}
& & & & & & 2 & $78.37 \pm 0.05\%$\\ \cline{7-8}
& & & & & & 4 & $76.93 \pm 0.1\%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{3}{*}{$0.2$} & & \multirow{3}{*}{$50\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $79.62 \pm 0.06\%$\\ \cline{7-8}
& & & & & & 2 & $78.76 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 4 & $77.44 \pm 0.02\%$ \\ \cline{3-3}\cline{5-8}
& & \multirow{2}{*}{$0.4$} & & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{1} & 1 (FedAvg) & $78.78 \pm 0.02\%$\\ \cline{7-8}
& & & & & & 2 & $78.10 \pm 0.02\%$\\ \cline{3-3} \cline{7-8}
& & $0.2$ & & & & 4 & $76.84 \pm 0.03\%$ \\ \cline{3-3} \cline{5-8}
& & \multirow{5}{*}{$0.4$} & & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.5} & 1 (FedAvg) & $78.81 \pm 0.01\%$\\ \cline{7-8}
& & & & & & 2 & $77.77 \pm 0.05\%$\\ \cline{7-8}
& & & & & & 4 & $77.01 \pm 0.1\%$ \\ \cline{5-8}
& & & & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $79.06 \pm 0.03\%$\\ \cline{7-8}
& & & & & & 2 & $77.84 \pm 0.02\%$\\ \cline{3-3} \cline{7-8}
& & $0.2$ & & & & 4 & $77.17 \pm 0.01\%$ \\ \hline
\end{tabular}
\label{tab:app-cifar100-niid}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption{
(Non-IID data) CIFAR-100 classification results of FedAvg with different $\tau'$ settings.
}
\begin{tabular}{|c||c|c|c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & $\tau'$ & Active ratio & Dirichlet coeff. & $\phi$ & Validation acc. \\ \hline \hline
\multirow{3}{*}{128} & \multirow{3}{*}{32} & \multirow{3}{*}{$0.4$} & $6$ & \multirow{3}{*}{$100\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $79.78 \pm 0.02\%$\\ \cline{4-4}\cline{7-8}
& & & $12$ & & & 1 (FedAvg) & $77.71 \pm 0.1\%$\\ \cline{4-4}\cline{7-8}
& & & $24$ & & & 1 (FedAvg) & $69.63 \pm 0.1\%$\\ \hline
\multirow{3}{*}{128} & \multirow{3}{*}{32} & \multirow{3}{*}{$0.4$} & $6$ & \multirow{3}{*}{$25\%$} & \multirow{3}{*}{0.1} & 1 (FedAvg) & $79.06 \pm 0.03\%$\\ \cline{4-4}\cline{7-8}
& & & $12$ & & & 1 (FedAvg) & $76.16 \pm 0.05\%$\\ \cline{4-4}\cline{7-8}
& & & $24$ & & & 1 (FedAvg) & $67.43 \pm 0.1\%$\\ \hline
\end{tabular}
\label{tab:app-cifar100-niid-fedavg}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption{
FEMNIST classification results of FedLAMA with different $\phi$ settings.
}
\begin{tabular}{|c||c|c|c|c|c||c|} \hline
\# of clients & Local batch size & LR & Averaging interval: $\tau'$ & Active ratio & Interval increase factor: $\phi$ & Validation acc. \\ \hline \hline
\multirow{12}{*}{128} & \multirow{12}{*}{32} & \multirow{12}{*}{$0.04$} & \multirow{12}{*}{12} & \multirow{4}{*}{$100\%$} & 1 (FedAvg) & $85.74 \pm 0.21\%$\\ \cline{6-7}
& & & & & 2 & $85.40 \pm 0.13\%$\\ \cline{6-7}
& & & & & 4 & $84.67 \pm 0.1\%$ \\ \cline{6-7}
& & & & & 8 & $84.15 \pm 0.18\%$ \\ \cline{5-7}
& & & & \multirow{4}{*}{$50\%$} & 1 (FedAvg) & $86.59 \pm 0.2\%$\\ \cline{6-7}
& & & & & 2 & $86.07 \pm 0.1\%$\\ \cline{6-7}
& & & & & 4 & $85.77 \pm 0.15\%$ \\ \cline{6-7}
& & & & & 8 & $85.31 \pm 0.03\%$ \\ \cline{5-7}
& & & & \multirow{4}{*}{$25\%$} & 1 (FedAvg) & $86.04 \pm 0.2\%$\\ \cline{6-7}
& & & & & 2 & $86.01 \pm 0.1\%$\\ \cline{6-7}
& & & & & 4 & $85.62 \pm 0.08\%$ \\ \cline{6-7}
& & & & & 8 & $85.23 \pm 0.1\%$ \\ \hline
\end{tabular}
\label{tab:app-femnist}
\end{table}
\subsection{Proving Lemma \ref{lem:non_adaptive_optimum}}
Let us suppose that $G=(U,V,E)$ is a stochastic graph with downward-closed
probing constraints $(\mathcal{C}_v)_{v \in V}$. In order to prove
Lemma \ref{lem:non_adaptive_optimum}, we must show that there exists
an optimal relaxed probing algorithm which is non-adaptive and satisfies \ref{eqn:committal}.
Our high level approach is to consider an optimal relaxed probing algorithm $\mathcal{A}$
which satisfies \ref{eqn:committal}, and then to construct a new
non-adaptive algorithm $\mathcal{B}$ by \textit{stealing} the strategy
of $\mathcal{A}$, without any loss in performance. More
specifically, we construct $\mathcal{B}$ by writing down for each $v \in V$
and $\bm{e} \in \mathcal{C}_v$ the probability that
$\mathcal{A}$ probes the edges of $\bm{e}$ in order. These
probabilities necessarily satisfy certain inequalities which we make
use of in designing $\mathcal{B}$. In order to do so, we need a technical randomized rounding
procedure whose precise relevance will become clear in the proof of
Lemma \ref{lem:non_adaptive_optimum}.
Suppose that $\bm{e} \in E^{(*)}$, and recall that $\lambda$ is the empty string/character. For each $j \ge 0$,
denote by $\bm{e}_{j}$ the $j^{th}$ character of $\bm{e}$,
where $\bm{e}_j:=\lambda$ when $j=0$ or $j > |\bm{e}|$.
Let us now assume that $(y_{v}(\bm{e}))_{\bm{e} \in \mathcal{C}_v}$ is a collection of non-negative values
which satisfy $y_{v}(\lambda)=1$, and
\begin{equation} \label{eqn:marginal_distribution}
\sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}_v}} y_{v}(\bm{e}', e) \le y_{v}(\bm{e}'),
\end{equation}
for each $\bm{e}' \in \mathcal{C}_v$.
\begin{proposition} \label{prop:vertex_round}
Given a collection of values $(y_{v}(\bm{e}))_{\bm{e} \in \mathcal{C}_v}$ which satisfy
$y_{v}(\lambda)=1$ and \eqref{eqn:marginal_distribution}, there exists a distribution $\mathcal{D}^v$ supported on $\mathcal{C}_v$,
such that if $\bm{Y} \sim \mathcal{D}^v$, then for each $\bm{e} \in \mathcal{C}_v$ with $k:= |\bm{e}| \ge 1$,
it holds that
\begin{equation}\label{eqn:target_probability}
\mathbb{P}[ (\bm{Y}_{1}, \ldots , \bm{Y}_{k}) = (\bm{e}_1, \ldots , \bm{e}_{k})] = y_{v}(\bm{e}),
\end{equation}
where $\bm{Y}_1,\ldots ,\bm{Y}_k$ are the first $k$ characters of $\bm{Y}$.
\end{proposition}
\begin{proof}
First define $\mathcal{C}^{>}_v:= \{ \bm{e}' \in \mathcal{C}_v : y_{v}(\bm{e}') > 0\}$, which we observe
is downward-closed since by assumption $\mathcal{C}_v$ is downward-closed and \eqref{eqn:marginal_distribution} holds. We
prove the proposition for $\mathcal{C}^{>}_v$, which we then argue implies the proposition holds for $\mathcal{C}_v$.
Observe now that for each $\bm{e}' \in \mathcal{C}^{>}_v$, we have that
\begin{equation}\label{eqn:online_vertex_edge_distribution}
\sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}^{>}_v}} \frac{y_{v}(\bm{e}', e)}{y_{v}(\bm{e}')} \le 1
\end{equation}
as a result of \eqref{eqn:marginal_distribution} (recall that $y_{v}(\lambda):=1$). We thus define for
each $\bm{e}' \in \mathcal{C}^{>}_v$,
\begin{equation}\label{eqn:pass_probability}
z_{v}(\bm{e}'):= 1 - \sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}^{>}_v}} \frac{y_{v}(\bm{e}',e)}{y_{v}(\bm{e}')},
\end{equation}
which we observe satisfies $0 \le z_{v}(\bm{e}') \le 1$; moreover, $z_{v}(\bm{e}') < 1$ whenever $\bm{e}'$
admits an extension $(\bm{e}',e) \in \mathcal{C}^{>}_v$, since each such $y_{v}(\bm{e}',e)$ is positive by the definition
of $\mathcal{C}^{>}_v$. This leads to the following randomized rounding algorithm,
which we claim outputs a random string $\bm{Y}$ which satisfies the desired properties:
\begin{algorithm}[H]
\caption{VertexRound} \label{alg:general_vertex_rounding}
\begin{algorithmic}[1]
\Require a collection of values $(y_{v}(\bm{e}))_{\bm{e} \in \mathcal{C}^{>}_v}$ satisfying \eqref{eqn:marginal_distribution} and $y_{v}(\lambda)=1$.
\Ensure a random string $\bm{Y}=(Y_{1},\ldots ,Y_{|\partial(v)|})$ supported on $\mathcal{C}^{>}_v$.
\State Set $\bm{e}' \leftarrow \lambda$.
\State Initialize $Y_{i}=\lambda$ for each $i=1, \ldots , |\partial(v)|$.
\For{$i=1, \ldots , |\partial(v)|$}
\State Exit the ``for loop'' with probability $z_{v}(\bm{e}')$.
\Comment{pass with a certain probability -- see \eqref{eqn:pass_probability}}
\State Draw $e \in \partial(v)$ satisfying $(\bm{e}',e) \in \mathcal{C}^{>}_v$ with probability $y_{v}(\bm{e}', e)/ (y_{v}(\bm{e}') \, (1-z_{v}(\bm{e}')))$. \label{line:edge_draw}
\State Set $Y_{i}=e$.
\State $\bm{e}' \leftarrow (\bm{e}',e)$.
\EndFor
\State \Return $\bm{Y}=(Y_{1},\ldots ,Y_{|\partial(v)|})$. \Comment{concatenate the edges in order and return the resulting string}
\end{algorithmic}
\end{algorithm}
Clearly, the random string $\bm{Y}$ is supported on $\mathcal{C}^{>}_v$, thanks to line \ref{line:edge_draw} of Algorithm \ref{alg:general_vertex_rounding}.
We now show that \eqref{eqn:target_probability} holds. As such,
let us first assume $k=1$, and $e \in \partial(v)$ satisfies $(e) \in \mathcal{C}^{>}_v$. Observe
that
\[
\mathbb{P}[Y_{1} = e] = (1 - z_{v}(\lambda)) \frac{ y_{v}(e)}{1 - z_{v}(\lambda)} = y_{v}(e),
\]
as the algorithm does not exit the ``for loop'' with probability $1-z_{v}(\lambda)$, in which case it draws $e$
with probability $y_{v}(e)/(1-z_{v}(\lambda))$. In general, take $k \ge 2$, and assume that for each $\bm{e}' \in \mathcal{C}^{>}_v$ with $1 \le |\bm{e}'| < k$,
it holds that
\[
\mathbb{P}[ (Y_{1},\ldots ,Y_{|\bm{e}'|}) = \bm{e}'] = y_{v}(\bm{e}').
\]
If we now fix $\bm{e} =(e_{1}, \ldots, e_{k}) \in \mathcal{C}^{>}_v$ with $|\bm{e}|=k$,
observe that $\bm{e}_{<k}:= (e_{1}, \ldots ,e_{k-1}) \in \mathcal{C}^{>}_v$, as $\mathcal{C}^{>}_v$ is downward-closed.
Moreover,
\begin{align*}
\mathbb{P}[(Y_{1}, \ldots , Y_{k}) = \bm{e}] &= \mathbb{P}[ Y_{k} = e_k \, | \, (Y_{1}, \ldots , Y_{k-1}) = \bm{e}_{<k}] \cdot \mathbb{P}[ (Y_{1}, \ldots , Y_{k-1}) = \bm{e}_{<k}] \\
&= \mathbb{P}[ Y_{k} = e_k \, | \, (Y_{1}, \ldots , Y_{k-1}) = \bm{e}_{<k}] \cdot y_{v}(\bm{e}_{<k}),
\end{align*}
where the last line follows by the induction hypothesis since $\bm{e}_{<k} \in \mathcal{C}^{>}_v$ is of length $k-1$.
We know however that
\[
\mathbb{P}[ Y_{k} = e_k \, | \, (Y_{1}, \ldots , Y_{k-1}) = \bm{e}_{<k}]= (1-z_{v}(\bm{e}_{<k})) \, \frac{y_{v}(\bm{e}_{<k},e_k)}{y_{v}(\bm{e}_{<k}) \, (1-z_{v}(\bm{e}_{<k}))} = \frac{y_{v}(\bm{e}_{<k},e_k)}{y_{v}(\bm{e}_{<k})}.
\]
This is because once we condition on the event $(Y_{1}, \ldots , Y_{k-1}) = \bm{e}_{<k}$, we know that the algorithm
does not exit the ``for loop'' with probability $1 - z_{v}(\bm{e}_{<k})$, in which case it selects $e_{k} \in \partial(v)$ with probability
$y_{v}(\bm{e}_{<k}, e_k)/(y_{v}(\bm{e}_{<k}) \, (1-z_{v}(\bm{e}_{<k})))$, since $(\bm{e}_{<k}, e_k) \in \mathcal{C}^{>}_v$ by assumption. As such, we have that
\[
\mathbb{P}[(Y_{1}, \ldots , Y_{k}) = \bm{e}] = y_{v}(\bm{e}),
\]
and so the proposition holds for $\mathcal{C}^{>}_v$.
To complete the argument, observe that since $\bm{Y}$ is supported on $\mathcal{C}^{>}_v$ and $\mathcal{C}^{>}_v$
is downward-closed, every prefix of $\bm{Y}$ also lies in $\mathcal{C}^{>}_v$. Hence, for each non-empty
$\bm{e} \in \mathcal{C}_v \setminus \mathcal{C}^{>}_v$, the left-hand side of \eqref{eqn:target_probability}
is $0 = y_{v}(\bm{e})$, and so $\bm{Y}$ satisfies \eqref{eqn:target_probability} for the non-empty strings
of $\mathcal{C}_v$, not just those of $\mathcal{C}^{>}_v$.
\end{proof}
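For concreteness, the rounding procedure of Proposition \ref{prop:vertex_round} can be sketched in Python as follows; strings of $\mathcal{C}^{>}_v$ are represented as tuples of edge labels, and the $y$-values in the usage note are hypothetical placeholders.

```python
import random

def vertex_round(y, degree):
    """A sketch of VertexRound: sample a string (tuple of edges) so that, for
    each string e with y[e] > 0, the first |e| sampled edges equal e with
    probability exactly y[e].  Assumes y[()] == 1 and the marginal
    inequalities (eqn:marginal_distribution) hold."""
    prefix = ()
    for _ in range(degree):
        # extensions of the current prefix carrying positive y-mass
        exts = [e for e in y
                if len(e) == len(prefix) + 1 and e[:-1] == prefix and y[e] > 0]
        z = 1.0 - sum(y[e] for e in exts) / y[prefix]  # pass probability
        if not exts or random.random() < z:
            break                                      # exit the "for loop"
        # draw the next edge with probability y[prefix+(e,)]/(y[prefix]*(1-z))
        prefix = random.choices(exts, weights=[y[e] for e in exts])[0]
    return prefix
```

For instance, with $y$-values $y(a)=0.5$, $y(b)=0.3$, $y(ab)=0.2$ (which satisfy \eqref{eqn:marginal_distribution}), the empirical prefix frequencies of the sampled strings converge to the prescribed $y$-values.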
We are now ready to prove Lemma \ref{lem:non_adaptive_optimum}.
\begin{proof}[Proof of Lemma \ref{lem:non_adaptive_optimum}]
Suppose that $\mathcal{A}$ is an optimal relaxed probing algorithm which returns
the one-sided matching $\mathcal{M}$ after executing on the stochastic graph
$G=(U,V,E)$. In a slight abuse of terminology, we say that $e$
is matched by $\mathcal{A}$, provided $e$ is included in $\mathcal{M}$.
We shall also make the simplifying assumption that $p_{e} < 1$ for each $e \in E$,
as the proof can be clearly adapted to handle the case when certain edges have
$p_{e}=1$ by restricting which strings of each $\mathcal{C}_v$ are considered.
Observe that since $\mathcal{A}$ is optimal, it is clear
that we may assume the following properties hold w.l.o.g. for each $e \in E$:
\begin{enumerate}
\item $e$ is probed only if $e$ can be added to the currently constructed one-sided
matching. \label{eqn:probing_only_if}
\item If $e$ is probed and $\text{st}(e)=1$, then $e$ is included in $\mathcal{M}$. \label{eqn:if_active_probe}
\end{enumerate}
Thus, in order to prove the lemma, we must find an alternative algorithm $\mathcal{B}$ which
is non-adaptive, yet continues to be optimal.
To this end, we shall first express $\mathbb{E}[w(\mathcal{M}(v))]$ in
a convenient form for each $v \in V$, where $w(\mathcal{M}(v))$ is the weight of the edge matched to $v$ (which is $0$ if no match occurs).
Given $v \in V$ and $1 \le i \le |U|$, we define $X_{i}^{v}$ to be the $i^{th}$ edge adjacent to $v$ that is probed by $\mathcal{A}$.
By convention, $X_{i}^{v}:=\lambda$ if no such edge exists. We may
then define $\bm{X}^{v}:=(X^{v}_{1}, \ldots , X^{v}_{|U|})$, and
$\bm{X}_{\le k}^{v} := (X^{v}_1, \ldots , X^{v}_k)$ for each $1 \le k \le |U|$. Moreover, given $\bm{e} =(e_{1}, \ldots ,e_{k}) \in E^{(*)}$ with $k \ge 1$, define $S(\bm{e})$ to be
the event in which $e_{k}$ is the only active edge amongst $e_{1}, \ldots ,e_{k}$.
Observe then that
\[
\mathbb{E}[ w(\mathcal{M}(v))] = \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} \mathbb{P}[S(\bm{e}) \cap \{\bm{X}^{v}_{\le k} = \bm{e}\}],
\]
as \eqref{eqn:probing_only_if} and \eqref{eqn:if_active_probe} ensure $v$ is matched to the first probed
edge which is revealed to be active. Moreover,
if $\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_v$ for $k \ge 2$, then
\begin{equation}
\mathbb{P}[S(\bm{e}) \cap \{\bm{X}_{\le k}^{v} = \bm{e} \}] = \mathbb{P}[\{\text{st}(e_k) =1 \}\cap \{\bm{X}_{\le k}^{v} = \bm{e}\}],
\end{equation}
as \eqref{eqn:probing_only_if} and \eqref{eqn:if_active_probe} ensure $\bm{X}_{\le k}^{v} = \bm{e}$ only if $e_{1}, \ldots , e_{k-1}$ are inactive. Thus,
\begin{align*}
\mathbb{E}[ w(\mathcal{M}(v))] &= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} \mathbb{P}[S(\bm{e}) \cap \{\bm{X}_{\le k}^{v} = \bm{e} \}] \\
&= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} \mathbb{P}[\{\text{st}(e_{k}) =1\} \cap \{\bm{X}_{\le k}^{v} = \bm{e} \}] \\
&= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} p_{e_k} \mathbb{P}[\bm{X}_{\le k}^{v} = \bm{e}],
\end{align*}
where the final equality holds since $\mathcal{A}$ must decide on whether to probe $e_{k}$ prior to revealing $\text{st}(e_k)$.
As a result, after summing over $v \in V$,
\begin{equation}\label{eqn:target_value}
\mathbb{E}[ w(\mathcal{M})] = \sum_{v \in V} \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} p_{e_k} \mathbb{P}[\bm{X}_{\le k}^{v} = \bm{e} ].
\end{equation}
Our goal is to find a non-adaptive relaxed probing algorithm which matches the value of \eqref{eqn:target_value}.
Thus, for each $v \in V$ and $\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_v$ with $k \ge 1$,
define
\[
x_{v}(\bm{e}):= \mathbb{P}[\bm{X}_{\le k}^{v} = \bm{e}],
\]
where $x_{v}(\lambda):=1$. Observe now that for each $\bm{e}'=(e_{1}', \ldots , e_{k}') \in \mathcal{C}_v$,
\begin{equation}\label{eqn:probability_consistency_conditional}
\sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}_v}} \mathbb{P}[\bm{X}_{\le k+1}^{v} = (\bm{e}',e) \, | \, \bm{X}_{\le k}^{v} = \bm{e}'] \le 1 - p_{e_k'}.
\end{equation}
To see \eqref{eqn:probability_consistency_conditional}, observe that the left-hand side corresponds to the probability that $\mathcal{A}$
probes some edge $e \in \partial(v)$, given it already probed $\bm{e}'$ in order. On the other hand, if a subsequent edge is probed,
then \eqref{eqn:probing_only_if} and \eqref{eqn:if_active_probe} imply that $e'_k$ must have been inactive, which occurs
independently of the event $\bm{X}_{\le k}^{v} = \bm{e}'$. This explains the right-hand side of \eqref{eqn:probability_consistency_conditional}. Using \eqref{eqn:probability_consistency_conditional}, the values
$(x_{v}(\bm{e}))_{\bm{e} \in \mathcal{C}_{v}}$ satisfy
\begin{equation}\label{eqn:probability_consistency}
\sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}_v}} x_{v}(\bm{e}', e) \le (1 - p_{e'_k}) \cdot x_{v}(\bm{e}'),
\end{equation}
for each $\bm{e}'=(e_{1}', \ldots , e_{k}') \in \mathcal{C}_v$ with $k \ge 1$. Moreover, clearly
$\sum_{e \in \partial(v)} x_{v}(e) \le 1$.
Given $\bm{e} =(e_1, \ldots ,e_k) \in \mathcal{C}_v$ for $k \ge 1$,
recall that $\bm{e}_{<k}:=(e_1, \ldots ,e_{k-1})$ where $\bm{e}_{< 1}:= \lambda$ if $k=1$.
Moreover, $g(\bm{e}_{<k}):= \prod_{i=1}^{k-1}(1- p_{e_i})$, where $g(\lambda):=1$.
Using this notation, define for each $\bm{e} \in \mathcal{C}_v$
\begin{equation}\label{eqn:y_value_definition}
y_{v}(\bm{e}):=
\begin{cases}
x_{v}(\bm{e})/ g(\bm{e}_{<|\bm{e}|}) & \text{if $|\bm{e}| \ge 1$,} \\
1 & \text{otherwise.}
\end{cases}
\end{equation}
Observe that \eqref{eqn:probability_consistency} ensures that for each $\bm{e}' \in \mathcal{C}_v$,
\begin{equation}
\sum_{\substack{e \in \partial(v): \\ (\bm{e}',e) \in \mathcal{C}_v}} y_{v}(\bm{e}', e) \le y_{v}(\bm{e}'),
\end{equation}
and $y_{v}(\lambda):=1$.
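As a sanity check on this change of variables, the following Python sketch (with hypothetical $p$- and $x$-values chosen to satisfy \eqref{eqn:probability_consistency}) verifies that the induced $y$-values of \eqref{eqn:y_value_definition} satisfy \eqref{eqn:marginal_distribution}.

```python
# Hypothetical probe success probabilities for two edges of \partial(v).
p = {'a': 0.4, 'b': 0.6}

def g(prefix):
    """g(e) = prod over edges e_i of prefix of (1 - p_{e_i}); g(lambda) = 1."""
    out = 1.0
    for e in prefix:
        out *= 1.0 - p[e]
    return out

# x[e] = P[the benchmark probes the edges of e in order]; these values satisfy
# (eqn:probability_consistency):  sum_e x[(e', e)] <= (1 - p_last) * x[e'].
x = {(): 1.0, ('a',): 0.5, ('b',): 0.5, ('a', 'b'): 0.3, ('b', 'a'): 0.2}

# The change of variables (eqn:y_value_definition): y[e] = x[e] / g(prefix of e).
y = {e: (x[e] / g(e[:-1]) if e else 1.0) for e in x}

# Check (eqn:marginal_distribution): y-mass of the extensions of each prefix
# is at most the y-mass of the prefix itself.
for prefix in x:
    ext_sum = sum(y[e] for e in x if len(e) == len(prefix) + 1 and e[:-1] == prefix)
    assert ext_sum <= y[prefix] + 1e-12
```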
As a result, Proposition \ref{prop:vertex_round} implies
that for each $v \in V$, there exists a distribution $\mathcal{D}^{v}$ such that if
$\bm{Y}^{v} \sim \mathcal{D}^{v}$, then for each $\bm{e} \in \mathcal{C}_v$ with $|\bm{e}|=k \ge 1$,
\begin{equation}\label{eqn:non_adaptive_target_probability}
\mathbb{P}[\bm{Y}^{v}_{\le k} = \bm{e}] = y_{v}(\bm{e}).
\end{equation}
Moreover, $\bm{Y}^{v}$ is drawn independently from the edge states, $(\text{st}(e))_{e \in E}$. Consider now the following algorithm $\mathcal{B}$, which satisfies the desired properties
\ref{eqn:committal} and \ref{eqn:non_adaptive} of Lemma \ref{lem:non_adaptive_optimum}:
\begin{algorithm}[H]
\caption{Algorithm $\mathcal{B}$} \label{alg:non_adaptive_relaxed}
\begin{algorithmic}[1]
\Require a stochastic graph $G=(U,V,E)$.
\Ensure a one-sided matching $\mathcal{N}$ of $G$ of active edges.
\State Set $\mathcal{N} \leftarrow \emptyset$.
\State Draw $(\bm{Y}^{v})_{v \in V}$ according to the product distribution $\prod_{v \in V} \mathcal{D}^{v}$.
\For{$v \in V$}
\For{$i=1, \ldots , |\bm{Y}^{v}|$}
\State Set $e \leftarrow \bm{Y}^{v}_{i}$. \Comment{$\bm{Y}^{v}_{i}$ is the $i^{th}$ edge of $\bm{Y}^{v}$}
\State Probe the edge $e$, revealing $\text{st}(e)$.
\If{$\text{st}(e) =1$ and $v$ is unmatched by $\mathcal{N}$}
\State Add $e$ to $\mathcal{N}$.
\EndIf
\EndFor
\EndFor
\State \Return $\mathcal{N}$.
\end{algorithmic}
\end{algorithm}
Using \eqref{eqn:non_adaptive_target_probability} and the non-adaptivity of $\mathcal{B}$, it is clear that
for each $v \in V$,
\begin{align*}
\mathbb{E}[w(\mathcal{N}(v))] &= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} \mathbb{P}[S(\bm{e})] \cdot \mathbb{P}[\bm{Y}_{\le k}^{v} = \bm{e}] \\
&= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} p_{e_k} g(\bm{e}_{<k}) y_{v}(\bm{e}) \\
&= \sum_{\substack{\bm{e}=(e_{1}, \ldots ,e_{k}) \in \mathcal{C}_{v}: \\ k \ge 1}} w_{e_k} p_{e_k} x_{v}(\bm{e}) \\
&= \mathbb{E}[w(\mathcal{M}(v))].
\end{align*}
Thus, after summing over $v \in V$, it holds that $\mathbb{E}[w(\mathcal{N})] = \mathbb{E}[w(\mathcal{M})] = \text{OPT}_{\text{rel}}(G)$,
and so in addition to satisfying \ref{eqn:committal} and \ref{eqn:non_adaptive}, $\mathcal{B}$ is optimal.
Finally, it is easy to show that each $u \in U$ is matched by $\mathcal{N}$
at most once in expectation since $\mathcal{M}$ has this property.
Thus, $\mathcal{B}$ is a relaxed probing algorithm which is optimal and satisfies the required properties of Lemma \ref{lem:non_adaptive_optimum}.
\end{proof}
\subsection{Extending to Known I.D. Arrivals} \label{subsec:known_id}
Suppose that $(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$ is a known $i.d.$ input,
where $H_{\text{typ}}=(U,B,F)$ has downward-closed online probing constraints $(\mathcal{C}_b)_{b \in B}$.
If $G \sim (H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$, where $G=(U,V,E)$ has vertices $V=\{v_{1}, \ldots ,v_{n}\}$, then
define $r_{i}(b):=\mathbb{P}[v_i = b]$ for each $i \in [n]$ and $b \in B$, where we assume w.l.o.g.
that $r_{i}(b) > 0$. We generalize \ref{LP:new} to account for the distributions $(\mathcal{D}_i)_{i=1}^{n}$.
For each $i \in [n], b \in B$ and $\bm{e} \in \mathcal{C}_b$,
we introduce a decision variable $x_{i}( \bm{e} \, || \, b)$
to encode the probability that $v_i$ has type $b$ \textit{and} $\bm{e}$
is the sequence of edges of $\partial(v_i)$ probed by the \textit{relaxed} benchmark.
\begin{align}\label{LP:new_id}
\tag{LP-config-id}
&\text{maximize} & \sum_{i \in [n], b \in B, \bm{e} \in \mathcal{C}_b} \text{val}(\bm{e}) \cdot x_{i}(\bm{e} \, || \, b) \\
&\text{subject to} & \sum_{i \in [n], b \in B} \sum_{\substack{ \bm{e} \in \mathcal{C}_b: \\ (u,b) \in \bm{e}}}
p_{u,b} \cdot g(\bm{e}_{< (u,b)}) \cdot x_{i}( \bm{e} \, || \, b) \leq 1 && \forall u \in U \label{eqn:relaxation_efficiency_matching_id}\\
&& \sum_{\bm{e} \in \mathcal{C}_b} x_{i}(\bm{e} \, || \, b)= r_{i}(b) && \forall b \in B, i \in [n] \label{eqn:relaxation_efficiency_distribution_id} \\
&&x_{i}(\bm{e} \, || \, b) \ge 0 && \forall b \in B, \bm{e} \in \mathcal{C}_b, i \in [n]
\end{align}
Let us denote $\text{LPOPT}(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$ as the value of an optimum solution
to \ref{LP:new_id}.
\begin{theorem} \label{thm:known_id_relaxation}
$\text{OPT}(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n}) \le \text{LPOPT}(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$.
\end{theorem}
One way to prove Theorem \ref{thm:known_id_relaxation} is to use the properties
of the relaxed benchmark on $G$ guaranteed by Lemma \ref{lem:non_adaptive_optimum},
and the above interpretation of the decision variables to argue that
\[
\mathbb{E}[\text{OPT}_{\text{rel}}(G)] \le \text{LPOPT}(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n}),
\]
where $\text{OPT}_{\text{rel}}(G)$ is the value of the relaxed benchmark on $G$. Specifically, we can interpret \eqref{eqn:relaxation_efficiency_matching_id} as saying that the
relaxed benchmark matches each offline vertex at most once in expectation. Moreover,
\eqref{eqn:relaxation_efficiency_distribution_id} holds by observing that if
$v_i$ is of type $b$, then the relaxed benchmark selects some $\bm{e} \in \mathcal{C}_{b}$
to probe (note $\bm{e}$ could be the empty-string). We provide a morally equivalent proof of Theorem \ref{thm:known_id_relaxation} in
Appendix \ref{sec:known_id_additions}. Specifically, we consider an optimum solution of \ref{LP:new} with respect to $G$,
and apply a conditioning argument in conjunction with Theorem \ref{thm:new_LP_relaxation}.
Given a feasible solution to \ref{LP:new_id},
say $(x_{i}(\bm{e} \, || \, b))_{i \in [n], b \in B, \bm{e} \in \mathcal{C}_b}$, for each $u \in U, i \in [n]$ and $b \in B$ define
\begin{equation}\label{eqn:induced_edge_variables_id}
\widetilde{x}_{u,i}(b):= \sum_{\substack{ \bm{e} \in \mathcal{C}_b: \\ (u,b) \in \bm{e}}}
g(\bm{e}_{< (u,b)}) \cdot x_{i}( \bm{e} \, || \, b).
\end{equation}
We refer to $\widetilde{x}_{u,i}(b)$ as an \textbf{(induced) edge variable}, thus extending the definition from the known stochastic
graph setting. Suppose now that we fix $i \in [n]$ and $b \in B$, and consider the variables, $(x_{i}(\bm{e} \, || \, b))_{\bm{e} \in \mathcal{C}_b}$. Observe that \eqref{eqn:relaxation_efficiency_distribution_id} ensures that
\[
\frac{\sum_{\bm{e} \in \mathcal{C}_b} x_{i}(\bm{e} \, || \, b)}{ r_{i}(b)} = 1.
\]
Hence, whichever type the node $v_{i}$ is drawn as,
\[
\frac{\sum_{\bm{e} \in \mathcal{C}_{v_i}} x_{i}(\bm{e} \, || \, v_i)}{ r_{i}(v_i)} = 1.
\]
We can therefore generalize \textsc{VertexProbe} as follows. Given vertex $v_i$,
draw $\bm{e}' \in \mathcal{C}_{v_i}$ with probability $x_{i}(\bm{e}' \, || \, v_i)/r_{i}(v_i)$. If $\bm{e}' = \lambda$, then return the empty-set. Otherwise, set $\bm{e}' = (e_{1}', \ldots ,e_{k}')$ for $k := |\bm{e}'| \ge 1$, and probe the
edges of $\bm{e}'$ in order. Return the first edge which is revealed to be active, if such an
edge exists. Otherwise, return the empty-set. We denote the output of \textsc{VertexProbe} on the input
$(v_i, \partial(v_i), (x_{i}(\bm{e} \, || \, v_i)/ r_{i}(v_i))_{\bm{e} \in \mathcal{C}_{v_i}})$ by
$\textsc{VertexProbe}(v_i, \partial(v_i), (x_{i}(\bm{e} \, || \, v_i)/ r_{i}(v_i))_{\bm{e} \in \mathcal{C}_{v_i}})$.
Define $C(u,v_i)$ as the event in which \textsc{VertexProbe} outputs the edge $(u,v_i)$,
and observe the following extension of Lemma \ref{lem:fixed_vertex_probe}:
\begin{lemma}\label{lem:fixed_vertex_probe_id}
If \textsc{VertexProbe} is passed $\left(v_{i}, \partial(v_i), (x_{i}(\bm{e} \, || \, v_i) / r_{i}(v_i))_{\bm{e} \in \mathcal{C}_{v_i}}\right)$, then for any $b \in B$ and $u \in U$,
\[
\mathbb{P}[C(u,v_i) \, | \, v_{i} = b] = \frac{p_{u,b} \cdot \widetilde{x}_{u,i}(b)}{r_{i}(b)}.
\]
\end{lemma}
\begin{remark}
As in Definition \ref{def:vertex_probe}, if $C(u,v_i)$ occurs, then we say that $u$ commits
to $(u,v_i)$ or $v_i$.
\end{remark}
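The behaviour described by Lemma \ref{lem:fixed_vertex_probe_id} can be sketched as follows; the probe sequences, their sampling probabilities $x_{i}(\bm{e} \,||\, v_i)/r_{i}(v_i)$, and the edge probability below are hypothetical placeholders, and edge states are drawn on the fly in place of the true $\text{st}(e)$.

```python
import random

def vertex_probe(strings, probs, p):
    """A sketch of the generalized VertexProbe: draw a probe sequence with the
    given probabilities, probe its edges in order, and return the first edge
    revealed to be active (None if no probed edge is active)."""
    seq = random.choices(strings, weights=probs)[0]
    for e in seq:
        if random.random() < p[e]:  # probe e, revealing st(e)
            return e                # commit to the first active edge
    return None
```

For instance, if the sequences $\lambda$ and $(u,v_i)$ are each drawn with probability $1/2$ and $p_{u,v_i} = 0.4$, then the edge is returned with probability $0.5 \cdot 0.4 = 0.2$, consistent with the lemma.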
Consider now the generalization of Algorithm \ref{alg:known_stochastic_graph} where
$\pi$ is generated either u.a.r. or adversarially.
\begin{algorithm}[H]
\caption{Known I.D.}
\label{alg:known_id}
\begin{algorithmic}[1]
\Require a known i.d. input $(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$.
\Ensure a matching $\mathcal{M}$ of active edges of $G \sim (H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$.
\State $\mathcal{M} \leftarrow \emptyset$.
\State Compute an optimum solution of \ref{LP:new_id} for $(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$, say $(x_{i}(\bm{e} \, || \, b))_{i \in [n], b \in B, \bm{e} \in \mathcal{C}_b}$.
\For{$t=1, \ldots , n$}
\State Let $a \in B$ be the type of the current arrival $v_{\pi(t)}$. \Comment{to simplify notation}
\State Set $e \leftarrow \textsc{VertexProbe}\left(v_{\pi(t)},\partial(v_{\pi(t)}), \left( x_{\pi(t)}(\bm{e} \, || \, a) \cdot r^{-1}_{\pi(t)}(a) \right)_{\bm{e} \in \mathcal{C}_{a}}\right)$.
\If{$e=(u,v_{\pi(t)})$ for some $u \in U$, and $u$ is unmatched}
\State Add $e$ to $\mathcal{M}$.
\EndIf
\EndFor
\State \Return $\mathcal{M}$.
\end{algorithmic}
\end{algorithm}
Similarly to Algorithm \ref{alg:known_stochastic_graph} of Proposition \ref{prop:known_stochastic_graph},
one can show that Algorithm \ref{alg:known_id} attains a competitive ratio of $1/2$ for random order arrivals. Interestingly, if the distributions $(\mathcal{D}_{i})_{i=1}^{n}$ are identical -- that is, we work with known i.i.d. arrivals -- then it is relatively easy to show that
this algorithm's competitive ratio improves to $1-1/e$.
\begin{proposition} \label{prop:known_iid}
If Algorithm \ref{alg:known_id} is presented a known $i.i.d.$ input, say the type graph $H_{\text{typ}}$
together with the distribution $\mathcal{D}$, then
$\mathbb{E}[w(\mathcal{M})] \ge \left(1 - 1/e \right) \text{OPT}(H_{\text{typ}} , \mathcal{D})$.
\end{proposition}
\begin{remark}
Proposition \ref{prop:known_iid} is proven explicitly for the case of patience values
in an earlier arXiv version of this paper \cite{borodin2020}.
\end{remark}
Returning to the case of non-identical distributions, observe
that in the execution of Algorithm \ref{alg:known_id}
the probability that $v_i$ commits to the edge $(u,v_i)$ for $u \in U$ is precisely
\begin{equation}
z_{u,i} := \sum_{b \in B} p_{u,b} \cdot \widetilde{x}_{u,i}(b) = \sum_{b \in B} \sum_{\substack{ \bm{e} \in \mathcal{C}_b: \\ (u,b) \in \bm{e}}}
p_{u,b} \cdot g(\bm{e}_{< (u,b)}) \cdot x_{i}( \bm{e} \, || \, b).
\end{equation}
Moreover, the events $(C(u,v_i))_{i=1}^{n}$ are independent, so this suggests
applying the same contention resolution schemes as in the known stochastic graph setting.
We first focus on the adversarial arrival model, where we assume the vertices $v_{1}, \ldots ,v_{n}$
are presented in some unknown order $\pi:[n] \rightarrow [n]$. We make use of the OCRS from before
(Algorithm \ref{alg:online_contention_resolution}).
For each $t \in [n]$ and $u \in U$, define
\begin{equation}
q_{u,t}:= \frac{1}{2 - \sum_{i=1}^{t-1} z_{u,\pi(i)}},
\end{equation}
where $q_{u,1}:=1/2$. Note that $1/2 \le q_{u,t} \le 1$ as $\sum_{j \in [n]} z_{u, j} \le 1$ by constraint \eqref{eqn:relaxation_efficiency_matching_id} of \ref{LP:new_id}.
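The values $q_{u,t}$ are straightforward to compute online; the short Python sketch below (with hypothetical $z$-values summing to at most $1$) illustrates the recurrence and the bounds $1/2 \le q_{u,t} \le 1$.

```python
def acceptance_probs(z):
    """Given z = [z_{u,pi(1)}, ..., z_{u,pi(n)}] in arrival order (summing to
    at most 1 by (eqn:relaxation_efficiency_matching_id)), return the OCRS
    acceptance probabilities q_{u,t} = 1 / (2 - sum_{i < t} z_{u,pi(i)})."""
    assert sum(z) <= 1.0 + 1e-12
    qs, mass = [], 0.0
    for z_t in z:
        qs.append(1.0 / (2.0 - mass))  # q_{u,1} = 1/2 since mass starts at 0
        mass += z_t
    return qs
```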
We define Algorithm \ref{alg:known_id_aom_modified} by modifying Algorithm \ref{alg:known_id} using the OCRS to ensure that each $i \in [n]$ is matched to $u \in U$ with probability $z_{u,i}/2$. However, to achieve a competitive ratio of $1/2$,
we require the stronger claim that for each type node $a \in B$, the probability $(u,v_i)$ is added to the matching \textit{and} $v_i$ is of type $a$ is lower bounded by $p_{u,a} \widetilde{x}_{u,i}(a)/2$. Crucially, if we condition on $u \in U$ being unmatched when $v_i$ is processed, $v_i$ having type $a$, and $C(u,v_i)$, then the probability the OCRS matches $u$ to $v_i$ does \textit{not} depend on $a$.
As we show below in the proof of Theorem \ref{thm:known_id_adversarial},
this implies the desired lower bound of $p_{u,a} \widetilde{x}_{u,i}(a)/2$, and so
Algorithm \ref{alg:known_id_aom_modified} attains a competitive ratio of $1/2$
by \eqref{eqn:induced_edge_variables_id} and Theorem \ref{thm:known_id_relaxation}.
\begin{algorithm}[H]
\caption{Known I.D. -- AOM -- Modified}
\label{alg:known_id_aom_modified}
\begin{algorithmic}[1]
\Require a known i.d. input $(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$.
\Ensure a matching $\mathcal{M}$ of active edges of $G \sim (H_{\text{typ}}, (\mathcal{D}_t)_{t=1}^{n})$.
\State $\mathcal{M} \leftarrow \emptyset$.
\State Compute an optimum solution of \ref{LP:new_id} for $(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n})$, say $(x_{i}(\bm{e} \, || \, b))_{i \in [n], b \in B, \bm{e} \in \mathcal{C}_b}$.
\For{$t=1,\ldots ,n$}
\State Let $a \in B$ be the type of the current arrival $v_{\pi(t)}$.
\State Based on the previous arrivals $v_{\pi(1)},\ldots ,v_{\pi(t-1)}$, compute the values $(q_{u,t})_{u \in U}$.
\State Set $e \leftarrow \textsc{VertexProbe}\left(v_{\pi(t)}, \partial(v_{\pi(t)}), \left(x_{\pi(t)}(\bm{e} \, || \, a) \cdot r^{-1}_{\pi(t)}(a) \right)_{\bm{e} \in \mathcal{C}_{a}} \right)$.
\If{$e=(u,v_{\pi(t)})$ for some $u \in U$, and $u$ is unmatched}
\State Add $e$ to $\mathcal{M}$ independently with probability $q_{u,t}$. \label{line:adversarial_contention}
\EndIf
\EndFor
\State \Return $\mathcal{M}$.
\end{algorithmic}
\end{algorithm}
\begin{proof}[Proof of Theorem \ref{thm:known_id_adversarial}]
For notational simplicity, let us assume that $\pi(t)=t$ for each $t \in [n]$,
so that the online vertices arrive in order $v_{1}, \ldots , v_{n}$.
Now, the edge variables $(\widetilde{x}_{u,t}(b))_{u \in U,t \in [n], b \in B}$
satisfy
\[
\text{LPOPT}(H_{\text{typ}}, (\mathcal{D}_i)_{i=1}^{n}) = \sum_{u \in U, t \in [n], b \in B} p_{u,b} w_{u,b} \widetilde{x}_{u,t}(b).
\]
Thus, to complete the proof it suffices to show that
\begin{equation} \label{eqn:adversarial_desired_selectibility}
\mathbb{P}[ \text{$(u,v_t) \in \mathcal{M}$ and $v_{t} = b$}] \ge \frac{p_{u,b} \, \widetilde{x}_{u,t}(b)}{2}
\end{equation}
for each $u \in U, t \in [n]$ and $b \in B$, where we hereby assume w.l.o.g. that $\widetilde{x}_{u,t}(b) > 0$.
In order to prove this, we first observe that by the same coupling argument used in the proof of Proposition \ref{prop:known_stochastic_graph_modified_aom},
\begin{equation} \label{eqn:adversarial_contention_id}
\mathbb{P}[(u,v_t) \in \mathcal{M}] \ge \frac{z_{u,t}}{2} = \frac{1}{2} \sum_{b \in B} p_{u,b} \widetilde{x}_{u,t}(b)
\end{equation}
as a result of the $1/2$-selectability of Algorithm \ref{alg:online_contention_resolution}.
Let us now define $R_{t}$ as the unmatched vertices of $U$ when $v_t$ arrives. Observe then that
\begin{equation}\label{eqn:conditional_contention_probability}
\mathbb{P}[ (u,v_t) \in \mathcal{M} \, | \, \text{$v_{t} =b, C(u,v_t)$ and $u \in R_t$}] = q_{u,t}.
\end{equation}
Now, $\mathbb{P}[\text{$v_{t} =b, C(u,v_t)$ and $u \in R_t$}] = p_{u,b} \cdot \widetilde{x}_{u,t}(b) \cdot \mathbb{P}[u \in R_t]$,
by Lemma \ref{lem:fixed_vertex_probe_id} and the independence of the events $\{v_{t} =b \} \cap \{ C(u,v_t)\}$
and $\{u \in R_t\}$. Thus, by the law of total probability,
\begin{align*}
\sum_{b \in B} p_{u,b} \widetilde{x}_{u,t}(b) q_{u,t} \cdot \mathbb{P}[u \in R_t] &= \mathbb{P}[(u,v_t) \in \mathcal{M}] \\
&\ge \frac{z_{u,t}}{2} \\
&=\frac{1}{2} \sum_{b \in B} p_{u,b} \widetilde{x}_{u,t}(b)
\end{align*}
where the inequality follows from \eqref{eqn:adversarial_contention_id}.
Thus, $q_{u,t} \cdot \mathbb{P}[u \in R_t] \ge 1/2$, and so combined with \eqref{eqn:conditional_contention_probability},
\eqref{eqn:adversarial_desired_selectibility} follows, thus completing the proof.
\end{proof}
Suppose now that each vertex $v_t$ has an arrival time, say $\widetilde{Y}_{t} \in [0,1]$,
drawn u.a.r. and independently for $t \in [n]$. The values $(\widetilde{Y}_{t})_{t=1}^{n}$
indicate the increasing order in which the vertices $v_{1}, \ldots ,v_{n}$ arrive.
\begin{algorithm}[H]
\caption{Known I.D. -- ROM -- Modified}
\label{alg:known_id_rom_modified}
\begin{algorithmic}[1]
\Require a known i.d. input $(H_{\text{typ}}, (\mathcal{D}_t)_{t=1}^{n})$.
\Ensure a matching $\mathcal{M}$ of active edges of $G \sim (H_{\text{typ}}, (\mathcal{D}_t)_{t=1}^{n})$.
\State $\mathcal{M} \leftarrow \emptyset$.
\State Compute an optimum solution of \ref{LP:new_id} for $(H_{\text{typ}}, (\mathcal{D}_t)_{t=1}^{n})$, say $(x_{t}(\bm{e} \, || \, b))_{t \in [n], b \in B, \bm{e} \in \mathcal{C}_b}$.
\For{$t \in [n]$ in increasing order of $\widetilde{Y}_t$}
\State Set $e \leftarrow \textsc{VertexProbe}\left(v_{t}, \partial(v_t), (x_{t}(\bm{e} \, || \, v_t) / r_{t}(v_t))_{\bm{e} \in \mathcal{C}_{v_t}} \right)$.
\If{$e=(u,v_t)$ for some $u \in U$, and $u$ is unmatched}
\State Add $e$ to $\mathcal{M}$ independently with probability $\exp(-\widetilde{Y}_{t} \cdot z_{u,t})$.
\EndIf
\EndFor
\State \Return $\mathcal{M}$.
\end{algorithmic}
\end{algorithm}
\begin{proof}[Proof of Theorem \ref{thm:known_id_ROM}]
Clearly, Algorithm \ref{alg:known_id_rom_modified} is non-adaptive. The competitive ratio of $1-1/e$ follows by the same coupling argument as in Proposition \ref{prop:known_stochastic_graph_modified_rom},
together with the same observations used in the proof of Theorem \ref{thm:known_id_adversarial}, and so we omit the argument.
\end{proof}
\section{Introduction}
\input{new-intro}
\section{Preliminaries and Our Results}\label{sec:prelim}
\input{new-preliminaries}
\section{Relaxing the Adaptive Benchmark via \ref{LP:new}} \label{sec:relaxation_adaptive_benchmark}
\input{LP_relaxation}
\section{Proving Theorems \ref{thm:known_id_adversarial} and \ref{thm:known_id_ROM}} \label{sec:known_id}
\input{known_id}
\section{Efficiency of Our Algorithms} \label{sec:algorithm_efficiency}
\input{efficient-algorithms}
\section{A Tight Adaptivity Gap} \label{sec:adaptivity_gap}
\input{non_adaptive_negative}
\section{Conclusion and open problems} \label{sec:conclusion}
\input{conclusion-full}
\bibliographystyle{plain}
\subsection{An Overview of Our Techniques} \label{sec:overview_of_techniques}
In order to describe the majority of our technical contributions, it suffices to focus on the known stochastic graph setting.
Let us suppose we are presented a stochastic graph $G=(U,V,E)$. For the case of patience values $(\ell_v)_{v \in V}$, a natural solution
is to solve an LP introduced by Bansal et al. \cite{BansalGLMNR12} (see \ref{LP:standard_definition_general} in Appendix \ref{sec:LP_relations}) to obtain fractional values for the edges of $G$, say $(x_{e})_{e \in E}$, such that $x_e$ upper bounds the probability $e$ is probed by the adaptive benchmark. Clearly, $\sum_{e \in \partial(v)} x_{e} \le \ell_{v}$ is a constraint for each $v \in V$, and
so by applying a dependent rounding algorithm (such as the GKSP algorithm of Gandhi et al. \cite{GandhiGKSP06}), one can round the values $(x_{e})_{e \in \partial(v)}$ to determine $\ell_{v}$ edges of $\partial(v)$ to probe. By probing these edges in a carefully
chosen order, and matching $v$ to the first edge revealed to be active, one can guarantee that each
$e \in \partial(v)$ is matched with probability reasonably close to $p_e x_e$. This is the high-level approach used in many previous stochastic matching algorithms (for example \cite{BansalGLMNR12,Adamczyk15, BavejaBCNSX18,BrubachSSX20,brubach2021offline}). However, even for a single online node, \ref{LP:standard_definition_general} overestimates the value of the adaptive benchmark, and so any algorithm designed in
this way will match certain edges with probability strictly less than $p_e x_e$. This is problematic,
for the value of the match made to $v$ is ultimately compared to $\sum_{e \in \partial(v)} p_{e} w_{e} x_e$, the contribution
of the variables $(x_e)_{e \in \partial(v)}$ to the LP solution. In fact, Brubach et al. \cite{Brubach2019} showed that the ratio between $\text{OPT}(G)$ and an optimum solution to \ref{LP:standard_definition_general} can be as small as $0.544$, so the $1-1/e$ competitive ratio of Theorem \ref{thm:known_id_ROM} cannot be achieved via a comparison to \ref{LP:standard_definition_general},
even for the special case of patience values.
Our approach is to introduce a new configuration LP (\ref{LP:new}) with exponentially many variables which accounts for the many probing strategies available to an arriving vertex $v$ with probing constraint $\mathcal{C}_v$. For each $\bm{e} =(e_{1}, \ldots , e_{|\bm{e}|}) \in E^{(*)}$, define $g(\bm{e}) := \prod_{i=1}^{|\bm{e}|} (1 - p_{e_i})$,
where $g(\bm{e})$ corresponds to the probability that all the edges of $\bm{e}$ are inactive, and $g(\lambda):=1$
for the empty string/character $\lambda$. We
also define $\bm{e}_{< e_i} := (e_{1}, \ldots ,e_{i-1})$ for each $2 \le i \le |\bm{e}|$,
which we denote by $\bm{e}_{< i}$ when clear, where $\bm{e}_{< 1}:= \lambda$ by convention.
Observe then that $\text{val}(\bm{e}):=\sum_{i=1}^{|\bm{e}|} p_{e_i} w_{e_i} g(\bm{e}_{< i})$
corresponds to the expected weight of the first active edge revealed if $\bm{e}$
is probed in order of its indices. For each $v \in V$, we introduce a decision variable $x_{v}(\bm{e})$:
\begin{align}\label{LP:new}
\tag{LP-config}
&\text{maximize} & \sum_{v \in V} \sum_{\bm{e} \in \mathcal{C}_v } \text{val}(\bm{e}) \cdot x_{v}(\bm{e}) \\
&\text{subject to} & \sum_{v \in V} \sum_{\substack{ \bm{e} \in \mathcal{C}_v: \\ (u,v) \in \bm{e}}}
p_{u,v} \cdot g(\bm{e}_{< (u,v)}) \cdot x_{v}( \bm{e}) \leq 1 && \forall u \in U \label{eqn:relaxation_efficiency_matching}\\
&& \sum_{\bm{e} \in \mathcal{C}_v} x_{v}(\bm{e}) = 1 && \forall v \in V, \label{eqn:relaxation_efficiency_distribution} \\
&&x_{v}( \bm{e}) \ge 0 && \forall v \in V, \bm{e} \in \mathcal{C}_v
\end{align}
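As a sanity check on the definitions above, $g(\cdot)$ and $\text{val}(\cdot)$ can be computed directly; the following minimal Python sketch (illustrative only, not part of the formal development) mirrors the formulas term by term:

```python
def g(probs):
    """Probability that every edge in the tuple is inactive,
    given each edge's activation probability p_e.
    By convention g of the empty tuple (lambda) is 1."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return out

def val(probs, weights):
    """Expected weight of the first active edge when the tuple
    is probed in order of its indices:
    sum_i p_i * w_i * g(e_{<i})."""
    return sum(p * w * g(probs[:i])
               for i, (p, w) in enumerate(zip(probs, weights)))
```

For instance, probing two edges with $p = (0.5, 1.0)$ and $w = (2, 4)$ yields $0.5 \cdot 2 + 1 \cdot 4 \cdot 0.5 = 3$.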
When each $\mathcal{C}_v$ is downward-closed, \ref{LP:new} can be solved efficiently by using a deterministic separation oracle
for \ref{LP:new_dual}, the dual of \ref{LP:new}, in conjunction with the ellipsoid algorithm \cite{Groetschel,GartnerM},
as we prove in Theorem \ref{thm:LP_solvability} of Section \ref{sec:algorithm_efficiency}.
Crucially, \ref{LP:new} is also a \textbf{relaxation} of the adaptive benchmark.
\begin{theorem}\label{thm:new_LP_relaxation}
If $G=(U,V,E)$ has downward-closed probing constraints, then $\text{OPT}(G) \le \text{LPOPT}(G)$.
\end{theorem}
\begin{remark}
For the case of patience values, \ref{LP:new} was also recently independently introduced by Brubach et al. \cite{brubach2021follow,brubach2021conf} to design probing algorithms for known i.i.d. arrivals and known i.d. adversarial arrivals. Their competitive ratios are proven against an optimal solution to \ref{LP:new}, which they argue
upper bounds the \textit{online} adaptive benchmark. Theorem \ref{thm:new_LP_relaxation} thus implies that their results in fact hold against the stronger adaptive benchmark.
\end{remark}
In order to prove Theorem \ref{thm:new_LP_relaxation},
the natural approach is to view $x_{v}(\bm{e})$ as the probability that the adaptive benchmark
probes the edges of $\bm{e}$ in order, where $v \in V$ and $\bm{e} \in \mathcal{C}_v$.
Let us suppose that hypothetically we could make the following restrictive assumptions
regarding the adaptive benchmark:
\begin{enumerate}[label=(\subscript{P}{{\arabic*}})]
\item If $e=(u,v)$ is probed and $\text{st}(e)=1$, then $e$ is included in the matching, provided $v$ is currently unmatched.
\label{eqn:single_vertex_committal}
\item For each $v \in V$, the edge probes involving $\partial(v)$ are made independently of the edge states $(\text{st}(e))_{e \in \partial(v)}$. \label{eqn:single_vertex_non_adaptivity}
\end{enumerate}
Observe then that \ref{eqn:single_vertex_committal} and \ref{eqn:single_vertex_non_adaptivity} would imply
that the expected weight of the edge assigned to $v$ is
$\sum_{\bm{e} \in \mathcal{C}_v } \text{val}(\bm{e}) \cdot x_{v}(\bm{e})$.
Moreover, the left-hand side of \eqref{eqn:relaxation_efficiency_matching}
would correspond to the probability $u \in U$ is matched,
so $(x_{v}(\bm{e}))_{v \in V, \bm{e} \in \mathcal{C}_v}$
would be a feasible solution to \ref{LP:new}, and so
we could upper bound $\text{OPT}(G)$ by $\text{LPOPT}(G)$. Now, if we were working
with the \textit{online} adaptive benchmark, then it is clear that we could assume \ref{eqn:single_vertex_committal} and \ref{eqn:single_vertex_non_adaptivity} simultaneously\footnote{It is clear that we may assume
the adaptive benchmark satisfies \ref{eqn:single_vertex_committal} w.l.o.g., but not \ref{eqn:single_vertex_non_adaptivity}.}
w.l.o.g. On the other hand, if a probing algorithm does \textit{not} respect an adaptive vertex ordering
on $V$, then the probes involving $v \in V$ will in general depend on $(\text{st}(e))_{e \in \partial(v)}$. For instance, if $e \in \partial(v)$ is probed and inactive, then perhaps the adaptive benchmark probes $e' =(u,v') \in \partial(v')$ for some $v' \neq v$. If $e'$ is active and thus added to the matching by \ref{eqn:single_vertex_committal}, then the adaptive benchmark can never subsequently probe $(u,v)$ without violating \ref{eqn:single_vertex_committal}, as $u$ is now unavailable to be matched to $v$. Thus, the natural interpretation of the decision variables of \ref{LP:new} does not seem to easily lend itself to a proof
of Theorem \ref{thm:new_LP_relaxation}.
Our solution is to consider a \textbf{combinatorial relaxation} of the offline stochastic matching problem, which we define to be a new stochastic probing problem on $G$ whose optimal value
$\text{OPT}_{\text{rel}}(G)$ satisfies $\text{OPT}(G) \le \text{OPT}_{\text{rel}}(G)$. We refer to this problem as the \textbf{relaxed stochastic matching problem},
a solution to which is a \textbf{relaxed probing algorithm}. Roughly speaking, a relaxed probing algorithm
operates in the same framework as an offline probing algorithm, yet it returns a one-sided matching of the online vertices which matches each offline node at most once \textit{in expectation}. We provide a precise definition in Section \ref{sec:relaxation_adaptive_benchmark}. Crucially, there exists
an \textit{optimal} relaxed probing algorithm which satisfies \ref{eqn:single_vertex_committal} and \ref{eqn:single_vertex_non_adaptivity} simultaneously, which by
the above discussion allows us to conclude that $\text{OPT}_{\text{rel}}(G) \le \text{LPOPT}(G)$. Since $\text{OPT}(G) \le \text{OPT}_{\text{rel}}(G)$ by construction,
this implies Theorem \ref{thm:new_LP_relaxation}. Proving the existence of an optimal relaxed probing algorithm with these properties is the most technically challenging part of the paper, and is the content of Lemma \ref{lem:non_adaptive_optimum} of
Section \ref{sec:relaxation_adaptive_benchmark}.
After proving that \ref{LP:new} is a relaxation of the adaptive benchmark, we use it to design online probing algorithms.
Suppose that we are presented a feasible solution, say $(x_{v}(\bm{e}))_{v \in V, \bm{e} \in \mathcal{C}_v}$, to \ref{LP:new} for $G$.
For each $e \in E$, define
\begin{equation}
\widetilde{x}_{e}:= \sum_{\substack{\bm{e}' \in \mathcal{C}_v: \\ e \in \bm{e}'}} g(\bm{e}'_{< e}) \cdot x_{v}( \bm{e}').
\end{equation}
In order to simplify our notation in the later sections,
we refer to the values $(\widetilde{x}_{e})_{e \in E}$
as the \textbf{(induced) edge variables}
of the solution $(x_{v}(\bm{e}))_{v \in V,\bm{e} \in \mathcal{C}_v}$.
If we now fix $s \in V$, then we can easily
leverage constraint \eqref{eqn:relaxation_efficiency_distribution} to
design a simple \textit{fixed vertex} probing algorithm which matches
each edge of $e \in \partial(s)$ with probability exactly equal to $p_e \widetilde{x}_e$. Specifically, draw $\bm{e}' \in \mathcal{C}_s$ with probability $x_{s}(\bm{e}')$. If $\bm{e}' = \lambda$, then return the empty set. Otherwise,
set $\bm{e}' = (e_{1}', \ldots ,e_{k}')$ for $k := |\bm{e}'| \ge 1$, and probe the
edges of $\bm{e}'$ in order. Return the first edge which is revealed to be active, if such an
edge exists. Otherwise, return the empty set. We refer to this algorithm as \textsc{VertexProbe},
and denote its output on the input $(s, \partial(s), (x_{s}(\bm{e}))_{\bm{e} \in \mathcal{C}_s})$ by
$\textsc{VertexProbe}(s, \partial(s), (x_{s}(\bm{e}))_{ \bm{e} \in \mathcal{C}_{s}})$.
Observe the following claim, which follows immediately from the definition
of the edge variables, $(\widetilde{x}_{e})_{e \in E}$:
\begin{lemma}\label{lem:fixed_vertex_probe}
Let $G=(U,V,E)$ be a stochastic graph with \ref{LP:new} solution $(x_{v}(\bm{e}))_{v \in V, \bm{e} \in \mathcal{C}_v}$, whose induced edge variables we denote by $(\widetilde{x}_{e})_{e \in E}$.
If \textsc{VertexProbe} is passed $(s, \partial(s), (x_{s}(\bm{e}))_{ \bm{e} \in \mathcal{C}_{s}})$, then each $e \in \partial(s)$
is returned by the algorithm with probability $p_e \widetilde{x}_e$.
\end{lemma}
\begin{definition}\label{def:vertex_probe}
We say that \textsc{VertexProbe} \textbf{commits} to the edge $e=(u,s) \in \partial(s)$, or equivalently the vertex $u \in N(s)$, provided the algorithm outputs $e$ when executing on the fixed node $s \in V$. When it is clear that \textsc{VertexProbe} is being executed on $s$, we say that $s$ commits to $e$ (equivalently the vertex $u$).
\end{definition}
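A direct simulation of \textsc{VertexProbe} makes Lemma \ref{lem:fixed_vertex_probe} easy to verify empirically. The sketch below (Python; the edge representation and RNG interface are hypothetical) draws a tuple $\bm{e}'$ with probability $x_{s}(\bm{e}')$, probes its edges in order, and returns the first active one:

```python
import random

def vertex_probe(strategies, rng=random):
    """strategies: list of (tuple_of_edges, x_value) pairs for a
    fixed online vertex s, where each edge is (edge_id, p_e) and
    the x_values sum to 1 (constraint (3) of LP-config).
    Returns the id of the first edge revealed active, or None if
    the drawn tuple is empty or every probe is inactive."""
    tuples, xs = zip(*strategies)
    chosen = rng.choices(tuples, weights=xs, k=1)[0]
    for edge_id, p in chosen:      # probe in order of indices
        if rng.random() < p:       # edge revealed to be active
            return edge_id         # commit to this edge
    return None
```

Averaging over many runs, each edge $e \in \partial(s)$ is returned with frequency close to $p_e \widetilde{x}_e$, as the lemma asserts.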
Consider now the following simple online probing algorithm, where the arrival order
$\pi$ is generated either u.a.r. or adversarially.
\begin{algorithm}[H]
\caption{Known Stochastic Graph}
\label{alg:known_stochastic_graph}
\begin{algorithmic}[1]
\Require a stochastic graph $G=(U,V,E)$.
\Ensure a matching $\mathcal{M}$ of active edges of $G$.
\State $\mathcal{M} \leftarrow \emptyset$.
\State Compute an optimal solution of \ref{LP:new} for $G$, say $(x_{v}(\bm{e}))_{v \in V, \bm{e} \in \mathcal{C}_v}$
\For{$s \in V$ in order based on $\pi$}
\State Set $e \leftarrow \textsc{VertexProbe}(s, \partial(s), (x_{s}(\bm{e}))_{ \bm{e} \in \mathcal{C}_{s}})$.
\If{$e=(u,s)$ for some $u \in U$, and $u$ is unmatched} \Comment{this line ensures $e \neq \emptyset$}
\State Add $e$ to $\mathcal{M}$. \label{line:matched_edge}
\EndIf
\EndFor
\State \Return $\mathcal{M}$.
\end{algorithmic}
\end{algorithm}
\begin{remark}
Technically, line \ref{line:matched_edge}
should occur within the \textsc{VertexProbe} subroutine to adhere
to the probe-commit model; however, we express our algorithms in this way for simplicity.
\end{remark}
We observe the following claim, which is easily proven, so we omit the argument:
\begin{proposition}\label{prop:known_stochastic_graph}
In the adversarial arrival model, Algorithm \ref{alg:known_stochastic_graph} does not attain a constant competitive ratio.
In the random order arrival model, Algorithm \ref{alg:known_stochastic_graph} attains a competitive ratio of $1/2$
and the analysis is asymptotically tight.
\end{proposition}
Since the analysis of Algorithm \ref{alg:known_stochastic_graph} cannot be improved in either arrival model, we must modify the algorithm to prove Theorems \ref{thm:known_id_adversarial} and \ref{thm:known_id_ROM}, even in the known stochastic graph setting.
Our modification involves concurrently applying an appropriate rank one matroid \textbf{contention resolution scheme}
to each offline vertex of $G$, a concept formalized much more generally in the seminal paper by Chekuri, Vondrak, and Zenklusen \cite{Vondrak_2011}. Fix $u \in U$, and observe that constraint \eqref{eqn:relaxation_efficiency_matching} ensures
that $\sum_{e \in \partial(u)} p_e \widetilde{x}_{e} \le 1$. Moreover, if we set
$z_{e}:= p_{e} \widetilde{x}_e$, then observe that as \textsc{VertexProbe} executes on $v$,
each edge $e =(u,v) \in \partial(u)$ is committed to $u$ independently with probability $z_e$.
On the other hand, there may be many edges which commit to $u$ so we must resolve
which one to take. In Algorithm \ref{alg:known_stochastic_graph}, $u$ is matched greedily to the first online
vertex which commits to it, regardless of how $\pi$ is generated. We apply \textbf{online} and \textbf{random order} contention resolution schemes to ensure that $e$ is matched to $u$ with probability $1/2 \cdot z_e$ when $\pi$ is generated by an adversary, and $(1-1/e) \cdot z_e$ when $\pi$ is generated u.a.r. This allows us to conclude the desired competitive ratios, as $\sum_{e \in E} w_{e} p_{e} \widetilde{x}_e$ upper bounds $\text{OPT}(G)$ by Theorem \ref{thm:new_LP_relaxation}. We review the relevant contention resolution schemes in Section \ref{sec:known_id}, and also extend the argument to the case when $G$ is unknown and instead drawn from the known i.d. input $(H_{\text{typ}}, (\mathcal{D}_{i})_{i=1}^{n})$, thus proving Theorems \ref{thm:known_id_adversarial} and \ref{thm:known_id_ROM}.
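For concreteness, the standard $1/2$-selectable \textit{online} contention resolution scheme for a rank-one matroid (the adversarial-order case; the random-order scheme achieving $1-1/e$ is not reproduced here, and neither scheme is spelled out in the text above) chooses acceptance probabilities so that, when commits arrive independently, each element is selected with probability exactly $z_e/2$. A minimal sketch:

```python
def ocrs_acceptance_probs(z):
    """Acceptance probabilities for the classical 1/2-selectable
    online CRS on a rank-one matroid. An arriving active element e
    is accepted, provided the item u is still free, with probability
        a_e = (1/2) / (1 - (1/2) * sum of z over earlier elements),
    so that P(e selected) = z_e * a_e * P(u free) = z_e / 2.
    Requires sum(z) <= 1, which makes every a_e <= 1."""
    probs, prefix = [], 0.0
    for ze in z:
        probs.append(0.5 / (1.0 - 0.5 * prefix))
        prefix += ze
    return probs
```

For $z = (0.5, 0.5)$ this gives acceptance probabilities $(1/2, 2/3)$: the second element is accepted more aggressively to compensate for the chance that the first already occupied $u$.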
{\centering\section*{Diffusion of Zwitterion Glycine, Diglycine, and Triglycine in Water}}
{\centering\textbf{Yadav Prasad Kandel and Narayan Prasad Adhikari${^*}$}\\}
{\centering{\it {Central Department of Physics, Tribhuvan University, Kathmandu, Nepal.}\\}}
{\centering {\it{${^*}$ email: [email protected]}\\}}
\begin{abstract}
\noindent Diffusion, the transport of mass in response to concentration and thermal energy gradients, is an important transport property, vital in material science and life science. In the present work, classical molecular dynamics studies of the diffusion of zwitterion glycine, zwitterion diglycine, and zwitterion triglycine in water have been carried out. Self and binary diffusion coefficients of aqueous solutions of these molecules have been calculated using the Einstein method. Our results agree with experimental data reported in the literature. The temperature dependence of the diffusion of glycine in water has been explored using the estimated values of the self and binary diffusion coefficients at four different temperatures. The effect of peptide bond formation on diffusion has been studied using peptide chains composed of up to three monomers of glycine. The structure of the systems is analyzed using the radial distribution functions of different atoms.
\noindent Keywords: Diffusion Coefficient, Molecular dynamics, RDF, Arrhenius behaviour, Glycine, Diglycine, Triglycine.
\end{abstract}
\section{Introduction}
\noindent Amino acids are organic substances which contain both amino and acid groups. Of the 300 naturally occurring amino acids, only 20 serve as building blocks of protein \cite{AminoDefn}. Among these 20 building blocks of protein, glycine (gly), the simplest of the amino acids, is an essential component of important biological molecules, a key substance in many metabolic reactions, the major inhibitory neurotransmitter in the spinal cord and brain stem, and an anti-inflammatory, cytoprotective, and immune modulating substance \cite{AAS:AAS786}. Glycylglycine or diglycine (dgl) and glycylglycylglycine or triglycine (tgl) are peptides of glycine having two and three monomer units in the chain, respectively \cite{Lehninger}. Understanding physical properties like diffusion, the transfer of mass due to concentration and thermal gradients, can provide meaningful information about interatomic and intermolecular interactions. Knowledge of the diffusion of amino acids also helps in understanding the dynamics of proteins and protein folding. To produce amino acids efficiently in reaction and extraction/separation processes, it is of great importance to estimate the mass transfer rates and design optimum chemical reactors and separators \cite{Umecky}. It is also crucial to understand the correlation between the structure of the system and diffusion.\\
\noindent The study of the diffusion of biomolecules in aqueous and other media has attracted a significant number of researchers, both in experiment and in simulation. At a temperature of 298.2 K, the binary diffusion coefficient of aqueous glycine at infinite dilution has been reported by Longsworth \cite{longsworth2} to be 10.55, by Lyons and Thomas \cite{glydiff1} to be 10.64, by Woolf et al. \cite{glydiff2} to be 10.59, and by Ma et al. \cite{glydiff4} to be 10.62, all in units of $10^{-10} m^2.s^{-1}$. Umecky et al. \cite{Umecky} studied the temperature dependence of the binary diffusion coefficient of glycine and reported it to be 9.36 x $10^{-10} m^2.s^{-1}$ at 293.2 K, with an increase of 12.83 x $10^{-10} m^2.s^{-1}$ for a 40 K increase in temperature. Changwei et al. \cite{concenExp} showed that the binary diffusion coefficient of aqueous glycine depends upon concentration, changing from 10.4011 to 9.4258 x $10^{-10} m^2.s^{-1}$ as the concentration changes from 0.1057 to 0.9045 $mol.L^{-1}$. Campo \cite{campo} carried out a molecular dynamics (MD) study of the hydration and structure of glycine in water, and other works have studied various other properties of glycine, diglycine, and triglycine. To the best of our knowledge, no realistic molecular dynamics simulation, mimicking the experiment as closely as possible, has been carried out to estimate the diffusion coefficients of aqueous solutions of these molecules. In the present work, the self and binary diffusion coefficients of an aqueous solution of zwitterion glycine have been estimated at different temperatures from molecular dynamics simulations, at mole fractions scaled to match the experimental values reported by Umecky et al. \cite{Umecky} and Longsworth \cite{longsworth2}. Diffusion coefficients of aqueous solutions of zwitterion diglycine and zwitterion triglycine have also been estimated. The structure of the systems and its effect on diffusion have been discussed.\\
\noindent In molecular dynamics simulation, the experimental environment can be mimicked by modeling the different interactions between the atoms and molecules \cite{sunilsir}. It is therefore considered one of the best alternatives for the study of system dynamics such as diffusion and structure, being economical and free from experimental hazards \cite{skthapa}. In recent years, MD techniques have evolved in their application to macroscopic, real systems to study the complex dynamic processes that occur in biological systems, such as protein stability, protein folding and unfolding, conformational changes, etc. \cite{DipendraPaper, mehrer}.
\noindent In this paper, we discuss the theoretical background of the present work in section two. Computational details are explained in section three. Results are presented and discussed in section four, and conclusions are presented in section five.
\section{Theory}
\noindent Diffusion is a spontaneous mass transport phenomenon by which matter is carried from one part of a system to another as a result of random molecular motion \cite{ippaper}. It takes place on account of concentration inhomogeneity and thermal gradients. Diffusion is an essential function in living organisms and has a great many applications in modern material science and technology.\\
\noindent The diffusion of particles in a homogeneous medium with no chemical concentration gradient is called self-diffusion. The rate of self-diffusion is measured in terms of the self-diffusion coefficient \cite{ksharma}. Under the assumption of an isotropic medium, if $r(t) - r(0)$ is the change of position of a diffusing particle in time $t$, then the macroscopic transport property, the self-diffusion coefficient, can be related to a microscopic property, the mean squared displacement, by Einstein's relation:
\begin{equation}
\centering
\label{selfdiff}
D = \lim_{t \to \infty} \frac{\langle \lbrack r(t) - r(0) \rbrack ^2 \rangle}{6 t}
\end{equation}
\noindent Here, $\langle ... \rangle$ represents the ensemble average of the quantity inside the angled bracket, which in our case is the square of the displacement. Thus, the self diffusion coefficient of any species is one sixth of the slope of the mean squared displacement plotted as a function of time.\\
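\noindent In practice, equation (\ref{selfdiff}) is applied by least-squares fitting of the sampled MSD curve and taking one sixth of the slope; a minimal pure-Python sketch (illustrative only, not the GROMACS implementation used in this work):

```python
def diffusion_from_msd(times, msd):
    """Self-diffusion coefficient from Einstein's relation:
    D = slope(MSD vs t) / 6, with the slope obtained by an
    ordinary least-squares fit through the sampled MSD curve."""
    n = len(times)
    t_mean = sum(times) / n
    m_mean = sum(msd) / n
    num = sum((t - t_mean) * (m - m_mean) for t, m in zip(times, msd))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den / 6.0
```

Fed a perfectly linear MSD of slope $6D$, the function returns $D$ exactly; on simulation data, the fit is restricted to the linear (diffusive) regime of the MSD curve.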
\noindent The diffusion of two different species in a binary mixture is called binary or mutual diffusion, and the corresponding diffusion coefficient is called the binary or mutual diffusion coefficient. If the self diffusion coefficients of two individual species A and B are $D_A$ and $D_B$, with mole fractions $N_A$ and $N_B$ respectively, the binary diffusion coefficient $D_{AB}$ of these species, according to Darken's phenomenological relation, is \cite{darken}:
\begin{equation}
\centering
\label{darken}
D_{AB} = N_{B}D_{A} + N_{A}D_{B}
\end{equation}
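\noindent Darken's relation (\ref{darken}) translates directly into code; a minimal sketch:

```python
def darken_binary_diffusion(d_a, d_b, n_a, n_b):
    """Binary diffusion coefficient from Darken's relation:
    D_AB = N_B * D_A + N_A * D_B, with N_A + N_B = 1."""
    assert abs(n_a + n_b - 1.0) < 1e-9, "mole fractions must sum to 1"
    return n_b * d_a + n_a * d_b
```

Note that in a very dilute solution ($N_A \to 0$) the relation reduces to $D_{AB} \approx D_A$, i.e., the binary coefficient is dominated by the self diffusion of the solute.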
\section{Computational Details}
In the present work, the zwitterion forms of glycine, diglycine, and triglycine were taken and modeled in the GROMOS53A6 force field platform \cite{forcefield1, forcefield2}. Specific bonds and bond angles were taken in g96 format to configure the topology of the molecules. Proper dihedrals were defined to prevent rotation around a bond, and improper dihedrals were defined to confine four atoms to a planar or tetrahedral configuration. We took CH$_2$ as a united atom, in which the position of the united atom is the position of the heaviest atom, C in this case. Figure-\ref{fig:gly} shows the model of the zwitterion glycine molecule, figure-\ref{fig:dgl} shows diglycine, and figure-\ref{fig:tgl} shows triglycine. The SPC/E model of water \cite{SPCE} was taken as the solvent. To estimate the diffusion coefficient of zwitterion glycine in water and check its temperature dependence, two zwitterion glycine molecules were dissolved in 11,112 water molecules in system-I. It was simulated at four different temperatures: 293.2 K, 303.2 K, 313.2 K, and 333.2 K. In order to estimate the variation of the diffusion coefficient with the number of monomer units in the peptide chain, two zwitterion glycine, two zwitterion diglycine, and two zwitterion triglycine molecules were separately dissolved in 1,385 water molecules in system-II, system-III, and system-IV respectively, and the diffusion coefficients were estimated at a temperature of 298.2 K. All the simulations were carried out at a pressure of one atmosphere. The numbers of molecules were chosen to match the mole fractions of the experimentally reported data \cite{Umecky, longsworth2}.
\begin{figure}[h]
\minipage{0.40\textwidth}
\centering
\includegraphics[scale=0.55]{gly.eps}
\caption{Zwitterion glycine with CH$_2$ as a united atom centered at the position of atom C.}\label{fig:gly}
\endminipage\hfill
\minipage{0.450\textwidth}
\centering
\includegraphics[scale=0.45]{dgl.eps}
\caption{Zwitterion diglycine with CH$_2$ as a united atom centered at the position of atom C.}\label{fig:dgl}
\endminipage\hfill
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{tgl.eps}\hfill
\caption{Zwitterion triglycine with CH$_2$ as a united atom centered at the position of atom C.}\label{fig:tgl}
\end{figure}
\noindent If the simulation box contains overlapping particles or particles with bad contacts, the molecular dynamics could explode and never bring the system to equilibrium \cite{allen}. To ensure equilibration, energy minimization was carried out for each of the systems with a threshold energy of 50 kJ/mol. The steepest-descent method of energy minimization was used, as it is simple and sturdy: it simply takes a step in the direction of the negative gradient, without any consideration of the history built up in previous steps, and the step size is adjusted such that the search is fast, but the motion is always downhill \cite{manual}. Particle Mesh Ewald (PME) was used for the Coulomb interaction. The cut-off distance for both of the non-bonded interactions, Coulomb and LJ, was 1.0 nm. Dynamical properties like diffusion, thermal conduction, etc. depend strongly upon the temperature and pressure of the system \cite{sunildai}. To bring each system under study to a condition that best mimics the experimental environment, it was equilibrated for 200 ns with a time step of 0.02 ps. The temperature of the system was coupled to the reference temperature using velocity rescaling, and the reference pressure was coupled to the system using Berendsen coupling. The density of the equilibrated system was compared with experimentally reported values to ensure proper equilibration and to confirm that the force field parameters used are suitable for the system under consideration. In no case did the density of a system after equilibration differ from the experimental value by more than 1.5 percent. This suggests that the force field parameters used are well suited for the systems under consideration. After equilibration, the production run was carried out to estimate the diffusion coefficients. The structure of the system after the production run was explored using the radial distribution function (RDF).
\section{RESULTS AND DISCUSSIONS}
\noindent We have calculated the self diffusion coefficients of glycine and water, and their binary diffusion coefficient, at different temperatures using system-I. Diffusion coefficients of aqueous glycine, diglycine, and triglycine have been calculated at a temperature of 298.2 K. These diffusion coefficients have been calculated using the Einstein method \cite{crank}, where the self diffusion coefficient of any species in a three dimensional isotropic medium is one sixth of the slope of the mean squared displacement (MSD) versus time plot, as in equation (\ref{selfdiff}). The structure of the systems has been studied using the radial distribution function (RDF), which gives the preferred positions of one particle around another.
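\noindent The RDF analysis is essentially a normalized pair-distance histogram; the minimal sketch below (pure Python, single frame, no periodic boundary handling, unlike the GROMACS analysis tools used here) illustrates the idea:

```python
import math

def rdf(positions, r_max, n_bins, box_volume):
    """Radial distribution function g(r) from a single frame:
    histogram all pair distances, then normalize each bin by the
    ideal-gas count rho * V_shell per reference particle
    (no periodic images are considered in this sketch)."""
    n = len(positions)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):       # each pair counted once
            d = math.dist(positions[i], positions[j])
            if d < r_max:
                hist[int(d / dr)] += 1
    rho = n / box_volume                # number density
    g = []
    for k, count in enumerate(hist):
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        # n reference particles, pairs counted once -> factor n/2
        g.append(count / (0.5 * n * rho * shell))
    return g
```

A peak in $g(r)$ at some $r$ indicates a preferred separation between the two particle types; $g(r) \to 1$ at large $r$ for a homogeneous fluid.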
\subsubsection*{Mean Squared Displacement (MSD)}
\noindent The mean squared displacement (MSD) as a function of time is used to calculate the self diffusion coefficient in Einstein's method. Linear fits of the MSD of the different molecules at different temperatures are plotted as a function of time for 5 ns. Figure-\ref{fig:msdgly} is the MSD plot of glycine at four different temperatures. The MSD becomes steeper with increase in temperature, which shows that the rate of diffusion increases with temperature. Figure-\ref{fig:msdwat} is the MSD plot of water at four different temperatures and shows behaviour similar to the MSD plot of glycine in figure-\ref{fig:msdgly}. The MSD plot of glycine, diglycine, and triglycine at a temperature of 298.2 K is shown in figure-\ref{fig:msdgdt}. The MSD line becomes less steep with increase in molecule size, indicating a smaller rate of diffusion for larger molecules than for smaller molecules at the same temperature. The MSD plot of water at 298.2 K containing the three different solutes is shown in figure-\ref{fig:msdgdtwat}. The MSD curve is steeper when lighter solute molecules are present, indicating that water diffuses faster at a given temperature when smaller solute molecules are present. Figure-\ref{fig:msdgdt} and figure-\ref{fig:msdgdtwat} show the size effect of the solute on MSD and diffusion.
\begin{figure}[!htb]
\minipage{0.450\textwidth}
\includegraphics[scale=0.30]{msdgly.eps}
\caption{MSD plot of glycine at four different temperatures. }\label{fig:msdgly}
\endminipage\hfill
\minipage{0.450\textwidth}
\includegraphics[scale=0.30]{msdwat.eps}
\caption{MSD plot of water at four different temperatures.}\label{fig:msdwat}
\endminipage\hfill
\end{figure}
\begin{figure}[!htb]
\minipage{0.450\textwidth}
\includegraphics[scale=0.30]{gdtmsd.eps}
\caption{MSD plot of glycine, diglycine and triglycine at temperature 298.2 K.}\label{fig:msdgdt}
\endminipage\hfill
\minipage{0.450\textwidth}
\includegraphics[scale=0.30]{gdtwatermsd.eps}
\caption{MSD plot of water with different solute at temperature 298.2 K.}\label{fig:msdgdtwat}
\endminipage\hfill
\end{figure}
\subsubsection*{Self Diffusion Coefficient}
\noindent Table \ref{table:selfdiff} presents the estimated values of the self diffusion coefficients of the different molecules at different temperatures. The self diffusion coefficients of zwitterion glycine and water increase with increase in temperature, in accord with the increased random velocity at higher temperature. The self diffusion coefficients of zwitterion glycine, zwitterion diglycine, and zwitterion triglycine, each dissolved separately in 1,385 water molecules at a temperature of 298.2 K and a pressure of 1 bar, are presented in the same table. The estimated self diffusion coefficients of these molecules decrease with increase in the number of monomer units in the chain. This can be attributed to the size of the molecules: at a given temperature, a molecule of larger molecular weight attains a smaller velocity than a lighter molecule. Therefore, in the same environment and physical conditions, a large molecule diffuses more slowly than a small molecule. The chain length, and hence the molecular weight, increases from zwitterion glycine and zwitterion diglycine to zwitterion triglycine, and the rate of diffusion decreases correspondingly. At a temperature of 298.2 K, the self diffusion coefficient of glycine appears smaller than that at 293.2 K. This could be because of the higher concentration of glycine at 298.2 K compared to that at 293.2 K.
\begin{table}[h]
\centering
\caption{Estimated values of the self diffusion coefficients of different molecules at one atm pressure and different temperatures.}
\label{table:selfdiff}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Temperature (K)} & \multirow{2}{*}{Solute} &\multirow{2}{*}{ $D_{self}^{solute}(10^{-9}m^2.s^{-1})$} & \multicolumn{2}{|c|}{Water$(10^{-9}m^2.s^{-1})$}\\\cline{4-5}
& & & $D_{self}^{est}$ & $D_{self}^{exp}$\\
\hline
293.2 & Glycine &1.09& 2.48 & 2.03 \cite{waterexpe}\\
\hline
\multirow{3}{*}{298.2} & Glycine & 0.99& 2.55 &2.30 \cite{waterexpe} \\\cline{2-5}
& Diglycine & 0.70& 2.52 &2.30 \cite{waterexpe} \\\cline{2-5}
& Triglycine & 0.48& 2.51 &2.30 \cite{waterexpe} \\\cline{2-5}
\hline
303.2 & Glycine & 1.43 &2.97 &2.59 \cite{waterexpe} \\
\hline
313.2 & Glycine & 1.81 &3.61 & 3.24 \cite{waterexp} \\
\hline
333.2 & Glycine & 2.36 &5.00 & 4.77 \cite{waterexp} \\
\hline
\end{tabular}
\end{table}
\noindent The value of the self diffusion coefficient of water increases with increase in temperature. The estimated values are slightly greater than the experimental values reported in the literature. It has been shown by Paudyal et al. \cite{Ipaudyal} that the diffusion coefficient of water estimated using GROMACS increases slightly with increase in system size, and that the estimated values best tally with experimental values for small systems. All the systems in the present work contained a large number of water molecules, which could be the reason why the values estimated here are slightly higher than the experimental ones. Further, the self diffusion coefficient could have been affected by the presence of the solute. The self diffusion coefficient of water at 298.2 K is slightly smaller in the presence of larger molecules because of the increased hindrance to the motion of water molecules with increased solute size.
\subsubsection*{Binary Diffusion Coefficient}
\noindent Binary diffusion coefficients have been calculated for the zwitterion glycine-water mixture at four different temperatures, and for the zwitterion glycine-water, zwitterion diglycine-water, and zwitterion triglycine-water mixtures at a temperature of 298.2 K, using Darken's relation (\ref{darken}). Binary diffusion coefficients of the different pairs of molecules at different temperatures are presented in table-\ref{table:binarydiff} and compared with experimental values reported in the literature \cite{Umecky, longsworth2}. $D_{binary}^{est}$ is the binary diffusion coefficient estimated in the present work and $D_{binary}^{exp}$ is the experimental value reported in the literature. With increase in temperature, the thermal agitation of the molecules increases, which boosts diffusion. This means the diffusion coefficient should be greater at higher temperatures.\\
\begin{table}[!h]
\centering
\caption{Binary diffusion coefficients of different pairs of molecules at different temperatures.}
\label{table:binarydiff}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Solvent} &\multirow{2}{*}{Solute} & \multirow{2}{*}{Temperature (K)}&\multicolumn{2}{|c|}{ Binary diffusion coeff. $m^2/s$}& $\%$ Error\\\cline{4-5}
& & & $D_{binary}^{est} (10^{-9}m^2.s^{-1})$& $D_{binary}^{exp}(10^{-9}m^2.s^{-1})$& \\
\hline
\multirow{7}{*}{Water} & Glycine & 293.2 &1.10 &0.94 \cite{Umecky}& 17.02 \\\cline{2-6}
& Glycine & 298.2 &0.99 &1.06 \cite{longsworth2}& 6.60 \\\cline{2-6}
& Diglycine & 298.2 & 0.70 & 0.79 \cite{longsworth2}& 11.39 \\\cline{2-6}
& Triglycine & 298.2 & 0.48& 0.67 \cite{longsworth2}& 28.36 \\\cline{2-6}
& Glycine & 303.2 & 1.43 & 1.22 \cite{Umecky}& 17.21 \\\cline{2-6}
& Glycine & 313.2 & 1.81 & 1.50 \cite{Umecky}& 20.67 \\\cline{2-6}
& Glycine & 333.2 & 2.36 & 2.22 \cite{Umecky} & 6.31 \\\cline{2-6}
\hline
\end{tabular}
\end{table}
\noindent The binary diffusion coefficient of glycine in water increases with increase in temperature, just like the self diffusion coefficient. The binary diffusion coefficients of glycine, diglycine, and triglycine in water also follow the decreasing trend of the self diffusion coefficient with increase in molecular weight. The mass of the molecule increases in progressing from glycine to diglycine and then to triglycine, and it is seen that the value of the binary diffusion coefficient decreases sharply with increase in the number of monomers in the peptide chain. The standard errors in the estimation of the self as well as the binary diffusion coefficients are very small and insignificant compared to the estimated values, whereas the symmetric round-off error in each estimation of a diffusion coefficient is of the order of $10^{-11}$. The value of D$_{binary}^{est}$ for triglycine deviates from the experimentally reported value by about 28 percent. It is possible that the united atom modeling and the corresponding force field parameters we used are not adequate for a large molecule like triglycine. Moreover, the data in the literature \cite{longsworth2} were reported using the Rayleigh interference method. We suggest using more advanced and reliable experimental methods like NMR to measure the diffusion coefficient.\\
\subsubsection*{Effect of Temperature on Diffusion}
Diffusion is the transport of mass due to concentration and thermal gradients, and it depends strongly on temperature \cite{udhal}. We checked the temperature dependence of the diffusion coefficients of glycine, water, and their binary mixture using the Arrhenius formula \cite{crank}:
\begin{equation}
\label{arrhenius}
D = D_o \exp( -\frac{E_a}{N_A k_B T})
\end{equation}
\noindent where D$_o$ denotes the pre-exponential factor, also called the frequency factor, E$_a$ is the activation energy for diffusion, T is the absolute temperature, N$_A$ is the Avogadro number, $6.022\times10^{23}$ mol$^{-1}$, and k$_B$ is the Boltzmann constant, $1.38\times10^{-23}$ J.K$^{-1}$. The activation energy of the diffusion process corresponds to the slope of the Arrhenius plot, i.e., the plot of ln(D) against the reciprocal of the absolute temperature. Figure \ref{fig:arrheniusall} shows the Arrhenius diagram of the self-diffusion coefficient of zwitterionic glycine, the self-diffusion coefficient of water, and their binary diffusion coefficient. In the plot, the points obtained from simulation align along straight lines; hence the diffusion coefficients of these molecules follow Arrhenius behaviour over the studied temperature range. Because the self-diffusion coefficient of glycine and the binary diffusion coefficient of aqueous glycine are nearly equal, their Arrhenius plots overlap. This is a consequence of the very small mole fraction of glycine. The pre-exponential factors and activation energies are presented in table-\ref{table:arrhenius}.\\
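The fit itself is an ordinary linear regression of $\ln D$ against $1/T$, with slope $-E_a/R$ and intercept $\ln D_o$ (where $R = N_A k_B$). As a check, the sketch below fits the estimated binary diffusion coefficients of glycine from the table above at the four temperatures compared with Umecky et al.; it recovers values close to those in table-\ref{table:arrhenius}:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1 (= N_A * k_B)

# Estimated binary diffusion coefficients of glycine in water (from the
# table above), at the temperatures compared with Umecky et al.
T = np.array([293.2, 303.2, 313.2, 333.2])        # K
D = np.array([1.10, 1.43, 1.81, 2.36]) * 1e-9     # m^2/s

# Linear fit of ln(D) vs 1/T: slope = -Ea/R, intercept = ln(D0)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R          # activation energy, J/mol
D0 = np.exp(intercept)   # pre-exponential (frequency) factor, m^2/s
print(f"Ea = {Ea:.0f} J/mol, D0 = {D0:.2e} m^2/s")
```

This yields $E_a \approx 15.4$ kJ/mol and $D_o \approx 6.5\times10^{-7}$ m$^2$/s, within a few percent of the tabulated values; the small residual difference presumably reflects the additional data points and precision used in the original fit.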
\begin{table}[H]
\centering
\caption{Pre-exponential factor and activation energy of diffusion.}
\label{table:arrhenius}
\begin{tabular}{|c|c|c|}
\hline
Molecule& Pre-exponential factor D$_o$ ($m^2.s^{-1}$) & Activation energy E$_a$ (J.mol$^{-1}$) \\
\hline
Water & 8.23 x $10^{-7}$ &14135.48 \\
\hline
Zwit. glycine & 6.67 x $10^{-7}$ &15517.51 \\
\hline
Binary mixture (simulated) &6.74 x $10^{-7}$ &15517.18 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[h]
\minipage{0.45\textwidth}
\includegraphics[scale=0.30]{allinone.eps}
\caption{Arrhenius diagram of diffusion coefficient of water, zwitterion glycine, and their binary mixture.}\label{fig:arrheniusall}
\endminipage\hfill
\minipage{0.450\textwidth}
\includegraphics[scale=0.3]{rdfOWOW.eps}
\caption{Radial distribution function of oxygen of water (OW) in reference of oxygen of water (OW).}\label{rdf:owow}
\endminipage\hfill
\end{figure}
\subsubsection*{Radial Distribution Function (RDF)}
\noindent The radial distribution function (RDF) is used to study pair correlations and the structure of a system. It gives the preferred position of one particle with respect to another. For an isotropic system, it is a function only of the distance between the particles \cite{mcquarrie}.\\
\noindent Figure-\ref{rdf:owow} is the RDF of the oxygen atom of water (OW) in reference to the oxygen atom of another water molecule (OW). The van der Waals radius of the oxygen atom is 2$^{1/6}\sigma$ = 0.355 nm \cite{DipendraPaper}. This means that if these atoms interacted only under the LJ potential, they would not approach each other more closely than the van der Waals radius. In the plot, the first peak positions are 0.274 nm, 0.275 nm, 0.276 nm, and 0.276 nm, with corresponding peak values of 3.151, 3.053, 2.982, and 2.835 at temperatures of 293.2 K, 303.2 K, 313.2 K, and 333.2 K respectively. The hydrogen and oxygen atoms in the SPC/E model of water carry partial positive and negative charges, respectively. This Coulomb interaction is responsible for the first peak position being smaller than the van der Waals radius. The value of r in this plot gives the preferred distance of the oxygen atom of one water molecule from that of another, and the value of g(r) gives the relative probability of finding OW at that distance. Only three peaks are observable; beyond the third peak the value of g(r) is unity, which means there is no correlation between the oxygen atoms of water molecules beyond that position.\\
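Operationally, g(r) is computed by histogramming all pair distances (with the minimum-image convention for the periodic box) and dividing each shell's count by the count expected for an ideal, uncorrelated system at the same density. The following O($N^2$) sketch is for illustration only and is not the analysis tool used for the figures (which such tools implement with cell lists and trajectory averaging):

```python
import numpy as np

def rdf(positions, box, r_max, n_bins=100):
    """Radial distribution function g(r) for particles in a cubic
    periodic box: histogram of pair distances normalized by the
    ideal-gas shell count. Requires r_max < box/2 for the
    minimum-image convention to be valid."""
    n = len(positions)
    rho = n / box**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0         # expected pair counts
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / ideal

# Sanity check: for uncorrelated (ideal-gas) positions, g(r) ~ 1
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 3.0, size=(500, 3))
r, g = rdf(pos, box=3.0, r_max=1.4)
```

For a structured liquid such as water, the same procedure produces the peaks discussed above, with g(r) decaying to unity once the pair correlation is lost.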
\begin{figure}[h]
\minipage{0.450\textwidth}
\includegraphics[scale=0.3]{rdfO1OW.eps}
\caption{Radial distribution function of oxygen (OW) in water molecule in reference of oxygen (O1) in zwitterion glycine.}\label{rdf:o1ow}
\endminipage\hfill
\minipage{0.450\textwidth}
\includegraphics[scale=0.3]{rdfNOW4.eps}
\caption{Radial distribution function of oxygen of water (OW) in reference of nitrogen (N) in zwitterion glycine.}\label{rdf:now}
\endminipage\hfill
\end{figure}
\noindent Figure-\ref{rdf:o1ow} is the RDF of oxygen (OW) in water in reference to oxygen (O1) in glycine. It gives the average distribution of OW, and hence of water molecules, around O1 in zwitterionic glycine. The first peak positions of OW around O1 at 293.2 K, 303.2 K, 313.2 K, and 333.2 K are 0.280 nm, 0.282 nm, 0.282 nm, and 0.284 nm, while the corresponding peak values are 2.034, 2.024, 1.971, and 1.907 respectively. Figure-\ref{rdf:now} is the RDF of OW in reference to nitrogen (N) in glycine. It gives the distribution of water molecules around the $NH_3^+$ terminal of zwitterionic glycine. The first peak positions of OW around N at 293.2 K, 303.2 K, 313.2 K, and 333.2 K are 0.294 nm, 0.294 nm, 0.296 nm, and 0.298 nm, while the corresponding peak values are 2.191, 2.161, 2.128, and 2.062 respectively. \\
\noindent In all three RDF plots, with increasing temperature the first peak positions move farther out, while the height of the first peak decreases and its width increases. These phenomena indicate increased random motion with temperature. Furthermore, a wider RDF at higher temperatures means more space between the molecules. This allows the molecules to move more freely, resulting in an increase in the diffusion coefficient; the broadening of the RDF therefore corresponds to increased diffusion. The existence of many peaks in the RDF, and their positions, heights, and widths, cannot be explained solely by the interaction between the pair of species under consideration, but also involve the interactions of these species with the other species in their surroundings.
\section{Conclusions and Remarks}
\noindent We carried out realistic classical molecular dynamics simulations of glycine, diglycine, and triglycine in water, with solute concentrations the same as those reported in the experiments against which the present results were compared. The estimated values were in good agreement with the experimental data. The solutions used in the simulations were very dilute, so the binary diffusion coefficients were nearly equal to the self-diffusion coefficients of the solutes, whose mole fractions were extremely small. The temperature dependence of the diffusion coefficients of glycine, water, and their binary mixture was tested, and the RDF revealed the distribution of different atoms in the isotropic medium and the effect of temperature on diffusion.
\section*{Acknowledgement}
YPK acknowledges the Master Thesis Grants from University Grants Commission, Nepal.